Building an AI Interior Design App

Jul 6, 2023

I’ve been developing mobile apps in some form since 2008. I’ve been part of teams that built successful products from the ground up, like at Vine, Dresr (acquired by Google) and Rodeo (Area 120, acquired by YouTube), but I had never launched something on my own.

Over the holidays, I decided to see what it takes to launch an app solo. The result was Decorous: AI Interior Design. I wanted to write up the highlights of what I learned along the way to share with all of you.

Shower Thoughts

Decorating our newly purchased home, I had trouble visualizing empty rooms in different styles. The Pinterest photos look great, but would the style go with my room? One morning in the shower, I was struck by a thought: can generative AI do virtual staging, or at least give me some inspiration? Could I build something where users upload a photo of their room, pick a style and let generative AI do the rest?

Noise to Signal

If you aren’t familiar, StableDiffusion is a powerful text-to-image diffusion model. Diffusion models work by progressively denoising an image, where each round of denoising produces an image that better matches the text you’ve typed in. In the basic case, it starts with pure noise and generates something new from just the text.

StableDiffusion works a little differently: it first compresses images into a vector with many fewer dimensions, a latent representation that captures the same information, allowing it to run faster and on less powerful hardware. Noise is added to and removed from this vector representation in a similar way, and the resulting vectors are decoded back into an image.
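To make that concrete, here’s a minimal text-to-image sketch using Hugging Face’s diffusers library, which wraps the whole encode/denoise/decode pipeline behind one call. The checkpoint and prompt are just examples, and you’ll need a GPU:

```python
# Minimal txt2img sketch with Hugging Face's diffusers library.
# Starts from pure latent noise and denoises toward the prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")  # needs an NVIDIA GPU with enough VRAM

image = pipe(
    "a cozy scandinavian living room, interior design photo",
    num_inference_steps=50,  # rounds of denoising
).images[0]
image.save("generated_room.png")
```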

If that’s not cool enough, you also don’t have to start with pure noise. With its img2img functionality, you can supply a starting image, choose how much noise to add to it and let it rip, effectively controlling how much it will change the input image. Sounds pretty useful for design recommendations.
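In diffusers, that knob is the strength parameter: 0 returns your photo basically untouched, 1 is effectively starting from pure noise. A sketch, again with made-up file names and prompt:

```python
# img2img sketch: restyle an existing room photo instead of starting
# from pure noise. `strength` controls how much the photo can change.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

room = Image.open("my_room.jpg").convert("RGB").resize((768, 768))
image = pipe(
    prompt="living room, mid-century modern style, interior design photo",
    image=room,
    strength=0.6,  # 0.0 = keep the photo as-is, 1.0 = ignore it entirely
).images[0]
image.save("restyled_room.png")
```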

You can also fine-tune it, teaching it to reproduce specific subjects, which is what those avatar/profile picture AI apps are built on.

Getting Started

The first step was to mess with StableDiffusion to see what I could get out of it. The plan was to engineer the prompt, get the knobs set right and then build the UX at the right level of abstraction.

I figured the easiest and most flexible way to get it up and running was to deploy it to a Google Compute Engine VM. My goal was to get it running on cloud hardware so I could play with it, Dockerize it and deploy it to GKE for scaling. Turns out, just getting the dependencies installed correctly was a pain. When it finally worked, experimenting was just as painful: I had to transfer the resulting images off the VM before I could see the results, and turned knobs by modifying command-line parameters. Every 24 hours I kept the instance running cost $14, which works out to over $400 a month. Not ideal.

If you just want to mess with SD, you can use StabilityAI’s web interface, dreamstudio.ai. After some free credits, you do have to buy more. If you have a graphics card with sufficient VRAM (~10GB+), there are also great open-source web interfaces like Automatic1111’s. Fortunately, my gaming PC was up to the challenge. With the web UIs, I was able to get very fast feedback on prompts and settings, and iterated to a point I was happy enough with to use in the product.
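Automatic1111’s web UI can also be scripted, which is handy once you’re sweeping settings rather than eyeballing one image at a time. A rough sketch, assuming you launched the UI with its --api flag; the file names and prompt are made up:

```python
# Hypothetical sweep over denoising strengths against a local
# Automatic1111 instance exposing /sdapi/v1/img2img.
import base64
import requests

with open("my_room.jpg", "rb") as f:
    room_b64 = base64.b64encode(f.read()).decode()

for strength in (0.4, 0.5, 0.6, 0.7):
    resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json={
        "init_images": [room_b64],
        "prompt": "living room, mid-century modern style, interior design photo",
        "denoising_strength": strength,  # how far to stray from the input photo
        "steps": 30,
    })
    img_b64 = resp.json()["images"][0]
    with open(f"result_{strength}.png", "wb") as out:
        out.write(base64.b64decode(img_b64))
```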

The Product

I have to mention, after the idea for the product hit me, I looked around to see what was out there. InteriorAI.com was already seeing success and was very similar to my idea: you upload an image and choose the type of room, style, quality, and level of inventiveness. I was a bit bummed to realize how unoriginal my own thoughts were, but I also saw the creator, @levelsio, tweet about another space he’s in, AI avatars, saying there’s value in meeting people where their photos are: on their phone.

I mostly do iOS development anyway, so I figured that was enough of a point of differentiation. I looked on the App Store and there are a couple of almost direct ripoffs of InteriorAI.com (even one called InteriorAI!), and I’m pretty convinced these ripoffs are leeching off his API. Either way, I felt I could at least build a better UX than them, and that gave me enough confidence to get started. And I was excited that users could take pictures from anywhere in their home and get results on the spot.

Beyond being on the phone, I wanted to differentiate on customization as well, so I decided to add the ability to choose a color palette on top of the type of room and style.

Building

I designed and built the app over the holiday break. The app is pretty straightforward: a single ViewController with an image staging area, a button to pick or take a photo, a couple of list selections for the room type and style, and a submit button. The bigger question was where to run the AI model. Given that running my own server on GCE would cost $400 a month, I looked for alternatives.

Fortunately, there are a handful of APIs that take care of hosting the models, operations, load balancing and autoscaling, and new ones are launching daily. When I started, I decided between banana.dev, replicate.com, and StabilityAI’s own API. Banana didn’t have an out-of-the-box img2img model, and StabilityAI’s billing was annoying, requiring you to buy tokens manually. Replicate has straightforward metered billing (you pay for the seconds you’re running the model) with autopay, and has a ready-to-use StableDiffusion 2.1 img2img configuration.
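Calling a hosted model from Replicate’s Python client looks roughly like this; the version hash placeholder and the exact input field names come from the model’s page on replicate.com, so treat them as assumptions:

```python
# Rough sketch of calling a hosted img2img model on Replicate.
# Requires `pip install replicate` and REPLICATE_API_TOKEN in the env.
import replicate

output = replicate.run(
    # "<version-hash>" is a placeholder -- copy the real one from the
    # model's page on replicate.com.
    "stability-ai/stable-diffusion-img2img:<version-hash>",
    input={
        "image": open("my_room.jpg", "rb"),
        "prompt": "living room, japandi style, interior design photo",
        "prompt_strength": 0.6,  # assumed name for the img2img strength knob
    },
)
print(output)  # typically a list of URLs to the generated images
```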

With the settings I determined earlier, each run on Replicate is about $0.02 per generation. I would have to pass 20,000 requests per month (20,000 × $0.02 = $400) to make it worth running the GCE instance, and even then it wouldn’t scale, because each request used most of the VRAM so I’d have to serialize all requests.

I wanted to avoid going through App Store review whenever I updated my prompt or switched which API I’m using, so I also deployed a small app to App Engine that takes the user’s customizations, combines them into a prompt and sends it off to Replicate. Surprisingly, Apple’s reviews turned out to be super fast. I’ve even gotten through in under a couple of hours, so kudos to them! I used StableDiffusion to generate an app icon, put together some screenshots and shipped it to Apple.
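The proxy itself can be tiny. A minimal sketch with Flask, where the route name, request fields and prompt template are all hypothetical stand-ins for whatever the real app uses:

```python
# Minimal sketch of the App Engine prompt proxy: the client sends its
# customizations, the server owns the prompt template and the Replicate call.
import replicate
from flask import Flask, jsonify, request

app = Flask(__name__)

# Keeping the template server-side means prompt tweaks ship instantly,
# with no App Store review. The wording here is a made-up example.
PROMPT_TEMPLATE = "{room}, {style} style, {palette} color palette, interior design photo"

@app.post("/generate")
def generate():
    body = request.get_json()
    prompt = PROMPT_TEMPLATE.format(
        room=body["room"], style=body["style"], palette=body["palette"]
    )
    output = replicate.run(
        "stability-ai/stable-diffusion-img2img:<version-hash>",  # placeholder
        input={"image": body["image_url"], "prompt": prompt},
    )
    return jsonify({"images": list(output)})
```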

Results

I’m proud of what I made. Check it out and let me know what you think! Would love to hear any feedback, suggestions or complaints. I don’t have much usage yet but it was great to go through the process.

Pros

  • Output images look great and are high quality

  • Useful for inspiration or exploring high level themes for your room

  • Pretty magical for people who haven’t yet played with generative AI

  • I did what I set out to do! And from zero to launch in under a month!

Cons

  • Images don’t necessarily live up to scrutiny — a closer look reveals AI artifacts.

  • Color palettes in general don’t have widely used names, so the palette customization doesn’t work as well as I’d hoped, but it’s useful enough for guiding the model.

  • It’s tough to balance inventiveness with maintaining the structure of the room. I want it to have the freedom to fill empty rooms with furniture and decor, but I don’t want it to invent doors and windows where they’re impossible.

Next up for me

I plan on compiling all of my learnings into a video course, complete with a skeleton of the app and everything you need to launch a subscription-based, generative AI-powered iOS app, including the nitty-gritty: getting an LLC, a business Apple Developer account, app icons, screenshots, websites and more.

If you’re interested in hearing more, you can follow me on Twitter or get notified when it’s out.