A voice for startups

Where does conversational UI leave design?

What we learned making a bot that talks like a person

It’s 12:30pm at Truth Labs, which means we’re standing outside trying to decide where to go for lunch. We’re a creative bunch, so this seemed like a problem we should be able to solve. It was also a perfect opportunity to experiment with conversational UIs.

First, some terminology

Graphical user interfaces (GUIs) are a visual way to interact with a device. Instead of typing specific (and often cryptic) commands in a terminal, GUIs let users interact with files and programs by clicking and dragging.

Conversational user interfaces (CUIs) are a spoken or written way of interacting with a device. CUIs aren’t completely new, but they’re becoming smarter, more natural, and — therefore — more useful.

If you build it, they won’t come

Companies are realizing it’s hard to get users to download and use their apps. More than half of the time we spend on our phones goes to talking, texting, or email:

Talking, texting, and email take up 55% of the time spent on our phones.

Here’s where CUIs come in: since users already spend so much time in apps like Slack, Facebook Messenger, and even plain-old email, why not integrate your app inside these platforms?

Sure, users still have to find and install your app, but you can now interact in a more natural way. Users are more likely to use your service if it doesn’t break their workflow by requiring them to switch apps.

Making Lunchy

Lunchy is a Slack bot that smartly suggests where to go for lunch. He lives on a Node server hooked up to Slack using Botkit. The first step was getting him some data. We created a simple database using Fieldbook and added the 30 carry-out restaurants within 0.5 miles of our office.

Finding a place to eat. Snowflakes let you know suggestions are nearby due to the weather.

Lunchy started out being able to understand a few commands:

  • Suggesting a restaurant
    Typing something like “what’s for lunch?” or “where should we eat?” results in Lunchy responding with a suggestion.
  • Responding to feedback
    If you don’t like a suggestion, typing “what’s for lunch” again isn’t the most natural solution. We could have gone with something like “nah” or “try again” but even easier is responding with a single click. A 👎 on a recommendation results in a new one.
  • Finding a location
    Responding to a suggestion with “where’s that?” or “show me a map” results in Lunchy sending a link to the restaurant.
  • Telling a joke
    Just because he’s a robot doesn’t mean he can’t have a personality. We gave Lunchy a few jokes to keep up his sleeve. My personal fav: “What’s better than MacOS? TacOS.”

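The command handling above boils down to a small intent matcher: a list of patterns mapped to behaviors. The real bot uses Botkit’s `hears()` for this; the sketch below shows the underlying idea with plain regexes, and all the names and patterns here are illustrative, not Lunchy’s actual source.

```javascript
// A minimal sketch of Lunchy-style command routing: regex patterns
// mapped to intent names. Botkit's controller.hears() does the same
// job in the real bot. Patterns and names are illustrative.
const intents = [
  { name: 'suggest', pattern: /what'?s for lunch|where should we eat/i },
  { name: 'locate',  pattern: /where'?s that|show me a map/i },
  { name: 'joke',    pattern: /tell me a joke/i },
];

function matchIntent(text) {
  const hit = intents.find((intent) => intent.pattern.test(text));
  return hit ? hit.name : 'unknown';
}
```

Unmatched messages fall through to `'unknown'`, which is where a bot can ask a clarifying question instead of failing silently.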
There’s still a design process

The “no UI” buzz may sound worrying to designers, but “no UI” is a misnomer. Conversational UIs are interfaces, just — well — conversational.

How are conversations started? What options do users have? How do they discover those options? What happens when people get confused? Intuitive design matters even more in conversational UIs, where there are no visual affordances to fall back on.

Don’t be a robot

One surprising finding was how much personality matters. The Slack API is fast, so Lunchy replied almost as soon as you finished typing.

Part of the magic was lost. Lunchy felt like a robot, not a helpful member of the team who happened to love lunch.

The fix? Simulate that Lunchy was “typing” and delay his response by a mere second.
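That fix can be sketched as a tiny delay helper. We settled on roughly a second; the version below also scales the wait slightly with reply length, which is an embellishment of ours rather than what Lunchy actually shipped, and the tuning numbers are illustrative.

```javascript
// Sketch of the "typing" delay: instead of replying instantly, wait
// about a second (plus a little per character, capped) so the bot
// feels like it is typing. Tuning numbers are illustrative.
function typingDelayMs(replyText) {
  const base = 1000;      // never reply in under a second
  const perChar = 10;     // ~10 ms per character "typed"
  return Math.min(base + replyText.length * perChar, 3000);
}

function replyWithDelay(sendFn, replyText) {
  return new Promise((resolve) => {
    setTimeout(() => {
      sendFn(replyText);
      resolve();
    }, typingDelayMs(replyText));
  });
}
```

In Slack you can pair this with the typing indicator so users see the pause as activity rather than lag.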

Make sure you solve the problem at hand

At this point, Lunchy was a personable bot that drew suggestions from a digital hat. He’d find you somewhere to eat, but not necessarily somewhere you’d like. The next step was making him smart.

Recommendation algorithms live on user data. The data drive suggestions and let you evaluate an algorithm’s effectiveness.

We were already providing feedback on what we liked (restaurants skipped less often beat restaurants skipped more often); we just had to start storing this information in our database. Lunchy could now use an algorithm to prioritize our favorite spots and stop recommending places we never went.
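One simple way to do this — a sketch, not Lunchy’s actual algorithm — is weighted random selection, where each restaurant’s weight shrinks with how often it has been skipped. The weighting formula below is illustrative.

```javascript
// Skip-aware suggestions: a restaurant's weight shrinks with each 👎,
// so favorites surface more often and ignored spots fade out.
// The 1 / (1 + skips) formula is an illustrative choice.
function weight(restaurant) {
  return 1 / (1 + restaurant.skips);
}

// rand is injectable so the pick can be tested deterministically.
function pickRestaurant(restaurants, rand = Math.random) {
  const total = restaurants.reduce((sum, r) => sum + weight(r), 0);
  let roll = rand() * total;
  for (const r of restaurants) {
    roll -= weight(r);
    if (roll <= 0) return r;
  }
  return restaurants[restaurants.length - 1]; // guard for float rounding
}
```

Keeping some randomness (rather than always picking the top spot) preserves the spontaneity that made Lunchy fun in the first place.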

Try to anticipate user needs

It’s the middle of winter right now, and we quickly found that suggestions more than a few blocks away were often passed over — not because we didn’t like the food, but because Chicago winters are not so forgiving. The fix? When making a suggestion, Lunchy quickly checks to see if it’s bitterly cold or raining (using Forecast). The worse the weather, the closer the recommendation.
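The weather check amounts to shrinking the search radius as conditions get worse. Here’s one way that could look; the thresholds and distances are our illustrative guesses, not Lunchy’s real numbers, and the real bot gets conditions from Forecast rather than taking them as arguments.

```javascript
// Sketch of weather-adjusted search radius, in miles. The colder or
// wetter it is, the closer the suggestion. Thresholds (°F) and radii
// are illustrative.
function maxDistanceMiles(tempF, precipitating) {
  let radius = 0.5;                 // full half-mile radius on a nice day
  if (tempF < 20) radius = 0.15;    // bitterly cold: stay very close
  else if (tempF < 40) radius = 0.3;
  if (precipitating) radius = Math.min(radius, 0.2);
  return radius;
}
```

A suggestion engine can then simply filter the restaurant list to entries within `maxDistanceMiles(...)` before picking one.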

Several days into our experiment we noticed a familiar trend reemerge: the team congregating around a computer or phone to ask Lunchy where to go. Automating repetitive tasks like this is what computers have been doing for decades. Since we always go for lunch around the same time, we now have Lunchy send a friendly ping every day at noon.
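On a plain Node server, the daily ping comes down to computing how long to wait until the next local noon, firing the reminder, and re-arming. A cron-style scheduler is simpler in production; this sketch (with an illustrative message) just shows the underlying math.

```javascript
// Sketch of the daily noon ping: compute the wait until the next
// local 12:00, send the reminder, then reschedule for tomorrow.
function msUntilNextNoon(now = new Date()) {
  const next = new Date(now);
  next.setHours(12, 0, 0, 0);                        // today at noon, local time
  if (next <= now) next.setDate(next.getDate() + 1); // noon already passed
  return next - now;
}

function scheduleLunchPing(ping) {
  setTimeout(() => {
    ping('What should we get for lunch today?');
    scheduleLunchPing(ping);                         // re-arm for tomorrow
  }, msUntilNextNoon());
}
```

Recomputing the delay each day (instead of a fixed 24-hour interval) keeps the ping anchored to noon even across clock drift or daylight-saving changes.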

What’s next?

There are a couple of areas we’d like to explore, such as menu suggestions using Yelp and automating orders for pickup or delivery. We’ll post our updates…

Lunchy was a fun way to spend a few days between projects. But more than that, he’s added some spontaneity to our daily routine. Welcome to the team, our brown-bagged friend.

This post originally appeared on Medium

Stelios Constantinides

UX Designer/Engineer at Truth Labs