Advanced UX Design
A project from my time at Mercedes-Benz R&D North America. The R&D focus is on innovations such as fuel cells, self-driving, 3D graphics, and advanced design. As with all super cool projects, I'm under an NDA, so instead I'll highlight the process and problems surrounding my AI project.
Topics: Artificial Intelligence, Voice interface, Maps, Natural language, Self driving cars
Roles: Interaction design, Interface design, Software prototyping
Most people are not sold on the value of Siri, Alexa, and similar natural language voice interfaces. People often say these interfaces disappoint by sounding smarter than they really are, but what they really mean is that the voice interface isn't smart enough.
At first, that may seem like an easy problem to grasp. Smart design means putting the user first: (1) start with a need, (2) design an intuitive layout, (3) provide interactive content, (4) optimize by testing.
Unfortunately, this isn't the case. When designing for intelligence, simpler is not better. The goal is to replicate the subtle complexity of another human. The cool thing about interacting with humans is that you won't catch everything on a first read: there's always more to discover and something new to learn. Having software replicate that intelligent relationship means moving away from dumb software that demands mind-numbing user input. I'm really excited about this shift because it's a chance for us to become smarter and more human through the technology we interact with.
1. Define existing problems in brainstorm sessions with the team
2. Outline interactions, design example dialogues, find relevant literature
3. Sketch and wireframe the GUI aspect of the agent
4. Run low-fidelity tests using static mockups and an off-board text agent
5. Code up the GUI
6. Add in the logic, voice interface, recognizer, and TTS
7. Run the app in the car, tune the speech response, train the agent with more examples
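Steps 5 through 7 can be sketched as a minimal loop. Everything below is my own illustrative stand-in code: the `recognize` and `speak` functions are placeholders for the real recognizer and TTS, and the example-lookup agent is a toy version of the actual logic, which I can't show.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Utterance -> response; a trained agent would generalize from examples.
    examples: dict = field(default_factory=dict)

    def train(self, utterance: str, response: str) -> None:
        """Step 7: tune the agent by adding more training examples."""
        self.examples[utterance.lower()] = response

    def respond(self, utterance: str) -> str:
        """Step 6: logic mapping recognized speech to a reply."""
        return self.examples.get(utterance.lower(), "Sorry, can you rephrase that?")

def recognize(audio: str) -> str:
    """Stand-in for the speech recognizer; here 'audio' is already text."""
    return audio

def speak(text: str) -> str:
    """Stand-in for TTS; in the car this would be synthesized speech."""
    return text

agent = Agent()
agent.train("navigate home", "Starting route to your home address.")
print(speak(agent.respond(recognize("Navigate home"))))
# prints "Starting route to your home address."
```

The point of keeping the recognizer and TTS behind tiny function boundaries is that the low-fidelity text version (step 4) and the in-car version (step 7) can share the same agent logic.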
Natural language processing
So how do you replicate the intelligence of a human? In my case, it's through a voice agent. I've been working on how the agent understands what you're saying as well as how it should talk back. Here are some specific problems I've identified and spent my time solving:
How do you dynamically create and update non-verbal context in voice interaction?
Teaching an agent to extract a combination of names, titles, and descriptions referenced in a single utterance
Taking an indirect reference made by the user and assigning it to an object with the help of prepositions
Sometimes it's still better to show than tell, so how do you anthropomorphize a GUI without being creepy?
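To make the extraction and reference-resolution problems above concrete, here is a toy sketch of my own (not the production agent). It assumes a hypothetical on-screen context of two points of interest and uses simple pattern matching to pull both a direct name and a preposition-anchored indirect reference ("the one on the right") out of a single utterance.

```python
import re

# Hypothetical non-verbal context: objects currently shown on screen,
# keyed by their position.
screen = {"left": "Cafe Milano", "right": "Blue Bottle Coffee"}

def extract_references(utterance: str) -> list:
    """Resolve quoted names and positional phrases in one utterance."""
    # Direct references: names quoted in the utterance.
    names = re.findall(r'"([^"]+)"', utterance)
    # Indirect references: prepositional phrases like "on the left",
    # bound to whatever the GUI is currently showing there.
    positions = re.findall(r"\bon the (left|right)\b", utterance)
    return names + [screen[p] for p in positions]

refs = extract_references('Compare "Cafe Milano" with the one on the right')
# refs now holds both the direct name and the resolved indirect reference:
# ["Cafe Milano", "Blue Bottle Coffee"]
```

The real agent has to build and update that `screen` context dynamically as the GUI changes, which is exactly the first problem in the list; a static dictionary is the simplest stand-in for it.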
Designers are practical problem solvers. For us it's more about the functionality; there's no single proper way. You can't really "fake" a natural language demo, but there are more and less practical ways of achieving your goal. What I learned in the process was that even with new techniques like deep learning at our hands, the agent doesn't have to think the way a human does in order to present itself as if it thinks the way a human does.
There's nothing wrong with artificial intelligence being artificial. We ourselves rely on this phenomenon all the time, for instance during interviews, or when someone asks us for advice. Realizing this was a big Jedi milestone as a designer fighting the temptation to code too much.
Each year some work from our studio is shown publicly at CES. If you want to get a better idea of how intelligent interaction plays into cars, check out these projects my team has worked on in the past. Just to be clear, I was not involved in either of these past projects.