Understanding the gap between voice interface design and engineering

Voice User Interfaces (VUIs) and their core technologies are increasingly popular: they are commonly found in households and on mobile devices, and a growing number of businesses have adopted them for key functions, such as transcription and customer support, across industries including automotive, finance and insurance. However, the fluidity of voice makes it complex to design for, and it raises considerable design challenges that are new to designers more familiar with web, mobile or desktop UX design. In addition, many of the underlying technologies, such as natural language processing (NLP), speech recognition, text-to-speech generation and dialogue systems, are state of the art and remain the subject of significant active research. This creates problems for how designers and developers form teams and establish a mutual understanding of what is possible and how design solutions might be proposed. If we understand how designers and engineers account for these complexities, in the technologies, in the design processes, and in their team relations, we might find ways to improve systems design.

This project will investigate how voice interface engineers working with NLP, speech recognition, text-to-speech generation and dialogue systems engage with designers when implementing voice interface-based services and products.

The project will run from 1/11/2020 until 23/3/2021.