Language of logistics: How user voices help us build better assistants

German Autolabs
5 min read · Mar 15, 2022


One of our obsessions in the field of natural language understanding is how best to deal with language data. After years of analysis and optimization, we’ve developed a gold standard for language data annotation with which we can rapidly and repeatedly build customised solutions for specific workflows. German Autolabs’ Language Data Manager, Dr. Lara Ehrenhofer, used the example of our voice assistant for newspaper delivery to explain how listening to users helped us to learn and to build better products.

I — Why do we need to train assistants?

To build a good voice assistant, you need to feed it data. This data trains the assistant and allows it to constantly learn and iterate. For multinationals building consumer voice assistants, there’s a convenient and near-infinite resource with which to train their assistants: the internet.

Users of Google Assistant and Siri have a broad range of intents and desires that their respective assistants must understand. To deal with this, their developers found that training the assistants on the entire content of the internet is a pretty effective way to cover a broad range of topics. The system is not infallible, though, as Lara noted when we sat down to discuss the topic:

“Sometimes the big data approach doesn’t work for ‘niche’ users. Back when Apple first launched Siri, its UK English setting was completely incapable of understanding Scottish people.”

Language Data Manager, Lara Ehrenhofer, preparing to field test our Courier Assistant.

II — What makes our training needs different from those of consumer assistants?

The big data approach has other drawbacks. At German Autolabs, we build voice assistants for professional users who operate in very specific scenarios, and who only wish to use the assistant for work. Using big data to train our assistants would be like using a sledgehammer to crack a walnut. Natural human conversation differs from dialogue with a professional voice assistant, as Lara explained:

“With the assistant, conversations are driven by functionality and brevity. Our challenge in logistics is to build assistants that can understand jargon-rich language, often uttered by non-native speakers working in dynamic and noisy outdoor environments.”

So, if the internet is a good training resource for consumer-grade assistants, how do we train ours? It always starts on the street. We go straight to the people using our products: our users.

3:30am: A courier makes final checks at the newspaper delivery hub.

III — Focusing on user groups

Our efforts to extract the language data that we need to successfully iterate an assistant require several carefully planned phases, as Lara explained:

“Firstly, we create a string of hypothetical scenarios that the couriers can conceivably find themselves in — locked letterboxes, missing newspapers, unnamed subscribers.”

“We learn the couriers’ vocabulary — it’s concise and technical. It’s easy to fall into the trap of making false assumptions. For example, we thought that couriers would refer to the people receiving newspapers as ‘customers’, but no: they call them subscribers. It’s an example of bias that can be rectified by a focus group.”
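Vocabulary corrections like this feed directly into an assistant's training data. As a purely illustrative sketch (the intent names and phrases below are hypothetical, not German Autolabs' actual data), training samples might pair courier utterances with the scenarios Lara mentioned, using the vocabulary couriers actually use:

```python
# Hypothetical training utterances for a courier assistant intent model.
# Note the domain vocabulary: couriers say "subscriber", not "customer".
training_data = [
    ("the subscriber's letterbox is locked", "report_locked_letterbox"),
    ("newspaper missing for this subscriber", "report_missing_newspaper"),
    ("no name on the door for this subscriber", "report_unnamed_subscriber"),
]

def vocabulary(samples):
    """Collect the unique words seen across all training utterances."""
    return {word for text, _ in samples for word in text.split()}

# A quick sanity check that the domain terms made it into the data.
vocab = vocabulary(training_data)
```

A focus group finding like "they say subscribers, not customers" translates into exactly this kind of edit to the utterance set before the model ever sees it.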

Focus group survey answers

“A typical focus group contains around 20 users that we select with our client. We aim for the broadest demographic range possible, from newly onboarded non-native speakers to old-timers with 20 years in the job.”

Learning courier vocabulary from assistant recordings is essential, but for maximum immersion, Lara and the team always like to witness workflows first-hand. This means getting up very early and hitting the streets with the couriers.

“There’s nothing like walking ten miles with a courier before breakfast to give you a clear picture of whether the tool is working efficiently or not.”

Steps counter from Lara’s phone during street testing; the superseded analogue delivery book.

IV — The gold standard of data annotation

AI is not the complete answer to everything. Not quite yet, anyway. Lara explained why we augment data with human annotation:

“Full automation in this arena is less desirable than human interpretation of the key facts. We choose to edit the data where necessary. We select what we need, based on understandings learned in the field, and this hybrid approach helps us to build a better assistant.”

Speech recognition confidence reports help us identify areas of potential weakness.

“To go a level deeper, this methodology also allows us to easily make subdivisions within the data: a good example would be to differentiate between native and non-native speakers. This kind of categorization can lead us towards, for example, phrasing a voice assistant question differently the second time round to accommodate for varying comprehension levels.”
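The hybrid workflow Lara describes, automatic processing plus selective human review, can be sketched very roughly as a confidence-based triage. This is an illustrative assumption, not German Autolabs' actual pipeline; the threshold and field names are hypothetical:

```python
# Illustrative sketch: route low-confidence speech recognition results to
# human annotators, and keep a speaker-group label on each utterance so the
# data can later be subdivided (e.g. native vs. non-native speakers).
CONFIDENCE_THRESHOLD = 0.85  # hypothetical cut-off

def triage(recognition_results):
    """Split ASR results into auto-accepted and human-review queues."""
    auto_accepted, needs_review = [], []
    for result in recognition_results:
        if result["confidence"] >= CONFIDENCE_THRESHOLD:
            auto_accepted.append(result)
        else:
            needs_review.append(result)  # a human annotator checks these
    return auto_accepted, needs_review

results = [
    {"text": "letterbox is locked", "confidence": 0.97, "speaker": "native"},
    {"text": "paper missink here", "confidence": 0.61, "speaker": "non-native"},
]
auto, review = triage(results)
```

The speaker labels are what make the subdivision Lara mentions possible: once the corrected data is grouped by speaker type, the dialogue designer can choose a different rephrasing strategy for each group.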

We use machine learning where and when it makes sense: AI helps us to scale from a nucleus of human understanding. Everything we have done in the past contributes to building up our general NLU database, because everything goes into a repository of logistics language. This gives us years of understanding and industry expertise.

Lara and the team’s people-centric approach (they are not just users, but people too) to iterating our products across the board, from User Experience to Natural Language Understanding, has resulted in an inclusive and scalable development model that helps us to make working life easier for couriers and professional mobile workers on the street.

For more information, visit German Autolabs. To find out how our Solutions can help your business, talk to us today. As always, thank you for reading.
