This week’s most exciting tech news was probably the inclusion of a radar sensor in the latest Google phone. Project Soli, which lets the phone wake when it senses a face nearby or someone reaching for it, may be just a gimmick. But as we embed computing everywhere, the technology behind the sensor is showing up in more and more places, not just phones.
When we put computers in everyday objects and offer multiple modes of interaction, one of the biggest challenges facing those computers (or digital assistants) is figuring out our intent. When I’m typing on my laptop keyboard, the computer clearly knows what I’m trying to do and which device I’m addressing. But think back to the last time you called for Alexa or Google in a place that had multiple voice assistants. What happened? Did several of them respond? Was it the one you wanted?