In an interesting social/behavioral development, Microsoft’s latest upgrade to its XiaoIce chatbot teaches the AI when to interrupt human conversation.
The feature is called “full-duplex voice sense,” and on a basic level it lets the chatbot talk and listen at the same time. (The older, walkie-talkie style of AI conversation is called “half duplex.”) The bot can predict what you’re likely to say next and knows when to interrupt you with relevant information.
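To make the half-duplex vs. full-duplex distinction concrete, here is a minimal, hypothetical sketch (not Microsoft’s implementation): the half-duplex bot waits for the complete utterance before replying, while the full-duplex bot listens word by word and interrupts as soon as only one known phrase still matches what it has heard. The function names and the prefix-matching heuristic are illustrative assumptions.

```python
def half_duplex_reply(utterance_words):
    # Walkie-talkie style: wait for the complete utterance, then answer.
    utterance = " ".join(utterance_words)
    return f"heard all of: '{utterance}'"

def full_duplex_reply(utterance_words, known_phrases):
    # Listen word by word while still able to speak: as soon as exactly
    # one known phrase matches the prefix heard so far, interrupt with
    # the predicted completion instead of waiting for the user to finish.
    heard = []
    for word in utterance_words:
        heard.append(word)
        prefix = " ".join(heard)
        matches = [p for p in known_phrases if p.startswith(prefix)]
        if len(matches) == 1:
            return f"interrupting after '{prefix}': predicted '{matches[0]}'"
    # No confident prediction: fall back to waiting for the whole utterance.
    return half_duplex_reply(utterance_words)

words = "what's the weather in seattle".split()
phrases = ["what's the weather in seattle", "what's the time"]
print(half_duplex_reply(words))
print(full_duplex_reply(words, phrases))
```

In this toy version the full-duplex bot jumps in after “what's the weather,” once the other candidate phrase is ruled out; a real system would use a speech-prediction model rather than simple prefix matching.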
There are two goals for this functionality:
- Give conversations a more natural flow
- Remove the need for a wake word every time the user responds during a conversation
Microsoft plans to roll the technology out to its chatbots in the US and Japan, though it could quickly catch on in other conversational AI tools as well.
Why It’s Hot: What makes a computer feel more human? I’d venture to say that human speech patterns have a lot to do with it. How will a more human-like AI assistant change how we speak to our computers, how we interact with them, and, on a bigger level, how we view them within the context of our lives? Will it change how we feel about our computers and how we rely on them day to day? Will our brains begin to process AI the way we process other humans? (Basically, will we all end up like Joaquin?)