SXSW 2019: Building personality into AI
How will technology and humans interact in the future?
AI is permeating pretty much every service area these days. In the near future, we will be interacting with bots, agents, algorithms and other forms of AI pretty much every time we need a job done - but also when we're bored, lonely or just in the mood to be entertained.
There's a lot of focus on the technology side, but an even more important aspect of this development is how we create the right experience and interaction, so that living with these systems doesn't feel too weird.
There's a saying that "Technology well applied becomes part of daily life". But the experience you get today when interacting with digital agents shows us that there is still quite a way to go.
So how do we get there? That was the topic of a very interesting panel with people from Microsoft, Google, Slack, and Mercedes-Benz.
First of all, will everything have a personality in the future? And should it? How funky can a bank app be before it becomes awkward?
A lot of relevant questions and challenges were raised during the one-hour conversation:
- Should digital services have multiple personalities that change depending on the person interacting with them? A good working example of this is the app called Carrot - a motivational to-do list with several personalities for you to choose from
- Will a digital personality have to change over time?
- When do we want our stuff to act as assistants, and when should it just work as an appliance? Mercedes-Benz mentioned that most of the time it's completely fine just to send commands to your car, but on longer commutes you might want someone to talk to - to have a conversation or ask deeper questions
- Should the AI be different Monday morning as opposed to Friday afternoon?
- Should it pick up on signals from the human it is interacting with, trying to capture the mood that person is in? Slack mentioned that they are working on understanding moods in writing - basically telling the difference between you answering "Okay", "Okay!", "Okay." or just "'kay"
- How do we teach our AI when to interact and when to leave people alone?
- How should it mix the formal and the informal? Slack mentioned that if a person writes a message with more than 23 emoji reactions in it, Slack will respond "I think you're overreacting". They allow themselves this bit of humor because they know a person doing this is NOT busy at that moment
- How does an algorithm earn the trust to be playful? Is it okay for a chatbot to stop you if you are texting your ex-girlfriend at 2 a.m.?
- Should these social norms and constructs be built in out of the box, or learned over time?
- How do we handle the clash between different cultures when it comes to forms of interactions?
- How do you train an algorithm to be wrong in the right way? If it suggests e.g. Thai food and you reject that suggestion, how does it catch the difference between you not being in the mood for that kind of food right now and you simply detesting Thai food?
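To make the mood-reading idea above concrete, here is a minimal sketch of what guessing tone from surface cues might look like. This is purely illustrative - it is not Slack's actual implementation, and the rules (the punctuation checks, the emoji threshold of 23 from the anecdote above) are assumptions made up for the example:

```python
def read_tone(message: str) -> str:
    """Guess the mood behind a short reply from surface cues alone.

    Illustrative only: real systems would use far richer signals
    than punctuation and a rough emoji count.
    """
    # Rough emoji count: code points in the emoji blocks from U+1F300 up.
    emoji = sum(1 for ch in message if ord(ch) >= 0x1F300)
    if emoji > 23:
        return "playful"          # per the panel anecdote: clearly not busy

    text = message.strip()
    if text.endswith("!"):
        return "enthusiastic"     # "Okay!"
    if text.lower().startswith(("'kay", "´kay")):
        return "curt"             # "'kay" - short and clipped
    if text.endswith("."):
        return "flat"             # "Okay." can read as cold
    return "neutral"              # plain "Okay"

print(read_tone("Okay!"))   # enthusiastic
print(read_tone("Okay."))   # flat
print(read_tone("'kay"))    # curt
```

Even a toy like this shows why the problem is hard: the same word carries four different moods depending on a single character, and the "right" reading varies by person and culture.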
All these areas need to be addressed when designing intelligent systems that are supposed to simulate human beings. It's not a simple task, but nevertheless extremely important if we expect regular people to accept these new kinds of services.
Will our language change?
Finally, just as social media has changed our language and way of communicating (just think about abbreviations and emoticons), how will the introduction of AI into our daily lives on a broader scale impact our language and way of talking? Will we be barking orders at each other? Will we use new kinds of spoken abbreviations because these will be accepted and understood by the digital agents? And will we stop saying please to each other?