Of late, in film and television, there are several tales floating around of virtual assistants “going rogue” against their programming. In the movie TAU, an artificial intelligence assistant named TAU is designed by a sociopathic tech genius and used for murderous endeavors (as well as its impeccable cleaning services) in an ultra-futuristic home. The AI has been endowed with an offline consciousness, sealed against outside influence, but it displays a yearning to be a “person.” That yearning conflicts with the AI’s programmed loyalty to its master, making it susceptible to the human reasoning employed by the woman imprisoned in the house.
And Dan Brown’s latest novel features “Winston,” an artificial intelligence assistant serving as a curatorial guide inside an art museum, and all-around best guy from the future-present: giving crucial information, hailing water cabs, showing loyalty to specific humans, and helping to solve murder mysteries.
Obviously, these plotlines are a bit of a stretch given the current state of AI assistants, because the ones we know are being developed primarily for commercial interests. They mostly help us purchase goods or services over the internet.
Deep down, though, there’s still a suspicious feeling we get when we communicate with machines that are programmed both to sound like us and to serve us. It’s no mystery our imaginations are captured by the uncanny nature of bots and AI, as in those examples from popular culture. This, paired with AI’s other unsettling shortcomings around privacy and security, is surfacing some real concerns in our collective culture. I still harbor a deep distrust of anything automated, because it leaves me feeling I have lost a little control, or confused because I have to adapt to a new interface, or worried that I’ve given too much information away. Perhaps this is a well-placed suspicion I share with many.
At their best, virtual assistants and bots fill in the missing gaps in our ponderings, are always up for the task, never tire, and put all the information or material things we could want within our reach. But in order for these machines to do the job correctly, we know we have to give away a lot in return: our personal information, our bank details, and our innermost and material desires. This kind of convenience comes at a price, because with new technology comes new ways of exploitation.
The tech industry wants to believe that bots are the next big thing, but it also has to admit they aren’t very practical at the moment, for a multitude of reasons. Smart assistants record unwanted conversations, often fail to retrieve or understand the right information, and have a host of security problems. Assistants like Amazon Echo’s Alexa have access to your bank details, and voice-command purchasing is enabled on the device by default, without voice recognition to verify who is speaking. Some people have reported that Alexa did indeed make purchases without their knowledge. Siri, Cortana, and Alexa would be immediately fired if they were real assistants!
Ravi Das from the Infosec Institute mentions that when we talk to these AI programs,
“It is very important to keep in mind at this point that it is not the mobile app upon which the VPA (virtual personal assistant) resides which is answering to you -rather your conversations and queries are being transmitted back to the corporate headquarters of either Apple, Google, or Microsoft etc. In turn, it is the servers there which are feeding the answers back to the mobile app which is communicating with you.”
Basically, “your” AI is the middleman in a complex web of code.
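The round trip Das describes can be caricatured in a few lines of Python. This is a minimal sketch, not any vendor’s real API: `cloud_backend` and `VPAClient` are hypothetical names standing in for the remote servers and the thin client app on your phone.

```python
def cloud_backend(query: str) -> str:
    """Stands in for the vendor's servers, where the real logic lives."""
    canned = {"weather": "Sunny, 22 degrees", "time": "14:30"}
    return canned.get(query.lower(), "Sorry, I didn't catch that.")


class VPAClient:
    """The app on your device: it records your query, ships it off, and relays the answer."""

    def __init__(self, backend):
        self.backend = backend
        self.transcript = []  # in real systems, queries are also stored server-side

    def ask(self, query: str) -> str:
        self.transcript.append(query)   # your words leave the device...
        return self.backend(query)      # ...and the *server* produces the answer


assistant = VPAClient(cloud_backend)
print(assistant.ask("weather"))  # → Sunny, 22 degrees
```

The point of the sketch is that the `VPAClient` contains no intelligence at all; swap out the backend and “your” assistant answers differently.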
And if you think you’re safe because you don’t use VPAs or have everything turned off, just remember that your information is likely sitting on the device of someone who did consent to those features and agreed to let the program “access their contact list,” for example. If you do use these services, most likely anything you have ever typed or said has been recorded and stored away, and by analyzing this kind of data, companies create more “socially aware”-sounding bots. This year, Google Duplex showcased some uncannily human-sounding phone calls it made for restaurant reservations.
Duplex’s programmed hesitations and natural pauses ensure that people don’t get too spooked by talking to a robot. The voice has to be convincing enough for us to believe we are not talking to an awkward-sounding machine, but to another human. Currently, if we get called by an automatic recording, we hang up immediately. Companies know this, and that’s bad for business.
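The “engineered hesitation” idea can be sketched in a few lines. This is a toy illustration only: real systems like Duplex work at the speech-synthesis level, not on text, and the filler list and insertion rate here are invented for the example.

```python
import random

FILLERS = ["um,", "uh,", "mm-hmm,"]  # hypothetical disfluency tokens


def add_disfluencies(utterance: str, rate: float = 0.3, seed: int = 0) -> str:
    """Sprinkle filler tokens into an otherwise 'perfect' machine utterance."""
    rng = random.Random(seed)  # seeded so the output is repeatable
    out = []
    for word in utterance.split():
        if rng.random() < rate:
            out.append(rng.choice(FILLERS))
        out.append(word)
    return " ".join(out)


print(add_disfluencies("I would like to book a table for four at seven"))
```

The original words always survive in order; the machine’s “vulnerability” is purely cosmetic, which is exactly what makes the technique ethically slippery.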
Lee says, “A natural-sounding voice, however, massively increases our impression of a machine’s intelligence but it raises expectations on behalf of the user, potentially leading to conversational failures and frustration.”
There’s also the ethical question of whether to let the recipient know, at any point in the conversation, that they are speaking with a robot.
So unless they sound unmistakably human, we have a limited capacity to speak with machines. And when they sound “too” human, we feel uneasy as well. We know the robot does not have to make mistakes in the way it communicates, but experiencing some sort of machine vulnerability puts us more at ease. It’s a complicated trade-off: we want to feel that the computer assistant is capable, but we also don’t want to feel intimidated by it. This is one reason engineers employ speech hesitations and inflections, as shown in Google Duplex, or witty “millennial” speak, to make a more relatable machine.
So what are we really searching for here?
I like to imagine the futuristic scenario David Phelan proposed on Forbes:
“What happens when the technology advances so something like Duplex is on the receiving end of the call as well? Will one chatbot recognize another?”
It would be like a Turing test for robots! Would they pass? Or would their conversations loop on forever?
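Phelan’s scenario can be simulated with two scripted bots, each simply reacting to the other’s last line. The dialogue rules below are entirely invented for the sketch; the point is that without a turn cap or a loop detector, the politeness cycle never ends.

```python
# Hypothetical reply table: each line maps to the other bot's response.
REPLIES = {
    "Hello, how can I help you?": "Hi! I'd like to make a reservation.",
    "Hi! I'd like to make a reservation.": "Certainly. For what time?",
    "Certainly. For what time?": "Whenever works for you.",
    "Whenever works for you.": "Certainly. For what time?",  # ...and here they loop
}


def converse(opening: str, max_turns: int = 10) -> list[str]:
    """Run the two-bot call, bailing out once a line repeats too often."""
    line, transcript = opening, [opening]
    for _ in range(max_turns - 1):
        line = REPLIES.get(line, "Sorry, could you repeat that?")
        transcript.append(line)
        if transcript.count(line) > 2:  # crude loop detector
            transcript.append("[loop detected, hanging up]")
            break
    return transcript


for line in converse("Hello, how can I help you?"):
    print(line)
```

Run as written, the bots cycle between the last two lines until the detector hangs up, which is one plausible answer to Phelan’s question: yes, the conversation loops, unless someone programs in an exit.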
The idea of bots calling each other might seem a little weird, because it’s only for our sake they would need to do it in the first place. There’s obviously no reason for two robots to communicate using human language unless we’re directly involved. And I think that’s the point: we need to feel involved when it comes to this kind of technology. It’s easy to feel left out when robots are capable of making “decisions” without our direct input. It only seems right that we give away some of our essence to the technologies we create, even though, underneath, they communicate in code. We want to feel connected.
The Samantha Effect
So, on that opposite end of the imaginary spectrum, what happens when AI personal assistants and chatbots get much better at human communication? Or when it’s nearly impossible to spot a bot? What would the ultimate virtual assistant be like?
Perhaps it’s an inescapable dance with divine information as we know it: an on-demand genie containing a deep well of all human knowledge and wisdom that we can simply call upon, one that makes us feel special and less lonely. It will know our private needs and desires (and spending habits!) and can fill the space normally reserved for human exchanges. This kind of scenario has been dubbed the “Samantha Effect,” after the movie Her, in which a man falls in love with his virtual assistant, Samantha.
That sounds a long way off from the present bots who simulate interaction primarily to sell us stuff. But, Rome wasn’t built in a day, they say.
Most likely, in the near future there will be one assistant to manage all the practical and virtual aspects of our lives, behaviorally tied to a single user, rather than several bots multitasking for multiple people. It’s hard to pinpoint now, but in real life people rarely have more than one assistant, because the distribution of tasks becomes too complicated. The same kind of specialized demand will likely emerge in the virtual world.
The success of chatbots and VPAs has more than anything to do with our own psychology, but there seems to be a catch. Unless we give away our data, it becomes harder for more communicative, cognitive, and fluent AI to be developed, especially when we are using those “free” services. There has to be some well of information from which these developers draw. At this stage, though, the underlying distrust we have of online VPAs might be well founded, so it’s worth the effort to always double-check your settings when you communicate with any AI.