:: State of tech

Part 1: (Bot)ched Communication – Why aren't bots taking over the internet?

As the first industrial revolution plowed its way through history, machines took over many aspects of hard manual labor. Factories and farms still needed humans to run the machines, but the work became far more efficient in the long term. Now, with the onset of the fourth industrial revolution (the rise of bio-cyber-physical systems), the jobs of the past are at even greater risk of total automation, and machines are gaining the ability to assist with mental labor as well. By machines, in this case, I mean virtual assistants and chatbots.

Bots are intended to be the machine solution to finding answers in a growing sea of data, parsing information that humans simply do not have the time or capacity to struggle through. They can manage our time and travel, get us the things we want, predict the weather, answer our random questions, or even provide medical advice.

But even with the many possible uses and grandiose ideas for bots, people don’t seem to be all that into using them yet. Why is this?

How do bots work and why don’t we like them?

Most chatbots, if not all of them, aren’t real artificial intelligence. UX designer Rafael González describes them as a decision-tree model (a database, to keep things simple) that stores every “command” the user might type or speak. When the bot detects that one of these commands has been used, it performs the corresponding action.

The advantage of this approach is that it’s pretty easy to list every case the bot is designed to cover. And that’s precisely its disadvantage, too: such a bot is purely a reflection of the capability, fastidiousness, and patience of the person who created it, and of how many user needs and inputs that person was able to anticipate. Problems arise when life refuses to fit into those boxes, which usually results in the bot responding with something like, “I don’t understand the question.”
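To make that concrete, here’s a minimal sketch of the command-matching approach in Python. The commands, canned replies, and function name are invented for illustration; a real bot would map many more phrasings to each action, but the shape is the same:

```python
# Minimal sketch of a decision-tree / command-matching chatbot.
# The commands and canned replies below are hypothetical examples.

RESPONSES = {
    "weather": "It looks sunny today.",
    "book a flight": "Sure, where would you like to fly?",
    "set a reminder": "Okay, what should I remind you about?",
}

FALLBACK = "I don't understand the question."

def reply(message: str) -> str:
    text = message.lower()
    # Scan the anticipated commands; fire the first one found in the input.
    for command, response in RESPONSES.items():
        if command in text:
            return response
    # Anything the designer didn't anticipate lands here.
    return FALLBACK

print(reply("What's the weather like?"))   # matched -> canned answer
print(reply("Will I need an umbrella?"))   # unanticipated phrasing -> fallback
```

The second question is about the weather in spirit, but because it contains none of the anticipated keywords, the bot falls straight through to its fallback line. That’s the box-fitting problem in miniature.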

Furthermore, chatbots can’t talk like a normal person because they are focused on a particular topic. Interaction will never feel natural when a bot’s language is limited to a single topic or a narrowly defined set of words.

George Kassabgi likens the chatbot to a mechanical music box in his blog post “The ‘AI’ Label is Bullsh*t”:

“It’s a machine that produces output according to patterns. Rather than using pins and cylindrical drums the chatbot uses software code and mathematics. A mechanical music box knows nothing about music theory, likewise, chatbot machinery knows nothing about language.”

To make an effective bot, developers need a good sense of human psychology, and this is a problem for a couple of reasons. Building a well-behaved bot means factoring in a complex jumble of cognitive-psychological considerations, along with an in-depth knowledge of statistics, market capitalism, commercial advertising, and consumer behavior with technology, not to mention the actual software engineering itself.

Perhaps part of the bigger picture is missing when it comes to developing emotional intelligence and broader, non-discriminatory ways of thinking while building these kinds of technologies. Justin Lee of GrowthBot argues that the data selected for machine learning often reflects the unconscious biases of the researcher or developer, propagating what some would call an algorithmic white male bias, primarily from Western society. For example, privacy and data protection professional Ivana Bartoletti states:

“It is not possible for algorithms to remain immune from the human values of their creators. If a non-diverse workforce is creating them, they are more prone to be implanted with unexamined, undiscussed, often unconscious assumptions and biases about things such as race, gender and class. What if the workforce designing those algorithms is male-dominated? This is the first major problem: the lack of female scientists and, even worse, the lack of true intersectional thinking behind the creation of algorithms.”

Just look at the propensity for supposedly “genderless” bots with female personas or voices: Siri, Alexa, Cortana, Clara, Julie, Riley, Amy, and the list goes on. As Zachary M. Seward & Sonali Kohli put it: “Companies appear to be more comfortable with creating male assistants if they are based on existing fictional superheroes’ assistants or butlers. For some reason, fresh inspiration for female virtual assistants flows more easily.”

Aside from the male-bias issue, there’s also the more general problem of human interaction. As Justin Lee explains,

“Conversational user interfaces are meant to replicate the way humans prefer to communicate, but they end up requiring extra cognitive effort. Essentially, we’re swapping something simple for a more-complex alternative. And here’s the other thing. Conversational UIs are built to replicate the way humans prefer to communicate — with other humans. But is that how humans prefer to interact with machines? Not necessarily.”

Unfortunately, machine efficiency and human emotion are often at odds. And so far, people are not very nice to bots: we have little patience for this kind of interaction, yet we expect immediate results. There’s also something in us that gets a real kick out of trolling bots and software.

It’s difficult to design a bot that responds correctly to a demanding human who shows none of the decency usually expected in human conversation.

As Justin Lee put it:

“Machine learning is still basically a statistical process. This means it’s a reflection of the quality of the data it depends on. This is a blessing and a curse. The sensitivity of machine learning to the characteristics of its input data means that it can easily learn the wrong thing, as demonstrated in the case of Tay, Microsoft’s very own racist robot.”

“Tay” was supposed to have conversations with Twitter users and learn how to sound like a millennial. Instead, it learned how to love Hitler and hate feminism. Tay is an object lesson in how artificial intelligence can be “taught” all the wrong things.

Part 2: (Bot)ched Communication – What Do We Want From Bots?
