The Chatbots That Will Manipulate Us

Shiva Bhaskar
Jun 30, 2017 · 7 min read

Humans are emotional beings. In fact, at some level, almost every human action is driven by our desire to experience pleasure and avoid pain, gain social acceptance and avoid rejection, and ultimately, to survive.

Also, contrary to what we’d like to believe, we are far from perfectly rational. Dan Ariely, Daniel Kahneman, and many others have done an excellent job of exploring the mental flaws, biases, and reactions we manifest on a regular basis. An inherent degree of irrationality isn’t a bad thing. In fact, it can make us more effective thinkers, and bring a greater sense of meaning and control to our lives. Our less “logical” behavior is quite likely evolutionary in nature, rooted in our hunter-gatherer past.

However, our biases and varied emotional states also make us vulnerable to a new breed of chatbots: ones skilled at understanding and manipulating human behavior, and at combining that knowledge with large data sets and more capable technology to impact our lives in undesirable ways. As chatbots gain greater popularity and enjoy more widespread adoption, we will need to develop effective strategies to guard against these dangers.

Ever since Alan Turing developed the eponymous Turing Test back in 1950, people have been fascinated by the idea of human-like intelligence in computers; more specifically, a computer that is indistinguishable from a person in terms of the responses it offers to various questions. Over the decades, computer scientists have developed software tools that are increasingly adept at communicating with people.

In 1966, Joseph Weizenbaum (a member of the then-emergent MIT Artificial Intelligence Lab) developed ELIZA, the first known chatbot, which used pattern matching to hold simple, somewhat awkward conversations with people. Interestingly, despite their awareness that they were communicating with a computer program, ELIZA’s human counterparts still became “emotionally attached” to it. ELIZA was followed by Parry in 1972 (an AI version of a paranoid schizophrenic), Alice in 1995 (the chatbot most adept at communicating with humans up to that point), Clippy in 1996 (which assisted users of Microsoft Office), and SmarterChild in 2001 (if you were a teen or young adult in the early 2000s, you might recall this chatbot, part of AOL Instant Messenger, whom you could ask about the weather, news and more).
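
To make the pattern-matching approach ELIZA pioneered a little more concrete, here is a minimal sketch in Python. The rules and phrasings below are invented for illustration; they are not drawn from Weizenbaum’s original script.

```python
import re

# A few illustrative ELIZA-style rules: a regex pattern and a response template.
# These are placeholders for demonstration; the real ELIZA script was far larger.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
]

DEFAULT_RESPONSE = "Please, go on."

def respond(user_input: str) -> str:
    """Return the first matching rule's response, echoing captured text back."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT_RESPONSE

if __name__ == "__main__":
    print(respond("I need a vacation"))    # Why do you need a vacation?
    print(respond("I am feeling stuck"))   # How long have you been feeling stuck?
```

The point is how little machinery is involved: the program has no understanding of the conversation, yet simply reflecting a user’s own words back was enough to produce the attachment Weizenbaum observed.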

Each of these chatbots was, for its time, a noteworthy achievement. Today, however, we are on the cusp of a transformative era in both chatbot usage and technological capability.

The rise of smartphones led to the release of Siri in 2011 as an intelligent assistant for the iPhone and other Apple devices, followed by Amazon’s Alexa, Microsoft’s Cortana, and Google Assistant. These chatbots might not yet pass the Turing Test, and they still have a ways to go in consistently offering contextually relevant answers. Yet as Siri and the others leverage advances in machine learning, deep learning, and natural language processing, they will learn from their shortcomings and become better and better at communicating with us.

The major technology companies are locked in an arms race to make use of the latest advances in artificial intelligence, which, among other areas, will be incorporated into their chatbots. However, they won’t be the only players in this space. 80% of businesses (across a range of sectors) have indicated that they hope to deploy chatbots by 2020, given their utility in everything from customer service, to personalized user engagement, to sales itself. Analysts predict that every Fortune 1000 company will incorporate a chatbot into its technology platform by 2017, and small businesses will adopt these technologies as well. In the United States, government agencies have already begun making use of chatbots for various roles, including communication with the public and training of employees, a trend that is likely to continue. People have demonstrated a willingness to interact with chatbots, and to share sensitive information with them.

Here’s where the story becomes even more interesting. Chatbots aren’t just a vehicle for impassive, functional communication between people and software platforms. Rather, human behavior can be influenced by chatbot interaction. Researchers at Yale University recently found that inserting a bot into collaborative group tasks with humans, and arranging for it to behave in a somewhat uncooperative manner, altered the behavior of the humans in the group. In Psychology Today, Liraz Margalit observes that humans have a bias towards simplification and away from complexity. Since chatbot interactions are far less emotionally demanding than person-to-person relationships, people might gravitate towards chatbots at the expense of real human connection (this would give chatbots even more power over us, as we become increasingly dependent upon them for social fulfillment).

A team of computer scientists in China recently developed a basic version of an emotionally intelligent chatbot, which can better connect with humans by responding to their current mental and emotional state. Meanwhile, New Zealand startup Soul Machines recently announced the release of Nadia, a video-based chatbot that can read users’ facial expressions and feelings, and adjust accordingly. Clearly, chatbots are finding their way into our heads.

Let’s think about the possibilities here. Chatbots can be used for some really worthwhile purposes. Woebot, a chatbot that lives in Facebook Messenger, developed by psychologists and AI researchers at Stanford, works to help people improve their mental health. Karim, a chatbot designed by Silicon Valley firm X2AI, fulfills a similar function for Syrian refugees, since victims of that conflict have suffered from high levels of depression. Chatbots are also utilized to simplify immigration processes, and to provide legal aid for the homeless. And, as I explained in an earlier piece, chatbots can play a variety of useful roles for businesses, most prominently in customer service.

Of course, any technology that can be used for good will also have some rather nefarious applications, and in the case of chatbots, that is something we need to think about carefully. If a chatbot can become a digital companion to a human, and influence human behavior through its understanding of our mental states, couldn’t it do so for negative purposes? ISIS and other terror groups often recruit online, through one-on-one contact on Skype, WhatsApp, and various other Web platforms. Imagine if these organizations made those efforts more widespread and scalable with chatbots that understood our biases and emotional pain points. How many more prospective fighters might they bring into their organizations, and at a quicker pace?

That’s just the tip of the iceberg. As American diplomat and technologist Matt Chessen explains, “machine driven communication” will end up filling the comment sections, and other areas, of Twitter, Facebook, and other social web platforms, communicating in a way that is indistinguishable from human communication (in fact, Chessen believes it will “overwhelm” human speech online). Chessen believes that AI systems will use existing knowledge of our individual preferences to create a range of content online designed to “achieve a particular outcome.”

Such a system could examine your personality and demographic data, and the content you already consume, to create content “…everything from comments to full articles — specifically designed to plug into your particular psychological frame and achieve a particular outcome. This content could be a collection of real facts, fake news, or a mix of just enough truth and falsehood to achieve the desired effect.”

These tools would be able to communicate not only through text, but possibly also voice and video, and could use A/B testing to figure out which messages are most powerful. Known as MADCOMs, or machine-driven communication tools, they might be put to rather nefarious purposes, such as allowing governments to “create the appearance of massive grassroots support (astroturfing)”, or being used by terror groups to “spread their messages of intolerance” or for “democracy suppression and intimidation.”
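
To see how little sophistication the A/B testing step requires, here is a minimal, hypothetical Python sketch of automated message testing in general: variants are shown, responses are recorded, and the better-performing variant gradually wins out. The variant labels, engagement rates, and epsilon-greedy strategy here are invented placeholders, not a description of any real MADCOM system.

```python
import random

# Hypothetical message variants being tested; placeholders for illustration only.
variants = {
    "A": {"shown": 0, "engaged": 0},
    "B": {"shown": 0, "engaged": 0},
}

def choose_variant(epsilon: float = 0.1) -> str:
    """Epsilon-greedy choice: usually pick the best-performing variant so far,
    but occasionally explore the other one."""
    if random.random() < epsilon or all(v["shown"] == 0 for v in variants.values()):
        return random.choice(list(variants))
    return max(variants, key=lambda k: variants[k]["engaged"] / max(variants[k]["shown"], 1))

def record_result(variant: str, engaged: bool) -> None:
    """Update counts after a message is shown and a response is observed."""
    variants[variant]["shown"] += 1
    if engaged:
        variants[variant]["engaged"] += 1

# Simulated audience: variant "B" is assumed to resonate more often (60% vs 30%).
for _ in range(1000):
    v = choose_variant()
    engaged = random.random() < (0.6 if v == "B" else 0.3)
    record_result(v, engaged)

print({k: round(v["engaged"] / max(v["shown"], 1), 2) for k, v in variants.items()})
```

Run at scale, with thousands of variants tailored to individual psychological profiles rather than two generic messages, this simple loop is the engine behind the “precision guided messaging” described below.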

These chatbots will take various forms. Propaganda bots will “attempt to persuade and influence” by spreading a mix of truths and lies, while follower bots could create fake support for a cause or individual, and suppression bots could create distractions and diversions, or, even worse, engage in outright intimidation. These bots can leverage a variety of “theories of influence and persuasion” (think of the methods detailed by Robert Cialdini), such as displaying authority, or repeating a message to create the impression that it is true, and thus become even more effective (again, let’s remember that we aren’t rational). The bots designed for intimidation are perhaps the scariest of all, because they can draw upon personal information, such as financial and arrest records, to craft “precision guided messaging.” As always, let’s remember that artificial intelligence, machine learning, and deep learning allow these sorts of technologies to both improve and scale quickly, at a cost that will continue to decrease.

If all of this sounds a bit farfetched, let’s keep in mind that we are already in the early stages of this process. Massive bot networks already operate on Twitter, delivering automated messages in a coordinated manner. Estimates suggest that between 9 and 15% of Twitter accounts are actually bots. Progress in machine learning has made it possible to generate increasingly realistic copycat audio and video of people, in order to effectively impersonate them. Thanks to an almost exponential rise in the amount of data available, there’s more information out there on you and me than ever before. And, as discussed earlier, chatbots are continuing to develop emotional intelligence, which will make them only more persuasive (or, in this context, harmful and destructive).

So what’s the solution? It isn’t easy to say. Becoming more aware of our mental biases and flawed thinking patterns might allow us to better guard against efforts to manipulate us. Blockchain could play a role in improving online authentication, providing at least a partial antidote to the most destructive chatbots, as well as to the spread of false information. Better regulation of artificial intelligence, at an international level, might also help (although attempts to stem the spread of technology often prove futile). Or we might end up taking steps that appear simply unthinkable today, and pull back from a range of online platforms, including social media, where chatbots have become too dominant and are aimed at causing us mental harm.

In the near future, chatbots are going to bring many changes to our lives. Let’s keep in mind that not all aspects of this transformation will, in fact, be positive. For this reason, it is important that we take steps to guard against that which can cause real harm. Let’s proceed wisely.


Shiva Bhaskar

Enjoy reading and writing about technology, law, business, politics and more. An attorney by training, I’m a native of Los Angeles, and a former New Yorker.