Online chatbots save time and effort by automating customer support. Gartner forecasts that by 2020, over 85% of customer interactions will be handled without a human. However, the opportunities provided by chatbot systems go far beyond responding to customers’ inquiries. They are also used for other business tasks, such as collecting information about users, helping to organize meetings, and reducing overhead costs. It is no wonder that the size of the chatbot market is growing exponentially.
MedWhat, for example, aims to make medical diagnoses faster, easier, and more transparent for both patients and physicians – think of it as an intelligent version of WebMD that you can talk to. It is powered by a sophisticated machine learning system that offers increasingly accurate responses to user questions based on behaviors it “learns” by interacting with human beings.

Reports of political interference in recent elections, including the 2016 US election and the 2017 UK general election,[3] have drawn attention to how prevalent botting has become and to the ethical tension between a bot’s design and the bot’s designer. According to Emilio Ferrara, a computer scientist at the University of Southern California writing in Communications of the ACM,[4] the lack of resources available for fact-checking and information verification results in large volumes of false reports and claims being spread by these bots on social media platforms. In the case of Twitter, most of these bots are programmed with search filter capabilities that target keywords and phrases favoring or opposing political agendas, and then retweet them. Because these bots are programmed to spread unverified information throughout the social media platform,[5] they pose a challenge for programmers in the wake of a hostile political climate. Binary functions are assigned to the programs, and an application programming interface embedded in the social media website executes the assigned tasks. Ferrara describes the “Bot Effect” as the point at which socialization between bots and human users creates a vulnerability to the leaking of personal information and to polarizing influences outside the ethics of the bot’s code. In his study, Guillory Kramer observes the behavior of emotionally volatile users and the impact bots have on them, altering their perception of reality.
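To make the keyword-filter-and-retweet pattern described above concrete, the following is a minimal sketch, not an implementation from the source. It assumes the older Tweepy 3.x interface (later versions renamed api.search to api.search_tweets and TweepError to TweepyError), and the credentials and search phrase are placeholder values chosen purely for illustration.

```python
import tweepy

# Placeholder credentials for a hypothetical Twitter developer app.
CONSUMER_KEY = "..."
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_SECRET = "..."

# Authenticate against the platform's API, the "embedded API" step in the passage above.
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True)

# "Search filter" step: find recent tweets matching a target phrase,
# then amplify them by retweeting. The query string is an arbitrary example.
for status in tweepy.Cursor(api.search, q="#example_agenda", lang="en").items(20):
    try:
        api.retweet(status.id)
    except tweepy.TweepError:
        # Skip tweets that were already retweeted or are protected.
        continue
```

The sketch only illustrates the mechanism Ferrara describes: a fixed filter plus automated amplification, with no fact-checking or verification anywhere in the loop.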


A rapidly growing, benign form of internet bot is the chatbot. Since 2016, when Facebook Messenger allowed developers to place chatbots on its platform, their use on that forum alone has grown exponentially: 30,000 bots were created for Messenger in the first six months, rising to 100,000 by September 2017.[8] Avi Ben Ezra, CTO of SnatchBot, told Forbes that evidence from the use of their chatbot building platform pointed to a near-future saving of millions of hours of human labour as 'live chat' on websites was replaced with bots.[9]
The issue is only going to become more relevant. Facebook has made a big push with chatbots in its Messenger chat app. The company wants the app's 1.2 billion users to rely on it for everything from food delivery to shopping. Facebook also wants it to be a customer service utopia, in which people text with bots instead of calling companies on the phone.
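To show how such a Messenger bot typically works under the hood, here is a minimal sketch: Facebook delivers each incoming message to a webhook the developer hosts, and the bot replies through the Send API. The sketch uses Flask and the requests library; the verify token, page access token, and Graph API version shown are placeholder assumptions, and a real customer service bot would add error handling and language understanding on top of this skeleton.

```python
import requests
from flask import Flask, request

app = Flask(__name__)

# Placeholder values from the Facebook app / page setup (hypothetical).
VERIFY_TOKEN = "my-verify-token"
PAGE_ACCESS_TOKEN = "page-access-token"
SEND_API = "https://graph.facebook.com/v2.6/me/messages"  # API version is an assumption

@app.route("/webhook", methods=["GET"])
def verify():
    # Facebook verifies the webhook once with a GET request and a challenge value.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge"), 200
    return "Verification token mismatch", 403

@app.route("/webhook", methods=["POST"])
def handle_messages():
    # Incoming messages arrive as JSON events; send a canned reply for each text message.
    payload = request.get_json()
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            if "message" in event and "text" in event["message"]:
                reply(event["sender"]["id"],
                      "Thanks for your message! A support bot will follow up shortly.")
    return "ok", 200

def reply(recipient_id, text):
    # Call the Messenger Send API to deliver the bot's response to the user.
    requests.post(
        SEND_API,
        params={"access_token": PAGE_ACCESS_TOKEN},
        json={"recipient": {"id": recipient_id}, "message": {"text": text}},
    )

if __name__ == "__main__":
    app.run(port=5000)
```

Because the bot is just a web service tied to a Facebook page and app, the platform can review it before release and users can block it at any time, which is the basis of the safety claims discussed below.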
Several studies by analytics firms such as Juniper and Gartner[36] report significant reductions in the cost of customer service, amounting to billions of dollars in savings over the next ten years. Gartner predicts that by 2020, chatbots will be integrated into at least 85% of all client-facing customer service applications. Juniper's study projects an impressive $8 billion in annual savings by 2022 due to the use of chatbots.
Yes. Messenger bots are approved by Facebook before being made available inside the Messenger app, so you can rest assured that they aren't trying to steal your identity (or anything else). What's more, a bot is tied to a Facebook page and a Facebook app, making it all the more difficult to use for fraudulent activity. That said, don't exchange private and/or personal information with a bot. Finally, because Messenger doesn't support credit cards and purchasing just yet, anything you buy will likely be completed via a browser, with the bot aiding you only as far as placing an order. If you want, you can always block a bot.