The idea was to permit Tay to “learn” about the nuances of human conversation by monitoring and interacting with real people online. Unfortunately, it didn’t take long for Tay to figure out that Twitter is a towering garbage-fire of awfulness, which resulted in the Twitter bot claiming that “Hitler did nothing wrong,” using a wide range of colorful expletives, and encouraging casual drug use. While some of Tay’s tweets were “original,” in that Tay composed them itself, many were actually the result of the bot’s “repeat back to me” function, meaning users could literally make the poor bot say whatever disgusting remarks they wanted. 
This chatbot aims to make medical diagnoses faster, easier, and more transparent for both patients and physicians – think of it like an intelligent version of WebMD that you can talk to. MedWhat is powered by a sophisticated machine learning system that offers increasingly accurate responses to user questions based on behaviors that it “learns” by interacting with human beings.
Reports of political interference in recent elections, including the 2016 US election and the 2017 UK general election,[3] have drawn attention to how prevalent botting has become, and to the ethical gap between a bot's design and its designer's intent. According to Emilio Ferrara, a computer scientist at the University of Southern California writing in Communications of the ACM,[4] the lack of resources for fact-checking and information verification allows these bots to push large volumes of false reports and claims onto social media platforms. On Twitter, most such bots are programmed with search filters that target keywords and phrases aligned with (or opposed to) a political agenda and then retweet matching posts. Because the bots are programmed to spread unverified information across the platform,[5] they pose a real challenge for programmers working in a hostile political climate. The bot's behavior is implemented as simple conditional functions, and an application programming interface (API) exposed by the social media site is used to execute the tasks. Ferrara calls it the "Bot Effect" when socialization between bots and human users creates vulnerabilities, such as the leaking of personal information and polarizing influence, that fall outside the ethics of the bot's code. In his study, Guillory Kramer observes the behavior of emotionally volatile users and the impact bots have on them, altering their perception of reality.
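The keyword-filter-and-retweet pattern described above can be sketched in a few lines. This is an illustrative sketch only: `SocialClient`, its `search_recent` and `retweet` methods, and the keyword list are hypothetical stand-ins, not a real platform API.

```python
# Hypothetical sketch of a keyword-filtering retweet bot.
# SocialClient and its methods are assumptions for illustration,
# not a real social media API.

KEYWORDS = ["election", "#vote"]  # phrases the bot targets


def matches_agenda(text, keywords):
    """Return True if the post text contains any targeted keyword or phrase."""
    lowered = text.lower()
    return any(k.lower() in lowered for k in keywords)


def run_once(client, keywords=KEYWORDS):
    """One pass: search recent posts and retweet every one that matches the filter."""
    for post in client.search_recent(keywords):
        if matches_agenda(post.text, keywords):
            client.retweet(post.id)
```

The point of the sketch is how little logic is involved: the "agenda" is just a substring match, which is why such bots can amplify unverified claims at scale with no verification step anywhere in the loop.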
Increase sales, reduce costs, and automate support: that's what Chatfuel claims to offer. Qualify your leads and engage with prospects on a 24/7 basis. Automate sales, or connect warm leads to a sales representative in a live chat. With Chatfuel, you can share content with your followers and subscribers while they interact with your brand over Facebook Messenger. The tool takes about 7 minutes to produce a fully functional bot.
Marketer’s Take: The bot was surprisingly effective, yet it fell short several times, as when queries like “Show me Blue Jeans” returned the canned response, “Sorry, I didn't find any products for this criteria” – and I know they sell blue jeans. Still, it was one of the best eCommerce bots I’ve seen on the platform thus far, and marketers should study it.
They have an intuitive visual interface for those without a coding background, but developers will like the editable front-end and customization options. While you can build a bot for free, a lot of the more complex (and interesting) tools are only available with Chatfuel Pro accounts. Either way, it might be helpful to know that Chatfuel integrates with Hootsuite Inbox using the Facebook Messenger handover protocol.