“We believe that you don’t need to know how to program to build a bot, that’s what inspired us at Chatfuel a year ago when we started bot builder. We noticed bots becoming hyper-local, i.e. a bot for a soccer team to keep in touch with fans or a small art community bot. Bots are efficient and when you let anyone create them easily magic happens.” — Dmitrii Dumik, Founder of Chatfuel
Build a bot directly inside one of the top messaging apps. By building a bot in Telegram, you can run it within the application itself. The company has open-sourced its chatbot code, making it easy for third parties to integrate and create bots of their own. The Telegram Bot API can send customized notifications, news, reminders, or alerts, and you can integrate it with other popular apps such as YouTube and GitHub for a unique customer experience.
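To make that concrete, here is a minimal sketch of pushing a notification through the Telegram Bot API's sendMessage method. The token and chat ID are placeholders (you would get a real token from @BotFather and a chat ID from an incoming update), and using the `requests` library is just one way to make the HTTP call.

```python
# Minimal sketch: sending a notification via the Telegram Bot API's sendMessage method.
# BOT_TOKEN and CHAT_ID are hypothetical placeholder values, not real credentials.
import requests

BOT_TOKEN = "123456:ABC-YourBotTokenHere"  # issued by @BotFather (placeholder)
CHAT_ID = 987654321                        # taken from an incoming update (placeholder)

def send_notification(text: str) -> None:
    """Push a text message to a chat using the Bot API's sendMessage method."""
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    response = requests.post(url, data={"chat_id": CHAT_ID, "text": text})
    response.raise_for_status()  # surface HTTP errors instead of failing silently

if __name__ == "__main__":
    send_notification("Reminder: your weekly report is ready.")
```

The same pattern works for news, reminders, or alerts; only the message text (and perhaps a scheduler around the call) changes.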
Marketer’s Take: The bot was surprisingly effective yet fell short several times, when queries like “Show me Blue Jeans” drew the canned response, “Sorry, I didn't find any products for this criteria” — even though I know they sell blue jeans. Still, the bot is one of the best eCommerce bots I’ve seen on the platform so far, and marketers should study it.
It didn’t take long, however, for Turing’s headaches to begin. The BabyQ bot drew the ire of Chinese officials by speaking ill of the Communist Party. In the exchange seen in the screenshot above, one user commented, “Long Live the Communist Party!” In response, BabyQ asked the user, “Do you think that such a corrupt and incompetent political regime can live forever?”
The idea was to permit Tay to “learn” about the nuances of human conversation by monitoring and interacting with real people online. Unfortunately, it didn’t take long for Tay to figure out that Twitter is a towering garbage-fire of awfulness, which resulted in the Twitter bot claiming that “Hitler did nothing wrong,” using a wide range of colorful expletives, and encouraging casual drug use. While some of Tay’s tweets were “original,” in that Tay composed them itself, many were actually the result of the bot’s “repeat back to me” function, meaning users could literally make the poor bot say whatever disgusting remarks they wanted. 
The chatbot uses keywords that users type in the chat line and guesses what they may be looking for. For example, if you own a restaurant that has vegan options on the menu, you might program the word “vegan” into the bot. Then when users type in that word, the return message will include vegan options from the menu or point out the menu section that features these dishes.
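A minimal sketch of that keyword matching is below; the keyword table, menu items, and fallback reply are invented for illustration, not taken from any particular bot platform.

```python
# Minimal sketch of keyword-based reply matching for a restaurant bot.
# The keywords and responses here are illustrative assumptions.
KEYWORD_RESPONSES = {
    "vegan": "Our vegan options: lentil curry, grilled veggie wrap, quinoa bowl.",
    "hours": "We're open 11am-10pm, seven days a week.",
    "gluten": "Gluten-free dishes are marked with (GF) on the menu.",
}

DEFAULT_REPLY = "Sorry, I didn't catch that. Try asking about the menu or our hours."

def reply_to(message: str) -> str:
    """Return the first canned response whose keyword appears in the user's message."""
    text = message.lower()
    for keyword, response in KEYWORD_RESPONSES.items():
        if keyword in text:
            return response
    return DEFAULT_REPLY

print(reply_to("Do you have any vegan dishes?"))
# -> "Our vegan options: lentil curry, grilled veggie wrap, quinoa bowl."
```

Real bot builders hide this lookup behind a visual interface, but the underlying idea is the same: map the words customers actually type to the answers you already have.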
Marketer’s Take: This is a good demonstration of how you can add a gaming dimension to your bots. If you’re a marketer who likes to tell stories, you can design a choose-your-own-adventure bot that educates and sells to prospective customers following along. There are many twists and turns that can be built into a bot like this, and creative marketers will readily take advantage.
^ "From Russia With Love" (PDF). Retrieved 2007-12-09. Psychologist and Scientific American: Mind contributing editor Robert Epstein reports how he was initially fooled by a chatterbot posing as an attractive girl in a personal ad he answered on a dating website. In the ad, the girl portrayed herself as being in Southern California and then soon revealed, in poor English, that she was actually in Russia. He became suspicious after a couple of months of email exchanges, sent her an email test of gibberish, and she still replied in general terms. The dating website is not named. Scientific American: Mind, October–November 2007, page 16–17, "From Russia With Love: How I got fooled (and somewhat humiliated) by a computer". Also available online.
Despite almost half a century of effort, most chatbots are still easily unmasked, but over the coming decades they will get smarter, and eventually we may recognize human beings by the silly answers they give compared with the much smarter chatbots. All of this will accelerate once a single chatbot becomes smarter than a single human being. Chatbots will then be able to learn from each other instead of from humans, their knowledge will explode, and they will design even better learning mechanisms. In the long run, we will learn language from chatbots rather than the other way around.
Yes. Messenger bots are approved by Facebook before being made available inside the Messenger app, so you can rest assured they aren’t trying to steal your identity (or anything else). What’s more, a bot is tied to a Facebook Page and a Facebook app, which makes it all the more inconvenient to use for fraudulent activity. That said, don’t exchange private or personal information with a bot. Finally, because Messenger doesn’t support credit cards and purchasing just yet, anything you buy will likely be completed in a browser, with the bot aiding you only as far as placing the order. If you want, you can always block a bot.