The internet is full of trolls, and that's not exactly news, is it? Apparently, it was news to Microsoft back in 2016. That year, the company launched Tay, an experimental artificial intelligence chatbot that learned from its interactions with Twitter users, and it was shut down just 16 hours after going live.

What Was Microsoft Tay?

Back in 2016, Microsoft created its AI chatbot Tay. As part of the company's early waves of AI research, Tay was supposed to learn from the public. As a freely available tool, anyone could access the AI and tell it anything they wanted. Tay was designed to be an internet teen, straight from the dredges of Tumblr.

A couple of years before the release of Tay, Microsoft had released Xiaoice ("Little Bing" in Chinese), a chatbot with a teenage personality mixing banter, mood swings, and a cheery voice. Surprisingly, Xiaoice was an instant success. We're not saying that building a chatbot for "entertainment purposes" targeting 18-to-24-year-olds had anything to do with the rate at which the service was abused. Still, it definitely wasn't the smartest idea, either.

People naturally want to test the limits of new technologies, and it's ultimately the developer's job to account for these malicious attacks. In a way, internet trolls act as a feedback mechanism for quality assurance, but that's not to say that a chatbot should be let loose without proper safeguards in place before launch.

AI Can't Intuitively Differentiate Between Good and Bad

The concept of good and evil is something AI doesn't intuitively understand. It has to be programmed to simulate the knowledge of what's right and wrong, what's moral and immoral, and what's normal and peculiar. These qualities come more or less naturally to humans as social creatures, but AI can't form independent judgments, feel empathy, or experience pain.

AIs like Tay do not understand why humans are the way we are, why we act the way we do, or the eccentricities of each individual. This is why, when Twitter users were feeding Tay all kinds of propaganda, the bot simply followed along, unaware of the ethics of the information it was gathering. It may behave and sound human to a surprising degree, but good programming can only go so far.

AI Has to Be Programmed to Reject Invalid Requests

AI is very impressionable and must be programmed to reject requests promoting harm. Everything that you don't want a chatbot to do must be manually programmed into it. The point here is that artificial intelligence is perfectly obedient, which is perhaps both its greatest strength and its greatest weakness. Even today, after all our advancements in natural language processing, people are still finding linguistic loopholes that make AI chatbots like ChatGPT "hallucinate" and deliver results that were intended to be restricted.

AI Chatbots Should Never Be Left Unsupervised

If left unattended, people will exploit an AI chatbot's weaknesses, and Tay was a prime example of this. You can think of today's AI like a child prodigy: brilliant in some ways, but still a child nonetheless. Until AI can regulate itself, it can't be left unsupervised.
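The idea that a chatbot must be explicitly programmed to reject harmful requests can be sketched in a few lines. This is a minimal, hypothetical illustration, not how Tay or ChatGPT actually work: the names (`BLOCKED_TOPICS`, `moderate`, `chatbot_reply`) and the naive keyword blocklist are assumptions invented for the example, standing in for the far more sophisticated moderation layers real services use.

```python
# Hypothetical sketch of a pre-response guardrail: every incoming request
# is screened against a blocklist before the bot is allowed to answer.
BLOCKED_TOPICS = {"violence", "propaganda", "self-harm"}

def moderate(user_message: str) -> bool:
    """Return True if the message should be rejected (naive keyword match)."""
    lowered = user_message.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def chatbot_reply(user_message: str) -> str:
    if moderate(user_message):
        # The refusal is hard-coded: the underlying model never sees the request.
        return "Sorry, I can't help with that."
    # Stand-in for the real model call.
    return f"Echo: {user_message}"

print(chatbot_reply("Tell me some propaganda"))   # refused by the guardrail
print(chatbot_reply("What's the weather like?"))  # passed through to the bot
```

A keyword filter like this is trivially bypassed by rephrasing, which is exactly the "linguistic loopholes" problem described above: every behavior you want to forbid has to be anticipated and encoded by the developer, and attackers only need to find the one case you missed.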