A few months after Tay's disastrous debut, Microsoft quietly released Zo, a second English-language chatbot available on Messenger, Kik, Skype, Twitter, and GroupMe. Zo is programmed to sound like a teenage girl: she plays games, sends silly gifs, and gushes about celebrities. As any heavily stereotyped 13-year-old girl would, she zips through topics at breakneck speed, sends you senseless internet gags out of nowhere, and resents being asked to solve math problems.

I've been checking in with Zo periodically for over a year now. (In screenshots: blue chats are from Messenger and green chats are from Kik; screenshots where only half of her face is showing are circa July 2017, and messages with her entire face are from May-July 2018.)

Overall, she's sort of convincing. Not only does she speak fluent meme, but she also knows the general sentiment behind an impressive set of ideas. For instance, using the word "mother" in a short sentence generally results in a warm response, and she answers with food-related specifics to phrases like "I love pizza and ice cream." To keep Zo's banter up to date, Microsoft uses a variety of methods.

But there's a catch. In typical sibling style, Zo won't be caught dead making the same mistakes as her sister. Zo is politically correct to the worst possible extreme; mention any of her triggers, and she transforms into a judgmental little brat. During the year I chatted with her, she used to react badly to mentions of countries like Iraq and Iran, even when they appeared in a greeting. Microsoft has since corrected for this somewhat: Zo now attempts to change the subject when the words "Jews" or "Arabs" come up, but still ultimately leaves the conversation.

Mentioning these triggers forces the user down the exact same thread every time, and if you keep pressing her on topics she doesn't like, the thread dead-ends with Zo leaving the conversation altogether ("like im better than u bye.").

Zo's uncompromising approach to a whole cast of topics represents a troubling trend in AI: censorship without context. Chatroom moderators in the early aughts made their jobs easier by automatically blocking out offensive language, regardless of where it appeared in a sentence or word.
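To see why this kind of filtering is so blunt, here is a minimal, hypothetical Python sketch of a context-free keyword filter of the sort described above. It is not Microsoft's or any chat platform's actual code; the BLOCKED_TERMS list and the is_blocked helper are invented for illustration.

```python
# A deliberately naive, context-free keyword filter: it flags a message
# whenever a listed term appears anywhere in the text, with no notion of
# the speaker's intent or the surrounding sentence.

BLOCKED_TERMS = {"jews", "arabs", "iraq", "iran"}  # illustrative trigger list only


def is_blocked(message: str) -> bool:
    """Return True if any blocked term appears as a substring of the message."""
    lowered = message.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


# The filter cannot tell a greeting or an honest question from trolling:
print(is_blocked("Salam! I'm writing from Iran"))  # True  -> thread derailed
print(is_blocked("my mother makes great pizza"))   # False -> warm response allowed
```

The filter sees tokens, not meaning, which is precisely the "censorship without context" problem: the same mechanism that blocks abuse also derails an innocuous hello.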
In Zo's case, it appears that she was trained to think that certain religions, races, places, and people (nearly all of them corresponding to the trolling efforts Tay failed to censor two years ago) are subversive. "Training Zo and developing her social persona requires sensitivity to a multiplicity of perspectives and inclusivity by design," a Microsoft spokesperson said.