Taylor Swift Once Threatened to Sue Microsoft Over Its 'Tay' Chatbot, Brad Smith Reveals


Inspired by the success of its Chinese AI-based social chatbot ‘Xiaoice,’ Microsoft launched an American counterpart named ‘Tay,’ which quickly ran into a controversy with singer Taylor Swift.

Revealing the controversy for the first time, Microsoft President Brad Smith, in his new book titled “Tools and Weapons: The Promise and the Peril of the Digital Age,” said that it all went wrong with Tay, including its name.

“The world’s different tastes in technology were revealed when we brought ‘Xiaoice’ to the US in the spring of 2016. We launched her to the US market with the new name ‘Tay’. The new name turned out to be just the start of our problems with the American debut of Xiaoice,” Smith wrote.

While on vacation, Smith received an email that began: “We represent Taylor Swift, on whose behalf this is directed to you.”

The Beverly Hills lawyer representing Taylor went on to state that “the name ‘Tay’, as I’m sure you must know, is closely associated with our client.”

The lawyer argued that the use of the name Tay “created a false and misleading association between the popular singer and our chatbot, and that it violated federal and state laws.”

Smith said that Microsoft's lawyers took a different view, but “we had not sought to pick a fight with or even offend Taylor Swift,” as the company was grappling with larger issues around Tay.

The AI chatbot Tay ran into trouble when Twitter users began bombarding the “innocent” bot with racist and offensive comments, which it learned to repeat. Launched as an experiment to engage people through “casual and playful conversation”, Tay was soon taken off Twitter.

“In little more than a day, we had to withdraw Tay from the market, providing a lesson not just about cross-cultural norms, but also about the need for stronger AI safeguards,” said Smith.

Microsoft even apologised in a blog post for Tay’s “unintended offensive and hurtful tweets.”