No kidding: AI created the most unethical poll to promote a Guardian article

Lord777

Microsoft's neural network has cast doubt on the future of generative technology in journalism.

A serious conflict has broken out between The Guardian and Microsoft. At the center of the dispute is a poll generated by artificial intelligence, which was published on the Microsoft Start platform next to a Guardian report on a tragic incident. The poll invited readers to guess the cause of death of 21-year-old Lilie James, a water polo coach whose body had recently been found in the locker room of a sports complex in Sydney. According to preliminary information, the young woman was killed by her boyfriend.

The neural network, designed to create interactive content and drive audience engagement, offered three answer options: murder, accident, or suicide. Unsurprisingly, the poll outraged readers and dealt a heavy blow to the publication's reputation. It was quickly removed, but screenshots and angry comments continue to circulate online.

"This is probably the most pathetic and disgusting survey I've ever seen," writes one user.

Anna Bateson, chief executive of Guardian Media Group, sent an open letter of complaint to Microsoft president Brad Smith. In it, she argued that material created by generative AI and touching on such sensitive topics must be approved by editors. She also stressed that this kind of carelessness on the company's part can not only hurt the relatives of the deceased but also endanger the journalists who worked on the original article: commenters had named some of its authors in their posts.

Bateson also emphasized the importance of adopting strong copyright protections that allow publishers to control how their content is used on external platforms.

The Guardian has a licensing agreement with Microsoft that allows the tech giant to republish its articles on Microsoft Start, a news aggregation platform. The Guardian Media Group chief insists that any experimental AI tools should be used to promote news only with the publisher's approval.

She also called on the company to take responsibility for the controversial poll by attaching an explanatory note to the original publication on the site. Microsoft has not yet made any official comment to the press.

This isn't the first time Microsoft's AI-generated content has caused controversy. In September, an article published on the MSN platform saw the neural network describe the recently deceased basketball player Brandon Hunter as having achieved nothing by the age of 42. And in August, the AI included the Ottawa Food Bank in a list of the city's tourist attractions.

These incidents raise serious questions about the role of neural networks in journalism, and about whether generative technologies can be taught ethics and a "sense of tact" before most of the content on the internet is created automatically.
 