Microsoft is transforming news reporting by taking significant steps in the field of artificial intelligence (AI). According to the Financial Times, the tech giant is collaborating with a number of media outlets to offer journalists AI-powered tools intended to improve news coverage.
One significant partnership is with Semafor, a journalism company headed by Ben Smith, the former editor-in-chief of BuzzFeed News. Microsoft is funding a feature dubbed “Signals” that will use AI to quickly surface information from worldwide sources while highlighting breaking news and analysis on the site.
Although Semafor’s staff of journalists will write all of the stories, the AI will assist by swiftly locating relevant articles published in multiple languages around the world. It will also offer translation, making it easier for journalists to properly contextualize and incorporate foreign sources and points of view in their reporting.
Microsoft is providing Semafor with significant financial backing for Signals, though the precise sum has not been disclosed. The intention is to leverage AI to help journalists find different viewpoints on breaking news events.
The partnerships come in the wake of controversies surrounding AI content-generation tools such as ChatGPT, which have produced entire news articles and spread disinformation. Microsoft wants to show how AI can improve journalism without replacing human reporters.
The New York Times filed a copyright infringement lawsuit against Microsoft and ChatGPT maker OpenAI last year, alleging that its articles were used—without payment—to train AI models.
In addition to Semafor, Microsoft is working with journalism organizations such as the Online News Association and the Craig Newmark Graduate School of Journalism to create ethical guidelines for the use of AI in newsrooms.
Microsoft is taking the lead in developing AI applications for journalism, even as media organizations grapple with new technology like chatbots. Its latest collaborations aim to responsibly chart the use of AI in reporting, even though risks remain.