BBC blocks OpenAI from scraping data, but is open to AI being used in news
IT House reported on October 7 that the BBC, the UK's largest news organization, has released its principles for evaluating the use of generative artificial intelligence (Gen AI), including research and development in news, archives, and "personalized experiences".
BBC Director of Nations Rhodri Talfan Davies said in a blog post that the BBC believes the technology offers opportunities to deliver more value to its audiences and to society.
He set out three guiding principles: first, the BBC will always act in the best interests of the public; second, it will prioritize talent and creativity and respect the rights of artists; and third, it will be open and transparent about content generated with artificial intelligence.
The BBC said it will work with technology companies, other media organizations and regulators to safely develop generative artificial intelligence, with a focus on maintaining trust in the news industry.
But even as the BBC works out how best to use generative AI, it has blocked web crawlers from OpenAI and Common Crawl from accessing its website. The broadcaster joins CNN, The New York Times, Reuters, and other news organizations in barring such crawlers from their copyrighted content. Davies said the move was to "safeguard the interests of license fee payers" and that training AI models on BBC data without the BBC's permission was not in the public interest.
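Publishers typically implement this kind of blocking through directives in their site's robots.txt file, addressed to the publicly documented user-agent names GPTBot (OpenAI's crawler) and CCBot (Common Crawl's crawler). The sketch below illustrates the mechanism with Python's standard-library robots.txt parser; the rules shown are an example of this approach, not the BBC's actual robots.txt file.

```python
# Illustrative sketch of robots.txt-based crawler blocking.
# GPTBot and CCBot are the documented user agents of OpenAI's and
# Common Crawl's crawlers; the rules and URL here are examples only.
from urllib.robotparser import RobotFileParser

EXAMPLE_ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(EXAMPLE_ROBOTS_TXT.splitlines())

# The AI crawlers are disallowed site-wide; a regular browser
# user agent, with no matching rule, remains allowed.
print(parser.can_fetch("GPTBot", "https://www.bbc.co.uk/news"))       # False
print(parser.can_fetch("CCBot", "https://www.bbc.co.uk/news"))        # False
print(parser.can_fetch("Mozilla/5.0", "https://www.bbc.co.uk/news"))  # True
```

Note that robots.txt is a voluntary convention: it signals which crawlers are unwelcome, and compliant bots such as GPTBot and CCBot honor it, but it does not technically prevent access on its own.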
IT House notes that other news organizations have also taken positions on the technology. The Associated Press, which published its own generative AI guidelines earlier this year, has partnered with OpenAI to license its content for training GPT models.