Source: Becker, K. B., Simon, F. M., & Crum, C. (2023). Policies in parallel? A comparative study of journalistic AI policies in 52 global news organisations.

In just over a year, newsrooms worldwide have dramatically shifted how they use artificial intelligence (AI). Initially, few news organizations had clear guidelines or policies on using AI tools in their journalism, but that quickly changed. Big-name global players such as USA Today, The Atlantic, National Public Radio, the Canadian Broadcasting Corporation, and the Financial Times have all established formal AI policy documents for their operations.

The rise of AI in journalism got a significant boost with OpenAI’s release of ChatGPT in November 2022. The chatbot could generate a wide range of written content, from news stories and code snippets to essays and even jokes. OpenAI, co-founded in 2015 with backing from Elon Musk and Sam Altman, went on to receive substantial investment, especially from Microsoft, underlining AI’s growing significance in newsrooms. The chatbot has become crucial in shaping how news evolves, marking a big step forward in journalism’s embrace of new technology. Consider how much easier and more efficient it has become to fact-check claims or design interactive stories that engage audiences at different levels. Many have argued that, despite considerable criticism, this development is a game changer for journalistic practice.

A recent study dives into the AI policies of 52 global news organizations. Lead author Kim Björn Becker, a lecturer at Trier University in Germany and a writer for Frankfurter Allgemeine Zeitung, highlights the study’s uniqueness: its focus on how newsrooms handle AI’s evolving capabilities. The findings revealed intriguing insights. Commercial news organizations had more intricate AI rules than publicly funded ones, and they placed greater emphasis on protecting information sources, possibly because legal risks could affect their business models. The findings also underscored the importance of journalistic values in AI policies. Christopher Crum, a co-author and doctoral candidate at Oxford University, stressed the policies’ aim: protecting journalism’s credibility, audience trust, and core values amid rampant misinformation.

Key findings included:

  • Over 71% of documents outlined journalistic values like objectivity and ethics.
  • About 70% of AI guidelines targeted editorial staff, with 69% mentioning AI pitfalls like misinformation.
  • Around 54% cautioned against exposing sources when using AI tools, warning that confidential information could be revealed.
  • Only 8% detailed how AI policies would be enforced, revealing a lack of accountability in most documents.

These findings provide a fascinating look into how newsrooms handle AI’s integration, balancing innovation with journalistic integrity. Yet a crucial chapter remains largely untold: the role of AI in financially strained news environments. While the spotlight shines on how established media juggernauts handle AI integration, the story of community media, declining student outlets, and news deserts is often overlooked.

The implications for underprivileged sectors are stark: a dearth of resources and technical expertise. Studies underscore technological disparities that disproportionately impact older or financially constrained demographics, which are prevalent in these news deserts. Yet tailored AI tools offer a measure of hope, with the potential to engage these communities and streamline content creation. Partnerships, educational initiatives, and policy advocacy could bridge these gaps by granting access to AI tools and training. Equipping these sectors with AI capabilities points toward a more inclusive media environment, one that serves diverse community information needs and paves the way for a more equitable and comprehensive journalistic landscape.
