
Practitioner’s Corner: Olle Zachrison

Olle Zachrison is Senior News Editor, AI, and leads the acceleration of artificial intelligence across BBC News and its 5,500 journalists worldwide. He is co-founder of the network Nordic AI Journalism, which organizes the Nordic AI in Media Summit. Before joining the BBC in April 2025, Olle held various senior editorial roles at Sveriges Radio over nine years, most recently as Head of Artificial Intelligence & News Strategy. In parallel to his positions at the Swedish public broadcaster, he also served as Collaboration Lead for the development of a public service algorithm at the European Broadcasting Union (EBU). Previously, Olle worked as an editor in various Swedish newsrooms.

Could you give us an insight into your current activities at BBC News?
At BBC News, we’re structuring our AI acceleration around four strategic themes. The first is boosting productivity: we are refining tools for translation and transcription, AI-powered text assistance, and automated summaries. These free journalists to focus on higher-value reporting. The second is reformatting news content. We produce so much content in different modalities, so we’re exploring new ways to reformat it, like re-versioning video, turning live audio into text pages, and turning articles into audio or vice versa. The third is augmenting our journalism. For example, we are exploring investigative tools that can process large datasets and experimenting with external platforms for news gathering and story discovery. The fourth is innovating user experiences; this theme includes conversational news formats and synthetic voice readings of articles. The objective is both to maximize newsroom effectiveness and to deliver audience value. Underpinning all of this is a strong commitment to drive digital growth and newsroom effectiveness, while ensuring we remain trusted and relevant globally.

Are there specific challenges in developing algorithms that adhere to public service values?
Yes, and they are significant. Public service values – accuracy, impartiality, inclusivity – don’t map easily onto algorithms that are often trained on open web data steeped in bias and misinformation. One of the biggest challenges is ensuring that AI systems reflect BBC editorial standards and core principles. For example, an algorithm optimising purely for engagement could surface the most sensationalist content, which runs counter to our responsibility to inform everyone about the most important news, not just drive clicks. Transparency is another challenge: many large AI models are “black boxes”, making it difficult to explain their decisions to audiences. At the BBC, explainability matters because trust depends on accountability. We also need to ensure accessibility across languages, dialects, and formats so that audiences everywhere are served equitably. In practice, this often means investing in bespoke BBC models, careful testing, and editorial oversight at every stage – embedding our values directly into the design of these tools.

In what ways can AI and generative AI be a solution to enhance the quality of news and promote quality journalism?
AI is about enhancing human creativity and editorial processes, not replacing them. Generative AI, for example, can streamline repetitive tasks like transcription, translation, and producing initial summaries, leaving reporters more time for analysis, investigation, and storytelling. Beyond efficiency, my deep conviction is that clever use of AI can increase the quality of output. A good example is text descriptions of images, where AI often does a better job than humans at describing complex charts and diagrams. AI can also make us more consistent in style, and it can support large-scale investigative projects with tools that sift through vast datasets, uncovering patterns that would otherwise be missed. The main win, however, is perhaps allowing journalists to focus on what only humans can do: applying judgement, human touch, context, and verification. When designed responsibly, AI becomes a multiplier for trusted journalism.

What is needed for public service media to sustain their role as drivers of innovation in a digital ecosystem dominated by commercial tech platforms?
Public service media face a paradox: we must innovate quickly to stay relevant, but under greater scrutiny and responsibility than our commercial rivals. The rise of generative AI heightens the pressure, as tech giants increasingly serve news through chat assistants designed to keep users inside their ecosystems. That threatens both the reach and the revenue of all media companies. Yet amid the AI panic gripping the media industry, it is worth recalling our unique advantages. Public service media have trusted brands, ethical principles, and millions of loyal users. We hold decades of high-quality content in our archives, perfect for training models aligned with public value. We also benefit from public charters, guaranteed funding, and national infrastructure for distribution. A tech company or clever start-up can never copy this. We need to use these unique advantages better in the AI ecosystem. And of course, we have to scale AI ourselves with confidence, not fear, to strengthen journalism and serve the public.