The LSE Journalism AI global survey: what we’ve learned so far

Over the last few weeks, newsrooms from all over the world have been completing our Journalism AI survey. Contributions came in from Europe, South and North America, Africa, and Asia. We're immensely grateful to every respondent. When we launched Journalism AI earlier this year, we had only a vague picture of how artificial intelligence technologies were being applied to journalism. That is why we started this ambitious investigation, with the support of the Google News Initiative.

Paris #JAI workshop

Now, thanks to the invaluable expertise and insights that news organisations are sharing with us through meetings, one-on-one in-depth interviews, and answers to the survey, we can start painting a more detailed picture of what AI actually means for journalism — and what it is likely to mean in the immediate future.

Providing that first look is the aim of this post. Informed by a careful review of more than 30 completed questionnaires, each comprising 28 questions ranging from the technical (which AI technologies have you adopted?) to the ethical (are you aware of AI biases, and how do you avoid them?), it offers an early glimpse of the AI landscape in journalism. A much more sophisticated and thorough analysis of all the input gathered from our respondents will follow in the full report this autumn.

At the moment, answers to the survey are still coming in, and they might alter both the frame and the scenery of this initial picture. We will review them just as carefully, and do our best not to let these early assessments bias our future evaluations.

And yet, these important caveats aside, something can be said.

Most respondents lament the (notorious) vagueness surrounding the term "AI" and feel the need to translate it into more operational terms. So far, "machine learning" and "natural language processing" are the two most common substitutes. These are the technologies underpinning most journalistic applications of artificial intelligence.


Respondents mostly agree that no one wants AI to fully replace humans in the newsroom, and no one is actively working towards that end. AI serves other, less sinister goals:

- Improving efficiency and productivity in the newsroom;
- Delegating routine tasks and tedious work to machines, freeing up time that journalists can dedicate to creative work ("to save time or brain power"), leading to "more job satisfaction";
- Better understanding and serving our communities, "to grow the loyalty and engagement of our audiences".

As one respondent put it, “This is the next generation of data journalism”.

Is everything working? It's probably too soon to tell. "Most applications are still in nascent form and their success or failure is still uncertain", says one of our participants. "We're on a learning curve — and we don't see setbacks as failures", argues another. Still, most respondents report positive results across a wide range of applications: ad targeting, propensity models for subscriptions, comment moderation, and even facial recognition technologies applied to public figures in public-interest reporting.

Practical Hurdles

Other applications face practical hurdles. Automated transcription, and language-based applications more generally, still suffer from a significant "language divide", according to some of our respondents. Non-English-speaking journalists note that algorithms are much better trained in English than in even widely spoken languages such as Spanish or French, with the risk of "somehow creating a two-speed world in AI: the English speaking world benefits from the state of the art performance, while the rest, which could be argued need it the most, have significantly worse trained tools."

A large portion of our questionnaire concerns the organisational changes that AI is generating in journalism. Some newsrooms are trying to manage them through dedicated “AI strategies”, either overarching or project-specific. But most are still in the process of defining one, and still rely on the complex — and at times confusing — interplay of different parts of the organisation: marketing, product, data science, innovation, research and development, and of course editorial. Responsibilities are often not clearly defined either.

As a consequence, culture clashes may arise. "Letting go of pre-digital assumptions and processes is a big challenge for us in applying AI tools", argues one respondent. This challenge is also met very differently by innovation-driven and risk-averse organisations, with the latter in full FOMO (fear of missing out) mode while the former further entrench their competitive advantage in the meantime.

Charlie Beckett introducing Journalism AI at the WCSJ in Lausanne

Many newsrooms have ambivalent attitudes towards AI: "On the one hand we face skepticism, but at the same time also a sense of desperation from editorial, as they realise the urgency of innovation to sustain the business"; "People are always interested in how technology can make things better but are also very nervous about it", write our respondents. Some journalists literally "hate" AI, as one puts it, while others are defiant: "Technology itself does not define the roles and workflow. Rather, our mission and goals do".

We are impressed by the level of sophistication shown in many answers to tough, broad questions about the deeper meaning of journalism and the shape of the news industry in the era of artificial intelligence.

What we take away from it is the awareness that something fundamental is changing, because of the unprecedented possibilities offered by AI-powered technologies. It is something so big that it requires new skills and training; it means learning how to get data scientists and investigative journalists on the same page, speaking the same language; and it may even mean rethinking the whole notion of "journalism", what its "core business" has become, and ultimately what ethics and transparency — a crucial notion to each and every respondent — mean when machines augment human journalists.

As one respondent put it, AI simply does not work without 'us' — and likely won't for quite some time: "The key thing to remember when working with AI is that it doesn't just work on its own. You can't just set up an AI solution and leave it to fix whatever your problem is. AI works at its best when it is a partnership between technology and human".

We hope that our survey, and the community we're building around Journalism AI, will contribute to this effort towards mutual understanding.

This article was written by Fabio Chiusi, Research Assistant for Journalism AI, and Professor Charlie Beckett (@CharlieBeckett), who is leading the project.

Journalism AI is a collaboration between Polis — the journalism think-tank at the London School of Economics and Political Science — and the Google News Initiative. The complete results of the global survey will be presented in a public report this autumn. You can follow all project updates on this blog and on Twitter via the hashtag #JournalismAI.

For more information about Journalism AI you can contact Mattia Peretti at:

