AI and machine learning are a big deal for journalism and news. They are possibly as important as the other developments we have seen in the last 20 years, such as online platforms, digital tools and social media. My 2008 book on how journalism was being revolutionised by technology was called SuperMedia because these technologies offered extraordinary opportunities to make journalism much more efficient and effective — but also to transform what we mean by news and how we relate to it as individuals and communities. Of course, that can be super good or super bad.
[These are my rough speaking notes for a panel discussion hosted by Demos.]
Artificial intelligence and machine learning can help the news media with its three core problems:
- The overabundance of information and sources that leaves the public confused
- The credibility of journalism in a world of disinformation and falling trust and literacy
- The business model crisis: how journalism can become more efficient and avoid duplication; be more engaged; and add value and relevance to individuals’ and communities’ need for quality, accurate information and informed, useful debate
But like any technology, they can also be used by bad actors or for bad purposes: in journalism that can mean clickbait, misinformation, propaganda and trolling.
Some caveats about using AI in journalism:
- Narratives are difficult to program. Trusted journalists are needed to understand and write meaningful stories.
- Artificial Intelligence needs human inputs. Skilled journalists are required to double check results and interpret them.
- Artificial Intelligence increases quantity, not quality. It’s still up to the editorial team and developers to decide what kind of journalism the AI will help create.
AI in its broadest sense provides all sorts of opportunities for journalism — and journalism needs all the help it can get right now — not just to boost its core value but to re-form its relationship to the public and its ability to deal with the new information ecosystem. Here are a few applications that I think are interesting:
- Curating the abundance of data — finding stories eg through Trending Topics
- Responding to instant news (breaking news)
- Monitoring — eg during terror incidents
- Producing ‘robot’ news for basic reporting (eg financial services/weather etc)
- Reducing duplication (Kaleida trending news)
- Helping with fact-checking (FullFact live AI verification)
- Verification (especially on platforms) to identify fake news and hate speech, and to counter bots — not easy, as the recent YouTube ‘crisis actors’ problem showed: it can be gamed
- Personalisation of journalism, eg the Compass News app, which gives you more specialised, diverse, serendipitous curation
- Editorial planning (Chartbeat, Ophan)
- Marketing (WSJ, FT subscription efforts)
- Data mining for investigative journalism, relevance mining eg local news
- New platform opportunities such as voice/Alexa/Google assistant — or Augmented Reality — and of course, blockchain
Of course — like any technological change there are going to be negatives as well as positives:
- Automated journalism still needs to be edited by humans
- Verification at its most important is always humanly complex
- Platforms find it difficult to use AI at scale, at speed and in detail (YouTube ‘crisis actors’), and it can be gamed
- Marketing — how do we use AI to find new people, not just to track a core readership? How do we use it to find underserved communities?
- Personalisation — how do we use AI to provide diversity, not just favourites?
- Discovery — data sets are often very bad, eg court records in the UK
- Blockchain — really interesting work being done on decentralised content creation and dissemination but can you scale it and make it useful in real time and in a news cycle?
Then there are the broader structural issues around this profound shift to a new tech paradigm:
- Does mainstream journalism have the skills and insights to make the most of the changes? Are savings ploughed back into ‘real’ human journalism?
- Trust and transparency — there are a new set of ethical dilemmas that need to be addressed — with AI how do we know who has created content and the sources? How do we hold them accountable? How do you even know it’s a machine?
- Plus the usual algorithmic biases of gender and the dangers of tech companies and developers gaining power at the expense of the journalists or the public. There’s nothing innately democratic or progressive about AI.
This article is by Charlie Beckett (@CharlieBeckett), director of Polis at the LSE.