Artificial intelligence (AI) is both a threat and an opportunity for journalism, with more than half of those surveyed for a new report saying they had concerns about its ethical implications for their work.
While 85% of respondents had experimented with generative AI such as ChatGPT or Google Bard for tasks including writing summaries and generating headlines, 60% said they also had reservations.
The study, carried out by the London School of Economics' JournalismAI initiative, surveyed over 100 news organisations from 46 countries about their use of AI and associated technologies between April and July. "More than 60% of respondents noted their concern about the ethical implications of AI on journalistic values including accuracy, fairness and transparency and other aspects of journalism," the researchers said in a statement.
"Journalism around the world is going through another period of exciting and scary technological change," added report co-author and project director Charlie Beckett.
He said the study showed that the new generative AI tools were both a "potential threat to the integrity of information and the news media" but also an "incredible opportunity to make journalism more efficient, effective and trustworthy".
Journalists recognised the time-saving benefits of AI for tasks such as interview transcription.
But they also noted the need for AI-generated content to be checked by a human "to mitigate potential harms like bias and inaccuracy", the authors said.
Challenges surrounding AI integration were "more pronounced for newsrooms in the global south", they added.
"AI technologies developed have been predominantly available in English, but not in many Asian languages."