AI and journalism: Can we work together?

Turn on the news today and it’s difficult to avoid the topic of artificial intelligence (AI). From AI chatbots like Eliza helping people with their mental health to customer support bots making up entire refund policies, it’s obvious that, like any tool, AI has the potential to genuinely improve our lives or make them much more difficult.

In the world of journalism, there has been a lot of talk about what AI could do for the news. It’s been heralded as the first big innovation in years, letting journalists build more efficient workflows by automating the repetitive tasks inherent to the news cycle. It has also been cast as a threat to the humans in the industry, with veterans like BuzzFeed phasing out human-written articles in favor of the machine.

Before we can dissect what AI might mean for your favorite community news network, it’s helpful to outline how the tool might be used. 

The Basics

AI, thanks to a mixture of science fiction, tech bros, and doomsayers, has taken on an almost mythical reputation. It helps the unartistic create art, the soft-spoken speak boldly, and the amateur sound like a pro.

In reality, the AI used in journalism – generative AI – works more like a giant word bank. It is trained on a dataset, such as all public domain writing or anything a company decides to sell, and from that text it learns probabilities for the order of words. It can then produce strings of words that are statistically likely to follow one another, often forming coherent sentences.
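
To make that concrete, here’s a toy sketch of the idea: a minimal bigram model that counts which word tends to follow which in a training text, then chains likely words together. It’s a deliberately simplified stand-in for the vastly larger models behind tools like ChatGPT, not a description of how any production system is actually built.

```python
import random
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1
    return follows

def generate(follows, start_word, length=12):
    """Chain together words that are likely to follow one another."""
    word, output = start_word, [start_word]
    for _ in range(length - 1):
        options = follows.get(word)
        if not options:
            break  # the training text never showed a word after this one
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    return " ".join(output)

# Tiny made-up "dataset" -- real models train on billions of words.
corpus = "the reporter filed the story and the editor read the story again"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```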

AI isn’t going to be writing the Great American Novel anytime soon, but thanks to advances pioneered by companies like Google, it can produce things like travel guides, cover letters, and essays in a way that reads as relatively human.

The Good

Many of the uses of AI in news production are mundane and helpful, and they make the news more accessible. Tools like screen readers that can read you the latest news on your morning commute are powered by AI. The translation of an article from one language to another relies on AI. The recommendation that you read stories similar to the ones you’re already enjoying is powered by AI.

AI can be very useful in giving journalists more time to focus on the creative aspects of writing a good article. Some journalists have opted to use an AI transcription app to record interviews, meaning they have to take fewer notes. This lets them be more attentive to the interviewee and catch snippets that might otherwise have been missed while scribbling a quote into a notebook.

AI, when used with purpose, is a great tool for creating a higher-quality product. Even in community spaces like comment sections, AI can be used to filter potentially hateful comments before the rest of the community is exposed to them, as in the sketch below.
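
As a rough illustration of where such a filter sits in the publishing flow, this sketch gates each comment on a toxicity score before it goes live. The scoring function here is a crude keyword placeholder invented for this example; a real platform would call a trained moderation model instead.

```python
# Toy comment-moderation gate: hold a comment for human review if it
# scores as likely toxic, publish it otherwise. The word list and the
# threshold are made up for illustration; real systems use trained models.
TOXIC_WORDS = {"idiot", "stupid", "trash"}
REVIEW_THRESHOLD = 0.2  # assumed cutoff: fraction of flagged words

def score_toxicity(comment: str) -> float:
    """Placeholder scorer: the share of words found on the toy list."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    if not words:
        return 0.0
    return sum(w in TOXIC_WORDS for w in words) / len(words)

def moderate(comment: str) -> str:
    """Decide whether a comment goes live or waits for a human."""
    if score_toxicity(comment) >= REVIEW_THRESHOLD:
        return "held for review"
    return "published"

print(moderate("Great reporting, thanks!"))       # -> published
print(moderate("This article is stupid trash."))  # -> held for review
```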

Overall, AI tools can bring a lot of benefits to the news cycle, and they’ve already been integrated into many of the day-to-day functions of big-name news sources. Even at GreatNews.Life, our use of platforms like Facebook, YouTube, and Google puts us in direct contact with AI every day, and those sites are usually better for it.

That’s not to say that we should automate everything. Doing so can cause significant problems.

The Bad

The biggest concern many journalists have about AI is that it will eventually steal their jobs. That fear is mostly due to generative AI being hyped well beyond its capabilities.

First, generative AI is never going to create something entirely unique, because it pulls from and rearranges existing data. It’s a large amount of data, but it’s still finite. As a result, a lot of AI content starts to feel the same. When BuzzFeed was using AI to write travel guides, the common refrain was that the articles were low quality.

Another issue involves plagiarism and inaccuracy. In one widely reported case last year, lawyers cited court decisions that ChatGPT had presented as relevant to their arguments; the cases turned out to be entirely fabricated. Entire initiatives have been created to counter misinformation being spread by AI.

When asked if it was capable of spreading false information, ChatGPT, one of the most accessible generative AI tools, confirmed that it can inadvertently spread misinformation if its training data is not factual.

“My purpose is to provide helpful and truthful information to the best of my abilities,” it said. “However, it’s important to note that I can generate text based on the patterns and information in my training data, so there might be instances where I inadvertently provide inaccurate or misleading information.”

When it comes to news writing, providing incorrect information can be a death sentence for credibility. For this reason, even an article written entirely by AI needs to be researched, fact-checked, and edited, which undercuts much of the promised efficiency.

The Future

Ultimately, journalists need to find a balance between using AI and relying on AI.

AI, like any tool, is useful when it has a clearly defined purpose. Giving journalists room to focus on delivering the best content possible is always a good thing. It makes the field more accessible and allows for greater specialization.

That necessitates learning more about the tool, developing strategies for its implementation, and maintaining a human presence in its use. Churning out 30 pieces of AI-generated content a week that prove inaccurate and unreadable does nothing but destroy the credibility of news organizations and the faith our communities place in them to keep readers up to date on everything happening.

As we move into a future increasingly integrated with technology, trying to fight AI will prove to be a slow but definite loss. Instead, we should turn our attention to learning and mastering this tool.