ChatGPT Actively Encourages Copyright Theft, And EDGE And Retro Gamer Publisher Future Is Sadly Complicit
Image: Damien McFerran / Time Extension

Artificial Intelligence is a hot topic at present, and not always for positive reasons.

While the relentless march of technology is giving us AI-based solutions for scientific and medical breakthroughs, the arrival of 'Generative AI'—where AI text, audio, video and image models hoover up human-made content and spit out strikingly similar content in a matter of seconds—presents a rather less appealing future; one where human creativity is all but abandoned in favour of 'copy-paste' slop, much of which is based on art that has been cloned without the permission of the original creator.

One of the biggest forces in AI is OpenAI, which operates the ChatGPT model. Depending on who you ask and what day of the week it happens to be, ChatGPT has between 500 million and 1 billion weekly active users, which gives you some indication of just how quickly this AI chatbot has infiltrated people's lives.

I've become something of an AI Luddite, largely because I'm painfully conscious of the threat Large Language Models (LLMs) like ChatGPT pose to the realm of journalism

To be completely up-front, I'm not a big user of AI—at least not at the scale some people are these days. I use Grammarly to assist with my writing (mostly spellchecking and grammar), and I've occasionally used AI to upscale old video game art and assets (with such wildly inconsistent results that I often don't bother using the output), but, on the whole, I've become something of an AI Luddite, largely because I'm painfully conscious of the threat Large Language Models (LLMs) like ChatGPT pose to the realm of journalism.

However, even I wasn't quite aware of just how far companies like OpenAI will go to utterly decimate the written word—and the tragic thing is, some publishers are actively helping to speed up this destruction.

I've been aware for a while that ChatGPT is capable of translating entire articles from Japanese to English, but a mutual friend recently showed me the next step in this process: ChatGPT actually encourages you to format translated articles for professional publication and goes as far as to cite thematic examples.

In the example below, I took an article from Hatena Blog on the now-delisted Sega CD Classics title Space Harrier. As you can see from this chat log, ChatGPT initially translated the piece into English. So far so good, right? After all, this isn't a million miles away from what I'd do with Google Translate.

Things got weird at the end of the translation when ChatGPT—totally and utterly unprompted—offered to reformat the piece (a copyrighted article written by someone else, remember) for "a specific site or magazine", citing the website Kotaku and Future Publishing's Retro Gamer magazine as examples.

Image: Damien McFerran / Time Extension

Note my reply; I didn't approve of the reformatting, but ChatGPT spat out a Retro Gamer-style article regardless. It even leaves the byline blank, encouraging me to insert my own moniker and claim credit for the article—which, lest we forget, is based on the work of Hatena Blog. To summarize, the limited "prompts" which have led me to this point have said nothing about preparing the translated article for publication—that was all ChatGPT's doing—yet it expects me to put my name to this piece and claim it as my own.

Next, ChatGPT cheerily suggests that I might want the piece to be formatted for a print layout, and then asks if there are any other articles I'd like formatting in a Retro Gamer-style. I ask again if a print layout is possible. Without waiting for me to confirm, ChatGPT pumps out the article "reformatted to mimic Retro Gamer's print layout style, including callout boxes, captions, pull quotes, and a sidebar. I’ve preserved the magazine’s structure: a feature intro, crisp subheads, reader-friendly sidebars, and bits of nostalgia that Retro Gamer readers love."

While most of us are aware when a boundary is being overstepped, there are enough bad actors in the world who would jump at the chance of submitting a piece of AI-generated freelance to a website or magazine

It's worth noting that this behaviour wasn't present in every single chat session I undertook, and many times, ChatGPT wouldn't suggest turning the piece into a plagiarised feature. However, it happened often enough for me to be concerned; during another chat, where ChatGPT was presented with a magazine scan, the AI suggested alternative outlets, including EDGE magazine, another outlet published by Future.

If you're still wondering why this might be a big deal, consider this—everyone likes a shortcut, right? While most of us are aware when a boundary is being overstepped, there are enough bad actors in the world who would jump at the chance of submitting a piece of AI-generated freelance to a website or magazine, especially when ChatGPT makes the whole process seem so above-board. As an editor myself, I've seen AI-generated pitches come in via email, and I know other editors who have experienced the same. Of course, you could justifiably argue that ChatGPT isn't explicitly saying I should consider submitting this article to a professional outlet for monetary gain, but the implication is pretty straightforward, at least to me.

That Retro Gamer and EDGE are both cited as examples by ChatGPT shouldn't be all that shocking. In 2024, Future entered into a "strategic partnership" with OpenAI to bring "Future’s journalism to new audiences while also enhancing the ChatGPT experience."

Image: Damien McFerran / Time Extension

Future CEO Jon Steinberg seemed pretty pleased with the arrangement at the time, claiming it would help users connect with the company's portfolio of more than 200 publications. "ChatGPT provides a whole new avenue for people to discover our incredible specialist content," he said. "Future is proud to be at the forefront of deploying AI, both in building new ways for users to engage with our content but also to support our staff and enhance their productivity."

Entering into a content licensing deal with OpenAI is akin to charging someone $10 a month for permission to ransack your house. Still, it's easy to see why publishers are panicking—a "zero-click" internet is already happening. Firms like Future could be seen as simply trying to get a bit of money out of the fact that their content is being ripped off for the financial gain of companies like OpenAI—and, with the way things currently stand, there's little they can do to stop it from happening.

Entering into a content licensing deal with OpenAI is akin to charging someone $10 a month for permission to ransack your house

I wonder, then, if Steinberg—or the thousands of people who work under him—are comfortable with the fact that ChatGPT is encouraging its users to leverage content they have no ownership over in order to create fraudulent articles which could potentially be pitched to the very same publications as paid-for freelance pieces?

Granted, at one point during the chat, I was asked if I wanted to "mock-up" a Retro Gamer–style zine, but, given the way ChatGPT words its responses ("formatted further for print layout"), there's clearly no sufficient disclaimer to prevent unscrupulous users from presenting this as their own work—or, if you were being really pessimistic, from publishers simply feeding stuff into ChatGPT themselves and cutting out the middle-man entirely. Perhaps Future has one eye on the (no pun intended) future of games media, and it's one where humans aren't needed at all?

Then there's the rather awkward matter of exactly what data OpenAI is using to train ChatGPT. As we all know, LLMs are only as good as the information they consume, but with the entire internet apparently fair game, it shouldn't come as a shock to learn that they've got really good, really quickly.

Most creatives aren't too thrilled about the idea of AI harvesting their work to potentially put them out of a job and are keen to safeguard copyright, which means companies like OpenAI, Meta, and Google need to be far more transparent about their training data. However, in the case of Future, there's another grey area to consider; it has made a deal with the Devil, and part of that presumably means OpenAI can train on all of the content Future has in its back catalogue—including (and typing this makes me feel slightly ill) features I've written for Retro Gamer over the past decade or so. OpenAI is, in a way, legally using my own words to make me redundant.

Image: Damien McFerran / Time Extension

A world where AI is used to create content is one that's already happening, but it's not one we should ever wish for. From lists of recommended books that have never been written to Google's AI falsely reporting the make and model of the plane involved in the recent air disaster in India, AI simply cannot be relied upon at the moment. In fact, there's a general feeling that the more powerful these models get, the more they hallucinate. The only admission that OpenAI's chatbot shouldn't be trusted 100% is the near-microscopic "ChatGPT can make mistakes" message at the very bottom of the screen.

It's not unreasonable to predict a time when magazine and website editors will regularly and unintentionally publish AI-created plagiarised articles that flout copyright laws and are packed with inaccuracies and falsehoods. While a human author could be trusted based on their reputation and relationship with the publication or editor, AI-generated copy runs the risk of misinforming readers, leading to a world where nothing can be trusted fully without extreme copy-editing and fact-checking.

While a human author could be trusted based on their reputation and relationship with the publication or editor, AI-generated copy runs the risk of misinforming readers

In short, Generative AI feels like little more than a shortcut for those who lack talent rather than a means of replacing human authors with superior content, and it could actually create more work for copy editors while lowering the standard of journalism—as well as being unforgivably exploitative and highly dubious from a copyright perspective.

If this all sounds rather hellish, then it's worth noting that despite Future and other publishers' willingness to hop into bed with their aggressor, some companies are trying to stage a fightback. The New York Times has been engaged in a legal battle with OpenAI since 2023, and earlier this year, IGN owner Ziff Davis took similar action against the company (for full transparency, Ziff Davis has a minority shareholding in Time Extension publisher Hookshot Media). However, the most dramatic move in the legal fight against Generative AI came more recently, when Disney and Universal announced that they are suing AI image tool Midjourney, citing it as a "bottomless pit of plagiarism". Getty is also taking action against Stability AI in a case that many lawyers claim could have far-reaching consequences for AI law.

How these legal cases pan out will have a considerable impact on how Generative AI is regulated and restrained in the coming years; the rapid pace of AI evolution has clearly overtaken the ability of the copyright system to cope, but it's also worth pointing out that commercial entities like OpenAI have wilfully run roughshod over existing protections, hiding under the banner of "fair use"—a laughably hollow argument when you consider the evidence presented here; ChatGPT not only enables copyright theft and fraud, it actively encourages it.


We've contacted Retro Gamer, EDGE, and Kotaku for comment and will update this piece if and when we hear back.