
Filed under:

Actually, It’s About Ethics In Sport Journalism

Nothing to do with Leicester City, everything to do with the day after tomorrow

Demonstration model of Babbage’s Difference Engine No 1, 19th century. Photo by SSPL/Getty Images

Howdy friends.

I’m a person. You probably are as well, although if you’re an aggregator algorithm parsing what I’ve written, you’re not. You’re not even close to human, and the distinction is an important one. We’ll come back to that shortly.

I’m writing this in response to reports that a certain prestigious sporting publication has been credibly accused of publishing content generated by “AI.” By “credibly accused,” I mean “the photo of the author is a headshot from an AI content mill, their biography is laughably generic, and the content attributed to them ranges from ‘bland and obvious’ to ‘comically inept and clearly not written by a human.’”

I’m going to use up my quota for quotation marks pretty quickly tonight, huh?

So, is publishing AI-generated content a big deal? I contend that it is, especially when the origin of the content isn’t disclosed. Trying to pass off a non-existent author with a fake bio as a real person is lying to your readers. If you’re a journalist, your credibility is everything. If you’re a publisher, the credibility of your journalists is everything. Why would you ever believe anything published by an organization that lies to you?

This is a true story. Even if Faulkner isn’t your cuppa, at least it was really Faulkner.

...actually, you probably don’t want to answer that.

“Aha!” says the strawman I’m using to make my point while giving the impression that I’m thoughtfully considering other points of view, “What about the ol’ nom de plume? Writers have written and even reported under fictional names since the dawn of doxing! Isn’t that lying?”

It is, but it isn’t the same thing. If you want to accuse me of special pleading here, by all means, do so, but at least hear me out. Writing under an assumed name is an attempt to separate the author’s identity from the work: perhaps to protect the author, perhaps to let them work in a genre other than the one where they already have a brand to protect, or for a myriad of other reasons.

When publishing AI content without disclosure, the publisher is not just obscuring the identity of the author; they are hiding the fact that there is no author. I think it is generally understood that the “intelligence” part of “AI” is grossly overestimated. It uses algorithmic processes that sort of, kind of, emulate the way we think human brains work, but it remains closer to a glorified Akinator than true intelligence.

The Forbin Project
We’re not here yet, but...
Photo by LMPC via Getty Images

I’d also like you to consider the possibility that the “artificial” side is somewhat overstated as well. AI has to train on untold examples of what it is trying to emulate. It’s drawing on the works of tens of thousands of writers, artists, and musicians and creating a mashup of their work. Many of those creators have no idea that their work is being used to feed the machines. Most sites these days require users to opt out of having their work used this way, rather than asking them to opt in.

Is it oversimplifying to say that the way a modern AI generates content is simply remixing existing work into something that fits the parameters of the request? Sure. But, and I cannot emphasize this enough, this is closer to the truth than the idea that science has created a true artificial intelligence and deployed it to create lousy ad copy.
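To make that “remixing” intuition concrete, here’s a deliberately crude sketch: a bigram model that generates text by sampling, word by word, from pairs of words it has seen in its training data. This is a toy I’ve made up for illustration, not how any real publication’s tooling works, and modern large language models are vastly more sophisticated, but the core loop is the same shape: predict the next token from patterns found in existing work.

```python
# Toy "remix" text generator: learns word pairs from a tiny corpus,
# then strings new text together by sampling what tended to follow.
import random
from collections import defaultdict

corpus = (
    "the striker scored a late goal and the fans went home happy "
    "the keeper saved a late penalty and the fans went wild"
).split()

# Build a table: word -> list of words that followed it in the corpus.
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def generate(start, length, seed=0):
    """Produce up to `length` words, each sampled from the pairs
    observed in the training text."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        choices = following.get(word)
        if not choices:  # dead end: no observed successor
            break
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

Every word the generator emits comes straight out of its training text; it can recombine, but it cannot originate. That, in miniature, is the point about whose work is really being published.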

Now, what if AI were to achieve the mythical status of “good” and create decent content? Would that still be a problem? If it were passed off as human-created, then yes, because there’s still the deception involved. But...what if the publication made it clear that what you were reading came from an AI?

And if the AI was giving serious UwU vibes?
Photo by RICHARD A. BROOKS/AFP via Getty Images

That’s a more complicated case, and I think it’s instructive to understand why publishers are using AI content to fill their pages. There’s only one reason, and it’s not one I find compelling: AI is cheaper than people. You can only get so many fans to generate free content just because they love the subject, love writing, and hope that the experience might potentially lead to something bigger. Most of the time, people require substantial compensation to stick around.

Here’s the thing: Your company might say that people are its most special-est asset, and it might genuinely believe it. In this day and age, though, most companies of a certain size are owned by investment funds. Investors do not regard people as a special asset. They are a cost. They are an impediment to the holy dictum “line must go up!” People are a cost to be minimized or, ideally, eliminated.

That’s a long way of saying that the reason publishers would love to use as much AI content as possible is the same reason you’re probably bagging your own groceries at the supermarket and pumping your own gasoline. If you want to go down the rabbit hole, you’ll probably intuit that the savings go beyond the individual salaries eliminated. Every job that disappears means more people competing for fewer jobs, which drives down wages for everyone. This path inevitably leads to Trier, Germany’s favourite son, and that’s not really today’s subject.

No, not this guy. He’s from Lundtofte, Denmark

So, let’s get back on track here. Publishing AI content without disclosure is dishonest. The quality of the work is generally poor. The work is generated by taking the work of actual people without their receiving compensation or providing consent. And, if you care about such things, it puts people out of work.

Speaking only for myself, I can generate poor work on my own just fine. If you’re here, you’ve probably read some of what I’ve written. Some of it is pretty good, some of it is very outside-the-box, and some of it is really poor. Heck, some of it has been factually incorrect. However, it’s mine. I wrote it. I hope you know and understand that fact.

Here at the Fosse Posse, we have literally several readers and we love you all. Our little stand against the dishonest use of AI is a whisper in a gale, but this is a big issue and it’s one that more of us will be encountering with increasing frequency. Don’t be a jerk. Don’t support people who steal other people’s work, run it through the artistic or journalistic equivalent of a money-laundering scheme, and try to pass it off as original. It’s bullshit and shame on any publisher that engages in the practice.

So, that was a long way of saying “It is our opinion that AI-generated content is bad.” What’s your take? Are we dying on the wrong hill here?


How do you feel about procedurally-generated content?

This poll is closed

  • 20%
    I’m fine with it! I’m here for the information. Also, aren’t the AIs here to free us from the drudgery of creative expression so we can spend more time at the office?
    (1 vote)
  • 20%
    So long as the publication is being transparent about the source, I have no problem with it.
    (1 vote)
  • 0%
    I’m concerned about the quality, but if AI generates work that’s as good or better than a human writer? It just makes sense.
    (0 votes)
  • 60%
    It’s terrible on many levels. It sounds weird to say this, but I think you have the right of it.
    (3 votes)
5 votes total