I Was Accused of Being an AI
So I checked with OpenAI’s Text Classifier and, it seems, I’m not. Phew!
Some editors care about whether they are publishing AI-generated articles; others, I imagine, do not.
And there is an argument, I suppose, that as long as the work is of good enough quality, why worry? But if publications are serious about promoting original work, they need to be aware of when that work is not, in fact, original.
So, it is not surprising that the editors who do care are taking steps to detect non-human authors. There are a few detection tools available, but they are not, I think, terrifically reliable. So it makes sense that editors should also rely on their common sense and judgement, not just some flaky bit of AI software.
This is what happened.
I wrote an article on what is a fairly new product and submitted it to a popular publication. One of their AI-generated text detection tools flagged up parts of my article.
Now, I’ve kind of been there before. What feels like an awfully long time ago (because it is) I used to teach at a university. And sometimes students were not as honest as you’d have liked them to be; plagiarism was not rife, but it existed. So, I was aware that occasionally I needed to check up on the essays submitted to me. Sometimes it would be because I felt that I’d seen something very similar before and, sometimes, it was because I knew that the quality of the work did not match the ability of the student. Sometimes it was just a gut feeling. But I needed to check.
In those days detection was much easier, of course: there were no AI chatbots. Plagiarism was normally just copying from other sources, and if the student could find those sources, so could I.
So I get it. I understand that checks need to be made.
But I also understand that those checks can be unreliable. OpenAI’s AI Text Classifier for example is a GPT trained to detect text written by ChatGPT and the like. But we know that these AIs, while extremely impressive, are not always accurate. OpenAI themselves warn that their classifier “…isn’t always accurate; it can mislabel both AI-generated and human-written text”.
So, while using tools can be helpful to an editor, judgement and common sense are valuable, too — maybe more so.
Back to my case.
As I said, the topic of my article was a new product — it had been around for only a couple of weeks when I wrote the article. The article is full of images that I clearly made myself, and it documents a process, using the new product, that is unique to me. So the idea that something like ChatGPT could have enough information to write sensibly about it is unlikely in the extreme; there simply is no content out there for it to use for training.
Some of the parts that were flagged as suspicious were fairly basic instructions that might be common to many different products. So while it’s possible that similar text could be found elsewhere and adapted by a chatbot, that would be because it is common or generic. And it is unlikely that anyone would bother to use an AI to produce anything so basic.
Anyway, the point is that the article is entirely my work and common sense should support such a conclusion.
But the detection software tells you it isn’t sure. So, on balance, what does your judgement tell you? What do you do?
Do you think to yourself that the software is giving you the wrong answer, or do you approach the author for confirmation?
If it were me, I would trust the author rather than risk insulting them — because that’s, frankly, what it feels like.
I can hear some of you saying “Don’t be such a wimp” but when I have put effort into producing what I feel is a decent piece of work, I find it upsetting for someone to ask “Are you sure you actually wrote this?”.
As a matter of interest, I ran the article through OpenAI’s classifier; it responded “The classifier considers the text to be very unlikely AI-generated”. I also ran the ‘suspicious’ parts through four other classifiers, and three agreed with OpenAI. One, however, suggested a 66% chance that it was AI-written — it was wrong! All of them appeared supremely confident in their judgement!
So, editors, keep up the good work but remember that if you are good at your job, you don’t need to be a slave to your software tools. Sometimes your own good judgement is more useful.