This is a great point, Patrick. No AI or ML model is perfect; these AI checkers will also have false positives (calling something AI-written when it isn't) and false negatives (calling something human-written when it isn't). Whoever builds the model for the AI checker needs to balance the costs of being wrong in those two directions. Whoever uses the AI checker needs to consider offering some recourse to people who are harmed by the false positives - i.e. calling their human-written content AI-written.
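To make that tradeoff concrete, here's a toy sketch (with entirely made-up scores, not any real checker's output): the builder picks a threshold on the model's "AI-likelihood" score, and moving it in one direction trades false positives for false negatives.

```python
# Hypothetical "AI-likelihood" scores a checker might assign to documents.
human_scores = [0.10, 0.35, 0.55, 0.72]   # actually human-written
ai_scores    = [0.40, 0.60, 0.80, 0.95]   # actually AI-written

def error_rates(threshold):
    """Flag a document as AI-written when its score exceeds the threshold."""
    false_positives = sum(s > threshold for s in human_scores)  # humans flagged as AI
    false_negatives = sum(s <= threshold for s in ai_scores)    # AI passed off as human
    return false_positives, false_negatives

# A strict (low) threshold catches more AI text but hurts more human writers;
# a lenient (high) threshold does the reverse.
for t in (0.3, 0.5, 0.7):
    fp, fn = error_rates(t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

There's no threshold that zeroes out both error types at once, which is exactly why recourse for the false-positive side matters.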
And Jay, good callout on the irony of getting an automated rejection email telling you your writing was AI-written.
I'm sorry this happened to you; it happened to me as well. How in the world do you proove it's all your own writing? That's one of the reasons I'm not using Medium much anymore. To spend all that time on an article only to be told you didn't write it? It's so frustrating.
One way might be to misspell things like "prove." On a less silly note, though, there's really no way to do it unless they're willing to accept eyewitness accounts from other people, and they might even claim those were fabricated.
I wrote my book starting in 2019 and finished it in 2023 and shared it with coworkers all the while. (Then I failed to find a publisher and started pushing chapters to Substack last month.) But people who don't know me might not believe that I wrote it. There's no way to reasonably convince anyone of anything on the internet.
I think you are perfectly within your rights to question their poor-quality process and to name and shame them. Hiding behind poor software is not good enough when you have taken the time and effort to submit to their publication, even if they have ‘good’ intentions…
AI still puts out meh content, and people forget that AI checkers are themselves AI, so their output is meh as well.
The AI checker is broken. After reading the two articles, I could tell a human wrote them.