
Generative AI Stealing From and Misunderstanding My Content

In preparation for my Morningside University Humanities Speaker Series lecture Clay and Fire: Exploring Raku Ceramics, I was googling raku terminology to capture screenshots demonstrating that some of the artwork and research published in my blog posts rank among the top internet search results for a number of raku finishing techniques. (Note: all the Google screenshots below have a black background because I browse in “dark mode,” which is a bit easier on my eyes.)

In particular, my honey raku ceramics are the very first three image results and my website is also typically the first or second overall search result!

Screenshot of Google search image results for "honey raku", with my images as the first three results

As I was gathering screenshots of that, Google’s experimental generative AI decided to interrupt with its description of what honey raku is, and I was genuinely surprised when I first glanced at it - it was just a garbled reproduction of my own blog posts. Note that my blog is the first result in its “Learn more” right-side panel.

The first Google generative AI stolen, erroneous description of “honey raku”

For comparison, below is a screenshot of the beginning of the blog post it’s pulling from:

A screenshot of a portion of my own honey raku blog post

You may have noticed these errors in the above generative AI description:

  • the image it stole (used uncredited) to illustrate the description is not of honey raku; it’s of ferric chloride and horsehair saggar-fired rakuware, which is a different technique

  • the honey isn’t hot; the pottery is hot, as it’s coming out of the 1000°C kiln

  • it works best on convex forms because the melting honey rolls off them; in concavities the honey settles instead, leaving large black pooling marks that aren’t as aesthetically pleasing

I downvoted this first generative AI description due to its errors, and then redid the search just to see what would happen. The generative AI “learned” from my downvoting, so it removed some of the misinformation, stole an image of mine to embed, and added new misinformation instead.

The second Google generative AI stolen, erroneous description, this time with stolen imagery as well

In this new description, it’s fixed the image reference and the convex vs. concave misunderstanding, but introduced new errors including:

  • for honey raku, you do not put the hot-out-of-the-kiln ceramic into a pit of combustible materials and then create a reduction atmosphere; that is, however, the process for some other raku techniques

  • it now doesn’t seem to know when/how the honey gets added, but it’s somewhere in the pit and that’s apparently enough

I shouldn’t be surprised; I’ve seen ChatGPT argue that “strawberry” has only two R’s, and I’ve seen generative AI advise staring into the sun for 10-15 minutes per day because it was sincerely quoting satirical newspapers. It’s a first for me, though, to see it explicitly quoting and erroneously paraphrasing me to spread misinformation.