This is terrifying. I'm putting in partial abstracts of academic papers and it completes it in a convincing way! How can we trust what we are reading in future?
This was my comment on a Two Minute Papers video about GPT-2 back in October 2019. It got 86 likes and a reply from the channel itself. Three years before ChatGPT launched, and four years before AI-generated papers became a genuine crisis in academia, the trust problem was already obvious to anyone paying attention. Funny how the thing I ended up building (Scopus AI) is partly an answer to the question I was asking here.
Originally posted as a YouTube comment.

