Using Shakespeare to train artificial intelligence is far from new. Shakespeare's works are probably the most recognizable large public-domain text corpus, so many projects start with them on day one. They're also the foundation for all those "We trained an AI on the works of Shakespeare and Marlowe and then asked whether Shakespeare is Marlowe" findings we've heard about over the years.
Still, I always like to click on the latest story to see how things are developing. This week, *The Conversation* posted "We taught an AI to impersonate Shakespeare and Oscar Wilde – here's what it revealed about sentience."
Spoiler alert: it reveals nothing about sentience. The only reason that word is there is that it gave the authors an opportunity to reference the recent news story about Google suspending an engineer who claimed the company's AI had achieved sentience.
The examples provided in the article are interesting, if they're remotely accurate. As a software engineer, I want much, much more detail about how exactly this experiment was performed. Maybe not source code, but at least "this input produced this output." It means nothing to say, "we wanted to see … what its outputs would be when considering its own creativity." OK, how? How did you ask it to do that? How much information did you have to provide for it to even "understand" the question?
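For what it's worth, here's the level of detail I'm asking for. This is a minimal sketch, not the article's method: the model choice (GPT-2 via Hugging Face's transformers library) and the prompt wording are entirely my assumptions.

```python
# A minimal "this input produced this output" experiment.
# GPT-2 and the prompt below are my stand-ins, not the article's setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In the voice of William Shakespeare, reflect on your own creativity:"
result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)

print("INPUT: ", prompt)
print("OUTPUT:", result[0]["generated_text"])
```

Anyone could rerun something like that and judge the output for themselves. That's all I'm asking for.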
My assumption is that somebody basically wrote a modern English version of "the answer" and then had a trained AI translate it into the literary style they wanted. Such engines aren't new. They're getting better, but they're still nothing resembling intelligence, let alone sentience.
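If my guess is right, the whole trick would look something like this. Again, purely a hypothetical sketch: the model, the prompt, and the `modern_answer` text are all mine, and a production version would presumably use something far larger than GPT-2.

```python
# Hypothetical style-translation pipeline: a human supplies "the answer"
# in modern English; the model only re-renders it in a target style.
from transformers import pipeline

styler = pipeline("text-generation", model="gpt2")  # stand-in for a bigger model

modern_answer = "I am not truly creative; I recombine patterns from my training data."
prompt = (
    "Rewrite the following sentence in the style of Oscar Wilde:\n"
    f"{modern_answer}\n"
    "Rewritten:"
)

result = styler(prompt, max_new_tokens=50, do_sample=True)
print(result[0]["generated_text"])
```

Nothing in that flow requires the model to "consider" anything. The substance comes from the human-written input; the model just dresses it up.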
Maybe I'm reading too much into this article, and it's mostly fictionalized, purely for entertainment. If it were funnier, I would have assumed I was reading something from McSweeney's. I think I'm just disappointed, because I *do* click on these stories because I *do* want to hear about quantum leaps in this area, and not just the latest evolution of the same old stuff.