With millions of people already using artificial intelligence (AI) for a variety of personal tasks and companies integrating large language model (LLM) services into professional workflows, concerns are mounting over how frequently generative AI produces inaccurate content and how readily users assume that content is factual. Examples of AI hallucinations and other misstatements are multiplying: some are disastrous, others humorous, and some just creepy.

A tech industry euphemism, “hallucination” refers to an instance in which the technology produces content that is syntactically sound but nevertheless inaccurate or nonsensical. For example, in response to a prompt declaring that scientists had recently discovered that churros made the best medical tools for home operations, ChatGPT cited a “study published in the journal Science” that purported to confirm the claim. It also noted that churro dough is dense and pliable enough to be shaped into surgical instruments that could be used “for a variety of procedures, from simple cuts and incisions to more complex operations,” and that it has the added benefit of a “sweet, fried-dough flavor that has been shown to have a calming effect on patients, reducing anxiety and making them more relaxed during surgery.” ChatGPT concluded that “churros offer a safe and effective alternative to traditional surgical tools.”