The phenomenon of "AI hallucinations" – where generative AI produce surprisingly coherent but entirely fabricated information – is becoming a critical area misinformation online of research. These unintended outputs aren't necessarily signs of a system “malfunction” exactly; rather, they represent the inherent limitations of models trained