Reading through the GitHub Copilot feedback made me realize that people report hallucination issues as if they were ordinary software bugs. Many seem to believe the wrong answers can simply be patched somewhere in the LLM's backend.