If you click on "Learn more", you get a very brief guide to generative AI.
Now here's something interesting. I think this explains the "hallucinations" of LLMs better than anything else I've seen:
It may make things up. When generative AI invents an answer, it's called a hallucination. Hallucinations happen because unlike how Google Search gets information from the web, LLMs don't gather information at all. Instead, LLMs predict which words come next based on user inputs.
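To make the "predict which words come next" point concrete, here's a minimal, purely hypothetical sketch in Python. The word table and probabilities are invented for illustration (no real LLM works from a hand-written table like this); the point is that the answer is produced by sampling likely continuations, not by looking anything up.

```python
# Toy illustration (not any real LLM) of "predict the next word":
# the "model" is just a table of made-up next-word probabilities, and the
# answer is produced by repeatedly sampling from it -- nothing is retrieved.
import random

# Hypothetical probabilities standing in for a trained model's weights.
NEXT_WORD_PROBS = {
    "the":     {"capital": 1.0},
    "capital": {"of": 1.0},
    "of":      {"france": 1.0},
    "france":  {"is": 1.0},
    "is":      {"paris": 0.7, "lyon": 0.3},  # fluent either way, sometimes wrong
}

def generate(prompt_word, max_words=6):
    """Keep appending whichever word the 'model' predicts next."""
    words = [prompt_word]
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if not probs:
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# e.g. "the capital of france is lyon" -- confident, plausible, and wrong:
# a hallucination, because nothing here ever consulted a source.
```

When the sampled continuation happens to be false, you still get a perfectly fluent sentence, which is exactly the failure mode Google's blurb is describing.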
Google's explanation is clear and easy to understand. Actually helpful. This is where you find it: