To be honest though, this is not a new problem; it goes all the way back to punch cards and magnetic tape. With any results, it's often a case of "garbage in = garbage out." The only difference now is people's propensity to believe in that garbage and spread it around.
Ain't that the trashy social media truth.
I've used Google's AI quite a bit, and some other one that's not ChatGPT. My experience has been mixed. Sometimes it does a fast and thorough job of answering questions and summarizing areas of knowledge, presumably when it has access to accurate databases that are on point.
Other times, it clearly doesn't know—and, worse, makes stuff up. I've interrogated it on legal questions, and it gets a lot of them spectacularly wrong. It probably has no access to legal databases, and it has difficulty distinguishing how legal rules and results change depending on different underlying facts.
One time I spent about an hour chatting with AI about this site and myself. It got a lot correct, but it also seemed to make a bunch of stuff up. I learned that I was a "leading advocate" of the "skim stroke." I have no idea what that is. Maybe I posted somewhere that skimming your paddle along the water surface on a recovery can be fun or sensual, but I don't recall. It also went on to describe my "other interests," such as running a refreshment stand with my wife, which is a total hallucination.
I skim the Google AI responses that appear, but I don't trust them. There's too much garbage and guessing mixed in with correct analyses of available, known facts, and it's often impossible to tell what's what without further searching and reading original sources myself.
That's one of the most important things I've learned in my academic, professional, and personal life: always go to original source material and read it yourself; don't trust someone else's reporting of it, especially the media's. God help a world that relies on social media comments, tweets, and clickbait headlines for news.