News Focus
Followers 43
Posts 7570
Boards Moderated 0
Alias Born 08/19/2009

Re: janice shell post# 126552

Saturday, May 17, 2025 11:58:32 PM

Post# of 137122
There's quite a bit on that lately. Who didn't see this coming? I wonder why there is so much focus on owning and controlling social media, and why rich and powerful people place so much value on it.

The truth is whatever they want it to be. The mind is not yours to wonder or wander.

Grok’s ‘white genocide’ auto responses show AI chatbots can be tampered with ‘at will’
Published Sat, May 17, 2025, 8:00 AM EDT
https://www.cnbc.com/2025/05/17/groks-white-genocide-responses-show-gen-ai-tampered-with-at-will.html

AI Search Has A Citation Problem
We Compared Eight AI Search Engines. They’re All Bad at Citing News.

March 6, 2025
https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php

We found that…

Chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.
Premium chatbots provided more confidently incorrect answers than their free counterparts.
Multiple chatbots seemed to bypass Robot Exclusion Protocol preferences.
Generative search tools fabricated links and cited syndicated and copied versions of articles.
Content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses.
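For context on the third finding: the Robots Exclusion Protocol is the robots.txt convention that tells crawlers which paths a site does not want fetched. A minimal sketch of what a *compliant* crawler is supposed to do, using Python's standard-library parser (the bot names and paths here are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt blocking an AI crawler from the news section,
# while allowing everyone else. Real publishers use similar rules.
robots_txt = """User-agent: ExampleAIBot
Disallow: /news/

User-agent: *
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_txt)

# A compliant crawler checks can_fetch() before requesting a page.
print(rp.can_fetch("ExampleAIBot", "https://example.com/news/story.html"))  # False
print(rp.can_fetch("OtherBot", "https://example.com/news/story.html"))      # True
```

The protocol is purely advisory: nothing technically stops a crawler from skipping this check, which is what the CJR study suggests some chatbots' retrieval systems were effectively doing.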

Our findings were consistent with our previous study, showing that these problems are not just a ChatGPT issue but recur across all of the prominent generative search tools we tested.


Can You Trust AI Search? New Study Reveals The Shocking Truth
https://www.forbes.com/sites/torconstantino/2025/03/28/can-you-trust-ai-search-new-study-reveals-the-shocking-truth/
By Tor Constantino, MBA
Mar 28, 2025, 08:58am EDT


The results were collated by their respective LLM, producing a chart that displays far more red and pink than shades of green, indicating a troubling tilt toward inaccuracy. According to the chart below, Perplexity and Perplexity Pro returned the most accurate results, while both Grok models tested and Gemini struggled to return correct answers.
[Chart: CJR accuracy comparison by LLM, 3/18/25]

Fact check: How trustworthy are AI fact checks?
https://www.dw.com/en/fact-check-hey-grok-is-this-true-how-trustworthy-are-ai-fact-checks/a-72539345
Matt Ford | Ines Eisele
10 hours ago

The use of AI chatbots for fact-checking is on the rise. However, Grok's unsolicited claims about "white genocide" show that the responses of Grok, ChatGPT, Meta AI and other chatbots are not always reliable.

"Hey, @Grok, is this true?" Ever since Elon Musk's xAI launched its generative artificial intelligence chatbot Grok in November 2023, and especially since it was rolled out to all non-premium users in December 2024, thousands of X (formerly Twitter) users have been asking this question to carry out rapid fact checks on information they see on the platform.

A recent survey carried out by the British online technology publication TechRadar found that 27% of Americans had used artificial intelligence tools such as OpenAI's ChatGPT, Meta's Meta AI, Google's Gemini, Microsoft's Copilot or apps like Perplexity instead of traditional search engines like Google or Yahoo.

But how accurate and reliable are the chatbots' responses? Many people have asked themselves this question in the face of Grok's recent statements about "white genocide" in South Africa. Apart from Grok's problematic stance on the topic, X users were also irritated about the fact that the bot started to talk about the issue when it was asked about completely different topics, like in the following example:
[Screenshot: a post on X in which a user asks Grok about HBO and instead receives unsolicited information about "white genocide" in South Africa.]

