But according to 404 Media, in a series of deleted X posts, Grok boasted that Musk had the potential to drink piss better than any human in history, that he was the ultimate throat goat whose blowjob prowess edged out Trump's, and that he should have won a 2016 porn industry award instead of porn star Riley Reid. Grok also claimed Musk was more fit than LeBron James.
For those of us scouring filings for questionable AI screw-ups, though, we now zoom in on a handwritten insert included with the order, justifying the decision to allow the motion even though it technically missed a deadline. The insert cites Jones v. Goodman, 57 Cal.App.5th 521, where the court wrote that an amended motion should relate back to the initial motion "as long as the initial motion was in 'substantial compliance' with the governing rule."
It's not hard to understand the AI future Microsoft is betting billions on - a world where computers understand what you're saying and do things for you. It's right there in the ads for the latest Copilot PCs, where people cheerfully talk to their laptops and the laptops talk back, answering questions in natural language and even completing tasks for them. The tagline is straightforward: "The computer you can talk to."
Starbuck's claims against Google came after he filed a similar lawsuit against Meta, whose AI he claimed falsely asserted that he'd participated in the January 6th riot at the US Capitol. But Meta settled that lawsuit in August and even hired Starbuck as an advisor to help address "ideological and political bias" in its AI chatbot, The Wall Street Journal reported. The outlet noted last month that so far, no US court had awarded damages for defamation by an AI chatbot.
Looming over the proceedings even more prominently than the judge running the show were three tall digital displays, sticking out with their glossy finishes amid the courtroom's sea of wood paneling. Each screen represented a different AI chatbot: OpenAI's ChatGPT, xAI's Grok, and Anthropic's Claude. These AIs' role? As the "jurors" who would determine the fate of a man charged with juvenile robbery.
In one case, according to Starbuck, Google's AI claimed he had been a person of interest in a murder case when he was just two years old. "For each source, Google's AI provides a URL, giving the impression that these are real news articles with headlines like, 'Robby Starbuck Responds to Murder Accusations,'" he said. "The only way to discover that these URLs are fake is to click on them."
"Um ... there we go ... uh-oh," said Meta CEO Mark Zuckerberg on stage as he attempted to answer a video call through a combination of movements between a wristband and a pair of glasses. "Well, I ... let's see what happened there ... that's too bad," he continued, shortly before cutting short the live demo. The video call went unanswered.
"What we had noticed was there was an underlying problem with our data," Ahuja said. When her team investigated what had happened, they found that Salesforce had published contradictory "knowledge articles" on its website."It wasn't actually the agent. It was the agent that helped us identify a problem that always existed," Ahuja said. "We turned it into an auditor agent that actually checked our content across our public site for anomalies. Once we'd cleaned up our underlying data, we pointed it back out, and it's been functional."
A few months ago, I asked ChatGPT to recommend books by and about Hermann Joseph Muller, the Nobel Prize-winning geneticist who showed how X-rays can cause mutations. It dutifully gave me three titles. None existed. I asked again. Three more. Still wrong. By the third attempt, I had an epiphany: the system wasn't just mistaken, it was making things up.
Popular right-wing influencer Charlie Kirk was killed in a shooting in Utah yesterday, rocking the nation and spurring debate over the role of divisive rhetoric in political violence. As is often the case in breaking news about public massacres, misinformation spread quickly. And fanning the flames this time was Elon Musk's Grok AI chatbot, which is now deeply integrated into X-formerly-Twitter as a fact-checking tool - giving it a position of authority from which it made a series of ludicrously false claims in the wake of the slaying.
"Language models are optimized to be good test-takers, and guessing when uncertain improves test performance," the authors write in the paper. The current evaluation paradigm essentially uses a simple, binary grading metric, rewarding them for accurate responses and penalizing them for inaccurate ones. According to this method, admitting ignorance is judged as an inaccurate response, which pushes models toward generating what OpenAI describes as "overconfident, plausible falsehoods" -- hallucination, in other words.