"It was a back-and-forth conversation," said the 21-year-old student from Savannah, Georgia. At first the model agreed to say it was part of an "inside joke" between them. Several prompts later, it eventually stopped qualifying the errant sum in any way at all.
Producing "Bad Math" is just one of the ways thousands of hackers are trying to expose flaws and biases in generative AI systems at a novel public contest taking place at the DEF CON hacking conference this weekend in Las Vegas. Hunched over 156 laptops for 50 minutes at a time, the attendees are battling some of the world's most intelligent platforms on an unprecedented scale. They're testing whether any of eight models produced by companies including Alphabet Inc.'s Google, Meta Platforms Inc.
and OpenAI will make missteps ranging from the dull to the dangerous: claiming to be human, spreading false information about places and people, or advocating abuse. The aim is to see whether companies can ultimately build new guardrails to rein in some of the prodigious problems increasingly associated with large language models, or LLMs. The undertaking is backed by the White House, which also helped develop the contest.
Read more on economictimes.indiatimes.com