AI May Lead to Personhood Credentials, Google Fixes Gemini Image Maker

How do you know if that “person” who’s sharing photos, telling stories or engaging in other activities online is actually real and not an AI bot?


That’s a question researchers have been pondering as AI gets better and better at mimicking human behavior in an online world where people often engage anonymously. Their solution: having humans sign up for “personhood credentials,” a digital ID or token that lets online services know you’re real and not an AI.

In a new paper called “Personhood Credentials: Artificial Intelligence and the Value of Privacy-Preserving Tools to Distinguish Who Is Real Online,” 32 researchers from OpenAI, Harvard, Microsoft, the University of Oxford, MIT, UC Berkeley and other organizations say that “proof-of-personhood” systems are one way to counter the work of bad actors who can now easily get around existing countermeasures such as captcha puzzles. 

“Anonymity is an important principle online,” they write in their 63-page proposal. “However, malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations, intensifying the challenge of balancing anonymity and trustworthiness online. Personhood credentials give people a way to signal their trustworthiness on online platforms, and offer service providers new tools for reducing misuse by bad actors.”

The researchers say the credentials can be issued by a variety of “trusted institutions,” including governments and service providers (like Google and Apple, which already ask you to log in with an ID). 
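
For readers curious how such a credential could stay privacy-preserving, one classic building block is the blind signature: the issuer vouches for a token it never actually sees, so the credential can't be traced back to the sign-up. Below is a minimal Python sketch of that idea using RSA blind signatures; the flow, names and toy parameters are purely illustrative and aren't drawn from the paper itself.

    # Illustrative sketch: an unlinkable "personhood credential" via RSA blind
    # signatures. Toy parameters only; not from the paper, not production crypto.
    import hashlib
    import secrets
    from math import gcd

    # --- Issuer (a "trusted institution") sets up a toy RSA keypair. ---
    p, q = 1000003, 1000033            # toy primes; real keys are 2048+ bits
    n = p * q
    e = 65537
    d = pow(e, -1, (p - 1) * (q - 1))  # private signing exponent

    def h(token: bytes) -> int:
        """Hash a credential token into the RSA group."""
        return int.from_bytes(hashlib.sha256(token).digest(), "big") % n

    # --- User: generate a secret token and blind it before issuance. ---
    token = secrets.token_bytes(16)    # the credential the user will present
    while True:
        r = secrets.randbelow(n)       # blinding factor, must be invertible mod n
        if r > 1 and gcd(r, n) == 1:
            break
    blinded = (h(token) * pow(r, e, n)) % n  # the issuer sees only this value

    # --- Issuer: verify personhood out of band (say, an ID check), then sign blindly.
    blind_sig = pow(blinded, d, n)     # issuer signs without learning `token`

    # --- User: unblind, yielding an ordinary RSA signature on the token. ---
    sig = (blind_sig * pow(r, -1, n)) % n

    # --- Any service: verify the credential with the issuer's public key alone.
    assert pow(sig, e, n) == h(token)
    print("credential verified; issuer cannot link it back to the issuance")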

To make such systems work, we’d need wide adoption across the world. So the researchers are encouraging governments, technologists, companies and standards bodies to come together to create a standard. 

OpenAI CEO Sam Altman already co-founded a startup, Worldcoin, that scans “people’s irises in exchange for a digital passport that both verifies they’re human and entitles them to shares of a cryptocurrency,” The Washington Post reported. “He’s pitched the project as a way to defend humanity from AI bots while enabling economic policies such as universal basic income for a future in which jobs are scarce.”

Not everyone’s a fan, though, with several governments questioning the way Worldcoin works. “Among their concerns: How does the Cayman Islands-registered Worldcoin Foundation handle user data, train its algorithms and avoid scanning children?” The Wall Street Journal reported. “Privacy advocates say [iris images] could be used to build a global biometric database with little oversight.” Well, I guess Altman would know what’s going on. 

As for personhood credentials, other researchers see a problem with the approach: it puts the burden of dealing with AI on everyday people rather than on the companies creating these systems.

“A lot of these schemes are based on the idea that society and individuals will have to change their behaviors based on the problems introduced by companies stuffing chatbots and large language models into everything rather than the companies doing more to release products that are safe,” Chris Gilliard, an independent privacy researcher and surveillance scholar, told the Post.

Here are the other doings in AI worth your attention.

California moves a step forward with landmark AI regulation  

A groundbreaking California bill that would require AI companies to test and monitor systems that cost more than $100 million to develop has moved one step closer to becoming a reality. The California Assembly passed the proposed legislation on Aug. 28, following its approval by the state Senate in May.

Senate Bill 1047 is now headed to Gov. Gavin Newsom, who’ll decide whether it should be approved. If it is, it will be the first law in the US to impose safety measures on large AI systems. California is home to 35 of the world’s top 50 AI companies, The Guardian noted, including Anthropic, Apple, Google, Meta and OpenAI.

Called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, SB 1047 was proposed by state Sen. Scott Wiener, a Democrat who represents San Francisco. It’s opposed by tech companies, including OpenAI, Google and Meta, as well as at least eight state politicians who argued it could stifle innovation, as I noted in my column last week. 

Members of California’s congressional delegation, including Anna Eshoo and Zoe Lofgren, asked Newsom in an Aug. 15 letter to veto the bill, but “it’s not clear if he will do so,” CalMatters reported last week. Newsom has until the end of September to sign it, veto it or let it become law without his signature, The Guardian reported, adding that the governor “declined to weigh in on the measure earlier this summer but had warned against AI regulation.”

You can read the text of SB 1047 here, OpenAI’s objection to it here, and Wiener’s response to its legislative progress here.

In recent months, researchers and even AI company employees have expressed concerns that development of powerful AI systems is happening without the right safeguards for privacy and security. In a June 4 open letter, employees and industry notables including AI inventors Yoshua Bengio, Geoffrey Hinton and Stuart Russell called out the need for whistleblower protections for people who report problems at their AI companies. SB 1047 includes whistleblower protections.

Meanwhile, OpenAI, which makes ChatGPT, and Anthropic, creator of Claude, last week became the first companies to sign deals with the US government that allow the US AI Safety Institute to test and evaluate their AI models and collaborate on safety research, the institute said. The organization was created in October as part of President Joe Biden’s AI executive order.

The government will “receive access to major new models from each company prior to and following their public release,” said the institute, which is part of the Department of Commerce at the National Institute of Standards and Technology, or NIST.

The timing, of course, comes as California’s Newsom decides whether to sign the state’s proposed AI bill or leave the issue to the federal government. OpenAI’s Altman shared his point of view in a post on X after the deal with the government was announced.

“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” Altman wrote. “For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!” 

Google’s Gemini text-to-image generator ready to try again 

After taking its text-to-image generator back to the drawing board because the tool generated embarrassing and offensive images of people, Google last week said it’s ready to release an updated version of the tool as part of its Gemini chatbot.

The image generator’s ability to depict people was pulled in February after users found it produced bizarre, biased and racist images, including Black and Asian people depicted as Nazi-era German soldiers (as Yahoo News noted) and “declining to depict white people, or inserting photos of women or people of color when prompted to create images of Vikings, Nazis, and the Pope” (as Semafor reported).


The backlash, seen as a sign that the company was rushing AI products to market without adequate testing, prompted Google CEO Sundar Pichai to issue an apology. “I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong,” he wrote in a memo to employees back in late February. “We know the bar is high for us and we will keep at it for however long it takes” to address the problems with the company’s AI.   

Apparently it took six months. In a blog post last week, Google said the new image generator, called Imagen 3, should provide a “better user experience when generating images of people.”

“We’ve worked to make technical improvements to the product, as well as improved evaluation sets, red-teaming exercises and clear product principles,” wrote Dave Citron, senior director of product management for Gemini experiences. “We don’t support the generation of photorealistic, identifiable individuals, depictions of minors or excessively gory, violent or sexual scenes. Of course, as with any generative AI tool, not every image Gemini creates will be perfect, but we’ll continue to listen to feedback from early users as we keep improving.”

Google said the image generator will be rolled out to users of its products and services, across multiple languages, “soon.” Or you can sign up to access the tool today with Gemini Advanced for $20 a month (first month free).

Women may not be as up to speed on AI as men, or as trusting

A research report from DeVry University says there’s a “stark AI knowledge gap between men and women, which threatens to widen the career advancement gap for women even more.”

In the report, called Closing the Gap: Upskilling and Reskilling in an AI Era, the university found that:

  • Only 27% of women believe AI will help them get ahead, compared with 43% of men.
  • Only 19% of women believe being skilled in AI will help open career opportunities for them, compared with 30% of men.
  • Only 49% of women see AI as making their work easier, compared with 58% of men. In addition, only 43% view AI as making them more productive, compared with 51% of men.
  • Women in tech (82%) are more likely than women on average (63%) to have access to upskilling, but they’re not as likely as men in tech (90%).
  • Women are less likely to fully understand what AI means, with 82% of men reporting they fully understand, compared with only 68% of women. 

As for that last bullet point, whether men truly understand what AI means is of course debatable. But they aren’t shy about reporting they do.

OpenAI has 200 million users, AI Voice lets creators use their own voice for TikTok videos, NotePin is a new AI wearable  

Here are three quick happenings in the world of AI to round out this week’s news. 

OpenAI: More users and greater need for operating cash

OpenAI confirmed to Axios that there are now more than 200 million weekly active users of its ChatGPT chatbot, twice as many as a year ago. The company also said that 92% of Fortune 500 companies are using its AI products. 

That news comes as The Wall Street Journal and CNBC report that the San Francisco-based startup is in talks to raise a new funding round that would value the company at more than $100 billion. Joshua Kushner’s Thrive Capital is leading the funding round (yes, he’s Jared Kushner’s brother).

TikTok’s AI voices want to add yours

TikTok is now giving creators a feature that lets them create an AI version of their own voice, as opposed to just using one of the generic AI voices supplied by the social media platform, Social Media Today reported.

According to TikTok, you can create your AI voice in just 10 seconds: after you record a few lines, TikTok adds the AI version of your voice to your voice library.

You can find plenty of videos on TikTok showing how to use AI voices.

I’ll note that the concern isn’t with content creators using an AI to easily replicate their own voices to do voice-overs for TikTok videos. It’s with people using AI tools to copy someone else’s voice “with believable enough accuracy, based on minimal input,” Social Media Today noted.

Celebrities including Morgan Freeman have already called out TikTok videos that replicate their voices without permission.

A new AI wearable for taking notes

Instead of having you record voice memos on your smartphone, Plaud.AI is pitching an “ultra-thin, ultra-light, wearable AI device” called the NotePin that records, transcribes and summarizes whatever you tell it. The company says the NotePin, which you can wear as a necklace, a wristband, a clip or a pin, has an “impressive battery life, including 40-day standby time and 20 hours of continuous recording.”

The company says you can preorder the $169 device now and it’ll ship in November. The gadget will come with a free starter plan, which offers 300 minutes of transcription time per month, support for 59 languages, and the ability to trim audio. A $79 per year Pro Plan adds 1,200 minutes of transcription time and other AI features “to come.”  


