

How do you know if that "person" who's sharing photos, telling stories or engaging in other activities online is actually real and not an AI bot?
That’s a question researchers have been pondering as AI gets better and better at mimicking human behavior in an online world where people often engage anonymously.
Researchers are proposing “personhood credentials” to help online service providers distinguish between real people and AI bots, in an effort to counter bad actors and preserve privacy.
The proposal is outlined in a new paper from 32 researchers at OpenAI, Harvard, Microsoft, the University of Oxford, MIT, UC Berkeley and other organizations.
Their solution: have humans sign up for “personhood credentials,” a digital ID or token that lets online services know you’re real and not an AI. They say the credentials can be issued by a variety of “trusted institutions,” including governments and service providers (like Google and Apple, which already ask you to log in with an ID).
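To make the idea concrete, here's a minimal sketch of that issue-and-verify flow in Python, using ordinary Ed25519 signatures from the cryptography package. The paper itself calls for privacy-preserving cryptography (such as blind signatures or zero-knowledge proofs) so services can't link a credential back to an identity; the plain-signature scheme and the names here (issue_credential, verify_credential) are illustrative assumptions, not the paper's actual protocol.

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: a trusted institution generates a signing key and, after
# verifying offline that an applicant is a real person, signs an opaque
# token for them. The token itself carries no identifying information.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()


def issue_credential(holder_token: bytes) -> bytes:
    """Return the issuer's signature over the holder's opaque token."""
    return issuer_key.sign(holder_token)


# Service side: an online service accepts a token plus signature and checks
# only that a trusted issuer signed it; it learns that the holder is a
# verified person, not who the holder is.
def verify_credential(holder_token: bytes, signature: bytes) -> bool:
    try:
        issuer_public_key.verify(signature, holder_token)
        return True
    except InvalidSignature:
        return False


# A holder presents a random, identity-free token and its signature.
token = os.urandom(32)
signature = issue_credential(token)
print(verify_credential(token, signature))  # True: vouched for by the issuer
```

The point of the sketch is the trust split: the issuer vouches once that a person exists, and each service only checks the issuer's signature on an opaque token rather than collecting identity documents itself.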
To make such systems work, we’d need wide adoption across the world. So the researchers are encouraging governments, technologists, companies and standards bodies to come together to create a standard.
Not everyone’s a fan, though. Some researchers say a better approach would be to have the companies creating these systems solve for the problems introduced by AI, rather than making everyday people responsible for detecting and reporting AI bots.
Here are the other doings in AI worth your attention.
California moves a step forward with landmark AI regulation
A groundbreaking California bill that would require AI companies to test and monitor systems that cost more than $100 million to develop has moved one step closer to becoming law.
The California Assembly passed the proposed legislation on Aug. 28, following its approval by the state Senate in May. The bill, now headed to Gov. Gavin Newsom, would be the first law in the US to impose safety measures on large AI systems.
Called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, Senate Bill 1047 was proposed by state Sen. Scott Wiener, a Democrat who represents San Francisco. It’s opposed by tech companies, including OpenAI, Google and Meta, as well as at least eight state politicians who argued it could stifle innovation.
California is home to 35 of the world’s top 50 AI companies, The Guardian noted, including Anthropic, Apple, Google, Meta and OpenAI.
The full text of SB 1047, OpenAI's objection to it and Wiener's response to its legislative progress are all available online.
In recent months, researchers and even AI company employees have expressed concerns that development of powerful AI systems is happening without the right safeguards for privacy and security. In a June 4 open letter, employees and industry notables including AI inventors Yoshua Bengio, Geoffrey Hinton and Stuart Russell called out the need for whistleblower protections for people who report problems at their AI companies.
Meanwhile, OpenAI, which makes ChatGPT, and Anthropic, creator of Claude, last week became the first to sign deals with the US government that allow the US AI Safety Institute to test and evaluate their AI models and collaborate on safety research, the institute said. The organization was created in October as part of President Joe Biden's AI executive order.
The government will “receive access to major new models from each company prior to and following their public release,” said the institute, which is part of the Department of Commerce at the National Institute of Standards and Technology, or NIST.
The timing, of course, comes as California's Newsom decides whether to sign the state's proposed AI bill or leave it to the federal government to address the issue.
OpenAI's Sam Altman shared his point of view in a post on X after the deal with the government was announced.
“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” Altman wrote. “For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!”
Google’s Gemini text-to-image generator ready to try again
After taking its text-to-image generator back to the drawing board because the tool generated embarrassing and offensive images of people, Google last week said it’s ready to release an updated version of the tool as part of its Gemini chatbot.
The image generator's ability to depict people was pulled in February after it produced bizarre, biased and racist images, including showing Black and Asian people as Nazi-era German soldiers (as Yahoo News noted) and "declining to depict white people, or inserting photos of women or people of color when prompted to create images of Vikings, Nazis, and the Pope" (as Semafor reported).
The backlash, seen as a sign that the company was rushing AI products to market without adequate testing, prompted Google CEO Sundar Pichai to issue an apology. "I know that some of its responses have offended our users and shown bias," Pichai wrote in a memo to staff. "To be clear, that's completely unacceptable and we got it wrong."
