

How do you know if that "person" who's sharing photos, telling stories or engaging in other activities online is actually real and not an AI bot?
That’s a question researchers have been pondering as AI gets better and better at mimicking human behavior in an online world where people often engage anonymously.
Researchers are proposing “personhood credentials” to help online service providers distinguish between real people and AI bots, in an effort to counter bad actors and preserve privacy.
The proposal is outlined in a new paper from 32 researchers at OpenAI, Harvard, Microsoft, the University of Oxford, MIT, UC Berkeley and other organizations.
Their solution: have humans sign up for “personhood credentials,” a digital ID or token that lets online services know you’re real and not an AI. They say the credentials can be issued by a variety of “trusted institutions,” including governments and service providers (like Google and Apple, which already ask you to log in with an ID).
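To make the idea concrete, here's a minimal sketch of the issue-and-verify flow in Python, using the widely available cryptography package. It's an illustration only: the paper describes personhood credentials abstractly (the actual proposal leans on privacy-preserving techniques such as zero-knowledge proofs), and the token format and function names here (issue_credential, verify_credential) are hypothetical.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# --- Issuer side: a "trusted institution" such as a government office ---
issuer_key = Ed25519PrivateKey.generate()

def issue_credential(pseudonymous_id: str) -> bytes:
    """Sign an opaque ID after checking personhood out of band (not shown)."""
    payload = json.dumps({"sub": pseudonymous_id, "typ": "phc"}).encode()
    return payload + b"." + issuer_key.sign(payload).hex().encode()

# --- Service side: e.g. a social platform that trusts this issuer ---
issuer_public: Ed25519PublicKey = issuer_key.public_key()

def verify_credential(token: bytes) -> bool:
    """Accept the token only if the issuer's signature checks out."""
    payload, _, sig_hex = token.rpartition(b".")
    try:
        issuer_public.verify(bytes.fromhex(sig_hex.decode()), payload)
        return True
    except (InvalidSignature, ValueError):
        return False

token = issue_credential("user-7f3a")   # issued once per verified person
print(verify_credential(token))         # True: a trusted issuer vouched
print(verify_credential(b"forged.00"))  # False: signature does not verify
```

The property this sketch tries to preserve is the one the researchers emphasize: the service checks only the issuer's signature, so it learns that some trusted institution vouched for a person without learning who that person is.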
To make such systems work, we’d need wide adoption across the world. So the researchers are encouraging governments, technologists, companies and standards bodies to come together to create a standard.
Not everyone’s a fan, though. Some researchers say a better approach would be to have the companies creating these systems solve for the problems introduced by AI, rather than making everyday people responsible for detecting and reporting AI bots.
Here are the other doings in AI worth your attention.
California moves a step forward with landmark AI regulation
A groundbreaking California bill that would require AI companies to test and monitor systems that cost more than $100 million to develop has moved one step closer to becoming a reality.
The California Assembly passed the proposed legislation on Aug. 28, following its approval by the state Senate in May. The bill, now headed to Gov. Gavin Newsom, would be the first law in the US to impose safety measures on large AI systems.
Called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, Senate Bill 1047 was proposed by state Sen. Scott Wiener, a Democrat who represents San Francisco. It’s opposed by tech companies, including OpenAI, Google and Meta, as well as at least eight state politicians who argued it could stifle innovation.
California is home to 35 of the world’s top 50 AI companies, The Guardian noted, including Anthropic, Apple, Google, Meta and OpenAI.
You can read the text of SB 1047 here, OpenAI’s objection to it here, and Wiener’s response to its legislative progress here.
In recent months, researchers and even AI company employees have expressed concerns that development of powerful AI systems is happening without the right safeguards for privacy and security. In a June 4 open letter, employees and industry notables including AI pioneers Yoshua Bengio, Geoffrey Hinton and Stuart Russell called out the need for whistleblower protections for people who report problems at their AI companies.
Meanwhile, OpenAI, which makes ChatGPT, and Anthropic, creator of Claude, last week became the first to sign deals with the US government that allow the US AI Safety Institute to test and evaluate their AI models and collaborate on safety research, the institute said. The organization was created in October 2023 as part of President Joe Biden's AI executive order.
The government will “receive access to major new models from each company prior to and following their public release,” said the institute, which is part of the Department of Commerce at the National Institute of Standards and Technology, or NIST.
The timing, of course, comes as California's Newsom decides whether to sign the state's AI bill into law or veto it and leave the issue to the federal government.
OpenAI's Sam Altman shared his point of view in a post on X after the deal with the government was announced.
“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” Altman wrote. “For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!”
Google’s Gemini text-to-image generator ready to try again
After taking its text-to-image generator back to the drawing board because the tool generated embarrassing and offensive images of people, Google last week said it’s ready to release an updated version of the tool as part of its Gemini chatbot.
The image generator's ability to depict people was pulled in February after it produced bizarre, biased and racist images, including showing Black and Asian people as Nazi-era German soldiers (as Yahoo News noted) and "declining to depict white people, or inserting photos of women or people of color when prompted to create images of Vikings, Nazis, and the Pope" (as Semafor reported).
The backlash, seen as a sign that the company was rushing AI products to market without adequate testing, prompted Google CEO Sundar Pichai to issue an apology. "I know that some of its responses have offended our users and shown bias," Pichai wrote, adding: "to be clear, that's completely unacceptable and we got it wrong."