Chip giant Nvidia said on Monday that, for the first time, it will manufacture AI supercomputers (machines that can process copious amounts of data and run complex algorithms) entirely within the U.S. The announcement comes after President Trump signaled that imported semiconductors would be targeted by tariffs this week and announced a national security trade investigation into chip imports from China. Nvidia said it has already started producing its Blackwell chips at TSMC’s Phoenix, Arizona plant and plans to work with partners like Foxconn and Wistron to set up additional factories in Houston and Dallas. It plans to build robots to operate the facilities, which will be designed using “digital twins,” virtual simulations of the real-world plants and their environments, to speed up construction. But even setting chips aside, Trump’s tariffs could make building AI data centers more expensive, since those facilities rely on raw materials imported from other countries, Forbes reported.
And if you haven’t gotten a chance yet, check out our seventh annual AI 50 list here.
Now let’s get into the headlines.
ETHICS AND LAW
Community colleges across the country are facing an onslaught of enrollments from “bot” students who sign up for classes by the hundreds to siphon off tens of millions of dollars in state and federal aid money, Voice of San Diego reported. These “bot” students use fake aliases and submit AI-generated homework in order to stay “enrolled” long enough to collect aid. In 2024, about 25% of community college applicants in California were bots.
PEAK PERFORMANCE
Google has trained an AI model that aims to decipher patterns and structures in dolphin sounds, with the goal of understanding their meaning and determining whether dolphins have language. Named DolphinGemma, the model has 400 million parameters and is trained on data from the Wild Dolphin Project, a nonprofit that studies and collects data on Atlantic spotted dolphins. The project’s end goal is to build technology that could facilitate two-way interactions between human researchers and dolphins in the ocean.
TALENT RETENTION
AI continues to be a white-hot focus for companies, as does the talent needed to build it. To that end, Google DeepMind makes its employees sign noncompetes that can prevent them from joining a rival for up to 12 months after they stop working at Google, according to Business Insider. Employees continue to get paid during this extended garden leave. Nando de Freitas, a former Google DeepMind director, shared his frustration with the contracts on X: “It’s abuse of power, which does not justify any end.”
HUMANS OF AI
May Habib, CEO and cofounder of $1.9 billion-valued enterprise AI startup Writer, says she isn’t just selling her company’s AI software, which lets some 300 companies like Intuit, Salesforce and Uber build AI apps for specific functions across marketing, HR and sales; she’s “selling a different way of doing things.” The company, featured on the Forbes AI 50 list, is expanding with a new platform for AI “agents,” systems that can carry out specific work autonomously. From pitching clients like Visa on her nascent machine learning-based translation software back in 2016 to now training a family of cost-efficient AI models dubbed Palmyra (named after the ancient Syrian city) for the enterprise world, Habib says the company’s strategy has remained the same: building what its customers want.
DEEP DIVE
In late March, OpenAI added new image generation capabilities to its star product ChatGPT. The update went viral, resulting in a deluge of Studio Ghibli-inspired AI-generated images posted across social media and drawing millions of users to the platform.
But new research from cybersecurity firm Cato Networks has found that ChatGPT can now be tricked into creating a slew of fake documents, including passports, Social Security cards and driver’s licenses. It can also be used to spin up convincing counterfeit checks and receipts. OpenAI spokesperson Taya Christianson said “our goal is to give users as much creative freedom as possible.” Images generated by ChatGPT include C2PA metadata identifying them as AI-generated, and OpenAI takes action against people who violate the company’s usage policies.
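For readers who want to check whether an image carries that kind of provenance tag, here is a minimal Python sketch. It is only a heuristic, not OpenAI's or the C2PA consortium's tooling: it scans a file for the JUMBF container signature and the C2PA manifest-store label that Content Credentials are typically wrapped in, so it can miss metadata stored differently, and it does not validate anything cryptographically. The filename is a placeholder.

# Heuristic check for embedded C2PA ("Content Credentials") metadata.
# This only looks for the byte signatures of the JUMBF box and the C2PA
# manifest-store label; it does not verify signatures. Proper verification
# would use the official C2PA tooling from the Content Authenticity Initiative.
from pathlib import Path

C2PA_MARKERS = (b"jumb", b"c2pa")  # JUMBF box type and C2PA manifest label

def looks_c2pa_tagged(image_path: str) -> bool:
    """Return True if the file contains byte patterns typical of a C2PA manifest."""
    data = Path(image_path).read_bytes()
    return all(marker in data for marker in C2PA_MARKERS)

if __name__ == "__main__":
    # "chatgpt_image.png" is a hypothetical filename used for illustration.
    print(looks_c2pa_tagged("chatgpt_image.png"))

Note that such metadata is easy to strip (a simple screenshot discards it), so its presence suggests an image came from an AI tool, but its absence proves nothing.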
Etay Maor, chief security strategist at Cato Networks, who has been studying cyber gangs for the past 20 years, said these forged documents are typically sold on the dark web and have historically been difficult to obtain. But thanks to AI tools like ChatGPT, creating realistic fake documents has become orders of magnitude easier and faster. Documents like passports and driver’s licenses are key to verifying a person’s identity, and manipulated IDs open the floodgates for criminals to commit financial, insurance and medical fraud. The implications for misuse are wide-ranging, from gaining access to bank accounts to prescription abuse, Maor said. “Not just somebody who's a professional criminal, anybody can do this. And that's what's super troubling about this,” he said. In a matter of seconds, he was also able to prompt ChatGPT to create a fake passport for a person who somewhat resembled me.
The use of AI by cybercriminals isn’t new. ChatGPT and other AI tools have been used to create malware code, write phishing emails and supercharge cyberattacks. And it’s not just text-generating AI: tools that work in other mediums, like voice, images and video, have added extra layers that help cybercriminals carry out complex fraud.
“All these different elements that build trust— style of a person, their visuals, their voice, their official credentials—all these building blocks for trust are disappearing,” Maor said.
WEEKLY DEMO
A startup called InTouch uses AI to call your parents or grandparents to check in on them and have a conversation if you don’t have the time, 404 Media reported. The AI can be prompted to discuss certain topics and ask questions about them. After the call is over, the person who set up the call receives an AI-generated summary of the conversation and notes about the relative’s mood. “The idea of having an AI call your lonely relative because you can’t or don’t feel like it is dystopian, insulting, and especially non-human, even more so than other AI-based creations,” Joseph Cox writes.
MODEL BEHAVIOR
Education Secretary Linda McMahon repeatedly confused AI (artificial intelligence) with A1 (the steak sauce brand) while giving a speech at the ASU+GSV Summit in San Diego. The sauce brand seized the moment, sharing an image on Instagram: “You heard her. Every school should have access to A1.”