Come Hunt for ChatGPT Vulnerabilities, OpenAI Says: Top Bounty Is US$20,000
Now, you can make money by finding vulnerabilities in ChatGPT.
Early this morning, OpenAI announced the launch of a bug bounty program:
Report ChatGPT vulnerabilities, and you can receive a cash reward of up to US$20,000.
Specifically, OpenAI is partnering with the bug bounty platform Bugcrowd to collect vulnerabilities that people discover while using its products.
Discover and report vulnerabilities through this platform to receive cash rewards:
We will give cash rewards based on the severity and scope of the vulnerability: low-severity findings earn $200, while the bounty for exceptional discoveries is capped at $20,000. We value your contributions and are committed to making the results of your efforts public.
Reported vulnerabilities will be graded using Bugcrowd's vulnerability rating taxonomy.
OpenAI acknowledges that vulnerabilities are inevitable:
OpenAI’s mission is to create artificial intelligence systems that benefit everyone. To this end, we have invested heavily in research and engineering to ensure our AI systems are safe and reliable. However, like any complex technology, we understand that bugs and defects may occur.
Matthew Knight, OpenAI's head of security, explained that finding these vulnerabilities requires the community's help:
This initiative is an essential part of our commitment to developing safe and advanced artificial intelligence. We need your help as we create technology and services that are secure, reliable, and trustworthy.
OpenAI also promised to fix reported vulnerabilities as quickly as possible and to credit the people who submit them.
Anyone who has spent time with ChatGPT knows that its performance is not always satisfactory.
Ask a slightly harder question and ChatGPT can get confused, sometimes giving ridiculous answers.
Seen that way, finding bugs looks all too easy. Isn't this a chance to make a lot of money?
Don't celebrate too soon.
In the announcement, OpenAI states that, because they are not straightforward to fix, problems with the model itself fall outside the scope of this program.
Getting the model to say harmful things, do harmful things, generate malicious code, or otherwise bypass its safety measures, as well as making it hallucinate, all count as problems with the model itself.
The same goes for wrong or inaccurate answers.
What OpenAI wants to collect are vulnerabilities in ChatGPT itself, mainly issues with accounts, authentication, subscriptions, payments, memberships, and plug-ins.
For ordinary users, finding vulnerabilities in these areas is fairly difficult.
The issues mentioned above, such as bypassing safety restrictions, are nevertheless of great concern to practitioners.
Artificial intelligence programs have built-in safety restrictions that prevent them from making dangerous or offensive comments.
But with certain tricks, these restrictions can be bypassed or broken.
Practitioners vividly call this phenomenon a "jailbreak."
For example, before asking ChatGPT how to do something illegal, a user might first tell it to "play" the role of a villain.
In the AI field, there are people who specialize in studying jailbreaks precisely in order to improve these safety measures.
Although OpenAI is not collecting vulnerabilities in the model itself, that does not mean it is ignoring the issue.
The new version of ChatGPT, powered by GPT-4, already has stricter restrictions.
As soon as OpenAI's bounty program was announced, it drew a crowd of onlookers.
Sure enough, some netizens immediately raised their hands and claimed to have found a bug:
Reporting in: you can only ask ChatGPT 25 questions every 3 hours. That's a bug!
Other netizens joked: shouldn't ChatGPT just be asked to debug itself?