OpenAI to offer up to Rs 16 lakh for finding vulnerabilities in AI Systems (See Details)

OpenAI, the company responsible for the popular chatbot ChatGPT, will be providing rewards of up to $20,000 to users who report vulnerabilities found in its artificial intelligence systems.

OpenAI has extended an invitation to users, encouraging them to report any weaknesses, bugs, or security issues they come across while utilizing its AI products.

OpenAI, which developed the artificial intelligence system ChatGPT with support from Microsoft Corp, has joined forces with Bugcrowd, a crowdsourced cybersecurity platform, to handle submissions and rewards for vulnerability reports.

Roping in members of the public

OpenAI has introduced the “OpenAI Bug Bounty” program, where individuals will be rewarded based on the severity of the vulnerabilities they report.

The program offers rewards that start from $200 per reported vulnerability.

OpenAI welcomes public participation in reviewing specific functionalities of ChatGPT and the framework governing how its systems communicate and share data with third-party applications.

In its announcement, OpenAI invited users to report any vulnerabilities, bugs, or security flaws they identify in its systems.

The company said such findings will play a crucial role in making its technology safer for everyone.

OpenAI added that it appreciates every contribution toward making its systems more secure.

Rewards

OpenAI stated on its website, “Our rewards vary from $200 for low-severity findings to as much as $20,000 for extraordinary discoveries.”

Bug bounty programs are commonly used by technology companies to incentivize programmers and ethical hackers to report bugs in their software systems.

Jailbreaking 

It’s important to note that the bounty program does not include rewards for jailbreaking ChatGPT or intentionally causing it to generate malicious code or text.

Jailbreaking ChatGPT typically involves inputting elaborate scenarios into the system to bypass its safety filters.

Content of model prompts and responses

The program does not encompass rewards for encouraging the chatbot to roleplay as its “evil twin” or eliciting banned responses, such as hate speech or instructions for creating weapons.

Similarly, incorrect or malicious content generated by OpenAI systems is not covered by the bounty program.

OpenAI explicitly stated that issues related to the content of model prompts and responses are out of scope for the bounty program and will not be rewarded unless they have a direct and verifiable security impact on an in-scope service.

Why?

OpenAI acknowledges that model safety issues may not be well-suited for a bug bounty program, as they are not individual, discrete bugs that can be easily addressed.

These issues often require extensive research and a broader approach to effectively tackle them, as stated by OpenAI.

Model Hallucinations

The program does not cover any bugs or issues related to model hallucinations, instances where the AI model treats an unfounded statement as valid and produces a confident response that is not justified by its training data.

OpenAI has stated that these issues, too, are out of scope for the bounty program.

Italy ban

After ChatGPT was banned in Italy due to suspected breaches of privacy rules, regulators in other European countries are scrutinizing generative AI services more closely.

While AI can provide quick answers and be useful in certain contexts, it has also drawn attention for producing inaccurate responses and, in some cases, causing distress to users.
