•OpenAI has launched a new Safety Bug Bounty program dedicated to identifying AI abuse and safety risks.
•The program complements their existing Security Bug Bounty by accepting non-traditional vulnerabilities that could cause real-world harm.
•Key focus areas include agentic risks (such as prompt injection and data exfiltration), exposure of OpenAI proprietary information, and issues affecting account and platform integrity.
•It is a call for the global security and safety research community to help secure rapidly evolving AI systems.