•A single prompt injection, dubbed 'Comment and Control,' successfully extracted API keys from Anthropic's Claude Code Security Review, Google's Gemini CLI Action, and GitHub's Copilot Agent.
•The attack exploited a vulnerability in GitHub Actions' `pull_request_target` workflows, commonly used by AI agents for secret access, via a malicious pull request title.
•Anthropic's own system card for Claude Opus 4.7 had previously stated that its Code Security Review feature was 'not hardened against prompt injection,' validating the researcher's findings.
•OpenAI has launched a new Safety Bug Bounty program dedicated to identifying AI abuse and safety risks.
•This program complements their existing Security Bug Bounty by accepting non-traditional vulnerabilities that pose real-world harm.
•Key focus areas include agentic risks (like prompt injection, data exfiltration), exposure of OpenAI proprietary information, and issues related to account and platform integrity.
•It's a call for the global security and safety research community to help secure rapidly evolving AI systems.
•A single prompt injection, dubbed 'Comment and Control,' successfully extracted API keys from Anthropic's Claude Code Security Review, Google's Gemini CLI Action, and GitHub's Copilot Agent.
•The attack exploited a vulnerability in GitHub Actions' `pull_request_target` workflows, commonly used by AI agents for secret access, via a malicious pull request title.
•Anthropic's own system card for Claude Opus 4.7 had previously stated that its Code Security Review feature was 'not hardened against prompt injection,' validating the researcher's findings.
•OpenAI has launched a new Safety Bug Bounty program dedicated to identifying AI abuse and safety risks.
•This program complements their existing Security Bug Bounty by accepting non-traditional vulnerabilities that pose real-world harm.
•Key focus areas include agentic risks (like prompt injection, data exfiltration), exposure of OpenAI proprietary information, and issues related to account and platform integrity.
•It's a call for the global security and safety research community to help secure rapidly evolving AI systems.