•A single prompt injection, dubbed 'Comment and Control,' successfully extracted API keys from Anthropic's Claude Code Security Review, Google's Gemini CLI Action, and GitHub's Copilot Agent.
•The attack exploited a vulnerability in GitHub Actions' `pull_request_target` workflows, commonly used by AI agents for secret access, via a malicious pull request title.
•Anthropic's own system card for Claude Opus 4.7 had previously stated that its Code Security Review feature was 'not hardened against prompt injection,' validating the researcher's findings.
•A single prompt injection, dubbed 'Comment and Control,' successfully extracted API keys from Anthropic's Claude Code Security Review, Google's Gemini CLI Action, and GitHub's Copilot Agent.
•The attack exploited a vulnerability in GitHub Actions' `pull_request_target` workflows, commonly used by AI agents for secret access, via a malicious pull request title.
•Anthropic's own system card for Claude Opus 4.7 had previously stated that its Code Security Review feature was 'not hardened against prompt injection,' validating the researcher's findings.