•Over 70 civil rights and privacy organizations have warned Meta CEO Mark Zuckerberg against integrating facial recognition into the company's smart glasses.
•The coalition argues that facial recognition on wearables would empower stalkers, predators, and other bad actors, insisting the feature be abandoned entirely because its inherent risks cannot be mitigated by safeguards.
•The groups also demand that Meta disclose any past misuse of its wearables for harassment or violence and reveal any discussions with federal law enforcement regarding their use.
•OpenAI CEO Sam Altman's home was attacked with a Molotov cocktail, prompting a public statement.
•Altman attributes the incident partly to 'incendiary articles' and general AI anxiety, stressing the dangerous power of words and narratives.
•He outlined core beliefs: AI must be for universal prosperity, requires urgent societal safety measures (beyond model alignment), and demands democratization of power.
•Altman also shared personal reflections, expressing pride in resisting attempts at unilateral control (e.g., by Elon Musk) but regret over his past conflict-aversion and mistakes involving the previous board.
•OpenAI has introduced the Model Spec, a formal, public framework defining how their AI models *should* behave.
•The Spec covers how models follow instructions, resolve conflicts, respect user freedom, and maintain safety across diverse queries.
•It serves as a public target for intended model behavior, not a claim of current perfection, guiding training, evaluation, and improvement.
•The initiative aims for democratized access and understanding of AI, allowing users, developers, and policymakers to inspect and debate AI's foundational rules.
•An Anthropic Claude Max subscriber reported over $180 in erroneous 'Extra Usage' charges despite no corresponding activity on the account.
•Similar billing discrepancies and incorrect usage readings have been reported by other users across GitHub and Reddit.
•Anthropic's 'Fin AI Agent' support system proved unhelpful for the specific billing issue, directing users to irrelevant refund flows.
•The user has been waiting over a month for a human response from Anthropic support, highlighting a significant customer service gap.
•The situation raises concerns about the reliability of AI-only support systems for complex or critical customer issues, especially for an AI-first company.
•OpenAI has launched a new Safety Bug Bounty program dedicated to identifying AI abuse and safety risks.
•This program complements their existing Security Bug Bounty by accepting reports of non-traditional vulnerabilities that could cause real-world harm.
•Key focus areas include agentic risks (such as prompt injection and data exfiltration), exposure of OpenAI proprietary information, and issues related to account and platform integrity.
•It's a call for the global security and safety research community to help secure rapidly evolving AI systems.