•OpenAI CEO Sam Altman apologized for the company's failure to alert police about a ChatGPT account belonging to a mass shooting suspect.
•The suspect, Jesse Van Rootselaar, had his account banned by OpenAI in June (prior to the January shooting) for 'problematic usage' but was not reported.
•OpenAI initially stated the usage did not meet its internal threshold for a 'credible or imminent plan for serious physical harm,' sparking debate over AI companies' responsibilities in public safety.
•YouTube is expanding its AI-powered likeness detection technology to identify and moderate content featuring celebrities.
•This move aims to protect celebrity rights against unauthorized use of their image and voice, including deepfakes and other AI-generated media.
•The expansion signifies a broader industry shift towards sophisticated AI content moderation, posing new considerations for creators, platforms, and ethical AI development.
•A new report by the Tech Transparency Project (TTP) alleges that Apple and Google are not only hosting but actively promoting 'nudify' deepfake apps on their App Store and Google Play, three months af...
•Despite explicit policies against sexually explicit content, many of these AI-powered apps, which can create non-consensual deepfake nudity, are rated 'E' for Everyone and have collectively generated ...
•The ongoing availability and promotion of these apps highlight significant challenges in content moderation, AI ethics, and platform accountability, drawing increased scrutiny from governments and reg...