DEEPSEEK
OpenAI Launches Teen Safety Blueprint for AI Development
OpenAI introduces the Teen Safety Blueprint, a framework for developing AI tools with safeguards to protect and empower teens online, emphasizing responsible AI use.
AI Development Framework Aims for Greater Transparency and Safety
Anthropic proposes a framework for AI transparency, focusing on safety and accountability. This initiative aims to enhance public safety and responsible AI development.
Anthropic Strengthens AI Safeguards for Claude
Anthropic enhances its AI model Claude's safety and reliability with robust safeguards, ensuring beneficial outcomes while preventing misuse and harmful impacts.
Character.AI Implements New Safety Measures for Teen Users
Character.AI announces significant changes to enhance the safety of its platform for users under 18, including removing open-ended chat and introducing age assurance tools.
OpenAI Enhances GPT-5 for Sensitive Conversations with New Safety Measures
OpenAI has released an addendum to the GPT-5 system card, showcasing improvements in handling sensitive conversations with enhanced safety benchmarks.
Ensuring Safety: A Comprehensive Framework for AI Voice Agents
Explore the safety framework for AI voice agents, focusing on ethical behavior, compliance, and risk mitigation, as detailed by ElevenLabs.
Microsoft's UX Strategy Targets Cybercrime with Intuitive Design
Microsoft is leveraging smarter UX design to combat the rise in online scams and cybercrime, integrating security into product development for safer user experiences.
NVIDIA Introduces Safety Measures for Agentic AI Systems
NVIDIA has launched a comprehensive safety recipe to enhance the security and compliance of agentic AI systems, addressing risks such as prompt injection and data leakage.
AI Tool Enhances Patient Safety by Analyzing Nurses' Notes
An AI-powered tool, CONCERN EWS, developed by researchers, reduces patient risk and hospital stays by analyzing nurses' shift notes for early health deterioration signs.
NVIDIA NeMo Guardrails Enhance LLM Streaming for Safer AI Interactions
NVIDIA introduces NeMo Guardrails to enhance large language model (LLM) streaming, improving latency and safety for generative AI applications through real-time, token-by-token output validation.