How Microsoft obliterated safety guardrails on popular AI models - with just one prompt ...
Chinese artificial intelligence start-up DeepSeek has conducted internal evaluations of the "frontier risks" of its AI models, according to a person familiar with the matter. The development, not ...
The "Petri" tool deploys AI agents to evaluate frontier models. AI's ability to discern harm is still highly imperfect. Early tests showed Claude Sonnet 4.5 and GPT-5 to be safest. Anthropic has ...
AI isn't getting smarter; it's getting more power-hungry - and expensive ...
Artificial intelligence security lab startup Irregular announced today that it has raised $80 million in new funding to build its defensive systems, testing infrastructure, and security tools to help ...
Amazon Web Services Inc. envisions a world in which billions of AI agents will be working together. That will take a significant advance in frontier model reasoning, and the company made several major ...
On Monday, Anthropic launched a new frontier model called Claude Sonnet 4.5, which it claims offers state-of-the-art performance on coding benchmarks. The company says Claude Sonnet 4.5 is capable of ...
On Wednesday, Anthropic released Claude Haiku 4.5, a small AI language model that reportedly delivers performance similar to what its frontier model Claude Sonnet 4 achieved five months ago but at one ...
At the AI Everything 2026 event in Cairo, Egypt, IBM’s chief scientist Ruchir Puri shared that data chaos, unrealistic ...