Tag: #Machine Learning

New AI Jailbreak Method ‘Bad Likert Judge’ Boosts Attack Success Rates...

Cybersecurity researchers have shed light on a new jailbreak technique that could be used to bypass a large language model's (LLM) safety guardrails...

AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of...

Cybersecurity researchers have found that it's possible to use large language models (LLMs) to generate new variants of malicious JavaScript code at scale in...

Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks

Cybersecurity researchers have disclosed multiple security flaws impacting open-source machine learning (ML) tools and frameworks such as MLflow, H2O, PyTorch, and MLeap that could...

How AI Is Transforming IAM and Identity Security

In recent years, artificial intelligence (AI) has begun revolutionizing Identity and Access Management (IAM), reshaping how cybersecurity is approached in this crucial field. Leveraging AI...

Researchers Warn of Privilege Escalation Risks in Google’s Vertex AI ML...

Cybersecurity researchers have disclosed two security flaws in Google's Vertex AI machine learning (ML) platform that, if successfully exploited, could allow malicious actors to escalate...

Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation

Cybersecurity researchers have uncovered nearly two dozen security flaws spanning 15 different machine learning (ML)-related open-source projects. These comprise vulnerabilities discovered both on the...

Google’s AI Tool Big Sleep Finds Zero-Day Vulnerability in SQLite Database...

Google said it discovered a zero-day vulnerability in the SQLite open-source database engine using its large language model (LLM)-assisted framework called Big Sleep...

Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

A little over three dozen security vulnerabilities have been disclosed in various open-source artificial intelligence (AI) and machine learning (ML) models, some of which...

Researchers Reveal ‘Deceptive Delight’ Method to Jailbreak AI Models

Cybersecurity researchers have shed light on a new adversarial technique that could be used to jailbreak large language models (LLMs) during the course of...

Why Traditional Security Solutions Fall Short

In recent years, the number and sophistication of zero-day vulnerabilities have surged, posing a critical threat to organizations of all sizes. A zero-day vulnerability...