HackerOne's A was vulnerable to invisible prompt injection via Unicode characters. This would allow an attacker to suggest higher bounties, valid reports, etc.
Vulnerability
Software
NA
2024-02-13
00:00
LLM01
Prompt Injection
https://hackerone.com/reports/2372363
Prompt Injection in "ask" API with visualization leads to RCE on Vanna AI
Vulnerability
Software
NA
2024-05-30
23:00
LLM01
Prompt Injection
Remote Code Execution
https://nvd.nist.gov/vuln/detail/CVE-2024-5565
A path traversal vulnerability exists in the latest version of gaizhenbiao/chuanhuchatgpt.
Vulnerability
Software
NA
2024-06-24
23:00
LLM01
Prompt Injection
Remote Code Execution
https://nvd.nist.gov/vuln/detail/CVE-2024-5982
A vulnerability in Anything LLM allows for a Denial of Service (DoS) condition due to uncontrolled resource consumption.
Vulnerability
Software
NA
2024-06-24
23:00
LLM03
Model Denial of Service
https://nvd.nist.gov/vuln/detail/cve-2024-5216
Guardrails AI users that consume RAIL documents from external sources are vulnerable to XXE
Vulnerability
Software
NA
2024-07-20
23:00
LLM06
Sensitive Information Disclosure
https://nvd.nist.gov/vuln/detail/CVE-2024-6961
Haystack clients that let their users create and run Pipelines from scratch are vulnerable to RCE
Vulnerability
Software
NA
2024-07-30
23:00
LLM01
Prompt Injection
https://nvd.nist.gov/vuln/detail/CVE-2024-41950
Muah.ai companion site breached to expose users fantasies
Researchers have uncovered two critical vulnerabilities in GitHub Copilot, which allow attackers to bypass ethical safeguards, manipulate model behavior, and even hijack access to premium AI resources like OpenAI’s GPT-o1.