Protect AI warns of a dozen critical vulnerabilities in open source AI/ML tools reported via its bug bounty program.
June 13, 2024 By Ionut Arghire
A dozen critical vulnerabilities have been discovered in various open source AI/ML tools over the past few months, a new Protect AI report shows.
The AI security firm warns of a total of 32 security defects reported as part of its Huntr AI bug bounty program, including critical-severity issues that could lead to information disclosure, access to restricted resources, privilege escalation, and complete server takeover.
The most severe of these bugs is CVE-2024-22476 (CVSS score of 10), an improper input validation flaw in Intel's Neural Compressor software that could allow remote attackers to escalate privileges. Intel addressed the flaw in mid-May.
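The report does not spell out the exploit path, but improper input validation (CWE-20) generally means attacker-controlled data reaches a sensitive operation unchecked. As a generic illustration only, not Intel's actual code, the sketch below contrasts an unvalidated task-name handler with one that enforces an allowlist before the value is used; the `tool` command and task names are hypothetical.

```python
ALLOWED_TASKS = {"quantize", "benchmark", "tune"}

def run_task_unsafe(task: str) -> str:
    # Vulnerable pattern: user input flows into a shell command string
    # unvalidated, so a value like "quantize; rm -rf /" would smuggle in
    # an attacker-controlled command. (Illustration only; never execute.)
    return f"sh -c 'tool {task}'"

def run_task_safe(task: str) -> list[str]:
    # Fixed pattern: validate against a strict allowlist before use,
    # and build an argv list so no shell ever interprets the input.
    if task not in ALLOWED_TASKS:
        raise ValueError(f"rejected task name: {task!r}")
    return ["tool", task]
```

The key difference is that the safe variant rejects anything outside a known-good set rather than trying to strip out "bad" characters, which is the usual remediation for this bug class.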
A critical-severity issue in ChuanhuChatGPT (CVE-2024-3234) that allowed attackers to steal sensitive files existed because the application used an outdated, vulnerable version of the Gradio open source Python package.
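Arbitrary-file-read bugs of this kind are typically path traversals: a file-serving endpoint joins a user-supplied name onto a base directory without checking where the resolved path lands. The sketch below is a generic illustration of that bug class and its fix, not the actual ChuanhuChatGPT or Gradio code; the serving root is a made-up path.

```python
from pathlib import Path

# Hypothetical directory the application intends to serve files from.
SERVE_ROOT = Path("/srv/app/files").resolve()

def resolve_download_unsafe(name: str) -> Path:
    # Vulnerable pattern: "../../../etc/passwd" escapes the intended
    # directory, letting a client read arbitrary files on the server.
    return SERVE_ROOT / name

def resolve_download_safe(name: str) -> Path:
    candidate = (SERVE_ROOT / name).resolve()
    # Fixed pattern: canonicalize first, then reject any path that
    # resolves outside the serving root.
    if not candidate.is_relative_to(SERVE_ROOT):
        raise PermissionError(f"path escapes serving root: {name!r}")
    return candidate
```

Checking the canonicalized path (after `..` segments and symlinks are collapsed) is what closes the hole; comparing the raw string against the root prefix is not enough.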