CVE-2025-59425

High CVSS: 7.5


vLLM is an inference and serving engine for large language models (LLMs). Before version 0.11.0rc2, vLLM's API key support validated keys using a string comparison vulnerable to a timing attack: the comparison takes longer the more leading characters of the provided API key are correct. By statistically analyzing response times across many attempts, an attacker could detect when the next character in the key sequence has been guessed correctly and recover the key character by character. Deployments relying on vLLM's built-in API key validation are therefore vulnerable to authentication bypass. Version 0.11.0rc2 fixes the issue.
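The class of flaw can be illustrated with a minimal sketch (hypothetical function names, not vLLM's actual code, assuming the standard mitigation of a constant-time comparison via Python's `hmac.compare_digest`):

```python
import hmac

def check_key_timing_unsafe(provided: str, expected: str) -> bool:
    # Naive equality: Python's == short-circuits at the first
    # mismatching character, so the comparison's duration leaks
    # how many leading characters the attacker guessed correctly.
    return provided == expected

def check_key_constant_time(provided: str, expected: str) -> bool:
    # hmac.compare_digest takes time independent of where the
    # inputs differ, defeating the character-by-character probe.
    return hmac.compare_digest(provided.encode(), expected.encode())
```

Both functions return the same boolean result; only their timing behavior differs, which is exactly what a remote attacker measures in this attack.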
Vendor
vLLM
Product
vLLM
CWE
CWE-385 (Covert Timing Channel)
Published
2025-10-07 14:15:38
Updated
2025-10-16 18:02:09
Source Identifier
security-advisories@github.com
KEV Date Added
-
