CVE-2025-62164

CVSS: 8.8 (High)

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.10.2 up to but not including 0.11.1, a memory corruption vulnerability that could lead to a crash (denial of service) and potentially remote code execution (RCE) exists in the Completions API endpoint. When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation. Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default. As a result, maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the call to to_dense(). This memory corruption can crash vLLM and potentially lead to code execution on the server hosting vLLM. This issue has been patched in version 0.11.1.
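The description points at two weak links: deserializing attacker-controlled data with torch.load() and densifying an unvalidated sparse tensor. As a rough illustration (this is not the actual vLLM 0.11.1 patch, and the helper name load_prompt_embedding is hypothetical), a server could combine PyTorch's weights_only=True loading with the torch.sparse.check_sparse_tensor_invariants context manager so malformed sparse indices are rejected before to_dense() runs:

import io
import torch

def load_prompt_embedding(untrusted_bytes: bytes) -> torch.Tensor:
    # Deserialize a single prompt-embedding tensor from untrusted client data.
    buffer = io.BytesIO(untrusted_bytes)

    # Re-enable sparse tensor invariant checking (off by default) so malformed
    # indices are rejected when the tensor is constructed instead of corrupting
    # memory later in to_dense().
    with torch.sparse.check_sparse_tensor_invariants():
        # weights_only=True limits unpickling to tensors and plain containers,
        # blocking arbitrary-object deserialization.
        tensor = torch.load(buffer, weights_only=True)

        if not isinstance(tensor, torch.Tensor):
            raise ValueError("prompt embedding must be a single tensor")

        # Only densify sparse layouts; dense tensors pass through unchanged.
        if tensor.layout is not torch.strided:
            tensor = tensor.to_dense()

    return tensor

Whether the invariant-check flag also covers tensors rebuilt inside torch.load() varies across PyTorch releases, so the authoritative remediation remains upgrading to vLLM 0.11.1.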
Vendor
vLLM
Product
vLLM
CWE
CWE-20
Published
2025-11-21 02:15:43
Updated
2025-12-04 17:14:20
Source Identifier
security-advisories@github.com
KEV Date Added
-
