CVE-2026-27893

CVSS: 8.8 (High)


vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
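The flaw is an instance of CWE-693 (Protection Mechanism Failure): a user-facing security opt-out is honored at the top level but silently overridden when a sub-component is loaded. A minimal, self-contained sketch of that pattern (all function names below are hypothetical stand-ins, not actual vLLM code):

```python
# Hypothetical sketch of the CWE-693 pattern described in the advisory.
# Executing repository-supplied code is what enables remote code execution,
# so the loader below only reports which code path would run.

def load_subcomponent(repo_id: str, trust_remote_code: bool) -> str:
    # Stand-in for a HuggingFace-style loader of a model sub-component.
    if trust_remote_code:
        return f"EXECUTES repo-supplied code from {repo_id}"
    return f"loads only built-in code for {repo_id}"

def vulnerable_load(repo_id: str, user_trust_remote_code: bool) -> str:
    # Vulnerable pattern: the user's explicit setting is ignored
    # because True is hardcoded at the call site.
    return load_subcomponent(repo_id, trust_remote_code=True)

def patched_load(repo_id: str, user_trust_remote_code: bool) -> str:
    # Patched pattern: the user's opt-out is propagated downward.
    return load_subcomponent(repo_id, trust_remote_code=user_trust_remote_code)

# Even with the equivalent of --trust-remote-code=False, the
# vulnerable path would still run code from the model repository.
print(vulnerable_load("attacker/model", user_trust_remote_code=False))
print(patched_load("attacker/model", user_trust_remote_code=False))
```

The general shape of the fix shipped in version 0.18.0 corresponds to the `patched_load` variant: the user's `trust_remote_code` choice is threaded through to every sub-component load rather than being hardcoded.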
Vendor
vLLM
Product
vLLM
CWE
CWE-693 (Protection Mechanism Failure)
Published
2026-03-27 00:16:22
Updated
2026-03-30 18:56:21
Source Identifier
security-advisories@github.com
KEV Date Added
-

