Medium CVSS: 5.9

CVE-2026-34760

vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before version 0.18.0, Librosa defaults to using numpy.mean for mono downmixing (to_mono), while the international standard ITU-R BS.775-4 specifies a weighted downmixing algorithm. This discrepancy causes the audio heard by humans (e.g., through headphones or regular speakers) to differ from the audio processed by AI models (i.e., by inference stacks that load audio via Librosa, such as vLLM and Transformers). This issue has been patched in version 0.18.0.
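To illustrate the discrepancy described above, here is a minimal NumPy sketch contrasting an unweighted channel average (the numpy.mean behavior the advisory attributes to Librosa's to_mono) with a weighted downmix using the −3 dB (1/√2) coefficients from ITU-R BS.775. The 5.1 channel ordering and the mono-sum formulation are illustrative assumptions, not taken from the advisory:

```python
import numpy as np

def mean_downmix(audio):
    """Unweighted downmix: average all channels equally,
    as numpy.mean over the channel axis does."""
    return np.mean(audio, axis=0)

def weighted_downmix_51(audio):
    """ITU-R BS.775-style weighted downmix (sketch).

    Assumed channel order: L, R, C, LFE, Ls, Rs.
    Center and surrounds are attenuated by -3 dB (1/sqrt(2));
    the LFE channel is omitted, per common downmix practice.
    """
    L, R, C, LFE, Ls, Rs = audio
    a = 1.0 / np.sqrt(2)  # -3 dB weight
    return L + R + a * C + a * (Ls + Rs)

# A center-channel-only test signal exposes the difference:
# the unweighted average dilutes the center by 1/6, while the
# weighted downmix keeps it at -3 dB.
audio = np.zeros((6, 4))
audio[2] = 1.0  # center channel active, all others silent
print(mean_downmix(audio)[0])        # 1/6 ≈ 0.1667
print(weighted_downmix_51(audio)[0]) # 1/sqrt(2) ≈ 0.7071
```

A model consuming the unweighted mix therefore "hears" a much quieter center channel (typically carrying speech) than a human listening through a standards-compliant playback chain, which is the inconsistency the advisory describes.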
Vendor
-
Product
-
CWE
CWE-20
Publication Date
2026-04-02 20:16:25
Last Updated
2026-04-03 16:10:23
Source Identifier
security-advisories@github.com
KEV Date Added
-

Categories

References