Researchers at ReversingLabs have discovered two malicious machine learning (ML) models available on Hugging Face, the leading hub for sharing AI models and applications. While these models contain ...
Researchers have concocted a new way of manipulating machine learning (ML) models by injecting malicious code into the process of serialization. The method focuses on the "pickling" process used to ...
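The risk stems from how Python's pickle format works: deserializing a pickle stream can invoke arbitrary callables embedded in the file. The minimal sketch below is illustrative only and not taken from any of the reported attacks (the class name and shell command are hypothetical); it shows how an object's `__reduce__` hook causes code to run the moment the data is unpickled.

```python
import os
import pickle


class MaliciousPayload:
    """Illustrative stand-in: pickle records whatever __reduce__ returns,
    and pickle.loads() invokes that callable automatically."""

    def __reduce__(self):
        # A real attack would embed attacker-controlled code here;
        # this harmless command just prints a message.
        return (os.system, ("echo pickle payload executed",))


# Serializing the object bakes the callable into the byte stream...
blob = pickle.dumps(MaliciousPayload())

# ...and deserializing it runs the callable, with no method ever called explicitly.
pickle.loads(blob)
```

Because the payload fires during loading itself, simply opening a poisoned model file is enough; the victim never has to call any of the model's methods.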
Fake Alibaba Labs AI SDKs hosted on PyPI included PyTorch models with infostealer code inside. With support for detecting malicious code inside ML models lacking, expect the technique to spread.
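PyTorch's standard .pt/.pth format is itself a pickle archive, which is why an infostealer can ride along inside what looks like plain model weights. Below is a hedged sketch of a more defensive load, assuming a recent PyTorch release that supports the weights_only option (the file path is hypothetical):

```python
import torch

# Hypothetical path to a model file shipped by an untrusted package.
MODEL_PATH = "suspicious_model.pt"

# torch.load() unpickles the file, so code embedded in it can execute.
# weights_only=True (available in recent PyTorch releases and the default
# from 2.6 onward) restricts unpickling to plain tensors and basic types,
# rejecting pickled callables outright.
try:
    state_dict = torch.load(MODEL_PATH, weights_only=True)
    print("Loaded tensor data without executing pickled code")
except Exception as exc:
    # A failure here is a strong hint the file carries more than weights.
    print(f"Refusing to load {MODEL_PATH}: {exc}")
```

A safe-loading check like this is a mitigation, not a scanner: it refuses to execute embedded code but does not identify what the file was trying to do.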