Microsoft Azure documentation
Security & Compliance
In addition to the enterprise-grade features available in Microsoft Azure services, the following security measures and requirements are enforced to safeguard the deployment and use of open models on Azure:
Model Eligibility Requirements
Only models that meet strict security criteria are included in the Hugging Face collection on Microsoft Foundry and Azure Machine Learning:
Public availability: Models must be public on the Hugging Face Hub, even if gated; private models are currently not eligible.
trust_remote_code and custom_code not allowed: Models that require trust_remote_code=True are disallowed unless they are explicitly verified by Hugging Face or come from a trusted/verified organization, e.g. microsoft.
Secure format: Model weights must be uploaded in the Safetensors format to eliminate the risks associated with pickle-based formats.
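As an illustrative sketch (not an official eligibility checker), the Safetensors requirement can be approximated by inspecting a repository's file list and flagging pickle-based weight files; the helper name and suffix lists below are assumptions for the example:

```python
# Hypothetical helper: flag model weight files that are not in the
# Safetensors format (e.g. pickle-based .bin/.pt/.pkl checkpoints),
# which would make a model ineligible under the criteria above.
WEIGHT_SUFFIXES = (".safetensors", ".bin", ".pt", ".pth", ".pkl", ".ckpt")
PICKLE_BASED = (".bin", ".pt", ".pth", ".pkl", ".ckpt")

def non_safetensors_weights(filenames):
    """Return weight files that are not Safetensors."""
    weights = [f for f in filenames if f.endswith(WEIGHT_SUFFIXES)]
    return [f for f in weights if f.endswith(PICKLE_BASED)]

# Example file list, as might be returned by huggingface_hub's
# list_repo_files for a model repository:
files = ["config.json", "model.safetensors", "pytorch_model.bin"]
print(non_safetensors_weights(files))  # -> ['pytorch_model.bin']
```

In practice the file list would come from the Hugging Face Hub API rather than being hard-coded.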
Mandatory Security Scanning
All models made available via the Hugging Face collection on Microsoft Foundry and Azure Machine Learning undergo a robust set of security scans, including ClamAV malware scanning and third-party scanners such as Protect AI and JFrog solutions.
These checks help identify embedded malware or harmful binaries, unsafe deserialization, unintended external connections, and security-sensitive content in model artifacts before they are imported into a customer's tenancy.
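To illustrate the unsafe-deserialization risk these scans target (this is a minimal sketch, not how the scanners named above are implemented), Python's standard pickletools module can list the opcodes in a pickle stream; opcodes such as GLOBAL, STACK_GLOBAL, and REDUCE are the ones that let a malicious pickle import modules and call functions on load:

```python
import pickle
import pickletools

# Opcodes capable of triggering imports or calls during unpickling.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def flag_pickle_opcodes(data: bytes) -> set:
    """Return any code-execution-capable opcodes found in a pickle stream."""
    return {op.name for op, _, _ in pickletools.genops(data)} & SUSPICIOUS_OPCODES

# A pickle of plain data carries no import/call opcodes...
safe = pickle.dumps({"weights": [1.0, 2.0]})

# ...while a pickle that references a class must import it on load.
class Payload:
    pass

risky = pickle.dumps(Payload())

print(flag_pickle_opcodes(safe))   # -> set()
print(flag_pickle_opcodes(risky))  # contains STACK_GLOBAL
```

This is why the Safetensors format, which stores only tensors and metadata with no executable opcodes, is required for the collection.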
For more details on Hugging Face Hub’s security practices and tooling, refer to Hugging Face Hub Security.
Network Isolation and Compliance
For enhanced protection and compliance, model hosting and serving can be configured to run in isolated compute environments on Microsoft Foundry and Azure Machine Learning, aligned with regulatory or internal policy requirements. Microsoft Foundry and Azure Machine Learning come with enterprise-grade audit, logging, and access control frameworks that ensure full traceability and governance.
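As a sketch of what such a configuration can look like, Azure Machine Learning's managed virtual network feature can be enabled in a workspace definition (field names follow the Azure ML CLI v2 workspace schema; the workspace name and region below are placeholders, so verify the exact fields against current Azure documentation):

```yaml
# Illustrative Azure ML CLI v2 workspace definition with managed
# network isolation; "my-secure-workspace" and "eastus" are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/workspace.schema.json
name: my-secure-workspace
location: eastus
managed_network:
  isolation_mode: allow_only_approved_outbound
```

With this isolation mode, outbound traffic from managed compute is blocked except to explicitly approved destinations.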