
Adversarial Robustness Toolbox: A Python library for ML Security

Adversarial Robustness Toolbox (ART) provides tools that enable developers and researchers to evaluate, defend, and verify machine learning models and applications against adversarial threats. IBM moved ART to LF AI in July 2020.

Features

Extended Support

ART supports all popular machine learning frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio, video, etc.) and machine learning tasks (classification, object detection, generation, certification, etc.).
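For illustration, the sketch below wraps an ordinary scikit-learn classifier in an ART estimator so that the rest of the toolbox can work with it through a framework-agnostic interface. It is a minimal sketch rather than a definitive recipe; the module path art.estimators.classification.SklearnClassifier follows the ART documentation, and the data set and parameter values are purely illustrative.

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from art.estimators.classification import SklearnClassifier

    # Train an ordinary scikit-learn model.
    x, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(x, y)

    # Wrap it as an ART estimator; attacks, defenses, and metrics then interact
    # with the model only through this framework-agnostic interface.
    classifier = SklearnClassifier(model=model, clip_values=(x.min(), x.max()))
    predictions = classifier.predict(x)

The same pattern applies to the other supported frameworks, for example the PyTorchClassifier or KerasClassifier wrappers for deep learning models.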

39 Attack Modules

At a high level, ART's attack modules fall into four categories: Evasion, Poisoning, Extraction, and Inference.

Detailed information about the supported attack modules can be found here.
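As an illustration of the Evasion category, the sketch below runs the Fast Gradient Method against a wrapped scikit-learn classifier. The attack class and its generate() method follow the ART documentation; the perturbation budget eps is an arbitrary illustrative value.

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from art.estimators.classification import SklearnClassifier
    from art.attacks.evasion import FastGradientMethod

    # Wrap a trained scikit-learn model in an ART estimator.
    x, y = load_iris(return_X_y=True)
    classifier = SklearnClassifier(model=LogisticRegression(max_iter=1000).fit(x, y))

    # Evasion attack: craft adversarial examples within the budget eps.
    attack = FastGradientMethod(estimator=classifier, eps=0.2)
    x_adv = attack.generate(x=x)

    # Accuracy typically drops on the adversarial inputs.
    adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y).mean()
    print(f"accuracy on adversarial examples: {adv_acc:.2f}")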

29 Defense Modules

At a high level, ART's defense modules fall into five categories: Preprocessor, Postprocessor, Trainer, Transformer, and Detector. Detailed information about the supported defense modules can be found here.
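As one example from the Preprocessor category, the sketch below applies ART's SpatialSmoothing defense, which removes small, high-frequency perturbations from inputs before they reach the model. The module path follows the ART documentation; the input shape and window size are illustrative.

    import numpy as np
    from art.defences.preprocessor import SpatialSmoothing

    # A batch of 8 random 32x32 RGB "images" standing in for real data.
    x = np.random.rand(8, 32, 32, 3).astype(np.float32)

    # Preprocessor defense: median smoothing applied to each input.
    smoother = SpatialSmoothing(window_size=3)
    x_smoothed, _ = smoother(x)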

Estimators and Metrics

ART supports three robustness metrics, one certification metric, and one verification metric. It also supports multiple estimators; details can be found here.
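One of the robustness metrics, empirical robustness, estimates the average minimal perturbation an attacker needs to change the model's predictions. The sketch below is a minimal use of it, assuming the art.metrics.empirical_robustness function as described in the ART documentation and reusing the scikit-learn wrapper from the earlier sketches.

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from art.estimators.classification import SklearnClassifier
    from art.metrics import empirical_robustness

    x, y = load_iris(return_X_y=True)
    classifier = SklearnClassifier(model=LogisticRegression(max_iter=1000).fit(x, y))

    # Empirical robustness: average minimal perturbation, here crafted with FGSM,
    # needed to flip the classifier's predictions.
    score = empirical_robustness(classifier, x, attack_name="fgsm")
    print(f"empirical robustness: {score:.3f}")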

Getting Started

Learn how to set up the toolbox and find example notebooks in the user guide, along with documentation of the attack, defense, and metric modules and more, here.
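ART itself can be installed from PyPI, for example with pip install adversarial-robustness-toolbox.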

GitHub

Please visit us on GitHub, where our development happens. We invite you to join our community, both as a user of ART and as a contributor to its development. We look forward to your contributions!

Join the Conversation

ART maintains three mailing lists. You are invited to join the one that best meets your interests.

trusted-ai-360-announce: Top-level milestone messages and announcements

trusted-ai-technical-discuss: Technical discussions

trusted-ai-360-tsc: Technical governance discussions