Adversarial Robustness Toolbox (ART) provides tools that enable developers and researchers to evaluate, defend, and verify Machine Learning models and applications against adversarial threats. IBM moved ART to LF AI in July 2020.
ART supports all popular machine learning frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio, video, etc.) and machine learning tasks (classification, object detection, generation, certification, etc.).
29 Defense Modules
At a high level, ART's defenses fall into 5 module types: Preprocessor, Postprocessor, Trainer, Transformer, and Detector. Detailed information about the supported defense modules can be found in the documentation.
Please visit us on GitHub, where our development happens. We invite you to join our community, both as a user of ART and as a contributor to its development. We look forward to your contributions!