[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
An open-source Python toolbox for backdoor attacks and defenses.
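As a rough illustration of what a backdoor attack looks like in code, here is a minimal sketch of a BadNets-style patch trigger in generic PyTorch (not the toolbox's actual API; all names are hypothetical):

```python
import torch

def apply_patch_trigger(images, patch_value=1.0, patch_size=3):
    """BadNets-style trigger: stamp a small solid patch in one corner.

    images: float tensor of shape (N, C, H, W) with values in [0, 1].
    """
    poisoned = images.clone()
    poisoned[:, :, -patch_size:, -patch_size:] = patch_value
    return poisoned

# Poisoning a fraction of the training set and relabeling it with the
# attacker's target class is the classic backdoor recipe.
x = torch.rand(8, 3, 32, 32)
x_poisoned = apply_patch_trigger(x)
target_labels = torch.full((8,), 0)  # attacker-chosen target class
```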
Open-source framework for uncertainty quantification in deep learning models, built on PyTorch 🌱
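For a flavor of uncertainty estimation in PyTorch, here is a minimal Monte Carlo dropout sketch (a generic technique, not this framework's API; the model below is a stand-in):

```python
import torch
import torch.nn as nn

# Hypothetical small classifier; MC dropout works with any model
# that contains dropout layers.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 3)
)

def mc_dropout_predict(model, x, n_samples=30):
    """Keep dropout active at test time and average the softmax outputs;
    the spread across samples is a simple uncertainty estimate."""
    model.train()  # enables dropout at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(0), probs.std(0)

mean_probs, uncertainty = mc_dropout_predict(model, torch.randn(5, 10))
```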
[ICML 2022 Long Talk] Official PyTorch implementation of "To Smooth or Not? When Label Smoothing Meets Noisy Labels"
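The mechanism the paper studies, label smoothing, can be sketched with PyTorch's built-in support (a generic example, independent of the repo's code):

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 10)           # batch of 4, 10 classes
targets = torch.tensor([1, 3, 0, 7])

# Label smoothing: each target keeps 1 - eps of the probability mass and
# the remaining eps is spread uniformly over all classes, which softens
# the one-hot targets the model is trained against.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
loss = criterion(logits, targets)
```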
Neural Network Verification Software Tool
A project to add scalable, state-of-the-art out-of-distribution detection (open-set recognition) support to your project by changing two lines of code! Perform efficient inference (i.e., no increase in inference time) and detection without a drop in classification accuracy, hyperparameter tuning, or collecting additional data.
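The repo's own two-line API is not reproduced here, but the flavor of post-hoc OOD scoring can be sketched with the classic maximum-softmax-probability baseline (an assumed illustration, not this project's method):

```python
import torch

def msp_ood_score(logits):
    """Maximum softmax probability baseline (Hendrycks & Gimpel, 2017):
    a low maximum class probability suggests an out-of-distribution input.
    Returns a score where *higher* means more likely OOD."""
    probs = torch.softmax(logits, dim=-1)
    return 1.0 - probs.max(dim=-1).values

logits = torch.randn(16, 100)   # e.g. outputs of a trained classifier
scores = msp_ood_score(logits)
is_ood = scores > 0.95          # threshold chosen on held-out validation data
```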
[ICCV2021 Oral] Fooling LiDAR by Attacking GPS Trajectory
Papers and online resources related to machine learning fairness
PyTorch package to train and audit ML models for Individual Fairness
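One common reading of individual fairness is a Lipschitz-style condition: similar individuals should receive similar predictions. A minimal audit sketch under that reading (generic PyTorch, not this package's API):

```python
import torch
import torch.nn as nn

def fairness_ratio(model, x1, x2):
    """Individual-fairness audit sketch: the change in predictions
    ||f(x1) - f(x2)|| should be small whenever the individuals are
    close, i.e. ||x1 - x2|| is small. Large ratios flag potential
    individual-fairness violations."""
    with torch.no_grad():
        out_gap = (model(x1) - model(x2)).norm(dim=-1)
    in_gap = (x1 - x2).norm(dim=-1).clamp_min(1e-8)
    return out_gap / in_gap

model = nn.Linear(5, 2)                     # hypothetical scoring model
x = torch.randn(10, 5)
x_similar = x + 0.01 * torch.randn(10, 5)   # near-identical individuals
ratios = fairness_ratio(model, x, x_similar)
```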
A project to improve out-of-distribution detection (open-set recognition) and uncertainty estimation by changing a few lines of code in your project! Perform efficient inference (i.e., no increase in inference time) without repeated model training, hyperparameter tuning, or collecting additional data.
SyReNN: Symbolic Representations for Neural Networks
Privacy-Preserving Machine Learning (PPML) Tutorial
Framework for Adversarial Malware Evaluation.
A list of research papers of explainable machine learning.
A tool for comparing the predictions of any text classifiers.
Trustworthy AI method based on Dempster-Shafer theory - application to fetal brain 3D T2w MRI segmentation
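At the core of such methods is Dempster's rule of combination, which fuses evidence from multiple sources while discarding conflicting mass. A minimal, self-contained sketch (the MRI segmentation application is only hinted at by the example labels):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the
    same frame of discernment. Masses are dicts mapping frozensets of
    hypotheses to belief mass; mass assigned to conflicting (empty-
    intersection) pairs is renormalized away."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

# Two sources expressing belief over {tissue, background}
m1 = {frozenset({"tissue"}): 0.7, frozenset({"tissue", "background"}): 0.3}
m2 = {frozenset({"tissue"}): 0.6, frozenset({"background"}): 0.4}
fused = dempster_combine(m1, m2)  # {tissue}: ~0.83, {background}: ~0.17
```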
[Findings of EMNLP 2022] Holistic Sentence Embeddings for Better Out-of-Distribution Detection
Morphence: An implementation of a moving target defense against adversarial example attacks, demonstrated on image classification models trained on MNIST and CIFAR-10.
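A minimal sketch of the moving-target idea (names and parameters are hypothetical, not Morphence's actual interface):

```python
import random

class MovingTargetPool:
    """Moving target defense sketch: each query is answered by a model
    drawn at random from a pool of diverse models, and the pool is
    refreshed once a query budget is exhausted, so an attacker cannot
    reliably craft adversarial examples against a fixed target."""

    def __init__(self, make_model, pool_size=5, query_budget=1000):
        self.make_model = make_model       # factory producing a fresh model
        self.pool_size = pool_size
        self.query_budget = query_budget
        self._refresh()

    def _refresh(self):
        self.pool = [self.make_model() for _ in range(self.pool_size)]
        self.queries_left = self.query_budget

    def predict(self, x):
        if self.queries_left <= 0:
            self._refresh()                # rotate to a new pool
        self.queries_left -= 1
        return random.choice(self.pool)(x)
```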
MERLIN is a global, model-agnostic, contrastive explainer for any tabular or text classifier. It provides contrastive explanations of how the behaviour of two machine learning models differs.
"Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning" by Chongyu Fan*, Jiancheng Liu*, Licong Lin*, Jinghan Jia, Ruiqi Zhang, Song Mei, Sijia Liu