Pinned Repositories
- lancopku/Embedding-Poisoning: Code for the paper "Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models" (NAACL-HLT 2021)
- lancopku/SOS: Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021)
- lancopku/RAP: Code for the paper "RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models" (EMNLP 2021)
- Federated_Learning_Experiments: A research platform for federated learning experiments (Python)
- lancopku/agent-backdoor-attacks: Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" (NeurIPS 2024)
- weak-to-strong-deception: Code & data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" (Python)