Abstract: Backdoor attacks threaten federated learning (FL) models, where malicious participants embed hidden triggers into local models during training. These triggers can compromise crucial ...
Abstract: Current state-of-the-art plug-and-play countermeasures for mitigating adversarial examples (i.e., purification and detection) exhibit several fatal limitations, impeding their deployment in ...