
Add note that execution inside TEEs will be added in the future
Signed-off-by: Phillip Rieger <[email protected]>
phillip-rieger committed Feb 8, 2024
1 parent 118de17 commit e582594
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion openfl-tutorials/experimental/CrowdGuard/readme.md
@@ -1,7 +1,7 @@
 # On the Integration of CrowdGuard into OpenFL
 Federated Learning (FL) is a promising approach enabling multiple clients to train Deep Neural Networks (DNNs) collaboratively without sharing their local training data. However, FL is susceptible to backdoor (or targeted poisoning) attacks. These attacks are initiated by malicious clients who seek to compromise the learning process by introducing specific behaviors into the learned model that can be triggered by carefully crafted inputs. Existing FL safeguards have various limitations: They are restricted to specific data distributions or reduce the global model accuracy due to excluding benign models or adding noise, are vulnerable to adaptive defense-aware adversaries, or require the server to access local models, allowing data inference attacks.
 
-This tutorial implements CrowdGuard [1], which effectively mitigates backdoor attacks in FL and overcomes the deficiencies of existing techniques. It leverages clients' feedback on individual models, analyzes the behavior of neurons in hidden layers, and eliminates poisoned models through an iterative pruning scheme. CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback. The experiments that were conducted in the paper show a 100% True-Positive-Rate and True-Negative-Rate across various scenarios, including IID and non-IID data distributions. Additionally, CrowdGuard withstands adaptive adversaries while preserving the original performance of protected models. To ensure confidentiality, CrowdGuard uses a secure and privacy-preserving architecture leveraging Trusted Execution Environments (TEEs) on both client and server sides.
+This tutorial implements CrowdGuard [1], which effectively mitigates backdoor attacks in FL and overcomes the deficiencies of existing techniques. It leverages clients' feedback on individual models, analyzes the behavior of neurons in hidden layers, and eliminates poisoned models through an iterative pruning scheme. CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback. The experiments that were conducted in the paper show a 100% True-Positive-Rate and True-Negative-Rate across various scenarios, including IID and non-IID data distributions. Additionally, CrowdGuard withstands adaptive adversaries while preserving the original performance of protected models. To ensure confidentiality, CrowdGuard requires a secure and privacy-preserving architecture leveraging Trusted Execution Environments (TEEs) on both client and server sides. Full instructions to set up CrowdGuard's workflows inside TEEs using the OpenFL Workflow API will be made available in a future release of OpenFL.
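For intuition about the vote-aggregation idea mentioned in the paragraph above, the following is a minimal, hypothetical sketch only. It is not the CrowdGuard code shipped in this tutorial nor the exact scheme from the paper; the function name `aggregate_votes`, the binary vote format, and the agreement threshold are illustrative assumptions. The sketch shows how a server could down-weight rogue validator feedback before deciding which local models to exclude.

```python
# Hypothetical illustration only -- not the actual CrowdGuard implementation.
# Each validating client reports a binary vote per candidate model
# (1 = looks poisoned, 0 = looks benign). The server first computes a
# majority consensus, drops vote vectors that disagree strongly with it
# (to limit the influence of rogue validators), then re-applies majority
# voting on the remaining, trusted feedback.
import numpy as np

def aggregate_votes(votes: np.ndarray, max_disagreement: float = 0.5) -> np.ndarray:
    """votes: (n_validators, n_models) binary matrix of client feedback."""
    consensus = (votes.mean(axis=0) >= 0.5).astype(int)   # initial per-model majority
    agreement = (votes == consensus).mean(axis=1)          # how well each validator matches it
    trusted = votes[agreement >= 1.0 - max_disagreement]   # discard strongly deviating voters
    if len(trusted) == 0:                                   # fall back to all votes if none remain
        trusted = votes
    return (trusted.mean(axis=0) >= 0.5).astype(int)        # 1 = exclude this model

# Example: 5 validators vote on 4 candidate models; the last validator is rogue.
votes = np.array([[1, 0, 0, 0],
                  [1, 0, 0, 0],
                  [1, 0, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 1, 1]])
print(aggregate_votes(votes))  # -> [1 0 0 0]: only the first model is excluded
```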



