Dev 1.14.0 #2341
Conversation
…neural_network_model_learning.ipynb
Codecov Report
All modified and coverable lines are covered by tests ✅
❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

@@           Coverage Diff            @@
##         dev_1.18.0    #2341    +/-  ##
==============================================
- Coverage     85.60%   85.37%   -0.23%
==============================================
  Files           324      327       +3
  Lines         29326    30205     +879
  Branches       5407     5589     +182
==============================================
+ Hits          25104    25789     +685
- Misses         2840     2966     +126
- Partials       1382     1450      +68
Hi @OrsonTyphanel93, could you please add a description and title to this PR?
Stylistic Backdoors in audio data (TranStyBack)
The backdoor attack, TranStyBack, inserts malicious triggers (audio clapping) into audio data using digital musical effects. The triggers are generated from six different styles, each with specific parameters, and these stylistic triggers are applied to the audio data during the backdoor attack phase. The attack poisons a subset of the training data, up to 1% of the samples; during poisoning, the trigger is adjusted to match the duration of each audio clip so that the two signals stay correctly aligned. The backdoor is then implanted by adding the scaled trigger values to the corresponding audio samples.
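The poisoning step described above can be sketched in a few lines of NumPy. This is a minimal illustration of the general idea, not the PR's actual implementation: the function and parameter names (poison_with_trigger, poison_rate, scale) are hypothetical, and the trigger is assumed to be a precomputed 1-D waveform (e.g., a style-processed clapping sound).

```python
import numpy as np

def poison_with_trigger(x_train, y_train, trigger, target_label,
                        poison_rate=0.01, scale=0.1, rng=None):
    """Hypothetical sketch: poison up to `poison_rate` of the samples
    by adding a scaled audio trigger and relabeling them.

    x_train: array of shape (n_samples, n_audio_samples)
    y_train: integer class labels, shape (n_samples,)
    trigger: 1-D array holding the stylistic trigger waveform
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    x_poisoned, y_poisoned = x_train.copy(), y_train.copy()

    # Poison at most `poison_rate` (e.g., 1%) of the training samples.
    n_poison = int(poison_rate * len(x_train))
    idx = rng.choice(len(x_train), size=n_poison, replace=False)

    for i in idx:
        audio_len = x_poisoned[i].shape[-1]
        # Tile or truncate the trigger so it matches the duration of the
        # audio clip, keeping the two signals aligned sample-for-sample.
        reps = int(np.ceil(audio_len / len(trigger)))
        aligned = np.tile(trigger, reps)[:audio_len]
        # Implant the backdoor by adding the scaled trigger values
        # to the corresponding audio samples.
        x_poisoned[i] = x_poisoned[i] + scale * aligned
        y_poisoned[i] = target_label  # relabel to the attacker's target class

    return x_poisoned, y_poisoned
```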
…_Model_Automatic_Speech_Recognition.ipynb
…dversarial_machine_learning.ipynb (erratum)
…dversarial_machine_learning.ipynb
…dversarial_machine_learning.ipynb
Branch force-pushed from 9b1b3f2 to 3281c4b
…machine_learning.ipynb
Description
Please include a summary of the change, the motivation, and which issue is fixed. Any dependency changes should also be included.
Fixes # (issue)
Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
Test Configuration:
Checklist