Adversarial machine learning is a machine learning technique that attempts to fool models by supplying deceptive input. Adversarial examples often transfer from one model to another, allowing attackers to mount black-box attacks without knowledge of the target model's parameters (a minimal sketch of this transfer setting follows the list below). The criterion for inclusion here is that a paper is primarily about adversarial examples; as a result, there may be some papers included or omitted in error. The full paper list appears below.

BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models
QEBA: Query-Efficient Boundary-Based Blackbox Attack
Identifying Audio Adversarial Examples via Anomalous Pattern Detection
Delving into Transferable Adversarial Examples and Black-box Attacks
Double Backpropagation for Training Autoencoders against Adversarial Attack
FakeLocator: Robust Localization of GAN-Based Face Manipulations via Semantic Segmentation Networks with Bells and Whistles
Determining Sequence of Image Processing Technique (IPT) to Detect Adversarial Attacks
A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees
Evading Person Detectors in A Physical World
Torchattacks: A PyTorch Repository for Adversarial Attacks
Watch out! A Real-time Defense against Website Fingerprinting Attacks
The Dilemma Between Dimensionality Reduction and Adversarial Robustness
Evaluating Graph Vulnerability and Robustness using TIGER
Target Training Does Adversarial Training Without Adversarial Samples
Adversaries in Online Learning Revisited: with applications in Robust Optimization and Adversarial training
Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model
Learning to Cope with Adversarial Attacks
Adversarial Attacks on Optimization based Planners
Universal Adversarial Perturbations Generative Network for Speaker Recognition
Defense-friendly Images in Adversarial Attacks: Dataset and Metrics for Perturbation Difficulty
Fooling thermal infrared pedestrian detectors in real world using small bulbs
Defending Against Physically Realizable Attacks on Image Classification
Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors
One pixel attack for fooling deep neural networks
Minimalistic Attacks: How Little it Takes to Fool a Deep Reinforcement Learning Policy
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
Efficient Formal Safety Analysis of Neural Networks
Characterizing Speech Adversarial Examples Using Self-Attention U-Net Enhancement
Towards Leveraging the Information of Gradients in Optimization-based Adversarial Attack
Luring of transferable adversarial perturbations in the black-box paradigm
Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses
Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack
Defending Against Adversarial Machine Learning
A Noise-Sensitivity-Analysis-Based Test Prioritization Technique for Deep Neural Networks
Transferable Perturbations of Deep Feature Distributions
Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation
Targeted Nonlinear Adversarial Perturbations in Images and Videos
Learning Universal Adversarial Perturbations with Generative Models
Towards Privacy and Security of Deep Learning Systems: A Survey
Backpropagating Linearly Improves Transferability of Adversarial Examples
Detecting Adversarial Perturbations with Saliency
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability
Adversarially robust deepfake media detection using fused convolutional neural network predictions
Identifying Adversarial Sentences by Analyzing Text Complexity
Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines
A Multiversion Programming Inspired Approach to Detecting Audio Adversarial Examples
Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints
On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples
Intelligent image synthesis to attack a segmentation CNN using adversarial learning
Physical Adversarial Examples for Object Detectors
Adversarially Robust Learning via Entropic Regularization
Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks
Adversarial Attacks against Deep Saliency Models
Adversarial Network Traffic: Towards Evaluating the Robustness of Deep Learning-Based Network Traffic Classification
Verification of Neural Network Control Policy Under Persistent Adversarial Perturbation
HashTran-DNN: A Framework for Enhancing Robustness of Deep Neural Networks against Adversarial Malware Samples
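As noted in the introduction, adversarial examples crafted against one model frequently fool another. The sketch below illustrates that transfer-based black-box setting with a one-step FGSM attack in PyTorch; the choice of surrogate and target architectures, the epsilon budget, and the random stand-in image are illustrative assumptions, not the method of any specific paper above.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm(model, x, label, eps):
    """One-step FGSM: perturb x by eps in the direction of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # step that increases the surrogate's loss
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

# Illustrative assumption: the attacker holds a local surrogate model and never
# queries the target for gradients.
surrogate = models.resnet18(weights="IMAGENET1K_V1").eval()
target = models.mobilenet_v3_small(weights="IMAGENET1K_V1").eval()

x = torch.rand(1, 3, 224, 224)             # stand-in for a real, preprocessed image
y = surrogate(x).argmax(dim=1)             # use the surrogate's own prediction as the label

x_adv = fgsm(surrogate, x, y, eps=8 / 255)

# If the perturbation transfers, the *target* model's prediction flips even though
# all gradients were computed through the surrogate.
with torch.no_grad():
    print("target, clean input      :", target(x).argmax(dim=1).item())
    print("target, adversarial input:", target(x_adv).argmax(dim=1).item())
```

In practice the input would be a properly normalized image and the attack would usually be iterated (e.g., PGD), but the one-step version is enough to show that only the surrogate's gradients are ever used.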