Adversarial Attacks Comparison Hub
6 papers - avg viability 4.8
Recent work on adversarial attacks spans both text and computer-vision models. PivotAttack improves query efficiency in hard-label text attacks with an inside-out search strategy that shrinks the space of candidate edits. For black-box models, Contract And Conquer gives a provable guarantee of finding an adversarial example within a fixed number of iterations, strengthening model testing. White-box attacks guided by SHAP values produce misclassifications in settings where traditional gradient-based methods struggle, and a motion-aware framework for generating adversarial events exposes vulnerabilities of event cameras in safety-critical applications such as autonomous driving. Together, these results point toward more systematic and query-efficient attack strategies, with direct implications for the reliability and security of deployed AI systems.
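The hard-label setting described above can be made concrete with a toy sketch: the attacker observes only predicted labels, so every candidate edit costs a query, and the goal is to flip the label in as few queries as possible. The classifier, vocabulary, and greedy substitution loop below are invented for illustration; this is not PivotAttack's actual algorithm.

```python
# Toy hard-label text attack: only the predicted label is observable,
# so each candidate edit costs one query. All components are hypothetical.

POSITIVE = {"great", "excellent", "good", "enjoyable"}

def classify(words):
    # Hard-label black box: 1 if the text contains >= 2 positive words.
    return 1 if sum(w in POSITIVE for w in words) >= 2 else 0

# Invented synonym table standing in for a real substitution candidate set.
SYNONYMS = {"great": ["fine"], "excellent": ["decent"], "good": ["okay"]}

def greedy_hard_label_attack(words):
    """Flip the label via greedy single-word substitutions,
    counting how many label queries were spent."""
    words = list(words)
    queries = 1
    original = classify(words)
    for i in range(len(words)):
        for sub in SYNONYMS.get(words[i], []):
            candidate = words[:i] + [sub] + words[i + 1:]
            queries += 1
            if classify(candidate) != original:
                return candidate, queries
            words = candidate  # keep the edit; combined edits may flip later
    return None, queries
```

Running the attack on "great movie with excellent acting and good pacing" flips the toy classifier's label after two substitutions and three queries, illustrating why minimizing wasted queries is the central cost in this setting.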
Top Papers
- PivotAttack: Rethinking the Search Trajectory in Hard-Label Text Attacks via Pivot Words (7.0)
PivotAttack reduces query costs and improves success rates in hard-label text attacks by searching inside-out from pivot words.
- Contract And Conquer: How to Provably Compute Adversarial Examples for a Black-Box Model? (7.0)
Contract And Conquer provides a provable method to compute adversarial examples for black-box models, outperforming existing techniques.
- Adversarial Evasion Attacks on Computer Vision using SHAP Values (5.0)
Proposes an adversarial evasion attack on computer-vision models that uses SHAP values to guide perturbations and test model resilience.
- Generating Adversarial Events: A Motion-Aware Point Cloud Framework (5.0)
Introduces a motion-aware point cloud framework that generates adversarial events to probe the security of event-based perception systems.
- Delving into Adversarial Transferability on Image Classification: Review, Benchmark, and Evaluation (3.0)
Provides a standardized framework and benchmark for evaluating adversarial transferability across image classification models.
- Jailbreak Scaling Laws for Large Language Models: Polynomial-Exponential Crossover (2.0)
Analyzes the theoretical scaling behavior of jailbreak attacks on large language models, identifying a polynomial-to-exponential crossover.
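To illustrate the attribution-guided idea behind the SHAP-based evasion paper above: for a linear model, SHAP values have a simple closed form, so an attacker can repeatedly nudge the most influential feature until the decision flips. The model, weights, and inputs below are invented for this sketch; a real attack on a deep vision model would compute attributions with a library such as `shap`.

```python
# Sketch of attribution-guided evasion on a hand-rolled linear model.
# For a linear model, the contribution W[i] * (x[i] - baseline[i]) equals
# the exact SHAP value, which this toy example exploits.

W = [2.0, -1.0, 0.5, 3.0]          # hypothetical model weights
B = -1.0                           # hypothetical bias
BASELINE = [0.0, 0.0, 0.0, 0.0]    # reference input for attribution

def predict(x):
    # Hard decision of the linear classifier.
    return 1 if sum(wi * xi for wi, xi in zip(W, x)) + B > 0 else 0

def shap_linear(x):
    # Exact SHAP values of a linear model w.r.t. the baseline input.
    return [wi * (xi - bi) for wi, xi, bi in zip(W, x, BASELINE)]

def attribution_guided_attack(x, step=0.5, max_iters=50):
    """Repeatedly nudge the feature contributing most to the current
    label until the classifier's decision flips."""
    x = list(x)
    original = predict(x)
    for _ in range(max_iters):
        if predict(x) != original:
            return x
        phi = shap_linear(x)
        # Pick the feature pushing hardest toward the original label...
        key = max if original == 1 else min
        i = phi.index(key(phi))
        # ...and move it in the direction that reverses the score.
        direction = -1.0 if original == 1 else 1.0
        x[i] += direction * (1.0 if W[i] > 0 else -1.0) * step
    return None
```

The attack greedily spends its perturbation budget on the features the attribution ranks as most influential, which is the core intuition behind attribution-guided evasion, whether the attributions come from this closed form or from a SHAP explainer over a deep network.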