Defending the Robustness of Large Language Models: Mitigating Adversarial Threats and Input Variability

Authors

  • Abuelgasim Saadeldin

Keywords

Large Language Models, Robustness, Adversarial Attacks, Input Perturbations, Adversarial Training, Robust Optimization, Input Preprocessing, Vulnerabilities

Abstract

The robustness of large language models (LLMs) against adversarial threats and input variability is crucial for their reliable deployment in real-world applications. This paper investigates strategies for defending LLM robustness by mitigating adversarial threats and accommodating variability in inputs. We survey existing approaches for enhancing LLM robustness and propose novel methods to counter adversarial attacks and handle varied inputs. This research contributes to the development of dependable and trustworthy large language models suitable for diverse application domains.
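
The defense strategies named in the keywords (adversarial training, input preprocessing, robustness to input perturbations) can be illustrated with a minimal sketch. The Python example below is not the paper's proposed method: it assumes a simple character-swap attack model, uses Unicode and whitespace normalization as the preprocessing step, and the names perturb, preprocess, and augmented_training_pairs are hypothetical helpers introduced only for illustration.

import random
import unicodedata


def perturb(text, rate=0.1, seed=None):
    # Hypothetical attack model: randomly swap adjacent letters to
    # simulate typo-style input perturbations.
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def preprocess(text):
    # Input-preprocessing defense: Unicode normalization, whitespace
    # collapsing, and lowercasing before the text reaches the model.
    text = unicodedata.normalize("NFKC", text)
    return " ".join(text.split()).lower()


def augmented_training_pairs(examples, label_fn, n_variants=3):
    # Adversarial-training-style augmentation: each clean example is paired
    # with perturbed variants that keep its label, so a downstream model is
    # trained to be invariant to the perturbation.
    for text in examples:
        label = label_fn(text)
        yield preprocess(text), label
        for k in range(n_variants):
            yield preprocess(perturb(text, seed=k)), label


if __name__ == "__main__":
    data = ["The service was excellent", "Delivery was very slow"]
    for x, y in augmented_training_pairs(data, label_fn=lambda t: "slow" not in t):
        print(y, "|", x)

Whether such surface-level perturbations and normalization transfer to the specific attacks studied in the paper is an assumption; the sketch only shows how the keyword techniques can be combined in a training pipeline.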

Published

01-05-2024

How to Cite

Defending the Robustness of Large Language Models: Mitigating Adversarial Threats and Input Variability. (2024). Asian American Research Letters Journal, 1(1). https://aarlj.com/index.php/AARLJ/article/view/9
