LLM Persuasion Safety Hub

About

The LLM Persuasion Safety Hub is a curated collection of datasets, codebases, and research papers related to the evaluation of persuasive capabilities in large language models (LLMs). The hub focuses on measurement, analysis, and safety-relevant research rather than on the development of persuasion techniques.

Why this hub?

Work on LLM persuasion spans multiple disciplines—including dialogue systems, behavioral science, rhetoric, and AI safety—and relevant resources are scattered across GitHub, arXiv, Hugging Face, and institutional repositories. This hub aims to provide a single, structured entry point for researchers, auditors, and policymakers.

What we include

How to contribute

If you know of a relevant resource, please contact us, or open an issue or pull request on our GitHub repository.