Overview
Prevents sensitive data from being sent to public LLM tools like ChatGPT, Claude, and Gemini.
LLM Guard helps you use ChatGPT, Claude, Gemini, and other AI tools safely by detecting and blocking sensitive information before it's sent. This is a no-bullsh*t, open-source, privacy-first extension that does one simple job with minimal performance impact: it prevents certain types of data from being sent to LLMs, and that's it. You can configure your own rules and regex patterns, or use the presets available in the app. No data is sent anywhere and nothing is stored; it's just plain, simple client-side detection and prevention.
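The core idea described above is client-side pattern matching: text about to be submitted is checked against a set of rules (presets or custom regexes) before it ever leaves the page. The sketch below is purely illustrative of that approach; the rule names, regexes, and `findMatches` helper are hypothetical and are not LLM Guard's actual code.

```typescript
// Hypothetical sketch of client-side sensitive-data detection.
// Rule names and regexes are illustrative presets, not the extension's real rules.
interface Rule {
  name: string;
  pattern: RegExp;
}

const presetRules: Rule[] = [
  { name: "Email address", pattern: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/ },
  { name: "US SSN", pattern: /\b\d{3}-\d{2}-\d{4}\b/ },
  { name: "API key (generic)", pattern: /\b(?:sk|pk)_[A-Za-z0-9]{20,}\b/ },
];

// Return the names of all rules that match the draft text.
// Everything runs in the browser; nothing is sent or stored anywhere.
function findMatches(text: string, rules: Rule[]): string[] {
  return rules.filter((rule) => rule.pattern.test(text)).map((rule) => rule.name);
}

// Example: warn (or block submission) before the prompt reaches the LLM's input form.
const draft = "My SSN is 123-45-6789, can you fill in this form?";
const hits = findMatches(draft, presetRules);
if (hits.length > 0) {
  console.warn(`Blocked: matched ${hits.join(", ")}`);
}
```

Because the matching is a handful of regex tests run locally against the input text, this kind of check adds negligible overhead and requires no network calls, which is consistent with the "no data sent, no data stored" claim.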
Details
- Version: 1.0.1
- Updated: January 6, 2026
- Size: 42.7 KiB
- Languages: English
- Developer: Website; Email: reece@yourwebconsultant.com
- Non-trader: This developer has not identified itself as a trader. For consumers in the European Union, please note that consumer rights do not apply to contracts between you and this developer.
Privacy
The developer has disclosed that it will not collect or use your data. To learn more, see the developer’s privacy policy.
This developer declares that your data is
- Not being sold to third parties, outside of the approved use cases
- Not being used or transferred for purposes that are unrelated to the item's core functionality
- Not being used or transferred to determine creditworthiness or for lending purposes