Overview
Prevents sensitive data from being sent to public LLM tools like ChatGPT, Claude, and Gemini.
LLM Guard – Prevent Accidental Data Leaks to AI Tools

LLM Guard helps you use ChatGPT, Claude, Gemini, and other AI tools safely by detecting and blocking sensitive information before it's sent. All analysis happens locally in your browser; no data is stored or transmitted anywhere. Whether you're pasting emails, client documents, code snippets, credentials, or internal notes, LLM Guard provides an automatic guardrail that stops high-risk messages before they reach an LLM.

⭐ Why People Use LLM Guard

Accidental AI data leaks are increasingly common: it's easy to paste the wrong thing into ChatGPT without realizing it. LLM Guard adds a lightweight safety layer that alerts you, and blocks the message, whenever it detects potentially sensitive material.

Used by professionals across:
- Software & SaaS
- Consulting & freelancing
- Finance & healthcare
- Legal, HR, compliance, and operations

How LLM Guard Works

1. You paste text into ChatGPT or another supported LLM site. Scanning starts automatically, directly in your browser.
2. Sensitive patterns are detected instantly. LLM Guard looks for emails, personal data, source code, tokens, IDs, logs, and other potentially sensitive content.
3. The message is blocked unless you approve it. A warning appears showing what was detected. You can choose to review and edit, allow it once, add it to your allow list, or pause protection temporarily.
4. (Optional) Review a local-only log. Detections can be stored locally on your device for reference. No logs ever leave your machine.
5. Customize rules and supported sites. Enable or disable detection categories and optionally add additional websites through the settings menu.

Key Features

- 🛡️ Real-Time Detection: Get instant warnings before risky content is submitted.
- 🚫 Local-Only Processing: LLM Guard never sends or stores your data. Everything is analyzed on your device.
- ✋ Automatic Blocking: Messages containing sensitive patterns are blocked unless you explicitly choose to continue.
- 🧠 Smart Pattern Recognition: Detects PII, emails, IDs, credentials, API keys, secrets, logs, and code fragments.
- 📁 Optional Local Logs: View a private, local history of what was detected.
- ⚙️ Customizable Rules: Toggle categories and adjust detection behavior to match your workflow.
- 🌐 Works With ChatGPT (More Coming Soon): Support for additional LLM platforms is being expanded over time.

Who LLM Guard Is For

- SaaS & Tech Professionals: Catch accidental code or credential leaks before submitting prompts.
- Consultants & Freelancers: Avoid unintentionally sharing client or project data.
- Legal, HR, Ops, and Compliance: Use AI for drafting and summarizing without risking sensitive content exposure.
- Anyone Using AI at Work: Add a simple, automated privacy safeguard to daily AI use.

FAQ

Does the extension send or store any data?
No. All scanning is performed locally in your browser. LLM Guard does not collect, transmit, or share any data.

Does it block submissions?
Yes. If sensitive content is detected, LLM Guard blocks the message and shows a warning. You control whether to allow it once, allow it always, or pause protection.

Do I need an account?
No. It works immediately after installation.

Can I customize what it detects?
Yes. You can toggle detection categories on the Settings page.

Can I add support for additional websites?
Yes. You can manually add sites in the settings menu. Chrome will ask for permission before LLM Guard can run on any new site you add.

Protect Yourself From Accidental AI Data Leaks

Use ChatGPT, Claude, and other LLMs confidently with a privacy-first guardrail that blocks sensitive messages before they're sent. Privacy-first. Zero data stored. Fully local.
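The local scanning step described above can be pictured as a simple regex pass over the prompt before it is submitted. The sketch below is illustrative only: the pattern names and expressions are assumptions for demonstration, not LLM Guard's actual detection rules.

```javascript
// Minimal sketch of a local-only sensitive-pattern scan.
// PATTERNS is a hypothetical rule set, not the extension's real one.
const PATTERNS = {
  // A basic email-address pattern.
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/,
  // AWS access key IDs start with "AKIA" followed by 16 characters.
  awsAccessKey: /\bAKIA[0-9A-Z]{16}\b/,
  // PEM-encoded private key header.
  privateKey: /-----BEGIN [A-Z ]*PRIVATE KEY-----/,
};

// Returns the names of all rules that match the given text.
function scan(text) {
  return Object.entries(PATTERNS)
    .filter(([, re]) => re.test(text))
    .map(([name]) => name);
}

// A prompt containing an email address would be flagged before submission.
scan("Please summarize the thread from alice@example.com");
// → ["email"]
```

A real extension would run a check like this on the submit event and block the request until the user reviews or approves the flagged content.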
Details
- Version: 1.0.1
- Updated: November 25, 2025
- Size: 42.7 KiB
- Languages: English
- Developer email: reece@yourwebconsultant.com
- Non-trader: This developer has not identified itself as a trader. For consumers in the European Union, please note that consumer rights do not apply to contracts between you and this developer.
Privacy
This developer declares that your data is
- Not being sold to third parties, outside of the approved use cases
- Not being used or transferred for purposes that are unrelated to the item's core functionality
- Not being used or transferred to determine creditworthiness or for lending purposes