PII Prompt Shield
Overview
Detects and blocks likely private information before it is sent to LLM chat interfaces.
PII Prompt Shield helps prevent accidental data leaks when using AI assistants. It scans prompt text before submission and flags likely sensitive information so you can review it, redact it, or stop the send action.

Key features:
- Detects personal data patterns in English and German.
- Covers names, ages, email addresses, phone numbers, postal addresses, dates of birth, SSNs, German Steuer-IDs, German IBANs, credit card numbers, and API keys.
- Two protection modes: Block + Review, or Warn Only.
- Selective prompt redaction before sending.
- Trusted-site allowlist for domains you choose.
- Works on supported AI chat sites.
- Local, on-device detection logic (no external analysis service).

Important note: Detection is heuristic and intended as a safety layer, not a legal or compliance certification.
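The extension's detection code is not published on this page, but on-device, heuristic scanning of the kind described above is typically a set of labeled regular-expression rules run over the prompt text. A minimal sketch under that assumption (the rule names and patterns below are illustrative, not the extension's actual rules):

```javascript
// Hypothetical sketch of rule-based PII detection, not the extension's real code.
// Each rule pairs a label with a regular expression; matches are collected so a
// UI could highlight them for review or redaction before the prompt is sent.
const RULES = [
  { label: "email",       re: /[\w.+-]+@[\w-]+\.[\w.-]+/g },
  // German IBAN: "DE" + 2 check digits + 18 digits, optionally space-grouped.
  { label: "german_iban", re: /\bDE\d{2} ?(?:\d{4} ?){4}\d{2}\b/g },
  // US SSN in its common dashed form.
  { label: "ssn",         re: /\b\d{3}-\d{2}-\d{4}\b/g },
  // Runs of 13-16 digits, optionally separated, as a crude card-number check.
  { label: "credit_card", re: /\b(?:\d[ -]?){13,16}\b/g },
];

// Scan a prompt and return every match together with its rule label.
function detectPII(text) {
  const findings = [];
  for (const { label, re } of RULES) {
    for (const m of text.matchAll(re)) {
      findings.push({ label, value: m[0] });
    }
  }
  return findings;
}
```

Note that such rules can overlap (the digits of an IBAN can also satisfy the crude card-number pattern), and categories like names or ages need context beyond a single regex, which is one reason the listing describes detection as heuristic rather than a compliance guarantee.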
0 out of 5 (no ratings)
Details
- Version: 0.1.3
- Updated: February 17, 2026
- Size: 131 KiB
- Languages: English
- Developer
  Email: sven86606@gmail.com
- Non-trader
  This developer has not identified itself as a trader. For consumers in the European Union, please note that consumer rights do not apply to contracts between you and this developer.
Privacy
This developer declares that your data is:
- Not being sold to third parties, outside of the approved use cases
- Not being used or transferred for purposes that are unrelated to the item's core functionality
- Not being used or transferred to determine creditworthiness or for lending purposes