
WebRAG

Extension · Tools · 1 user

Overview

Web Knowledge Base Assistant - Local RAG-powered knowledge base from web pages

# WebRAG - Web Knowledge Base Assistant

A browser extension that builds a local RAG (Retrieval-Augmented Generation) knowledge base from web pages, powered by Ollama for complete privacy and local processing.

## Features

- 📚 Build a personal knowledge base from web pages
- 🌐 Batch-import links with search and filtering
- 🔍 Semantic search powered by deep-learning embeddings
- 💬 AI-powered Q&A with Markdown support
- 🔒 100% local processing - your data stays private
- 📦 Management of multiple knowledge bases
- 🇨🇳 Excellent Chinese language support

## Quick Start

### Installation

1. Open Chrome and navigate to `chrome://extensions`
2. Enable "Developer mode" (toggle in the top right)
3. Click "Load unpacked" and select the `rag-extension` folder
4. Pin the extension to your toolbar

### Ollama Setup

**Important:** Ollama must allow cross-origin requests (CORS) so the browser extension can reach it.

Start Ollama with CORS enabled:

```bash
# macOS/Linux
OLLAMA_ORIGINS="*" ollama serve
```

Download the required models:

```bash
# Chat model (required)
ollama pull qwen2.5:7b

# Embedding model (required for Chinese)
ollama pull lrs33/bce-embedding-base_v1
```

## Usage

### 1. Add Current Page

Click "Add Current Page" to save the current web page to your knowledge base.

### 2. Batch Add Links

Click "Add Page Links" to batch-import multiple pages:

- Extracts all links from the current page
- Search and filter the extracted links
- Select multiple pages to add

### 3. Search Knowledge Base

Enter your question and click "Search" to find relevant content using semantic search.

### 4. AI Q&A

Click "AI Answer" to get intelligent responses based on your knowledge base. Responses support full Markdown formatting, including code blocks, tables, and lists.
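The semantic-search step described above can be sketched as follows: embed the query via Ollama's `/api/embeddings` endpoint, then rank stored chunks by cosine similarity. The endpoint, default URL, and embedding model name follow the README; the function names and the chunk-record shape are illustrative assumptions, not the extension's actual code.

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Fetch an embedding from the local Ollama server (requires
// OLLAMA_ORIGINS to be set, as described in the setup section).
async function embed(text, model = "lrs33/bce-embedding-base_v1") {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt: text }),
  });
  const data = await res.json();
  return data.embedding;
}

// Rank stored chunks ({ text, embedding }) against a query vector
// and return the k best matches.
function topK(queryVector, chunks, k = 3) {
  return chunks
    .map((c) => ({ ...c, score: cosineSimilarity(queryVector, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

In a full pipeline, `embed` would be called once per stored chunk at import time and once per query at search time, so only one network round-trip is needed per search.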
## Configuration

Open the extension and scroll to the "LLM Config" section:

- **Ollama URL**: `http://localhost:11434` (default)
- **Chat Model**: Select your installed model (e.g., `qwen2.5:7b`)
- **Embedding Model**: Select a Chinese-optimized model (e.g., `lrs33/bce-embedding-base_v1`)

The extension automatically loads the available models from Ollama.

## Technical Stack

| Component | Technology |
|-----------|------------|
| **Frontend** | Vanilla JavaScript + Chrome Extension API |
| **Embedding** | Ollama Embeddings API (supports Chinese) |
| **Vector Store** | IndexedDB (browser local storage) |
| **LLM** | Ollama (local inference) |
| **Markdown** | marked.js |
| **Similarity** | Cosine similarity |

## Recommended Models

### For Chinese Content

- **Chat**: `qwen2.5:7b`, `qwen:14b`, `yi:34b`
- **Embedding**: `lrs33/bce-embedding-base_v1`, `bge-large-zh-v1.5`

### For English Content

- **Chat**: `llama3.2`, `mistral`, `phi3`
- **Embedding**: `nomic-embed-text`, `mxbai-embed-large`

## Data Privacy

✅ **100% Local Processing**

- All data is stored in the browser's IndexedDB
- AI inference runs on your local Ollama instance
- No data is sent to external servers
- Complete privacy and security
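Tying the stack together, the ingestion and Q&A sides of a RAG pipeline like this typically need two pure helpers: one that splits page text into overlapping chunks before embedding, and one that assembles retrieved chunks plus the user's question into a prompt for the chat model. The chunk size, overlap, and prompt template below are illustrative assumptions, not the extension's actual values.

```javascript
// Split text into fixed-size chunks with some overlap, so that a
// sentence cut at a chunk boundary still appears whole in a neighbor.
function chunkText(text, maxChars = 500, overlap = 50) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + maxChars));
    start += maxChars - overlap;
  }
  return chunks;
}

// Build a grounded prompt from retrieved chunks ({ text }) and a question.
function buildPrompt(question, retrievedChunks) {
  const context = retrievedChunks
    .map((c, i) => `[${i + 1}] ${c.text}`)
    .join("\n\n");
  return `Answer the question using only the context below.\n\n` +
    `Context:\n${context}\n\nQuestion: ${question}`;
}
```

The numbered `[1]`, `[2]` markers let the chat model cite which chunk an answer came from, which is a common convention in RAG prompts.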

Details

  • Version
    1.0
  • Updated
    January 12, 2026
  • Size
    68.36KiB
  • Languages
    English (United States)
  • Developer
    Email
    xuhuiming6991@gmail.com
  • Non-trader
    This developer has not identified itself as a trader. For consumers in the European Union, please note that consumer rights do not apply to contracts between you and this developer.

Privacy

The developer has disclosed that it will not collect or use your data. To learn more, see the developer’s privacy policy.

This developer declares that your data is

  • Not being sold to third parties, outside of the approved use cases
  • Not being used or transferred for purposes that are unrelated to the item's core functionality
  • Not being used or transferred to determine creditworthiness or for lending purposes