
Ollama Client - Chat with Local LLM Models

https://ollama-client.shishirchaurasiya.in/
4.7 (11 ratings)

Overview

Privacy-first Ollama Chrome extension to chat with local AI models like LLaMA, Mistral, Gemma — fully offline.

🧠 Ollama Client – Chat with Local LLMs Inside Your Browser

Ollama Client is a lightweight, privacy-first Chrome extension that brings the power of local AI models and offline AI chat directly to your browser. No cloud dependencies. No API keys. No data sent externally. Just fast, secure, offline AI chat powered by open-source models like LLaMA 3, GPT-OSS, Mistral, Gemma, and CodeLLaMA, all running on your own machine through the Ollama backend.

✨ Works on all Chromium-based browsers (Chrome, Edge, Brave) and Firefox (with additional setup). 100% open source.

🚀 Key Features

🤖 Model Management
🔌 Local Ollama Integration – Connect to a local Ollama server, no API keys required (see the sketch after the feature list)
🌐 LAN/Local Network Support – Connect to Ollama servers on your local network by IP address (e.g., http://192.168.x.x:11434)
🔄 Model Switcher – Switch between models in real time with a polished UI
🔍 Model Search & Pull – Search and pull models directly from Ollama.com in the UI, with a progress indicator
🗑️ Model Deletion – Clean up unused models with confirmation dialogs
🧳 Load/Unload Models – Manage Ollama's memory footprint efficiently
📦 Model Version Display – View and compare model versions easily
🎛️ Advanced Parameter Tuning – Per-model configuration: temperature, top_k, top_p, repeat penalty, stop sequences, system prompts

💬 Chat & Conversations
💬 Beautiful Chat UI – Modern, polished interface built with Shadcn UI
🗂️ Multi-Chat Sessions – Create, manage, and switch between multiple chat sessions
📤 Export Chat Sessions – Export single or all chat sessions as PDF or JSON
📥 Import Chat Sessions – Import single or multiple chat sessions from JSON files
📋 Copy & Regenerate – Quickly rerun or copy AI responses
⚡ Streaming Responses – Real-time streaming with typing indicators

🌐 Webpage Integration
🧠 Enhanced Content Extraction – Advanced extraction with multiple scroll strategies (none, instant, gradual, smart)
🔄 Lazy Loading Support – Automatically waits for dynamic content to load
📄 Site-Specific Overrides – Configure extraction settings per domain (scroll strategies, delays, timeouts)
🎯 Defuddle Integration – Smart content extraction with Defuddle fallback
📖 Mozilla Readability – Fallback extraction using Mozilla Readability
🎬 YouTube Transcripts – Automated YouTube transcript extraction
📊 Extraction Metrics – View scroll steps, mutations detected, and content length

⚙️ Customization & Settings
🎨 Professional UI – Modern design system with glassmorphism effects, gradients, and smooth animations
🌓 Dark Mode – Beautiful dark theme with smooth transitions
📝 Prompt Templates – Create, manage, and use custom prompt templates (Ctrl+/)
🔊 Advanced Text-to-Speech – Searchable voice selector with adjustable speech rate and pitch
🎚️ Cross-Browser Compatibility – Works with Chrome, Brave, Edge, Opera, Vivaldi, LibreWolf, and more
🧪 Voice Testing – Test voices before using them

🔒 Privacy & Performance
🛡️ 100% Local and Private – All storage and inference happen on your device
🧯 Declarative Net Request (DNR) – Automatic CORS handling
💾 IndexedDB Storage – Efficient local storage for chat sessions
⚡ Performance Optimized – Lazy loading, debounced operations, optimized re-renders
🔄 State Management – Clean Zustand-based state management
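To make "connect to a local Ollama server" concrete, here is a minimal sketch of the standard Ollama REST endpoints a local client talks to (GET /api/tags to list installed models, POST /api/chat for streaming replies). This is illustrative only, not the extension's source code; the base URL, model, and option values are example assumptions.

```typescript
// Illustrative sketch only (not the extension's internals): the public Ollama
// REST endpoints a local client uses. Assumes Ollama runs on the default port.
const OLLAMA_URL = "http://localhost:11434"; // or a LAN address like http://192.168.x.x:11434

// List installed models, i.e. what a model-switcher UI would enumerate.
async function listModels(): Promise<string[]> {
  const res = await fetch(`${OLLAMA_URL}/api/tags`);
  const data = await res.json();
  return data.models.map((m: { name: string }) => m.name);
}

// Stream a chat completion; Ollama replies with newline-delimited JSON chunks.
async function streamChat(
  model: string,
  prompt: string,
  onToken: (token: string) => void
): Promise<void> {
  const res = await fetch(`${OLLAMA_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      stream: true,
      // Per-model tuning (temperature, top_k, top_p, etc.) goes in "options".
      options: { temperature: 0.7 },
    }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      if (chunk.message?.content) onToken(chunk.message.content);
    }
  }
}

// Example usage:
// const models = await listModels();
// await streamChat(models[0] ?? "llama3:8b", "Hello!", (t) => console.log(t));
```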
🧭 Tab Access (Optional)
Want your LLM to understand the content of a page you're viewing? Enable Tab Access in the settings to fetch page content or transcripts for better contextual answers.
✔️ Fully opt-in
✔️ You choose which tabs to share
✔️ Customizable exclude list (regex supported)
✔️ No tab data ever leaves your device

⚙️ Installation & Setup
1️⃣ Install Ollama Client from the Chrome Web Store
2️⃣ Install Ollama on your machine from https://ollama.com and run `ollama serve`
3️⃣ Pull your favorite models (e.g., `ollama pull llama3:8b`, `gemma:2b`) and start chatting!
Advanced users can customize themes, model parameters, prompt templates, and excluded URLs from the Options page. A sketch of the underlying server check and model pull appears at the end of this overview.

🎯 Who Should Use Ollama Client?
👩‍💻 Developers building with or debugging LLMs
📚 Researchers who want local, private LLM interfaces
🎓 Students using AI as a study aid on local hardware
🔐 Privacy advocates avoiding cloud AI and APIs
🤖 AI tinkerers and open-source model enthusiasts

⚡ Performance & Hardware Recommendations
💻 8 GB RAM (no GPU): gemma:2b, mistral:7b-q4
💻 16 GB RAM (no GPU): gemma:3b-q4, gemma:2b
🚀 16 GB+ with GPU (6 GB VRAM): llama3:8b-q4, gemma:3b
💥 32 GB+ or high-end GPU: llama3:8b, codellama:13b
🔥 RTX 3090+, Apple M3 Max: llama3:70b, mixtral
Note: The Ollama Client Chrome extension is a frontend interface only. All LLM generation happens via your local Ollama install; speed and output quality depend on your system.

🔗 Useful Links
🌐 Chrome Web Store: https://chromewebstore.google.com/detail/ollama-client/bfaoaaogfcgomkjfbmfepbiijmciinjl
📘 Setup Guide: https://ollama-client.shishirchaurasiya.in/ollama-setup-guide
💻 Landing Page: https://ollama-client.shishirchaurasiya.in/
🔒 Privacy Policy: https://ollama-client.shishirchaurasiya.in/privacy-policy
🧑‍💻 GitHub: https://github.com/Shishir435/ollama-client
🐞 Bug Reports: https://github.com/Shishir435/ollama-client/issues

🚀 Start chatting in seconds: private, fast, and fully local AI conversations on your own machine. Built for developers, researchers, and anyone who values speed, privacy, and offline AI control.

#ollama #privacy #ollama-client #opensource #offline #ollama-ui #ollamachat #gpt-oss
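As referenced in the Installation & Setup steps above, here is a minimal sketch of how a client can confirm that a local (or LAN) Ollama server is reachable and pull a model with a progress indicator, again using the public Ollama REST API rather than the extension's own code; the base URL and model name are example assumptions.

```typescript
// Illustrative sketch only: verify an Ollama server and pull a model with
// progress via the public Ollama REST API. Base URL and model name are
// example values; a LAN address such as http://192.168.x.x:11434 works too.
const BASE_URL = "http://localhost:11434";

// The Ollama server answers its root URL with a plain "Ollama is running".
async function serverIsUp(): Promise<boolean> {
  try {
    const res = await fetch(BASE_URL);
    return res.ok;
  } catch {
    return false; // server not started or unreachable
  }
}

// Pull a model and report download progress, as a pull-with-progress UI would.
async function pullModel(
  name: string,
  onProgress: (percent: number) => void
): Promise<void> {
  const res = await fetch(`${BASE_URL}/api/pull`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, stream: true }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";
    for (const line of lines) {
      if (!line.trim()) continue;
      const status = JSON.parse(line); // e.g. { status, digest, total, completed }
      if (status.total && status.completed) {
        onProgress(Math.round((status.completed / status.total) * 100));
      }
    }
  }
}

// Example usage:
// if (await serverIsUp()) await pullModel("llama3:8b", (p) => console.log(`${p}%`));
```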

Details

  • Version
    0.2.6
  • Updated
    November 5, 2025
  • Size
    902KiB
  • Languages
    English
  • Developer
    Shishir
    Taramani Chennai, Tamil Nadu 600036 IN
    Website
    Email
    shishirchaurasiya435@gmail.com
  • Non-trader
    This developer has not identified itself as a trader. For consumers in the European Union, please note that consumer rights do not apply to contracts between you and this developer.

Privacy

The developer has disclosed that it will not collect or use your data. To learn more, see the developer’s privacy policy.

This developer declares that your data is

  • Not being sold to third parties, outside of the approved use cases
  • Not being used or transferred for purposes that are unrelated to the item's core functionality
  • Not being used or transferred to determine creditworthiness or for lending purposes

Support

For help with questions, suggestions, or problems, visit the developer's support site

Related

OpenTalkGPT - UI to access DeepSeek, Llama, or open-source models with RAG.

4.7

This extension hosts an Ollama UI on localhost and helps you access all open-source models.

Ollama Text Insertion

3.0

Premium assistant to generate text with Ollama and insert it at your cursor position

AiBrow: Local AI for your browser

5.0

Run small AI language models locally on your machine, allowing you to develop with the window.aibrow and window.ai APIs.

Orian (Ollama WebUI)

3.1

Quick access to your favorite local LLM from your browser (Ollama).

codereview.ollama

1.0

Reviews your Pull/Merge Requests using Ollama/LMStudio

Offload: Fully private AI for any website using local models.

5.0

A fully private in-browser AI assistant. Works even offline. No external API dependencies.

Ollama KISS UI

5.0

A simple, stupid UI for Ollama. Keep It Simple, Stupid (KISS).

Page Assist - A Web UI for Local AI Models

4.9

Use your locally running AI models to assist you in your web browsing.

AIskNet

5.0

Locally-run AI that answers questions from your current webpage

LLM-X

4.0

LLM-X! An app for people to talk to Ollama, LM Studio, Automatic 1111, Gemini nano and more!

Offline AI Chat (Ollama)

4.0

Chat interface for your local Ollama AI models. Requires Ollama to be installed and running on localhost.

Cognito: ChatGPT in Extension, Ollama, GPT 4o, Gemini

5.0

A Chrome extension that intelligently improves productivity with AI and supports Ollama models for full privacy
