
Ollama Client - Chat with Local LLM Models

5.0 (7 ratings)

Overview

Chat privately with your local Ollama LLM models in-browser. Fast, lightweight, and secure AI without cloud dependencies.

🧠 Ollama Client – Chat with Local LLMs Inside Your Browser

Ollama Client is a lightweight, privacy-first Chrome extension that brings locally hosted large language models (LLMs) directly to your browser. No cloud dependencies. No API keys. No data sent externally. Just fast, secure, offline-first AI chat powered by open-source models such as LLaMA 3, Mistral, Gemma, and CodeLLaMA, all running on your own machine via the Ollama backend.

✨ Works on all Chromium-based browsers (Chrome, Edge, Brave) and on Firefox with additional setup. 100% open source.

🚀 Key Features

🔌 Local Ollama Integration – Connect to your own Ollama server; no API keys required. (A request sketch follows this overview.)
💬 In-Browser Chat UI – Lightweight, minimal chat interface.
⚙️ Custom Settings Panel – Configure the base URL, default model, themes, excluded URLs, and prompt templates.
🔄 Model Switcher – Switch between any installed Ollama models on the fly.
🧭 Model Search & Add – Search for, pull, and add new Ollama models, and track download progress directly from the options page. (Known issue: pressing Stop during a model pull may cause glitches.)
🎛️ Model Parameter Tuning – Adjust temperature, top_k, top_p, repeat penalty, and stop sequences (included in the request sketch below).
✂️ Content Parsing – Automatically extract and summarize page content with Mozilla Readability (see the sketch below).
📜 Transcript Parsing – Supports transcripts from YouTube, Udemy, and Coursera.
🔊 Text-to-Speech – Click the "Speak" button to have the browser read chat responses or page summaries aloud using the Web Speech API (see the sketch below).
📋 Regenerate / Copy Response – Rerun AI responses or copy results to the clipboard.
🗂️ Multi-Chat Sessions – Manage multiple chat sessions locally, with save, load, and delete.
🛡️ Privacy-First – All data processing and storage stay local to your machine.
🧯 Declarative Net Request (DNR) – Handles CORS automatically; no manual configuration needed (since v0.1.3). (See the sketch below.)

🧭 Tab Access (Optional)

Want your LLM to understand the content of a page you're viewing? Enable Tab Access in the settings to fetch page content or transcripts for better contextual answers.
✔️ Fully opt-in
✔️ You choose which tabs to share
✔️ Customizable exclude list (regex supported; see the sketch below)
✔️ No tab data ever leaves your device

⚙️ Installation & Setup

1️⃣ Install Ollama Client from the Chrome Web Store.
2️⃣ Install Ollama on your machine from https://ollama.com and run `ollama serve`.
3️⃣ Pull your favorite models (e.g., `ollama pull llama3:8b` or `ollama pull gemma:2b`) and start chatting.

Advanced users can customize themes, model parameters, prompt templates, and excluded URLs from the Options page.

🎯 Who Should Use Ollama Client?

👩‍💻 Developers building with or debugging LLMs
📚 Researchers who want local, private LLM interfaces
🎓 Students using AI as a study aid on local hardware
🔐 Privacy advocates avoiding cloud AI and APIs
🤖 AI tinkerers and open-source model enthusiasts

⚡ Performance & Hardware Recommendations

💻 8 GB RAM (no GPU): gemma:2b, mistral:7b-q4
💻 16 GB RAM (no GPU): gemma:3b-q4, gemma:2b
🚀 16 GB+ RAM with GPU (6 GB VRAM): llama3:8b-q4, gemma:3b
💥 32 GB+ RAM or high-end GPU: llama3:8b, codellama:13b
🔥 RTX 3090+ or Apple M3 Max: llama3:70b, mixtral

Note: Ollama Client is a frontend interface only. All LLM generation happens via your local Ollama install; speed and output quality depend on your system.
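To make "no API keys required" concrete, here is a minimal TypeScript sketch of the kind of request any client can send to a local Ollama server, including the sampling options the extension exposes (temperature, top_k, top_p, repeat penalty, stop sequences). The endpoint and option names come from Ollama's public REST API; the model name, parameter values, and function name are illustrative, not the extension's actual code.

```ts
// Minimal sketch: one non-streaming request to a local Ollama server.
// Endpoint and option names follow Ollama's public REST API; the model
// must already be pulled locally (e.g., `ollama pull llama3:8b`).
async function askOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3:8b",
      prompt,
      stream: false, // one JSON object instead of a token stream
      options: {
        temperature: 0.7,   // randomness of sampling
        top_k: 40,          // sample from the k most likely tokens
        top_p: 0.9,         // nucleus sampling cutoff
        repeat_penalty: 1.1,
        stop: ["</s>"],     // stop sequence(s)
      },
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  return data.response; // the generated text
}
```

With `stream: true` (Ollama's default), the server instead returns newline-delimited JSON chunks, which is what a chat UI would typically consume to render tokens as they arrive.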
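Content parsing is stated to use Mozilla Readability. As an illustration only (not the extension's actual code), extraction with the @mozilla/readability package typically looks like this:

```ts
import { Readability } from "@mozilla/readability";

// Illustrative sketch: extract the readable article text from the current
// page, e.g. inside a content script. Readability mutates the DOM it is
// given, so parse a clone rather than the live document.
function extractPageText(): string | null {
  const docClone = document.cloneNode(true) as Document;
  const article = new Readability(docClone).parse();
  return article?.textContent ?? null; // null when no article is detected
}
```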
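The "Speak" button is described as using the Web Speech API, which ships with Chromium and needs no network access. A minimal sketch of browser-side text-to-speech (the function name is made up for illustration):

```ts
// Minimal sketch: read a chat response aloud with the Web Speech API.
function speak(text: string): void {
  speechSynthesis.cancel(); // stop any utterance already in progress
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 1.0; // speaking speed; 1.0 is the browser default
  speechSynthesis.speak(utterance);
}
```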
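On the DNR point: Ollama rejects browser requests from origins it does not trust unless OLLAMA_ORIGINS is set, and extensions commonly work around this with a declarativeNetRequest rule that rewrites the Origin header. The sketch below shows one such rule via the chrome.declarativeNetRequest API; whether Ollama Client's actual rule looks exactly like this is an assumption.

```ts
// Hypothetical sketch: rewrite the Origin header on requests to the
// default Ollama port so the local server accepts them without the user
// exporting OLLAMA_ORIGINS. Requires the "declarativeNetRequest" permission.
const { ResourceType, RuleActionType, HeaderOperation } =
  chrome.declarativeNetRequest;

chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1], // replace any previous version of this rule
  addRules: [
    {
      id: 1,
      priority: 1,
      condition: {
        urlFilter: "http://localhost:11434/",
        resourceTypes: [ResourceType.XMLHTTPREQUEST],
      },
      action: {
        type: RuleActionType.MODIFY_HEADERS,
        requestHeaders: [
          {
            header: "Origin",
            operation: HeaderOperation.SET,
            value: "http://localhost:11434",
          },
        ],
      },
    },
  ],
});
```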
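The Tab Access exclude list is regex-based. A hypothetical sketch of the kind of check such a feature could run before reading a tab (the pattern list and function name are invented for illustration):

```ts
// Hypothetical sketch: skip Tab Access for any URL that matches a
// user-configured regex. These patterns are examples, not shipped defaults.
const excludePatterns: RegExp[] = [/^https:\/\/mail\.google\.com\//, /bank/i];

function isExcluded(url: string): boolean {
  return excludePatterns.some((pattern) => pattern.test(url));
}
```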
🔗 Useful Links

🌐 Chrome Web Store: https://chromewebstore.google.com/detail/ollama-client/bfaoaaogfcgomkjfbmfepbiijmciinjl
📘 Setup Guide: https://shishir435.github.io/ollama-client/ollama-setup-guide
💻 Landing Page: https://shishir435.github.io/ollama-client/ollama-client
🧑‍💻 GitHub: https://github.com/Shishir435/ollama-client
🧳 Portfolio: https://www.shishirchaurasiya.in

🚀 Start chatting in seconds: private, fast, fully local AI conversations on your own machine. Built for developers, researchers, and anyone who values speed, privacy, and full control.

5 out of 5 (7 ratings)


Details

  • Version
    0.1.10
  • Updated
    June 9, 2025
  • Offered by
    Shishir Chaurasiya
  • Size
    1.15 MiB
  • Languages
    English
  • Developer
    Shishir
    Taramani Chennai, Tamil Nadu 600036 IN
    Email
    shishirchaurasiya435@gmail.com
  • Non-trader
    This developer has not identified itself as a trader. For consumers in the European Union, please note that consumer rights do not apply to contracts between you and this developer.

Privacy

Ollama Client - Chat with Local LLM Models has disclosed the following information regarding the collection and usage of your data. More detailed information can be found in the developer's privacy policy.

Ollama Client - Chat with Local LLM Models handles the following:

User activity

This developer declares that your data is

  • Not being sold to third parties, outside of the approved use cases
  • Not being used or transferred for purposes that are unrelated to the item's core functionality
  • Not being used or transferred to determine creditworthiness or for lending purposes

Support

For help with questions, suggestions, or problems, visit the developer's support site.

Related

Ollama EasyPHPRompt

5.0(1)

Ollama EasyPHPRompt Chrome Extension

Local AI helper

0.0(0)

LocalAI provides access to web content with your authorisation, without storing it. It can be configured to use files and…

Cognito - AI Sidekick

5.0(3)

Cognito uses AI to interpret your needs, letting you guide, query, and control your Chrome browser through natural interaction.

Orian (Ollama WebUI)

3.1(21)

Quick access to your favorite local LLM from your browser (Ollama).

AI Summary Helper - OpenAI, Mistral, Ollama, Kindle Summarize Save Articles

5.0(1)

Get AI summaries of web content. Use Send To Kindle for reading on the go.

Ollama KISS UI

5.0(1)

A simple, stupid UI for Ollama. Keep It Simple, Stupid (KISS).

Page Assist - A Web UI for Local AI Models

4.9(183)

Use your locally running AI models to assist you in your web browsing.

AIskNet

5.0(2)

Locally-run AI that answers questions from your current webpage

LLM-X

0.0(0)

LLM-X (also called llmx) is a webapp and a Chrome extension. The entire codebase (no secrets!) is on GitHub. The entire webapp is…

Offline AI Chat (Ollama)

4.0(1)

Chat interface for your local Ollama AI models. Requires Ollama to be installed and running on localhost.

OpenTalkGPT - UI to access DeepSeek, Llama, and other open-source models offline.

4.7(6)

This extension hosts an Ollama UI on localhost and helps you access all open-source models.

open-os LLM Browser Extension

4.5(4)

Quick access to your favorite local LLM from your browser (Ollama).
