
FlowFRAM Connector


Overview

Connect FlowFRAM to local or remote Runtime and Ollama instances

FlowFRAM Connector bridges the FlowFRAM web application with your local services, enabling two powerful capabilities from a single extension:

▶ RUNTIME PROXY — Run high-performance FRAM (Functional Resonance Analysis Method) simulations on your own machine or private network using FlowFRAM's distributed runtime agent.

▶ OLLAMA PROXY — Use your local Ollama LLM instance for AI-assisted analysis directly from flowfram.com. No API keys needed — your models, your hardware, your data.

🚀 Key Features:
- Dual-purpose: Runtime + Ollama proxy in a single extension
- Seamless connection between FlowFRAM and local runtime agents
- Local LLM inference via Ollama (generate, chat, list models)
- Real-time flow deployment and execution monitoring via SSE
- Tabbed configuration: separate settings for Runtime and Ollama
- Visual badge indicator: R (Runtime), O (Ollama), R·O (both), or red ! (error)
- Automatic connection status detection
- Origin header stripping for full Ollama compatibility
- On-demand host permissions — only requests access when needed
- Secure local communication without exposing data to external servers

💡 Why use a local runtime?
- Execute thousands of simulation iterations per second
- Access local APIs, databases, and network resources
- Keep sensitive data on your own infrastructure
- Work offline once connected

🧠 Why use local Ollama?
- Run AI analysis on your own hardware — no cloud API costs
- Full privacy: your prompts and data never leave your machine
- Use any Ollama-supported model (Llama 3, Mistral, Gemma, Phi, etc.)
- No API keys or subscriptions required

🔧 How it works:
1. Install the FlowFRAM Runtime Agent on your machine (Docker Hub image available) and/or install Ollama with a model
2. Install this extension and configure the service URLs (defaults: localhost:3010 for Runtime, localhost:11434 for Ollama)
3. Open FlowFRAM — the extension automatically bridges your browser to local services, bypassing CORS restrictions

📋 Requirements:
- For Runtime: FlowFRAM Runtime Agent running locally or on your network (Docker: cgoudouris/flowfram-runtime)
- For Ollama: Ollama installed with at least one model pulled (e.g., ollama pull llama3)
- Both services are optional — enable only what you need

🔒 Privacy:
- No data collection, no analytics, no tracking
- Configuration stored locally via chrome.storage.local
- All communication stays between your browser and your local services

This extension is part of PhD research on complex systems modeling using the FRAM methodology. Learn more at https://flowfram.com/agent
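The setup steps above can be sketched as shell commands. This is a setup-command fragment, not a tested script: the Docker image name, the `ollama pull` example, and the default ports come from the listing, but the runtime agent's port mapping and its health-check path are assumptions.

```shell
# 1. Start the FlowFRAM Runtime Agent (image name from the listing).
#    Mapping container port 3010 to the host is an assumption based on
#    the extension's stated default of localhost:3010.
docker run -d --name flowfram-runtime -p 3010:3010 cgoudouris/flowfram-runtime

# 2. Install Ollama and pull at least one model (example from the listing).
ollama pull llama3

# 3. Sanity-check that both services answer on the extension's default URLs.
curl http://localhost:3010/            # Runtime agent (exact path is an assumption)
curl http://localhost:11434/api/tags   # Ollama's model-listing endpoint
```

If either service runs on another host or port, update the corresponding URL in the extension's tabbed configuration instead.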

Details

  • Version
    2.0.0
  • Updated
    February 27, 2026
  • Offered by
    cesar.goudouris
  • Size
    29.36KiB
  • Languages
    English
  • Developer
    Email
    cesar.goudouris@gmail.com
  • Non-trader
    This developer has not identified itself as a trader. For consumers in the European Union, please note that consumer rights do not apply to contracts between you and this developer.

Privacy

The developer has disclosed that it will not collect or use your data. To learn more, see the developer’s privacy policy.

This developer declares that your data is

  • Not being sold to third parties, outside of the approved use cases
  • Not being used or transferred for purposes that are unrelated to the item's core functionality
  • Not being used or transferred to determine creditworthiness or for lending purposes

Support

For help with questions, suggestions, or problems, visit the developer's support site

Related

AutoNest By CSVNest

4.7

Automate prompt submission and image download workflow

Network Request Analyzer

5.0

Analyzes the timing of all network requests on the current page

NativeMind: Your fully private, open-source, on-device AI assistant

4.2

NativeMind connects to local LLMs via Ollama to bring powerful AI into your browser — with zero data sent to the cloud.

Local LLM Helper

1.8

Interact with your local LLM server directly from your browser.

FLUF Connect Utility Extension

5.0

Supports the functionality of FLUF Connect (fluf.io/connect)

Bad Connection Simulator

4.5

Get out of calls by simulating a bad connection

Internet Connection Monitor

4.4

Monitor and test Internet connectivity. Detect and log when Internet doesn't work even with operating LAN (Wi-Fi or Ethernet)

Local LLM

5.0

Use Local LLM extension: run llm locally (LLama 70B or DeepSeek with WebLLM + Gemini Nano), ask ai models on your tabs - private ai.

Snap Links

3.5

Streamline your workflow with Snap Links extension, effortlessly managing application links and opening links with a simple way.

FluxDown

5.0

Intercept browser downloads and send to FluxDown app for high-speed downloading

Ollama Client - Chat with Local LLM Models

4.7

Local-first Chrome extension for private LLM chat with Ollama, LM Studio, and llama.cpp, including local RAG workflows.

All API Hub – AI 中转站 & New API 管理器

5.0

One-stop management of New API-compatible relay-site accounts: balance/usage dashboard, automatic check-in, one-click key export to common apps, in-page API availability testing, and channel/model sync and redirection
