🛠️ Instructions

FastFlowLM (FLM) is a deeply optimized runtime for local LLM inference on AMD NPUs —
ultra-fast, power-efficient, and 100% offline.

Its user interface and workflow are similar to Ollama’s, but FastFlowLM is purpose-built for AMD’s XDNA architecture.

This section walks you through how to use FastFlowLM, with examples.


📚 Sections

  • System Command and CLI Mode
  • Server Mode
  • Server Basics
  • API / Client Usage
  • Open WebUI
  • LangChain RAG
  • LangChain Web Search
  • Obsidian
  • Microsoft AI Toolkit
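
To give a rough feel for client usage before diving into the sections above, the sketch below sends a chat request to a locally running FastFlowLM server through an OpenAI-compatible Python client. Treat it as an illustration only: the base URL, port, and model tag are placeholder assumptions, and the Server Basics, API / Client Usage, and Models pages document the actual endpoints and model names.

  # Minimal sketch: chat with a local FastFlowLM server via an OpenAI-compatible client.
  # The base URL, port, and model tag below are assumptions for illustration;
  # see "Server Basics" and "Models" for the real values on your machine.
  from openai import OpenAI

  client = OpenAI(
      base_url="http://localhost:11434/v1",  # assumed local endpoint; adjust to your setup
      api_key="flm",                         # local servers typically ignore the key, but the SDK requires one
  )

  response = client.chat.completions.create(
      model="llama3.2:1b",  # placeholder model tag; use a model you have available locally
      messages=[{"role": "user", "content": "Summarize what an NPU is in one sentence."}],
  )

  print(response.choices[0].message.content)

Because inference runs entirely on the local NPU, nothing in this exchange leaves the machine. The integrations covered later (Open WebUI, LangChain, Obsidian, Microsoft AI Toolkit) generally follow the same pattern of pointing a client at the local server.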