🧩 Model Card: LiquidAI/LFM2-1.2B
- Type: Text-to-Text
- Think: No
- Tool Calling Support: No
- Base Model: LiquidAI/LFM2-1.2B
- Quantization: Q4_0
- Max Context Length: 32k tokens
- Default Context Length: 32k tokens (change default)
- Set Context Length at Launch (see the sketch after this card)
▶️ Run with FastFlowLM in PowerShell:
flm run lfm2:1.2b
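The "Set Context Length at Launch" entry above refers to overriding the 32k default when starting a model. A minimal sketch of what that launch might look like; the `--ctx` flag name is a hypothetical placeholder, not confirmed against the FastFlowLM CLI, so check `flm help` for the actual option:

```powershell
# Hypothetical flag name -- verify with `flm help` before relying on it.
# Launch the 1.2B model with an 8k context window instead of the 32k default.
flm run lfm2:1.2b --ctx 8192
```

The same pattern would apply to every model card below; only the model tag changes.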
🧩 Model Card: LiquidAI/LFM2-2.6B
- Type: Text-to-Text
- Think: No
- Tool Calling Support: No
- Base Model: LiquidAI/LFM2-2.6B
- Quantization: Q4_0
- Max Context Length: 32k tokens
- Default Context Length: 32k tokens (change default)
- Set Context Length at Launch
▶️ Run with FastFlowLM in PowerShell:
flm run lfm2:2.6b
🧩 Model Card: LiquidAI/LFM2-2.6B-Transcript
- Type: Text-to-Text
- Think: No
- Tool Calling Support: No
- Base Model: LiquidAI/LFM2-2.6B-Transcript
- Quantization: Q4_0
- Max Context Length: 32k tokens
- Default Context Length: 32k tokens (change default)
- Set Context Length at Launch
▶️ Run with FastFlowLM in PowerShell:
flm run lfm2-trans:2.6b
⚠️ This model is intended for single-turn conversations with a specific format, described here.
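Since the transcript model expects a single self-contained turn, a hedged request sketch may help. It assumes `flm serve` exposes an OpenAI-compatible `/v1/chat/completions` endpoint on port 11434; the port and path are assumptions to verify against the FastFlowLM docs, and the placeholder transcript content is illustrative only:

```powershell
# Assumption: `flm serve` starts a local OpenAI-compatible server (port/path may differ).
flm serve lfm2-trans:2.6b

# In a second PowerShell window: send exactly one user turn -- no follow-up messages.
$body = @{
    model    = "lfm2-trans:2.6b"
    messages = @(
        @{ role = "user"; content = "<transcript text, formatted as the model expects>" }
    )
} | ConvertTo-Json -Depth 5

$resp = Invoke-RestMethod -Uri "http://localhost:11434/v1/chat/completions" `
    -Method Post -ContentType "application/json" -Body $body
$resp.choices[0].message.content
```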
🧩 Model Card: LiquidAI/LFM2.5-1.2B-Instruct
- Type: Text-to-Text
- Think: No
- Tool Calling Support: No
- Base Model: LiquidAI/LFM2.5-1.2B-Instruct
- Quantization: Q4_0
- Max Context Length: 32k tokens
- Default Context Length: 32k tokens (change default)
- Set Context Length at Launch
▶️ Run with FastFlowLM in PowerShell:
flm run lfm2.5-it:1.2b
🧩 Model Card: LiquidAI/LFM2.5-1.2B-Thinking
- Type: Text-to-Text
- Think: Yes
- Tool Calling Support: Yes (see the sketch after this card)
- Base Model: LiquidAI/LFM2.5-1.2B-Thinking
- Quantization: Q4_0
- Max Context Length: 32k tokens
- Default Context Length: 32k tokens (change default)
- Set Context Length at Launch
▶️ Run with FastFlowLM in PowerShell:
flm run lfm2.5-tk:1.2b
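Because this is the only model above with both thinking and tool-calling support, a hedged sketch of an OpenAI-style tool-calling request follows. It assumes the same OpenAI-compatible server endpoint as in the transcript example; the `get_weather` tool is a hypothetical illustration, and whether the server honors the standard `tools` field should be verified against the FastFlowLM docs:

```powershell
# Assumption: the server accepts the standard OpenAI `tools` field.
$body = @{
    model    = "lfm2.5-tk:1.2b"
    messages = @(@{ role = "user"; content = "What's the weather in Tokyo right now?" })
    tools    = @(
        @{
            type     = "function"
            function = @{
                name        = "get_weather"   # hypothetical tool, for illustration only
                description = "Get the current weather for a city"
                parameters  = @{
                    type       = "object"
                    properties = @{ city = @{ type = "string" } }
                    required   = @("city")
                }
            }
        }
    )
} | ConvertTo-Json -Depth 8

$resp = Invoke-RestMethod -Uri "http://localhost:11434/v1/chat/completions" `
    -Method Post -ContentType "application/json" -Body $body

# A tool-capable model should answer with a tool_calls entry rather than plain text.
$resp.choices[0].message.tool_calls
```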