🧩 Model Card: LiquidAI/LFM2-1.2B
- Type: Text-to-Text
- Think: No
- Base Model: LiquidAI/LFM2-1.2B
- Quantization: Q4_0
- Max Context Length: 32k tokens
- Default Context Length: 32k tokens
▶️ Run with FastFlowLM in PowerShell:

```powershell
flm run lfm2:1.2b
```
🧩 Model Card: LiquidAI/LFM2-2.6B
- Type: Text-to-Text
- Think: No
- Base Model: LiquidAI/LFM2-2.6B
- Quantization: Q4_0
- Max Context Length: 32k tokens
- Default Context Length: 32k tokens
▶️ Run with FastFlowLM in PowerShell:

```powershell
flm run lfm2:2.6b
```