mlx-lm
Description:
Run LLMs with MLX
Type: Formula  |  Latest Version: 0.29.0@0  |  Tracked Since: Dec 16, 2025
Links: Homepage  |  GitHub  |  formulae.brew.sh
Stars: 3,153  |  Forks: 336  |  Language: Python  |  Category: AI/ML
Tags: llm machine-learning apple-silicon python mlx
Install: brew install mlx-lm
About:
mlx-lm is a Python library that enables efficient inference and fine-tuning of Large Language Models (LLMs) on Apple silicon using the MLX framework. It provides a simple interface to run popular open-source models locally, leveraging the performance and memory efficiency of Apple's hardware. This allows developers and researchers to experiment with LLMs without requiring specialized GPU setups.
Key Features:
  • Run popular LLMs locally on Apple silicon
  • Efficient inference using the MLX framework
  • Simple Python API for loading and generating text
  • Support for model fine-tuning
  • Optimized for memory and performance on macOS
Use Cases:
  • Local LLM inference for privacy-sensitive applications
  • Prototyping and experimenting with open-source language models on Mac
  • Educational and research purposes for understanding LLM behavior
Alternatives:
  • llama.cpp – Also runs LLMs locally on CPU/Apple Silicon, but mlx-lm is specifically built on the MLX framework for deeper Apple hardware integration.
  • transformers (by Hugging Face) – A more general library for Transformer models across many backends; mlx-lm is a specialized alternative optimized for Apple silicon via MLX.
Version History
Detected  |  Version  |  Rev  |  Change  |  Commit
Dec 19, 2025 10:04am  |  0.29.0  |  0  |  VERSION_BUMP  |  98344d65
Dec 16, 2025 6:41pm  |  |  0  |  VERSION_BUMP  |  7dfcc68f
Dec 4, 2025 4:34pm  |  |  0  |  VERSION_BUMP  |  b8999b14
Dec 4, 2025 10:37am  |  |  0  |  VERSION_BUMP  |  48391351
Oct 15, 2025 8:09pm  |  |  1  |  VERSION_BUMP  |  19194ddf
Sep 30, 2025 5:13pm  |  |  0  |  VERSION_BUMP  |  b67a41cd