openvino
Description:
Open Visual Inference And Optimization toolkit for AI inference
Type: Formula  |  Latest Version: 2025.4.0@1  |  Tracked Since: Dec 18, 2025
Links: Homepage  |  formulae.brew.sh
Category: AI/ML
Tags: ai, inference, intel, optimization, machine-learning
Install: brew install openvino
About:
OpenVINO is an open-source toolkit for optimizing and deploying AI inference models across a range of hardware. It accelerates deep learning inference on Intel platforms and supports heterogeneous execution across CPUs, GPUs, and other accelerators, enabling low latency and high throughput for computer vision and NLP applications.
Key Features:
  • Model optimization for faster inference
  • Heterogeneous execution across multiple hardware types
  • Support for ONNX, TensorFlow, and PyTorch models
  • OpenVINO Runtime for low-latency deployment (see the Python sketch after this list)
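A minimal sketch of the OpenVINO Runtime Python API, loading and running an ONNX model on the CPU. The file name model.onnx, the random input, and the assumption of a static input shape are illustrative placeholders, not part of the formula itself:

  import numpy as np
  import openvino as ov

  core = ov.Core()
  model = core.read_model("model.onnx")        # also accepts OpenVINO IR (.xml)
  compiled = core.compile_model(model, "CPU")  # device name: "CPU", "GPU", "AUTO", ...

  # Feed a dummy input shaped like the model's first input (assumed static here).
  dummy = np.random.rand(*compiled.input(0).shape).astype(np.float32)
  result = compiled([dummy])[compiled.output(0)]
  print(result.shape)

TensorFlow and PyTorch models are typically converted to OpenVINO's intermediate representation first (e.g. with ov.convert_model), after which the same read/compile/infer flow applies.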
Use Cases:
  • Real-time computer vision and video analytics
  • Natural language processing and LLM inference
  • Edge AI deployment on Intel-based devices (device-selection sketch below)
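Heterogeneous and edge deployment mostly comes down to which device string is passed at compile time. A short sketch, again with a placeholder model path, that lists the devices OpenVINO detects on the machine and lets the runtime pick one:

  import openvino as ov

  core = ov.Core()
  print(core.available_devices)  # e.g. ['CPU', 'GPU'], depending on the machine

  # "AUTO" lets the runtime choose a device; "HETERO:GPU,CPU" splits one graph across devices.
  compiled = core.compile_model("model.onnx", "AUTO")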
Alternatives:
  • TensorRT – NVIDIA-specific optimizer, while OpenVINO focuses on Intel hardware
  • ONNX Runtime – Cross-platform runtime, while OpenVINO offers Intel-specific optimizations
Version History
Detected             | Version  | Rev | Change       | Commit
Dec 18, 2025 8:41pm  | 2025.4.0 | 1   | VERSION_BUMP | e5475c1e
Oct 22, 2025 5:25am  |          | 3   | VERSION_BUMP | 18130759
Sep 11, 2025 1:46pm  |          | 0   | VERSION_BUMP | 2a5c8d0b
Jan 10, 2025 5:04am  |          | 1   | VERSION_BUMP | 33f2ea67
Jan 7, 2025 7:14pm   |          | 0   | VERSION_BUMP | 69b49721
Dec 27, 2024 4:07am  |          | 0   | VERSION_BUMP | a24f2c37
Dec 26, 2024 4:27pm  |          | 0   | VERSION_BUMP | f3fe48bb
Oct 26, 2024 9:51pm  |          | 0   | VERSION_BUMP | 34547e44