djl-serving
Description:
This module contains a universal model serving implementation.
Type: Formula  |  Latest Version: 0.35.0@0  |  Tracked Since: Dec 17, 2025
Links: Homepage  |  @DeepJavaLibrary  |  formulae.brew.sh
Category: AI/ML
Tags: ai machine-learning model-serving inference deep-learning mlops
Install: brew install djl-serving
About:
DJL Serving is a high-performance, open-source model serving framework designed for seamless deployment of deep learning models. It provides a universal serving solution that supports multiple deep learning engines including PyTorch, TensorFlow, and MXNet. The tool simplifies model deployment with built-in REST APIs, batch processing capabilities, and dynamic loading for efficient inference at scale.
Key Features:
  • Universal model serving across multiple frameworks
  • High-performance inference with batching support
  • REST API for easy integration
  • Dynamic model loading and versioning
  • Built-in metrics and monitoring
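The REST API above can be exercised with any HTTP client. The sketch below builds an inference request in Python; the endpoint layout (POST /predictions/<model> on port 8080) is an assumption based on common model-server conventions, so verify it against your deployment's configuration before relying on it.

```python
import json
from urllib import request

def build_inference_request(model_name, payload, host="localhost", port=8080):
    """Build a POST request for a DJL Serving inference endpoint.

    The /predictions/<model> path and default port are assumptions for
    illustration, not confirmed by this page.
    """
    url = f"http://{host}:{port}/predictions/{model_name}"
    body = json.dumps(payload).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: prepare a request for a hypothetical "resnet18" model.
req = build_inference_request("resnet18", {"inputs": [[0.1, 0.2, 0.3]]})
print(req.full_url)
```

Sending the prepared request is then one call: `request.urlopen(req)` returns the model's prediction as the response body.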
Use Cases:
  • Deploying trained ML models for production inference
  • Building scalable AI-powered APIs and services
  • A/B testing different model versions
  • Batch processing for large-scale predictions
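Dynamic model loading and A/B testing typically go through a management endpoint that registers a new model version at runtime. The following is a minimal sketch of constructing such a registration URL; the /models path and the `url` query parameter are assumptions for illustration, so consult the DJL Serving documentation for the actual management API.

```python
from urllib.parse import urlencode

def build_register_url(model_url, host="localhost", port=8080):
    """Build a management-API URL for registering a model archive.

    The /models endpoint and its query parameter are hypothetical here,
    modeled on common model-server management APIs.
    """
    query = urlencode({"url": model_url})
    return f"http://{host}:{port}/models?{query}"

# Example: register a second model version for A/B comparison.
register_url = build_register_url("https://example.com/models/resnet18-v2.zip")
print(register_url)
```

Because the archive URL is percent-encoded into the query string, the same helper works for local file paths and remote archives alike.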
Alternatives:
  • TensorFlow Serving – TensorFlow-specific, while DJL Serving is framework-agnostic
  • TorchServe – PyTorch-only, DJL Serving supports multiple frameworks
  • BentoML – Broader MLOps platform, DJL Serving focuses on serving performance
Version History
Detected             | Version Rev | Change       | Commit
Dec 14, 2025 7:44pm  | 0           | VERSION_BUMP | fa5a8a24
Oct 30, 2025 11:19pm | 0           | VERSION_BUMP | be042681
Oct 1, 2025 1:38pm   | 0           | VERSION_BUMP | 505bff18
Nov 19, 2024 12:45am | 0           | VERSION_BUMP | cf96e072
Oct 22, 2024 12:40am | 0           | VERSION_BUMP | ea9da569