maestro is a streamlined tool to accelerate the fine-tuning of multimodal models.
By encapsulating best practices from our core modules, maestro handles configuration,
data loading, reproducibility, and training loop setup. It currently offers ready-to-use
recipes for popular vision-language models such as Florence-2, PaliGemma 2, and
Qwen2.5-VL.
## Fine-tune VLMs for free

| model | task and acceleration | open in colab |
|:---|:---|:---|
| Florence-2 (0.9B) | object detection with LoRA (experimental) | |
| PaliGemma 2 (3B) | JSON data extraction with LoRA | |
| Qwen2.5-VL (3B) | JSON data extraction with QLoRA | |
| Qwen2.5-VL (7B) | object detection with QLoRA (experimental) | |
## News
2025/02/05 (1.0.0): This release introduces support for Florence-2, PaliGemma 2, and Qwen2.5-VL and includes LoRA, QLoRA, and graph freezing to keep hardware requirements in check. It offers a single CLI/SDK to reduce code complexity, and a consistent JSONL format to streamline data handling.
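As a sketch of how such a line-delimited annotation file works, the snippet below writes and re-reads a tiny JSONL dataset. The field names (`image`, `prefix`, `suffix`) and file contents are illustrative assumptions, not taken from the release notes; consult the documentation for the exact schema.

```python
import json

# Hypothetical annotation records; the keys "image", "prefix",
# and "suffix" are assumed for illustration only.
records = [
    {"image": "invoice_001.png", "prefix": "extract data in JSON", "suffix": '{"total": "12.50"}'},
    {"image": "invoice_002.png", "prefix": "extract data in JSON", "suffix": '{"total": "7.00"}'},
]

# JSONL stores one JSON object per line, which makes datasets easy
# to stream, append to, and inspect with line-oriented tools.
jsonl = "\n".join(json.dumps(r) for r in records)

# Reading it back is a line-by-line json.loads.
parsed = [json.loads(line) for line in jsonl.splitlines()]
```

Because each record is independent, the same parsing loop works whether the file holds ten examples or ten million.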
## Quickstart
### Install
To begin, install the model-specific dependencies. Since some models have conflicting requirements, we recommend creating a dedicated Python environment for each model.
```bash
pip install "maestro[paligemma_2]"
```
### CLI
Kick off fine-tuning with our command-line interface, which leverages the configuration
and training routines defined in each model’s core module. Simply specify key parameters such as
the dataset location, number of epochs, batch size, optimization strategy, and metrics.
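A minimal invocation might look like the following. The subcommand and flag names are assumptions mirroring the parameters listed above and may differ between releases, so check `maestro --help` for the options your installed version accepts:

```shell
# Hedged sketch: flag names mirror the parameters described above
# and may vary by version; verify with `maestro --help`.
maestro paligemma_2 train \
  --dataset "dataset/location" \
  --epochs 10 \
  --batch-size 4 \
  --optimization_strategy "qlora" \
  --metrics "edit_distance"
```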
### SDK

For greater control, use the Python API to fine-tune your models.
Import the train function from the corresponding module and define your configuration
in a dictionary. The core modules take care of reproducibility, data preparation,
and training setup.
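A sketch of this flow is shown below. It assumes maestro is installed (see the Install step) and that the module path and configuration keys match your version; both are recalled from the project's examples rather than guaranteed, so adjust them to the documentation for your release.

```python
# Hedged sketch: module path and config keys are assumptions and
# may differ in your installed version of maestro.
from maestro.trainer.models.paligemma_2.core import train

config = {
    "dataset": "dataset/location",      # path to a JSONL-formatted dataset
    "epochs": 10,
    "batch_size": 4,
    "optimization_strategy": "qlora",   # e.g. "lora" or "qlora"
    "metrics": ["edit_distance"],
}

train(config)
```

Keeping the whole run description in one dictionary makes it easy to version-control experiments and swap optimization strategies without touching the training code.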
We appreciate your input as we continue refining maestro. Your feedback is invaluable in guiding our improvements. To learn how you can help, please check out our Contributing Guide.
If you have any questions or ideas, feel free to start a conversation in our GitHub Discussions.
Thank you for being a part of our journey!