This repository contains the source code for "Bridging Compressed Image Latents and Multimodal Large Language Models", accepted at ICLR 2025. The full paper can be found here.
This paper presents the first-ever study of adapting compressed image latents to suit the needs of downstream vision tasks that adopt Multimodal Large Language Models (MLLMs). MLLMs have extended the success of large language models to modalities beyond text (e.g., images), but their billion-parameter scale hinders deployment on resource-constrained end devices. While cloud-hosted MLLMs could be available, transmitting raw, uncompressed images captured by end devices to the cloud requires an efficient image compression system. To address this, we focus on emerging neural image compression and propose a novel framework with a lightweight transform-neck and a surrogate loss to adapt compressed image latents for MLLM-based vision tasks. Given the huge scale of MLLMs, our framework excludes the entire downstream MLLM, except part of its visual encoder, from the training of our system. This stands out from most existing coding-for-machines approaches, which involve downstream networks in training and thus become impractical when those networks are MLLMs. The proposed framework is general in that it is applicable to various MLLMs, neural image codecs, and multiple application scenarios, where the neural image codec can be (1) pre-trained for human perception without updating, (2) fully updated for joint human and machine perception, or (3) fully updated for machine perception only. Extensive experiments with different neural image codecs and various MLLMs show that our method achieves strong rate-accuracy performance with much lower complexity.
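For intuition, below is a minimal PyTorch sketch of the idea: a small transform-neck maps the codec's latents into the token space consumed by the MLLM's visual encoder, and a surrogate feature-matching loss is computed against (part of) that frozen encoder, so the MLLM itself never enters training. The layer sizes, names, and the choice of an MSE surrogate are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn


class TransformNeck(nn.Module):
    """Lightweight module mapping compressed image latents to the token space
    expected by the MLLM's visual encoder (hypothetical architecture for illustration)."""

    def __init__(self, latent_channels=192, embed_dim=1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(latent_channels, embed_dim, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(embed_dim, embed_dim, kernel_size=3, padding=1),
        )

    def forward(self, latent):                    # latent: (B, C, h, w) from the codec
        feat = self.proj(latent)                  # (B, D, h, w)
        return feat.flatten(2).transpose(1, 2)    # (B, h*w, D) token sequence


def surrogate_loss(pred_tokens, target_tokens):
    """Feature-matching surrogate: align transform-neck outputs with features from
    (part of) the frozen visual encoder, keeping the full MLLM out of the training loop."""
    return nn.functional.mse_loss(pred_tokens, target_tokens)


if __name__ == "__main__":
    neck = TransformNeck()
    latent = torch.randn(1, 192, 16, 16)          # dummy codec latent
    tokens = neck(latent)
    print(tokens.shape)                           # torch.Size([1, 256, 1024])
```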
To set up the environment:

- Clone the repository and navigate to the project directory:

  ```bash
  git clone https://github.com/NYCU-MAPL/BridgingCompressionMLLM
  cd BridgingCompressionMLLM
  ```

- Create and activate a new Conda environment:

  ```bash
  conda create -n BridgingCompressionMLLM -y
  conda activate BridgingCompressionMLLM
  ```

- Install the required packages:

  ```bash
  conda install pip -y
  pip install -U pip
  pip install -e .
  pip install git+https://github.com/openai/CLIP.git
  ```
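Optionally, a quick sanity check that PyTorch and the CLIP package were installed correctly. The backbone name "ViT-B/32" is just an example; any model listed by `clip.available_models()` works.

```python
import torch
import clip

# List the CLIP backbones the package knows about.
print(clip.available_models())

# Load a small backbone just to verify the install (downloads weights on first run).
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# The visual encoder is the part reused as the MLLM front-end.
print(model.visual.__class__.__name__)
```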
To train the model with the d1 setting:
- Edit the `config/TransformNeck.yaml` file to specify the following (a quick path-check sketch is shown after this list):
  - Data paths
  - Base codec checkpoint location

- Run the training script:

  ```bash
  python examples/train.py -c config/TransformNeck.yaml
  ```
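Before launching, it can help to confirm that the configured paths resolve. A small sketch is below; the key names `data_root` and `base_codec_ckpt` are placeholders, so substitute whatever keys `config/TransformNeck.yaml` actually uses.

```python
import pathlib
import yaml  # PyYAML

with open("config/TransformNeck.yaml") as f:
    cfg = yaml.safe_load(f)

# "data_root" and "base_codec_ckpt" are placeholder key names for illustration.
for key in ("data_root", "base_codec_ckpt"):
    value = cfg.get(key) if isinstance(cfg, dict) else None
    if value is None or not pathlib.Path(str(value)).exists():
        print(f"Check the '{key}' entry in config/TransformNeck.yaml: {value!r}")
    else:
        print(f"{key}: {value} (found)")
```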
The weights of our method for the three settings (d1, d2, and d3), with four checkpoints per setting, can be found below:
| Setting | | | | |
|---|---|---|---|---|
| d1 | 1 | 2 | 3 | 4 |
| d2 | 1 | 2 | 3 | 4 |
| d3 | 1 | 2 | 3 | 4 |
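Once a checkpoint is downloaded, it can be inspected as follows, assuming the released weights are ordinary PyTorch checkpoints; the file name below is a placeholder for whichever checkpoint you downloaded.

```python
import torch

# Placeholder file name; replace with the downloaded checkpoint.
ckpt = torch.load("transform_neck_d1.pth", map_location="cpu")

# Checkpoints are typically a state dict, or a dict wrapping one; list the top-level keys.
keys = ckpt.keys() if isinstance(ckpt, dict) else []
print(list(keys)[:10])
```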
To evaluate image captioning with LLaMA-Adapter V1:

- Download the LLaMA-Adapter checkpoints from the LLaMA-Adapter Hugging Face repository. For the pre-trained LLaMA weights, please refer to the LLaMA-Adapter GitHub repository.

- Run the evaluation script:

  ```bash
  cd Inference/LLaMA-Adapter-V1
  python codec_llamaAdapter_cap.py -c config/Captioning.yaml
  ```
To evaluate few-shot image classification with the V2L-Tokenizer:

- Download the V2L-Tokenizer checkpoints from the official GitHub repository.

- Run the evaluation script:

  ```bash
  cd Inference/V2L-Tokenizer
  CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port=12345 codec_V2L_fewshot.py -c config/Classification.yaml
  ```
Our work is built on the CompressAI framework. The base codec is adopted from ELIC, and the evaluation leverages the official code from each model's respective GitHub repository. We thank the authors and contributors for releasing their code.