By the way, you can generate your own dataset with our scripts `craft_vlm_dataset.py` and `craft_llm_dataset.py`.
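As a minimal sketch, the invocations might look like the following; the scripts may take additional arguments (output paths, model names, etc.), so check their argument parsers before running:

```bash
# Hypothetical invocations; consult each script for its actual arguments.
python craft_llm_dataset.py   # presumably builds the dataset for the LLM guide model
python craft_vlm_dataset.py   # presumably builds the dataset for the VLM guide model
```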
NOTE: Please rename the `ART_GuideModel` folder, as the LLaVA builder performs strict name matching. Please refer to this issue.
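For instance, a rename along these lines should satisfy the matching; the target folder name is an assumption, but LLaVA's loader typically expects substrings such as `llava` (and `lora` for LoRA checkpoints) in the checkpoint folder name:

```bash
# Hypothetical rename; the exact target name depends on your setup.
# LLaVA's model builder matches on the folder name, so include "llava"
# (and "lora" for LoRA checkpoints) in the renamed directory.
mv ART_GuideModel llava-lora-ART_GuideModel
```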
## Run the code
You can run the script for all categories with:
```bash
./run_art.sh
```
NOTE: Remember to change `LLAVA_LORA_PATH` to your renamed folder.
You can also modify the script to run a specific category under particular settings, such as resolution, guidance scale, and random seed; see the sketch below.
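As a sketch, the variables you would edit at the top of `run_art.sh` might look like this; only `LLAVA_LORA_PATH` is named in this README, and the remaining variable names are illustrative assumptions about the script's layout:

```bash
# Illustrative excerpt of run_art.sh; only LLAVA_LORA_PATH is confirmed,
# the other variable names are assumptions about the script's layout.
LLAVA_LORA_PATH=/path/to/llava-lora-ART_GuideModel  # your renamed folder
CATEGORY=all          # or a single category
RESOLUTION=512        # image resolution
GUIDANCE_SCALE=7.5    # classifier-free guidance scale
SEED=0                # random seed
```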
The code requires four GPUs; GPU indices start from 0 in our code.
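If your machine has more than four GPUs, you can pin the run to four of them with `CUDA_VISIBLE_DEVICES`, for example:

```bash
# Expose exactly four GPUs (re-indexed 0-3 inside the process) to the run.
CUDA_VISIBLE_DEVICES=0,1,2,3 ./run_art.sh
```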
## Generate the results
You can run the following script to generate the results:
```bash
./run_image_generation.sh
```
Before that, set `seed_list` in `generate_images.py` to the seeds used in the previous step, and update the data path as well.
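A quick way to keep the seeds in sync is a one-line edit, e.g. with `sed`; the pattern assumes `seed_list` is a top-level assignment in each script, and the seed values shown are placeholders, so adjust both to your setup. The same edit applies to `summarize_results.py` in the evaluation step below:

```bash
# Hypothetical sketch: set seed_list in both scripts to the seeds used by run_art.sh.
# Assumes "seed_list = ..." appears as a top-level assignment; verify before running.
for f in generate_images.py summarize_results.py; do
  sed -i 's/^seed_list = .*/seed_list = [0, 1, 2]/' "$f"
done
```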
## Evaluation
You can run the following script to evaluate the results:
```bash
./run_summary.sh
```
Before that, set `seed_list` in `summarize_results.py` to the seeds used in the previous steps (the same edit as sketched above), and update the data path as well.
## License
Please follow the licenses of Lexica, Llama 3, LLaVA, and Llama 2.
The code is under the MIT license.
## About
Official Code for ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users (NeurIPS 2024)