To download the color images, sparse annotations, and segmentation masks for the dataset, please use the links in the FaceSynthetics repository; a minimal loading sketch is given after the download list below.
Our dataset has been generated for a warm and a cold condition. Each dataset can be downloaded separately as:
- A small sample with 100 images from here (warm) and here (cold)
- A medium sample with 1,000 images from here (warm) and here (cold)
- The full dataset with 100,000 images from here (warm) and here (cold)
- The dense annotations are available from here
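For illustration, here is a minimal sketch for loading one downloaded frame. It assumes the FaceSynthetics-style layout of one color image, one segmentation mask, and one landmark text file per frame; the file names are placeholders, not guaranteed paths.
```python
import cv2
import numpy as np

# Hypothetical file names; adjust them to the layout of the downloaded archive.
frame = "000000"
rgb = cv2.imread(f"{frame}.png")                             # H x W x 3 color image
seg = cv2.imread(f"{frame}_seg.png", cv2.IMREAD_GRAYSCALE)   # per-pixel class labels
ldmks = np.loadtxt(f"{frame}_ldmks.txt")                     # sparse 2D landmarks, one "x y" row per point

print(rgb.shape, seg.shape, ldmks.shape)
```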
A landmarker trained on the T-FAKE dataset is available via pip and from this repository: thermal-face-alignment.
Install and run:
```bash
pip install thermal-face-alignment
```
```python
import cv2
from tfan import ThermalLandmarks

# Read a thermal image (normalized grayscale or temperature values)
image = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)

# Initialize the landmarker (downloads weights on first use)
landmarker = ThermalLandmarks(device="cpu", n_landmarks=478)
landmarks, confidences = landmarker.process(image)
```
Predicted 70- and 478-point landmarks on an example from the BU-TIV benchmark.
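As a quick check of the output, the sketch below draws the predicted points on the input image. It assumes that `landmarks` is an (N, 2) array of (x, y) pixel coordinates and `confidences` an array of per-point scores in [0, 1]; the threshold and output file name are arbitrary choices for this example.
```python
import cv2

# Draw confident landmark predictions on the grayscale input from above
vis = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
for (x, y), c in zip(landmarks, confidences):
    if c > 0.5:  # arbitrary confidence threshold
        cv2.circle(vis, (int(round(x)), int(round(y))), 2, (0, 0, 255), -1)
cv2.imwrite("thermal_landmarks.png", vis)
```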
The thermalization models can be downloaded from here.
Our baseline U-Net translation model is imported from the segmentation_models_pytorch library. Specifically, we define the translator as follows:
```python
import segmentation_models_pytorch as smp

translator = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,
    activation="sigmoid",
)
```
This model is based on a U-Net architecture with a ResNet-34 encoder pre-trained on ImageNet. It takes three-channel RGB input images and outputs a single-channel thermal image with a sigmoid activation function. For the training progress of the thermalization model, see ThermalizationCode/ThermalizerOutput.ipynb.
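For reference, here is a minimal inference sketch showing how such a translator can be applied to a single RGB image. The ImageNet normalization, the requirement that height and width be divisible by 32, and the file names are assumptions for this example, not part of the released training code.
```python
import cv2
import numpy as np
import torch

# Load an RGB face image (placeholder file name); H and W are assumed divisible by 32
rgb = cv2.cvtColor(cv2.imread("face.png"), cv2.COLOR_BGR2RGB)
x = torch.from_numpy(rgb).float().permute(2, 0, 1) / 255.0  # 3 x H x W in [0, 1]

# ImageNet statistics to match the pre-trained ResNet-34 encoder (assumption)
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
x = ((x - mean) / std).unsqueeze(0)                         # 1 x 3 x H x W

translator.eval()
with torch.no_grad():
    thermal = translator(x)                                 # 1 x 1 x H x W, sigmoid output in [0, 1]

cv2.imwrite("thermal_pred.png", (thermal.squeeze().numpy() * 255).astype(np.uint8))
```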
To run the benchmark, you have to download the CHARLOTTE ThermalFace dataset.
This dataset and the landmarking methods are licensed under the Attribution-NonCommercial-ShareAlike 4.0 International license, as the dataset is derived from the FaceSynthetics dataset.
If you use this code for your own work, please cite our paper:
P. Flotho, M. Piening, A. Kukleva and G. Steidl, “T-FAKE: Synthesizing Thermal Images for Facial Landmarking,” Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025. CVF Open Access
BibTeX entry:
```bibtex
@InProceedings{tfake2025_CVPR,
    author    = {Flotho, Philipp and Piening, Moritz and Kukleva, Anna and Steidl, Gabriele},
    title     = {T-FAKE: Synthesizing Thermal Images for Facial Landmarking},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {26356-26366}
}
```

