After downloading, create three folders (train/, test/, dev/) in the project directory and extract the corresponding dataset archives into their respective folders.
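That setup can be scripted. A minimal sketch, assuming the archives are zip files named after each split (the actual archive names from the download may differ):

```python
from pathlib import Path
import zipfile

# Archive names are assumptions - substitute the real filenames from the download.
for split in ("train", "test", "dev"):
    folder = Path(split)
    folder.mkdir(exist_ok=True)      # train/, test/, dev/ in the project root
    archive = Path(f"{split}.zip")
    if archive.exists():             # extract only when the archive is present
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(folder)
```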
Preprocessing
We rely on the MPEG G-PCC codec to obtain LOD (Level of Detail) structures. For convenience, we provide a precompiled and modified codec for both Windows and Linux. Below are the steps for processing files in the train/ directory on a Linux system:
Place run-train.sh, tmc3_lod, and encoder.cfg into the train/ directory.
Execute run-train.sh to generate LOD information.
Edit the second line of pdata.py so it references "train_list.py".
Run pdata.py to validate and convert the LOD information into .npy format.
Repeat the same process for the test/ and dev/ directories.
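The per-split edit in step 3 can be scripted rather than done by hand. A sketch, assuming line 2 of pdata.py holds only the quoted list name (check the file before relying on this); the demo runs on a stand-in file so it is self-contained:

```python
from pathlib import Path

def set_list_file(pdata_path: str, list_name: str) -> None:
    """Rewrite the second line of a pdata.py-style script to reference list_name."""
    path = Path(pdata_path)
    lines = path.read_text().splitlines()
    lines[1] = f'"{list_name}"'   # line 2 (index 1); exact format is an assumption
    path.write_text("\n".join(lines) + "\n")

# Demo on a hypothetical stand-in file, not the real pdata.py:
Path("pdata_demo.py").write_text('# config\n"test_list.py"\nprint("run")\n')
set_list_file("pdata_demo.py", "train_list.py")
print(Path("pdata_demo.py").read_text().splitlines()[1])  # → "train_list.py"
```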
For details on how MPEG generates LOD structures, refer to the buildPredictorsFast function in the MPEG G-PCC source code.
Encoding/Decoding
To use the pretrained models, extract the pretrained model files into the pretrain/ directory and install the modified torchac. You can then run python encode.py and python decode.py to encode and decode.
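Because the codec is lossless, a quick sanity check after a round trip is to compare the decoded output byte-for-byte against the original. A minimal sketch; the file names here are hypothetical stand-ins, not paths produced by encode.py/decode.py:

```python
import hashlib
from pathlib import Path

def same_payload(a: str, b: str) -> bool:
    """Byte-wise comparison via SHA-256, suitable for verifying lossless round trips."""
    digest_a = hashlib.sha256(Path(a).read_bytes()).digest()
    digest_b = hashlib.sha256(Path(b).read_bytes()).digest()
    return digest_a == digest_b

# Demo with stand-in files (real use: original vs. decoded attribute data)
Path("orig.bin").write_bytes(b"\x01\x02\x03")
Path("rec.bin").write_bytes(b"\x01\x02\x03")
print(same_payload("orig.bin", "rec.bin"))  # → True
```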
Training
To train the model from scratch, perform the preprocessing described above and then simply run python train.py.
About
LOD-PCAC: Level-of-detail-based Deep Lossless Point Cloud Attribute Compression (TIP 2025)