Analysis360 provides open reference implementations for a variety of downstream analyses
that can be done with and for LLM360 models, covering a range of topics including:
mechanistic interpretability, visualization, machine unlearning, data memorization, AI
safety, toxicity & bias assessment, and a large set of evaluation metrics
(Open LLM Leaderboard metrics, language & code metrics, perplexity evaluation, and more).
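As a flavor of the evaluation side, here is a minimal sketch of a perplexity computation using the Hugging Face transformers library; the LLM360/Amber checkpoint and the example text are illustrative placeholders, and the maintained evaluation code lives in the analysis subfolders.

```python
# Minimal perplexity sketch (illustrative, not the repo's evaluation code),
# assuming the transformers library and a public LLM360 checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "LLM360/Amber"  # assumption: any causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

text = "Open, transparent training is essential for reproducible LLM research."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing input_ids as labels makes the model return the mean cross-entropy loss
    outputs = model(**inputs, labels=inputs["input_ids"])

perplexity = torch.exp(outputs.loss)
print(f"Perplexity: {perplexity.item():.2f}")
```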
Quick Start
Each subfolder of analysis contains installation instructions, documentation, and a
demo notebook showing how to use the corresponding analysis tool.
Experiments and demos in all subfolders use Python 3.11.
Tip
Dive into each subfolder of analysis and find the demo.ipynb notebook. Have fun!
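As a minimal sketch of that layout (assuming the repository is checked out locally with the analysis directory at its root), the demo notebooks can be enumerated like this:

```python
from pathlib import Path

# List every demo notebook shipped with the analyses; the analysis/ path and
# demo.ipynb filename come from the Quick Start text above.
for notebook in sorted(Path("analysis").glob("*/demo.ipynb")):
    print(notebook)
```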
Citation
If you use our results in your work, please cite the LLM360 overview paper.
@article{liu2023llm360,
  title={LLM360: Towards Fully Transparent Open-Source LLMs},
  author={Liu, Zhengzhong and Qiao, Aurick and Neiswanger, Willie and Wang, Hongyi and Tan, Bowen and Tao, Tianhua and Li, Junbo and Wang, Yuqi and Sun, Suqi and Pangarkar, Omkar and Fan, Richard and Gu, Yi and Miller, Victor and Zhuang, Yonghao and He, Guowei and Li, Haonan and Koto, Fajri and Tang, Liping and Ranjan, Nikhil and Shen, Zhiqiang and Ren, Xuguang and Iriondo, Roberto and Mu, Cun and Hu, Zhiting and Schulze, Mark and Nakov, Preslav and Baldwin, Tim and Xing, Eric},
  year={2023}
}