Projects - Daize Dong

You can find more of my projects on my GitHub Page.
Deep Learning
LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training [GitHub] | [Paper]
A series of open-sourced Mixture-of-Experts (MoE) models based on LLaMA 2 and SlimPajama.
LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training [GitHub] | [Paper]
A series of open-sourced Mixture-of-Experts (MoE) models based on LLaMA 3.
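For readers unfamiliar with the MoE idea behind both projects: a Mixture-of-Experts layer replaces one dense feed-forward block with several "expert" blocks, of which only the top-scoring few are run per input. The sketch below is a minimal, illustrative top-k gating step in plain Python — it is not taken from the LLaMA-MoE codebase, and all names in it are hypothetical.

```python
import math

def topk_moe(x, experts, gate_weights, k=2):
    """Route input vector x through the top-k experts by gate score.

    x            : list of floats (a token representation)
    experts      : list of callables, each mapping a vector to a vector
    gate_weights : one weight vector per expert; score_i = dot(w_i, x)
    """
    # 1. Score every expert with a linear gate.
    scores = [sum(w_j * x_j for w_j, x_j in zip(w, x)) for w in gate_weights]
    # 2. Keep only the k best-scoring experts (sparse activation).
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:k]
    # 3. Softmax over the selected scores to get mixing probabilities.
    m = max(scores[i] for i in top)
    exps = {i: math.exp(scores[i] - m) for i in top}
    z = sum(exps.values())
    # 4. Weighted sum of the selected experts' outputs.
    out = [0.0] * len(x)
    for i in top:
        y = experts[i](x)
        p = exps[i] / z
        out = [o + p * y_j for o, y_j in zip(out, y)]
    return out
```

Because only k of the experts execute per token, a model can grow its parameter count with the number of experts while keeping per-token compute roughly constant — the core trade-off these LLaMA-MoE projects explore.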
Other Stuff
ChatGPT ArXiv Paper Assistant [GitHub]
A ChatGPT/Gemini/DeepSeek-based personalized arXiv assistant bot that automatically filters papers.
Easier PS and SoP [GitHub]
A LaTeX template for writing Personal Statements (PS) and Statements of Purpose (SoP) for graduate school applications.