Projects — Taiwei Shi
Things I do, including research, academic course projects, and miscellaneous interests.
Research
Research publications in natural language processing, computational social science, and machine learning.
A benchmark assessing the steerability of large language models using Reddit communities across 30 subreddit pairs in 19 domains.
EMNLP, 2025
Enabling LLMs to Reason About Uncertainty
EMNLP Findings, 2025
Introducing computer-using agents with coding as actions, a novel paradigm for task automation.
Preprint, 2025
Efficient Reinforcement Finetuning via Adaptive Curriculum Learning
Preprint, 2025
Discovering Knowledge Deficiencies of Language Models on Massive Knowledge Base
COLM, 2025
On the Trustworthiness of Generative Foundation Models: Guideline, Benchmark, and Perspective
Preprint, 2025
Detecting and Filtering Unsafe Training Data via Data Attribution
Preprint, 2025
Aligning LLMs with In-Situ User Interactions and Feedback
Preprint, 2024
Exposure to only a small number of ideologically driven samples significantly alters the ideology of LLMs
EMNLP, 2024
Can Language Model Moderators Improve the Health of Online Discourse?
NAACL, 2024
Safer-Instruct: Aligning Language Models with Automated Preference Data
NAACL, 2024
Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation
EMNLP, 2023
Combining Symbolic and Neural Story Generation
AAAI Creative AI Workshop, 2023
Investigating AAVE in Question Answering Systems
GT Thesis, 2023