ai lab
Recent Works
When Does Verification Pay Off? A Closer Look at LLMs as Solution Verifiers
Cross-family verification proves especially effective; post-training reduces self-improvement but strengthens cross-family improvement.
Published: 2025-12-02
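For concreteness, here is a minimal sketch of the solver/verifier loop this study examines, with the verifier drawn from a different model family than the solver. The `call_llm` stub, the prompts, and the majority-vote aggregation are illustrative placeholders, not the paper's exact protocol.

```python
from collections import Counter

def call_llm(model: str, prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion client."""
    raise NotImplementedError

def solve_with_cross_family_verifier(question: str,
                                     solver: str = "model-from-family-A",
                                     verifier: str = "model-from-family-B",
                                     n_samples: int = 8) -> str:
    # 1. Sample candidate solutions from the solver model.
    candidates = [call_llm(solver, f"Solve step by step:\n{question}")
                  for _ in range(n_samples)]

    # 2. Keep candidates that a verifier from a *different* family accepts.
    def accepted(solution: str) -> bool:
        verdict = call_llm(
            verifier,
            f"Question:\n{question}\n\nProposed solution:\n{solution}\n\n"
            "Is this solution correct? Answer YES or NO.")
        return verdict.strip().upper().startswith("YES")

    survivors = [c for c in candidates if accepted(c)] or candidates

    # 3. Majority vote over the surviving candidates' final lines.
    final_answers = [s.strip().splitlines()[-1] for s in survivors]
    return Counter(final_answers).most_common(1)[0][0]
```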
In-Context Clustering with Large Language Models
In-Context Clustering (ICC) is a flexible LLM-based procedure for clustering data from diverse distributions.
Published: 2025-10-09
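As a rough illustration of what clustering "in context" can look like, the sketch below places the data points directly in the prompt and asks the model for a grouping. The prompt format, JSON output, and `call_llm` stub are assumptions, not the ICC procedure itself.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM client."""
    raise NotImplementedError

def in_context_cluster(items: list[str], k: int) -> list[list[int]]:
    # Put every item in the context, indexed so the reply can refer back to them.
    numbered = "\n".join(f"{i}: {x}" for i, x in enumerate(items))
    prompt = (
        f"Group the following {len(items)} items into {k} clusters.\n"
        f"{numbered}\n\n"
        'Reply with JSON only: {"clusters": [[indices of cluster 1], [indices of cluster 2], ...]}'
    )
    return json.loads(call_llm(prompt))["clusters"]
```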
Local Reinforcement Learning with Action-Conditioned Root Mean Squared Q-Functions
Action-Conditioned Root Mean Squared Q-Functions (ARQ) is a novel backprop-free value estimation method that applies a goodness function and action conditioning for local reinforcement learning.
Published: 2025-10-08
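As a toy illustration of the idea named in the summary (a root-mean-square goodness over action-conditioned activations, trained with purely local updates), the sketch below fits a single layer with a TD-style regression whose gradients never cross layer boundaries. The dimensions, target, and update rule are assumptions, not the ARQ algorithm.

```python
import torch

class LocalRMSQLayer(torch.nn.Module):
    """One layer that scores (state, action) pairs by the RMS of its activations."""

    def __init__(self, state_dim: int, n_actions: int, hidden_dim: int = 128):
        super().__init__()
        self.n_actions = n_actions
        self.fc = torch.nn.Linear(state_dim + n_actions, hidden_dim)

    def q_value(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # Condition on the action by concatenating its one-hot encoding (action is a LongTensor).
        a = torch.nn.functional.one_hot(action, self.n_actions).float()
        h = torch.relu(self.fc(torch.cat([state, a], dim=-1)))
        return h.pow(2).mean(dim=-1).sqrt()          # RMS "goodness" as Q(s, a)

def local_td_step(layer, optimizer, s, a, r, s_next, gamma=0.99):
    """TD-style regression whose gradients stay inside this one layer."""
    with torch.no_grad():
        all_a = torch.arange(layer.n_actions).expand(s_next.shape[0], -1)
        q_next = torch.stack(
            [layer.q_value(s_next, all_a[:, i]) for i in range(layer.n_actions)],
            dim=-1).max(dim=-1).values
        target = r + gamma * q_next
    loss = (layer.q_value(s, a) - target).pow(2).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```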
Midway Network: Learning Representations for Recognition and Motion from Latent Dynamics
Midway Network is a new self-supervised learning architecture that learns strong visual representations for both object recognition and motion understanding solely from natural videos by modeling latent dynamics.
Published: 2025-10-07
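The sketch below shows a generic latent-dynamics objective of the kind the summary alludes to: predict the next frame's latent rather than its pixels. The architecture and stop-gradient target are illustrative assumptions, not the Midway Network itself.

```python
import torch

class LatentDynamicsSSL(torch.nn.Module):
    """Generic latent-dynamics objective: predict the next frame's latent, not its pixels."""

    def __init__(self, encoder: torch.nn.Module, latent_dim: int):
        super().__init__()
        self.encoder = encoder
        self.dynamics = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, latent_dim), torch.nn.ReLU(),
            torch.nn.Linear(latent_dim, latent_dim))

    def loss(self, frame_t: torch.Tensor, frame_t1: torch.Tensor) -> torch.Tensor:
        z_t = self.encoder(frame_t)
        with torch.no_grad():                      # simple stop-gradient target
            z_t1 = self.encoder(frame_t1)
        return torch.nn.functional.mse_loss(self.dynamics(z_t), z_t1)
```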
StreamMem: Query-Agnostic KV Cache Memory for Streaming Video Understanding
StreamMem is a query-agnostic KV cache memory mechanism for streaming video understanding.
Published: 2025-08-21
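A hypothetical sketch of query-agnostic KV cache compression: keep only the key/value pairs that receive the most attention from a small set of generic proxy queries, so memory can be pruned before any user question arrives. The proxy-query scoring rule below is an assumption, not StreamMem's actual mechanism.

```python
import torch

def compress_kv(keys: torch.Tensor, values: torch.Tensor,
                proxy_queries: torch.Tensor, budget: int):
    """Keep the `budget` KV pairs that attract the most attention from generic
    proxy queries (no user question required).
    keys/values: (seq, dim); proxy_queries: (n_proxy, dim)."""
    scores = torch.softmax(proxy_queries @ keys.T / keys.shape[-1] ** 0.5, dim=-1)
    importance = scores.mean(dim=0)                    # average over proxy queries
    keep = importance.topk(min(budget, keys.shape[0])).indices.sort().values
    return keys[keep], values[keep]                    # temporal order preserved
```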
Context Tuning for In-Context Optimization
Context Tuning is a simple and effective method to significantly enhance few-shot adaptation of LLMs without fine-tuning model parameters.
Published: 2025-07-06
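One way to adapt an LLM to a few-shot task without touching its weights is to optimize the context itself, for example a soft prompt initialized from the demonstrations. The sketch below follows that prompt-tuning-style recipe under Hugging Face conventions (`inputs_embeds`, `labels`); the specifics are assumptions, not the paper's Context Tuning method.

```python
import torch

def tune_context(model, tokenizer, demos: list[tuple[str, str]],
                 steps: int = 100, lr: float = 1e-3):
    """Optimize a soft context on few-shot demos while the LLM stays frozen.
    `model` is assumed to be a causal LM exposing `get_input_embeddings()` and
    accepting `inputs_embeds`/`labels`, as Hugging Face models do."""
    embed = model.get_input_embeddings()
    for p in model.parameters():
        p.requires_grad_(False)                        # no fine-tuning of the LLM itself

    # Initialize the trainable context from the concatenated demonstrations.
    demo_text = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
    demo_ids = tokenizer(demo_text, return_tensors="pt").input_ids
    context = embed(demo_ids).detach().clone().requires_grad_(True)

    optimizer = torch.optim.Adam([context], lr=lr)
    for _ in range(steps):
        total = 0.0
        for x, y in demos:
            ids = tokenizer(f"Input: {x}\nOutput: {y}", return_tensors="pt").input_ids
            inputs = torch.cat([context, embed(ids)], dim=1)
            labels = torch.cat([torch.full(context.shape[:2], -100, dtype=torch.long),
                                ids], dim=1)           # loss only on the demo tokens
            total = total + model(inputs_embeds=inputs, labels=labels).loss
        optimizer.zero_grad(); total.backward(); optimizer.step()
    return context                                     # prepend at inference time
```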
Discrete JEPA: Learning Discrete Token Representations without Reconstruction
Discrete-JEPA extends the latent predictive coding JEPA framework with semantic tokenization and complementary objectives for symbolic reasoning tasks.
Published: 2025-06-22
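A toy sketch of predicting discrete latent tokens without reconstruction: the target view is mapped to its nearest codebook entry, and the model learns to predict that token id from the context view. The codebook handling (kept fixed here for brevity) and the objective are illustrative; Discrete-JEPA's actual tokenization and complementary objectives are described in the paper.

```python
import torch

class DiscreteLatentPredictor(torch.nn.Module):
    """Toy sketch: predict the target view's discrete token from a context view,
    entirely in latent space (no pixel reconstruction)."""

    def __init__(self, encoder, latent_dim: int, codebook_size: int = 512):
        super().__init__()
        self.encoder = encoder
        self.codebook = torch.nn.Embedding(codebook_size, latent_dim)  # fixed here for brevity
        self.predictor = torch.nn.Linear(latent_dim, codebook_size)

    def loss(self, context_view: torch.Tensor, target_view: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            z_target = self.encoder(target_view)                   # (B, D)
            dists = torch.cdist(z_target, self.codebook.weight)    # (B, K)
            token = dists.argmin(dim=-1)                           # discrete token id
        logits = self.predictor(self.encoder(context_view))        # (B, K)
        return torch.nn.functional.cross_entropy(logits, token)
```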
Memory Storyboard: Leveraging Temporal Segmentation for Streaming Self-Supervised Learning from Egocentric Videos
Memory Storyboard groups recent past frames into temporal segments and provides an effective summary of the past visual stream for memory replay.
Published: 2025-01-21
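A minimal sketch of the storyboard idea: start a new temporal segment whenever the frame features change sharply, keep one representative frame per segment, and sample replay batches from those representatives. The change threshold and representative-frame choice are assumptions, not the paper's segmentation method.

```python
import torch

class StoryboardMemory:
    """Toy sketch of temporal segmentation for replay from a streaming video."""

    def __init__(self, encoder, change_threshold: float = 0.5, capacity: int = 1000):
        self.encoder = encoder
        self.threshold = change_threshold
        self.capacity = capacity
        self.prev_feat = None
        self.segments: list[torch.Tensor] = []         # one stored frame per segment

    @torch.no_grad()
    def observe(self, frame: torch.Tensor) -> None:
        feat = torch.nn.functional.normalize(self.encoder(frame.unsqueeze(0)), dim=-1)
        # Open a new segment when cosine similarity to the previous frame drops.
        new_segment = (self.prev_feat is None or
                       1.0 - (feat * self.prev_feat).sum().item() > self.threshold)
        if new_segment:
            self.segments.append(frame)
            self.segments = self.segments[-self.capacity:]
        self.prev_feat = feat

    def replay_batch(self, batch_size: int) -> torch.Tensor:
        idx = torch.randint(len(self.segments), (batch_size,))
        return torch.stack([self.segments[i] for i in idx])
```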
Are LLMs Prescient? A Continuous Evaluation using Daily News as Oracle
Our new benchmark, Daily Oracle, automatically generates question-answer (QA) pairs from daily news, challenging LLMs to predict "future" events based on pre-training data.
Published: 2024-11-13
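A bare-bones sketch of how such a continuous forecasting evaluation can be wired up: turn each news article into a resolvable question, then score a model whose pre-training cutoff precedes the event. The data class, prompts, and stubs below are assumptions, not the Daily Oracle pipeline.

```python
from dataclasses import dataclass

@dataclass
class OracleQA:
    question: str        # e.g. "Will X happen by <date>?"
    answer: str          # resolved from the news article
    resolution_date: str

def generate_qa_from_article(article_text: str) -> OracleQA:
    """Hypothetical QA-generation step (the benchmark derives QA pairs from daily news)."""
    raise NotImplementedError

def evaluate_forecaster(call_llm, qa_pairs: list[OracleQA]) -> float:
    """Ask the model to 'predict the future' relative to its training cutoff and
    measure accuracy; the prompt wording is an assumption."""
    correct = 0
    for qa in qa_pairs:
        pred = call_llm(f"{qa.question}\nAnswer with a single word.")
        correct += pred.strip().lower() == qa.answer.strip().lower()
    return correct / len(qa_pairs)
```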
PooDLe: Pooled and Dense Self-Supervised Learning from Naturalistic Videos
We propose PooDLe, a self-supervised learning method that combines an invariance-based objective on pooled representations with a dense SSL objective that enforces equivariance to optical flow warping.
Published: 2024-08-20
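A minimal sketch of combining the two objectives described above, assuming dense feature maps for two frames and a forward optical flow: a pooled invariance term plus a dense term computed after warping with the flow. The loss weights and warping direction are illustrative, not PooDLe's exact formulation.

```python
import torch
import torch.nn.functional as F

def flow_warp(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a dense feature map (B, C, H, W) with optical flow (B, 2, H, W),
    where flow is in pixels and channels 0/1 are x/y displacements."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(feat.device)        # (2, H, W)
    coords = grid.unsqueeze(0) + flow                                  # (B, 2, H, W)
    # Normalize to [-1, 1]; grid_sample expects a (B, H, W, 2) grid of (x, y).
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(feat, torch.stack((coords_x, coords_y), dim=-1),
                         align_corners=True)

def poodle_style_loss(dense1, dense2, flow_1to2):
    """Pooled invariance term + dense term that asks features to move with the flow."""
    pooled1 = dense1.mean(dim=(2, 3))                  # global average pooling
    pooled2 = dense2.mean(dim=(2, 3))
    invariance = 1.0 - F.cosine_similarity(pooled1, pooled2, dim=-1).mean()
    dense = F.mse_loss(flow_warp(dense2, flow_1to2), dense1)  # equivariance to flow warping
    return invariance + dense
```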
ProCreate, Don't Reproduce! Propulsive Energy Diffusion for Creative Generation
ProCreate is a simple, easy-to-implement method that improves the sample diversity and creativity of diffusion-based image generative models while preventing training-data reproduction.
Published: 2024-08-05
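A hypothetical sketch of the repulsion idea: during sampling, compute an energy from the similarity between the current sample's embedding and a set of reference-image embeddings, and follow the gradient that pushes the sample away from them. The embedding model, energy, and scale below are assumptions, not ProCreate's exact guidance rule.

```python
import torch
import torch.nn.functional as F

def propulsive_guidance(x_t: torch.Tensor, embed, ref_embeddings: torch.Tensor,
                        scale: float = 1.0) -> torch.Tensor:
    """Return a guidance direction that repels the current sample x_t from a
    reference set. `embed` is any differentiable image embedder (an assumption);
    the returned tensor would be added to the sampler's update."""
    x = x_t.detach().requires_grad_(True)
    z = F.normalize(embed(x), dim=-1)                   # (B, D) sample embedding
    refs = F.normalize(ref_embeddings, dim=-1)          # (N, D) reference embeddings
    energy = (z @ refs.T).max(dim=-1).values.sum()      # similarity to the closest reference
    grad = torch.autograd.grad(energy, x)[0]
    return -scale * grad                                # step away from the references
```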