Top arXiv papers
- Dec 30 2025 quant-ph arXiv:2512.23037v1 Circuit simulation tools are critical for developing and assessing quantum-error-correcting and fault-tolerant strategies. In this work, we present SOFT, a high-performance SimulatOr for universal Fault-Tolerant quantum circuits. Integrating the generalized stabilizer formalism and highly optimized GPU parallelization, SOFT enables the simulation of noisy quantum circuits containing non-Clifford gates at a scale not accessible with existing tools. To provide a concrete demonstration, we simulate the state-of-the-art magic state cultivation (MSC) protocol at code distance $d=5$, involving 42 qubits, 72 $T$ / $T^\dagger$ gates, and mid-circuit measurements. Using only modest GPU resources, SOFT performs over 200 billion shots and achieves the first ground-truth simulation of the cultivation protocol at a non-trivial scale. This endeavor not only certifies the MSC's effectiveness for generating high-fidelity logical $T$-states, but also reveals a large discrepancy between the actual logical error rate and the previously reported values. Our work demonstrates the importance of reliable simulation tools for fault-tolerant architecture design, advancing the field from simulating quantum memory to simulating a universal quantum computer.
- Dec 30 2025 quant-ph arXiv:2512.23013v1 We consider the costs and benefits of embedding the states of one quantum system within those of another. Such embeddings are ubiquitous, e.g., in error correcting codes and in symmetry-constrained systems. In particular we investigate the impact of embeddings in terms of the resource theory of nonstabilizerness (also known as magic) quantified via the stabilizer entropy (SE). We analytically and numerically study the stabilizer entropy gap or magic gap: the average gap between the SE of a quantum state realized within a subspace of a larger system and the SE of the quantum state considered on its own. We find that while the stabilizer entropy gap is typically positive, requiring the injection of magic, both zero and negative magic gaps are achievable. This suggests that certain choices of embedding subspace provide strong resource advantages over others. We provide formulas for the average nonstabilizerness of a subspace given its corresponding projector and sufficient conditions for realizing zero or negative gaps: in particular, certain classes of stabilizer codes provide paradigmatic examples of the latter. Through numerical optimization, we find subspaces which achieve both minimal and maximal average SE for a variety of dimensions, and compute the magic gap for specific error-correcting codes and symmetry-induced subspaces. Our results suggest that a judicious choice of embedding can lead to greater efficiency in both classical and quantum simulations.
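For concreteness, the stabilizer entropy the gap is defined through has a closed form for pure states: the 2-Rényi version is $M_2(\psi) = -\log_2\big(\frac{1}{d}\sum_{P}\langle\psi|P|\psi\rangle^{4}\big)$, summed over all Pauli strings. Below is a minimal brute-force sketch of that definition, exponential in qubit number and purely illustrative; it does not reproduce the paper's embedding or magic-gap calculations.

```python
import numpy as np
from itertools import product
from functools import reduce

# Brute-force stabilizer 2-Renyi entropy M2 = -log2((1/d) * sum_P <P>^4),
# summing over all 4^n Pauli strings (exponential cost; illustration only).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def stabilizer_entropy(psi, n):
    total = 0.0
    for ops in product([I, X, Y, Z], repeat=n):
        P = reduce(np.kron, ops)
        total += np.real(psi.conj() @ P @ psi) ** 4
    return -np.log2(total / 2**n)

plus = np.array([1, 1]) / np.sqrt(2)                          # stabilizer state
t_state = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)  # magic T state
print(stabilizer_entropy(plus, 1))     # 0.0
print(stabilizer_entropy(t_state, 1))  # ~0.415 = log2(4/3)
```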
- We introduce the Clifford entropy, a measure of how close an arbitrary unitary is to a Clifford unitary, which generalizes the stabilizer entropy for states. We show that this quantity vanishes if and only if a unitary is Clifford, is invariant under composition with Clifford unitaries, and is subadditive under tensor products. Rewriting the Clifford entropy in terms of the stabilizer entropy of the corresponding Choi state allows us to derive an upper bound: that this bound is not tight follows from considering the properties of symmetric informationally complete sets. Nevertheless we are able to numerically estimate the maximum in low dimensions, comparing it to the average over all unitaries, which we derive analytically. Finally, harnessing a concentration of measure result, we show that as the dimension grows large, with probability approaching unity, the ratio between the Clifford entropy of a Haar random unitary and that of a fixed magic gate gives a lower bound on the depth of a doped Clifford circuit which realizes the former in terms of the latter. In fact, numerical evidence suggests that this result holds reliably even in low dimensions. We conclude with several directions for future research.
- Dec 30 2025 quant-ph cond-mat.stat-mech arXiv:2512.22888v1 Fracton codes have been intensively studied as novel topological states of matter, yet their fault-tolerant properties remain largely unexplored. Here, we investigate the optimal thresholds of self-dual fracton codes, in particular the checkerboard code, against stochastic Pauli noise. By utilizing a statistical-mechanical mapping combined with large-scale parallel tempering Monte Carlo simulations, we calculate the optimal code capacity of the checkerboard code to be $p_{th} \simeq 0.108(2)$. This value is the highest among known three-dimensional codes and nearly saturates the theoretical limit for topological codes. Our results further validate the generalized entropy relation for two mutually dual models, $H(p_{th}) + H(\tilde{p}_{th}) \approx 1$, and extend its applicability beyond standard topological codes. This verification indicates that Haah's code also possesses a code capacity near the theoretical limit $p_{th} \approx 0.11$. These findings highlight fracton codes as highly resilient quantum memory and demonstrate the utility of duality techniques in analyzing intricate quantum error-correcting codes.
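The entropy relation above is easy to sanity-check: with $H$ the binary entropy, the dual threshold implied by $H(p_{th}) + H(\tilde{p}_{th}) = 1$ at $p_{th} = 0.108$ indeed lands near $0.11$. A quick numerical check of the relation only; the statistical-mechanical mapping itself is not reproduced.

```python
import numpy as np
from scipy.optimize import brentq

# Binary entropy and the dual threshold implied by H(p_th) + H(p_dual) = 1.
H = lambda p: -p * np.log2(p) - (1 - p) * np.log2(1 - p)
p_th = 0.108                                    # checkerboard-code estimate
p_dual = brentq(lambda p: H(p) - (1 - H(p_th)), 1e-9, 0.5)
print(f"H(p_th) = {H(p_th):.3f}, implied dual threshold = {p_dual:.3f}")  # ~0.11
```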
- Quantum simulation of non-Abelian gauge theories requires careful handling of gauge redundancy. We address this challenge by presenting universal principles for treating gauge symmetry that apply to any quantum simulation approach, clarifying that physical states need not be represented solely by gauge singlets. Both singlet and non-singlet representations are valid, with distinct practical trade-offs, which we elucidate using analogies to BRST quantization. We demonstrate these principles within a complete quantum simulation framework based on the orbifold lattice, which enables explicit and efficient circuit constructions relevant to real-world QCD. For singlet-based approaches, we introduce a Haar-averaging projection implemented via linear combinations of unitaries, and analyze its cost and truncation errors. Beyond the singlet approach, we show how non-singlet approaches can yield gauge-invariant observables through wave packets and string excitations. This non-singlet approach is proven to be both universal and efficient. Working in temporal gauge, we provide explicit mappings of lattice Yang-Mills dynamics to Pauli-string Hamiltonians suitable for Trotterization. Classical simulations of small systems validate convergence criteria and quantify truncation and Trotter errors, showing concrete resource estimates and scalable circuit recipes for SU($N$) gauge theories. Our framework provides both conceptual clarity and practical tools toward quantum advantage in simulating non-Abelian gauge theories.
- Gapless quantum phases can become distinct when internal symmetries are enforced, in analogy with gapped symmetry-protected topological (SPT) phases. However, this distinction does not always lead to protected edge modes, raising the question of how the bulk-boundary correspondence is generalized to gapless cases. We propose that the spatial interface between gapless phases -- rather than their boundaries -- provides a more robust fingerprint. We show that whenever two 1+1d conformal field theories (CFTs) differ in symmetry charge assignments of local operators or twisted sectors, any symmetry-preserving spatial interface between the theories must flow to a non-invertible defect. We illustrate this general result for different versions of the Ising CFT with $\mathbb{Z}_2 \times \mathbb{Z}_2^T$ symmetry, obtaining a complete classification of allowed conformal interfaces. When the Ising CFTs differ by nonlocal operator charges, the interface hosts 0+1d symmetry-breaking phases with finite-size splittings scaling as $1/L^3$, as well as continuous phase transitions between them. For general gapless phases differing by an SPT entangler, the interfaces between them can be mapped to conformal defects with a certain defect 't Hooft anomaly. This classification also gives implications for higher-dimensional examples, including symmetry-enriched variants of the 2+1d Ising CFT. Our results establish a physical indicator for symmetry-enriched criticality through symmetry-protected interfaces, giving a new handle on the interplay between topology and gapless phases.
- Dec 30 2025 quant-ph arXiv:2512.23586v1 Twirling, uniform averaging over symmetry actions, is a standard tool for reducing the description of quantum states and channels to symmetry-invariant data. We develop a framework for averaging quantum channels based on channel-state duality that converts pre- and post-processing averages into a group twirl acting directly on the Choi operator. For arbitrary unitary representations on the input and output spaces, the twirled channel is obtained as an explicit projection onto the commutant of the induced representation on $\mathcal H_{\rm out}\otimes \mathcal H_{\rm in}$. In the collective setting, where the commutant is the walled Brauer algebra, we introduce a partial-transpose reduction that maps channel twirling to an ordinary Schur-Weyl twirl of the partially transposed Choi operator, enabling formulas in terms of permutation operators. We further extend the construction beyond compact symmetries to reductive non-unitary groups via Cartan decomposition, yielding a weighted sum of invariant-sector projections with weights determined by the Abelian component. Finally, we provide two finite realizations of channel averaging. The first one is a ``dual'' averaging protocol as a convex mixture of unitary-$1$-design channels on invariant sectors. The second one is a notion of channel $t$-designs induced by weighted group $t$-designs for $t=t_{\rm in}+t_{\rm out}$.
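As a toy instance of group-averaging a channel, the single-qubit Pauli twirl $\mathcal{T}(\rho) = \frac{1}{4}\sum_{P} P\,\Lambda(P\rho P)\,P$ projects any channel onto the commutant of the Pauli group, so each Pauli is mapped to a multiple of itself. The sketch below uses amplitude damping as an arbitrary test channel; the paper's Choi-operator and walled-Brauer-algebra machinery is far more general.

```python
import numpy as np

# Pauli twirl T(rho) = (1/4) * sum_P P Lambda(P rho P) P of a single-qubit
# channel. Twirling projects onto the commutant of the Pauli group, so the
# result is a Pauli channel: each Pauli maps to a multiple of itself.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

g = 0.3                                           # amplitude-damping strength
K = [np.array([[1, 0], [0, np.sqrt(1 - g)]]), np.array([[0, np.sqrt(g)], [0, 0]])]
Lam = lambda r: sum(k @ r @ k.conj().T for k in K)
T = lambda r: sum(P @ Lam(P @ r @ P) @ P for P in paulis) / 4

for P, name in [(X, "x"), (Y, "y"), (Z, "z")]:
    lam = np.trace(T(P) @ P).real / 2             # Pauli-channel eigenvalue
    print(name, round(lam, 4))                    # sqrt(1-g), sqrt(1-g), 1-g
```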
- Dec 30 2025 quant-ph physics.optics arXiv:2512.23248v1 Quantum phase transitions (QPTs) in coherent Ising machines (CIMs) are studied via a spectral mapping between the one-dimensional XY spin model and a network of degenerate optical parametric oscillators (DOPOs). This exact correspondence reveals that the DOPO network faithfully reproduces the quantum critical behavior of the XY model across its anisotropic, isotropic, and transverse-field Ising regimes. The ground-state energy density and its derivatives are analyzed to reveal second-order QPTs characterized by singularities in magnetic susceptibility at critical points. These results show that CIMs not only serve as powerful platforms for solving combinatorial optimization problems but also provide a versatile optical simulator for studying universal quantum critical phenomena, bridging quantum-spin models and photonic quantum systems.
- Dec 30 2025 quant-ph physics.flu-dyn arXiv:2512.22559v1 Quantum computing holds potential for accelerating the simulation of fluid dynamics. However, hardware noise in the noisy intermediate-scale quantum era significantly distorts simulation accuracy. Although error magnitudes are frequently quantified, the specific physical effects of quantum noise on flow simulation results remain largely uncharacterized. We investigate the influence of gate noise on the quantum simulation of one-dimensional scalar convection. By employing a quantum spectral algorithm where ideal time advancement affects only Fourier phases, we isolate and analyze noise-induced artifacts in spectral magnitudes. We derive a theoretical transition matrix based on Hamming distances between computational basis states to predict spectral decay, and then validate this model against density-matrix simulations and experiments on a superconducting quantum processor. Furthermore, using data-driven sparse regression, we demonstrate that quantum noise manifests in the effective partial differential equation primarily as artificial diffusion and nonlinear source terms. These findings suggest that quantum errors can be modeled as deterministic physical terms rather than purely stochastic perturbations.
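A toy version of the Hamming-distance idea: if one assumes, purely for illustration, that each of the $n$ bits flips independently with probability $p$, then the basis-state transition probability depends only on Hamming distance, and repeatedly applying the resulting matrix to a spectral magnitude vector shows the kind of decay the abstract describes. The paper derives its matrix from the actual gate-noise structure, which this sketch does not model.

```python
import numpy as np

# Toy transition matrix on computational basis states whose i->j probability
# depends only on Hamming distance (assumption: independent bit flips with
# probability p; the paper derives its matrix from the actual gate noise).
n, p = 4, 0.02
N = 2**n
idx = np.arange(N)
ham = np.array([[bin(i ^ j).count("1") for j in idx] for i in idx])
T = p**ham * (1 - p) ** (n - ham)                       # row-stochastic

mags = np.abs(np.fft.fft(np.sin(2 * np.pi * idx / N)))  # one clean spectral peak
for step in range(4):
    print(step, np.round(mags[:4], 3))
    mags = T @ mags                                     # noise smears and decays the peak
```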
- The study of quantum correlations within relativistic spacetimes, and the consequences of relativistic causality on information processing using such correlations, has gained much attention in recent years. In this paper, we establish a unified framework in the form of operational no-signalling constraints to study both nonlocal and temporal correlations within general relativistic spacetimes. We explore several intriguing consequences arising from our framework. Firstly, we show that the violation of the operational no-signalling constraints in Minkowski spacetime implies either a logical paradox or an operational infringement of Poincaré symmetry. We thereby examine and subvert recent claims in [Phys. Rev. Lett. 129, 110401 (2022)] on the possibility of witnessing operationally detectable causal loops in Minkowski spacetime. Secondly, we explore the possibility of jamming of nonlocal correlations, controverting a recent claim in [Nat. Comm. 16, 269 (2025)] that a physical mechanism for jamming would necessarily lead to superluminal signalling. Finally, we show that in black hole spacetimes certain nonlocal correlations under and across the event horizon can be jammed by any agent without spoiling the operational no-signalling constraints.
Hippolyte Dourdent, Kyrylo Simonov, Andreas Leitherer, Emanuel-Cristian Boghiu, Ravi Kunjwal, Saronath Halder, Remigiusz Augusiak, Antonio Acín · Dec 30 2025 quant-ph arXiv:2512.23599v1 Closed timelike curves (CTCs) challenge our conception of causality by allowing information to loop back into its own past. Any consistent description of such scenarios must avoid time-travel paradoxes while respecting the no-new-physics principle, which requires that the set of operations available within any local spacetime region remain unchanged, irrespective of whether CTCs exist elsewhere. Within an information-theoretic framework, this leads to process functions: deterministic classical communication structures that remain logically consistent under arbitrary local operations, yet can exhibit correlations incompatible with any definite causal order - a phenomenon known as non-causality. In this work, we provide the first complete recursive characterization of process functions and of (non-)causal process functions. We use it to establish a correspondence between process functions and unambiguous complete product bases, i.e., product bases in which every local state belongs to a unique local basis. This equivalence implies that non-causality of process functions is exactly mirrored by quantum nonlocality without entanglement (QNLWE) - the impossibility of perfectly distinguishing separable states using local operations and causal classical communication - for such bases. Our results generalize previous special cases to arbitrary local dimensions and any number of parties, enable systematic constructions of non-causal process functions and unambiguous QNLWE bases, and reveal an unexpected connection between certain non-signaling inequalities and causal inequalities.
- In this paper, we present efficient pseudodeterministic algorithms for both the global minimum cut and minimum s-t cut problems. The running time of our algorithm for the global minimum cut problem is asymptotically better than the fastest sequential deterministic global minimum cut algorithm (Henzinger, Li, Rao, Wang; SODA 2024). Furthermore, we implement our algorithm in sequential, streaming, PRAM, and cut-query models, where no efficient deterministic global minimum cut algorithms are known.
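For context, the classical randomized baseline is Karger's contraction algorithm, whose returned cut varies from run to run; a pseudodeterministic algorithm must instead output one canonical minimum cut with high probability. The sketch below shows only the randomized baseline, not the paper's algorithm.

```python
import random

# Karger's randomized contraction, the classical baseline whose output varies
# run to run. Illustrative baseline only.
def karger_cut(edges, n):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]         # path halving
            x = parent[x]
        return x
    comps, es = n, list(edges)
    while comps > 2:
        u, v = random.choice(es)
        ru, rv = find(u), find(v)
        if ru != rv:                              # skip already-contracted edges
            parent[ru] = rv
            comps -= 1
    return sum(1 for u, v in es if find(u) != find(v))

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
print(min(karger_cut(edges, 6) for _ in range(200)))   # global min cut = 1
```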
- Dec 30 2025 quant-ph arXiv:2512.23168v1 Criticality-based quantum sensing exploits hypersensitive response to system parameters near phase transition points. This work uncovers two metrological advantages offered by topological phase transitions when the probe is prepared as topological edge states. Firstly, the order of topological band touching is found to determine how the metrology sensitivity scales with the system size. Engineering a topological phase transition with higher-order band touching is hence advocated, with the associated quantum Fisher information scaling as $ \mathcal{F}_Q \sim L^{2p}$, with $L$ the lattice size in one dimension, and $p$ the order of band touching. Secondly, with a topological lattice accommodating degenerate edge modes (such as multiple zero modes), preparing an $N$-particle entangled state at the edge and then adiabatically tuning the system to the phase transition point grows quantum entanglement to macroscopic sizes, yielding $\mathcal{F}_Q \sim N^2 L^{2p}$. This work hence paves a possible topological phase transition-based route to harness entanglement, large lattice size, and high-order band touching for quantum metrology.
- Dec 30 2025 cond-mat.mes-hall cond-mat.str-el arXiv:2512.23084v1 We predict a new class of topological electronic crystals in bilayer graphene-Mott insulator heterostructures. Interlayer charge transfer creates a charge-neutral electron-hole bilayer, in which itinerant carriers in graphene interact attractively with localized carriers from a flat Hubbard band. In the heavy fermion limit and dilute limit, this interplay leads to symmetry breaking crystalline phases stabilized not only by pure repulsion, but also by interlayer Coulomb attraction shaped by band topology. Using comprehensive Hartree-Fock calculations, we uncover triangular, honeycomb, and kagome charge orders hosting different quantized anomalous Hall effects at moderate interlayer attraction.
- We analyze few-body quantum states with particular correlation properties imposed by the requirement of maximal bipartite entanglement for selected partitions of the system into two complementary parts. A novel framework to treat this problem by encoding these constraints in a graph is advocated; the resulting objects are called ``graph-restricted tensors''. This framework encompasses several examples previously treated in the literature, such as 1-uniform multipartite states, quantum states related to dual unitary operators and absolutely maximally entangled states (AME) corresponding to 2-unitary matrices. Original examples of presented graph-restricted tensors are motivated by tensor network models for the holographic principle. In concrete cases we find exact analytic solutions, demonstrating thereby that there exists a vast landscape of non-stabilizer tensors useful for the lattice models of holography.
- This paper presents a counterexample to the optimality conjecture in convex quantum channel optimization proposed by Coutts et al. The conjecture posits that for nuclear norm minimization problems in quantum channel optimization, the dual certificate of an optimal solution can be uniquely determined via the spectral calculus of the Choi matrix. By constructing a counterexample in 2-dimensional Hilbert spaces, we disprove this conjecture.
- Dec 30 2025 quant-ph arXiv:2512.22856v1 The Quantum Approximate Optimization Algorithm (QAOA) is a leading candidate for achieving quantum advantage in combinatorial optimization on Noisy Intermediate-Scale Quantum (NISQ) devices. However, random initialization of the variational parameters typically leads to vanishing gradients, rendering standard variational optimization ineffective. This paper provides a comparative performance analysis of two distinct strategies designed to improve trainability: a Lie-algebraic pretraining framework that uses Lie-algebraic classical simulation to find near-optimal initializations, and non-variational QWOA (NV-QWOA), which targets a restricted parameter subspace covered by three hyperparameters. We benchmark both methods on the unweighted Maxcut problem using a circuit depth of $p = 256$ across 200 Erdős-Rényi and 200 3-regular graphs, each with 16 vertices. Both approaches significantly improve upon the standard randomly initialized QWOA. NV-QWOA attains a mean approximation ratio of 98.9\% in just 60 iterations, while the Lie-algebraic pretrained QWOA improves to 77.71\% after 500 iterations. That optimization proceeds more quickly for NV-QWOA is not surprising given its significantly smaller parameter space; however, that an algorithm with so few tunable parameters reliably finds near-optimal solutions is remarkable. These findings suggest that the structured parameterization of NV-QWOA offers a more robust training approach than pretraining on lower-dimensional auxiliary problems. Future work is needed to confirm scaling to larger problem sizes and to assess generalization to other problem classes.
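For reference, the object both strategies tune is the expectation of the MaxCut cost in the standard QAOA ansatz. The statevector sketch below evaluates that expectation for small instances; it implements neither the Lie-algebraic pretraining nor the three-hyperparameter NV-QWOA parameterization.

```python
import numpy as np

# Statevector evaluation of the depth-p QAOA ansatz energy <C> for MaxCut.
def maxcut_values(n, edges):
    bits = (np.arange(2**n)[:, None] >> np.arange(n)) & 1
    return sum((bits[:, i] != bits[:, j]).astype(float) for i, j in edges)

def apply_rx_all(psi, beta, n):
    # exp(-i*beta*X) on every qubit, one tensor axis at a time
    c, s = np.cos(beta), -1j * np.sin(beta)
    psi = psi.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(psi, q, 0)
        psi = np.stack([c * psi[0] + s * psi[1], s * psi[0] + c * psi[1]])
        psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

def qaoa_energy(gammas, betas, n, edges):
    C = maxcut_values(n, edges)
    psi = np.full(2**n, 2 ** (-n / 2), dtype=complex)   # |+>^n
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * C) * psi                 # cost layer
        psi = apply_rx_all(psi, b, n)                   # mixer layer
    return float(np.real(psi.conj() @ (C * psi)))

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]                # 4-cycle, max cut = 4
print(qaoa_energy([0.4], [0.35], 4, edges))
```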
- We study the mixing time of Glauber dynamics for Ising models in which the interaction matrix contains a single negative spectral outlier. This class includes the anti-ferromagnetic Curie-Weiss model, the anti-ferromagnetic Ising model on expander graphs, and the Sherrington-Kirkpatrick model with disorder of negative mean. Existing approaches to rapid mixing rely crucially on log-concavity or spectral width bounds and therefore can break down in the presence of a negative outlier. To address this difficulty, we develop a new covariance approximation method based on Gaussian approximation. This method is implemented via an iterative application of Stein's method to quadratic tilts of sums of bounded random variables, which may be of independent interest. The resulting analysis provides an operator-norm control of the full correlation structure under arbitrary external fields. Combined with the localization schemes of Eldan and Chen, these estimates lead to a modified logarithmic Sobolev inequality and near-optimal mixing time bounds in regimes where spectral width bounds fail. As a complementary result, we prove exponential lower bounds on the mixing time for low temperature anti-ferromagnetic Ising models on sparse Erdős-Rényi graphs, based on the existence of gapped states as in the recent work of Sellke.
- Quantum simulations of many-body systems offer novel methods for probing the dynamics of the Standard Model and its constituent gauge theories. Extracting low-energy predictions from such simulations relies on formulating systematically-improvable representations of lattice gauge theory Hamiltonians that are efficient at all values of the gauge coupling. One such candidate representation for SU(2) is the fully gauge-fixed Hamiltonian defined in the mixed basis. This work focuses on the quantum simulation of the smallest non-trivial system: two plaquettes with open boundary conditions. A mapping of the continuous gauge field degrees of freedom to qubit-based representations is developed. It is found that as few as three qubits per plaquette are sufficient to reach per-mille level precision on predictions for observables. Two distinct algorithms for implementing time evolution in the mixed basis are developed and analyzed in terms of quantum resource estimates. One algorithm has favorable scaling in circuit depth for large numbers of qubits, while the other is more practical when qubit count is limited. The latter algorithm is used in the measurement of a real-time observable on IBM's Heron superconducting quantum processor, ibm_fez. The quantum results match classical predictions at the percent-level. This work lays out a path forward for two- and three-dimensional simulations of larger systems, as well as demonstrating the viability of mixed-basis formulations for studying the properties of SU(2) gauge theories at all values of the gauge coupling.
- The out-of-time-ordered correlator (OTOC) is a powerful tool for probing quantum information scrambling, a fundamental process by which local information spreads irreversibly throughout a quantum many-body system. Experimentally measuring the OTOC, however, is notoriously challenging due to the need for time-reversed evolution. Here, we present an experimental evaluation of the OTOC on a quantum computer, using three distinct protocols to address this challenge: the rewinding time method (RTM), the weak-measurement method (WMM), and the irreversibility-susceptibility method (ISM). Our experiments investigate the quantum dynamics of an XXZ spin-1/2 chain prepared in a thermal Gibbs state. As a key contribution, we provide the first experimental demonstration of the ISM, using the numerical emulator of the trapped-ion quantum computer, reimei. We also conduct a detailed comparative analysis of all three methods, revealing method-dependent behaviors in the measured OTOC. This work not only validates these protocols as practical tools for exploring quantum chaos on near-term hardware but also offers crucial insights into their respective advantages and limitations, providing a practical framework for future experimental investigations.
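As a classical reference for what all three protocols estimate, the OTOC $F(t) = \langle W(t)\,V\,W(t)\,V\rangle$ can be computed directly for a small XXZ chain by exact time evolution. A sketch at infinite temperature for simplicity (the experiment uses a thermal Gibbs state); the operator choices here are illustrative.

```python
import numpy as np
from scipy.linalg import expm

Sx = np.array([[0, 1], [1, 0]]) / 2
Sy = np.array([[0, -1j], [1j, 0]]) / 2
Sz = np.array([[1, 0], [0, -1]]) / 2

def op(o, site, L):
    # embed a single-site operator into the L-spin Hilbert space
    out = np.array([[1.0 + 0j]])
    for s in range(L):
        out = np.kron(out, o if s == site else np.eye(2))
    return out

L, Delta = 6, 1.5                                 # XXZ chain, open boundaries
H = sum(op(Sx, i, L) @ op(Sx, i + 1, L) + op(Sy, i, L) @ op(Sy, i + 1, L)
        + Delta * op(Sz, i, L) @ op(Sz, i + 1, L) for i in range(L - 1))

W0, V = op(Sz, 0, L), op(Sz, L - 1, L)
for t in [0.0, 1.0, 2.0, 4.0]:
    U = expm(-1j * t * H)
    Wt = U.conj().T @ W0 @ U                      # Heisenberg-picture W(t)
    F = np.trace(Wt @ V @ Wt @ V).real / 2**L     # infinite-temperature OTOC
    print(t, F)                                   # starts at 1/16, then decays
```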
- In quantum information and computation research, symbolic methods have been widely used for human specification and reasoning about quantum states and operations. At the same time, they are essential for ensuring the scalability and efficiency of automated reasoning and verification tools for quantum algorithms and programs. However, a formal theory for symbolic specification and reasoning about quantum data and operations is still lacking, which significantly limits the practical applicability of automated verification techniques in quantum computing. In this paper, we present a general logical framework, called Symbolic Operator Logic $\mathbf{SOL}$, which enables symbolic specification and reasoning about quantum data and operations. Within this framework, a classical first-order logical language is embedded into a language of formal operators used to specify quantum data and operations, including their recursive definitions. This embedding allows reasoning about their properties modulo a chosen theory of the underlying classical data (e.g., Boolean algebra or group theory), thereby leveraging existing automated verification tools developed for classical computing. It should be emphasised that this embedding of classical first-order logic into $\mathbf{SOL}$ is precisely what makes the symbolic method possible. We envision that this framework can provide a conceptual foundation for the formal verification and automated theorem proving of quantum computation and information in proof assistants such as Lean, Coq, and related systems.
- Dec 30 2025 quant-ph arXiv:2512.22163v1 We present a quantum algorithm for the simulation of the linear advection-diffusion equation based on block encodings of high order finite-difference operators and the quantum singular value transform. Our complexity analysis shows that the higher order methods significantly reduce the number of gates and qubits required to reach a given accuracy. The theoretical results are supported by numerical simulations of one- and two-dimensional benchmarks.
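On the classical side, the high-order finite-difference operators being block-encoded are banded circulant matrices on a periodic grid. A compact reference solver for $u_t + c\,u_x = \nu\,u_{xx}$ with fourth-order stencils, useful as a ground truth; the block encodings and the quantum singular value transform are not shown.

```python
import numpy as np
from scipy.linalg import expm

# Classical reference for u_t + c*u_x = nu*u_xx on a periodic grid, using
# 4th-order central differences; time stepping is exact via the matrix
# exponential of the semi-discrete operator.
N, c, nu, t = 64, 1.0, 0.05, 1.0
h = 2 * np.pi / N
x = h * np.arange(N)

def circulant(stencil, offsets):
    A = np.zeros((N, N))
    for s, o in zip(stencil, offsets):
        A += s * np.roll(np.eye(N), o, axis=1)    # ones at [i, (i+o) mod N]
    return A

D1 = circulant([1, -8, 8, -1], [-2, -1, 1, 2]) / (12 * h)
D2 = circulant([-1, 16, -30, 16, -1], [-2, -1, 0, 1, 2]) / (12 * h**2)
u0 = np.exp(np.sin(x))
u = expm(t * (-c * D1 + nu * D2)) @ u0

k = 2 * np.pi * np.fft.fftfreq(N, d=h)            # exact spectral solution
u_exact = np.real(np.fft.ifft(np.fft.fft(u0) * np.exp(t * (-1j * c * k - nu * k**2))))
print(np.max(np.abs(u - u_exact)))                # 4th order: shrinks ~16x if N doubles
```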
- Dec 30 2025 cs.CV arXiv:2512.23709v1 Diffusion-based video super-resolution (VSR) methods achieve strong perceptual quality but remain impractical for latency-sensitive settings due to reliance on future frames and expensive multi-step denoising. We propose Stream-DiffVSR, a causally conditioned diffusion framework for efficient online VSR. Operating strictly on past frames, it combines a four-step distilled denoiser for fast inference, an Auto-regressive Temporal Guidance (ARTG) module that injects motion-aligned cues during latent denoising, and a lightweight temporal-aware decoder with a Temporal Processor Module (TPM) that enhances detail and temporal coherence. Stream-DiffVSR processes 720p frames in 0.328 seconds on an RTX4090 GPU and significantly outperforms prior diffusion-based methods. Compared with the online SOTA TMP, it boosts perceptual quality (LPIPS +0.095) while reducing latency by over 130x. Stream-DiffVSR achieves the lowest latency reported for diffusion-based VSR, reducing initial delay from over 4600 seconds to 0.328 seconds, thereby making it the first diffusion VSR method suitable for low-latency online deployment. Project page: https://jamichss.github.io/stream-diffvsr-project-page/
- We identify quantum geometric bounds for observables in non-Hermitian systems. We find unique bounds on non-Hermitian quantum geometric tensors, generalized two-point response correlators, conductivity tensors, and optical weights. We showcase these findings in topological systems with non-Hermitian Chern numbers. We demonstrate that the non-Hermitian geometric constraints on response functions naturally arise in open quantum systems governed by out-of-equilibrium Lindbladian dynamics. Our findings are relevant to experimental observables and responses under the realistic setups that fall beyond the idealized closed-system descriptions.
- AI co-scientists are emerging as a tool to assist human researchers in achieving their research goals. A crucial feature of these AI co-scientists is the ability to generate a research plan given a set of aims and constraints. The plan may be used by researchers for brainstorming, or may even be implemented after further refinement. However, language models currently struggle to generate research plans that follow all constraints and implicit requirements. In this work, we study how to leverage the vast corpus of existing research papers to train language models that generate better research plans. We build a scalable, diverse training corpus by automatically extracting research goals and goal-specific grading rubrics from papers across several domains. We then train models for research plan generation via reinforcement learning with self-grading. A frozen copy of the initial policy acts as the grader during training, with the rubrics creating a generator-verifier gap that enables improvements without external human supervision. To validate this approach, we conduct a study with human experts for machine learning research goals, spanning 225 hours. The experts prefer plans generated by our finetuned Qwen3-30B-A3B model over the initial model for 70% of research goals, and approve 84% of the automatically extracted goal-specific grading rubrics. To assess generality, we also extend our approach to research goals from medical papers, and new arXiv preprints, evaluating with a jury of frontier models. Our finetuning yields 12-22% relative improvements and significant cross-domain generalization, proving effective even in problem settings like medical research where execution feedback is infeasible. Together, these findings demonstrate the potential of a scalable, automated training recipe as a step towards improving general AI co-scientists.
Shaocong Xu, Songlin Wei, Qizhe Wei, Zheng Geng, Hong Li, Licheng Shen, Qianpu Sun, Shu Han, Bin Ma, Bohan Li, Chongjie Ye, Yuhang Zheng, Nan Wang, Saining Zhang, Hao Zhao · Dec 30 2025 cs.CV arXiv:2512.23705v1 Transparent objects remain notoriously hard for perception systems: refraction, reflection and transmission break the assumptions behind stereo, ToF and purely discriminative monocular depth, causing holes and temporally unstable estimates. Our key observation is that modern video diffusion models already synthesize convincing transparent phenomena, suggesting they have internalized the optical rules. We build TransPhy3D, a synthetic video corpus of transparent/reflective scenes: 11k sequences rendered with Blender/Cycles. Scenes are assembled from a curated bank of category-rich static assets and shape-rich procedural assets paired with glass/plastic/metal materials. We render RGB + depth + normals with physically based ray tracing and OptiX denoising. Starting from a large video diffusion model, we learn a video-to-video translator for depth (and normals) via lightweight LoRA adapters. During training we concatenate RGB and (noisy) depth latents in the DiT backbone and co-train on TransPhy3D and existing frame-wise synthetic datasets, yielding temporally consistent predictions for arbitrary-length input videos. The resulting model, DKT, achieves zero-shot SOTA on real and synthetic video benchmarks involving transparency: ClearPose, DREDS (CatKnown/CatNovel), and TransPhy3D-Test. It improves accuracy and temporal consistency over strong image/video baselines, and a normal variant sets the best video normal estimation results on ClearPose. A compact 1.3B version runs at ~0.17 s/frame. Integrated into a grasping stack, DKT's depth boosts success rates across translucent, reflective and diffuse surfaces, outperforming prior estimators. Together, these results support a broader claim: "Diffusion knows transparency." Generative video priors can be repurposed, efficiently and label-free, into robust, temporally coherent perception for challenging real-world manipulation.
- Dec 30 2025 physics.flu-dyn arXiv:2512.23704v1 This study examines the applicability of two leading-edge dynamic stall criteria, namely, the maximum magnitudes of the leading-edge suction parameter (LESP) and the boundary enstrophy flux (BEF), in a moderately compressible flow regime. While previously shown to predict stall onset ahead of dynamic stall vortex (DSV) formation in incompressible and mildly compressible regimes, these criteria are assessed here at a Reynolds number of $1 \times 10^6$ and freestream Mach numbers between 0.3 and 0.5. Unsteady RANS simulations indicate that DSV formation occurs in close temporal proximity to the attainment of the stall criteria. However, at the highest Mach number considered, stronger shock interaction effects with the shear layer lead to DSV formation prior to the criteria being reached, reducing their predictive accuracy. These findings suggest that while the criteria remain effective at lower Mach numbers, their definitions require modification in compressible regimes where strong shock interactions significantly influence the stall process.
- The primary obstacle for applying reinforcement learning (RL) to real-world robotics is the design of effective reward functions. While recently proposed learning-based Process Reward Models (PRMs) are a promising direction, they are often hindered by two fundamental limitations: their reward models lack step-aware understanding and rely on single-view perception, leading to unreliable assessments of fine-grained manipulation progress; and their reward shaping procedures are theoretically unsound, often inducing a semantic trap that misguides policy optimization. To address these, we introduce Dopamine-Reward, a novel reward modeling method for learning a general-purpose, step-aware process reward model from multi-view inputs. At its core is our General Reward Model (GRM), trained on a vast 3,400+ hour dataset, which leverages Step-wise Reward Discretization for structural understanding and Multi-Perspective Reward Fusion to overcome perceptual limitations. Building upon Dopamine-Reward, we propose Dopamine-RL, a robust policy learning framework that employs a theoretically-sound Policy-Invariant Reward Shaping method, which enables the agent to leverage dense rewards for efficient self-improvement without altering the optimal policy, thereby fundamentally avoiding the semantic trap. Extensive experiments across diverse simulated and real-world tasks validate our approach. GRM achieves state-of-the-art accuracy in reward assessment, and Dopamine-RL built on GRM significantly improves policy learning efficiency. For instance, after GRM is adapted to a new task in a one-shot manner from a single expert trajectory, the resulting reward model enables Dopamine-RL to improve the policy from near-zero to 95% success with only 150 online rollouts (approximately 1 hour of real robot interaction), while retaining strong generalization across tasks. Project website: https://robo-dopamine.github.io
- Identifying specific and often complex behaviors from large language models (LLMs) in conversational settings is crucial for their evaluation. Recent work proposes novel techniques to find natural language prompts that induce specific behaviors from a target model, yet they are mainly studied in single-turn settings. In this work, we study behavior elicitation in the context of multi-turn conversations. We first offer an analytical framework that categorizes existing methods into three families based on their interactions with the target model: those that use only prior knowledge, those that use offline interactions, and those that learn from online interactions. We then introduce a generalized multi-turn formulation of the online method, unifying single-turn and multi-turn elicitation. We evaluate all three families of methods on automatically generating multi-turn test cases. We investigate the efficiency of these approaches by analyzing the trade-off between the query budget, i.e., the number of interactions with the target model, and the success rate, i.e., the discovery rate of behavior-eliciting inputs. We find that online methods can achieve an average success rate of 45/19/77% with just a few thousand queries over three tasks where static methods from existing multi-turn conversation benchmarks find few or even no failure cases. Our work highlights a novel application of behavior elicitation methods in multi-turn conversation evaluation and the need for the community to move towards dynamic benchmarks.
- We explore the topological significance of the knot two-variable series $F_K$, proposed by Gukov--Manolescu and defined by Park for a class of `nice' knots. We show that the leading coefficient of $F_K$ is a monomial and express its exponent in terms of the Hopf invariant for all homogeneous braid knots and fibered knots up to 12 crossings. As an application, we deduce an explicit formula for the Hopf invariant in terms of colored Jones polynomials. For non-fibered strongly quasipositive knots, we study a relation between $F_K$ and the stability series of the colored Jones function, and explore similarities between $F_K$ and knot Floer homology. Finally, we propose a slope conjecture for $F_K$, relating it to the boundary slopes of the knot.
- Dec 30 2025 hep-th arXiv:2512.23699v1 This work is motivated by the recent evidence for a double-copy relationship between open- and closed-string amplitudes in Anti-de Sitter (AdS) space. At present, the evidence has the form of a double-copy relation for string-amplitude building blocks, which are combined using the multiple-polylogarithm (MPL) generating functions. These generate MPLs relevant for all-order AdS curvature corrections of four-point string amplitudes. In this paper, we prove this building-block double copy using a new, noncommutative version of twisted de Rham theory. In flat space, the usual twisted de Rham theory is already known to be a natural framework to describe the Kawai-Lewellen-Tye (KLT) double-copy map from open- to closed-string amplitudes, in which the KLT kernel can be computed from the intersections of the open-string amplitude integration contours. We formulate twisted de Rham theory for noncommutative-ring-valued differential forms on complex manifolds and use it to derive the intersection number of two open-string contours, which are closed in the noncommutative twisted homology sense. The inverse of this intersection number is precisely the AdS double-copy kernel for the four-point open- and closed-string generating functions.
- We define a once extended non-compact 3-dimensional TQFT $\mathcal{Z}$ from the data of a (potentially) non-semisimple modular tensor category. This is in the framework of generators and relations of [Bartlett et al., arxiv:1509.06811 (2015)], having disallowed generating 2-morphisms whose source is the empty. Moreover, we show that the projective mapping class group representations this TQFT gives rise to, are dual to those of [Lyubashenko, arXiv:hep-th/9405167 (1994)] and [De Renzi et al., arXiv:2010.14852 (2020)]. We develop a method to decompose a closed 3-manifold in terms of 2-morphism generators. We use this to compute the value of $\mathcal{Z}$ on 3-manifolds, explaining why it should recover Lyubashenko's 3-manifold invariants [Lyubashenko, arXiv:hep-th/9405167 (1994)]. Finally, we explain that the value of the non-compact TQFT on the solid torus recovers the data of a modified trace [Geer et al., arXiv:0711.4229 (2007)].
- We implement a probe counterpart of Newman-Janis algorithm, which Wick rotates the all-orders geodesic deviation equation into a part of exact spinning-particle equations of motion. Consequently, the gravitational dynamics of the Kerr black hole in its point-particle effective theory is completely constrained in the self-dual sector for a hidden symmetry, implying the spin exponentiation of same-helicity gravitational Compton amplitudes to all multiplicities.
- Dec 30 2025 cs.GR arXiv:2512.23696v1 OpenPBR is a physically based, standardized uber-shader developed for interoperable material authoring and rendering across VFX, animation, and design visualization workflows. This document serves as a companion to the official specification, offering deeper insight into the model's development and more detailed implementation guidance, including code examples and mathematical derivations. We begin with a description of the model's formal structure and theoretical foundations - covering slab-based layering, statistical mixing, and microfacet theory - before turning to its physical components. These include metallic, dielectric, subsurface, and glossy-diffuse base substrates, followed by thin-film iridescence, coat, and fuzz layers. A special-case mode for rendering thin-walled objects is also described. Additional sections explore technical topics in greater depth, such as the decoupling of specular reflectivity from transmission, the choice of parameterization for subsurface scattering, and the detailed physics of coat darkening and thin-film interference. We also discuss planned extensions, including hazy specular reflection and retroreflection.
- Dec 30 2025 math.OC arXiv:2512.23695v1 Starting from a problem in elastoplasticity, we consider an optimization problem $C(c_1,c_2)=c_1+c_2\to \min$ under constraints $F_R^k(c_1,c_2)=a\cdot F^k(c_1,c_2)+b\cdot R^k(c_1,c_2)\ge 1$ and $F^k(c_1,c_2)\ge 1$, where both $F^k$ and $R^k$ are non-linear, $a,b$ are constants, and $k\in\{1,2\}$ is an index. For each $(a,b)$ we determine which of the two values of $k\in\{1,2\}$ leads to the smaller minimum of the optimization problem. This way we obtain an interesting curve bounding the region where $k=1$ outperforms $k=2$.
- We introduce Iterated Bellman Calibration, a simple, model-agnostic, post-hoc procedure for calibrating off-policy value predictions in infinite-horizon Markov decision processes. Bellman calibration requires that states with similar predicted long-term returns exhibit one-step returns consistent with the Bellman equation under the target policy. We adapt classical histogram and isotonic calibration to the dynamic, counterfactual setting by repeatedly regressing fitted Bellman targets onto a model's predictions, using a doubly robust pseudo-outcome to handle off-policy data. This yields a one-dimensional fitted value iteration scheme that can be applied to any value estimator. Our analysis provides finite-sample guarantees for both calibration and prediction under weak assumptions, and critically, without requiring Bellman completeness or realizability.
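The core loop is short to state in code: repeatedly regress fitted Bellman targets $r + \gamma f(\hat V(s'))$ onto the raw predictions $\hat V(s)$ with a one-dimensional calibrator. The sketch below simplifies by assuming on-policy transitions (so the doubly robust pseudo-outcome is omitted) and uses isotonic regression; the synthetic data is only a usage illustration.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Iterated Bellman calibration, simplified: learn a 1-d map f so that f(V)
# is consistent with one-step returns under the Bellman equation.
def bellman_calibrate(V_s, r, V_next, gamma=0.9, iters=100):
    f = lambda v: v                               # start from the identity map
    for _ in range(iters):
        targets = r + gamma * f(V_next)           # fitted Bellman targets
        iso = IsotonicRegression(out_of_bounds="clip").fit(V_s, targets)
        f = lambda v, iso=iso: iso.predict(v)     # regress targets onto V(s)
    return f

rng = np.random.default_rng(0)
V_s = rng.uniform(0, 1, 2000)                     # a miscalibrated predictor
V_next = np.clip(V_s + rng.normal(0, 0.05, 2000), 0, 1)
r = rng.uniform(0, 0.02, 2000)                    # mean reward 0.01
f = bellman_calibrate(V_s, r, V_next)
print(f(np.array([0.1, 0.5, 0.9])))               # pulled toward E[r]/(1-gamma) = 0.1
```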
- Dec 30 2025 cs.CL arXiv:2512.23693v1 We present a method and dataset for fine-tuning language models with preference supervision using feedback-driven improvement chains. Given a model response, an annotator provides fine-grained feedback by marking ``liked'' and ``disliked'' spans and specifying what they liked or disliked about them. The base model then rewrites the disliked spans accordingly, proceeding from left to right, forming a sequence of incremental improvements. We construct preference pairs for direct alignment from each adjacent step in the chain, enabling the model to learn from localized, targeted edits. We find that our approach outperforms direct alignment methods based on standard A/B preference ranking or full contrastive rewrites, demonstrating that structured, revision-based supervision leads to more efficient and effective preference tuning.
- Modified thermal distributions (dispersion relations) are introduced within both the MATTER and LBT event generators used to describe jet modification in a heavy-ion collision, within the JETSCAPE framework. Hard partons, propagating through dense matter, scatter off the partonic substructure of the medium, leading to stimulated emission, accompanied by recoiling medium partons. We introduce a simple modification, a multiplicative $(1 + a/T)$ correction to the dispersion relation of quarks and gluons (equivalent to an effective fugacity). This leads to calculated transport coefficients (e.g. $\hat{q}/T^3$) showing the expected behavior of depreciating at lower temperatures, including within the hot hadronic gas. This simple modification recovers the light-like dispersion relations at high temperatures, and introduces an excess depreciation factor for parton populations at lower temperatures, allowing partonic energy loss and recoil calculations to be extended into the hadronic phase. This modified distribution, in combination with initial state cold nuclear matter effects (shadowing), is used to simultaneously describe the nuclear modification factor and elliptic anisotropy of jets and leading hadrons, over multiple centralities and collision energies.
Mike Walmsley, Steven Bamford, Hugh Dickinson, Tobias Géron, Alexander J. Gordon, Annette M.N. Ferguson, Lucy Fortson, Sandor Kruk, Natalie Lines, Chris J. Lintott, Karen L. Masters, Robert G. Mann, James Pearson, Hayley Roberts, Anna M.M. Scaife, Stefan Schuldt, Brooke Simmons, Rebecca Smethurst, Josh Speagle, Kyle Willett · Dec 30 2025 astro-ph.IM astro-ph.GA arXiv:2512.23691v1 We introduce Galaxy Zoo Evo, a labeled dataset for building and evaluating foundation models on images of galaxies. GZ Evo includes 104M crowdsourced labels for 823k images from four telescopes. Each image is labeled with a series of fine-grained questions and answers (e.g. "featured galaxy, two spiral arms, tightly wound, merging with another galaxy"). These detailed labels are useful for pretraining or finetuning. We also include four smaller sets of labels (167k galaxies in total) for downstream tasks of specific interest to astronomers, including finding strong lenses and describing galaxies from the new space telescope Euclid. We hope GZ Evo will serve as a real-world benchmark for computer vision topics such as domain adaptation (from terrestrial to astronomical, or between telescopes) or learning under uncertainty from crowdsourced labels. We also hope it will support a new generation of foundation models for astronomy; such models will be critical to future astronomers seeking to better understand our universe.
- Dec 30 2025 cond-mat.mtrl-sci cond-mat.str-el arXiv:2512.23690v1 Ba$_2$IrO$_4$ has been refined in the tetragonal $I4/mmm$ phase without octahedral rotations, and its physical properties have been interpreted in this high-symmetry structure. However, the dynamical stability of this undistorted phase has not previously been questioned. It is important to establish whether other lower-symmetry structures are energetically more favorable because octahedral rotations control electronic bandwidths and constrain which magnetic interactions are allowed by symmetry. Here I compute first-principles phonon dispersions of $I4/mmm$ Ba$_2$IrO$_4$ including spin-orbit interaction. I find a nearly-flat nondegenerate unstable branch along the Brillouin-zone boundary segment $XP$ associated with in-plane rotations of the IrO$_6$ octahedra. Using group-theoretical analysis, I enumerate the symmetry-allowed distortions associated with the $X_2^+$ and $P_4$ instabilities and fully relax the resulting structures. Only five of the twelve possible distortions can be stabilized, and the energy gain scales with the number of layers that exhibit octahedral rotations: phases with rotations in every IrO$_6$ layer are lower by $-5.8$ meV/atom and are nearly degenerate with respect to the stacking phase. Electronic structure calculations show that these rotated phases host a narrow and well-separated half-filled $J_{\textrm{eff}} = 1/2$ manifold, whereas structures with rotations only in alternate layers have broader and more entangled bands. This motivates a reinvestigation of the crystal structure of Ba$_2$IrO$_4$ and indicates that octahedral rotations should be considered in modeling its correlated electronic and magnetic properties.
- Dec 30 2025 physics.optics arXiv:2512.23689v1 A standard procedure to achieve accurate, precise, and fast polarization measurement is to choose analyzing and generating polarization states that yield an $\ell^2$-condition number optimized instrument matrix. This strategy works well for rotating-waveplate systems, where the accessible polarization states trace a curve on the Poincaré sphere and the corresponding optimization problem is generally well posed. However, it becomes degenerate for liquid-crystal-based systems, which can generate arbitrary polarization states, and whose additional degrees of freedom allow the optimization of metrics beyond the $\ell^2$-condition number. Leveraging this unique advantage of liquid-crystal polarimeters, we introduce additional performance measures derived from alternative norms and error distributions computed via Monte Carlo simulations to inform the design of measurement schemes. We then experimentally demonstrate their effectiveness in suppressing errors, paving the way for more robust and efficient polarization measurements.
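The $\ell^2$-condition-number criterion the abstract starts from is quick to evaluate: for four analyzer states with Stokes vectors $\mathbf{s}_i$, the instrument matrix has rows $\frac{1}{2}(1, \mathbf{s}_i)$, and the regular tetrahedron on the Poincaré sphere attains the classic optimum $\kappa_2 = \sqrt{3}$. A minimal check of that baseline; the paper's alternative norms and Monte Carlo error metrics are not implemented here.

```python
import numpy as np

# l2 condition number of a 4-state Stokes polarimeter: rows of the instrument
# matrix are (1, s_i)/2 for unit analyzer vectors s_i on the Poincare sphere.
def instrument_matrix(dirs):
    dirs = np.asarray(dirs, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return 0.5 * np.hstack([np.ones((len(dirs), 1)), dirs])

tetra = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]  # regular tetrahedron
print(np.linalg.cond(instrument_matrix(tetra)))             # ~1.732 = sqrt(3)
```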
- We present a software architecture to enable end user driven innovation of web multimedia communication applications. RTC Helper is simple, easy-to-use software that can intercept WebRTC (web real-time communication) and related APIs in the browser, and change the behavior of web apps in real-time. Such customization can even be driven by the end user on third-party web apps using our flexible and general purpose browser extension. It also facilitates rapid prototyping of ideas by web developers in their existing web apps without having to rebuild or redeploy after every change. It has more than ten customization categories, and over a hundred built-in examples covering a wide range of novel use cases in web-based audio/video communication.
- Subgraph complementation is an operation that toggles all adjacencies inside a selected vertex set. Given a graph $G$ and a target class $\mathcal{C}$, the Minimum Subgraph Complementation (MSC) problem asks for a minimum-size vertex set $S$ such that complementing the subgraph induced by $S$ transforms $G$ into a graph belonging to $\mathcal{C}$. While the decision version of Subgraph Complementation has been extensively studied and is NP-complete for many graph classes, the algorithmic complexity of its optimization variant has remained largely unexplored. In this paper, we study MSC from an algorithmic perspective. We present polynomial-time algorithms for MSC in several nontrivial settings. Our results include polynomial-time solvability for transforming graphs between bipartite, co-bipartite, and split graphs, as well as for complementing bipartite regular graphs into chordal graphs. We also show that MSC to the class of graphs of fixed degeneracy can be solved in polynomial time when the input graph is a forest. Moreover, we investigate MSC with respect to connectivity and prove that MSC to the class of disconnected graphs and to the class of 2-connected graphs can be solved in polynomial time for arbitrary inputs.
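The operation itself is one loop: toggle every pair inside $S$. A minimal sketch of the complementation step only; the polynomial-time MSC algorithms for the specific target classes are the paper's contribution and are not reproduced.

```python
import networkx as nx
from itertools import combinations

# The basic operation behind MSC: toggle all adjacencies inside S.
def complement_subgraph(G, S):
    H = G.copy()
    for u, v in combinations(S, 2):
        if H.has_edge(u, v):
            H.remove_edge(u, v)
        else:
            H.add_edge(u, v)
    return H

G = nx.path_graph(4)                      # edges 0-1, 1-2, 2-3
H = complement_subgraph(G, {0, 1, 2})
print(sorted(H.edges()))                  # [(0, 2), (2, 3)]
```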
- Automatic Speech Recognition (ASR) in professional settings faces challenges that existing benchmarks underplay: dense domain terminology, formal register variation, and near-zero tolerance for critical entity errors. We present ProfASR-Bench, a professional-talk evaluation suite for high-stakes applications across finance, medicine, legal, and technology. Each example pairs a natural-language prompt (domain cue and/or speaker profile) with an entity-rich target utterance, enabling controlled measurement of context-conditioned recognition. The corpus supports conventional ASR metrics alongside entity-aware scores and slice-wise reporting by accent and gender. Using representative families Whisper (encoder-decoder ASR) and Qwen-Omni (audio language models) under matched no-context, profile, domain+profile, oracle, and adversarial conditions, we find a consistent pattern: lightweight textual context produces little to no change in average word error rate (WER), even with oracle prompts, and adversarial prompts do not reliably degrade performance. We term this the context-utilization gap (CUG): current systems are nominally promptable yet underuse readily available side information. ProfASR-Bench provides a standardized context ladder, entity- and slice-aware reporting with confidence intervals, and a reproducible testbed for comparing fusion strategies across model families. Dataset: https://huggingface.co/datasets/prdeepakbabu/ProfASR-Bench Code: https://github.com/prdeepakbabu/ProfASR-Bench
- Dec 30 2025 cond-mat.mes-hall arXiv:2512.23685v1 Two-band Hamiltonians provide a typical description of topological band structures, in which the eigenfunctions can be characterized by a Bloch vector field whose winding number defines an integer topological invariant. This winding number is quantized and protected against continuous deformations of the Hamiltonian. Here we show that the Bloch vector and its winding number can be directly related to the gradient of the energy dispersion. Since the energy gradient is proportional to the group velocity, our result establishes an experimentally accessible correspondence between the Bloch vector field and angle-resolved photoemission spectroscopy measurements. We discuss a mapping between the gradient of the energy dispersion and the Bloch vector. This implies a direct and measurable relation between two-band Hamiltonians and their underlying topological structures.
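For a two-band chiral Hamiltonian $H(k) = \mathbf{d}(k)\cdot\boldsymbol{\sigma}$, the winding number of the Bloch vector is directly computable; below is a standard SSH-model example. The abstract's actual point, relating $\mathbf{d}(k)$ to the gradient of the dispersion and hence to ARPES data, is not implemented here.

```python
import numpy as np

# Winding number of the Bloch vector d(k) for H(k) = d(k) . sigma,
# illustrated on the SSH model: d(k) = (t1 + t2*cos k, t2*sin k, 0).
def winding_number(dx, dy, k):
    theta = np.unwrap(np.arctan2(dy(k), dx(k)))
    return (theta[-1] - theta[0]) / (2 * np.pi)

k = np.linspace(0.0, 2 * np.pi, 1001)
t1, t2 = 0.5, 1.0                        # |t1| < |t2|: topological phase
w = winding_number(lambda q: t1 + t2 * np.cos(q), lambda q: t2 * np.sin(q), k)
print(round(w))                          # 1; trivial phase (|t1| > |t2|) gives 0
```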
- Large language models (LLMs) are increasingly considered for use in high-impact workflows, including academic peer review. However, LLMs are vulnerable to document-level hidden prompt injection attacks. In this work, we construct a dataset of approximately 500 real academic papers accepted to ICML and evaluate the effect of embedding hidden adversarial prompts within these documents. Each paper is injected with semantically equivalent instructions in four different languages and reviewed using an LLM. We find that prompt injection induces substantial changes in review scores and accept/reject decisions for English, Japanese, and Chinese injections, while Arabic injections produce little to no effect. These results highlight the susceptibility of LLM-based reviewing systems to document-level prompt injection and reveal notable differences in vulnerability across languages.
- Dec 30 2025 gr-qc arXiv:2512.23683v1 Black holes beyond General Relativity may carry non-standard charges that impact their phenomenology. We study how the scalar charge that is induced by the scalar-Gauss-Bonnet coupling is affected by the presence of a nontrivial kinetic term $K(X)$. We discuss the corresponding kinetic screening in the asymptotically flat, static solution first. We then turn to the case where self-accelerating cosmology is driven by $K(X)$, finding that the time-dependence of the scalar field opens up the parameter space, turning the black-hole scalar charge from secondary to primary. We provide a stability analysis and a measure of the intensity of the kinetic screening from the quartic dispersion relation of the mixed scalar and gravitational modes.
Dmitri M Orlov, Joseph McClenaghan, Jeff Candy, Jeremy D Lore, Nathan T Howard, Francesco Sciortino, Christopher Holland · Dec 30 2025 physics.plasm-ph arXiv:2512.23682v1 Achieving self-consistent performance predictions for ITER requires integrated modeling of core transport and divertor power exhaust under realistic impurity conditions. We present results from the first systematic power-flow and impurity-content study for the ITER 15 MA baseline scenario constrained directly by existing SOLPS-ITER neon-seeded divertor solutions. Using the OMFIT STEP workflow, stationary temperature and density profiles are predicted with TGYRO for $1.5 \le Z_{\rm eff} \le 2.5$, and the corresponding power crossing the separatrix $P_{\rm sep}$ is evaluated. We find that $P_{\rm sep}$ varies by more than a factor of 1.7 across this scan and matches the $\sim 100$ MW SOLPS-ITER prediction when $Z_{\rm eff} \simeq 1.6$ or when auxiliary heating is reduced to $\sim 75\%$ of nominal. Rotation-sensitivity studies show that plausible variations in toroidal flow magnitude modify $P_{\rm sep}$ by $\lesssim 20\%$, while AURORA modeling confirms that charge-exchange radiation inside the separatrix is dynamically negligible under predicted ITER neutral densities. These results identify a restricted compatibility window, $Z_{\rm eff} \approx 1.6$--1.75 and $0.75 \lesssim f_{P_{\rm aux}} \le 1.0$, in which core transport predictions remain aligned with neon-seeded divertor protection targets. This self-consistent, model-constrained framework provides actionable guidance for impurity control and auxiliary-heating scheduling in early ITER operation and supports future whole-device scenario optimization.
- Dec 30 2025 math.NT arXiv:2512.23681v1 We give a short survey of the phenomenon of better-than-square-root cancellation, specifically as it applies to averages of multiplicative character sums (such as $\frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} |\sum_{n \leq x} \chi(n)|^{2q}$) thanks to their connection with so-called multiplicative chaos. We focus on the number theoretic aspects of the arguments, and also touch on some possible applications.
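The averaged moment can be explored numerically for small prime moduli by building the Dirichlet characters mod $r$ from a primitive root, $\chi_j(g^a) = e^{2\pi i j a/(r-1)}$. A brute-force sketch; as a sanity check, orthogonality forces the $q=1$ moment to equal $x$ exactly when $x < r$, which is the square-root-cancellation benchmark that the survey's sharper results improve on.

```python
import numpy as np
from sympy import primitive_root

# (1/(r-1)) * sum_{chi mod r} |sum_{n<=x} chi(n)|^(2q) for prime r, with the
# characters built from a primitive root g: chi_j(g^a) = exp(2*pi*i*j*a/(r-1)).
def char_sum_moment(r, x, q=1):
    g = primitive_root(r)
    dlog, m = {}, 1
    for a in range(r - 1):                  # discrete-log table: g^a -> a
        dlog[m] = a
        m = m * g % r
    total = 0.0
    for j in range(r - 1):                  # j = 0 is the principal character
        S = sum(np.exp(2 * np.pi * 1j * j * dlog[n] / (r - 1))
                for n in range(1, x + 1))
        total += abs(S) ** (2 * q)
    return total / (r - 1)

print(char_sum_moment(101, 60, q=1))        # 60.0 by orthogonality
print(char_sum_moment(101, 60, q=2))        # higher moments probe the chaos regime
```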
- As the class $\mathcal T_4$ of graphs of twin-width at most 4 contains every finite subgraph of the infinite grid and every graph obtained by subdividing each edge of an $n$-vertex graph at least $2 \log n$ times, most NP-hard graph problems, like Max Independent Set, Dominating Set, Hamiltonian Cycle, remain so on $\mathcal T_4$. However, Min Coloring and k-Coloring are easy on both families because they are 2-colorable and 3-colorable, respectively. We show that Min Coloring is NP-hard on the class $\mathcal T_3$ of graphs of twin-width at most 3. This is the first hardness result on $\mathcal T_3$ for a problem that is easy on cographs (twin-width 0), on trees (whose twin-width is at most 2), and on unit circular-arc graphs (whose twin-width is at most 3). We also show that for every $k \geqslant 3$, k-Coloring is NP-hard on $\mathcal T_4$. We finally make two observations: (1) there are currently very few problems known to be in P on $\mathcal T_d$ (graphs of twin-width at most $d$) and NP-hard on $\mathcal T_{d+1}$ for some nonnegative integer $d$, and (2) unlike $\mathcal T_4$, which contains every graph as an induced minor, the class $\mathcal T_3$ excludes a fixed planar graph as an induced minor; thus it may be viewed as a special case (or potential counterexample) for conjectures about classes excluding a (planar) induced minor. These observations are accompanied by several open questions.
Recent comments
...(continued)I am puzzled by the proposed metric for Shor's algorithm in this work. It seems to impose no restrictions on the classical pre- and post-processing which opens up a whole can of worms, as I try to explain below (since the authors of this work explicitly invite dialogue).
1. Firstly, for problem ins
...(continued)Nice paper! I noticed you also have schemes for the 4.8.8 code that generate the full Clifford group. Do you have any plans to do circuit-level simulations of these schemes? I'm quite interested in how the 4.8.8 circuit performs in general with the ancilla-free measurement circuit, because I think u
Just until this is addressed in v2: loglog(1/eps) depth is Thm 13.5 of the Kitaev-Shen-Vyalyi book. This is overall depth, not just T depth.
Thank you so much. I just realized this!
Dear Zhenhuan, if the group only contains the identity, the channel only needs to purify the maximally mixed state (the unique state in the algebra spanned by the group). It achieves this by always outputting the maximally entangled state (regardless of the input state).
...(continued)Congratulations on the interesting result!
- I was wondering about the relationship between your result and Theorem 3 in [arXiv:2509.21111][1], which proves the exponential sample complexity lower bound of preparing a single purification state.
Your result seems to hold for all unitary groups. So if th
If you want to try it yourself Min's implementation of the beam search decoder is now available here: https://github.com/ionq-publications/BeamSearchDecoder
The term *light rectangle* was used [20 years ago by N. David Mermin with the same meaning](https://arxiv.org/pdf/gr-qc/0411069). Mermin also deduces the invariant interval from the area of a light rectangle drawn on the Euclidean plane.
...(continued)Hi Ben, thanks a lot for your kind words!
Whether "optimal" should be reserved only for results that are tight without logarithmic factors is, we think, still somewhat up for debate. 🙂
For example, one of the two concurrent seminal papers establishing quantum state tomography optimal up to logs w