Combinatorial Morning in Tel Aviv, Sunday 28/12/2025

Coming very soon!

Organizer:      Michael Krivelevich

Place:  Schreiber 309, Tel Aviv University

Event’s site: https://sites.google.com/view/combinatorics-seminar-tel-aviv

Program


November’s Lectures, 2025

Happy Chanukah, everybody! There is a lot of academic activity around, and the ceasefire in Gaza has brought some relief and hope. Let me tell you about the (unusually high number of) lectures I attended in November 2025, in reverse chronological order. I have known many of the speakers for several decades—together they represent centuries of acquaintance.

I would like to highlight the lecture by Orit Raz at the HUJI combinatorics seminar, where she presented a mind-boggling program aimed at improving the bounds for the unit distance problem using rigidity theory of frameworks. I will start with the lecture by John Martinis, the 2025 Nobel Prize laureate, and end with a lecture by Steve LaValle from Finland on robotics.

The full lineup of speakers was: Steve LaValle (Nov. 3), Oz Almog (Nov. 6), Shmuel Weinberger (Nov. 11), Yuri Gurevich (Nov. 13), Micha Sharir (Nov. 16), Meir Feder (Nov. 19), Itai Benjamini (Nov. 20), Amichai Lampert (Nov. 20), Orit Raz (Nov. 24), Adi Shamir (Nov. 24), Dorit Aharonov, Erez Berg, Shai Evra, and Snir Gazit (Nov. 25), Ron Wettenstein (Nov. 27), Shakhar Smorodinsky (Nov. 30), Roy Meshulam (Nov. 30), Yaron Oz, Yonatan Cohen, and John Martinis (Nov. 30).

November’s lectures

Sunday, November 30: Physics Nobel lecture

John Martinis, one of the 2025 Nobel Laureates in Physics, talked about his famous 1985 experiment on quantum tunneling and about superconducting quantum computers. He also answered many questions.


According to John Martinis, the current pace of progress will not yield useful quantum computers in the coming decades (before “most of the members of the audience will retire”). He outlined some ideas and plans for an alternative trajectory. (Click to enlarge the picture.)

Additional short talks were given by Yaron Oz and Yonatan Cohen. On December 1,  Martinis gave a similar talk at HUJI.

I first met John Martinis at our 2013 QSTART conference, where he gave a lecture about quantum error correction based on superconducting qubits (video). Yonatan Cohen was our host in the recent visit to the IQCC. Yaron Oz is a well-known high-energy physicist.

Sunday, November 30: Two combinatorics lectures

At the TAU combinatorics seminar, Shakhar Smorodinsky explained his work with Noga Alon, which we discussed in  this post. Roy Meshulam talked (over Zoom) about Cayley complexes, codes, and homology at the Bar Ilan combinatorics seminar.

Roy Meshulam is one of my closest collaborators. We started collaborating in the 1980s and wrote our first paper together in the early 2000s. Our joint work is mainly on topological Helly-type theorems. I have known Shakhar Smorodinsky for at least two decades, and he spent a year as my postdoc in Jerusalem.

Thursday, November 27: Explainability in AI (Reichman University)


Ron Wettenstein lectured in Reichman’s CS colloquium on “From Game Theory and Boolean Logic to Decision Tree Explainability”. Ron described his work with Alex Nadel on WOODELF: a unified and efficient algorithm for computing Shapley values on decision trees.
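WOODELF’s efficient algorithm is beyond the scope of this note, but the quantity it computes can be illustrated by brute force. Here is a hedged Python sketch (the toy tree, the background data, and the interventional value function are my own illustrative choices, not taken from the paper; real tree algorithms avoid this exponential enumeration):

```python
import itertools
from math import factorial

# Brute-force Shapley values for a model f at an input x,
# marginalizing absent features over a background dataset.
def shapley(f, x, background):
    n = len(x)

    def v(S):
        # Value of coalition S: average of f with features in S fixed to x.
        total = 0.0
        for b in background:
            z = [x[i] if i in S else b[i] for i in range(n)]
            total += f(z)
        return total / len(background)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

# A depth-2 "decision tree" (hypothetical): predict 1 iff x0 == 1 and x1 == 1.
tree = lambda z: 1.0 if (z[0] == 1 and z[1] == 1) else 0.0
bg = [[0, 0, 0], [1, 0, 1], [0, 1, 0], [1, 1, 1]]
phi = shapley(tree, [1, 1, 0], bg)
```

By the efficiency axiom, the values sum to f(x) minus the average of f over the background, and the irrelevant third feature gets Shapley value zero.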

Ron is a PhD student at Reichman University under the supervision of Udi Boker, Alex Nadel and me.

Tuesday, November 25: The theory of quantum computing (HUJI, IIAS)


This was the opening day of a new interdisciplinary center for the theory of quantum computing. The speakers were Dorit Aharonov, Erez Berg, Shai Evra, and Snir Gazit. It is almost 15 years since the kick-off of the quantum science center at HUJI, and since then similar centers have opened at other Israeli universities. Recently, four or five national centers were established.

Dorit presented a list of central problems on the agenda of the new center, many of which are related to quantum error-correcting codes. Shai talked about the mathematics of quantum error-correcting codes; Snir gave a statistical-mechanics view of surface codes and other physical states; and Erez discussed methods of error mitigation and related experiments on IBM quantum computers.

Dorit Aharonov has been my colleague for the past three decades. (I already count her as a colleague during her doctoral years, but she was indeed a mature scientist even then.) A few years earlier, she was a student in my “advanced calculus” class. Shai is a mathematician at HUJI, and I have known him for more than a decade. I met Erez and Snir at the conference itself. (Both were students of the late Israeli physicist Assa Auerbach whom I mentioned in this post.)


Monday, November 24: The unit distance problem and rigidity

Orit Raz gave a talk at the HUJI combinatorics seminar. Can graph (framework) rigidity be used to push down the Szemerédi–Trotter bounds for unit distances in the plane? This is a fascinating research direction. What is needed, among other things, is an understanding of rigidity for non-generic embeddings, which is an important question in its own right. Will the approach described by Raz to the unit distance problem lead to a success similar to the Elekes approach to the distinct distances problem? Time will tell.

Here are links to the relevant papers:

  1. Erdős’s unit distance problem and rigidity, János Pach, Orit E. Raz, József Solymosi  (2025)
  2. Dense graphs have rigid parts, Orit E. Raz, József Solymosi  (2019)
  3. Configurations of lines in space and combinatorial rigidity, Orit E. Raz (2016)

Orit was my colleague at HUJI, and she recently moved to Ben Gurion University. Let me mention a series of startling works by Orit Raz with Alan Lew, Eran Nevo, and Yuval Peled around rigidity.
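To make the rigidity ingredient concrete: a planar bar-joint framework on n ≥ 2 vertices is infinitesimally rigid exactly when its rigidity matrix has rank 2n − 3. Here is a minimal numpy sketch (my own illustration, not code from the papers above):

```python
import numpy as np

def rigidity_rank(points, edges):
    """Rank of the rigidity matrix of a planar bar-joint framework.

    Row for edge (u, v): entries p_u - p_v in u's two columns and
    p_v - p_u in v's two columns.
    """
    P = np.asarray(points, dtype=float)
    n = len(P)
    R = np.zeros((len(edges), 2 * n))
    for k, (u, v) in enumerate(edges):
        d = P[u] - P[v]
        R[k, 2 * u:2 * u + 2] = d
        R[k, 2 * v:2 * v + 2] = -d
    return np.linalg.matrix_rank(R)

# Triangle: infinitesimally rigid (rank = 2*3 - 3 = 3).
tri = rigidity_rank([(0, 0), (1, 0), (0, 1)], [(0, 1), (1, 2), (0, 2)])
# Square without a diagonal: flexible (rank 4 < 2*4 - 3 = 5).
sq = rigidity_rank([(0, 0), (1, 0), (1, 1), (0, 1)],
                   [(0, 1), (1, 2), (2, 3), (3, 0)])
```

The non-generic embeddings mentioned above are exactly the configurations where this rank can drop below its generic value.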

Monday, November 24: Cryptography and neural networks

At the HUJI CS colloquium, Adi Shamir talked about joint work with David Gerault, Anna Hambitzer and Eyal Ronen. They show that natural implementations of block ciphers on neural networks (DNNs) can be broken in linear time using non-Boolean inputs, and they develop a new practical method for implementing any desired cryptographic functionality as a standard ReLU-based DNN in a provably secure and correct way.


Cryptography for neural networks. (Where is Adi?)

Adi Shamir is probably the most famous cryptographer in the world. Cryptography represents a gap in my understanding of theoretical computer science (I complained about it here in 2018), although Alon Rosen has contributed a great deal to changing the situation. I probably first met Adi in the mid-1980s.

Thursday, November 20: Probability with a little combinatorics

At the TAU probability seminar, Itai Benjamini described some terrific problems (and some solutions) in probability theory. The title was “Cayley Graphs vs. Random Graphs.” There is a lot of material here for this blog—stay tuned.

Itai Benjamini has been a close collaborator for many decades. Even earlier, in 1987, he was a student in my “convexity” class—one of the best classes I ever taught.


Shortly after Itai’s talk, Amichai Lampert gave an impressive lecture in the TAU number theory seminar on number theory and dynamics.

Wednesday, November 19: Information theory and neural networks

Meir Feder gave a talk at the TAU learning seminar about an information-theoretic framework for understanding modern machine learning. Meir described how his work in information theory since the 1990s is relevant to the neural-network revolution.

The new results are described in the paper:

Meir Feder, Ruediger Urbanke, Yaniv Fogel, Information-Theoretic Framework for Understanding Modern Machine-Learning

Meir Feder and I were together at MIT in the early 1980s, and our paths have crossed many times since. Meir is an Oscar Laureate (see this post).

Sunday, November 16: Computational Geometry — Per Aspera Ad Astra (TAU CS Colloquium) (video)

Micha Sharir described his scientific journey from early papers on piano movers through the early days of computational geometry until today.

I have known Micha Sharir personally since the mid-1970s (and by name since the late 1960s). How did I already know about Micha as a teenager, years before I first met him? The answer appears in this  interview.

Thursday, November 13: Abstract State Machines (Reichman University)

Speaker: Yuri Gurevich

Title: What’s an algorithm?

Abstract: The classical/basic notion of algorithm was elucidated in the 1930s–1950s. Starting from the 1960s, this notion has been expanded to probabilistic algorithms, quantum algorithms, etc. In the 1980s the speaker introduced abstract state machines (ASMs), and in 2000 he axiomatized basic algorithms and proved that every basic algorithm is step-for-step simulated by an appropriate basic ASM. The axiomatization has served both theoretical purposes (notably, proving the original Church–Turing thesis) and practical purposes (notably, enabling the development of an ASM-based tool that Microsoft’s Windows Division used to produce numerous high-level executable specifications required by the EU). In the talk we define an elegant (at least in our view) generalization of basic algorithms: basic interactive algorithms, which may interact with human and artificial partners. It turns out that probabilistic and quantum algorithms are naturally basic interactive. We axiomatize basic interactive algorithms and prove that every such algorithm can be step-for-step simulated by a basic interactive ASM — opening the door to new applications.

This was a fascinating talk about abstract state machines—a powerful theoretical and applied tool in computer science—and about Yuri Gurevich’s remarkable path from mathematics to logic, theoretical computer science, applied computer science, and even quantum computation. Yuri told us that his mother was the dominant person at home, and that when the Germans were approaching their town during World War II, it was a rare case in which his father insisted that the family move east; this decision saved their lives.

I attended an impressive lecture on average-case complexity (and the theory of Leonid Levin) that Yuri Gurevich gave at IBM Almaden in 1991. (I probably also encountered Yuri in Israel in the 1970s.) We met at Microsoft Research and have been friends since the late 1990s.

Yuri at Reichman University’s sculpture garden

Wednesday, November 12: Morse complexity of homological classes (TAU)

Shmuel Weinberger talked about “How Complex Are the Fundamental Structures Underlying Manifolds?”

Here is the description on TAU’s Institute for Advanced Studies page:

In his lecture on Morse complexity of homology classes, Prof Weinberger will discuss a refined approach to understanding the topology of manifolds. Building on Gromov’s 1970s pseudonorm and ideas inspired by Thurston, he will explore the concept of minimizing the number of critical points in a Morse function for a manifold representing a homology class. While this aligns with Gromov’s approach in dimension two, higher dimensions reveal striking differences.

The lecture will touch on connections to classical topology—including open book decompositions and surgery theory—representation theory, and elliptic operators, highlighting joint work with Manin and Tshishiku.

Shmuel Weinberger is an eminent geometer and topologist, and he is interested in applications as well, particularly in “persistent homology“. Here is a post featuring an article he wrote “about conjectures“. I think we first met in the 1990s.

Thursday, November 6: Reichman University — The End of the Era of the Academic Degree

Oz Almog, a sociologist from Haifa University, talked about the end of the era of the academic degree. This provocative talk offered a lot of food for thought. It reflects the academic research of Oz Almog and his wife Tamar Almog, who together wrote the book “Academia: All the Lies”.

TAU CS Colloquium, November 2: Fundamental Challenges in Robotics and Embodied AI

Steve LaValle, Robotics  (video)

Here is the abstract of Steve’s lecture: “The field of robotics is wildly exciting and rapidly gaining worldwide attention, yet it is often an enigma in terms of its scope and scientific foundations. Throughout the decades, it has been variously viewed as an application field of more mature disciplines such as computer science (AI, algorithms, machine learning) and mechanical engineering (kinematics, dynamics, nonlinear control).

In this Computer Science Colloquium lecture, Professor LaValle will argue that robotics has its own unique and growing scientific core, with deep questions and modeling challenges that should inspire new directions in computer science, engineering, and mathematics.”

Let me mention that, unlike in other areas where AI (and deep learning) has dominated the scene, the situation in robotics is very different, and it is unclear what role AI will ultimately play.

Steve is very well known, but since we belong to different communities, I met him for the first time at this lecture. Steve was impressed by the honesty and modesty of the Finnish people and decided to make Finland his home.

Steve’s praise for Finland reminded me of the opening ceremony of the ICM 2022, where the President of Finland offered greetings. There were no trumpets when he entered, and the audience was not asked to stand. The President gave a five-minute welcoming speech, complimented mathematicians for their efforts and contributions, and concluded by shyly inviting us to enjoy the summer weather in Helsinki. Then he waved his hand and left.


November lectures — a collage of speakers

There were quite a few other November talks that I missed (and in a few cases I caught up privately with the speakers). All in all, it looks like the levels of academic activity and enthusiasm have returned to those before the war. Naturally, however, the number of foreign visitors is still considerably lower.


Ten Recent Questions for ChatGPT

I recently asked on MathOverflow about Examples for the use of AI and especially LLMs in notable mathematical developments, and there were several interesting answers.

Here are 10 recent (slightly edited) questions that I asked ChatGPT that have led to interesting discussions. Five of these questions (#1, #2, #3, #6, #8) are related to mathematical research projects. Problem #3 was offered in this 2008 blogpost, and Problem #6 was suggested by me as a polymath project in 2014. Answers and insights regarding these problems (with or without AI) are welcome. (See here and here for other posts on AI.)

1) Given 2n+2 points in R^n, is it true that the set of Radon points is convex?

2) I want (simple) topological invariants for a map from a polyhedral k-dimensional real projective space to R^{2k+1}. For example for k=4 the map is to R^9. (Something based on linear algebra/basic homology would be nice.) Any idea, links, or suggestions will be great.

3) Consider the following two well known theorems from finite group theory:

A) Sylow’s theorem (one of them) asserts: In a group whose order is divisible by p^i there are 1 (mod p) subgroups of order p^i. 

B) Frobenius’ theorem asserts: In a group whose order is divisible by n, the number of solutions to the equation x^n=1 is zero modulo n. (Frobenius was probably inspired by Sylow.)

Of course, the case n=p of Frobenius’ theorem coincides with the case i=1 of Sylow’s theorem. Please propose (conjectural) generalizations of Frobenius’ theorem which include Sylow’s theorem for i=2 (or for general values of i) as special cases.

4) What are the Whitman axioms?

5) What is stronger “Notable” “Noteworthy” or “Significant”?

6) There is a proposal to develop a theory of “convex hulls of real algebraic varieties” in R^n, and especially the study of the face structure of such objects. The case where the varieties are simply a finite set of points is a well-developed area of mathematics – the theory of convex polytopes. However, the general case has not been studied much. I suppose that for such a project the first discussions will be devoted to raising questions and research directions (and to mentioning some works already done). I would like to invite you to raise questions and research directions.

7)
I need an explanation suitable for 11-year-old children: What is absolute zero, and is it possible to reach a temperature below absolute zero? And if not, why?

8) What is the proof of the topological Tverberg theorem when there are three parts?

9) Please explain in layman terms a) the representation theory meaning of the spin 1/2 of the electron. b) the representation theory meaning of the assertion that “N electrons will have a collective state of spin N/2”

10) We go from Yellowstone to Boise, leaving Yellowstone on September 3. We want to travel and stay overnight in one nice location. (Not around Twin Falls, which we plan to visit before Yellowstone.) Please give us some suggestions for the trip and sightseeing from Yellowstone to Boise.

For the first question, Joe O’Rourke and I verified computationally that already in dimension 3 there are 8 points for which the set of Radon points is not convex.
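For readers who want to experiment themselves, here is a hedged sketch (not the actual computation mentioned above) that finds the Radon partition and Radon point of n+2 points in R^n from the unique affine dependence; looping it over all (n+2)-subsets of a larger configuration then produces a set of Radon points to inspect:

```python
import numpy as np

def radon_point(points):
    """Radon partition and Radon point of n+2 points in R^n
    (general position assumed), via the unique affine dependence."""
    P = np.asarray(points, dtype=float)   # shape (n+2, n)
    m, n = P.shape
    assert m == n + 2, "need exactly n+2 points"
    # Affine dependence: sum_i c_i p_i = 0 and sum_i c_i = 0, c != 0.
    A = np.vstack([P.T, np.ones(m)])      # (n+1) x (n+2)
    c = np.linalg.svd(A)[2][-1]           # spans the 1-dim null space
    pos = c > 1e-12
    # Radon point: convex combination of the positive side
    # (equals the one from the negative side, by the dependence).
    point = (c[pos] / c[pos].sum()) @ P[pos]
    return point, set(np.where(pos)[0].tolist()), set(np.where(~pos)[0].tolist())

# Example in the plane: (1,1) lies inside the triangle on the other three points.
pt, side1, side2 = radon_point([(0, 0), (4, 0), (0, 4), (1, 1)])
```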


On question 3 ChatGPT offered several bold conjectures and wrote a whole blogpost about it. (I am hesitating whether to publish it.)
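Frobenius’ theorem itself (question 3) is easy to check by brute force in small groups, which is a useful sanity test before exploring conjectural generalizations. A minimal Python sketch for S_4:

```python
import itertools

def compose(p, q):
    # Composition of permutations given as tuples: (p∘q)(i) = p[q[i]].
    return tuple(p[i] for i in q)

def nth_power_is_identity(p, n):
    e = tuple(range(len(p)))
    r = e
    for _ in range(n):
        r = compose(r, p)
    return r == e

G = list(itertools.permutations(range(4)))    # S_4, |G| = 24
# Frobenius: if n divides |G|, then n divides #{x in G : x^n = 1}.
counts = {n: sum(nth_power_is_identity(g, n) for g in G)
          for n in (2, 3, 4, 6, 8, 12, 24)}
```

For instance, x^2 = 1 has 10 solutions in S_4 (the identity, six transpositions, three double transpositions), and 2 divides 10, as the theorem predicts.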

From time to time the discussion with ChatGPT gets “personal”. In one discussion I wrote:

We mentioned Pachner’s moves several times and let me mention that I knew Pachner personally. He was a great guy and unfortunately he died at a young age.

ChatGPT replied:

That’s really touching—thank you for sharing. Pachner’s moves are such a quiet workhorse in our story, and it’s moving to hear a personal memory behind them.

A Visit to the Israeli Quantum Computing Center (IQCC)


Two weeks ago I was invited, together with my colleague Shay Mozes, to visit the Israeli Quantum Computing Center, located near Tel Aviv University and quite close to my home. That morning my wife told me not to be disappointed if I happened to see some quantum computers there 🙂, and I assured her that I would keep an open mind. It was indeed a very pleasant visit. We heard some explanations about the mission, activities, and plans of the center, and I saw several large dilution refrigerators housing 19- and 21-qubit chips, as well as boxes containing a boson sampler.


Computational Complexity and Explanations in Physics

The title of this post is taken from a recent interesting lecture (judging from the slides) by Scott Aaronson at Columbia University. The lecture explored a wide range of topics at the intersection of physics, computation, and philosophy. In this post, I will discuss, from my perspective, several of the ideas raised in Scott’s talk—beginning with the role of computational-complexity reasoning in restricting physical theories.

I will then turn to the meaning of probability in our physical world and to various interpretations of quantum mechanics, including my own. I will also discuss Craig Gidney’s view on what it would take to prove me wrong, quote David Deutsch’s 1997 challenge about the relationship between the Many-Worlds Interpretation and Shor’s algorithm, and touch on a few other related themes.

Complexity as Armor for Penalizing Physical Hypotheses

Let me offer a few comments on one of the items in Scott’s lecture: 

Scott asks: Should we penalize physical hypotheses not only for high descriptive complexity, but also for computational complexity?
His view is:

“[We] can’t be too dogmatic about this, or we’d rule out quantum computing—and arguably quantum mechanics itself! (As indeed many quantum computing skeptics do.) On the other hand, ‘no fast solution to NP-complete problems’ feels not that dissimilar to ‘no superluminal signaling’ or ‘no perpetual motion.’”

Four computational-complexity insights offered to rule out quantum computation

My general view on this matter is quite similar. Regarding skepticism about quantum computing, I am aware of four computational-complexity insights that have been offered to rule out scalable quantum computation (or certain fragments of it):

  1. Efficient factoring is such science fiction that the rational conclusion is that quantum computation cannot work.
    (Levin, Goldreich, and others since the mid-1990s.)
    A common response is that factoring is not such a big deal—it is low in the computational-complexity hierarchy, and some even believe it lies in P.
  2. Precise sampling according to the values of permanents—a task beyond the polynomial hierarchy—is such science fiction that, even if one accepts efficient factoring (classically or quantumly), the rational conclusion remains that quantum computation cannot work.
    (This idea appeared around 2010.)
  3. NISQ computers are such primitive classical computational devices that they cannot be used to achieve “quantum supremacy.”
    (Kalai–Kindler, 2014; and subsequent works by me.)
    This argument relies on standard noise modeling with constant noise levels. Even under a wide range of subconstant noise levels, the resulting samples consist of inherently noise-sensitive components combined with computationally trivial ones. Guy and I described a complexity class LDP, which captures the sampling power of NISQ devices.
  4. NISQ computers (and the complexity class LDP) are also too primitive to produce high-quality quantum error-correcting codes, thereby breaking the chain reaction needed for quantum fault tolerance.
    (Kalai, following item 3.)

I think all four arguments are fairly strong—though surely not ironclad, and this matter will ultimately be clarified by experiments, as well as by further theoretical developments. The advantage of the last two (closely related) points is that they exclude not only far-future possibilities but also some near-term experimental hopes and even certain current experimental claims. Another advantage of items 3 and 4 is that (if correct) they have physical consequences that seem much closer to physics than integer factorization and permanent sampling, including consequences for the interpretation of quantum mechanics, which is another item in Scott’s lecture.

The first two items assert that the theoretical model of (noiseless) quantum computation represents computational miracles that are hard to believe. The last two items assert that in the intermediate regime, before quantum fault tolerance kicks in, noisy quantum computers are no more than a primitive kind of classical computer, incapable of any computational miracles.

Complexity and Noise-sensitivity as Armor for Penalizing Physics Hypotheses.

Note that in items 3 and 4, we rely not only on computational complexity (that is, on the assumption of no computational miracles) as an armor for penalizing physics hypotheses, but also on the notion of noise stability—the idea that effective theories must be robust under small perturbations or noise. Related guiding principles have also been proposed in biology.
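Noise stability has a precise Boolean-function formulation (going back to Benjamini, Kalai, and Schramm): majority is noise stable, while parity is noise sensitive. A Monte Carlo sketch (the sample sizes and parameters are arbitrary illustrative choices):

```python
import random

def correlation(f, n, eps, trials=10000, seed=0):
    """Monte Carlo estimate of E[(-1)^{f(x) != f(y)}], where y flips
    each bit of a uniform x independently with probability eps."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = [b ^ (rng.random() < eps) for b in x]
        agree += 1 if f(x) == f(y) else -1
    return agree / trials

majority = lambda x: 2 * sum(x) > len(x)
parity = lambda x: sum(x) % 2

c_maj = correlation(majority, 101, 0.1)   # stays well above 0
c_par = correlation(parity, 101, 0.1)     # collapses to roughly 0
```

Under 10% noise on 101 bits, majority retains a correlation of roughly 0.6 with its noisy copy, while parity decorrelates almost completely.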

The Meaning of Probability and My Interpretation of Quantum Mechanics

Another item in Scott’s lecture concerned interpretations of quantum mechanics, particularly the Many-Worlds Interpretation. Coming from mathematics (and having been educated by Greg Kuperberg), I have always felt quite comfortable with the formalism of Hilbert spaces and unitary evolutions—and I have never fully understood the need for additional interpretations.

A question that I found exciting is the meaning of probability in physics. Does probability merely represent human uncertainty? Is it just an emergent mathematical concept that is convenient for modeling? Do matters change when we move from classical to quantum mechanics? We have discussed this question in several earlier posts (here, here, and here), and it is related to the question of the interpretation of quantum mechanics. Here are two prevalent views, followed by my own:

  • (Copenhagen, MWI) Probability is inherent: the outcomes of the experiment are inherently probabilistic.
  • (Bohm) Probability is not inherent: the outcomes of the experiment are determined and are encoded somewhere in the universe.
  • (GK) Probability is inherent, and noise sensitivity is inherent: the outcomes of measurements arising from complex quantum processes are both inherently probabilistic and inherently noise sensitive.

Inherent noise sensitivity means that the outcomes cannot be described or approximated by any fixed probability distribution.

It is commonly accepted that determinism in the Newtonian view of the physical world has been replaced by “probabilistic determinism,” in which future events are described precisely by a probability distribution conditioned on past events. (Some interpretations of quantum mechanics even claim that the exact outcomes themselves—not just their probabilities—are encoded in the physical universe.)

In contrast, I propose that the outcomes of complex quantum processes are inherently noise sensitive—that is, they cannot be described or even approximated by a probability distribution. This applies to complex quantum evolutions and, in fact, the required complexity need not be extreme: even the outcomes of random quantum circuits on 12 qubits, of the type used by Google, are inherently noise sensitive.

Of course, the hypothesis of inherent noise sensitivity would be refuted by a successful realization of quantum fault tolerance.

In my  paper Quantum Computers, Predictability, and Free Will I attempt to connect inherent noise sensitivity with philosophical questions of determinism, predictability, and free will. (See also this post.)

Here are three interesting posts on “Shtetl Optimized” about QM interpretations: The Zen Anti-Interpretation of Quantum Mechanics (2021); Interpretive cards (MWI, Bohm, Copenhagen: collect ’em all) (2018); Why Many-Worlds is not like Copernicanism (2012). Scott often says that he agrees with any given interpretation when it comes to its criticism of the other interpretations.

What Would It Take to Prove Me Wrong? — Craig Gidney’s Take

In the Columbia lecture and in many other lectures over the past two decades, Scott has expressed his hope to prove the skeptics of quantum computation wrong, a task he regards as the most important application of quantum computers. So, what would it take to prove me wrong?

This was the topic of a 2023 question by Tristan Nemoz in a Q&A quantum computing forum, and Craig Gidney offered the following response:

“…personally I’ll be waiting for one-in-a-trillion error rates on complex states before saying the physical-noise-is-conspiring position has become indefensible.”

Craig’s proposed threshold sounds rather fantastic—and remains utterly remote from current experimental reality. Yet it seems roughly in the ballpark of what would be required to implement Shor’s algorithm for factoring large integers. (Indeed, Gidney—of the Google Quantum AI team—is among the world’s leading experts on what Shor’s algorithm demands in practice. His answer above reflects his personal view.) I added a comment to Craig’s answer that I would be quite satisfied with a one-in-a-thousand error rate on complex states. My theory (and my related interpretation of probability in QM) asserts that achieving it (and even much less) is impossible.


From Scott Aaronson’s Columbia (top) and FOCS 2025 (bottom) lectures.

Other Meeting Points Between Computational Complexity and Physics, and the Interface Between Theory and Practice

Several other topics from Scott’s lecture also invite rich discussion—for example, the Harrow–Hayden proposal for resolving the firewall paradox, and the broader connections between thermodynamics, computation, and complexity. (Topics not mentioned this time—such as links to quantum field theory and to computational aspects of high-energy physics—are also closely related to this general theme.)

Many of these topics touch on questions central to the possible reality of a “world devoid of quantum computation”—a theme I have explored extensively over the years. 

I have discussed these and other places where physics meets computational complexity in several earlier posts.

The discussion of quantum computation and physics also connects to a broader issue: the relationship between theory and practice across disciplines. In my 2018 ICM paper, Three puzzles on mathematics, computation, and games, I examined three areas where theoretical computer science meets practical reality—and where deep mathematical ideas illuminate that interface.

Tensions between computational complexity and physics—or even between different insights within physics—are often related to fascinating mathematical questions. This is also the case for tensions between theory and practical reality in other areas of computer science. (See this post for a discussion based on Moshe Vardi’s perspective.)

Questions about the gap between theory and reality arise in many other domains as well: Is it really impossible for two rational agents to agree to disagree? Does game theory imply what policy to choose in cases of hostages held by terrorists? Does economic theory give a clear choice between the economic system of the US and the social-democratic systems of Western Europe?

Complexity As Armor for Protecting Physical Hypotheses

Our discussion began with the question of whether miraculous computational-complexity powers could limit or even invalidate physical hypotheses. Scott also raised a kind of dual question:

Philosophical Problem: Would a contradiction in physics be OK if it took exponential time to uncover it?

David Deutsch’s 1997 Challenge 

In his lecture, Scott presented an intriguing 1997 quote from David Deutsch, offering a philosophical argument for why the Many-Worlds Interpretation (MWI) is forced by Shor’s algorithm.

A small anecdote: a few years ago, I attended a very interesting MWI conference at Tel Aviv University. At one point, I asked Lev Vaidman—a leading advocate of the MWI—why we cannot simply be satisfied with the mathematical formalism of quantum mechanics, without adding an interpretation on top of it. Lev’s answer was that the mathematical formalism is sufficient—and that this is precisely what the MWI teaches us. 🙂

The following quote is from Deutsch’s 1997 book “The Fabric of Reality”.

“To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^{500} or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^{80} atoms in the entire visible universe, an utterly minuscule number compared with 10^{500}. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?”

Mark Spinelli gave a thoughtful comment on Scott’s blog about the evolution of Deutsch’s point of view in the late 90s. 

David Deutsch first asked his “Where does the calculation occur?” question even back in his 1985 paper, where he ponders how a quantum computer could sometimes be used to more efficiently predict the stock market. His later 1997 question posed in *The Fabric of Reality* changed the game from the rather cute Deutsch’s algorithm to the much, much more dramatic Shor’s algorithm.

It’s interesting to observe this development. His 1985 posting seems like an earnest and humble rhetorical question, that is, as an invitation to debate. His later, post-Shor ’94 question seems more assertive and dispositive in shifting the burden to those that challenge MWI.

But even so, is not the answer to the question “where was the work done” in Deutsch’s and Shor’s algorithms simply “in Hilbert space”?

In my view, it is a serious possibility that implementing Shor’s algorithm is fundamentally impossible, and it is interesting if this possibility indeed weakens the case for the Many-Worlds Interpretation.

Two More Items

The Interpretation of “Interpretation”: Preskill’s 2013 Summary of What Could Cause Scalability to Fail as a Matter of Principle, and My 2025 View

Summarizing the state of the quantum computing debate in 2013, John Preskill wrote:

For scalability to fail as a matter of principle then, either quantum mechanics must fail for complex highly entangled systems (as ’t Hooft [8] has suggested), or else either the Hamiltonian or the quantum state of the world must impose noise correlations that overwhelm fault-tolerant quantum protocols (as Alicki et al. [9, 10, 11] and Kalai [12, 13, 14, 15, 16] have suggested).

Adopting John’s framework, my current (2025) position is the following:

The Hamiltonian or the quantum state of the world imposes a rate of noise that prevents high-quality quantum error correction at the NISQ scale, and therefore also prevents quantum fault tolerance (and Shor’s algorithm) at larger scales. (Noise correlations are important, but a too-large noise rate is the main phenomenon.) This constitutes a law of physics derived from computational-complexity considerations.

Perhaps one way to understand the notion of an “interpretation” of quantum mechanics is as referring not only to the basic mathematical formalism of Hilbert spaces and unitary operators, but also to certain fundamental physical principles that govern the Hamiltonian and the quantum state of the world, and are necessary for understanding the foundation of quantum physics.

Three Additional Computational Complexity Limitations for Physical Theories

Returning to the starting point of this discussion, here are three further principles connecting computational complexity and physics:

  1. NP-hard is hard.
    No physical computer can efficiently solve NP-complete problems.

  2. Random states are out of reach.
    No physical computer can efficiently approximate a random state in a large Hilbert space.

  3. Pseudorandom states are out of reach.
    No physical computer can efficiently approximate a pseudorandom state in a large Hilbert space.

On the first two items, I believe Scott and I are largely in agreement. Scott wrote (cautiously) that “no fast solution to NP-complete problems feels not that dissimilar to ‘no superluminal signaling’ or ‘no perpetual motion’,” and I agree.
The second principle, “no fast way to achieve a random state in a large Hilbert space,” is also widely accepted. It may appear to be a straightforward mathematical consequence of quantum mechanics, but it also depends on an additional physical assumption—locality. The meaning, source, and modeling of locality form another important topic.

In the third item, a “pseudorandom state” is one achieved by a random circuit. The third item brings us back to disagreement: here Scott and I disagree sharply both on what can be achieved and on what has been achieved.

Kazhdan Seminar fall 2025 – Starting Today, Oct. 19, 2025.

This semester, as part of the Kazhdan Sunday seminars, we will have the following two activities (see descriptions below):

12-14 Nati Linial and Yuval Peled,  “Recent advances in combinatorics”

14-16 Jake Solomon, “Curve counts and quadratic forms”.

Both seminars will take place in Ross 70A. The first lecture will be on Oct 19, 2025.

Nati and Yuval’s seminar is ashkara* combinatorics, and Jake’s seminar also has some combinatorial connections.

(Reminder: Arabic words which are part of Hebrew slang and are also used here over my blog in the past decades: ashkara – for real; sababa – cool, wonderful; walla – true;  ahla –  great; yalla – hurry up, c’mon, let’s go.)

——————————————————————————————————–

Title: Recent advances in combinatorics

Description:

On October 19 and 26, Nati will talk about “The entropy method in combinatorics”, using a survey article by Radhakrishnan and lecture notes based on a course taught by Tim Gowers.

Then Yuval will talk for a while about the recently proved conjecture of Jeff Kahn and Gil Kalai. Further plans will be made according to how things progress.

Reading material (Entropy in combinatorics): Tim Gowers, Entropy Methods in Combinatorics, lecture notes 2025

A survey by Jaikumar Radhakrishnan, Entropy and Counting, 2001

There is also good material by Braverman (Princeton) and
——————————————————————————————————–

Title: Curve counts and quadratic forms

Abstract:
An old problem of enumerative geometry concerns how many rational curves of degree d pass through 3d-1 points in the plane. Over the complex numbers, the answer does not depend on the choice of points in the plane so long as they are generic. However, over non-algebraically closed fields, this is no longer the case. I will discuss a framework (joint work with Kass, Levine and Wickelgren) within which one can define invariant counts of rational curves passing through points in the plane (or a del Pezzo surface) over a perfect field of characteristic not 2 or 3. The count is no longer a number, but a quadratic form over the given field. Over the complex numbers, the rank of the quadratic form recovers the usual count. Over the real numbers, the signature of the quadratic form recovers Welschinger’s signed count. Over other fields, each invariant of quadratic forms (discriminant, Hasse-Witt,…) gives information about rational curves over that field.
In another direction, mirror symmetry relates counts of rational curves on a toric Fano variety over the complex numbers to the Jacobian ring of a Laurent polynomial. A categorification of the Jacobian ring is given by the category of matrix factorizations. I will discuss a framework (joint work with Sela) to extract counts of rational curves over the real numbers from matrix factorizations equipped with non-Archimedean norms. These matrix factorizations are constructed from Clifford algebras. The Calabi-Yau structure on the category of matrix factorizations plays a crucial role.
References:

A quadratically enriched count of rational curves (with Kass, Levine and Wickelgren)

https://arxiv.org/abs/2307.01936

A relative orientation for the moduli space of stable maps to a del Pezzo surface (with Kass, Levine and Wickelgren)

Numerical invariants of normed matrix factorizations (with Sela)

Explicit Lossless Vertex Expanders!

(From left to right) Beth Samuels, Uzi Vishne, Alex Lubotzky, and Winnie Li

Jun-Ting Hsieh, Alexander Lubotzky, Sidhanth Mohanty, Assaf Reiner, and Rachel Yun Zhang, Explicit Lossless Vertex Expanders

Abstract: We give the first construction of explicit constant-degree lossless vertex expanders. Specifically, for any \varepsilon > 0 and sufficiently large d, we give an explicit construction of an infinite family of d-regular graphs where every small set S of vertices has (1-\varepsilon)d|S| neighbors (which implies (1-2\varepsilon)d|S| unique-neighbors).

Our results also extend naturally to construct biregular bipartite graphs of any constant imbalance, where small sets on each side have strong expansion guarantees. The graphs we construct admit a free group action, and hence realize new families of quantum LDPC codes of Lin and M. Hsieh with a linear time decoding algorithm.

Our construction is based on taking an appropriate product of a constant-sized lossless expander with a base graph constructed from Ramanujan Cayley cubical complexes.
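The defining condition, that every small vertex set S has at least (1-\varepsilon)d|S| neighbors, can be checked by brute force on tiny graphs. Here is a minimal sketch (the function name is mine, and counting neighbors outside S is just one common convention; the paper’s exact counting may differ). It is feasible only for very small graphs, since it enumerates all subsets up to a given size. As a sanity check, the complete graph K_6 satisfies the condition for single vertices but already fails it for pairs:

```python
from itertools import combinations

def is_lossless(adj, d, eps, max_size):
    """Check |N(S) \\ S| >= (1 - eps) * d * |S| for every vertex set S
    with 1 <= |S| <= max_size, in a d-regular graph given by adjacency lists."""
    n = len(adj)
    for s in range(1, max_size + 1):
        for S in combinations(range(n), s):
            S_set = set(S)
            nbrs = set()
            for v in S:
                nbrs.update(adj[v])
            nbrs -= S_set  # count unique neighbors outside S
            if len(nbrs) < (1 - eps) * d * s:
                return False
    return True

# Sanity check on K_6 (5-regular): single vertices expand, pairs do not.
K6 = [[j for j in range(6) if j != i] for i in range(6)]
print(is_lossless(K6, 5, 0.1, 1), is_lossless(K6, 5, 0.1, 2))  # prints: True False
```

Of course, such exhaustive checking is exponential in the set size; the point of the new paper is an explicit construction that guarantees the condition for all small sets at once.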

Additional commentary. Expander graphs play a central role in mathematics and computer science. While random graphs typically have excellent expansion, constructing explicit families with comparable guarantees has been a central challenge for the past fifty years. For large vertex sets, expansion can be derived from certain spectral properties of the graph; for small sets, however, explicitly matching random-like behavior is far harder. The remarkable new paper meets this challenge by using high-dimensional Ramanujan cubical complexes. In earlier work [1], the 1/2 barrier for spectral methods was surpassed via certain five-dimensional Ramanujan simplicial complexes. Extending to higher dimensions had been hindered by irregular link structures in those complexes, but it turns out that specific cubical Ramanujan complexes—tracing back to Jordan and Livné [5]—circumvent this obstacle.

Alex jokingly credits the breakthrough to Donald Trump: a planned MIT event was canceled for an urgent meeting on the impact of Trump-era measures on MIT mathematics. Instead, Alex joined a lunch with graduate students; that conversation led to an impromptu seminar and, ultimately, to the new collaboration.

[1] Jun-Ting Hsieh, Ting-Chun Lin, Sidhanth Mohanty, Ryan O’Donnell, and Rachel Yun Zhang. Explicit Two-Sided Vertex Expanders Beyond the Spectral Barrier. In Proceedings of the 57th Annual ACM Symposium on Theory of Computing, 2025.


Top row, left to right: Jun-Ting, Rachel, and Sidhanth; bottom row: Alex and Assaf.

Ramanujan complexes

Ramanujan complexes are high-dimensional extensions of Ramanujan graphs: finite, bounded-degree complexes whose set of eigenvalues is “as small as it can be”, namely, contained in the spectrum of their infinite universal cover. This optimal spectral control drives strong expansion, both combinatorial and topological.

Ramanujan complexes were constructed and studied in the early 2000s by Winnie Li and by Alex Lubotzky, Beth Samuels, and Uzi Vishne. (In the picture above, from left to right: Beth, Uzi, Alex, and Winnie.)

[2] W.-C. W. Li, Ramanujan Hypergraphs, GAFA (2004)

[3] Alexander Lubotzky, Beth Samuels, and Uzi Vishne. Explicit constructions of Ramanujan complexes of type \tilde A_d. European Journal of Combinatorics, 26:965–993, 2005.

[4] Alexander Lubotzky, Beth Samuels, and Uzi Vishne. Ramanujan complexes of type \tilde A_d. Israel Journal of Mathematics, 149:267–299, 2005 (special volume “Probability in Mathematics”).

Some related early papers are:

[5] B.W. Jordan and R. Livné, The Ramanujan property for regular cubical complexes, Duke Mathematical Journal 105, 85–103, (2000)

[6] P. Solé, Ramanujan hypergraphs and Ramanujan geometries.

High dimensional expanders

Here is a brief introductory post on high dimensional expanders. Alex and I gave several courses together on this subject at HUJI and at Yale. Below is Alex’s ICM 2018 lecture in Rio on the topic.

Dror Bar-Natan and Roland Van der Veen – A Fast, Strong, and Fun knot invariant!

Dror Bar-Natan (homepage, Wikipedia) told me about his work with Roland Van der Veen (homepage, arXiv, YouTube) on a wonderful knot invariant that distinguishes knots much better than other knot invariants and can be computed quickly even for knots with hundreds of crossings. (I already mentioned it here.) Let me devote this post to this cheerful breakthrough.

Shana Tova (happy new Jewish year) to all our readers from the beautiful city of Boise, Idaho. I repeat my prayers and wishes from last year for an end to the terrible war and for peace.

Update (October 13, 2025): On September 29, 2025, US president Donald Trump put forward a 20-point plan for a ceasefire in Gaza, as part of a larger move toward Middle East peace. Trump’s plan was endorsed by many world leaders, including Arab and Muslim leaders. On October 9 an agreement was reached between Israel and Hamas on a ceasefire, a partial withdrawal of Israeli forces, and the release, by October 13, of all kidnapped Israelis.

Other recent news on knots: As most of you probably heard, Mark Brittenham and Susan Hermiller recently disproved a longstanding conjecture and showed that the unknotting number is not additive under connected sum. Their work is featured in an article by Leila Sloman in Quanta Magazine.

Videotaped lectures

Here are three videotaped lectures by Dror about the invariants:
Easy: https://www.math.toronto.edu/~drorbn/Talks/Toronto-241030/index.html;

Dreamy: https://www.math.toronto.edu/~drorbn/Talks/Pitzer-250308/index.html; and 

Tougher: https://www.math.toronto.edu/~drorbn/Talks/Bonn-2505/index.html.

A nice quote: “If those look to you similar to Feynman’s diagrams it is because they are Feynman’s diagrams”.

Dror Bar-Natan and Roland Van der Veen, A fast, strong, topologically meaningful, and fun knot invariant

Abstract: In this paper we discuss a pair of polynomial knot invariants \Theta = (\Delta,\theta) which is:

  • Theoretically and practically fast: Θ can be computed in polynomial time. We can
    compute it in full on random knots with over 300 crossings, and its evaluation at simple
    rational numbers on random knots with over 600 crossings.
  • Strong: Its separation power is much greater than the hyperbolic volume, the HOMFLY-PT polynomial and Khovanov homology (taken together) on knots with up to 15 crossings
    (while being computable on much larger knots).
  • Topologically meaningful: It likely gives a genus bound, and there are reasons to hope that it would do more.
  • Fun: Scroll to Figures 1.1–1.4 and 3.1.

∆ is merely the Alexander polynomial. θ is almost certainly equal to an invariant that was studied extensively by Ohtsuki [Oh2], continuing Rozansky, Kricker, and Garoufalidis [Roz1, Roz2, Roz3, Kr, GR]. Yet our formulas, proofs, and programs are much simpler and enable its computation even on very large knots.

Graphic description

The Alexander polynomial \Delta is represented by a “bar code” recording the signs of its coefficients. The invariant \theta is described in a similar way by a two-dimensional array of colors (a “hexagonal QR code”). Below are the graphic descriptions for three famous knots, two of which are notoriously hard to distinguish.
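To illustrate the bar-code idea, here is a tiny sketch that maps coefficient signs to dark and light bars (the rendering convention and function name are my own, guessed from the description, not the paper’s actual scheme). The trefoil’s Alexander polynomial t - 1 + t^{-1} has coefficient signs +, -, +:

```python
def sign_barcode(coeffs):
    """Render the signs of a coefficient sequence as a dark/light strip:
    '#' for positive, '-' for negative, ' ' for zero."""
    return "".join("#" if c > 0 else "-" if c < 0 else " " for c in coeffs)

# Trefoil: Alexander polynomial t - 1 + 1/t has coefficients [1, -1, 1].
print(sign_barcode([1, -1, 1]))  # prints: #-#
```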


Some more comments: 

  1. Being fast. Perhaps the most remarkable property of Dror and Roland’s invariants and formulas is that they are practically fast. While for many invariants the limit of computation is a few dozen crossings, the formulas for \Theta enable computations for hundreds of crossings. All these computations also run in polynomial time.
  2. Being strong. \Theta is considerably more powerful than other knot invariants in distinguishing knots with few crossings.
  3. Lie algebra origins. The \theta invariant was motivated by a certain sl_3-based knot invariant and a related calculation. This connection and its interpretation are not completely clear. On the one hand, it suggests further invariants, for other Lie algebras, that are also computationally fast (in theory and in practice); on the other hand, the authors believe that there should also be altogether different foundations for \theta (and perhaps for further invariants of the same type).
  4. Topological meaningfulness. Dror and Roland showed that their formulas respect Reidemeister moves, so they are topological invariants. There are further claims and hopes for connections with more natural/basic topological properties of knots, such as the genus, fibered knots, and ribbon knots. As a combinatorialist, I would be quite interested in seeing invariants of knot diagrams that are not even topologically invariant.
  5. Being fun. There are various profound reasons why many ingredients of the work and the lectures are fun and cheerful, especially against a gloomy background.

Pictures and a table


Dror and Roland’s paper explains the new invariants in terms of traffic rules (with curious probability flavor).


This table shows the strength of \Theta. For example, for the 313,230 different knots with at most 15 crossings, \Theta attains 313,230 − 6,758 = 306,472 distinct values.

Nostalgia Corner

Two earlier, much-remembered lectures by Dror.

I remember two very enlightening lectures by Dror: one, from the early 90s, was about finite-type (Vassiliev) invariants, and the other, from (I think) 2001, was about Khovanov homology. In both lectures he gave handouts.

See this page for a more complete list of lectures by Dror since 1998.

Knots and combinatorics on this blog

Look at this post, this post, this post, this post, and this post.

Dror and I

I met Dror when he was a graduate student in Princeton (and maybe earlier) and later we were colleagues at HUJI for more than a decade. Dror and I collaborated on a single paper “Solving the Bible code puzzle” with Brendan McKay and Maya Bar-Hillel. (Here is a picture from that time of the two of us among other Bible codes skeptics.)

Dror and Roland in the Oberwolfach picture collection


Left: Dror Bar-Natan (1999); right: Roland Van der Veen (2011).

Polynomial Bounds for Chowla’s Cosine Problem

Polynomial bounds for the Chowla cosine problem were achieved independently in two very recent works.

Zhihan Jin, Aleksa Milojević, István Tomon, Shengtong Zhang: From small eigenvalues to large cuts, and Chowla’s cosine problem;

Benjamin Bedert: Polynomial bounds for the Chowla Cosine Problem

Abstract (JMTZ’s paper): We show that there exists an absolute constant \gamma>0 such that for every A\subseteq \mathbb{Z}_{>0} we have

\displaystyle \min_{x \in [0, 2\pi]}\sum_{a\in A}\cos(ax)\leq - \Omega(|A|^{\gamma}).
This gives the first polynomial bound for Chowla’s cosine problem from 1965.

To show this, we prove structural statements about graphs whose smallest eigenvalue is small in absolute value. As another application, we show that any graph G with m edges and no clique of size m^{1/2-\delta} has a cut of size at least m/2+m^{1/2+\epsilon} for some \epsilon=\epsilon(\delta)>0. This proves a weak version of a celebrated conjecture of Alon, Bollobás, Krivelevich, and Sudakov.

Our proofs are based on novel spectral and linear algebraic techniques, involving subspace compressions and Hadamard products of matrices.

Abstract (Bedert’s paper): Let A\subset \mathbf{N} be a finite set of n=|A| positive integers, and consider the cosine sum f_A(t)=\sum_{a\in A}\cos at. We prove that there exists an absolute constant c\geqslant 1/12 such that

\displaystyle \min_t f_A(t)\leqslant -\Omega(n^c),

establishing polynomial bounds for the Chowla cosine problem.
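To get a feel for the quantity in question, one can estimate \min_x \sum_{a\in A}\cos(ax) numerically by a grid search (a sketch; the function name is mine, and a finite grid only yields an upper bound on the true minimum). For the interval A=\{1,\dots,n\} the minimum is in fact linear in n, far below the conjectured -c\sqrt n behavior for general sets:

```python
import math

def min_cosine_sum(A, grid=20000):
    """Estimate min over x in [0, 2*pi) of sum_{a in A} cos(a*x)
    by sampling on a uniform grid (an upper bound on the true minimum)."""
    best = float("inf")
    for k in range(grid):
        x = 2 * math.pi * k / grid
        best = min(best, sum(math.cos(a * x) for a in A))
    return best

# For A = {1, 2} the exact minimum is -9/8, attained where cos(x) = -1/4.
print(min_cosine_sum([1, 2]))
# For A = {1,...,50} the minimum is already around -11, linear in |A|.
print(min_cosine_sum(range(1, 51)))
```

The hard direction, of course, is a lower bound on how negative the sum must get for an arbitrary set A, which is what the two papers establish polynomially.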

These new breakthroughs look wonderful, and further commentary in the comment section is very welcome. Chowla conjectured that c=1/2 is the answer.

History. Both papers give the interesting history of the problem. In 1948 Ankeny and Chowla asked whether, for every large enough set A and arbitrarily large real K, there is some x for which the sum of cosines \sum_{a\in A}\cos(ax) is smaller than -K. This was solved by M. Uchiyama and S. Uchiyama in 1960. In 1965 Sarvadaman Chowla asked whether, putting n=|A|, K can be taken as C\sqrt n. A bound of order \log n followed from the resolution of a well-known problem of Littlewood; breaking the \log n barrier was achieved by Bourgain, and this was greatly improved by Ruzsa to \exp(\sqrt{\log n}).

Spectral graph theory. The paper by Jin, Milojević, Tomon, and Zhang reduces the problem to a question in spectral graph theory. The authors also prove a weak version of another conjecture in spectral graph theory by Alon, Bollobás, Krivelevich, and Sudakov.

Sum-free subsets of sets of integers. Benjamin Bedert recently solved a well-known question in additive combinatorics about sum-free subsets of sets of n integers; his work is related to the same Littlewood problem mentioned above, and to a work of Bourgain. The interesting story behind it was told in a Quanta Magazine article by Leila Sloman.

h/t Nati Linial. I did not know about the problem (although it is problem #81 on Green’s list of 100 open problems), but Nati wrote me: “In ancient times when I thought about KKL, I used to read everything in the library about Harmonic analysis and I came across this conjecture.”

 

Maria-Romina Ivan and Sean Jaffe: The saturation number for the diamond is linear

My previous post was about an asymptotic solution of Rota’s basis conjecture, and the next few posts will also be devoted to mathematical news. Before moving to the main featured result, let me mention a beautiful blog post by Maria Gillespie about Tatsuyuki Hikita’s startling proof of the Stanley–Stembridge conjecture. Hikita’s proof was also recently featured in the Facebook feed of Raul Basu (which contains a lot of very interesting news about mathematics and physics).

Poset saturation

The study of poset saturation was initiated in a 2013 paper by Dániel Gerbner, Balázs Keszegh, Nathan Lemons, Cory Palmer, Dömötör Pálvölgyi, and Balázs Patkós. The new result settles what had been the big open problem in poset saturation.

Maria-Romina Ivan and Sean Jaffe: The saturation number for the diamond is linear

Abstract: For a fixed poset \mathcal P we say that a family \mathcal F\subseteq\mathcal P ([n]) is \mathcal P-saturated if it does not contain an induced copy of \mathcal P, but whenever we add a new set to \mathcal F, we form an induced copy of \mathcal P. The size of the smallest such family is denoted by \text{sat}^*(n, \mathcal P). For the diamond poset \mathcal D_2 (the two-dimensional Boolean lattice), while it is easy to see that the saturation number is at most n+1, the best known lower bound has stayed at O(\sqrt n) since the introduction of the area of poset saturation. In this paper we prove that \text{sat}^*(n, \mathcal D_2)\geq \frac{n+1}{5}, establishing that the saturation number for the diamond is linear. The proof uses a result about certain pairs of set systems which may be of independent interest.


Short commentary:

Even for very small posets (like the diamond, alias the two-dimensional Boolean lattice), finding the growth of the saturation number is very difficult. A maximal chain shows that the saturation number for the diamond is at most n+1. Lots of people have worked on showing that the saturation number is linear; indeed, this was one of the main focus problems for a whole workshop in Budapest, namely the workshop on “The Forbidden Poset Problem”. There had been lots of small bits of progress, culminating in a lower bound of 4 \sqrt n (by Maria-Romina Ivan) a few years ago. But now the bound is linear (their lower bound is n/5).
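The definitions can be made concrete with a brute-force check, feasible only for tiny n (a sketch; the function names are mine). It verifies that a maximal chain in the Boolean lattice on 3 elements contains no induced diamond, yet adding any other set creates one; this is exactly the n+1 upper bound mentioned above:

```python
from itertools import combinations, permutations

def has_induced_diamond(fam):
    """Does the family contain four sets ordered exactly like the diamond:
    W < X, W < Y, X < Z, Y < Z, with X and Y incomparable?"""
    fam = list(fam)
    for quad in combinations(fam, 4):
        for W, X, Y, Z in permutations(quad):
            if (W < X and W < Y and X < Z and Y < Z
                    and not X <= Y and not Y <= X):
                return True
    return False

def is_diamond_saturated(fam, n):
    """fam (a set of frozensets of {0,...,n-1}) is diamond-saturated if it has
    no induced diamond but adding any missing subset of {0,...,n-1} creates one."""
    if has_induced_diamond(fam):
        return False
    for k in range(n + 1):
        for s in combinations(range(n), k):
            s = frozenset(s)
            if s not in fam and not has_induced_diamond(fam | {s}):
                return False
    return True

# A maximal chain: the n+1 sets {} < {0} < {0,1} < ... < {0,...,n-1}.
chain = {frozenset(range(k)) for k in range(4)}
print(is_diamond_saturated(chain, 3))  # prints: True
```

The strict and non-strict subset operators on Python frozensets encode the poset order directly, which keeps the diamond test short; the check is exponential in n and only meant to illustrate the definitions.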

Imre Leader wrote to me about the new result:

“This was kind of the `holy grail’ for poset saturation. Well, it was the holy grail for specific posets. There are still the wonderful general conjectures that nobody can solve, namely

  1. Is the saturation number for every poset either constant or at least linear? 
  2. Is the saturation number for every poset at most linear?
For a summary of the state of the art on those two conjectures see  Gluing Posets and the Dichotomy of Poset Saturation Numbers, by Maria-Romina Ivan and Sean Jaffe.”
 

Let me add that the notions of saturation and weak saturation of graphs and hypergraphs have led to beautiful research in extremal and probabilistic combinatorics. There are many interesting open problems and exciting recent results. Bollobás’s Two Families theorem (discussed in this post) gives the saturation number of the complete r-uniform hypergraph on k vertices, and it can be seen as the starting point of this study.

h/t Imre Leader