For Newtonmas, One Seventeenth of a New Collider

Individual physicists don’t ask for a lot for Newtonmas. Big collaborations ask for more.

This year, CERN got its Newtonmas gift early: a one billion dollar pledge from a group of philanthropists and foundations, to be spent on their proposed new particle collider.

That may sound like a lot of money (and of course it is), but it’s only a fraction of the 15 billion euros that the collider is estimated to cost. That makes this less a case of private donors saving the project, and more of a nudge, showing governments they can get results for a bit cheaper than they expected.

I do wonder if the donation has also made CERN bolder about their plans, since it was announced shortly after a report from the update process for the European Strategy for Particle Physics. In that report, the European Strategy Group recommended a backup plan for the collider that is just the same collider with a 15% budget cut. Naturally, people started making fun of this immediately.

Credit to @theory_dad on X

There were more serious objections from groups that had proposed more specific backup plans earlier in the process. They are frustrated that their ideas were rejected in favor of a 15% tweak that was never discussed and seems never to have been seriously evaluated.

I don’t have any special information about what’s going on behind the scenes, or where this is headed. But I’m amused, and having fun with the parallels this season. I remember writing lists as a kid, trying to take advantage of the once-a-year opportunity to get what seemed almost like a genie’s wish. Whatever my incantations, the unreasonable requests were never fulfilled. Still, I had enough new toys to fill my time, and whet my appetite for the next year.

We’ll see what CERN’s Newtonmas gift brings.

Academia Tracks Priority, Not Provenance

A recent Correspondence piece in Nature Machine Intelligence points at an issue with using LLMs to write journal articles. LLMs are trained on enormous amounts of scholarly output, but the result is quite opaque: it is usually impossible to tell which sources influence a specific LLM-written text. That means that when a scholar uses an LLM, they may get a result that depends on another scholar’s work, without realizing it or documenting it. The ideas’ provenance gets lost, and the piece argues this is damaging, depriving scholars of credit and setting back progress.

It’s a good point. Provenance matters. If we want to prioritize funding for scholars whose ideas have the most impact, we need a way to track where ideas arise.

However, current publishing norms make essentially no effort to do this. Academic citations are not used to track provenance, and they are not typically thought of as tracking provenance. Academic citations track priority.

Priority is a central value in scholarship, with a long history. We give special respect to the first person to come up with an idea, make an observation, or do a calculation, and more specifically, the first person to formally publish it. We do this even if the person’s influence was limited, and even if the idea was rediscovered independently later on. In an academic context, being first matters.

In a paper, one is thus expected to cite the sources that have priority, the ones that came up with an idea first. Someone who fails to do so will get citation request emails, and reviewers may request revisions to add the missing citations.

One may also cite papers that were helpful, even if they didn’t come first. Tracking provenance in this way can be nice, a way to give direct credit to those who helped and point people to useful resources. But it isn’t mandatory in the same way. If you leave out a secondary source and your paper doesn’t use anything original to that source (like new notation), you’re much less likely to get citation request emails, or revision requests from reviewers. Provenance is just much lower priority.

In practice, academics track provenance in much less formal ways. Before citations, a paper will typically have an Acknowledgements section, where the authors thank those who made the paper possible. This includes formal thanks to funding agencies, but also informal thanks for “helpful discussions” that don’t meet the threshold of authorship.

If we cared about tracking provenance, those acknowledgements would be crucial information, an account of whose ideas directly influenced the ideas in the paper. But they’re not treated that way. No-one lists the number of times they’ve been thanked for helpful discussions on their CV or in a grant application; no-one considers these discussions for hiring or promotion. You can’t look them up on an academic profile or easily graph them in a metascience paper. Unlike citations, unlike priority, there is essentially no attempt to measure these tracks of provenance in any organized way.

Instead, provenance is often the realm of historians or history-minded scholars, writing long after the fact. For academics, the fact that Yang and Mills published their theory first is enough, we call it Yang-Mills theory. For those studying the history, the story is murkier: it looks like Pauli came up with the idea first, and did most of the key calculations, but didn’t publish when it looked to him like the theory couldn’t describe the real world. What’s more, there is evidence suggesting that Yang knew about Pauli’s result, that he had read a letter from him on the topic, that the idea’s provenance goes back to Pauli. But Yang published, Pauli didn’t. And in the way academia has worked over the last 75 years, that claim of priority is what actually mattered.

Should we try to track provenance? Maybe. Maybe the emerging ubiquity of LLMs should be a wakeup call, a demand to improve our tracking of ideas in both artificial and human neural networks. Maybe we need to demand interpretability from our research tools, to insist that we can track every conclusion back to its evidence for every method we employ, to set a civilizational technological priority on the accurate valuation of information.

What we shouldn’t do, though, is pretend that we just need to go back to what we were doing before.

Energy Is That Which Is Conserved

In school, kids learn about different types of energy. They learn about solar energy and wind energy, nuclear energy and chemical energy, electrical energy and mechanical energy, and potential energy and kinetic energy. They learn that energy is conserved, that it can never be created or destroyed, but only change form. They learn that energy makes things happen, that you can use energy to do work, that energy is different from matter.

Some, between good teaching and good students, manage to impose order on the jumble of concepts and terms. Others end up envisioning the whole story a bit like Pokemon, with different types of some shared “stuff”.

Energy isn’t “stuff”, though. So what is it? What relates all these different types of things?

Energy is something which is conserved.

The mathematician Emmy Noether showed that every symmetry of the laws of physics comes with a conserved quantity. For example, because the laws of physics are the same from place to place, momentum is conserved. Similarly, because the laws of physics are the same from one time to another, Noether’s theorem states that there must be some quantity related to time, some number we can calculate, that is conserved even as other things change. We call that number energy.
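To make this concrete, here’s the textbook example (a standard worked case, my addition rather than anything from the school curriculum): a particle moving in one dimension through a potential $V(x)$ has the Lagrangian

$$L = \frac{1}{2}m\dot{x}^2 - V(x).$$

If $L$ doesn’t depend explicitly on time, so the laws are the same at every moment, Noether’s theorem hands you the conserved quantity

$$E = \dot{x}\,\frac{\partial L}{\partial \dot{x}} - L = \frac{1}{2}m\dot{x}^2 + V(x),$$

which is exactly the kinetic-plus-potential energy from school.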

If energy is that simple, why are there all those types?

Energy is a number we can calculate, and we can calculate it for many different things. If you have a detailed description of how something in physics works, you can use that description to calculate that thing’s energy. In school, you memorize formulas like $\frac{1}{2}mv^2$ and $mgh$. These are all formulas that, with a bit more knowledge, you could derive. They are the quantities that, for a system meeting the right conditions, are conserved. They are the things that, according to Noether’s theorem, stay the same.
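If you’d rather watch conservation happen than take the theorem’s word for it, here’s a minimal numerical sketch (the mass, height, and time step are invented for illustration): drop an object, and see $\frac{1}{2}mv^2 + mgh$ barely budge as kinetic energy trades off against potential energy.

```python
# A minimal sketch: check numerically that kinetic plus potential energy
# stays (approximately) constant for an object in free fall.
# All numbers here are invented for illustration.

g = 9.81    # gravitational acceleration, m/s^2
m = 2.0     # mass, kg
h = 100.0   # starting height, m
v = 0.0     # starting speed, m/s
dt = 0.001  # time step, s

total_start = 0.5 * m * v**2 + m * g * h

while h > 0:
    v += g * dt                          # speed up under gravity
    h -= v * dt                          # fall
    total = 0.5 * m * v**2 + m * g * h   # kinetic + potential

print(f"Energy at start: {total_start:.1f} J, at the ground: {total:.1f} J")
```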

Because of this, you shouldn’t think of energy as a substance, or a fuel. Energy is something we can do: we physicists, and we students of physics. We can take a physical system, and see what about it ought to be conserved. Energy is an action, a calculation, a conceptual tool that can be used to make predictions.

Most things are, in the end.

Ideally, Exams Are for the Students

I should preface this by saying I don’t actually know that much about education. I taught a bit in my previous life as a professor, yes, but I probably spent more time being taught how to teach than actually teaching.

Recently, The Atlantic had a piece about testing accommodations for university students, like extra time on exams, or getting to take an exam in a special distraction-free environment. The piece quotes university employees who are having more and more trouble providing these accommodations, and includes the statistic that 20 percent of undergraduate students at Brown and Harvard are registered as disabled.

The piece has kicked off a firestorm on social media, mostly focused on that statistic (which conveniently appears just before the piece’s paywall). People are shocked, and cynical. They feel like more and more students are cheating the system, getting accommodations that they don’t actually deserve.

I feel like there is a missing mood in these discussions, that the social media furor is approaching this from the wrong perspective. People are forgetting what exams actually ought to be for.

Exams are for the students.

Exams are measurement tools. An exam for a class says whether a student has learned the material, or whether they haven’t, and need to retake the class or do more work to get there. An entrance exam, or a standardized exam like the SAT, predicts a student’s future success: whether they will be able to benefit from the material at a university, or whether they don’t yet have the background for that particular program of study.

These are all pieces of information that are most important to the students themselves, that help them structure their decisions. If you want to learn the material, should you take the course again? Which universities are you prepared for, and which not?

We have accommodations, and concepts like disability, because we believe that there are kinds of students for whom exams don’t give this information accurately. We think that such a student, given more time or a distraction-free environment, would get a more accurate idea of whether they need to retake the material, or whether they’re ready for a course of study, than they would taking the exam under ordinary conditions. And we think we can identify the students for whom this matters, and the students for whom it doesn’t matter nearly as much.

These aren’t claims about our values, or about what students deserve. They’re empirical claims, about how test results correlate with outcomes the students want. The conversation, then, needs to be built on top of those empirical claims. Are we better at predicting the success of students that receive accommodations, or worse? Can we measure that at all, or are we just guessing? And are we communicating the consequences accurately to students, that exam results tell them something useful and statistically robust that should help them plan their lives?

Values come in later, of course. We don’t have infinite resources, as the Atlantic piece emphasizes. We can’t measure everyone with as much precision as we would like. At some level, generalization takes over and accuracy is lost. There is absolutely a debate to be had about which measurements we can afford to make, and which we can’t.

But in order to have that argument at all, we first need to agree on what we’re measuring. And I feel like most of the people talking about this piece haven’t gotten there yet.

Bonus Info For “Cosmic Paradox Reveals the Awful Consequence of an Observer-Free Universe”

I had a piece in Quanta Magazine recently, about a tricky paradox that’s puzzling quantum gravity researchers and some early hints at its resolution.

The paradox comes from trying to describe “closed universes”, which are universes where it is impossible to reach the edge, even if you had infinite time to do it. This could be because the universe wraps around like a globe, or because the universe is expanding so fast no traveler could ever reach an edge. Recently, theoretical physicists have been trying to describe these closed universes, and have noticed a weird issue: each such universe appears to have only one possible quantum state. In general, quantum systems have more possible states the more complex they are, so for a whole universe to have only one possible state is a very strange thing, implying a bizarrely simple universe. Most worryingly, our universe may well be closed. Does that mean that secretly, the real world has only one possible state?

There is a possible solution that a few groups are playing around with. The argument that a closed universe has only one state depends on the fact that nothing inside a closed universe can reach the edge. But if nothing can reach the edge, then trying to observe the universe as a whole from outside would tell you nothing of use. Instead, any reasonable measurement would have to come from inside the universe. Such a measurement introduces a new kind of “edge of the universe”, this time not in the far distance, but close by: the edge between an observer and the rest of the world. And when you add that edge to the calculations, the universe stops being closed, and has all the many states it ought to.

This was an unusually tricky story for me to understand. I narrowly avoided several misconceptions, and I’m still not sure I managed to dodge all of them. Likewise, it was unusually tricky for the editors to understand, and I suspect it was especially tricky for Quanta’s social media team to understand.

It was also, quite clearly, tricky for the readers to understand. So I thought I would use this post to clear up a few misconceptions. I’ll say a bit more about what I learned investigating this piece, and try to clarify what the result does and does not mean.

Q: I’m confused about the math terms you’re using. Doesn’t a closed set contain its boundary?

A: Annoyingly, what physicists mean by a closed universe is a bit different from what mathematicians mean by a closed manifold, which is in turn more restrictive than what mathematicians mean by a closed set. One way to think about this that helped me: in an open set, you can take a limit that takes you out of the set, which is like being able to describe a (possibly infinite) path that takes you “out of the universe”. A closed set doesn’t have that: every path, no matter how long, still ends up inside the same universe.
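A standard example from a first course in real analysis (my addition, for anyone who wants the math made concrete): in the open interval $(0,1)$, the sequence

$$x_n = \frac{1}{n} \in (0,1) \quad \text{for all } n \geq 2, \qquad \lim_{n\to\infty} x_n = 0 \notin (0,1),$$

takes a limit right out of the set. The closed interval $[0,1]$ has no such escape route: the limit of any convergent sequence of its points is still a point of $[0,1]$.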

Q: So a bunch of string theorists did a calculation and got a result that doesn’t make sense, a one-state universe. What if they’re just wrong?

A: Two things:

First, the people I talked to emphasized that it’s pretty hard to wiggle out of the conclusion. It’s not just a matter of saying you don’t believe in string theory and that’s that. The argument is based in pretty fundamental principles, and it’s not easy to propose a way out that doesn’t mess up something even more important.

That’s not to say it’s impossible. One of the people I interviewed, Henry Maxfield, thinks that some of the recent arguments are misunderstanding how to use one of their core techniques, in a way that accidentally presupposes the one-state universe.

But even he thinks that the bigger point, that closed universes have only one state, is probably true.

And that’s largely due to a second reason: there are older arguments that back the conclusion up.

One of the oldest dates back to John Wheeler, a physicist famous for both deep musings about the nature of space and time and coining evocative terms like “wormhole”. In the 1960’s, Wheeler argued that, in a theory where space and time can be curved, one should think of a system’s state as including every configuration it can evolve into over time, since it can be tricky to specify a moment “right now”. In a closed universe, you could expect a quantum system to explore every possible configuration…meaning that such a universe should be described by only one state.

Later, physicists studying holography ran into a similar conclusion. They kept noticing systems in quantum gravity where you can describe everything that happens inside by what happens on the edges. If there are no edges, that seems to suggest that in some sense there is nothing inside. Apparently, Lenny Susskind had a slide at the end of talks in the 90’s where he kept bringing up this point.

So even if the modern arguments are wrong, and even if string theory is wrong…it still looks like the overall conclusion is right.

Q: If a closed universe has only one state, does that make it deterministic, and thus classical?

A: Oh boy…

So, on the one hand, there is an idea, which I think also goes back to Wheeler, that asks: “if the universe as a whole has a wavefunction, how does it collapse?” One possibility is that the universe has only one state, so that nobody is needed to collapse the wavefunction, it already is in a definite state.

On the other hand, a universe with only one state does not actually look much like a classical universe. Our universe looks classical largely due to a process called decoherence, where small quantum systems interact with big quantum systems with many states, diluting quantum effects until the world looks classical. If there is only one state, there are no big systems to interact with, and the world has large quantum fluctuations that make it look very different from a classical universe.
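For the curious, here’s a toy numerical sketch of that dilution (my own illustration, not from any of the papers in the piece): each environment qubit that “learns” which branch the system is in multiplies the system’s quantum coherence by an overlap smaller than one, so the coherence dies off exponentially as the environment grows.

```python
import numpy as np

# Toy decoherence model: a qubit starts in an equal superposition, then
# each of N environment qubits ends up in a different (random) state
# depending on the branch. The qubit's remaining quantum coherence is
# the product of the overlaps between the two branches' environment
# states, which shrinks fast as N grows.

rng = np.random.default_rng(0)

def random_qubit_state():
    """A random normalized two-component complex vector."""
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    return psi / np.linalg.norm(psi)

for n_env in [1, 5, 20, 80]:
    coherence = 1.0
    for _ in range(n_env):
        e0 = random_qubit_state()  # environment state in the |0> branch
        e1 = random_qubit_state()  # environment state in the |1> branch
        coherence *= abs(np.vdot(e0, e1))
    print(f"{n_env:3d} environment qubits -> coherence {coherence:.2e}")
```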

Q: How, exactly, are you defining “observer”?

A: A few commenters helpfully chimed in to talk about how physics models observers as “witness” systems, objects that preserve some record of what happens to them. A simple example is a ball sitting next to a bowl: if you find the ball in the bowl later, it means something moved it. This process, preserving what happens and making it more obvious, is in essence how physicists think about observers.

However, this isn’t the whole story in this case. Here, different research groups introducing observers are doing it in different ways. That’s, in part, why none of them are confident they have the right answer.

One of the approaches describes an observer in terms of its path through space and time, its worldline. Instead of a detailed witness system with specific properties, all they do is pick out a line and say “the observer is there”. Identifying that line, and declaring it different from its surroundings, seems to be enough to recover the complexity the universe ought to have.

The other approach treats the witness system in a bit more detail. We usually treat an observer in quantum mechanics as infinitely large compared to the quantum systems they measure. This approach instead gives the observer a finite size, and uses that to estimate how far their experience will be from classical physics.

Crucially, neither approach is a matter of defining a physical object and looking for it in the theory. Given a collection of atoms, neither team can tell you what is an observer, and what isn’t. Instead, in each approach, the observer is arbitrary: a choice, made by us when we use quantum mechanics, of what to count as an observer and what to count as the rest of the world. That choice can be made in many different ways, and each approach tries to describe what happens when you change it.

This is part of what makes this way of thinking uncomfortable for some more philosophically-minded physicists: it treats observers not as a predictable part of the physical world, but as a mathematical description used to make statements about the world.

Q: If these ideas come from AdS/CFT, which is an open universe, how do you use them to describe a closed universe?

A: While more examples emerged later, initially theorists were thinking about two types of closed universes:

First, think about a black hole. You may have heard that when you fall into a black hole, you watch the whole universe age away before your eyes, due to the dramatic differences in the passage of time caused by the extreme gravity. Once you’ve seen the outside universe fade away, you are essentially in a closed universe of your own. The outside world will never affect you again, and you are isolated, with no path to the outside. These black hole interiors are one of the examples theorists looked at.

The other example is so-called “baby universes”. When physicists use quantum mechanics to calculate the chance of something happening, they have to add up every possible series of events that could have happened in between. For quantum gravity, this includes every possible arrangement of space and time: arrangements with different shapes, including ones where tiny extra “baby universes” branch off from the main universe and return. Universes with these “baby universes” are the other example theorists considered to understand closed universes.
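Schematically (my gloss, not a formula from the piece), that “adding up” is the gravitational path integral,

$$Z \sim \int \mathcal{D}g \; e^{iS[g]/\hbar},$$

a weighted sum over every geometry $g$ of space and time, with weight set by its action $S[g]$; among the geometries being summed are ones where little closed universes pinch off from the main one and rejoin it.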

Q: So wait, are you actually saying the universe needs to be observed to exist? That’s ridiculous, didn’t the universe exist long before humans existed to observe it? Is this some sort of Copenhagen Interpretation thing, or that thing called QBism?

A: You’re starting to ask philosophical questions, and here’s the thing:

There are physicists who spend their time thinking about how to interpret quantum mechanics. They talk to philosophers, and try to figure out how to answer these kinds of questions in a consistent and systematic way, keeping track of all the potential pitfalls and implications. They’re part of a subfield called “quantum foundations”.

The physicists whose work I was talking about in that piece are not those people.

Of the people I interviewed, one of them, Rob Myers, probably has lunch with quantum foundations researchers on occasion. The others, based at places like MIT and the IAS, probably don’t even do that.

Instead, these are people trying to solve a technical problem, people whose first inclination is to put philosophy to the side, and “shut up and calculate”. These people did a calculation that ought to have worked, checking how many quantum states they could find in a closed universe, and found a weird and annoying answer: just one. Trying to solve the problem, they’ve done technical calculation work, introducing a path through the universe, or a boundary around an observer, and seeing what happens. While some of them may have their own philosophical leanings, they’re not writing works of philosophy. Their papers don’t talk through the philosophical implications of their ideas in all that much detail, and they may well have different thoughts as to what those implications are.

So while I suspect I know the answers they would give to some of these questions, I’m not sure.

Instead, how about I tell you what I think?

I’m not a philosopher; I can’t promise my views will be consistent, or that they won’t suffer from some pitfall. But unlike with other people’s views, I can tell you exactly what my own views are.

To start off: yes, the universe existed before humans. No, there is nothing special about our minds, we don’t have psychic powers to create the universe with our thoughts or anything dumb like that.

What I think is that, if we want to describe the world, we ought to take lessons from science.

Science works. It works for many reasons, but two important ones stand out.

Science works because it leads to technology, and it leads to technology because it guides actions. It lets us ask, if I do this, what will happen? What will I experience?

And science works because it lets people reach agreement. It lets people reach agreement because it lets us ask, if I observe this, what do I expect you to observe? And if we agree, we can agree on the science.

Ultimately, if we want to describe the world with the virtues of science, our descriptions need to obey this rule: they need to let us ask “what if?” questions about observations.

That means that science cannot avoid an observer. It can often hide the observer, place them far away and give them an infinite mind to behold what they see, so that one observer is essentially the same as another. But we shouldn’t expect to always be able to do this. Sometimes, we can’t avoid saying something about the observer: about where they are, or how big they are, for example.

These observers, though, don’t have to actually exist. We should be able to ask “what if” questions about others, and that means we should be able to dream up fictional observers, and ask, if they existed, what would they see? We can imagine observers swimming in the quark-gluon plasma after the Big Bang, or sitting inside a black hole’s event horizon, or outside our visible universe. The existence of the observer isn’t a physical requirement, but a methodological one: a restriction on how we can make useful, scientific statements about the world. Our theory doesn’t have to explain where observers “come from”, and can’t and shouldn’t do that. The observers aren’t part of the physical world being described, they’re a precondition for us to describe that world.

Is this the Copenhagen Interpretation? I’m not a historian, but I don’t think so. The impression I get is that there was no real Copenhagen Interpretation, that Bohr and Heisenberg, while more deeply interested in philosophy than many physicists today, didn’t actually think things through in enough depth to have a perspective you can name and argue with.

Is this QBism? I don’t think so. It aligns with some things QBists say, but they say a lot of silly things as well. It’s probably some kind of instrumentalism, for what that’s worth.

Is it logical positivism? I’ve been told logical positivists would argue that the world outside the visible universe does not exist. If that’s true, I’m not a logical positivist.

Is it pragmatism? Maybe? What I’ve seen of pragmatism definitely appeals to me, but I’ve seen my share of negative characterizations as well.

In the end, it’s an idea about what’s useful and what’s not, about what moves science forward and what doesn’t. It tries to avoid being preoccupied with unanswerable questions, and as much as possible to cash things out in testable statements. If I do this, what happens? What if I did that instead?

The results I covered for Quanta, to me, show that the observer matters on a deep level. That isn’t a physical statement, it isn’t a mystical statement. It’s a methodological statement: if we want to be scientists, we can’t give up on the observer.

Mandatory Dumb Acronyms

Sometimes, the world is silly for honest, happy reasons. And sometimes, it’s silly for reasons you never even considered.

Scientific projects often have acronyms, some of which are…clever, let’s say. Astronomers are famous for acronyms. Read this list, and you can find examples from 2D-FRUTTI and ABRACADABRA to WOMBAT and YORIC. Some of these aren’t even “really” acronyms, using letters other than the beginning of each word, multiple letters from a word, or both. (An egregious example from that list: VESTALE from “unVEil the darknesS of The gAlactic buLgE”.)

But here’s a pattern you’ve probably not noticed. I’d wager you’ll see more of these…clever…acronyms on projects in Europe, and in a wider range of fields, not just astronomy. The reason why is the European Research Council.

In the US, scientific grants are spread out among different government agencies. Typical grants are small, the kind of thing that lets a group share a postdoc every few years, with different types of grants covering projects of different scales.

The EU, instead, has the European Research Council, or ERC, with a flagship series of grants covering different career stages: Starting, Consolidator, and Advanced. Unlike most US grants, these are large (supporting multiple employees over several years), individual (awarded to a single principal investigator, not a collaboration) and general (the ERC uses the same framework across multiple fields, from physics to medicine to history).

That means there are a lot of medium-sized research projects in Europe that are funded by an ERC grant. And each of them is required to have an acronym.

Why? Who knows? “Acronym” is simply one of the un-skippable entries in the application forms, with a pre-set place of honor in their required grant proposal format. Nobody checks whether it’s a “real acronym”, so in practice it often isn’t, turning into some sort of catchy short name with “acronym vibes”. It, like everything else on these forms, is optimized to catch the attention of a committee of scientists who really would rather be doing something else, often discussed and refined by applicants’ mentors and sometimes even dedicated university staff.

So if you run into a scientist in Europe who proudly leads a group with a cutesy, vaguely acronym-adjacent name? And you keep running into these people?

It’s not a coincidence, and it’s not just scientists’ sense of humor. It’s the ERC.

Reminder to Physics Popularizers: “Discover” Is a Technical Term

When a word has both an everyday meaning and a technical meaning, it can cause no end of confusion.

I’ve written about this before using one of the most common examples, the word “model”, which means something quite different in the phrases “large language model”, “animal model for Alzheimer’s” and “model train”. And I’ve written about running into this kind of confusion at the beginning of my PhD, with the word “effective”.

But there is one example I see crop up again and again, even with otherwise skilled science communicators. It’s the word “discover”.

“Discover”, in physics, has a technical meaning. It’s a first-ever observation of something, with an associated standard of evidence. In this sense, the LHC discovered the Higgs boson in 2012, and LIGO discovered gravitational waves in 2015. And there are discoveries we can anticipate, like the cosmic neutrino background.

But of course, “discover” has a meaning in everyday English, too.

You probably think I’m going to say that “discover”, in everyday English, doesn’t have the same statistical standards it does in physics. That’s true of course, but it’s also pretty obvious, I don’t think it’s confusing anybody.

Rather, there is a much more important difference that physicists often forget: in everyday English, a discovery is a surprise.

“Discover”, a word arguably popularized by Columbus’s discovery of the Americas, is used pretty much exclusively to refer to learning about something you did not know about yet. It can be minor, like discovering a stick of gum you forgot, or dramatic, like discovering you’ve been transformed into a giant insect.

Now, as a scientist, you might say that everything that hasn’t yet been observed is unknown, ready for discovery. We didn’t know that the Higgs boson existed before the LHC, and we don’t know yet that there is a cosmic neutrino background.

But just because we don’t know something in a technical sense, doesn’t mean it’s surprising. And if something isn’t surprising at all, then in everyday, colloquial English, people don’t call it a discovery. You don’t “discover” that the store has milk today, even if they sometimes run out. You don’t “discover” that a movie is fun, if you went because you heard reviews claim it would be, even if the reviews might have been wrong. You don’t “discover” something you already expect.

At best, maybe you could “discover” something controversial. If you expect to find a lost city of gold, and everyone says you’re crazy, then fine, you can discover the lost city of gold. But if everyone agrees that there is probably a lost city of gold there? Then in everyday English, it would be very strange to say that you were the one who discovered it.

With this in mind, the way physicists use the word “discover” can cause a lot of confusion. It can make people think, as with gravitational waves, that a “discovery” is something totally new, that we weren’t pretty confident before LIGO that gravitational waves exist. And it can make people get jaded, and think physicists are overhyping, talking about “discovering” this or that particle physics fact because an experiment once again did exactly what it was expected to.

My recommendation? If you’re writing for the general public, use other words. The LHC “decisively detected” the Higgs boson. We expect to see “direct evidence” of the cosmic neutrino background. “Discover” has baggage, and should be used with care.

Explain/Teach/Advocate

Scientists have different goals when they communicate, leading to different styles, or registers, of communication. If you don’t notice what register a scientist is using, you might think they’re saying something they’re not. And if you notice someone using the wrong register for a situation, they may not actually be a scientist.

Sometimes, a scientist is trying to explain an idea to the general public. The point of these explanations is to give you appreciation and intuition for the science, not a detailed understanding. This register makes heavy use of metaphors, and sometimes also slogans. It should almost never be taken literally, and a contradiction between two different scientists’ explanations usually just means they are using incompatible metaphors for the same concept. Sometimes, scientists who do this a lot will comment on other metaphors you might have heard, referencing other slogans to help explain what those explanations miss. They do this knowing that they do, in the end, agree on the actual science: they’re just trying to give you another metaphor, one with a deeper intuition for a neglected part of the story.

Other times, scientists are trying to teach a student to be able to do something. Teaching can use metaphors or slogans as introductions, but quickly moves past them, because it wants to show the students something they can use: an equation, a diagram, a classification. If a scientist shows you any of these equations/diagrams/classifications without explaining what they mean, then you’re not the student they had in mind: they had designed their lesson for someone who already knew those things. Teaching may convey the kinds of appreciation and intuition that explanations for the general public do, but that goal gets much less emphasis. The main goal is for students with the appropriate background to learn to do something new.

Finally, sometimes scientists are trying to advocate for a scientific point. In this register, and only in this register, are they trying to convince people who don’t already trust them. This kind of communication can include metaphors and slogans as decoration, but the bulk will be filled with details, and those details should constitute evidence: they should be a structured argument, one that lays out, scientifically, why others should come to the same conclusion.

A piece that tries to address multiple audiences can move between registers in a clean way. But if the register jumps back and forth, or if the wrong register is being used for a task, that usually means trouble. That trouble can be simple boredom, like a scientist’s typical conference talk that can’t decide whether it just wants other scientists to appreciate the work, whether it wants to teach them enough to actually use it, or whether it needs to convince any skeptics. It can also be more sinister: a lot of crackpots write pieces that are ostensibly aimed at convincing other scientists, but are almost entirely metaphors and slogans, pieces good at tugging on the general public’s intuition without actually giving scientists anything meaningful to engage with.

If you’re writing, or speaking, know what register you need to use to do what you’re trying to do! And if you run into a piece that doesn’t make sense, consider that it might be in a different register than you thought.

Fear of the Dark, Physics Version

Happy Halloween! I’ve got a yearly tradition on this blog of talking about the spooky side of physics. This year, we’ll think about what happens…when you turn off the lights.

Over history, astronomy has given us larger and larger views of the universe. We started out thinking the planets, Sun, and Moon were human-like, just a short distance away. Measuring distances, we started to understand the size of the Earth, then the Sun, then realized how much farther still the stars were from us. Gradually, we came to understand that some of the stars were much farther away than others. Thinkers like Immanuel Kant speculated that “nebulae” were clouds of stars like our own Milky Way, and in the early 20th century better distance measurements confirmed it, showing that Andromeda was not a nearby cloud, but an entirely different galaxy. By the 1960’s, scientists had observed the universe’s cosmic microwave background, seeing as far out as it was possible to see.

But what if we stopped halfway?

Since the 1920’s, we’ve known the universe is expanding. Since the 1990’s, we’ve thought that that expansion is speeding up: faraway galaxies are getting farther and farther away from us. Space itself is expanding, carrying the galaxies apart…faster than light.

That ever-increasing speed has a consequence. It means that, eventually, each galaxy will fly beyond our view. One by one, the other galaxies will disappear, so far away that light will not have had enough time to reach us.

From our perspective, it will be as if the lights, one by one, were going out. Each faraway light, each cloudy blur that hides a whirl of worlds, will wink out. The sky will get darker and darker, until, to an astronomer of the distant future, the universe will appear a strangely limited place:

A single whirl of stars, in a deep, dark, void.


C. N. Yang, Dead at 103

I don’t usually do obituaries here, but sometimes I have something worth saying.

Chen Ning Yang, a towering figure in particle physics, died last week.

Picture from 1957, when he received his Nobel

I never met him. By the time I started my PhD at Stony Brook, Yang was long retired, and hadn’t visited the Yang Institute for Theoretical Physics in quite some time.

(Though there was still an office door, tucked behind the institute’s admin staff, that bore his name.)

The Nobel Prize doesn’t always honor the most important theoretical physicists. In order to get a Nobel Prize, you need to discover something that gets confirmed by experiment. Generally, it has to be a very crisp, clear statement about reality. New calculation methods and broader new understandings are on shakier ground, and theorists who propose them tend to be left out, or at best combined together into lists of partial prizes long after the fact.

Yang was lucky. With T. D. Lee, he had made that crisp, clear statement. He claimed that the laws of physics, counter to everyone’s expectations, are not the same when reflected in a mirror. Wu’s experiment confirmed the prediction in early 1957, and Lee and Yang got the prize later that same year.

That’s a huge, fundamental discovery about the natural world. But as a theorist, I don’t think that was Yang’s greatest accomplishment.

Yang contributed to other fields. Practicing theorists have seen his name strewn across concepts, formalisms, and theorems. I didn’t have space to talk about him in my article on integrability for Quanta Magazine, but only just barely: another paragraph or two, and he would have been there.

But his most influential contribution is something even more fundamental. And long-time readers of this blog should already know what it is.

Yang, along with Robert Mills, proposed Yang-Mills Theory.

There isn’t a Nobel prize for Yang-Mills theory. In 1954, when Yang and Mills proposed the theory, it was obviously wrong, a theory that couldn’t explain anything in the natural world, mercilessly mocked by famous bullshit opponent Wolfgang Pauli. Not even an ambitious idea that seemed outlandish (like plate tectonics), it was a theory with such an obvious missing piece that, for someone who prioritizes experiment like the Nobel committee does, it seemed pointless to consider.

All it had going for it was that it was a clear generalization, an obvious next step. If there are forces like electromagnetism, with one type of charge going from plus to minus, why not a theory with multiple, interacting types of charge?
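In equations (the standard textbook comparison, not anything specific to Yang and Mills’ original paper): electromagnetism builds its field strength from a single field $A_\mu$,

$$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,$$

while Yang-Mills theory has several fields $A^a_\mu$, one for each type of charge, and an extra term,

$$F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g\, f^{abc} A^b_\mu A^c_\nu.$$

That extra term is the “interacting” part: the charge-carrying fields push and pull on each other.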

Nothing about Yang-Mills theory was impossible, or contradictory. Mathematically, it was fine. It obeyed all the rules of quantum mechanics. It simply didn’t appear to match anything in the real world.

But, as theorists learn, nature doesn’t let a good idea go to waste.

Of the four fundamental forces of nature, as it would happen, half are Yang-Mills theories. Gravity is different; electromagnetism is simpler, and could be understood without Yang and Mills’ insights. But the weak nuclear force, that’s a Yang-Mills theory. It wasn’t obvious in 1954 because it wasn’t clear how the photon-like particles of Yang-Mills theory, massless in the equations, could end up with mass, and that wouldn’t become clear until the work of Peter Higgs over a decade later. And the strong nuclear force, that’s also a Yang-Mills theory, missed because such a strong force can “confine” its charges, hiding them away.

So Yang got a Nobel, not for understanding half of nature’s forces before anyone else had, but for a quirky question of symmetry.

In practice, Yang was known for all of this, and more. He was enormously influential. I’ve heard it claimed that he personally kept China from investing in a new particle collider, the strength of his reputation the most powerful force on that side of the debate, as he argued that a developing country like China should be investing in science with more short-term industrial impact, like condensed matter and atomic physics. I wonder if the debate will shift with his death, and what commitments the next Chinese five-year plan will make.

Ultimately, Yang is an example of what a theorist can be, a mix of solid work, counterintuitive realizations, and the thought-through generalizations that nature always seems to make use of in the end. If you’re not clear on what a theoretical physicist is, or what one can do, let Yang’s story be your guide.