The two surveys were conducted at similar times (Pew, Sept. 22-28, 2025; Barometer, July 3-August 1, 2025), and both are online nonprobability samples weighted to known population parameters. Barometer has a substantially larger sample (N=31,891, versus Pew’s 3,445), but Pew’s can hardly be called a small survey sample.
Of course, right-direction/wrong-direction is not the same as trust/mistrust, so it’s possible the public is carefully parsing the question texts, and lots of people think higher ed is going in the wrong direction but nevertheless trust it. That seems unlikely to me.
Barometer doesn’t seem to have published the question text, but it invites readers to contact the report’s authors. It seems, though, that Barometer asks about “universities and colleges,” while Pew asks about “higher education.” That’s potentially a big difference in priming, since “universities and colleges” may evoke attitudes about science, health care, and even college sports, while “higher education” may signal undergraduate education more narrowly. (Just my conjecture – I have no evidence for this.)
The other questions on the Pew survey do focus mostly on undergrad education (all the questions except one are about students; the remaining one is “Advancing research and innovation”), while Barometer asks about universities’ role in technology, science, economic growth, health care, democracy, and relations with other countries.
It’s tempting to just go with the Barometer survey since the findings are much more comforting. I suspect what’s really going on is that most Americans don’t have well-formed attitudes about higher ed (or universities and colleges) either way, which leaves them open to more significant framing and priming effects in differing surveys.
There’s a lot of talk about Americans’ declining trust in higher education, often citing Gallup’s “confidence” measure (yet another, distinct from “trust,” “right-direction,” and “approve”). Gallup finds a big partisan gap; Barometer does too, but with a higher baseline level of trust. And a lot of higher-ed reform discussions and proposals are built on the premise that this distrust is fixed and demonstrated.
I think Americans should trust higher ed. I think higher ed should try to be trustworthy. And I think higher ed should reform in a variety of ways. But the wide variation in findings in these surveys makes me think the reform should be based on a careful analysis of where we are doing research, education, and service well and where we are falling short rather than trying to follow public (dis)trust.
No, not really. And the reasons are pretty much the same as in that old Scatterplot post and the two articles that followed: Perrin, Caren, and Cohen 2013 and Cheng and Powell 2015.
As I understand it, the Multiverse Analysis approach Young and Cumberworth advocate is a way of computationally modeling the likelihood of various outcomes being true by simulating an expert debate over modeling and then harvesting all the possible outcomes. (BTW this seems very interesting to me but it’s not at all my field, so please feel free to correct my interpretation.) From the book’s introduction, the goal is to
reduce the discretion of authors to pick an exactly preferred model and result while expanding the range of models and results that any one author considers. The method involves specifying a set of plausible model ingredients (including possible controls, variable definitions, estimation commands, and standard error calculations) and estimating all possible combinations of those model ingredients. The principle is to use only vetted, credible model inputs, as any author would do when selecting a single estimate, but then report back every estimate that can be obtained from those inputs. It perturbates the model using a combinations algorithm while also reporting how much each modeling input (or assumption) matters for the results.
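To make the mechanics concrete, here is a minimal sketch of the combinatorial core of the approach in Python. It varies only the set of controls (a real multiverse would also vary variable definitions, estimators, and standard-error calculations), and the DataFrame and column names (df, y, treated, age, income, region) are hypothetical stand-ins, not anything from the book:

```python
# Minimal multiverse sketch: fit every model built from vetted
# ingredient choices, then report the full distribution of estimates.
# Assumes a pandas DataFrame `df` with columns y, treated, age,
# income, region (all hypothetical names for illustration).
from itertools import combinations

import pandas as pd
import statsmodels.formula.api as smf

controls = ["age", "income", "region"]  # vetted candidate controls

results = []
for k in range(len(controls) + 1):
    for subset in combinations(controls, k):
        formula = "y ~ treated" + "".join(f" + {c}" for c in subset)
        fit = smf.ols(formula, data=df).fit()
        results.append({
            "controls": subset,
            "estimate": fit.params["treated"],
            "p_value": fit.pvalues["treated"],
        })

multiverse = pd.DataFrame(results)
# Share of specifications that are negative and significant at p < .05,
# the kind of summary the chapter reports (e.g., its 76% figure).
share = ((multiverse["estimate"] < 0) & (multiverse["p_value"] < 0.05)).mean()
print(multiverse.sort_values("estimate"))
print(f"negative and significant: {share:.0%}")
```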
They chose several controversial studies to submit to this analytical strategy, including Jeremy’s favorite hurricane-names study. Controversial studies lend themselves well to the approach, not just because they’re interesting intrinsically but also because there are extant critiques in the literature that can be used to create the multiverse.
One of the chapters is based on the Regnerus study (Chapter 11, “Data Processing Multiverse Analysis of Regnerus and Critics”). Most of the analysis deals with the distinction between Regnerus’s emphasis on the status of parents as “same-sex” and critic Rosenfeld’s (2015) emphasis on family instability by focusing on transitions. The chapter finds basically that Regnerus’s point estimates are larger than warranted, but that nevertheless the large majority (76%) of models are “both negative and statistically significant.” Comparing the Regnerus approach to the Rosenfeld approach yields this graph:
[Figure: distributions of estimates from the Regnerus-style and Rosenfeld-style specifications]
in which both still yield negative results, but Rosenfeld’s less so.
So does this count as “vindication”? No. Because the core problems with the study were never the analytical strategy but rather the data themselves. As documented elsewhere, the NFSS used a screener question asking people whether a parent had ever had a romantic relationship with a member of the same sex. Everyone who said “yes” was included in the survey, in order to oversample respondents who reported such a relationship.
Not to revisit the “cheeto-eating” side-argument, but there are all sorts of reasons an online survey-taker might say yes to that question, and many of those reasons are not about the accuracy of the statement. Young and Cumberworth agree: “We see it as bad practice to prescreen a survey with the question ‘are [sic] either of your parents gay?’ and then include everyone who responds ‘yes.’”
The later questions are similarly suspect, with the effect that a significant number of those identified as having been “parented” by “same-sex parents” were almost certainly not. Cheng and Powell’s paper details this well on the data-cleaning side, but even if you take the respondents to be telling the truth, the thing measured by NFSS is not the thing most people think of as same-sex parenting.
What that means is that no number of alternative analyses in the multiverse can possibly discover an effect of same-sex parenting, because the data don’t contain any plausible measure of same-sex parenting.
Young and Cumberworth also chastise some of the critiques as overly focused on significance testing:
A focus on significance testing alone, to the exclusion of effect sizes, is bad statistical practice in any context (Gelman and Stern 2006). It is especially flawed here, as the critics provide new specifications that cut the sample size and greatly reduce the treatment group. … It is not a fair assessment of the data to report that significance levels fall after dropping as much as 44 percent of the treatment group: Of course statistical significance will be lower when the sample is smaller.
I think this is wrong. If “dropping” the cases were just a technical matter, they might have a case. But the rationale for dropping the cases is that they are not in fact cases in the treatment group. If I assign 100 people randomly to receive the COVID vaccine, but only 56 of them actually get the vaccine, it is the right decision to drop the other 44 cases (or, I suppose, assign them to the control arm of the study). The remaining 56% of the cases don’t provide sufficient information to confidently distinguish the point estimate from zero, which is what the significance “stars” mean. In other words: based on the 56% of the treatment group that we have some confidence actually received the treatment, we cannot conclude that the treatment mattered. That’s, well…. “no difference.”
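A toy simulation of the vaccine analogy, with invented numbers, may make the point clearer: mixing 44 never-treated cases into a “treatment” group of 100 dilutes the estimated effect, and dropping them recovers an honest, if noisier, comparison.

```python
# Toy illustration of the vaccine analogy: 100 people are labeled
# "treated," but only 56 actually received the treatment. Including
# the 44 mislabeled cases dilutes the estimated effect; dropping them
# gives an honest, noisier estimate. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.5

control = rng.normal(0.0, 1.0, 100)
actually_treated = rng.normal(true_effect, 1.0, 56)
mislabeled = rng.normal(0.0, 1.0, 44)  # labeled treated, never treated

labeled_treated = np.concatenate([actually_treated, mislabeled])

for name, group in [("all labeled cases", labeled_treated),
                    ("verified cases only", actually_treated)]:
    t, p = stats.ttest_ind(group, control)
    print(f"{name}: diff={group.mean() - control.mean():.2f}, p={p:.3f}")
```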
I think both “viewpoint” and “diversity” are bad ways of thinking about this problem, but I think the problem itself is quite real and quite worrisome. (I prefer the term “ideological pluralism,” which is still problematic but perhaps less so.) It’s important to define the scope of the problem: I don’t think it’s accurate to say academia is less ideologically diverse than other major institutions, but rather that its diversity is remarkably skewed relative to the space of politics in general, and I think that skew interferes with inquiry, particularly but not entirely in the social sciences and humanities.
I don’t consider myself part of any “movement” for viewpoint diversity, but I am more sympathetic to the concerns raised under that label than Lisa is; and indeed I am “thrilled to have [her] ideas in the mix.”
I think the article’s distinction between intrinsic and instrumental reasons for valuing intellectual diversity rests upon a questionable premise: that one or another “viewpoint” — a metaphor that I think is itself quite problematic — is “the best or most true.” I would argue that that’s not generally possible — even in the natural sciences, but certainly not in the humanities or social sciences. Certainly any one question could theoretically be resolved as best or true, though even that is pretty rare. But the idea that a package of tendencies and approaches — a “viewpoint” — is going to be proven better or true (or worse or false) strains credulity. (I also think that goal is rather totalitarian, in that it seeks univocality in a pluralistic society.) In the absence of that, it’s both intrinsically and instrumentally better to entertain a larger range of claims and arguments.
I agree with a point the article makes a couple of times, particularly in thesis 1: that valuing ideological pluralism is at odds with a “search for truth” purism. If the process of inquiry can be performed entirely separately from any political motivations or considerations, then there is no justification for seeking political/ideological breadth because it’s not relevant to the process of inquiry. In short: ideological pluralism is valuable only insofar as the process of inquiry can’t be separated entirely from ideologically-tinged foundations, so the “stick to objective science” goal is at odds with the ideological pluralism goal. (There may still be pedagogical justifications, but that’s outside the scope of what we’re talking about, I think.) And I do think it would be helpful for scholars, particularly in the humanities and social sciences, to labor under the discipline of making evidence-based arguments that are not bound to political positions — but as you know well, that itself is quite contested and even unpopular among many of our colleagues.
But even if that discipline were more widely supported, this unmotivated “search for truth” ideal strikes me as hopelessly naive and very much at odds with all we know about the social realities of inquiry across the disciplines. This is not because truth itself is impossible, but rather because its discovery is animated specifically by motivated debate and contest. New evidence isn’t just “uncovered,” but emerges because someone is investigating a hypothesis that is surprising or even heterodox; new methods don’t initiate paradigms on their own, they do so in the hands of someone(s) in contest with extant paradigms. I’d argue that in many (but not all) fields, the set of hypotheses (or arguments, claims) that get considered is constrained insofar as the scholars working with them share more sets of value propositions — “viewpoints” — than necessary. That’s why I think the article’s conclusion is wrong: “…those of us who want good ideas to win and bad ideas to lose should understand that viewpoint diversity… can only ensconce more bad ideas.”
Another way of saying this: what claims in the humanities and social sciences are true in the same way as “DNA’s structure is a double helix” is true? I suspect there are actually relatively few claims even in the natural sciences that are true in quite that same way; for example, “we have to close public schools for a year and a half to mitigate COVID risk” is a scientific claim that is less true than the DNA claim. And I certainly don’t think claims like “settler colonialism is always evil” or “inequality is the most important outcome to diagnose” can be said to have the same truth-status as the structure of DNA, even though both enjoy strong, almost axiomatic, support within relevant fields.
Theses 2-4 seem to me to rest on a foundational claim: that intellectual matters are entirely distinct from political ones, and that when disciplinary standards are “settled upon by the collective expertise of the discipline,” that settling-upon carries no political dimension. Again, this strikes me as wildly unlikely as an empirical matter, and probably simply impossible in many fields, given that so many of these standards are themselves political (or moral) on their face.
I think theses 5 and 7 are pretty similar – they amount to the claim that proponents of viewpoint diversity are lying about what they really want. Thesis 5 holds that they don’t want all diversity, just a particular kind; thesis 7, that they hold this preference for reasons of “bad faith.” That may be true of some proponents; to the extent that I am a proponent, it is not true of me, and I don’t believe it to be true of many of the others I’ve talked and worked with who count more as “proponents” than I do. More important, though, it’s not a verifiable question; the argument for viewpoint diversity needs to succeed or fail on its merits, not on whether you think the people making it are good or not. Similarly for thesis 6 and part of the opening: the fact that anti-university actors (including most prominently the Trump administration) have used the rhetoric of viewpoint diversity to attack universities, professors, and academic freedom is certainly true, but it is not an argument against viewpoint diversity itself.
There are, of course, very bad reasons for pursuing viewpoint diversity. One is because the Trump Administration wants it (or claims to). Another is because we anticipate it may insulate us from future attacks. But I believe the other reasons I offered above are valid and important.
(I do really wish people would actually spend more than five minutes with Foucault’s texts before caricaturing his oeuvre; to say that Foucault “reduced all Western societies to intricate and oppressive systems of social control” is to reveal a thorough lack of understanding of the theory.) But beyond that: it takes a certain amount of chutzpah to cite one’s own “study” in the WSJ and link, then, to an unpublished manuscript hosted on one’s own Google Drive. I admire the chutzpah, so I read the study carefully.
I also admire the empirical ambition: the discipline of determining what’s actually happening beyond the endless anecdotes that populate most of the “ideological diversity” debates. The study uses data from the Open Syllabus Project to examine what texts are taught alongside three important primary texts: Michelle Alexander’s The New Jim Crow; Edward Said’s Orientalism; and Judith Jarvis Thomson’s “A Defense of Abortion.” Each of these texts has widespread respect and important, respected criticism; to what extent do courses teaching them teach their most important critics?
The Open Syllabus Project is a great source of data for this. I can imagine some more systematic approaches that build on Shields et al.’s approach: for example, a network analysis evaluating ties, cliques, and betweenness centrality among readings and disciplines would allow for understanding the (lack of) ties between texts and their critics more generally.
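To sketch what I have in mind (with invented syllabus data, not Open Syllabus records): treat texts as nodes, link two texts whenever they share a syllabus, and look for bridging texts with high betweenness centrality.

```python
# Sketch of the co-assignment network idea: texts are nodes, and an
# edge links two texts whenever they appear on the same syllabus.
# Bridging texts (e.g., a critic assigned alongside the work it
# criticizes) would show up with high betweenness centrality.
# Syllabus data here are invented for illustration only.
import networkx as nx

syllabi = [
    {"Alexander", "Forman"},             # a course pairing text and critic
    {"Alexander", "Davis", "Wacquant"},
    {"Said", "Lewis"},
    {"Said", "Spivak", "Bhabha"},
]

G = nx.Graph()
for reading_list in syllabi:
    for a in reading_list:
        for b in reading_list:
            if a < b:
                G.add_edge(a, b)

betweenness = nx.betweenness_centrality(G)
for text, score in sorted(betweenness.items(), key=lambda kv: -kv[1]):
    print(f"{text}: {score:.2f}")
```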
Shields et al. give a few clues as to what they consider pedagogical goals: “intellectual friction” (23); “teach academic controversies in full” (48). In the main I tend to agree that these are important goals, for reasons they explain toward the end of the paper. However, it’s not clear to me what “in full” appropriately means across a wider field than the ones they examine here; and indeed the point about “inter-class diversity” (49) is very important: it’s not illegitimate to offer a course, say, in Marxist thought, or postcolonial or feminist theory, or conservative thought. Few economics courses teach Marxist or even heterodox economics alongside the ruling model. So while I think many, perhaps even most, courses ought to display the kind of internal diversity Shields et al. advocate, I do also think courses that focus on a particular strain of thought are worthwhile; there ought to be more of them across a broader ideological swath.
On to the specific analysis. The first case (of the Alexander book) is the strongest in my view; it’s a very controversial book, and it would be difficult for students to comprehend it without reference to the controversy it’s in the middle of. This case seems solid to me (though Du Bois’s book’s title is misstated twice): rarely do the most important critics of Alexander appear on syllabi alongside it; indeed, much more common are works that, broadly, agree with it. The biggest problem in the presentation of this case is that, as I signaled above, Foucault is caricatured (as is, later, all of postmodernism [34]). I think a “Foucault for conservatives” or “postmodernism for conservatives” course would be great fun – maybe I’ll teach it sometime. But to hold that postmodernism is only about privileging the “perspective of the story-teller” is to betray no serious engagement with the swath of theory and philosophy that goes by “postmodernism.”
The second case (of Said’s Orientalism) is weaker. Said, like Foucault above, is caricatured intellectually, mostly by way of personal experiences and statements outside the text itself (there is more of this treatment of others on page 25). But most importantly: the presumption in the paper is that courses teaching Orientalism are mostly about Israel/Palestine. That depends on a misreading of the book as being mostly about Israel/Palestine. It emphatically is not. On page 28 there’s a somewhat snide comment about its assignment in the humanities: “Presumably, scholars in these fields consider Said’s work of relevance because they believe orientalism infects western literature and art.” But the scholars need not believe that; the book makes and defends precisely that claim, which makes it substantively relevant to certain considerations of literature and art. So are “professors … also presenting as a scholarly consensus the broad social consensus against Israel on campus?” No, that claim is not warranted by the evidence, since Orientalism is not primarily an anti-Israel or anti-Zionist book, though of course it does contain those claims alongside a much broader historical/epistemological argument. Nevertheless, I do think the main claim is defensible: that, once again, Orientalism is mostly not taught alongside its detractors.
The third case (on abortion) strikes me as the weakest of the three. The top titles taught alongside the focal text (p 43) strongly suggest it’s not mostly taught as part of a discussion of abortion ethics at all — rather, it’s about philosophy or legal studies, so perhaps taught as an exercise in formal persuasion. Thus whether or not anti-abortion texts are taught is kind of tangential.
Zooming back out from the cases: I think Shields et al. are likely right about the case, and the empirical approach is quite laudable. But the core research questions ought to be something like: (1) when controversial texts are taught, how often are important texts that disagree with them taught? and (2) across the curriculum, do students encounter intellectually diverse texts in ways that encourage them to put the texts into “intellectual friction” with one another? These are great questions, and I suspect the answers are (1) “not that often, though sometimes” and (2) “not much.” But the analysis here, while strongly suggestive, doesn’t really answer those bigger questions. It’s possible that, for example, Orientalism is paired with many more critical pieces but no one rises to the top. A larger network analysis might be able to tease out those bridges and structural holes.
In my parts of sociology, the term “performative” entered into conversation through various theorists picking up on the work of J. L. Austin. In How to Do Things with Words, Austin (1955) argued that some phrases should be understood as a kind of action (“speech acts”) rather than a kind of claim. That is, for a statement like a judge’s sentencing of a convict (“I sentence you to one year in prison”) or an officiant at a wedding (“I now pronounce you married”) the relevant question is not “is this statement true or false?” but rather, “is this performance successful or failed?” Austin used the phrase “felicity conditions” to describe the circumstances that must hold for a performative utterance to succeed – the speaker must be authorized to make such statements in such time and place, etc.
This idea was really generative. Pierre Bourdieu picked up on it in some of his work, Michel Callon drew on it to understand the role of economics in contemporary markets (work that was extended by Donald MacKenzie and others), and most famously Judith Butler deployed “performativity” for their broader theorizing of gender in Gender Trouble. Building on Butler, Sara Ahmed later introduced the term “non-performative” to describe utterances that are not performative, that in fact (perhaps intentionally) summon into being nothing or the opposite of what they claim (as when a large organization proclaims “we are committed to diversity!” but the full extent of their valuing diversity is making this non-performative statement).
Somehow, over the past decade*, a separate meaning for performativity has emerged. In public discourse in the US, especially around topics like “virtue signalling” and expressive forms of social movement, corporate, or governmental action, “performative” has become an allegation: that’s just a performance. Merriam-Webster defines this second usage as follows: “made or done for show (as to bolster one’s own image or make a positive impression on others).” This definition of performative is nearly equivalent to Ahmed’s concept of non-performative! And thus we’ve arrived at a contranym.

So, for students out there who are used to seeing the term “performative” in its contemporary political meaning and who are now writing about social theorists who use the term in other ways: please be careful to define the term and signal what you mean by it in your paper. Your increasingly ancient professors will appreciate the signposts.
* The OED dates this usage to 1996, but I think it’s proliferated mostly in the last decade. In graduate school, I don’t really recall this confusion popping up, for example. I’d be curious if anyone has a better sense of the trajectory of the term and how much its takeoff is a function of e.g. BLM I, BLM II, #MeToo, etc.
Instruction tuning, and later chat tuning, are the methodological innovations that make some machine learning pass the Turing Test. An object that we can use imperative language with (“write me an email…”) seems to have captured imaginations in ways that other, frankly better, objects don’t.
I should back up a bit. Alan Turing’s imitation game, now famous as the ‘Turing Test,’ was proposed as a way of testing whether something is ‘intelligent.’ The game has three players: a woman, a man pretending to be a woman, and an interrogator attempting to tell which is which while corresponding with them only in text. Turing then replaces one of the people with a computer, which tries to fool the interrogator into believing it is a woman. If it does, it is intelligent.[1] This mirrors quite well current chatbot interactions: users correspond by text (and increasingly voice), and observe only their inputs (the words they write) and the outputs (the words generated by the model). Plenty of people are fooled, both in formal tests of the imitation game and in the pages of various news and academic publishing venues.
Given only input and output pairs, though, the inner workings of a system are underdetermined. Many totally different systems may produce the same output, and it would be a mistake to assume much about what is inside or how they work from simply observing the inputs and outputs, as in the imitation game. The Mechanical Turk, for example, secretly had a human inside the machine operating it, contrary to its purveyors’ claims that it was all mechanical. Much of current ‘AI’ is just a thin mask over humans’ manual labor, as well.
Plenty of chatbots really are purely mechanical at inference time. Still, this tells us little about their internal workings or fundamental nature. Consider calculators. We have a huge variety of tools and algorithms for arithmetic. A child dividing by 10 may repeatedly add 10 until the total goes over. Perhaps you learned to move the decimal one place left instead. Or you recall the answer from memorized times tables. Three totally different algorithms. Same answer. An abacus, a slide rule, and a 19th-century arithmometer all work differently inside, too, and each delivers the same outputs given the same inputs. Even a desktop solar calculator and your phone’s calculator app use different algorithms: they may both be digital, but the circuits in the machines are different, and so the internal processes that convert an input math problem to an output answer are different. If we play the imitation game with calculators, we cannot tell the NASA employee apart from the slide rule. Yet no one attributes general intelligence or consciousness to slide rules.[2]
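To make the calculator point concrete, here are three internally different procedures for dividing by ten, written out as a sketch; they disagree about everything except their outputs.

```python
# Three internally different procedures for dividing by ten that agree
# on their outputs, echoing the calculator point above.
def divide_by_ten_repeated_addition(n: int) -> int:
    """Count how many tens fit, the way a child might."""
    total, quotient = 0, 0
    while total + 10 <= n:
        total += 10
        quotient += 1
    return quotient

def divide_by_ten_decimal_shift(n: int) -> int:
    """Shift the decimal point one place left, then truncate."""
    return int(float(f"{n}e-1"))

def divide_by_ten_builtin(n: int) -> int:
    """Recall the answer from the machine's own 'times tables.'"""
    return n // 10

for n in (73, 140, 9):
    assert (divide_by_ten_repeated_addition(n)
            == divide_by_ten_decimal_shift(n)
            == divide_by_ten_builtin(n))
```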
The main difference is the interface. Chatbots accept imperative language. Earlier LLMs were more transparent about how they worked. A user provides some text, and the model continues the text by adding a sequence of probable next words. If you want it to write an Abraham Lincoln speech, you start it with the text “Four score and seven years ago…” and it will continue adding words one by one. This works great for computer code, since the beginning of a function is often just its name and a comment about what it does. Programmers know what they want and can write that part easily. The LLM then fills in plausible code to follow that beginning. We still see vestiges of this text-continuation setup in the latest chat LLMs, with system prompts like this:
This is a transcript of a conversation between a curious User and a helpful AI assistant. Continue the dialogue.
Assistant: Hello, how may I help you?
User:
The user then types something, extending the document. When they press send, the interface adds Assistant: to the end of the document, then passes it to an LLM for continuation. Interfaces typically also specify something like User: as a stopping string. Without it, the LLM would get to the end of the message from the assistant and keep going, writing out a response for the user, then one for the assistant, and on and on. Some interfaces have an ‘impersonate’ button to reverse this rule, so that the LLM adds text after User: and gets stopped when it writes Assistant:. There is nothing inherent about Assistant: at the start of a line that is bound to the LLM. In the roleplay use cases that have gathered increasing attention, the lines might start with Alice: and White Rabbit: instead. All of this infrastructure is hidden from normal users.
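A minimal sketch of what such an interface does behind the scenes (the generate function is a placeholder for any next-word-prediction model, not a real API):

```python
# How a chat interface reduces to text continuation: append
# "Assistant:", generate, and cut at the stop string. `generate`
# stands in for any next-word-prediction model; it is a placeholder.
def chat_turn(transcript: str, user_message: str, generate) -> str:
    transcript += f"User: {user_message}\nAssistant:"
    continuation = generate(transcript)
    # Without this stop string, the model would happily keep writing
    # the user's next message too.
    reply = continuation.split("User:")[0].strip()
    return reply
```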
Increasingly, it is hidden from advanced users programming in Python as well. OpenAI’s API is becoming something of a standard, even for independent open-source projects. It still has the old “completions” endpoint, where one can send it a block of text and let it continue from the end. The block could be formatted as a chat log (see above), a cake recipe, or anything else. But increasingly, OpenAI is pushing its “chat” endpoint. Here, users send a list of messages attributed to personas, and then some system at the other end decides how to stitch them together into a block of text that the LLM continues. This both obscures the process and cedes control. On Azure, for instance, the API only allows the model to continue text after the string Assistant:. You cannot rename it to write text for a Lewis Carroll character or to fill in for your character. This isn’t part of the LLM, but a restriction imposed by the interface between you and the LLM. You must use imperative language to get the desired behavior. (E.g., something hamfisted like “You are an assistant role playing as Alice from Through the Looking Glass. Write responses as if you are Alice. Never break character.”) It is little wonder, then, that academics working with chatbots complain “it did ___ even though we told it not to,” as if it were some misbehaving research assistant rather than a model of likely next words (fancy autocomplete). Increasingly, the only way to interact with commercial LLMs is to command them imperatively, obscuring their actual workings as text-continuation machines.
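For the record, here is roughly what the two styles look like with OpenAI’s current Python SDK; the model names are placeholders, and this is a sketch rather than a recommendation:

```python
# The same exchange through the two OpenAI-style endpoints. With the
# older completions endpoint you see and control the raw text block
# and the stop strings; with the chat endpoint the stitching is done
# for you, out of sight. (Model names here are placeholders.)
from openai import OpenAI

client = OpenAI()

# Completions: you hand over the literal document to be continued.
completion = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="User: What is pi?\nAssistant:",
    stop=["User:"],
)

# Chat: you hand over role-tagged messages; the provider decides how
# they become a block of text for the model to continue.
chat = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is pi?"}],
)
```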
It took effort to make imperative language work. Without instruction tuning, if one wants the value of pi, they would open with “The value of pi is…” and then have the model continue the text (“…3.14”). If one prompts such a model with an imperative (e.g. “tell me the value of pi…”) an LLM is likely to continue the imperative (e.g. “…out to the 10th digit”). The text after a command is likely more command. With some carefully selected training data, however, we’ve shown the models a different pattern to follow: after an imperative (“tell me the value of pi…”), instruction tuned models continue with an answer (“…the value of pi is 3.14”). This pattern is what allows chatbots and not other machines and models to pass the Turing Test.
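You can watch this pattern directly with a small base model that has had no instruction tuning, e.g. GPT-2 via the transformers library; the exact continuations vary run to run, but the imperative framing tends to yield more imperative text rather than an answer.

```python
# A base (non-instruction-tuned) model treats an imperative as text to
# be continued, not a request to be answered. With a small base model
# like GPT-2 you can watch this directly; outputs vary run to run.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Declarative framing: the likely next tokens are the answer itself.
print(generator("The value of pi is", max_new_tokens=8)[0]["generated_text"])

# Imperative framing: the likely next tokens are more imperative.
print(generator("Tell me the value of pi", max_new_tokens=8)[0]["generated_text"])
```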
Of course, an interface that accepts imperative language is not enough: Ask Jeeves, Wolfram Alpha, and Amazon Alexa didn’t get anthropomorphized the way chatbots do. I think the difference is scope. These systems had limited scope in a way that reminds us of clap-on, clap-off. Users make ritual invocations within a constrained set of the tool’s abilities. (“Alexa, set a timer for…”) With such tools, users are constantly aware of what they can and cannot do. The awareness of their limits lets us think “this is a tool for that.” The same goes for ‘AI’ image, song, and code generators.[3] Natural language feels much more general purpose than images, code, songs, math, house lights, etc.[4] Chatbots will continue text for and about anything,[5] and they allow a sense of conversational back-and-forth. Even clever people get tricked.[6]
Many of the arguments about LLMs seem to involve us talking past one another. Insistence that they are “just autocomplete” is demonstrably true, but often remains too abstract to persuade people. I have tried to be less abstract here. Meanwhile, most proponents at some point break down in frustration and say “just try it and you’ll see!” Their argument is phenomenological: Doesn’t it feel smart and capable? Don’t you feel like you’re getting value out of it? Working faster? This, too, is demonstrably true. Many people feel that way. The problem comes when we mistake the feeling of using a chatbot (writing this input and getting that output feels like talking to an intelligent person) for the actual inner mechanisms of it (next word prediction). As with calculators, many different internal mechanisms can produce indistinguishable output.
[1] An earlier version of this blog reversed the role of the man and woman in Turing’s original formulation. Turing also has a slightly different second formulation. The text here is now appropriate for both of Turing’s versions.
[2] There must be at least one philosopher who does. Academic philosophy and the internet have Rule 34 in common that way.
[3] Chatbots have some vestiges of these ritual invocations in discussions of ‘prompt engineering,’ but as chat tuning improves, users rub up against them less.
[4] With apologies this time to mathematicians and artists, who wish they were more popular than they are. Programmers may share this wish, but they will get no such apology.
[5] Sometimes the text is “as an AI language model, I cannot…”, but such refusals are themselves situation-appropriate text. And while users are mostly unaware of it, most refusals are not generated directly by the LLM but rather inserted by filtering systems that sit between the LLM and the user.
[6] An open question: when people have “AI” generate an image of, say, the Statue of Liberty, my sense is that all of them, down to a person, think of it as fictional. But this is generally not true when they ask it to generate a text description of the same statue. What is it about text as a medium?
Faculty From 14 Universities Join Forces to Call on Administrative Bodies to Stand Up to Attacks on Higher Education
Nearly 5,100 Faculty From 14 Universities Call on University Administrations to Stand Up to Attacks on Democratic Principles
Cambridge, MA – April 17th, 2025 – Faculty from fourteen universities across the United States signed and released letters calling for their respective university’s administrative bodies to stand up to the Trump administration’s attacks on academic freedom, freedom of inquiry, and other democratic principles. Nearly 5,100 faculty have collectively signed these letters on what the American Association of University Professors has called a Day of Action for Higher Ed. This is a collaborative effort independently organized by passionate faculty across four universities – Professor Ryan Enos from Harvard University; Professor Gerry Leonard from Boston University; Professor Brian Cleary from Boston University; Professor Daniel Laurison from Swarthmore College; and Professor Dan Hirschman from Cornell University – to encourage leadership at higher education institutions to stand up and fight back against the anti-democratic attacks of the federal government.
Following the extraordinary sign of leadership from Harvard University on April 14, 2025, these faculty are speaking out to encourage their own institutions to stand up and fight back collectively against the illegal and unprecedented actions of the federal government in three key ways:
- Condemn the attacks on higher education, even if your own institution has not yet been singled out.
- Legally contest and refuse to comply with unlawful demands from the federal government.
- Work with other universities to mount a coordinated opposition of administrators, faculty, and alumni to combat these anti-democratic attacks.
“When my colleagues and I first penned a letter to Harvard’s leadership asking them to take a stand for higher education in the face of Trump’s attacks, we didn’t know what to expect,” said Ryan Enos, Professor of Government at Harvard University. “But we received an outpouring of support from faculty within Harvard. Soon after we heard from faculty across the country who wanted their schools to also stand up for America’s colleges and universities. These letters are the result of those voices. The message is clear: faculty are saying now is the time to stand up.”
“Universities have been a place of liberation for me and my students all of my adult life. Of all the people in universities, only faculty remain year after year to see up close the profound benefits that higher education offers to students and communities,” said Gerry Leonard, Professor of Law at Boston University. “So when universities came under attack from the Trump Administration, I couldn’t see that we really had any choice but to join together and defend what’s right in any way we could. If faculty wouldn’t do it, who would?”
The individual letters from each participating faculty group can be viewed here. To learn more about this united action to defend higher education, including how your college or university can stand up and fight back, please visit www.wearehighered.org or email us at campusletters2025@gmail.com to join the movement.
Media Contact:
Dena Enos
Founder, StrongHouse
(909) 228-8030
The direct targets of the order are the Smithsonian Museums in DC and how they portray American history. The order attacks both diversity and trans people (among other things). (Aside: the President is not directly in charge of the Smithsonian, so the real effect of this order – like that of many of Trump’s executive orders – remains to be seen.) One less-reported aspect of the order, but one that’s frightening for what it signals about the arguments the administration is prepared to advance, is that the order also argues for scientific racism (sometimes called biological racism or racial realism).* Here’s the relevant text, from a criticism of a specific exhibit:
For example, the Smithsonian American Art Museum today features “The Shape of Power: Stories of Race and American Sculpture,” an exhibit representing that “[s]ocieties including the United States have used race to establish and maintain systems of power, privilege, and disenfranchisement.” The exhibit further claims that “sculpture has been a powerful tool in promoting scientific racism” and promotes the view that race is not a biological reality but a social construct, stating “Race is a human invention.”
To be clear: race is a social construct, and not a “biological reality.” This is the consensus position of anthropologists, biologists, historians, sociologists, etc.**
Much as the anti-trans executive orders (falsely) asserted that biological sex was dichotomous and thus that there were only two sexes***, this executive order contributes to a larger project of promoting scientific racism (the discredited idea that socially-defined races are biologically distinct, and that those biological distinctions explain and justify existing racial inequality). This belief is core to most white supremacists, including the ones in the White House. Out of Hiding, indeed.
*H/T to Philip Cohen on Bluesky who noted this text.
** That it’s the consensus position among scholars isn’t the full story, of course. There are plenty of scholars hawking scientific racism, from discredited IQ research to the latest genomic arguments. And there has long been substantial funding and support to promote this work. See, among many others, research by Dorothy Roberts, Ann Morning, Joan Fujimura, Emily Merchant, etc.
*** The oral arguments in which Justice Department lawyers try to defend this position before a judge who has actually read something about intersex folks are worth a read.
Speaking Out for Democracy and US Higher Education
To add your name to this statement, go to https://bit.ly/DemocracyAndHigherEdSign
We publicly affirm our commitment to the enterprise of higher education in a democratic and free society, and to the values and practices that facilitate the production, advancement, and sharing of knowledge. Given the continuous and escalating attacks on higher education along with many other pillars of American democracy by the Trump administration and its allies, we call on colleges and universities to protect these values.
We affirm that:
- The democratic ideals of free thought, free speech, free association, freedom of assembly and the right to dissent are worth fighting for. Democracy both honors our dignity as individuals and enables collective action on behalf of the common good.
- Education is a fundamental pillar of a democratic society. People come from all over the world to take part in the free exchange of ideas and the depth of knowledge and expertise found in US colleges and universities. The capacity and tools these institutions provide to think carefully and deeply about politics, society, and the built and natural worlds produce scholars and world citizens whose contributions benefit us all. The value of American education has long been a consensus position across parties and ideologies; both Democratic and Republican administrations have supported our system of higher education.
- Diversity is essential. Democracy requires that we invest fully in the rich array of our differences. We affirm the fundamental dignity and value of each person of every race, ethnicity, national origin, class, gender, sexual orientation, disability status, legal status, religion, identity, ideology and viewpoint. Bringing together people with different experiences, talents, and perspectives is critical to a successful learning environment and ultimately benefits society as a whole.
- Education, knowledge, and science are intrinsically worthwhile. They improve both individual lives and the collective well-being of a democratic society. Cutting funding risks inflicting lasting damage on scholarly inquiry, from work in the arts to social policy to life-saving medical research and care.
- Academic freedom is necessary to the pursuit of knowledge. Research must be conducted free from political threat if it is to identify and develop ideas serving the human race. These ideas, turned into action, are critical elements of any functioning society, including the rule of law, medical care, and scientific advancement.
- No amount of accommodation or compliance will protect us. The current attacks on higher education amount to an assault on the foundational principles of democracy. If we abandon our commitments to equality, pluralism, and free scholarly inquiry we turn our backs on the most essential ingredients to our democracy: reflecting on our past, pooling our present talents, and investing in our future.
As scholars, educators, and people who care about our students and our democracy, we believe it is our duty to speak out against the attacks on diversity and pluralism, on scholarship and learning, on academic freedom, and on democracy itself. We are doing so through this statement, and will continue to do so on our campuses and beyond.
We urge the leaders of America’s colleges and universities, and every American who believes in democracy and education, to stand up for the values we share.
We call on college and university leadership to refuse to comply with the unethical, irresponsible and frequently illegal demands of the Trump administration; to join together to speak out in defense of the values of academic freedom, scholarship and research; to protect their students and faculty from government reprisals; and to fight attacks on our institutions in the public sphere and the legal arena.
SUBMIT YOUR PRÉCIS HERE
SUBMISSION DEADLINE: March 21, 2025, 11:59pm Eastern Time
The 19th Junior Theorists Symposium (JTS) is now open to new submissions. The JTS is a conference featuring the work of emerging sociologists engaged in theoretical work, broadly defined. Sponsored in part by the Theory Section of the ASA, the conference has provided a platform for the work of early-career sociologists since 2005. We especially welcome submissions that broaden the practice of theory beyond its traditional themes, topics, and disciplinary function.
The symposium will be held as an in-person event on Friday, August 8 prior to the 2025 ASA Annual Meeting in Chicago.
This year’s discussants are:
Guillermina Altomonte (New York University)
Oluwakemi Balogun (University of Oregon)
Barbara Kiviat (Stanford University)
Jonah Stuart Brundage (University of Michigan), winner of the 2024 Junior Theorist Award, will deliver a keynote address.
Finally, the symposium will include an after-panel titled “The Potential of Public Theorizing.”
We invite all ABD graduate students, recent PhDs, postdocs, and assistant professors who received their PhDs from 2021 onwards to submit a précis of up to three pages (800-1,000 words). The précis should state the key theoretical contribution of the paper and give a general outline of the argument.
Successful précis from last year’s symposium can be viewed here.
Please note that the précis must be for a paper that is not under review or forthcoming at a journal.
As in previous years, there is no pre-specified theme for the conference. Papers will be grouped into sessions based on emergent themes and discussants’ areas of interest and expertise. We invite submissions from all substantive areas of sociology, and we encourage papers that are works-in-progress and would benefit from the discussions at JTS.
Please remove all identifying information from your précis and submit it via the Google form.
This year’s symposium is organized by Yunhan Wen (Princeton University), Xuewen (Shelley) Yan (University of Texas at Austin), Lauren Clingan (Princeton University), and Mira Vale (University of Michigan). You can contact them at juniortheorists@gmail.com with any questions.
By early April, we will extend 9 invitations to present at JTS 2025, with each presenter matched to one of our 3 discussants. Please plan to share a full paper by July 11, 2025.
If you have any issues uploading your document, please send a copy of your précis with all identifying information removed to juniortheorists@gmail.com. Please include your name and affiliation (University and Department) in the body of the email.