They couldn’t explain their reasoning, though, and just tried to get us all to sign up and start harvesting coins, which at the time, probably 2012, was kind of easy. But I wouldn’t do it, on the principle that it was a useless thing of no intrinsic value, and a huge waste of energy to boot. Come back when you can get rewarded for saving energy instead of wasting it, I remember saying.
I looked at the news this morning and realized that what I was interested in back then, and why I joined Occupy in the first place, has come full circle to crypto.
Namely, the idea that the government (at the time, the Obama administration) was propping up a market (the mortgage securities specifically and the stock market more generally) for political reasons, which meant that the folks who should have been out of their jobs and possibly charged with crimes were getting off with their multimillion dollar year-end bonuses.
It seemed outrageous that nobody was losing their shirt except for the victims of the whole scam, because it was political suicide to allow the markets to collapse. And this was especially true because so many people had been convinced to invest their retirement in the stock market. As pensions were replaced by stock portfolios, it became a requirement to never allow stocks to dip too low for too long. And the result was truly unnatural and perverse.
I wasn’t the only one to be outraged, of course. The Tea Party was founded on the notion that the borrowers for these loans – especially folks of color, for some mysterious reason – were somehow to blame.
Well, here we are, back at the apex of a bubble, this time in cryptocurrency instead of weird hyper-inflated mortgage securities. But this time it’s not just that “Americans won’t be happy” if the bubble bursts, because they were convinced to put their retirement into that bubble. This time it’s Trump and his actual family who are so heavily invested in crypto that they personally stand to lose billions of dollars if and when the bubble bursts.
Which leads me to wonder, what will they end up doing to prevent this particular bubble from bursting? I don’t know, but I’m guessing it will be really gross.
It makes a strong case that, while the hype men are over-hyping the new technology, the critics are too dismissive. Wong quotes Emily Bender and Alex Hanna’s new book The AI Con as describing it as “a racist pile of linear algebra”.
Full disclosure: about a week before their title was announced, which is like a year and a half ago, I was thinking of writing a book similar in theme, and I even had a title in mind, which was “The AI Con”! So I get it. And to be clear I haven’t read Bender and Hanna’s entire book, so it’s possible they do not actually dismiss it.
And yet, I think Wong has a point. AI is not going away, it’s real, it’s replacing people at their jobs, and we have to grapple with it seriously.
Wong goes on to describe the escalating war, sometimes between Gary Marcus and the true believers. The point is, Wong argues, they are arguing about the wrong thing.
Critical line here: Who cares if AI “thinks” like a person if it’s better than you at your job?
What’s a better way to think about this? Wong has two important lines towards answering this question.
Ignoring the chatbot era or insisting that the technology is useless distracts from more nuanced discussions about its effects on employment, the environment, education, personal relationships, and more.
Automation is responsible for at least half of the nation’s growing wage gap over the past 40 years, according to one economist.
I’m with Wong here. Let’s take it seriously, but not pretend it’s the answer to anyone’s dreams, except the people for whom it’s making billions of dollars. Like any technological tool, it’s going to make our lives different but not necessarily better, depending on the context. And given how many contexts AI is creeping into, there are a ton of ways to think about it. Let’s focus our critical minds on those contexts.
In it, the authors pit Large Language Models (LLMs) against Large Reasoning Models (LRMs) (these are essentially LLMs that have been fine-tuned to provide reasoning in steps) and notice that, for dumb things, the LLMs are better, then for moderately complex things, the LRMs are better, and then, when you get sufficiently complex, they both fail.
This seems pretty obvious, from a pure thought experiment perspective: why would we think that LRMs are better no matter what complexity? It stands to reason that, at some point, the questions get too hard and they cannot answer them, especially if the solutions are not somewhere on the internet.
But the example they used – or at least one of them – made me consider the possibility that their experiments were showing something even more interesting, and disappointing, than they realized.
Basically, they asked lots of versions of LLMs and LRMs to solve the Tower of Hanoi puzzle for n discs, where n got bigger. They noticed that all of them failed when n got to be 10 or larger.
They also did other experiments with other games, but I’m going to focus on the Tower of Hanoi.
Why? Because it happens to be the first puzzle I ever “got” as a young mathematician. I must have been given one of these puzzles as a present or something when I was like 8 years old, and I remember figuring out how to solve it and I remember proving that it took 2^n-1 moves to do it in general, for n discs.
It’s not just me! This is one of the most famous and easiest math puzzles of all time! There must be thousands of math nerds who have blogged at one time or another about this very topic. Moreover, the way to solve it for n+1 discs is insanely easy if you know how to solve it for n discs, which is to say the solution is recursive.
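To make that concrete, here is a minimal sketch of the classic recursive solution (my own illustration in JavaScript, not code from the paper), which is also where the 2^n-1 move count comes from:

// Classic recursive Tower of Hanoi: to move n discs from one peg to another,
// move the top n-1 discs to the spare peg, move the biggest disc, then move
// the n-1 discs back on top of it.
function hanoi(n, from, to, spare, moves = []) {
  if (n === 0) return moves;
  hanoi(n - 1, from, spare, to, moves); // clear the top n-1 discs out of the way
  moves.push([from, to]);               // move the biggest disc
  hanoi(n - 1, spare, to, from, moves); // put the n-1 discs back on top
  return moves;
}

// Sanity check: the number of moves is 2^n - 1 for any n.
console.log(hanoi(10, 'A', 'C', 'B').length); // 1023 = 2^10 - 1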
Another way of saying this is that it’s actually not harder, or more complex, to solve this for 10 discs than it is for 9 discs.
Which is another way of saying, the LRMs really do not understand all of those blogposts they’ve been trained on explicitly, and thus have not successfully been shown to “think” at all.
And yet, this paper, even though it’s a critique of the status quo thinking around LRMs and LLMs and the way they get trained and the way they get tested, still falls prey to the most embarrassing mistake, namely adopting the pseudo-scientific marketing language of Silicon Valley, wherein the models are considered to be “thinking”.
There’s no real mathematical thinking going on here, because there’s no “aha” moment when the model actually understands the thousands of explanations of proofs of how to solve the Tower of Hanoi that it’s been trained on. To test that I talked to my 16-year-old son this morning before school. It took him about a minute to get the lay of the land and another two minutes to figure out the iterative solution. After that he knew exactly how to solve the puzzle for any n. That’s what an “aha” moment looks like.
And by the way, the paper also describes the fact that one reason LRMs are not as good at simple problems as LLMs is that they tend to locate the correct answer, and then keep working and finally output a more complicated, wrong answer. That’s another indication that they do not actually understand anything.
In conclusion, let’s not call these things thinking. They are not. They are, as always, predicting the next word in the blog post of someone writing about the Tower of Hanoi.
One last point, which is more of a political positioning issue. Sam Altman has been known to say he doesn’t worry about global climate change because, once the AI becomes superhumanly intelligent, we will just ask it how to solve climate change. I hope this kind of rhetoric is exposed once and for all as a money and power grab and nothing else. If AI cannot understand a problem as simple, mathematical, and sanitized as the Tower of Hanoi for n discs, it definitely cannot help us out of an enormously messy human quagmire that will pit different stakeholder groups against each other and cause unavoidable harm.
That’s not to say there’s nothing to be concerned about. I worry about kids and other vulnerable people spending too much time with bots, and there have been quite a few alarming ideas put forth by healthcare insurers to deploy AI as a way of saving money. But even there I feel like there will be cautious uptake on some of this, and mistakes will be noted, and will lead to lawsuits.
But this morning I read this WaPo article about DOGE’s planned 83,000 job cuts to the VA, and in particular this line:
“Doctor administration work is important and not replaceable by AI,” the provider said in response to concerns that this administration has encouraged the use of artificial intelligence to replace work done by humans.
So, I feel like we now see the plan for what it is: remove critically important VA staff and replace them with AI, presumably as a way of forcing the federal government to become a long-term client for Silicon Valley’s latest product. And presumably with no oversight, judging from the 10-year AI regulation moratorium that these same guys are pushing.
Yikes!
And third, that the world might actually end, and all humanity might actually die by 2027 (or, if we’re lucky, 2028!) because autonomous AI agents will take things over and kill us.
So, putting this all together, how about we don’t?
Note that I don’t buy any of these narratives. AI isn’t that good at stuff (because it just isn’t), it should definitely *not* be given control over things like our checkbooks and credit cards (because duh) and AI is definitely not conscious, will not be conscious, and will not work to kill humanity any more than our smart toasters that sense when our toast is done.
This is all propaganda, pointing in one direction, which is to make us feel like AI is inevitable, we will not have a future without it, and we might as well work with it rather than against it. Otherwise nobody graduating from college will ever find employment! It’s scare tactics.
I have another plan: let’s not cede control to problematic, error-ridden AI in the first place. Then it can’t destroy our lives by being taken over by hackers or just buying stuff we absolutely don’t want. It’s also just better to be mindful and deliberate when we shop anyway. And yes, let’s check the details of those law briefs being written up by AI, I’m guessing they aren’t good. And let’s not assume AI can take over things like accounting, because again, that’s too much damn power. Wake up, people! This is not a good idea.
- make fun of “MILF”s who use weight-loss drugs to obsessively lose a small amount of weight,
- extend sympathy to actually fat people who cannot afford the weight-loss drugs when they could really benefit from them,
- poke fun at the “body positivity” movement,
- truly villainize the health care industry, and in particular the way it tortures people that try to navigate the system, as well as
- the companies producing weight-loss drugs, which now make ginormous profits off of stuff that’s actually easy to manufacture, and finally
- address the tired notion of “will power” and the idea that it should be sufficient as a replacement for real solutions to the problem of obesity.
I recommend it! In particular, I’ve read tons of coverage about weight-loss drugs in the past few years by journalists, and it’s mostly garbage and rarely gets at the underlying issues.
There’s something called the AI Futures Project. It’s a series of blog posts about trying to predict various aspects of how soon AI is going to be just incredible. For example, here’s a graph of different models for how long it will take until AI can code like a superhuman:

Doesn’t this remind you of the models of COVID deaths that people felt compelled to build and draw? They were all entirely wrong and misleading. I think we did them to have a sense of control in a panicky situation.
Here’s another blogpost of the same project, published earlier this month, this time imagining a hypothetical LLM called OpenBrain, and what it’s doing by the end of this year, 2025:
… OpenBrain’s alignment team is careful enough to wonder whether these victories are deep or shallow. Does the fully-trained model have some kind of robust commitment to always being honest? Or will this fall apart in some future situation, e.g. because it’s learned honesty as an instrumental goal instead of a terminal goal? Or has it just learned to be honest about the sorts of things the evaluation process can check? Could it be lying to itself sometimes, as humans do? A conclusive answer to these questions would require mechanistic interpretability—essentially the ability to look at an AI’s internals and read its mind. Alas, interpretability techniques are not yet advanced enough for this.
The wording above makes me roll my eyes, for three reasons.
First, there is no notion of truth in an LLM; it’s just predicting the next word based on patterns in the training data (think: Reddit). So it definitely doesn’t have a sense of honesty or dishonesty, which makes that a nonsensical question, and they should know better. I mean, look at their credentials!
Second, the words they use to talk about how it’s hard to know if it’s lying or telling the truth betray the belief that there is a consciousness in there somehow but we don’t have the technology yet to read its mind: “interpretability techniques are not yet advanced enough for this.” Um, what? Like we should try harder to summon up fake evidence of consciousness (more on that in further posts)?
Thirdly, we have the actual philosophical problem that *we* don’t even know when we are lying, even when we are conscious! I mean, people! Can you even imagine having taken an actual philosophy class? Or were you too busy studying STEM?
To summarize:
Can it be lying to itself? No, because it has no consciousness.
But if it did, then for sure it could be lying to itself or to us, because we could be lying to ourselves or to each other at any moment! Like, right now, when we project consciousness onto the algorithm we just built with Reddit training data!
Because, and I know I’m not alone in saying this, enough is enough. It’s time for the next version of Occupy. Things have gotten way WAY worse, and not better at all, in terms of the power of the very rich dictating to the rest of society since the original Occupy.
And when I see media coverage of them speaking in places like Montana and Utah and Idaho, I’m thinking to myself, about fucking TIME some folks on the left talk to the people of these states about the way things are actually working against the working person.
So it came as a big surprise when I heard some folks on MSNBC talking about this. One of them was a TV journalist who just interviews people all the time about politics, and the other represented the Democratic party. I don’t remember their names and so I can’t find the clip, but it went something like this:
Journalist: why are Bernie and AOC in Utah? Isn’t that a super red state? They have no chance to help elect a Dem! There’s not even a viable candidate nor an election!
DNC rep: Well, my theory is that they are there to get media coverage, and after all we are talking about them.
Journo: Oh, that makes sense.
This, to me, is a great illustration of one of the many things that the Democrats have really really wrong. At some point in the distant past, they stopped thinking about voters. Instead they started looking at numbers, and polls, and focusing very narrowly on incremental elections and surgical strikes into purple states.
In other words, I don’t think it occurred to either of these people that Bernie and AOC are actually there to talk to actual people with actual problems, and to try to persuade those folks by paying actual attention to them and their problems, even though they are not going to be living in a blue state tomorrow.
This blindness to how people actually are makes the poll-watchers miss what really matters in terms of changing people’s minds. And it’s a widespread illness for so many folks you see on TV. They literally don’t see the point of talking to people who live in red states. And that’s why the red states get deeper red and will continue to do so if those folks are in charge. Yeesh.
Anyhoo, I got an early flight home, which meant I was in an Uber at around 5:15am on the Bay Bridge on my way to SFO.
And do you know what I saw? Approximately 35 lit billboards, and every single one of them was advertising some narrowly defined, or ludicrously broadly defined, or sometimes downright undefined AI.
I then noticed that every single ad *AT* the airport was also advertising AI. And then I noticed the same thing at Boston Logan Airport when I arrived home.
It’s almost like VC money has been poured into all of these startups with the single directive to go build some AI and then figure out if they can sell it, and now there’s a shitton of useless (or, as Tressie McMillan Cottom describes it, deeply “mid”) AI products that were incredibly expensive to build and that nobody fucking cares about at all.
Then again, maybe I’m wrong!? Maybe this stuff works great and I’m just missing something?
Take a look at these numbers from the American Prospect (hat tip Sherry Wong):
- In 2023, 71 percent of the total gains in the S&P 500 were attributable to the “Magnificent Seven”—Apple, Nvidia, Tesla, Alphabet, Meta, Amazon, and Microsoft.
- Microsoft, Alphabet, Amazon, and Meta combined for $246 billion of capital expenditure in 2024 to support the AI build-out.
- Goldman Sachs expects Big Tech to spend over $1 trillion on chips and data centers to power AI over the next five years.
- OpenAI, the current market leader, expects to lose $5 billion this year, and its annual losses to swell to $11 billion by 2026.
- OpenAI loses $2 for every $1 it makes.
So, um, I think this is going to be bad. And it might be blamed on Trump’s tariffs! Ironic.
https://sanderoneilclock.tiiny.site/
This is a follow-up to this post from 2015: https://mathbabe.org/2015/03/12/earths-aphelion-and-perihelion/
Also try other experiments on my website:
https://mroneilportfolio.weebly.com/analemma.html
This JavaScript program is a clock/map/single-star star chart in the style of Geochron.
Let’s look at the elements of the site.

The info here is your location and the time. These should be pulled directly from your computer and your IP address; the location isn’t exact, for security reasons, but it should point to the nearest city or wherever your Wi-Fi is routed through. The day and time of day are represented in numbers but rewritten into something understandable in the box below.

This is a star chart, but just for the sun.
This is a polar graph where distance from the center is the angular distance from straight up (the zenith): the light blue circle represents points in the sky above the horizon, and dark blue represents points below the horizon. Keen thinkers will already realize that this means the outside circle on this graph is actually the point in the sky that represents straight down (or straight up for people at your antipode). There is a small code sketch of this mapping after the list of chart elements below.
The yellow circle is the sun’s current position in the sky.
The white squares represent where the sun will be in the sky over the next 24 hours, one square per hour.
The white line is where the sun will be every other day at this time for the next 365 days (on my chart, 12:47 pm).
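Here is the promised sketch of that polar mapping: the chart radius encodes angular distance from the zenith and the direction encodes azimuth. The function name and the north-up, set-on-the-ground orientation are my own assumptions, not the site’s code.

// Map a sky position (altitude, azimuth) onto the polar sun chart described above.
function skyToChart(altitudeDeg, azimuthDeg, chartRadius) {
  const zenithDistance = 90 - altitudeDeg;        // 0 = straight up, 90 = horizon, 180 = straight down
  const r = (zenithDistance / 180) * chartRadius; // so the horizon circle sits halfway out
  const theta = (azimuthDeg * Math.PI) / 180;     // azimuth measured clockwise from north
  // Offsets from the chart center, with +y pointing down the screen (canvas convention),
  // so north lands at the top and east to the right ("set the chart on the ground" view).
  return { x: r * Math.sin(theta), y: -r * Math.cos(theta) };
}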

On this equirectangular projection of the earth, each pixel moved up/down or left/right represents the same change in angle in latitude or longitude, respectively. (A small code sketch of this mapping appears after the list of map elements below.)
The yellow circle represents the point on earth that is closest to, and pointing most directly at, the sun.
The red/black line represents the sunset/sunrise line: on the black side the sun cannot be seen, and on the red side it can. As can be seen from this line, the sun has just risen in Hawaii and is soon to set over Greece.
The red horizontal line is your latitude.
The white vertical line is your longitude.
The yellow squares represent where the sun will be in the sky over the next 24 hours, one square per hour.
The white line is where the sun will be every other day at this time for the next 365 days.
The yellow and orange curves give some sense of where the sun is currently warming the earth most directly.
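For anyone who wants to reproduce the map panel, here is a minimal, self-contained sketch of the two calculations described above: the equirectangular pixel mapping and the day/night test against the subsolar point. The function names and image size are my own, not the site’s, and the day/night test ignores atmospheric refraction.

// Equirectangular projection: pixel position is linear in longitude (x) and latitude (y).
function lonLatToPixel(lonDeg, latDeg, width, height) {
  const x = ((lonDeg + 180) / 360) * width;  // -180..180 degrees -> 0..width
  const y = ((90 - latDeg) / 180) * height;  //  90..-90 degrees -> 0..height (north at the top)
  return { x, y };
}

// Day/night test: a point sees the sun when its angular distance from the
// subsolar point (the yellow circle on the map) is less than 90 degrees.
function isDaylit(lonDeg, latDeg, sunLonDeg, sunLatDeg) {
  const rad = Math.PI / 180;
  const cosAngle =
    Math.sin(latDeg * rad) * Math.sin(sunLatDeg * rad) +
    Math.cos(latDeg * rad) * Math.cos(sunLatDeg * rad) * Math.cos((lonDeg - sunLonDeg) * rad);
  return cosAngle > 0; // red side of the sunrise/sunset line; black side otherwise
}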
Principles
The way I have actually gotten these positions is basically the most complicated possible method.
// Earth's orbit modeled as an ellipse. f1 and f2 are the two foci and a, b, c the
// usual ellipse parameters (the numbers look like 1 AU measured in Earth radii).
// vec, vec3, rotationMatrix, and normalize are helpers defined elsewhere in
// clock.html; math.* comes from the math.js library.
const f1 = vec(-783.79 / 2, 0);
const f2 = vec(783.79 / 2, 0);
let a = 23455;                        // semi-major axis
let c = Math.abs(f1[0] - f2[0]) / 2;  // center-to-focus distance
let b = Math.sqrt(a * a - c * c);     // semi-minor axis

// Guess-and-correct inversion of the mean-anomaly relation (Kepler's equation):
// iteratively adjust `angle` until its mean anomaly M matches goal_angle, then
// return the corresponding point on the ellipse relative to the focus f1.
const goal_angle_to_orbital_pos = (goal_angle) => {
  let angle = goal_angle + 0;
  let M = goal_angle - 0.0167086 * Math.sin(goal_angle - Math.PI); // 0.0167086 = Earth's orbital eccentricity
  let goal_dif = M - goal_angle;
  for (let n = 0; n < 10; n += 1) {
    angle += goal_dif;
    M = angle - 0.0167086 * Math.sin(angle - Math.PI);
    goal_dif = goal_angle - M;
  }
  let p = vec(Math.cos(angle) * a, Math.sin(angle) * b); // point on the ellipse
  return math.subtract(f1, p);
};

// Rotate an orbital position into earth-fixed coordinates: daily spin,
// axial tilt, and the offset between the tilt direction and the ellipse axes.
const rev_transform_planet = (p, a) => {
  const angle = a * 365.25 * 366.25 / 365.25; // = a * 366.25; the earth rotates ~366.25 times per year relative to the stars
  const day_matrix = rotationMatrix(2, angle); // Z-axis rotation (daily spin)
  const earth_tilt = math.unit(-23.5, 'deg').toNumber('rad');
  const tilt_matrix = rotationMatrix(1, earth_tilt); // Y-axis rotation (axial tilt)
  const angle_tilt_to_elipse = -0.22363;
  const day_tilt_to_elipse = rotationMatrix(2, angle_tilt_to_elipse); // Z-axis rotation (tilt direction vs. ellipse axes)
  p = vec3(p[0], p[1], 0);
  let rotated_point = math.multiply(day_matrix, math.multiply(tilt_matrix, math.multiply(day_tilt_to_elipse, p)));
  rotated_point = normalize(rotated_point._data);
  const angle_rev = a * 365.25;
  let longitude = Math.atan2(rotated_point[1], rotated_point[0]) - 0.24385;
  let latitude = Math.atan2(rotated_point[2], Math.sqrt(rotated_point[1] * rotated_point[1] + rotated_point[0] * rotated_point[0]));
  return [vec(longitude, latitude), Math.abs(angle_rev + 0.22363) % (2 * Math.PI), rotated_point];
};

// Conversions between (day, hour) and an angle/fraction of the year.
const year_to_angle = (t) => { return t * 2.0 * Math.PI - (182.0) / 365.25 };
const day_hour_to_angle = (d, h) => { return ((d - 182.0) / 365.25 + h / 365.25 / 24) * 2.0 * Math.PI };
const day_hour_to_year = (d, h) => { return ((d) / 365.25 + h / 365.25 / 24) };
Basically, I’ve gotten the dynamics of the earth’s orbit off of Wikipedia and created an ellipse out of it. There is a concept in orbital mechanics of an imaginary angle called the mean anomaly, which is an angle that changes at a constant rate through an orbit. It is not hard to calculate what the current mean anomaly is, but it is only possible to estimate the real position of a point based on its mean anomaly. That is why my goal_angle_to_orbital_pos function performs 10 guesses and corrections. This is technically not an exact solution for the earth’s location in orbit, but it gets 5 times more accurate with each correction, so it’s indiscernible from exact after 10 corrections.
You will see odd corrections here, with some constants thrown in. This is because the orbit and rotation of the earth are not all aligned. For instance, the apoapsis is 182 days off from the start of the year.
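To see the guess-and-correct idea in isolation, here is a minimal, self-contained sketch of inverting Kepler’s equation by fixed-point iteration. The names and the starting guess are my own, not taken from the site’s code.

// Kepler's equation: mean anomaly M = E - e*sin(E), where E (the eccentric
// anomaly) is what you need to locate the planet on the ellipse.
const EARTH_ECCENTRICITY = 0.0167086;

function eccentricAnomaly(M, e = EARTH_ECCENTRICITY, iterations = 10) {
  let E = M; // first guess: for a nearly circular orbit, E is close to M
  for (let n = 0; n < iterations; n += 1) {
    E = M + e * Math.sin(E); // correct the guess; the error shrinks each pass
  }
  return E;
}

// The position on an ellipse with semi-axes a and b, measured from its center,
// is then (a * Math.cos(E), b * Math.sin(E)).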

Future work
Tell me what you think of the NESW arrangement of the star chart. It is a little strange, but you would use the chart in its current form by setting it on the ground. If it were going to represent looking up, then it should read NORTH, WEST, SOUTH, EAST.
Tell me if the location and time are correct for you, and anything else you want to see.
https://mroneilportfolio.weebly.com/clock.html
You can also go to my GitHub and just download the clock.html file and run it locally for the best experience: https://github.com/Sander-ONeil/sun-timing/