The link no longer works – where are the animations kept now?
]]>I came across this blog by accident: I looked up the composer of the score for the (eternally peerless) Granada Sherlock Holmes series
I wouldn’t qualify as an assistant on your project, but I hope that I can be of some small use
Your ad hoc summary of LLM “habits” certainly agrees with my own experience of LLMs’ responses to mathematical queries; you hint that LLM performance has improved significantly over your last 3 years’ experience; it has improved noticeably over my last 3 months’ experience
But it remains – as of the day of posting – prone to shockingly bad errors; and I wonder whether the underlying “bad habits” responsible are more salient in rudimentary mathematics than in (at a guess) the theorems which your own queries invoke (or don’t, either because they are unpublished or because they are (strategically) false)
Some minutes ago I resubmitted to Gemini and ChatGPT a set of rudimentary arithmetic statements I had first submitted perhaps 3 months ago; today the conclusions were affirmed rather than repudiated (as they had been), but at least some of the glaring errors of 3 months ago persisted
(Today, but not previously, I gave the LLM the job of returning, independently, either “Conclusions true” or “User’s arithmetic and/or inferences include at least one error”)
…
I will happily forward the complete query; meanwhile ChatGPT’s response to the follow-on query, “How can an LLM get this wrong?”, may rhyme with your own experience and/or knowledge/analysis:
This is verbatim:
“Explanation for earlier misreading:
“The earlier mistake was not arithmetic but scope tracking.
“LLMs do not have a symbolic parser; when long structured mathematical text alternates between definitions and claims,
it can lose track of which statements are assumptions versus derived claims.
“You explicitly wrote “Letting a^N have distinct Nth roots {x}, b^N have distinct Nth roots {y} mod prime powers d, 0 < (x, y) < d,”
which already established the premise.
“The model’s earlier response failed to carry that assumption into the evaluation of Claim #2, producing an incorrect qualification.
“Under your system definition, Claim #2 is fully correct.”
…
You will know that repeated submissions of an identical query seldom yield an internally consistent set of responses; the Gemini responses are more interesting than ChatGPT’s
Either, I’d say, would be worth a look
…
Anyway, I hope this helps; if you are interested in the original query: antilli@gmail.com
Regards, D
]]>I’m an independent design researcher and software engineer who’s built a web-based platform for spatial mathematical writing (based on my MSc thesis, 20/20). It might address some of the interface challenges you mention. The approach focuses on reducing cognitive load through spatiality and progressive disclosure, and it’s designed for ease of authoring. I’ve written spatial versions of several theorems, including those leading to Newman’s proof of PNT: https://www.brainec.com/university-demo/~vfuto. The tool itself is at https://brainec.com and is already being used by students and professors.
I’d be happy to share any know-how I’ve gathered over the years if it helps.
]]>Some time ago I wrote this spatial version of Heine-Borel theorem with the proof, maybe it could help: https://www.brainec.com/university-demo/~vfuto/heine-borel-theorem.html
]]>https://arxiv.org/abs/2412.15184v1
“Data for Mathematical Copilots: Better Ways of Presenting Proofs for Machine Learning”
by
Simon Frieder, Jonas Bayer, Katherine M. Collins, Julius Berner, Jacob Loader, András Juhász, Fabian Ruehle, Sean Welleck, Gabriel Poesia, Ryan-Rhys Griffiths, Adrian Weller, Anirudh Goyal, Thomas Lukasiewicz, Timothy Gowers
]]>The best way is probably to email me and we can discuss whether there’s a match between our needs and what you would be willing/able to do.
]]>I have read your proof carefully, and I find it very satisfying. Please forgive me for bothering you again with another question.
I previously saw someone mention that the proofs of some real analysis theorems use techniques that are essentially combinatorial in nature. I apologize that I couldn’t find the original post despite searching for a long time.
I know you are an expert in both analysis and combinatorics. Could you please provide some examples from measure theory, undergraduate real analysis, or probability theory?
]]>Here’s one way to prove the theorem. Suppose we have an open cover $\mathcal U$ of $[0,1]$. Let $A$ be the set of all $x$ such that the interval $[0,x]$ can be covered by finitely many sets from $\mathcal U$. Then $A$ is non-empty as it contains $0$. Also, if $x\in A$ and $x<1$, then there is some $\epsilon>0$ such that $x+\epsilon\in A$, since we can find finitely many open sets that cover $[0,x]$ and one of them contains $x$ and therefore an interval around $x$. Also, $A$ is closed, because if for every $y<x$ we can cover $[0,y]$ with finitely many sets from $\mathcal U$, then we can pick a set $U\in\mathcal U$ that contains $x$, and it will contain an interval $(y,z)$ that contains $x$, so combining that with a finite collection of sets that cover $[0,y]$ we get a finite collection of sets that covers $[0,x]$. This proves that $A$ is a closed interval $[0,x]$ and that $x$ must be 1. In other words, $\mathcal U$ has a finite subcover.
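The skeleton of that argument can be restated compactly (this is just a summary of the steps above, in the same notation):

```latex
\[
A \;=\; \{\, x \in [0,1] \;:\; [0,x] \text{ admits a finite subcover from } \mathcal U \,\}.
\]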
It’s not clear that that is a proof by contradiction, though if one digs into some of the details (such as the very last step where I said that $x$ must be 1) then one does start by saying “Suppose not”.
The easiest way to explain why it is important is to appeal to the notion of compactness. If you don’t know what that is (though I’d be surprised if you are taking a measure theory course without having seen compactness), then you should look it up. The Heine-Borel theorem states that closed intervals in $\mathbb R$ are compact. Since compact sets have many useful properties, it is very useful to have a basic class of compact sets out of which other ones can be built by such methods as taking products, finite unions, closed subsets, and continuous images.
]]>I am a student taking a measure theory course. I have a question on the Heine-Borel theorem. What is the idea of the theorem? I mean, what does this theorem tell us, and why is it important? When I am doing my homework, this theorem arises quite often and I struggle to recognize when to use it.
Also, why is proof by contradiction essential in the proof of this theorem? I mean, how can one come up with a proof by contradiction when seeing the Heine-Borel theorem for the first time? I think there is something hiding in the proof that I cannot recognize, due to the fact that I am ignorant.
Thanks in advance for your help!
]]>I should have spotted this comment earlier. This error on the book cover (the last words of most lines have been removed) appeared only recently and is now in the process of being dealt with.
]]>Since the concept of limit is the most fundamental innovation in calculus, not teaching the definition of a limit is a grievous mistake in pedagogy.
Not teaching (or emphasizing) the definition of a limit sets the stage for not teaching or emphasizing the definition of a derivative. And it is obvious that a student who is unfamiliar with the definition of a limit cannot possibly well understand what the definition of a derivative means.
]]>Using the notation [n] = {1, 2, 3, …, n}, f is a bijection from [n] to [n]. But a ‘permutation g of a set X’ is an entirely different entity – it is a bijection from X to X. These two entities f and g are both mappings but have different domains:
(i) the ‘permutation’ f has domain [n]
(ii) the ‘permutation of a set X’ g has domain X
But in (i) we also use the word ‘permutation’ to mean something else, namely the physical process of re-ordering an ordered list of n objects, in accordance with the instructions provided by the permutation function f. This ‘physical process’ is what we intuitively mean by ‘permutation’ – it could be formalized mathematically in some way but that is laborious and unhelpful to the problem at hand which is simply to analyze permutations, for example in the theory of matrix inverses and determinants. Instead of formalizing it in this way we associate it with a formal function f : [n] -> [n] in a natural way.
Thus (i) is ascribing two meanings to the word ‘permutation’, one a function f : [n] -> [n], and the other a physical process defined by f. This relationship respects the idea of composition, i.e. composition of k physical permutations corresponds with function composition: fk o … o f2 o f1 is the permutation function for the succession of k physical permutations that individually come from f1, f2, …, fk.
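The correspondence just described – doing physical re-orderings in succession matches composing the permutation functions – can be sketched in a few lines of code. The names `apply_perm` and `compose` are hypothetical helpers invented for this illustration; a permutation f of [n] is stored as a tuple whose (i-1)-th entry is f(i):

```python
def apply_perm(f, xs):
    """Physically re-order xs: the object at position i moves to position f(i)."""
    result = [None] * len(xs)
    for i, x in enumerate(xs, start=1):
        result[f[i - 1] - 1] = x
    return result

def compose(g, f):
    """Function composition g o f, i.e. the map i -> g(f(i))."""
    return tuple(g[f[i - 1] - 1] for i in range(1, len(f) + 1))

f1 = (2, 3, 1)           # 1 -> 2, 2 -> 3, 3 -> 1
f2 = (1, 3, 2)           # swaps positions 2 and 3
xs = ['a', 'b', 'c']

# Performing the two physical re-orderings in succession ...
step_by_step = apply_perm(f2, apply_perm(f1, xs))
# ... gives the same list as a single re-ordering by the composite f2 o f1.
assert step_by_step == apply_perm(compose(f2, f1), xs)
```

The assertion is exactly the claim above for k = 2: the succession of physical permutations from f1 then f2 corresponds to the single function f2 o f1.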
In the case of (ii) there is no physical process of permutation related to the function g, because X is just a general set without ordering. Trying to make X an ordered set has no benefit, [n] is adequate for our needs, for it is only positions that are being altered. Introducing an ordered set X just adds complexity but has no benefit.
I think the confusion arises because of mathematicians’ natural desire to formalize definitions into set theoretic entities like sets, functions and n-tuples and so on, but these can become very clunky and impractical to work with in some cases. Secondly, the above two uses (i) and (ii) of the word ‘permutation’ have entirely different meanings. Which meaning is intended should be clear from the context.
Please let us keep ‘permutation’ simple. If such a simple concept as permutation causes such a long, detailed, unprofitable and unenjoyable discussion, then maths must surely be losing its way, enforcing rigid formalisms on our thought processes that simply have no benefit. I was very surprised indeed when you said ‘It is the first.’; I fully expected you to go with the second approach. I myself could not undertake any mathematics using this first approach, and indeed developing my own set of proofs of the theory of n x n matrix inverses and determinants (a topic normally omitted from linear algebra textbooks) would not have been possible without the clarity and simplicity of the second approach to permutations. I would elect to fail a class rather than use the first approach; I think these kinds of thought processes are bad for mathematics.
]]>“This is a one-of-a-kind reference for anyone with a serious interest in mathematics.”
]]>More thoughts here: https://jmft.dev/vibecoding-accessible.html
]]>