Beni Bogoşel's blog
What’s wrong with the academic publication system?
Doing research means extending the boundaries of knowledge. Researchers might be faced with a concrete question to solve, a practical industrial problem in some cases, or just some old conjecture that puzzled everyone before them. In every case, after solving a “big research problem” the result needs to be exposed somewhere. Why? To make it available to everybody else, to establish it once and for all, and to avoid that you or other researchers need to do the same work again. Where are such research papers published? In academic journals: collections of pages and pages of new papers (printed or just stored and catalogued online), indexed so that other researchers can easily find information.
This process used to work well and still does today, to some extent. However, incentives to “publish or perish” (… in the academic world …) and the false promise of fame sometimes lead people to manipulate the system in their favor, ruining the reputation of academic researchers in general. Below I present such an example, hoping not to draw more traffic towards the false information spread by that paper, but to help counter the harm already done (just read below).
First, let’s state the problem: the “trisection of an angle with compass and straightedge”.
Using only an unmarked straightedge and a compass, trisect an arbitrary angle by splitting it into three equal angles.
This question is a natural generalization of the angle bisection problem, which is possible and taught in school. Straightedge-and-compass constructions are a popular topic in geometry, and people have asked such questions since antiquity, when geometric constructions were done rigorously using only elementary tools like the ones mentioned.
The problem stated above is solved and the answer is negative: it is not possible to trisect an arbitrary angle with the tools mentioned. The proof uses notions from algebra and was initially given in 1837 by Pierre Wantzel. It is not my purpose to give the proof again here; I leave the interested reader to search for the appropriate references in the following Wikipedia article.
Knowing the answer since my undergraduate years, I was surprised to see a while ago on the internet a paper entitled Exact Angle Trisection with Straightedge and Compass by Secondary Geometry. Reading the abstract (summary) of the paper, the author “does” exactly what I knew was impossible. I read the paper, found the error (I was sure there was one…) and contacted the author. It was impossible to convince him, with the arguments I was able to formulate, that his approach contained errors and that the paper is therefore blatantly false. Nonetheless, the paper is published in the “journal” International Journal of Mathematics Trends and Technology, has a DOI number and is cited a number of times (by the same author…). This fact says a few things about the supposed journal and its editorial board, who simply accept everything that comes their way, I guess, discrediting the academic world with their lack of ethics and proper research work…
But the damage goes deeper than this. At this moment (14 October 2025), searching on Google for the phrase “exact angle trisection compass straightedge” shows the aforementioned paper in the first two positions (journal link and ResearchGate link). The Wikipedia article only comes afterwards. The first images in the Google image search are from the article mentioned above.
Removing “exact” and searching for “angle trisection compass straightedge” improves things a bit, showing the Wikipedia and Wolfram pages before the paper link, which still sits in THIRD POSITION. DuckDuckGo gives the same results.
Therefore, in this example, the search engines simply bring forth the most optimized result, not the correct one. Having scientific references stating opposite ideas on the same results page is unthinkable. How can someone who is not well established in mathematics distinguish who is right?
Why did we arrive at this point? In the past publishers were more rigorous, I guess, and researchers were less inclined to cheat in this way. Nowadays there are plenty of publication venues which are there for the money to be made (yes, paying can get your paper published in many places), not for spreading true research work. In many countries researchers need to meet quotas for published papers, and when you’ve got nothing new to say, you might produce articles of bad quality.
I conclude by stating that you shouldn’t believe everything you see online. Even journal articles may be misleading (if it happens in mathematics, imagine sciences which are less exact…). Question everything. Believe what you can trust entirely.
Here’s a link to Terence Tao’s blog post on the subject. Unfortunately the comments section is full of other people claiming to solve the trisection problem. It’s sad that people spend so much time trying to prove something whose impossibility is established through other mathematical ideas. If both proofs were right, mathematics would be absurd and pointless.
Publication list automation with Python – with help from ChatGPT
As a researcher, there’s nothing more thrilling than doing paperwork… (you get the point). I use LaTeX for almost every possible document related to math, and for other documents as well. But sometimes the administration requires your publication list, for example, with a certain formatting, in a Word file. So I started filling out the document, but when reaching the 5th publication a question hit me: isn’t there an automated way of doing this?
I already had a json file for automating my publication list on my research website: https://beniamin-bogosel.github.io/publications.html (not perfect, but does the job).
Following the latest trends, when an automation task involving coding is required, ChatGPT may help jump to the right tools and obtain an (almost) good enough solution quickly. Therefore, I put the following in the prompt:
I have a json file with items with multiple fields (publications). Can I output a .doc file with the publications being formatted a certain way?
The answer came immediately: YES, and the following code was suggested:
import json
from docx import Document

# Load your JSON file
with open("publications.json", "r", encoding="utf-8") as f:
    publications = json.load(f)

# Create a new Word document
doc = Document()
doc.add_heading("Publication List", level=1)

# Example formatting function
def format_publication(pub):
    title = pub.get("title", "")
    authors = pub.get("authors", "")
    journal = pub.get("journal", "")
    year = pub.get("year", "")
    return f"{authors} ({year}). {title}. *{journal}*."

# Loop through publications
for pub in publications:
    p = doc.add_paragraph()
    p.add_run(format_publication(pub))

# Save to file
doc.save("publications.docx")
This did not work right away since I did not have the correct packages installed. After installing python-docx (also suggested by ChatGPT) I got the first docx which automated the task I wanted, but the formatting was not yet right. For example, *stars were used for bold* like in Markdown, but this is not the proper way to do it in the code. After complaining, ChatGPT sorted things out:
import json
from docx import Document

# Load your JSON file
with open("publications.json", "r", encoding="utf-8") as f:
    publications = json.load(f)

doc = Document()
doc.add_heading("Publication List", level=1)

for pub in publications:
    authors = pub.get("authors", "")
    year = pub.get("year", "")
    title = pub.get("title", "")
    journal = pub.get("journal", "")

    p = doc.add_paragraph()

    # Authors in italic
    if authors:
        p.add_run(authors + " ").italic = True
    # Year
    if year:
        p.add_run(f"({year}). ")
    # Title
    if title:
        p.add_run(title + ". ")
    # Journal in bold
    if journal:
        p.add_run(journal).bold = True

doc.save("publications.docx")
This is where things started going south. I had DOI links in my json and wanted to include them in the docx. ChatGPT gave me a complicated piece of code I didn’t understand, which did not work. Even after another few prompts it did not manage to include the link as clickable hypertext. I gave up, but the working solution up to this point was enough to reduce the remainder of my administrative task to a series of straightforward copy-pastes.
I should mention that, reading the code above, I managed to understand what the commands do, and I modified the code myself to obtain the formatting I wanted. The final code, including an author-bold-underline hack found again with ChatGPT, is given below. The hyperlink still does not work…
import json
from docx import Document
from docx.shared import RGBColor

# Load your JSON file
with open("publications.json", "r", encoding="utf-8") as f:
    publications = json.load(f)

doc = Document()
doc.add_heading("Publication List", level=1)

def add_hyperlink(paragraph, url, text=None):
    """
    Add a hyperlink with display text = url (or custom text).
    """
    if text is None:
        text = url
    run = paragraph.add_run(text)
    run.font.color.rgb = RGBColor(0, 0, 255)  # blue
    run.font.underline = True
    # ⚠ Note: this only *looks like* a hyperlink. Word will NOT treat it as clickable.
    # For full external link support, the longer XML way is required.
    return run

def add_authors(paragraph, authors, highlight="Bogosel"):
    """
    Add authors string, making `highlight` substring bold if present.
    """
    parts = authors.split(highlight)
    for i, part in enumerate(parts):
        if part:
            paragraph.add_run(part)
        if i < len(parts) - 1:  # add the highlight in bold
            run = paragraph.add_run(highlight)
            run.bold = True

for pub in publications:
    authors = pub.get("author", "")
    year = pub.get("year", "")
    title = pub.get("title", "")
    journal = pub.get("journal", "")
    doi = pub.get("doi", "")

    p = doc.add_paragraph()

    # Authors, with my name in bold
    if authors:
        # p.add_run(authors + ", ")
        add_authors(p, authors, highlight="B. Bogosel")  # change substring if needed
        p.add_run(", ")
    # Title in italic
    if title:
        p.add_run(title + ", ").italic = True
    # Journal
    if journal:
        p.add_run(journal)
    # Year
    if year:
        p.add_run(f", {year}. ")
    # DOI on a new line
    if doi:
        run = p.add_run(f"\n{doi}\n")
        run.underline = True
        run.bold = True

doc.save("publications.docx")
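For reference, here is a sketch of that “longer XML way”, assembled from recipes circulating in python-docx discussions; I did not use it in my final document, so consider it an untested starting point rather than a verified solution:

from docx.opc.constants import RELATIONSHIP_TYPE
from docx.oxml import OxmlElement
from docx.oxml.ns import qn

def add_real_hyperlink(paragraph, url, text=None):
    # Register the URL as an external relationship on the document part
    r_id = paragraph.part.relate_to(url, RELATIONSHIP_TYPE.HYPERLINK,
                                    is_external=True)
    # Build a w:hyperlink element pointing to that relationship
    hyperlink = OxmlElement("w:hyperlink")
    hyperlink.set(qn("r:id"), r_id)
    # A run holding the display text (styling omitted for brevity)
    run = OxmlElement("w:r")
    text_elem = OxmlElement("w:t")
    text_elem.text = text if text is not None else url
    run.append(text_elem)
    hyperlink.append(run)
    paragraph._p.append(hyperlink)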
Moral of the story: ChatGPT gave me the right pointers, helped me automate the task at hand, and got stuck after a number of requests. Did it help me do the job quicker? Yes. But it was not a one-prompt result. Also, I made a few modifications myself to obtain the desired results. Someone without basic Python and programming knowledge would surely get stuck in the prompting-testing loop.
- Use the tools with care.
- Understand what they can do.
- Know when to take matters into your own hands and finish the job.
An example: how research ideas are turned into concrete results
In every research domain there are problems which are easy to state, but hard to prove. Since I have worked a lot on the optimization of the eigenvalues of the Laplace operator with respect to the domain, the following conjecture of Polya and Szego always puzzled me:
Among $n$-gons with fixed area, the regular one minimizes the first eigenvalue of the Dirichlet-Laplace operator.
The Dirichlet-Laplace eigenvalues are defined by

$$-\Delta u_k = \lambda_k u_k \text{ in } \Omega, \qquad u_k = 0 \text{ on } \partial\Omega.$$

They are the infinite dimensional analogue of the eigenvalues of a symmetric positive definite matrix. In particular, minimizing the first eigenvalue is equivalent to having the slowest decay of the solution of the heat equation. In some sense, domains minimizing the first Dirichlet-Laplace eigenvalue best preserve the heat, for any initial condition, keeping in mind that the boundary is kept at fixed temperature $0$.
Intuitively, one would expect minimizers of the first eigenvalue to be as “round” as possible. In the class of general shapes it is well known that the disk minimizes $\lambda_1$ when an area constraint is imposed. Based on this heuristic, Polya and Szego conjectured that the “roundest polygon”, the regular one, solves the polygonal case.
Nevertheless, this problem is open for $n \geq 5$, where symmetrization techniques do not work anymore. Progress towards a solution was made in some recent works:
- B. Bogosel, D. Bucur, On the polygonal Faber-Krahn inequality, Journal de l’Ecole Polytechnique
- E. Indrei, On the first eigenvalue of the Laplacian for polygons, Journal of Mathematical Physics
The problem is not yet solved for $n \geq 5$, but in the article written in collaboration with Dorin Bucur we show that a proof strategy using validated numerical computations is possible and would lead to a solution in finite time. The first step is computing the eigenvalues of a certain Hessian matrix, of size depending on $n$, and showing that they are positive.
This is exactly what we did in the paper Polygonal Faber-Krahn inequality: Local minimality via validated computing (https://arxiv.org/abs/2406.11575). The local minimality of the regular $n$-gon is proved for $n \in \{5, 6\}$ using computations based on interval arithmetic, controlling every possible error entering the computations, including floating point errors. Let me describe briefly the challenges of this work. First, there are various theoretical results needed, which are nontrivial, but I will not describe them here. In the end we are left with proving that finitely many quantities, expressed in terms of eigenvalues and eigenfunctions, are strictly positive. The problem is that $(\lambda, u)$ is an eigenpair for the Laplace equation, while the other functions involved are solutions to some PDEs on the polygon. Positivity is not obvious theoretically. However, one can compute approximations for all these quantities using Finite Elements. I will add a superscript $h$ to all numerical quantities, which leads to discrete counterparts of the quantities above.
Finite element computations can be as precise as we want, but the drawback is that one needs a discretization, a mesh, which is finer and finer, leading to larger linear problems. From a theoretical point of view it is possible to prove an a priori error estimate whose constant is quite complex, but explicit. This estimate bounds the difference between the continuous solution and the exact discrete solution.
The problem is that the solution found by the computer is not exact either: it is obtained via iterative algorithms and is affected by errors coming from floating point arithmetic. To control these errors, interval arithmetic was proposed. Instead of making computations using floating point numbers, one considers intervals of floating point numbers. The computation rules and rounding choices are made such that the result of an interval computation is guaranteed to contain all the possible results for numbers in the input intervals. There are interval arithmetic libraries, like INTLAB, which can be used.
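The paper uses INTLAB (in MATLAB), but the principle can be illustrated with any interval library; here is a minimal Python sketch using mpmath's interval type (my choice for the illustration):

from mpmath import iv

iv.dps = 15                  # interval precision (significant digits)

x = iv.sqrt(iv.mpf(2))       # an interval guaranteed to contain sqrt(2)
print(x)                     # outward-rounded enclosure of 1.41421356...
print(x ** 2)                # an interval guaranteed to contain the exact value 2,
                             # even though no float squares exactly to 2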
Like in every branch of mathematics, nothing is as simple as it seems. One cannot simply replace all numbers with intervals and expect to obtain relevant results. For the problem at hand we have the following conflicting ideas:
- In order to make the analytical error estimate as small as possible, one needs a fine mesh, leading to large discrete problems.
- In order to have small intervals around the discrete quantities in INTLAB, one needs to perform as few numerical operations as possible. Therefore, the discrete problems should be as small as possible.
Despite these difficulties, in our paper we managed to solve the cases $n = 5$ and $n = 6$. Nevertheless, I have a funny story about how we got working on the problem again. It goes like this.
When finishing our first paper, we had some continuous estimates which were not as optimistic as one might hope. We managed to prove some improved theoretical estimates, but the numerics did not seem to work: the size of the discrete problem for which the estimates would guarantee positivity was too large to handle with interval arithmetic.
In March 2024, while I was at a conference (one of the places mathematicians find some time to think 🙂 ) I was reviewing the numerical computations and I asked myself: what if some constant were a few times smaller? Therefore, I put an artificial factor in front of it in the code. Surprisingly, the computation seemed to give good results. It was a late evening, so I closed my computer and forgot about the extra factor.
A few weeks later, after discussing the theoretical constants with Dorin Bucur, we managed to further improve them. The code gave “good” results now, so we started writing the paper and implementing everything in INTLAB, assuming it would work.
At some point, towards the end of the writing process, we had another theoretical/numerical idea which further decreased the constants in the estimates, making the discrete problems smaller still. Things were too good to be true: the computations worked and we were completing the paper.
Then I started cleaning the code, so that we could put it online together with our paper. This is necessary, since the method we used could not be considered a proof if the numerical simulations were not available. It was then that I found the forgotten factor in the computations… I was afraid that it would ruin everything, but removing it still allowed the cases $n = 5, 6$ to work.
Moral of the story: if I hadn’t put that factor in the code back then, we probably wouldn’t have found the motivation to work hard on the paper and on the results needed to finish it.
Furthermore, if we had not worked as hard as we did to optimize all the estimates (even though it did not seem really necessary at some point), the code would not have worked when removing the factor that I had put there (for testing purposes).
At last, while working on this problem I learned a lot regarding error estimation and interval arithmetic. One does not need to perform all computations in INTLAB (as I initially thought). Interval arithmetic is used for error estimation starting from a good enough floating point solution (which is always available).
Therefore, sometimes motivation can come for the wrong reasons, but still lead to correct or useful results!
Odds of winning at Stake Plinko
I recently stumbled upon a YouTube video of people betting on a game where balls fall through a grid of pins and end up in bins indicating a multiplier from 0.2x up to 1000x. The player places a bet $S$ and then receives the bet times the multiplier.
Of course, everyone playing this game hopes to reach the 1000x bin in order to cash in something more than what is ‘invested’ (??) in the losses. But how likely is it to reach the 1000x bin? Also, assuming the ball has equal probability to go left or right, what is the amount one can expect to win in such a game?
If you’re a thinker, in general, you should already know the answer: there’s no winning in the long run! The house always wins, otherwise all casinos would go broke. Nevertheless, let’s see the math for one configuration I found online.
The bins are as follows, from left to right: 1000x, 130x, 26x, 9x, 4x, 2x, 0.2x, 0.2x, 0.2x, 0.2x, 0.2x, 2x, 4x, 9x, 26x, 130x, 1000x.
You might be fooled at first, seeing that there are “only” 5 losing bins; the other 12 are winning. To find the expected win you also need to take into account the probability of landing in each one of these bins.
The configuration is associated to a “binomial law”. The ball can go LEFT or RIGHT at each level (2 outcomes) with (supposedly) equal probability. Thinking a bit, you can see that the final position of the ball only depends on the number of LEFT turns the ball takes. However, these LEFT turns can be chosen at any of the levels, leaving us with the problem of choosing a subset with $k$ elements from a set of 16 levels. There is a formula for this, given by the binomial coefficient $\binom{16}{k}$. The “trick” is that this binomial coefficient grows very quickly and, you guessed it, it is largest when the number of LEFT turns is roughly half the number of levels. This corresponds exactly to the 0.2x bins, and that’s where the losses are.
Some computations show that the probabilities for falling into each one of the bins are roughly given in the next image:

The probabilities are obtained by dividing the binomial coefficients by the total number of possibilities: $2^{16}$, since there are 16 levels where the ball must choose left or right. Take a look at the probability of falling into the 1000x bin: 1/32768. However, don’t be fooled. This does not mean that after 32768 tries you’ll land in this bin. It can happen after 100 tries, but it may also take a million tries.
If you want to test your odds, try and see after how many tries you can flip a coin 16 times in a row, obtaining the same result (tail or head). This is what you need to reach the 1000x bin.
Combining the probabilities with the multipliers, the average amount won for a bet $S$ is about $0.99\,S$. Therefore, if you keep betting 100$, you are expected to win back, on average, 99$. Therefore, on average, the player always loses.
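Here is a short Python sketch checking these numbers for the configuration above (the multipliers are the ones listed earlier in the post):

from math import comb

# Multipliers for the 17 bins, as listed above (16 left/right choices)
multipliers = [1000, 130, 26, 9, 4, 2, 0.2, 0.2, 0.2, 0.2, 0.2,
               2, 4, 9, 26, 130, 1000]
n = len(multipliers) - 1  # 16 rows of pins

# Probability of ending in bin k is C(n, k) / 2^n
probs = [comb(n, k) / 2**n for k in range(n + 1)]

expected = sum(p * m for p, m in zip(probs, multipliers))
print("P(1000x, either side) =", probs[0] + probs[-1])  # 2/65536 = 1/32768
print("Expected multiplier   =", round(expected, 4))    # about 0.99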
Moral of the story: DON’T GAMBLE. The player always ends up losing. If by some chance you win, you’ll just think you could do it again and lose even more.
UPDATE: Here’s another configuration I found online. There’s only one losing bin, the central one. However, the expected winning rate is still 0.99. Bet 100 dollars and on average you’ll win just 99 back.

Yet another perspective:
- Probability to hit the center bin: 0.196. That’s roughly 20% of all throws.
- Probability to hit the three center bins: 0.545. Therefore, in more than half of the cases you’ll end up in the 3 center bins.
- Probability to hit the five center bins: 0.79. Almost 80%.
The sorting problem is at least n log n
A fundamental problem in Computer Science is the sorting problem: given values $a_1, a_2, \ldots, a_n$, we want to find a permutation $\sigma$ which verifies $a_{\sigma(1)} \leq a_{\sigma(2)} \leq \cdots \leq a_{\sigma(n)}$. Multiple algorithms can be proposed to solve this problem: Bubble Sort, Insertion Sort, Merge Sort, Heap Sort, Quick Sort, to name a few.
Once an algorithm is given, its complexity is evaluated in terms of the number of elementary operations required to complete the algorithm. The complexity is usually evaluated in the worst case, the case in which the algorithm must work hardest to achieve the goal. For the sorting problem there exist algorithms which have complexity at most $O(n \log n)$, that is, they will finish in a time bounded by $C n \log n$, where $C$ is some positive constant. Merge Sort and Heap Sort are such algorithms.
One fundamental question that we must ask when presented with an algorithm is: “Can we do better than this?” It turns out that for the sorting problem, if we do not assume any further information about the sequence and the sorting is only the result of comparisons between various elements of the sequence, then we cannot do better than $n \log n$. To see this, a surprising idea regarding trees and Stirling’s formula can be used.
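In short (a standard argument, sketched here for completeness): any comparison-based sorting algorithm can be seen as a binary decision tree whose leaves correspond to the $n!$ possible orderings of the input, so its worst-case number of comparisons is at least the depth $\log_2(n!)$, and

$$\log_2(n!) = \sum_{k=1}^{n} \log_2 k \geq \sum_{k > n/2} \log_2 k \geq \frac{n}{2} \log_2 \frac{n}{2},$$

which grows like $n \log n$.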

The Polya-Szego conjecture for Dirichlet-Laplace eigenvalues on polygons
The Dirichlet-Laplace eigenvalues $\lambda_k$ and eigenfunctions $u_k$ verify the equation

$$-\Delta u_k = \lambda_k u_k \text{ in } \Omega, \qquad u_k = 0 \text{ on } \partial\Omega.$$

Just like the eigenvalues and eigenvectors of a symmetric positive definite matrix show the action of that matrix on various directions in the space, the eigenfunctions of the Laplace operator characterize the behavior of the solutions of the heat equation. The eigenfunctions are assumed to form an orthonormal basis of $L^2(\Omega)$. Consider the evolution of the heat in a domain $\Omega$ with fixed zero temperature applied on the boundary $\partial\Omega$, without source terms:

$$\partial_t u - \Delta u = 0 \text{ in } \Omega, \qquad u = 0 \text{ on } \partial\Omega,$$

with initial heat distribution $u(0, \cdot) = u_0$. Then, if the initial heat distribution is expressed in terms of the Laplace eigenfunctions (which form a basis in $L^2(\Omega)$) by

$$u_0 = \sum_{k \geq 1} c_k u_k,$$

a simple computation shows that the solution of the heat equation is simply

$$u(t, x) = \sum_{k \geq 1} c_k e^{-\lambda_k t} u_k(x)$$

for $t > 0$ and $x \in \Omega$. Therefore the heat converges to $0$ exponentially as $t \to \infty$ (as expected, due to the zero boundary conditions). However, the dominant rate of decrease of the heat is given by the first eigenvalue $\lambda_1$ of the Dirichlet-Laplace operator. The smaller this eigenvalue, the slower the exponential $e^{-\lambda_1 t}$ decreases.
Therefore, in order to keep the heat elevated as long as possible, the first Dirichlet-Laplace eigenvalue should be minimized. Optimization of eigenvalues of differential operators started with Lord Rayleigh in his book The theory of sound, where he asserted that the disk minimizes the first Dirichlet-Laplace eigenvalue among two dimensional domains with fixed area. The result was formalized in the 1920s by Faber and Krahn. Polya and Szego asserted that a similar statement holds for polygons: the $n$-gon with fixed area minimizing the first Dirichlet-Laplace eigenvalue is the regular one (the roundest $n$-gon, in some sense). Despite the simplicity of this affirmation, the result is still open for $n \geq 5$. Here are some references regarding the subject:
- Polya, Szego, Problems in mathematical physics. They show that the result holds for triangles and quadrilaterals using Steiner symmetrizations. Symmetrizing a domain decreases the first Dirichlet-Laplace eigenvalue. The proof is illustrated in Henrot, Extremum problems for eigenvalues of elliptic operators, in Chapter 3. In this chapter it is also shown that the problem among $n$-gons of fixed area indeed has solutions.
- Fragala and Velichkov show that the equilateral triangle is the only critical point for the first eigenvalue among triangles. Unfortunately this does not generalize to higher eigenvalues.
- Various numerical simulations indicate that for small $n$ the theorem should hold: Nigam et al, Antunes and Freitas, Bogosel.
- Indrei recently produced a polygonal manifold for which the regular $n$-gon is optimal.
- Bogosel and Bucur proposed in the following paper a hybrid strategy for proving the conjecture for a given $n$: a combination of theoretical results with a finite number of validated numerical simulations may lead to a proof of the conjecture.
- In the next paper the local minimality is proved for regular pentagons and regular hexagons, using validated numerical simulations to prove that certain eigenvalues of a Hessian matrix are strictly positive.
This problem provides a nice example where classical techniques have failed to produce a solution for more than 70 years since the problem was stated. Modern techniques, including numerical simulations, could lead to a solution in the more or less distant future.
Gradient descent: fixed step, variable step
Suppose $f : \mathbb{R}^n \to \mathbb{R}$ is a function of class $C^1$ and let $x_0$ be a point where the gradient does not vanish: $\nabla f(x_0) \neq 0$. Then going against the gradient decreases the objective function!
Indeed, the Taylor expansion shows that for $t > 0$ small enough:

$$f(x_0 - t \nabla f(x_0)) = f(x_0) - t |\nabla f(x_0)|^2 + o(t).$$

The notation $o(t)$ simply means something that converges to zero faster than $t$: $o(t)/t \to 0$ when $t \to 0$.
The key aspect in the above formula is that $t$ needs to be small enough so that the negative term $-t|\nabla f(x_0)|^2$ dominates the remainder $o(t)$, guaranteeing a decrease of the objective function. In practice we also want $t$ to be big enough so that fast convergence occurs! There is a balance that needs to be found to have a good step, but more on that later, when we talk about linesearch algorithms.
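To make this concrete, here is a minimal fixed-step gradient descent in Python (the quadratic objective, the step $t = 0.1$ and the stopping test are my own choices for the illustration):

import numpy as np

def grad_descent(grad, x0, t=0.1, tol=1e-8, max_iter=1000):
    """Fixed-step gradient descent: x_{k+1} = x_k - t * grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # gradient (almost) vanishes: stop
            break
        x = x - t * g
    return x, k

# Example: f(x, y) = x^2 + 2 y^2, with gradient (2x, 4y); minimizer (0, 0)
grad = lambda x: np.array([2 * x[0], 4 * x[1]])
xmin, iters = grad_descent(grad, [1.0, 1.0], t=0.1)
print(xmin, iters)   # converges to (0, 0)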
Best linear interpolation constant on a triangle
Given a triangle $T$ in the plane, one may ask what is the best estimate one can get for the difference between a function $u$ defined on $T$ and the linear function $\Pi u$ taking the same values as $u$ at the vertices of the triangle $T$. The difference $u - \Pi u$ should be evaluated in some norm involving the derivatives of $u$.
When dealing with finite element estimations, it is a natural question to try and bound the difference of the gradients in $L^2$, since the $L^2$ estimate is usually easier to obtain. Since solutions of PDEs are often in $H^2$, i.e. the second derivatives have bounded $L^2$ norms, it is natural to ask if it is possible to find a constant $C(T)$ such that

$$\|\nabla(u - \Pi u)\|_{L^2(T)} \leq C(T)\, |u|_{H^2(T)},$$

where $|\cdot|_{H^2(T)}$ is the semi-norm involving only integrals of the second derivatives of $u$. It turns out that such a constant exists, and finding the dependence of $C(T)$ on the triangle $T$ is a challenging question and an active field of research. This is mainly due to the fact that knowing the constant $C(T)$ explicitly immediately gives useful and explicit error bounds for solutions of finite element problems. With the rise of the research domain of validated computing, this is an essential topic.
Let me name a few papers where such results are discussed:
- Estimation of interpolation error constants for the P0 and P1 triangular finite elements, by Fumio Kikuchi and Xuefeng Liu: https://doi.org/10.1016/j.cma.2006.10.029 In this paper, explicit formulas for $C(T)$ are given in terms of the elements of the triangle. This is quite useful if the mesh contains triangles with variable geometries.
- For a fixed triangle $T$, the optimal constant is in fact related to the first eigenvalue of a fourth order problem. This is discussed in the following note by Handscomb: https://www.math.auckland.ac.nz/~waldron/Multivariate/handscomb95.pdf
- Kobayashi proposes multiple explicit formulas for the constant $C(T)$ in https://am2015.math.cas.cz/proceedings/contributions/kobayashi.pdf The proofs are not complete, but they are verified for a large class of triangles through discrete eigenvalue problems. Indeed, using a triangulation of $T$ with congruent triangles and finding the discrete eigenvalue on that triangulation for the minimization of the corresponding quotient with the Morley finite elements gives an explicit upper bound for the constant.
Iterative algorithms – Convergence rates
In optimization, and in any other iterative numerical algorithm, we are interested in having convergence estimates. We are not only interested in showing that the error goes to $0$, although that is essential, but also at what speed the error goes to $0$. An algorithm which converges quicker is a better choice in all cases!
Generally, there are two points of view for convergence: estimating the error in terms of $n$, where $n$ is the number of iterations, or having a comparison between the error at iteration $n + 1$ and the error at iteration $n$. I will take the latter point of view below.
To fix the ideas, denote by $e_n = |x_n - x^*|$ the error at iteration $n$, where $x_n$ is the current iterate and $x^*$ is the solution to the problem.
We have the following standard classification:
- linear convergence: there exists $q \in (0, 1)$ such that $e_{n+1} \leq q\, e_n$; the constant $q$ is called the convergence ratio; it is easy to show that $e_n \leq q^n e_0$, so in particular $e_n \to 0$.
- sublinear convergence: $e_n \to 0$, but the sequence is not linearly convergent.
- superlinear convergence: $e_n \to 0$ with any positive convergence ratio; a sufficient condition is $e_{n+1}/e_n \to 0$.
- convergence of order $p > 1$: there exists $C > 0$ such that for $n$ large enough $e_{n+1} \leq C e_n^p$; $p$ is called the order of convergence; the case $p = 2$ has a special name: quadratic convergence.
To underline the significant difference between linear and superlinear convergence, consider the following examples:
- $e_n = 2^{-n}$ converges linearly to zero, but not superlinearly;
- $e_n = 1/n!$ converges superlinearly to zero, but not quadratically;
- $e_n = 2^{-2^n}$ converges to zero quadratically.
Quadratic convergence is much faster than linear convergence.
Among optimization algorithms, the simpler ones are usually linearly convergent (bracketing algorithms: trisection, golden search, bisection, gradient descent). Algorithms involving higher order information or approximations are generally superlinear (secant method, Newton's method, BFGS or LBFGS in higher dimensions, etc.).
There is a huge difference between linear convergence and superlinear convergence. If a faster algorithm is available, using it is surely worth it!
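As a concrete illustration (my own toy example, not taken from the posts above), compare the bisection method, which converges linearly with ratio $1/2$, with Newton's method, which converges quadratically, when solving $x^2 = 2$:

def bisection(f, a, b, steps):
    for _ in range(steps):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:   # root stays in the left half
            b = m
        else:                  # root stays in the right half
            a = m
    return (a + b) / 2

def newton(f, df, x, steps):
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

f = lambda x: x**2 - 2
df = lambda x: 2 * x
exact = 2 ** 0.5

for steps in [5, 10, 20]:
    eb = abs(bisection(f, 1, 2, steps) - exact)  # error roughly halves each step
    en = abs(newton(f, df, 2.0, steps) - exact)  # error roughly squares each step
    print(f"{steps:2d} steps: bisection error {eb:.2e}, Newton error {en:.2e}")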
Golden search algorithm – efficient 1D gradient free optimization
Bracketing algorithms for minimizing one dimensional unimodal functions have the form:
- Suppose $[a, b]$ is an interval containing the minimizer $x^*$;
- Pick $c < d$ in $[a, b]$ and evaluate the function at these points.
- If $f(c) \leq f(d)$ then choose $[a, d]$ as the new bracketing interval;
- If $f(c) > f(d)$ then choose $[c, b]$ as the new bracketing interval;
- Stop the process when the current interval has a length smaller than a given tolerance or a given maximum number of function evaluations is reached.
The simplest algorithm corresponds to choosing $c$ and $d$ which divide the interval $[a, b]$ into three equal parts. This algorithm can be called the trisection algorithm. Below you can see the intervals and intermediary points $c, d$ for a few iterations of the trisection algorithm.
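The golden search of the title corresponds instead to choosing $c$ and $d$ using the golden ratio: then one of the two interior points of an iteration can be reused at the next one, so each iteration costs a single new function evaluation. A minimal Python sketch (the test function is my own example):

import math

def golden_search(f, a, b, tol=1e-8):
    """Golden-section search for the minimizer of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2           # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc <= fd:                           # minimizer is in [a, d]
            b, d, fd = d, c, fc                # old c becomes the new d
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                  # minimizer is in [c, b]
            a, c, fc = c, d, fd                # old d becomes the new c
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

# Example: minimize a unimodal function on [0, 4]
print(golden_search(lambda x: (x - 2.0) ** 2 + 1, 0, 4))   # ~2.0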
Optimizing a 1D function – trisection algorithm
Optimization problems take the classical form

$$\min_{x \in K} f(x).$$

Not all such problems have explicit solutions, therefore numerical algorithms may help approximate potential solutions.
Numerical algorithms generally produce a sequence which approximates the minimizer. Information regarding the values of the function and of its derivatives is used to generate such an approximation.
The easiest context is one dimensional optimization. The basic intuition regarding optimization algorithms starts by understanding the 1D case. Not all problems are easy to handle for a numerical optimization algorithm. Take a look at the picture below:
An inequality involving complex numbers
Consider $n \geq 2$ and complex numbers $z_1, \ldots, z_n$. Show that the stated inequality holds if and only if the corresponding equality condition is satisfied.
Proposed by Dan Stefan Marinescu in the Romanian Mathematical Gazette
Solution: For someone familiar with optimality conditions in multi-variable calculus this is straightforward. Notice that the function involved is convex (a linear combination of distances in the plane). The inequality is equivalent to saying that the point in question is the global minimum of this function.
For a convex, differentiable function, global minimality is equivalent to verifying the first order optimality conditions. Writing $z = x + iy$, the partial derivatives of the function with respect to $x$ and $y$ can be computed explicitly. If they vanish at the point in question, the conclusion follows. The converse also holds, obviously.
Since this problem was proposed for the 10th grade, let’s use some simpler arguments to arrive at a proof. A quick computation using properties of the modulus, combined with the classical inequality $|w_1 + \cdots + w_n| \leq |w_1| + \cdots + |w_n|$, gives the desired estimate. If the inequality in the statement of the problem holds, the relation above becomes an equality for all $k$. Therefore the points involved belong to the mediatrix (perpendicular bisector) of a fixed segment. Therefore the centroid also belongs to this mediatrix, which implies the required condition. Conversely, the same chain of inequalities allows us to conclude.
Romanian Regional Olympiad 2024 – 10th grade
Problem 1. Find the smallest real number such that the inequality in the statement holds for all admissible values of the variables.
Problem 2. Consider a triangle $ABC$ inscribed in the circle $\mathcal{C}$ of center $O$ and radius $R$. For any $M \in \mathcal{C}$ denote by $s(M)$ the sum in the statement, where $H_1$, $H_2$, $H_3$ are the orthocenters of the triangles determined by $M$ and the pairs of vertices of $ABC$, respectively.
a) Prove that if the triangle $ABC$ is equilateral then $s(M)$ takes the same value for every $M \in \mathcal{C}$.
b) Show that if there exist three distinct points $M_1, M_2, M_3 \in \mathcal{C}$ such that $s(M_1) = s(M_2) = s(M_3)$, then the triangle $ABC$ is equilateral.
Problem 3. Let $a, b, c$ be three non-zero complex numbers with the same modulus for which $a + b + c$ and $abc$ are real numbers. Show that for every positive integer $n$ the number $a^n + b^n + c^n$ is real.
Problem 4. Determine all functions which verify the stated functional equation for all admissible arguments and such that the associated equation has a unique solution.
Hints:
Problem 1: study the monotonicity of the function appearing in the statement. Then observe that the inequality is equivalent to a comparison of its values.
Problem 2: Recall the identity $h = a + b + c$ whenever $H$ is the orthocenter of the triangle $ABC$ with circumcenter $O$ taken as the origin of the complex plane. This can be proved using complex numbers, recalling that $\vec{OH} = 3\vec{OG}$, where $G$ is the center of gravity. Therefore the orthocenters $H_1, H_2, H_3$ have similar expressions in terms of $m, a, b, c$, and summing the corresponding relations gives an explicit formula for $s(M)$.
a) When $ABC$ is equilateral and inscribed in a circle of radius $R$ centered at the origin, we have $a + b + c = 0$. Applying the resulting formula to the equilateral triangle, whose centroid coincides with $O$, shows that $s(M)$ does not depend on $M$.
b) Assume there exist three distinct points $M_1, M_2, M_3$ such that $s(M_1) = s(M_2) = s(M_3)$. The formula above shows that this forces the term involving the center of gravity of $ABC$ to vanish, so $G$ coincides with $O$, the circumcenter of $ABC$; therefore $ABC$ is equilateral.
Problem 3: Denote by $r$ the common value of the modulus: $|a| = |b| = |c| = r$. Then $\bar{a} = r^2/a$, and similarly for $b$ and $c$. Since $\overline{a + b + c} = r^2\left(\frac{1}{a} + \frac{1}{b} + \frac{1}{c}\right) = r^2\,\frac{ab + bc + ca}{abc}$, it follows that $ab + bc + ca$ is real. Then of course $a + b + c$, $ab + bc + ca$ and $abc$ are all real. Finally, we know that $a, b, c$ are roots of

$$z^3 - (a + b + c)z^2 + (ab + bc + ca)z - abc = 0.$$

Since the coefficients of this polynomial are real, denoting $s_n = a^n + b^n + c^n$ we get the recurrence $s_{n+3} = (a+b+c)s_{n+2} - (ab+bc+ca)s_{n+1} + abc\, s_n$, and an inductive argument shows that if $s_n, s_{n+1}, s_{n+2}$ are real then $s_{n+3}$ is real, finishing the proof.
Problem 4: A suitable substitution shows that $f$ is the identity mapping on its image!! Another substitution, combined with the hypothesis that the equation in the statement has a unique solution, pins down the value of $f$ at one particular point. Successive substitutions then propagate this information to all admissible values, determining the function completely. It should be noted that the resulting function obviously verifies the functional equation!
Putnam 2023 A1-A3
A1. For a positive integer $n$, let $f_n(x) = \cos(x)\cos(2x)\cos(3x)\cdots\cos(nx)$. Find the smallest $n$ such that $|f_n''(0)| > 2023$.
Hints: Observe that

$$f_n'(x) = -\sum_{k=1}^{n} k \sin(kx) \prod_{j \neq k} \cos(jx).$$

Differentiate again and observe that, at $0$, all terms still containing a $\sin$ factor vanish, so

$$f_n''(0) = -\sum_{k=1}^{n} k^2 = -\frac{n(n+1)(2n+1)}{6}.$$

It is straightforward now to evaluate this quantity and to answer the question.
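As a sanity check of the hint, one can let the computer find the first $n$ for which $n(n+1)(2n+1)/6$ exceeds 2023:

n = 1
while n * (n + 1) * (2 * n + 1) // 6 <= 2023:
    n += 1
print(n, n * (n + 1) * (2 * n + 1) // 6)   # smallest n with |f_n''(0)| > 2023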
A2. Let $n$ be an even positive integer. Let $p$ be a monic, real polynomial of degree $2n$; that is to say, $p(x) = x^{2n} + a_{2n-1}x^{2n-1} + \cdots + a_1 x + a_0$ for some real coefficients $a_0, \ldots, a_{2n-1}$. Suppose that $p(1/k) = k^2$ for all integers $k$ such that $1 \leq |k| \leq n$. Find all other real numbers $x$ for which $p(1/x) = x^2$.
Hints: Denote $q(x) = x^{2n+2} - x^{2n}p(1/x)$. Then $q$ is a polynomial of degree $2n+2$. Moreover, $q(k) = 0$ for $1 \leq |k| \leq n$, showing that $q$ has the $2n$ roots $\pm 1, \pm 2, \ldots, \pm n$.
It follows that $q(x) = (x^2 + \beta x + \gamma)\prod_{k=1}^{n}(x^2 - k^2)$. Identifying coefficients one finds $\beta = 0$, and the value of $\gamma$ is obtained by looking at the constant term of $q$. Then the answer to the question is obtained by investigating the roots of the remaining quadratic factor $x^2 + \gamma$. I guess a distinction should be made between the cases $n$ even or odd, since it seems that the sign of $\gamma$ depends on that.
A3. Determine the smallest positive real number $r$ such that there exist differentiable functions $f : \mathbb{R} \to \mathbb{R}$ and $g : \mathbb{R} \to \mathbb{R}$ satisfying
- (a) $f(0) > 0$,
- (b) $g(0) = 0$,
- (c) $|f'(x)| \leq |g(x)|$ for all $x$,
- (d) $|g'(x)| \leq |f(x)|$ for all $x$, and
- (e) $f(r) = 0$.
Hints: Of course, an example of such functions is $f(x) = \cos x$ and $g(x) = \sin x$. This suggests that the answer could be $\pi/2$. In any case, this example shows that the smallest $r$ verifies $r \leq \pi/2$.
Assume that $r$ is the smallest zero of $f$, so that $f > 0$ on $[0, r)$. Now let’s try to obtain some differential inequality using the hypotheses (c) and (d). The resulting inequality, together with the initial conditions (a) and (b), can be compared with the equation whose general solution is $C_1 \cos x + C_2 \sin x$. The initial conditions determine the constants, and the comparison shows that the relevant quantity stays non-negative on $[0, r)$. This implies $r \geq \pi/2$. Thus the smallest $r$ such that $f(r) = 0$ is indeed $\pi/2$.
Algebraic proof of the Finsler-Hadwiger inequality
Weitzenbock’s inequality states that if $a, b, c$ are the side lengths of a triangle with area $S$ then

$$a^2 + b^2 + c^2 \geq 4\sqrt{3}\, S.$$

A strengthening of this result due to Finsler and Hadwiger states

$$a^2 + b^2 + c^2 \geq 4\sqrt{3}\, S + (a - b)^2 + (b - c)^2 + (c - a)^2.$$
A variety of proofs rely on various trigonometric or geometric arguments. Below you can find a purely algebraic argument based on the classical characterization: $a, b, c$ are side lengths of a triangle if and only if there exist $x, y, z \geq 0$ such that $a = y + z$, $b = z + x$, $c = x + y$. If $x, y, z$ are strictly positive then the triangle will be non-degenerate.
Replacing $a, b, c$ with the above formulas transforms an inequality in a triangle into a general inequality where only the positivity of the variables is involved. With this substitution, using the classical notation $\sum$ for cyclic sums in $x, y, z$, we get

$$a^2 + b^2 + c^2 = 2\sum x^2 + 2\sum xy$$

and

$$(a - b)^2 + (b - c)^2 + (c - a)^2 = \sum (x - y)^2 = 2\sum x^2 - 2\sum xy.$$

On the other hand, the area given by Heron’s formula is

$$S = \sqrt{(x + y + z)xyz}.$$

Thus, Weitzenbock’s inequality is equivalent to

$$\sum x^2 + \sum xy \geq 2\sqrt{3(x + y + z)xyz},$$

and the Finsler-Hadwiger inequality is equivalent to

$$\sum xy \geq \sqrt{3(x + y + z)xyz}.$$

Note that the first inequality follows from the second one, since $\sum x^2 \geq \sum xy$. The second inequality follows at once, since squaring both sides gives

$$\Big(\sum xy\Big)^2 \geq 3xyz(x + y + z),$$

which is a well known consequence of

$$(u + v + w)^2 \geq 3(uv + vw + wu), \qquad \text{applied to } u = xy,\ v = yz,\ w = zx.$$

Equality holds, of course, if and only if $x = y = z$. If the triangle is non-degenerate, then it must be equilateral. Thus, the Weitzenbock and Finsler-Hadwiger inequalities follow at once from classical inequalities, once the side lengths of a triangle are replaced with unconstrained positive variables.
A proof of the Hadwiger Finsler inequality
The Hadwiger-Finsler inequality states that if $a, b, c$ are the side lengths of a triangle with area $S$ then

$$a^2 + b^2 + c^2 \geq 4\sqrt{3}\, S + (a - b)^2 + (b - c)^2 + (c - a)^2.$$

This was discussed previously on the blog. This post shows a translation of the original paper by Hadwiger and Finsler and this post shows a surprising geometrical proof.
Various proofs of the inequality are known. However, since an equality always beats an inequality, let us prove the identity

$$a^2 + b^2 + c^2 = (a - b)^2 + (b - c)^2 + (c - a)^2 + 4S\left(\tan\frac{A}{2} + \tan\frac{B}{2} + \tan\frac{C}{2}\right).$$

It is immediate to see that Jensen’s inequality applied to the tangent function, which is convex on $[0, \pi/2)$, is enough to deduce the Hadwiger-Finsler inequality from the above identity: $\tan\frac{A}{2} + \tan\frac{B}{2} + \tan\frac{C}{2} \geq 3\tan\frac{\pi}{6} = \sqrt{3}$. To prove the identity, simply compute

$$\tan\frac{A}{2} = \frac{1 - \cos A}{\sin A}.$$

Replacing the usual formulas $\cos A = \frac{b^2 + c^2 - a^2}{2bc}$ and $\sin A = \frac{2S}{bc}$ gives

$$4S\tan\frac{A}{2} = a^2 - (b - c)^2.$$

Summing these identities for the three angles gives precisely the desired result. The same proof can be found, for example, here.
Is the Earth flat?
Consider the following experiment: the pairwise distances between four cities on Earth are given. Can you answer the following questions?
1) Can these distances be realized on a flat Earth?
2) Assuming the Earth is spherical and distances are measured along geodesics, can you determine its radius?
The test case was inspired by the following note. The initial test case involves the cities Seattle, Boston, Los Angeles and Miami. A second test case is provided below.
You can use the Python code to create new test cases of your own.
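To give the idea, here is a minimal sketch of both tests (my own illustration, not the code mentioned above; the distances below were computed from four points on a unit sphere rather than from real cities):

import numpy as np
from scipy.optimize import minimize_scalar

# Pairwise geodesic distances d[i][j]; these values come from four
# points on a sphere of radius 1, so the "true" answer below is R = 1.
# Replace them with real city data, in consistent units.
d = np.array([[0.0,    1.5708, 1.5708, 0.9553],
              [1.5708, 0.0,    1.5708, 0.9553],
              [1.5708, 1.5708, 0.0,    0.9553],
              [0.9553, 0.9553, 0.9553, 0.0   ]])

# 1) Flat-Earth test: for four points in the plane, the Cayley-Menger
# determinant of the squared distances must vanish (necessary condition).
cm = np.ones((5, 5))
cm[0, 0] = 0.0
cm[1:, 1:] = d ** 2
print("Cayley-Menger determinant:", np.linalg.det(cm))  # nonzero here: not flat

# 2) Spherical test: on a sphere of radius R the unit position vectors
# have Gram matrix cos(d / R); four vectors in 3D are linearly dependent,
# so this determinant must vanish at the correct R.
def gram_det(R):
    return np.linalg.det(np.cos(d / R))

res = minimize_scalar(lambda R: abs(gram_det(R)), bounds=(0.5, 5.0),
                      method="bounded")   # rough search for the vanishing R
print("Estimated radius:", res.x)          # close to 1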
Area of a spherical triangle
A spherical triangle is obtained by joining three points $A$, $B$, $C$ on the sphere by geodesics. Assume the sphere has unit radius and the three points are contained in a half sphere. Then the area of the spherical triangle $ABC$ is given by

$$\text{Area}(ABC) = \alpha + \beta + \gamma - \pi,$$

where $\alpha$, $\beta$, $\gamma$ are the angles of the spherical triangle $ABC$.
Proof: Draw the great circles containing the sides $AB$ and $AC$; they meet again at the point $A'$, the diametrically opposite point to $A$. The resulting (double) slice of the sphere covers a fraction $\alpha/\pi$ of the area of the sphere. Since the area of the sphere equals $4\pi$, it follows that the slice has area $4\alpha$. The analogous slices of the sphere associated to the vertices $B$ and $C$ have areas $4\beta$ and $4\gamma$, respectively. Let us observe that the three slices cover the whole sphere in the following way: the triangles $ABC$ and $A'B'C'$ (where $B'$ and $C'$ are the antipodes of $B$ and $C$) are covered three times, and every other point is covered once. Therefore, the sum $4\alpha + 4\beta + 4\gamma$ equals the area of the sphere plus four times the area of the triangle $ABC$. The result follows dividing by four.
The image was taken from here.
Area of a spherical rectangle
A spherical rectangle is a spherical geodesic quadrilateral whose vertices form a Euclidean rectangle. In other words, the opposite edges are equal and all angles are equal. Suppose the side lengths $a$ and $b$ of the pairs of opposite sides are known. Show that the area of the rectangle admits an explicit formula in terms of $a$ and $b$.
Maximal area polygons contained in disk/ellipse
Consider the unit disk $D$ and $n \geq 3$. Prove that the $n$-gon of maximal area contained in $D$ is the inscribed regular $n$-gon.
Deduce that the maximal area $n$-gon inscribed in an ellipse is not unique.
This was inspired by the following MathOverflow question.
Solution: Obviously, a maximal area polygon will be convex; otherwise, take the convex hull.
First, observe that an $n$-gon contained in $D$ is not maximal for the area if one of its vertices does not belong to the circle $\partial D$. Indeed, it is enough to pick a vertex $P$ of the polygon which does not belong to $\partial D$ and move it in the direction orthogonal to the diagonal joining its two neighbors, towards $\partial D$. This movement increases the area.
Moreover, any maximal area polygon must contain the center of $D$. If not, then such a polygon would be strictly contained in a half disk. A translation and a dilation could then further increase the area, contradicting optimality. Thus, the maximal area $n$-gon is an inscribed $n$-gon.
Such a polygon is completely characterized (up to a permutation of its sides) by the lengths of its sides or, equivalently, by the angles at the center of $D$ made by the sides. Consider $\theta_1, \ldots, \theta_n$ the angles at the center for an inscribed $n$-gon. Then its area is simply

$$\frac{1}{2}\sum_{i=1}^{n} \sin\theta_i.$$

Since $\theta_1 + \cdots + \theta_n = 2\pi$ and $\sin$ is concave on $[0, \pi]$, Jensen’s inequality shows that the maximal area is attained for the regular $n$-gon.
Any ellipse is the image of the disk through an affine transformation. Since affine transformations preserve area ratios, the image of any inscribed regular $n$-gon in $D$ through the affine mapping will produce an $n$-gon of maximal area contained in the ellipse. This provides an infinite family of non-equal $n$-gons which maximize the area.