Mathematical Secrets of Ancient Tablet Unlocked After Nearly a Century of Study (2017)
The mathematical secrets of the ancient Plimpton 322 tablet have been unlocked after nearly a century of study, sparking discussion about the tablet's significance and the mathematical theories surrounding it.
Snapshot generated from the HN discussion
Story posted Aug 24, 2025 at 5:31 PM EDT; first comment 16h later; peak activity of 48 comments in the 12-24h window after posting; latest activity Aug 29, 2025 at 1:56 AM EDT. Based on 62 loaded comments.
cf. https://en.wikipedia.org/wiki/Divine_Proportions:_Rational_T...
E.g. when you calculate the area of a plot of land, do you take into account the curvature of the Earth? You have to make a bunch of compromises in the first place even to talk about what the area of a plot of land means.
Math is a bunch of useful systems that we humans have devised. We tend to gravitate towards the ones that help us describe and predict things in the real world.
But there is plenty of math which doesn't do either. It's just as real as the math that does.
The short answer is that they deal with such things symbolically.
foo = 3.14159265...

where after the 5 is some continuing sequence of decimals. The series of functions is literally just:

foo(0) = 3
foo(1) = 3.1
foo(2) = 3.14
...
And to be clear, it's not just an algorithm that estimates pi; it's literally an infinitely long list of return values giving more and more digits of whatever the number is. That is actually how he defines pi.
https://youtu.be/lcIbCZR0HbU?si=3YxcHfPlCFrlr5h3&t=2080
pi _happens_ to be computable, and there are more efficient functions that will produce those numbers, but you could do the same thing with an incomputable number, you just need a definition for the number which is infinitely long.
To be clear, I don't think any of this is a good idea, just pointing out that if he's going to allow that kind of definition of pi (ie, admit a definition that is just an infinite list of decimal representations), you can just do the same thing with any real number you like. He of course will say that he's _not_ allowing any _infinite list_, only an arbitrary long one.
All the numbers you get this way are going to be rational, and if you require them to be finite, you can't even identify them with any irrational numbers. At least with the computable numbers you get an infinite set of irrational numbers along with the rationals, while still never touching the vast majority of all numbers (the remaining, incomputable irrationals).
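The "list of truncations" idea above is easy to make concrete. This is a toy sketch, not Wildberger's actual formalism: the digit string is hardcoded (first 30 decimals of pi) and stands in for the infinitely long list the comment describes.

```python
# Toy version of "pi as a list of truncations": foo(n) returns pi
# truncated to n decimal places.  Every value returned is rational.
PI_DIGITS = "141592653589793238462643383279"  # hardcoded, stands in for the infinite list

def foo(n: int) -> str:
    """Return pi truncated to n decimal places, as a string."""
    if n == 0:
        return "3"
    return "3." + PI_DIGITS[:n]

print(foo(0))  # 3
print(foo(1))  # 3.1
print(foo(2))  # 3.14
```

Note that every entry in the sequence is a rational number, which is exactly the point the surrounding comments make: the sequence only ever approaches an irrational value.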
An ultrafinitist is still allowed to call that 'i'.
I don't know how you'd do electrical engineering with the rational complex field, because electrical engineering and physics in general involves a lot of irrational quantities and calculus, and the standard foundations of these concepts use real numbers.
It's really up to finitists to show that there are problems with these methods and that they have a better way of doing things, because so far the standard way seems to work very well.
After all, the Cayley-Dickson construction is not an infinite affair.
But that's because the sine of 60 degrees is given by modern tables as sqrt(3) / 2, which Wildberger doesn't "believe in"; he prefers to state that the square of the sine is exactly 3 / 4, and that this is "more accurate".
The actual paper is at [1]:
[1] https://doi.org/10.1016/j.hm.2017.08.001
The news from this paper (thanks for the link!) is that evidently the Babylonians preferred that, too. Surely Pythagoras would have.
But how do you actually do anything useful with this ratio ¾? Like, calculating the height of a ziggurat of a given size whose sides are 60° above the horizontal? Well, that one in particular is pretty obvious: it's just the Pythagorean theorem, which lets you do the math precisely, without any error, and then at the end you can approximate a linear result by looking up the square root of the "quadrance" in a table of square roots, which the Babylonians are already known for tabulating.
For more elaborate problems, well, Wildberger wrote the book on that. Presumably the Babylonians had books on it too.
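The quadrance-first workflow described above can be sketched in a few lines. The spread 3/4 for 60° comes from the comments; the variable names and the round-down at the end are illustrative assumptions, not anything from the tablet or from Wildberger's book.

```python
from fractions import Fraction
from math import isqrt

# Keep everything as exact rational "quadrances" (squared lengths),
# and only approximate a square root at the very end -- like looking
# it up in a Babylonian table of square roots.
spread = Fraction(3, 4)   # sin^2(60 degrees), an exact rational
run = 200                 # half the 400-cubit base

# spread / (1 - spread) = (rise/run)^2, so the height's quadrance is:
height_quadrance = Fraction(run**2) * spread / (1 - spread)  # exactly 120000

# Final linear answer, rounded down to whole cubits:
height = isqrt(int(height_quadrance))  # 346 cubits
```

All the intermediate arithmetic is exact; the only approximation is the single square root at the end.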
Some tables do indeed have that value, and it is a very useful value for calculation, one that can be symbolically manipulated to get you an exact number (albeit one likely expressed in radicals) for your work. When I used to teach algebra, it was a struggle to get students to let go of the decimal approximations that came out of their calculators and embrace expressions that weren't simple decimals but were exact representations of the numbers at hand. (Then there are really fun things like the fact that, e.g., √2 + √3 can also be written as √(5+2√6) (assuming I didn't make an arithmetic error there)).
If you want to know how many courses of bricks your ziggurat is going to need, given that the base is 400 cubits across and there are 10 courses of bricks per cubit, you're going to have to round 2000√3/2 to an integer. You can do that with a table of squares, or you can use a decimal (or sexagesimal) fraction approximation, and I guess you're right that it isn't clear that one is necessarily better than the other.
Incidentally, the fact that we write things like 59°59'30" comes about because the Babylonians at least weren't using Wildberger's "spreads" all the time.
Personally I don't believe in either value. I prefer to state that the sine of 60 degrees is 2.7773. I believe that is more accurate.
Re: rationals, I mean there's an infinite number of rationals available arbitrarily near any other rational, that has to mean they are good enough for all practical purposes, right?
For practical purposes, they’re bad. Denominators tend to explode when you do a few operations (for example 11/123 + 3/17 = 556/2091), and it’s not easy to spot whether you can simplify results. 12/123 + 3/17 = 191/697, for example.
You can counteract this by 'rounding' to fractions with denominators below a given limit (say 1000), but then you are likely better off reckoning with a fixed denominator that you then do not have to store with each number, allowing you to increase the maximal denominator.
For example (https://en.wikipedia.org/wiki/Farey_sequence), there are 965 rational fractions in [0,1] with denominator at most 56 (https://oeis.org/A005728/list), so storing one requires just under 10 bits. If instead you use the fractions n/964 for 0 ≤ n ≤ 964 as your representable numbers, you get the same storage cost and arithmetic becomes easier.
To defend Wildberger a bit (because I am an ultrafinitist) I'd like to state first that Wildberger has poor personal PR ability.
Now, as programmers here, you are all natural ultrafinitists as you work with finite quantities (computer systems) and use numerical methods to accurately approximate real numbers.
An ultrafinitist says that that's really all there is to it. The extra axiomatic fluff about infinities existing is logically unnecessary to do all the heavy lifting of the math that we are familiar with. Wildberger's point (and the point of all ultrafinitist claims) is that it's an intellectual and pedagogical disservice to teach and speak of, e.g., Real Numbers as if they actually involve infinite quantities that you can never fully specify. We are always going to have to confront the numerical methods part, so it's better to make teaching about numbers methodologically aligned with how we actually measure and use them.
I have personally been working on building various finite equivalents to familiar math. I recommend anyone to read Radically Elementary Probability Theory by Nelson to get a better sense of how to do finite math, at least at the theoretical level. Once again, on a practical level to do with directly computing quantities, we've only ever done finite math.
We use numbers in compact decimal approximations for convenience. Repeated rational series are cumbersome without an electronic computer and useless for everyday life.
The point is to not confuse the notational convenience with the underlying concept that makes such numbers comprehensible in the first place.
I can tell you that it is the output of a function, not a distinct entity that exists on its own independently of the computation.
The whole point is that as a theory for the foundations of mathematics, you do not need to assume numbers with infinitely long decimal expansions in order to do math.
Could you elaborate? What is the output of that function, if not an entity in its own right? Having studied math with a philosophy minor a long time ago, I am curious.
On the other hand, using the axioms of ZFC, one can say any real number exists without having a function to compute it, or a proof to construct it.
For an ultrafinitist, or any finitist for that matter, we say that you only need the minimum of ingredients to produce math -- you do not need to assume anything over and above that, as it's not even helpful in the verification process.
So assuming only finitely many symbols and finitely many numbers, I can produce what we call sqrt(2). We only ever verify it numerically and finitely anyways. We can never reach decimals at infinite ordinals.
So it makes no sense to say, "Hey I assume transfinitely many entities, and my assumption says these numbers exist even though the proofs and decimal expansions are only ever finite."
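One way to read the finitist stance above: "sqrt(2)" names a procedure that yields as many digits as you ask for, and each call involves only finite integer arithmetic. A minimal sketch (the function name is made up for illustration):

```python
from math import isqrt

def sqrt2_digits(n: int) -> str:
    """First n decimal digits of sqrt(2) after the point, via integer square root."""
    # isqrt(2 * 10^(2n)) = floor(sqrt(2) * 10^n): pure integer arithmetic.
    s = isqrt(2 * 10 ** (2 * n))
    text = str(s)
    return text[0] + "." + text[1:]

print(sqrt2_digits(10))  # 1.4142135623
```

Each output is a finite, rational approximation, and verifying it (squaring and comparing integers) is likewise a finite computation.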
We think we study the real numbers but it seems we can't even have a system to express them. And indeed, that's not even a limitation of algebraic systems: any notation over a finite alphabet can only express a countable set of distinct objects which amounts to nothing when real numbers are concerned.
I'm not a finitist, but I do find it curious that we approach mathematics by inventing a more-than-infinite set of objects that's impossible to fully grasp. I don't see it as a bad thing though, I also love Complex Analysis and many people (and some mathematicians even) denounce them for being imaginary. My impression is that transcendental numbers are as imaginary as are imaginary numbers, it's just we don't notice. And they're obviously still useful as are the complex numbers.
Definable numbers like 2, pi, or Chaitin's constant [0] are countable. The reals are only uncountable because of numbers we can't even talk about.
[0] https://en.wikipedia.org/wiki/Chaitin%27s_constant
The error of Zeno of Elea was that he did not understand the symmetry between zero and infinity (or he pretended to not understand it).
Because of this error, Zeno considered that infinity is stronger than zero, so he believed or pretended to believe that zero times infinity is infinity, instead of recognizing that zero times infinity can be any number and also zero or infinity.
For now, there exists no evidence whatsoever that the physical space and time are not infinitely divisible.
Even if in the future it would be discovered that space and time have a discrete structure, the mathematical model of an infinitely divisible space and time would remain useful as an approximation, because it certainly is simpler than whatever mathematical model would be needed for a discrete space and time.
What is your evidence for it? You want to make a claim about something being infinite, it is up to you to provide evidence.
> recognizing that zero times infinity can be any number and also zero or infinity.
This statement makes no sense in formal mathematics. Multiplication is a function, which means for each set of inputs there is one output. I imagine you are trying to say something about limits here, but the language you are using is very imprecise.
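Read as a statement about limits, the "zero times infinity can be anything" claim is the standard indeterminate-form fact: pick f(x) → 0 and g(x) → ∞ whose product is the constant you want. A toy sketch (names are illustrative; powers of two are used so the float arithmetic is exact):

```python
def product_at(x: float, c: float) -> float:
    """f(x) * g(x) where f -> 0 and g -> infinity, product identically c."""
    f = c / x   # tends to 0 as x grows
    g = x       # tends to infinity
    return f * g  # equals c for every x, hence c in the limit

print(product_at(2.0**30, 2.0))  # 2.0
```

Choosing f = c/x² instead gives a product tending to 0, and f = c/√x gives one tending to infinity, which is the sense in which 0·∞ "can be any number and also zero or infinity".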
As long as someone isn't a crank (e.g. they aren't creating false proofs) I enjoy the occasional outsider.
The standard way of setting up calculus involves continuous magnitudes, hence irrational quantities, and obviously that's used all over physics and there doesn't seem to be a problem with it.
I think to make a compelling case for a finitist foundation for maths you would at the least have to construct all of the physically useful maths on a finitist basis.
Even if you did that, you should show somewhere this finitist foundation disagrees with the results obtained by the standard foundation, otherwise there's no reason to think the standard foundation is in error.
Well, these are probably easy to find even now? E.g. the Banach-Tarski paradox is unlikely to be provable in finitist math, which is somewhat of an improvement.
At more advanced levels the theories are based on differential geometry and operators on Hilbert space. I'm not sure if fully worked out finitist versions of these even exist. Where finitist versions do exist, they're often technically more difficult to use than the standard versions, which is the opposite of an improvement in my view.
Whether it's undesirable for your mathematical foundation to prove the Banach-Tarski paradox is debatable. It's counter-intuitive, but doesn't lead to contradictions, as far as is known. It doesn't apply to physics because the construction uses non-measurable sets.
The problem that bothers some mathematicians is that despite working well math still lacks a solid foundation. Furthermore it's basically proven that these foundations can't even exist, or at least for the mainstream version of math. This is where non-mainstream versions pop up. The denial of uncountable sets does help you resolve some of the paradoxes. Not all unfortunately, even the countable sets already lead to things like incompleteness theorems. Well, one can dream.
What are you referring to? The current working foundation is ZFC, but there are equivalent type-theoretic foundations, like what Lean and other proof-checking software use. I guess you know that, which is why I don't know what you mean by saying this.
ZFC is a working foundation of math but it's unknown whether it's consistent or arithmetically sound and important statements like CH are independent from it. It's a "working foundation" but not a "true foundation" which alas cannot exist.
As mentioned above I'm personally not a finitist and think that math without infinite and uncountable sets is intellectually poorer. I don't mind however developing further a finitist subset of math and see what's provable (and describable) in it, much like there's value in proving theorems in ZF instead of ZFC whenever possible.
This is so true but it can be good if you're flexible enough to try it either way.
With massive tables of physical properties officially produced by pages of 32-bit Fortran, it really did look at first like floating point was ideal. Because it worked great.
The algorithm had been stored as a direct mathematical equation, plain as day, exactly as deduced with constants and operations in 32-bit double-precision floating point.
But when the only user-owned computers were still just 8-bit machines, there was no way to reproduce the exact results across the entire table to the same number of significant figures, using floating point.
Since it's a table it is of course not infinite, and a matrix to boot. A matrix of real numbers across an entire working spectrum.
The algorithm takes a set of input values, calculates results as defined, and rounds it off repeatably in the subsequent logic before output, so everyone can get agreement. The software OTOH takes a range of input values and outputs a matrix. And/or retains a matrix in "imaginary" spreadsheet form for later use :)
Every single value in the matrix is a floating-point representation of a real number, but they are rounded off as precisely as possible to the "exact" degree of usefulness, making them functionally all finite values in the end. This took a lot of work from top mathematicians, computer scientists, and engineers. And as designed, the matrix then carries the algorithm on its own without reference to the fundamental equation.
The solution turned out to involve working backward from the matrix, reiteratively, until an alternate algorithm was found using only integers for values and operations, up until the final rounding and fixed-point representation at the end. A dramatically unrecognizable algorithm, but it worked and took only 0.5 kilobytes of 8-bit BASIC code, a fraction of the original Fortran.
This time the feature that showed up without having to make extra effort was the property of being more precise based directly on increased bitness of the computer, without need for floating-point at all. Of course the Fortran code accomplished this too by the wise use of floating-point but it took a lot bigger iron to do so. And wasn't going to be battery powered any time soon way back then.
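The general technique in the story above (replacing a floating-point formula with integer-only arithmetic plus a single fixed-point rounding at the end) can be sketched briefly. The formula here (a linear conversion) is made up for illustration; the actual algorithm isn't given in the comment.

```python
def interp_float(x: float) -> float:
    """Reference version: float math, rounded for output."""
    return round(1.8 * x + 32.0, 1)

def interp_fixed(x_tenths: int) -> int:
    """Integer-only version, working in tenths throughout.

    1.8 becomes 18/10 and 32.0 becomes 320 tenths; the +5 rounds the
    final floor-division to the nearest tenth.  Exact and bit-for-bit
    reproducible on any machine, no floating point needed.
    """
    return (18 * x_tenths + 3200 + 5) // 10

print(interp_float(36.6))
print(interp_fixed(366))  # 979 (tenths, i.e. 97.9)
```

The integer version also shows the property the comment ends on: its precision scales directly with the integer width of the machine, with no floating-point hardware required.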
>somewhere this finitist foundation disagrees with the results obtained by the standard foundation,
>there's no reason to think the standard foundation is in error.
This is "exactly" how it was. There were disagreements all over the place but they were in further decimal places not representable by the table. The standard was an international standard having carefully agreed-upon accuracy & precision, as defined by the Fortran which really worked and was then written in stone, with any nonmatched output being a notable failure.
Topology, i.e. the analysis of connectivity, is built upon the notion of continuity and infinite divisibility, which seems to be difficult to handle in an ultrafinitist way.
Topology is an exceedingly important branch of mathematics, not only theoretically (I consider some of the results of topology as very beautiful) but also practically, as a great part of the engineering design work is for solving problems where only the topology matters, not the geometry, e.g. in electronic schematics design work.
So I would consider any framework for mathematics that does not handle well topology as incomplete and unusable.
Ultrafinitist theories may be interesting to study as an alternative, but the reality is that infinitesimal calculus in its modern rigorous form does not need any alternatives, because it works well enough and until now I have not seen alternatives that are simpler, but only alternatives that are more complicated, without benefits sufficient to justify that.
I also wonder what ultrafinitists do about projective geometry and inversive geometry.
I consider projective geometry one of the most beautiful parts of mathematics. When I encountered it for the first time, when very young, it was quite a revelation, due to the unification it allows for various concepts that are distinct in classic geometry. Projective geometry is based on completing the affine spaces with various kinds of subspaces located at an "infinite" distance.
Without handling infinities, and without visualizing what various curves located at infinity look like (as parts of surfaces that can be seen at finite distances), projective geometry would become very hard to understand, even if one were to duplicate its algorithms while avoiding the names related to "infinity".
Similarly for inversive geometry, where the affine spaces are completed with points located at "infinity".
Such geometries are beautiful and very useful, so I would not consider as usable a variant of mathematics where they are not included.
https://www.cnbc.com/2019/04/10/toddler-locks-ipad-for-48-ye...
https://scispace.com/pdf/words-and-pictures-new-light-on-pli...
Robson's argument is that it isn't a trig table in the modern sense and was probably constructed as a teaching aid for the completing-the-square problems that show up in Babylonian mathematics. Other examples of teaching-related tablets are known to exist.
On a quick scan, it looks like the Wildberger paper cites Robson's and accepts the relation to the completing-the-square problem, but argues that the tablet's numbers are too complex to have been practical for teaching.
A little off-topic, but as a non-native English speaker, this sentence in the article made me look up whether there's scientific consensus that Noah's Ark has been found; I'd just never heard about it. Turns out there isn't, and the end of the sentence actually refers to the tablet. Still a fun rabbit hole to go down.