Key Takeaways
I tried using if for this: https://adventofcode.com/2023/day/12 but computer said no
Despite the fact that this was actively debated for decades, modern math courses seldom acknowledge the fact that they are making unprovable intellectual leaps along the way.
That's not at all true. The part dealing with different infinities usually comes after the (usually fairly early) part dealing with proofs, and with the fact that all mathematics is built on "unprovable intellectual leaps" which are encoded into axioms: everything in math which is provable is only provable relative to a particular chosen set of axioms.
It may be true that math beyond that basic level doesn’t make a point of going back and explicitly reviewing that point, but it is just kind of implicit in everything later.
Uncountable need not mean more. It can mean that there are things that you can't figure out whether to count, because they are undecidable.
> I guarantee that a naive presentation doesn't actually include the axioms
But you said "modern math courses". Are you now talking about a casual conversation? I mean the OP's story is that his wife just liked listening to him talk about his passions.

> Uncountable need not mean more.
Sure. But that doesn't mean that there aren't differing categories. However you slice it, we can operate on these things in different ways. Real or not, the logic isn't consistent between these things, but they do fall out into differing categories. If you're trying to find mistakes in the logic, does it not make sense to push it at its bounds?

Look at the Banach-Tarski Paradox. Sure, normal people hear about it and go "oh wow, cool." But when it was presented in my math course it was used as a discussion of why we might want to question the Axiom of Choice, but also of how removing it creates new concerns. Really the "paradox" was explored to push the bounds of the axiom of choice in the first place. They asked "can this axiom be abused?" And the answer is yes. Now the question is "does this matter, since infinity is non-physical? Or does it matter despite infinity being non-physical?"
You seem to think mathematicians, physicists, and scientists in general believe infinities are physical. As one of those people, I'm not sure why you think that. We don't. I mean math is a language. A language used because it is pedantic and precise. Much the same way we use programming languages. I'm not so sure why you're upset that people are trying to push the bounds of the language and find out what works and doesn't work. Or are you upset that non-professionals misunderstand the nuances of a field? Well... that's a whole other conversation, isn't it...
When I say "modern math courses", I mean like the standard courses that most future mathematicians take on their way to various degrees. For all that we mumble ZFC, it is darned easy to get a PhD in mathematics without actually learning the axioms of ZFC. And without learning anything about the historical debates in the foundations of mathematics.
If instead you're talking about experts, then I learned about what you're talking about in my Linear 2 course in a physics undergrad, and have seen the topic appear many times since, even outside my own reading of set theory. The axiom of choice seems to have even entered more mainstream nerd knowledge. It's very hard to learn why AoC is a problem without learning about how infinities can be abused. But honestly I don't know any person, even an amateur mathematician, that thinks infinities are physical.
The arguments between the different schools of philosophy in math are something that most professional mathematicians are unaware of. Those who know about them generally learned them while studying either the history of math or the philosophy of math. I personally only became aware of them while reading https://www.amazon.com/Mathematical-Experience-Phillip-J-Dav.... I didn't learn more about the topic until I was in grad school, and that was from personal conversations. It was never covered in any course that I took, either in undergraduate or graduate school.
Now I'm curious. Was there anything that I said that should have been said more clearly? Or was it hard to understand because you were trying to fit what I said into what you know about an entirely unrelated debate about the axiom of choice?
> The fact that you think I'm talking about the axiom of choice, demonstrates that you didn't understand what I'm talking about.
Dude... just a minute ago you were complaining about ZFC... Sure, I brought up AoC, but your time to protest was then. The reason I brought up AoC is because it is a common way to learn about the abuse of infinity and where axioms need to be discussed. Both things you brought up. I think you are reading further into this than I intended.
> Now I'm curious. Was there anything that I said that should have been said more clearly?
Is this a joke? When someone says
>> Honestly it's difficult to understand exactly what you're arguing.
That's your chance to explain. It is someone explicitly saying: I'm trying to understand, but you are not communicating efficiently. This is even more frustrating as you keep pointing out that this is not common knowledge. So why are you also communicating like it is?! If it is something so few know about, then be fucking clear. Don't make anyone guess. Don't just link a book; use your own words, and link a book if you want to suggest further reading, but not as "this is the entire concept I'm talking about". Otherwise we just have to guess, and you getting pissed off that we guess wrong is downright your own fault.
So stop shooting yourself in the foot and blaming others. If people aren't understanding you, try assuming they can't read your mind and don't have the exact same knowledge you do. Talk about fundamental principles...
That point being that what we mean by "exists" is fundamentally a philosophical question. And our conclusions about what mathematical things exist will depend on how we answer that question. And very specifically, there are well-studied mathematical philosophies in which uncountable sets do not have larger cardinalities than countable ones.
If none of those explanations wind up being clear for you, then I'm going to need feedback from you to have a chance to explain this to you. Because you haven't told me enough for me to make any reasonable guess what the sticking point is between you and understanding. And without that, I have no chance of guessing what would clarify this for you.
To be fair, constructivists tend to prefer talk about different "universes" as opposed to different "sizes" of sets, but that's all it is: a mere difference in terminology! You can show equiconsistency statements across these different points of view.
So the care that intuitionists take does not lead to any improvement in consistency.
However the two approaches lead to very different notions of what it means for something to mathematically exist. Despite the formal correspondences, they lead to very different concepts of mathematics.
I'm firmly of the belief that constructivism leads to concepts of existence that better fit the lay public than formalism does.
For sure there are valid arguments on whether or not to use certain axioms which allow or disallow some set theoretical constructions, but given ZFC, is there anything that follows that is unprovable?
In particular, you have made sufficient assumptions to prove that almost all real numbers that exist can never be specified in any possible finite description. In what sense do they exist? You also wind up with weirder things. Such as well-specified finite problems that provably have a polynomial time algorithm to solve...but for which it is impossible to find or verify that algorithm, or put an upper bound on the constants in the algorithm. In what sense does that algorithm exist, and is finite?
Does that sound impossible? An example of an open problem whose algorithm may have those characteristics is an algorithm to decide which graphs can be drawn on a torus without any self-crossings.
If our notion of "exists" is "constructible", all possible mathematical things can fit inside of a countable universe. No set can have more than that.
I'm saying that to go from the uncountability of the reals to the idea that this implies that the infinity of the reals is larger, requires making some important philosophical assumptions. Constructivism demonstrates that uncountable need not mean more.
On the algorithm example, you could have asked what I was referring to.
The result that I was referencing follows from the https://en.wikipedia.org/wiki/Robertson%E2%80%93Seymour_theo.... The theorem says that any class of finite graphs which is closed under graph minors, must be completely characterized by a finite set of forbidden minors. Given that set of forbidden minors, we can construct a polynomial time test for membership in the class - just test each forbidden minor in turn.
The problem is that the theorem is nonconstructive. While it classically proves that the set exists, it provides no way to find it. Worse yet, it can be proven that in general there is no way to find or verify the minimal solution. Or even to provide an upper bound on the number of forbidden minors that will be required.
This need not hold in special cases. For example planar graphs are characterized by 2 forbidden minors.
For the toroidal graphs, as https://en.wikipedia.org/wiki/Toroidal_graph will verify, the list of known forbidden minors currently has 17,523 graphs. We have no idea how many more there will be. Nor do we have any reason to believe that it is possible to verify the complete list in ZFC. Therefore the polynomial time algorithm that Robertson-Seymour says must exist does not seem to exist in any meaningful and useful way. Such as, for example, being findable or provably correct from ZFC.
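The "test each forbidden minor in turn" membership test is easy to sketch in code. This is my own illustration, not anything from the thread: a brute-force minor check (contract edges, delete edges and vertices, compare up to isomorphism) that is only feasible for tiny graphs, applied to the two-element forbidden-minor set for planarity, K5 and K3,3:

```python
from itertools import combinations, permutations

# A graph is (frozenset_of_vertices, frozenset_of_2-element_frozenset_edges).

def is_iso(g, h):
    """Brute-force isomorphism check; fine for graphs with a handful of vertices."""
    gv, ge = sorted(g[0]), g[1]
    hv, he = sorted(h[0]), h[1]
    if len(gv) != len(hv) or len(ge) != len(he):
        return False
    for perm in permutations(hv):
        m = dict(zip(gv, perm))
        if {frozenset((m[a], m[b])) for a, b in map(tuple, ge)} == he:
            return True
    return False

def contract(g, e):
    """Contract edge e = {a, b}: merge b into a, dropping loops and duplicates."""
    a, b = tuple(e)
    edges = set()
    for ed in g[1]:
        ed2 = frozenset(a if v == b else v for v in ed)
        if len(ed2) == 2:
            edges.add(ed2)
    return (g[0] - {b}, frozenset(edges))

def has_minor(g, h, seen=None):
    """True iff h is a minor of g. Exhaustive search; tiny graphs only."""
    seen = set() if seen is None else seen
    if (g[0], g[1]) in seen:
        return False
    seen.add((g[0], g[1]))
    if len(g[0]) < len(h[0]) or len(g[1]) < len(h[1]):
        return False
    if is_iso(g, h):
        return True
    for e in g[1]:
        if has_minor(contract(g, e), h, seen):
            return True
        if has_minor((g[0], g[1] - {e}), h, seen):  # delete edge
            return True
    for v in g[0]:
        g2 = (g[0] - {v}, frozenset(ed for ed in g[1] if v not in ed))
        if has_minor(g2, h, seen):  # delete vertex
            return True
    return False

def complete_graph(n):
    return (frozenset(range(n)),
            frozenset(frozenset(e) for e in combinations(range(n), 2)))

K5 = complete_graph(5)
K33 = (frozenset(range(6)),
       frozenset(frozenset((a, b)) for a in (0, 1, 2) for b in (3, 4, 5)))

def is_planar(g):
    """Wagner's theorem: planar iff no K5 or K3,3 minor.
    Membership in the minor-closed class = test each forbidden minor in turn."""
    return not any(has_minor(g, h) for h in (K5, K33))

print(is_planar(complete_graph(4)))  # True
print(is_planar(complete_graph(5)))  # False
```

The point of the Robertson-Seymour result is that a loop like `is_planar` exists for *every* minor-closed class; what may be impossible is writing down the list that plays the role of `(K5, K33)`.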
Errr, I'm just assuming the axioms of ZFC. That's literally all I'm doing.
> In what sense do [numbers that can't be finitely specified] exist?
In the sense that we can describe rules that lead to them, and describe how to work with them.
I understand that you're trying to tie the notion of "existence" to constructibility, and that's fine. That's one way to play the game. Another is to use ZFC and be fine with "weird, unintuitive to laypeople" outcomes. Both are interesting and valid things to do IMO. I'm just not sure why one is obviously "better" or "more real" or something. At the end, it's all just coming up with rules and figuring out what comes out of them.
On the other hand, I think it's really cool to teach laypeople about things like "sizes of infinities", etc. They are deep math concepts that can be taught with relatively simple analogies that most people understand, and they're interesting things to know. I know that I personally loved learning about them as a kid, before I had almost any knowledge of math - it's one of the reasons that while I initially didn't connect with other areas of math, I found set theory delightful as a kid.
I just feel like if you need to first walk people through a bunch of philosophical back and forth on constructivism, you'll never get to the fun stuff.
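For what it's worth, one of those simple analogies, Cantor's diagonal argument, even fits in a few lines of code. This is my own toy rendering, nothing from the thread: given any enumeration of infinite 0/1 sequences, flip the diagonal and you get a sequence the enumeration provably missed.

```python
def diagonal(enum):
    """Given enum(k) = the k-th infinite 0/1 sequence (as a function from
    index to bit), return a sequence differing from the k-th one at
    position k - so it appears nowhere in the enumeration."""
    return lambda n: 1 - enum(n)(n)

# A concrete enumeration to feed it: the k-th sequence is the binary
# expansion of k, padded with zeros.
def enum(k):
    return lambda n: (k >> n) & 1

d = diagonal(enum)
print([d(n) for n in range(8)])                  # [1, 1, 1, 1, 1, 1, 1, 1]
print(all(d(k) != enum(k)(k) for k in range(1000)))  # True: d is never enum(k)
```

No matter what enumeration you plug in, `d` disagrees with the k-th sequence in position k, which is the whole argument that the 0/1 sequences (and hence the reals) can't be listed.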
But it is easy to present deep ideas from constructivism, without mentioning the word constructivism. Or even acknowledging that the philosophy exists.
For example the second half of https://math.stackexchange.com/questions/5074503/can-pa-prov... is an important constructivist thing. It shows why everything that a constructivist could ever be interested in mathematically, can be embedded in the natural numbers. With all of the constructions needing nothing more than the Peano Axioms. (Proving the results may need stronger axioms though...)
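As a small taste of how such embeddings work (my own illustration, not the construction from the linked answer): the Cantor pairing function packs any pair of naturals into a single natural, reversibly, and nesting it packs triples, lists, and so on.

```python
from math import isqrt

def pair(a, b):
    """Cantor pairing: a bijection from N x N to N."""
    return (a + b) * (a + b + 1) // 2 + b

def unpair(n):
    """Inverse of pair."""
    w = (isqrt(8 * n + 1) - 1) // 2   # index of the diagonal containing n
    b = n - w * (w + 1) // 2
    return w - b, b

# A pair of naturals is "just" a natural:
n = pair(3, 5)
print(n, unpair(n))  # 41 (3, 5)
# Nesting encodes longer structures: a triple as pair(a, pair(b, c)), etc.
```

Iterating tricks like this is how pairs, sequences, rationals, and finitely-described objects generally all end up living inside the naturals.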
From my point of view, https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach does something similar. That book got a lot of people interested in basic concepts around recursion, computation, and what it means to think. Absolutely everything in it works constructively. And yet that philosophy is not mentioned. Not even once.
The only point where a constructivist need discuss all of the philosophical back and forth on constructivism, is in explaining why a constructivist need not accept various claims coming out of classical mathematics. And even that discussion would not be so painful if people who have learned classical mathematics were more aware of the philosophical assumptions that they are making.
To be honest, I don't feel like I know enough about the constructivist philosophy. What would be a good place to start if I want to learn more about it?
I haven't yet read your PA proving Goodstein sequences article, though I have skimmed it and it is, indeed, super interesting.
And for the record, Godel, Escher, Bach was probably the single most important influence on me even starting to get interested in computation, etc.
ZFC (and its underlying classical logic) is precisely the problem here though
In the sense in which all statements of non-constructive "existence" are made, viz. "you can't prove that they don't exist in the general case": you are allowed to work under the stronger assumption that they also exist constructively, without any contradiction resulting. That can certainly be useful in some applications.
But the fact that such systems don't create contradictions emphatically *DOES NOT* demonstrate the constructive existence of such an oracle. Doubly not given that in various usual constructivist systems, it is easily provable that nothing that exists can serve as such an oracle.
Of course, but it shows that you can assume that such an oracle exists whenever you are working under additional conditions where the existence of such a "special case" oracle makes sense to you, even though you can't show its existence in the general case. This outlook generalizes to all non-constructive existence statements (and disjunctive statements, as appropriate). It's emphatically not the same as constructive existence, but it can nonetheless be useful.
I won't ever be able to find a contradiction from that claim, because I have no way to find that bank account if it exists.
But that argument also won't convince me that the bank account exists.
Theoretically possible? Sure. But the kinds of questions that lead you there are generally in opposition to the kinds of principles that lead someone to prefer constructivism.
If the only questions you accept as meaningful are the decidable ones, then you can trust its answers for all the questions you accept as meaningful and for which it has answers.
Also, “provable that nothing that exists can serve as such an oracle” seems pretty presumptive about what things can exist? Shouldn’t that be more like, “nothing which can be given in such-and-such way (essentially, no computable procedure) can be such an oracle”?
Why treat it as axiomatic that nothing that isn’t Turing-computable can exist? It seems unlikely that any finite physical object can compute any deterministic non-Turing-computable function (because it seems like state spaces for bounded regions of space have bounded dimension), but that’s not something that should be a priori, I think.
I guess it wouldn’t really be verifiable if such a machine did exist, because we would have no way to confirm that it never errs? Ah, wait, no, maybe using the MIP* = RE result, maybe we could in principle use that to test it?
On being presumptive about what things can exist, that's the whole point of constructivism. Things only exist when you can construct them.
We start with things that everyone accepts, like the natural numbers. We add to that all of the mathematical entities that can be constructed from those things. This provides us with a closed and countable universe of possible mathematical entities. We have a pretty clear notion of what it means for something in this universe to exist. We cannot be convinced of the existence of anything that is outside of the universe without making extra philosophical assumptions. Philosophical assumptions of exactly the kind that constructivists do not like.
This constructible universe includes a model of computation that fits Turing machines. But it does not contain the ability to describe or run any procedure that can't fit onto a Turing machine.
Therefore an oracle to decide the Halting problem does not exist within the constructible universe. And so your ability to imagine such an oracle, won't convince a constructivist to accept its existence.
This is exactly what I’m saying is presumptive! If constructivism is to earn the merit of being less presumptive by virtue of not assuming the existence of various things, it should also not assume the non-existence of those things.
Which, I think many visions of constructivism do earn this merit, but not your description of it.
What makes you presume that you have any business telling someone with different beliefs from you, what is OK to believe? You may believe in the existence of whatever you like. Whether that be numbers that cannot be specified, or invisible pink unicorns.
I'll be over in the corner saying that your belief does not compel me to agree with you on the question of what exists. Not when your belief follows from formalism, which explicitly abandons any pretense of meaningfulness to its abstract symbol manipulation.
I might be confused here, but isn't an oracle to decide the halting problem something that everyone agrees doesn't exist?
The whole idea is for this to be a thought experiment. "If we magically had a way to decide the halting problem, how would that affect things" seems like a normal hypothetical question.
Here is why a classical mathematician would say that this oracle exists.
Let f(program, input, n) be 1 or 0 depending on whether the program program, given input input, is still running at step n. This is a perfectly well-behaved mathematical function. In fact it is a computable one - we can compute it by merely running a simulation of a computer for a fixed number of steps.
Let oracle(program, input) be the limit, as n goes to infinity, of f(program, input, n). Classically this limit always exists, and always gives us 0 or 1. The fact that we happen to be unable to compute it, doesn't change the fact that this is a perfectly well-defined function according to classical mathematics.
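The function f above is easy to realize concretely. In this sketch (the modeling choices are mine, not the commenter's) a "program" is a Python generator and one `next()` call counts as one step:

```python
def f(program, inp, n):
    """1 if `program`, run on input `inp`, is still running at step n; else 0.
    Computable for any fixed n: just simulate for n steps."""
    gen = program(inp)
    for _ in range(n):
        try:
            next(gen)
        except StopIteration:
            return 0  # halted within n steps
    return 1  # still running at step n

def halts_after(k):
    """A toy program that runs for exactly k steps, then halts."""
    def prog(_inp):
        for _ in range(k):
            yield
    return prog

def loop_forever(_inp):
    while True:
        yield

print(f(halts_after(3), None, 2))   # 1: still running at step 2
print(f(halts_after(3), None, 10))  # 0: halted by step 10
print(f(loop_forever, None, 1000))  # 1, and 1 for every n we will ever try
```

Each value f(program, inp, n) is computable, but oracle(program, inp) = lim of f(program, inp, n) as n grows is exactly the part no program can compute.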
If you give up the existence of this oracle, you might as well give up the existence of any real numbers that do not have a finite description. Which is to say, almost all of them. Why? Because the set of finite descriptions is countable, and therefore the set of real numbers that admit a finite description is also only countable. But there are an uncountable number of real numbers, so almost all real numbers do not admit a finite description.
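The counting step can be made vivid. Here is a sketch (the alphabet and the shortlex ordering are my own choices) that enumerates every finite description exactly once, witnessing that there are only countably many of them:

```python
from itertools import count, product

ALPHABET = "ab"  # any finite alphabet works; its size only changes the pacing

def descriptions():
    """Enumerate every finite string over ALPHABET in shortlex order.
    Each string appears at exactly one finite position, so the set of all
    finite descriptions is countable."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

gen = descriptions()
first_six = [next(gen) for _ in range(6)]
print(first_six)  # ['a', 'b', 'aa', 'ab', 'ba', 'bb'] - and so on forever
```

Since every finite description shows up at some finite position in this list, the describable reals are at most countable, while the reals as a whole are not.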
The real question isn't whether this oracle exists. It is what you want the word "exists" to mean.
John Horton Conway:
> It's a funny thing that happens with mathematicians. What's the ontology of mathematical things? How do they exist? In what sense do they exist? There's no doubt that they do exist but you can't poke and prod them except by thinking about them. It's quite astonishing and I still don't understand it, having been a mathematician all my life. How can things be there without actually being there? There's no doubt that 2 is there or 3 or the square root of omega. They're very real things. I still don't know the sense in which mathematical objects exist, but they do. Of course, it's hard to say in what sense a cat is out there, too, but we know it is, very definitely. Cats have a stubborn reality but maybe numbers are stubborner still. You can't push a cat in a direction it doesn't want to go. You can't do it with a number either.
https://plato.stanford.edu/entries/mathematics-constructive/ is one place that you could start filling in that gap.
Also this: https://arxiv.org/pdf/1212.6543
Assuming you haven't looked at these already, of course.
Turns out I'm neither good at maths nor at teaching
I think dominating on a first date is a risk (which I was mindful of) but just being yourself, and talking about something you're truly passionate about is the key.
Pure mathematics is regarded as an abstract science, which it is by definition. Arnol'd argued vehemently and much more convincingly for the viewpoint that all mathematics is (and must be) linked to the natural sciences.
> On forums such as Stack Exchange, trained mathematicians may sneer at newcomers who ask for intuitive explanations of mathematical constructs.
Mathematicians use intuition routinely at all levels of investigation. This is captured for example by Tao's famous stages of rigour (https://terrytao.wordpress.com/career-advice/theres-more-to-...). Mathematicians require that their intuition is useful for mathematics: if intuition disagrees with rigour, the intuition must be discarded or modified so that it becomes a sharper, more useful razor. If intuition leads one to believe and pursue false mathematical statements, then it isn't (mathematical) intuition after all. Most beginners in mathematics do not have the knowledge to discern the difference (because mathematics is very subtle) and many experts lack the patience required to help navigate beginners through building (and appreciating the importance of) that intuition.
The next paragraph, about how mathematics was closely coupled to reality for most of history and only recently, with our understanding of infinite sets, became too abstract, is not really an accurate account of the history of mathematics. Euclid's Elements is 2300 years old and is presented in a completely abstract way.
The mainstream view in mathematics is that infinite sets, especially ones as pedestrian as the naturals or the reals, are not particularly weird after all. Once one develops the aforementioned mathematical intuition (that is, once one discards the naive, human-centric notion that our intuition about finite things should be the "correct" lens through which to understand infinite things, and instead allows our rigorous understanding of infinite sets to inform our intuition for what to expect) the confusion fades away like a mirage. That process occurs for all abstract parts of mathematics as one comes to appreciate them (except, possibly, for things like spectral sequences).
I'd argue that, by definition, mathematics is not, and cannot be, a science. Mathematics deals with provable truths; science cannot prove truth and must deal in falsifiability instead.
In the end, arguing about whether mathematics is a science or not makes no more sense than bickering about whether tomatoes are fruit; both can be answered yes and no using reasonable definitions.
Mathematicians actually do the same thing as scientists: hypothesis building by extensive investigation of examples. Looking for examples which catch the boundary of established knowledge and try to break existing assumptions, etc. The difference comes after that in the nature of the concluding argument. A scientist performs experiments to validate or refute the hypothesis, establishing scientific proof (a kind of conditional or statistical truth required only to hold up to certain conditions, those upon which the claim was tested). A mathematician finds and writes a proof or creates a counter example.
The failure of logical positivism and the rise of Popperian philosophy make clear that we can't approach that end process in the natural sciences the way we do for maths, but the practical distinction between the subjects is not so clear.
This is all without mentioning the much tighter coupling between the two modes of investigation at the boundary between maths and science, in subjects like theoretical physics. There the line blurs almost completely, and a major tool used by genuine physicists is literally pursuing mathematical consistency in their theories. This has been used to tremendous success (GR, Yang-Mills, the weak force) and with some difficulties (string theory).
————
Einstein understood all this:
> If, then, it is true that the axiomatic basis of theoretical physics cannot be extracted from experience but must be freely invented, can we ever hope to find the right way? Nay, more, has this right way any existence outside our illusions? Can we hope to be guided safely by experience at all when there exist theories (such as classical mechanics) which to a large extent do justice to experience, without getting to the root of the matter? I answer without hesitation that there is, in my opinion, a right way, and that we are capable of finding it. Our experience hitherto justifies us in believing that nature is the realisation of the simplest conceivable mathematical ideas. I am convinced that we can discover by means of purely mathematical constructions the concepts and the laws connecting them with each other, which furnish the key to the understanding of natural phenomena. Experience may suggest the appropriate mathematical concepts, but they most certainly cannot be deduced from it. Experience remains, of course, the sole criterion of the physical utility of a mathematical construction. But the creative principle resides in mathematics. In a certain sense, therefore, I hold it true that pure thought can grasp reality, as the ancients dreamed. - Albert Einstein
That's the thing, though — It does make sense, and it's an important distinction. There is a reason why "mathematical certainty" is an idiom — we collectively understand that maths is in the business of irrefutable truths. I find that a large part of science skepticism comes from the fundamental misunderstanding that science is, like maths, in the business of irrefutable truths, when it is actually in the business of temporarily holding things as true until they're proven false. Because of this misunderstanding, skeptics assume that science being proven wrong is a deathblow to science itself instead of being an integral part of the process.
The "symbol pushing" is a methodological tool, and a very useful one that opened up the possibility of new expansive fields of mathematics.
(Of course, it is important to always distinguish between properties of the abstraction or the tool from the object of study.)
Solipsists would like to have a word with you...
In practice when proofs of research mathematics are checked, they go out to like 4 grad students. This isn't a very glamorous job for those grad students. If they agree then it's considered correct...
But note this is just the bleeding edge stuff. The basic stuff is checked and reproven by every math undergrad that learns math. Literally millions of people have checked all the proofs. As long as something is taught in university somewhere, all the people who are learning it (well, all the ones who do it well) are proving / checking the theory.
Anyway, when the scientific community accepts a bad proof what effectively happens is that we've just added an extra axiom.
Like when you deliberately add new axioms, there are 3 cases
- Axiom is redundant: it can be proven from the other axioms. (this is ... relatively fine? we tricked ourselves into believing something that is true is true, the reason is just bad.)
This can get discovered when people try to adapt the bad proof to prove other things and fail.
Also, people find and publish "more interesting" or "different" proofs for old theorems all the time. Now you have redundancy.
- Axiom contradicts other axioms: We can now prove p and not p.
I wonder if this has ever happened? I.e. people proving contradictions, leading them to discover that a generally accepted theorem's proof is incorrect. It must have happened a few times in history, no?
Of course, maybe the reason this hasn't happened is that the whole logical foundation of mathematics is new, dating back to the Hilbert program (1920s).

There are well known instances of "proofs" being overturned before that, but they're not strictly logical proofs in the Hilbert-program sense, just arguments. (Of course they contain most of the work and ideas that would go into a correct proof, and if you understand them you can do a modern proof.)
e.g. https://mathoverflow.net/a/35558
Cauchy's proof that, if a sequence of continuous functions converges [pointwise] to a function, the limit function is also continuous (Cauchy's proof only holds for uniform convergence, not pointwise convergence, but people didn't really know the difference at the time).
- Axiom is independent of other axioms: You can't prove or disprove the theorem.
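The Cauchy example above is easy to poke at numerically. The standard counterexample is f_n(x) = x^n on [0, 1]: it converges pointwise to a discontinuous limit, so the convergence cannot be uniform. A quick sketch (the grid size is my own arbitrary choice):

```python
def f(n, x):
    return x ** n

def limit(x):
    """Pointwise limit of x**n on [0, 1]: 0 below 1, jumping to 1 at x = 1."""
    return 1.0 if x == 1.0 else 0.0

# Pointwise convergence: at each fixed x the gap shrinks to 0.
print(abs(f(1000, 0.9) - limit(0.9)) < 1e-6)  # True

# But not uniform: the worst-case gap over [0, 1) stays near 1 for every n.
xs = [k / 1000 for k in range(1000)]
print(max(abs(f(50, x) - limit(x)) for x in xs))  # ~0.95, not shrinking to 0
```

A continuous limit would require that worst-case gap to go to 0, which is exactly the uniform-convergence hypothesis Cauchy's argument silently needed.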
English doesn't have a "I'm just hypothesizing all of this" voice, if it did exist this post should be in it. I didn't do enough research to answer your question. Some of the above may be wrong, e.g. the part about the 4 grad students. One should probably look for historical examples.
When we try to model something probabilistically, it is usually not a great idea to model the probability that we made an error in our probability calculations as part of our calculations of the probability.
Ultimately, we must act. It does no good to suppose that “perhaps all of our beliefs are incoherent and we are utterly incapable of reason”.
But we can be more sure of the deductive validity of a proof than we can be of any of the claims you make in these sentences, so I don't think they can serve to establish any doubt. If we're wrong about deductive logic, then we can only be more wrong about any empirical claims, which rely on deductive logic plus empirical observations
The incompleteness theorem doesn't say that there are statements which are unprovable in any absolute sense. What it says is that given a formal system, there will always be statements which that particular formal system can't prove. But in fact as part of the proof, Godel proves this statement, just not by deriving it in the formal system in question (obviously, since that's what he's proving is impossible).
The way this is done is by using a "metalanguage" to talk about the formal theory in question. In this case it's a kind of ambient set theory. Of course, the proof also implies that if this ambient metalanguage is formalized then there will be sentences which it can't prove either, but these in general will be different sentences for each formalized theory.
Math is scientific in the sense that you've proposed a hypothesis, and others can test it.
Also the empirical part means natural phenomena needs to be involved. Math can be purely abstract.
If you want to escape human fallibility, I'm afraid you're going to need divine intervention. Works checked as carefully as possible still seem to frequently feature corrections.
That isn't true; you just test new axioms, and most stuff we do in the empirical sciences doesn't require new axioms.
The only difference between material sciences and math is that in math you don't test axioms while in empirical sciences you do.
And a lot of what goes on in foundations of mathematics could be described as "testing the axioms", i.e. identifying which theorems require which axioms, what are the consequences of removing, adding, or modifying axioms, etc.
[1] And even this has limits: https://en.wikipedia.org/wiki/Gödel%27s_incompleteness_theor...
I may be off-base as an outsider to mathematics, but Euclid’s Elements, per my understanding, is very much grounded in the physical reality of the shapes and relationships he describes, if you were to physically construct them.
I am going to quote from the _very beginning_ of the Elements:
Definition 1. A point is that which has no part.
Definition 2. A line is breadthless length.
Both of these two definitions are impossible to construct physically right off the bat.
All of the physically realized constructions of shapes were considered to basically be shadows of an idealized form of them.
The complex number system started being explored by the Greeks long before any notion of the value of complex spaces existed, and long before it could be mapped to anything in reality.
There's a nice (brief) discussion in section 20.2 of Stillwell's Mathematics and its History
> Euclid's Elements is 2300 years old and is presented in a completely abstract way.
depends on what you mean by completely abstract. Euclid relies in a logically essential way on the diagrams. Even the first theorem doesn't follow from the postulates as explicitly stated, but relies on the diagram for us to conclude that two circles sharing a radius intersect.
This is a thought-provoking paper on the issue by Viktor Blasjo, Operationalism: An Interpretation of the Philosophy of Ancient Greek Geometry https://link.springer.com/article/10.1007/s10699-021-09791-4
which was recently the subject of a guest video on 3blue1brown https://www.youtube.com/watch?v=M-MgQC6z3VU
https://math.stackexchange.com/questions/31859/what-concept-...
Other great sources for quick intuition checks are Wikipedia and now LLMs, but mainly through putting in the work to discover the nuances that exist or learning related topics to develop that wider context for yourself.
It wasn't; but that's a common misunderstanding from many centuries of common practice.
So, how has maths gotten so abstract? Easy, it has been taken over by abstraction astronauts [1], who have existed throughout all eras (and not just in software engineering).
Mathematics was created by unofficial engineers as a way to better accomplish useful activities (guessing the best time of year to start migrating, and later harvesting; counting what portion of harvest should be collected to fill the granaries for the whole winter; building temples for the Pharaoh that wouldn't collapse...)
But then, it was adopted by thinkers who enjoyed the activity by itself and started exploring it out of sheer joy; math stopped representing "something that needed doing in an efficient way", and came to be considered "something to think about to the last consequences".
Then it was merged into philosophy, with considerations about perfect regular solids, or things like the (misunderstood) metaphor of shadows in Plato's cave (which people interpreted as being about duality of the essences, when it was merely an allegory on clarity of thinking and explanation). Going from an intuitive physical reality such as natural numbers ("we have two cows", or "two fingers") to the current understanding of numbers as an abstract entity ("the universe has the essence of number 'two' floating beyond the orbit of Uranus"(2)) was a consequence of that historical process, when layers upon layers of abstraction took thinkers further and further away from the practical origins of math.
[1] https://www.joelonsoftware.com/2001/04/21/dont-let-architect...
That is, numbers were specifically used to abstract over how other things behave using simple and strict rules. No?
Agree that math is built on language. But math is not any specific set of abstractions; time and again mathematicians have found out that if you change the definitions and axioms, you achieve a quite different set of abstractions (different numbers, geometries, infinity sets...). Does it mean that the previous math ceases to exist when you find a contradiction on it? No, it's just that you start talking about new objects, because you have gained new knowledge.
The math is not in the specific objects you find, it's in the process of finding them. Rationalism consists in thinking one step at a time with rigor. Math is the language by which you explain rational thought in a very precise, unambiguous way. You can express many different thoughts, even inconsistent ones, with the same precise language of mathematics.
Leibniz (late 1600s) helped to popularize negative numbers. At the time most mathematicians thought they were "absurd" and "fictitious".
No, not highly abstract from the beginning.
Geometry is “attached” to the physical world… but in an abstract way… but you can point to the thing you're measuring, maybe, so it doesn’t count…
Abstraction was perfected if not invented by mathematics.
https://en.wikipedia.org/wiki/The_Method_of_Mechanical_Theor...
Wasn't that imaginary numbers?
Numbers, for example, are abstract in the sense that you cannot find concrete numbers walking around or falling off trees or whatever. They're quantities abstracted from concrete particulars.
What the author is concerned with is how mathematics became so abstract.
You have abstractions that bear no apparent relation to concrete reality, at least not according to any direct correspondence. You have degrees of abstraction that generalize various fields of mathematics in a way that are increasingly far removed from concrete reality.
- they are material objects
- they are concepts I understand
- they are sequences of letters
- they are English words
- ...
Not sure why oneness is privileged as what they have in common, and their oneness is meaningless by itself. Oneness is a property that is only meaningful in relation to other concepts or objects.
The tendency towards excessive abstraction is the same as the use of jargon in other fields: it just serves to gatekeep everything. The history of mathematics (and science) is actually full of amateurs, priests and bored aristocrats that happened to help make progress, often in their spare time.
Formal reasoning is the point, which is not by itself abstraction.
Someone else in this discussion is saying Euclid's Elements is abstract, which is near complete nonsense. If that is abstract our perception of everything except for the fundamental [whatever] we are formed of is an abstraction.
What do you think "formal" means in that sentence.
It means "formal" from the word "form". It is reasoning through pure manipulation of symbols, with no relation to the external world required.
https://www.etymonline.com/word/formal "late 14c., "pertaining to form or arrangement;" also, in philosophy and theology, "pertaining to the form or essence of a thing," from Old French formal, formel "formal, constituent" (13c.) and directly from Latin formalis, from forma "a form, figure, shape" (see form (n.)). From early 15c. as "in due or proper form, according to recognized form," As a noun, c. 1600 (plural) "things that are formal;" as a short way to say formal dance, recorded by 1906 among U.S. college students."
There's not a much better description of what Euclid was doing.
https://plato.stanford.edu/entries/logic-classical/
"Formal" in logic has a very precise technical meaning.
Edit to add: this comment had a sibling, that was suggesting that given a specific proof assistant requires all input to be formal logic perhaps the word formal could be redefined to mean that which is accepted by the proof assistant. Sadly this fine example of my point has been deleted.
Isn't that the subject of the whole argument? That mathematicians have taken the road off in a very specific direction, and everyone disagreeing is ejected from the field, rather like occurred more recently in theoretical physics with string theory.
Prior to that time quite clearly you had formal proofs which do not meet the symbolic abstraction requirements that pure mathematicians apparently believe are axiomatic to their field today, even if they attempt to pretend otherwise, as argued over the case of Euclid elsewhere. If the Pythagoreans were reincarnated, as they probably expected, they would no doubt be dismissed as crackpots by these same people.
I've been unable to imagine or recall an example. Can you provide one?
I could construct a formal reasoning scheme involving rules and jugs on my table, where we can pour liquids from one to another. It would be in no way symbolic, since it could use the liquids directly to simply be what they are. Is constructing and studying such a mechanism not mathematics? Similarly with something like musical intervals.
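For what it's worth, the jug scheme described above can itself be modeled formally. A minimal sketch (the rules here are my own guess at the intended system: fill a jug, empty a jug, or pour one into another until the source is empty or the target is full):

```python
from collections import deque

def jug_moves(state, capacities):
    """Enumerate successor states under the pouring rules."""
    n = len(state)
    for i in range(n):
        # fill jug i to its capacity
        yield state[:i] + (capacities[i],) + state[i + 1:]
        # empty jug i
        yield state[:i] + (0,) + state[i + 1:]
        for j in range(n):
            if i != j:
                # pour from jug i into jug j
                amount = min(state[i], capacities[j] - state[j])
                new = list(state)
                new[i] -= amount
                new[j] += amount
                yield tuple(new)

def reachable(capacities, start):
    """All jug states reachable from `start` via breadth-first search."""
    seen = {start}
    frontier = deque([start])
    while frontier:
        s = frontier.popleft()
        for t in jug_moves(s, capacities):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# The classic puzzle: with a 3- and a 5-unit jug, 4 units is reachable.
states = reachable((3, 5), (0, 0))
print(any(4 in s for s in states))  # True
```

The point stands either way: the table-top version manipulates the liquids directly, and this program is just one symbolic shadow of it.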
An apple is an abstraction over the particles/waves that comprise it, as is a banana.
Euclid is no more abstract than the day to day existence of a normal person, hence to claim that it is unusually abstract is to ignore, as you did, the abstraction inherent in day to day life.
As I pointed out it's very possible to create formal reasoning systems which are not symbolic or abstract, but due to that are we to assume constructing or studying them would not be a mathematical exercise? In fact the Pythagoreans did all sorts of stuff like that.
No, you don’t understand what abstraction is. An apple is exactly an arrangement of particles; it’s not an abstraction over them.
> hence to claim that it is unusually abstract
Who said it was unusually abstract (and not just abstract)?
> is to ignore, as you did, the abstraction inherent in day to day life.
How am I ignoring this abstraction when I’ve given you exactly that (numbers are an abstraction inherent in day-to-day life)? I’m sorry, but you seem to be discussing in bad faith.
No. You can do things to that apple, such as bite it, and it is still an apple, despite it now having a different set of particles. It is the abstract concept of appleness (which we define . . . somehow) applied to that arrangement of particles.
> I’m sorry but you seem to be discussing in bad faith.
Really?
> No, you don’t understand what abstraction is.
People are aware that you need context to motivate abstractions. That's why we start with numbers and fractions and not ideals and localizations.
Jargon in any field is to communicate quickly with precision. Again the point is not to gatekeep. It's that e.g. doctors spend a lot of time talking to other doctors about complex medical topics, and need a high bandwidth way to discuss things that may require a lot of nuance. The gatekeeping is not about knowing the words; it's knowing all of the information that the words are condensing.
To put it another way: Jargon is the source code of the sciences. To an outsider, looking in on software development, they see the somewhat impenetrable wall of parentheses and semicolons and go "Ah, that's why programming is hard: you have to understand code". And I hope everyone here can understand that that's an uninformed thing to say. Syntax is the easy part of programming, it was made specifically to make expressing the rigorous problem solving easier. Jargon is the same way: it exists to make expressing very specific things that only people in this subfield actually think about easier, instead of having to vaguely gesture at the concept, or completely redefine it every time anybody wants to communicate within the field.
Given the collective time put into it, easier stuff was already solved thousands of years ago, and people are not really left with something trivial to work on. Hence focusing on more and more abstract things as those are the only things left to do something novel.
But also wrong, the easier stuff was solved INCORRECTLY thousands of years ago. But it takes advanced math to understand what was incorrect about it.
Math in its core has always been abstract. It’s the whole point.
I don't think so. E.g. there may be some abstractions in numerical linear algebra, but the subject matter has always been quite concrete.
What you call concrete - were the origins of math as we know it. Geometry, astronomy, metaphysics etc they all had in common fundamental abstract thing that we call math today.
Saying “math got abstract” is like saying “a tree got wooden”. Because when it was a seed - it wasn’t yet a tree in a full sense.
I get what they're saying in practice. But numbers are abstract. They only seem concrete because you'd internalized the abstract concept.
I personally cannot wrap my head around Cantor's infinitary ideas, but I'm sure it makes perfect sense to people with better mathematical intuition than me.
The Peano axioms are pretty nifty though. To get a better appreciation of the difficulty of formally constructing the integers as we know them, I recommend trying the Numbers Game in Lean found here: https://adam.math.hhu.de/
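To give a flavor of what that game involves, here is a from-scratch Peano-style construction in Lean 4 (the names `MyNat` and `zero_add` are my own, not the game's):

```lean
-- Naturals built from just two constructors, per Peano:
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

-- Addition defined by recursion on the second argument:
def MyNat.add : MyNat → MyNat → MyNat
  | a, .zero   => a
  | a, .succ b => .succ (MyNat.add a b)

-- a + 0 = a holds by definition, but 0 + n = n already
-- requires a proof by induction:
theorem zero_add (n : MyNat) : MyNat.add .zero n = n := by
  induction n with
  | zero => rfl
  | succ n ih => simp [MyNat.add, ih]
```

That asymmetry (one direction definitional, the other needing induction) is a good taste of why formally constructing the familiar number systems is harder than it looks.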