A mathematical formalisation challenge by Peter Scholze (xenaproject.wordpress.com)
75 points by alimw on Dec 6, 2020 | 24 comments


This paragraph is paramount, coming from one of the youngest Fields Medalists and a deeply trusted source. Truly honest and inspiring.

“ I have occasionally been able to be very persuasive even with wrong arguments. (Fun fact: In the selection exams for the international math olympiad, twice I got full points for a wrong solution. Later, I once had a full proof of the weight-monodromy conjecture that passed the judgment of some top mathematicians, but then it turned out to contain a fatal mistake.)”


[just to be clear here, I am the author of the Xena project blog and my current research is in formalising number theory; before formalisation I was working in Scholze's area.]

Scholze's first application of the theory of perfectoid spaces was to prove weight-monodromy in many new cases (but not in all cases) -- this was really his first breakthrough result. Fun fact: in the first version of the post which he sent me, this weight-monodromy confession was not there. He then slept on it and sent a second version a day later, and it was only when I was converting the LaTeX into Wordpress format that I noticed that he had added this extra line. I quite agree that it is very rare for people, especially of his stature, to publicly admit to errors, especially ones which never made it into print.

Of course we will be working on this challenge in Lean, and we are currently optimistic, but who knows. It is certainly true that in the study group we had on the work at Imperial earlier this year, we did not work through the technical proof which Scholze is now challenging the formalization community to check. This is really Scholze's point, I guess: once you have a Fields Medal it's very easy for other people to say "well, this is a bit technical but let's face it, it's probably fine" (this is exactly what we did, for example). Voevodsky made similar comments around a decade ago -- and he managed to get false arguments published, perhaps partly because of his own Fields Medal.

Scholze is flagging an explicit argument in his work which he believes needs to be carefully analysed, and I have seen with my own eyes that the academic system we have right now might not actually do it carefully enough. What is not at all clear, right now at least, is whether computer proof verification systems are up to the task. I think it will be interesting to see how this develops.


Oh, many thanks for this explanation. Even more interesting.

Yes, Voevodsky's and Scholze's candidness is encouraging and liberating. And a great call to responsibility for us reviewers.

And thanks for your work on formalization. Truly important in my view.


I feel like with two Fields Medalists on board, the culture can finally shift. I'm really excited.

[It's not just about math; see https://hapgood.us/2015/10/17/the-garden-and-the-stream-a-te... for a great explanation of how the type of labor that goes into mathlib is sorely lacking in just about every sector. I'm really hoping that mathematics can lead the way here.]


Once you get down to the symbols, it all becomes symbol manipulation, so why would computer proof verification systems not be up to the task?


Well, climbing Mt. Everest is also "just moving your arms and legs". But I'm not up to that task!

It's certainly possible in principle, but whether it's possible in practice is part of the challenge. You need enough people that understand the details of the proof, and know how to turn that into a formally verified proof. Such proofs can also become slow, so to keep things practical, you also need to take speed into account.


"Quantity has a quality all its own"

Also, and perhaps easier to wrap one's head around, are issues of tooling. Already there is very heavy use of "tactics" (metaprograms, and ones with nontrivial computational complexity -- think "search", not just "expansion"). Mathematicians write lemmas so we can try to run the tactics on "mini problems" that do not grow even as the total body of work grows, but there's always a risk that there's some sticking point one cannot break down enough.
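
For a rough flavour of what that looks like in practice -- this is just an illustrative Lean 3 / mathlib-style sketch, not anything from the post, and it assumes the mathlib tactic library is what provides ring and linarith:

    import data.real.basic
    import tactic

    -- ring and linarith are tactics: metaprograms that *search* for a
    -- proof of the current goal rather than merely unfolding definitions.
    example (a b : ℝ) : (a + b)^2 = a^2 + 2*a*b + b^2 :=
    by ring       -- normalises both sides of a commutative-ring identity

    -- Stating things as small lemmas keeps each tactic call on a "mini
    -- problem", so the search stays tractable as the library grows.
    example (x y : ℝ) (h1 : x ≤ y) (h2 : 0 ≤ x) : 0 ≤ y :=
    by linarith   -- linear-arithmetic search over the hypotheses

If a goal can't be broken into pieces that small, the tactic search is exactly where things can blow up.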


Remember the Pentium's division bug (on mobile so cannot copy-paste). We need some kind of certificate of proof, not just some black box which answers "OK" or "NOT OK".


The theorem provers that are discussed here already do that.
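
To sketch what that means concretely (an illustrative Lean 3 example, not from the post): the proof is not a bare "OK" from a black box. It elaborates to a proof term, which a small independent kernel re-checks, and which can be exported and re-verified by external checkers.

    -- The term on the right of := is the certificate; #print displays it,
    -- and the kernel (or an external checker) re-verifies it.
    theorem two_plus_two : 2 + 2 = 4 := rfl

    #print two_plus_two
    -- prints the stored proof term, roughly: theorem two_plus_two : 2 + 2 = 4 := rfl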


In short: type checkers can be run on different chipsets and OS's, and computers are several orders of magnitude more accurate than humans at this kind of checking, so hardware bugs are not really the issue here.


To add another data point: I submitted a paper to a major mathematics journal which had a mistake in a proof. One of the referees flagged that I was making a claim without proving it, and asked for me to revise it. I submitted a revised version which still had a mistake in the same step and it was accepted.

(Three years later, my doctoral supervisor pointed out the mistake to me, and we published another paper -- in the same journal -- filling the gap.)


The mathematics being discussed here is very new, as well as very technical, and I would guess the number of people who really understand it at the moment is at most dozens; certainly fewer than hundreds.

I am certainly not one of those people. Nevertheless, I find this enormously exciting. Formalisation of mathematics has long been the preserve of a handful of logicians and type theorists. What we’re seeing here is one of the world’s leading mathematicians, whose work is situated squarely at the centre of what is today considered mainstream mathematics, advocating the importance of formalisation.

If, like me, you believe that formalisation will be an essential part of the mathematics of the future, this is a very encouraging development.


I just hope Neuralink pans out so I have a chance of understanding any of this in this life.


I'm not a mathematician and I'm generally unqualified to comment on any of this but I'll have a stab anyway.

The TLDR is that Peter Scholze, one of modern mathematics' most accomplished theory builders, has a concept called a 'condensed set' that he thinks can act as a drop-in replacement for topological spaces in most of mathematics. Topological spaces are perhaps one of the least restrictive models of 'space' in mathematics. Peter thinks his definition will allow mathematicians to bring algebraic techniques to bear on problems that were previously not amenable to such methods.

The problem is that everything rests on one theorem, whose proof is difficult. Because the proof is so foundational, he'd like to get it checked by computer instead of relying on the opinion of a handful of mathematicians, who may be sloppy in their checking.


What are good beginner resources to learn Lean?


The Lean Community website [1] is a great place to start. Depending on your background you might like to dive right into the Natural Number Game [2] or Theorem Proving in Lean [3] (both linked from the Community site); there is a small sketch of what the first exercises look like after the links below.

1. https://leanprover-community.github.io/

2. http://wwwf.imperial.ac.uk/~buzzard/xena/natural_number_game...

3. https://leanprover.github.io/theorem_proving_in_lean/
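
To give a flavour of the kind of first exercise those resources start you on -- a minimal Lean 3 sketch (the Natural Number Game uses its own, slightly different lemma names):

    -- Use an existing library lemma directly...
    example (a b : ℕ) : a + b = b + a :=
    nat.add_comm a b

    -- ...or prove a statement by induction, in tactic mode.
    example (n : ℕ) : 0 + n = n :=
    begin
      induction n with d hd,      -- induct on n
      { refl },                   -- base case: 0 + 0 = 0
      { rw [nat.add_succ, hd] },  -- step: 0 + succ d = succ (0 + d) = succ d
    end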


See the resources over at https://leanprover-community.github.io/learn.html and the youtube playlist of the Lean for the Curious Mathematician workshop: https://www.youtube.com/playlist?list=PLlF-CfQhukNlxexiNJErG...


An addictive personality :).

Theorem proving can be a fun open-world videogame, and I do hope these aspects can be refined over time to make real mathematics a lot more accessible.


"Liquid tensor experiment" <-- great pun in the title. Now if you'll excuse me, I'll go put on the amazing CD this is referencing.


Not sure if this question even makes sense, or if it sounds hopelessly naive to someone in the know, my maths education ended in undergrad -

can this type of push towards formalizing mathematical proofs be seen as a sort of continuation of Hilbert's program? If no, what's the difference? If yes, how are the fatal issues in that program avoided?


It is essentially “the most one can do with Hilbert’s Programme”.

Actually, Hilbert's Programme was shattered when his great hope (a finitistic proof of the consistency and completeness of mathematics) was shown by Gödel to be impossible: any sufficiently strong consistent system is incomplete and cannot prove its own consistency.

But from the point of view of mechanizing Maths, it is more or less equivalent.


Yeah, Hilbert's Program wasn't actually set back by the incompleteness theorems in the first place; it was all FUD.


Not sure what you mean by that: after Gödel, Hilbert essentially became hopeless.


Hilbert misunderstood his own program :D



