Enginerrrd's comments | Hacker News

Also frankly, doctor =/= expert on probiotics.

None of their training really addresses that, and while they might be more qualified to read research than a random layman, I would not in general ascribe authority to what a random practitioner has to say about probiotics. Frankly, the research on probiotics is still very much in its infancy and a LOT remains to be figured out.


Absolutely, if you have access to domain-specific experts or researchers, then that should trump whatever your more generalized expert says.

It's also right to highlight that just because specialists in something exist does not mean we have a full or correct understanding yet; they're just your best place to find information about it unless you want to go join the field.

Great points!


> doctor =/= expert on probiotics

Medical microbiologists would love to have a word with you. Medicine and medicine-adjacent disciplines each develop institutional knowledge that percolates out from the specialized disciplines.

> …the research on probiotics is still very much in its infancy and a LOT remains to be figured out.

I’m curious who you think does the research. It’s certainly not Bubba from down the creek.


PhDs do the research. Not your typical overworked family practitioner.


They don’t develop treatment protocols or testing modalities either. Knowledge gets disseminated as best practices and gets applied as needed to different specialties.

If probiotics is what you’re after, why not eat or drink something fermented?


Three dimensions still allow for more freedom than that, though, since the couch can stand on end.

I would contend that it's still useful since you'd be able to turn the corner without over-complicating it by getting it into some weird tilt position.


“Vanity of vanities, all is vanity!”

A tale as old as time.


I think you just nerd-sniped me but I’m not convinced it’s impossible to assign a consistent ordering to events with relativistic separations.

For starters, the spacetime interval between two events IS a Lorentz invariant quantity. That could probably be used to establish a universal order for timelike separations between events. I suspect that you could use a reference clock, like a pulsar or something, to act as an event against which to measure the spacetime interval to other events, and use that for ordering. Any events separated by a light-like interval are essentially simultaneous to all observers under that measure.
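
A toy numerical check of that invariance claim (a Python sketch; a flat 1+1D Minkowski metric with c = 1, and the event coordinates are made up): the interval between two events comes out the same before and after a Lorentz boost.

  import math

  def interval2(e1, e2):
      # s^2 = (c*dt)^2 - dx^2, with c = 1
      (t1, x1), (t2, x2) = e1, e2
      return (t2 - t1)**2 - (x2 - x1)**2

  def boost(event, v):
      # Standard Lorentz boost along x with velocity v (|v| < 1, in units of c)
      t, x = event
      g = 1.0 / math.sqrt(1.0 - v * v)
      return (g * (t - v * x), g * (x - v * t))

  a, b = (0.0, 0.0), (5.0, 3.0)
  print(interval2(a, b))                            # 16.0 in the original frame
  print(interval2(boost(a, 0.6), boost(b, 0.6)))    # 16.0 again for a boosted observer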

The problem comes for events with a space-like or light-like separation. In that case, the spacetime interval is still invariant, but I’m not sure how you assign order to them. Perhaps the same system works without modification, but I’m not sure.


For any pair of space-like separated events you can find reference frames where they happen in a different order. For the time-like situation you described, the order indeed exists within the light cone, which is to say that causality exists.


You can still order them with the spacetime interval compared to a reference event, even for space-like separated events.

It allows for differing elements of the set to share the same value but so does using time alone. It just also allows every observer to agree on the ordering.

Assigning a distance function to the elements of a set is, in fact, a common way to do that. It doesn’t work with just a time coordinate or a space coordinate, because that’s effectively a Euclidean metric.

You just have to contend with a few nonintuitive aspects but it’s not so bad.


I think you meant compared to a reference observer? Events are not really independent of observers. Consider the case in baseball where a runner and the baseman tag the base at the "same" time from opposite sides of the base. Assume they move at equal speeds. If the umpire is closer to the baseman, then the baseman has tagged it first; if he is closer to the runner, then the runner has tagged it first. The "event" of "touching the base" has two possible outcomes depending on where the observer stands, and there is no "view from nowhere" or observer-free view that we can reference.


No, I mean a reference event, though you bring up an interesting subtlety. (Essentially I just mean an event that definitely happened [a particle decay, a supernova, an omnidirectional radio signal, etc.] which will serve essentially as an origin point on the spacetime manifold.) You are right, though, that technically we need at least one observer to define the coordinates of that event initially. Once that's done, however, ALL observers can order events according to the spacetime interval between any event they observe and the reference point (transformed into their coordinates), and they will ALL agree on that ordering.

A "good" reference event here would be something that observers can compare. I think using pulsar pulses counted from some epoch is a perfectly good reference, assuming we could communicate that omnidirectionally. The difference, as measured by the spacetime interval, between any event in any observer's reference frame and a reference event in their past light cone is something that ALL observers who can communicate will always agree on. Observers may disagree about how many pulses have occurred since that epoch at a particular point in their coordinate time, but it doesn't matter. As long as they're comparing spacetime intervals to a particular count on the pulsar, no disagreement will occur; i.e., the spacetime interval between the 3rd pulse and some event will always be the same, since it's a Lorentz invariant scalar quantity (i.e. a rank zero tensor).

Your baseball analogy has flaws: no properly defined "event" in spacetime will have dual outcomes. The events in that case are "the baseman tagged the base" and "the runner tagged the base". "X tagged the base first" is NOT an event; that's a comparison between events, and it's one that was done in a particular observer's time coordinate, which is not the correct procedure here. No observers related by a Lorentz transformation within the light cone will disagree that those events happened, though they may disagree about which happened first within their coordinate time.

(Note the issue of observers needing to be in the same light-cone is a superficial one. I haven't defined that precisely, but I don't need to: If observers can communicate at all, they will agree, upon communication, that an event is within their past light cone. In the context of server synchronization, this will always be true.)
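
To make the proposed bookkeeping concrete, here is a minimal sketch (Python; the same flat 1+1D toy setup with c = 1, and the reference event and event coordinates are invented for illustration): every observer computes the invariant interval from the agreed reference event to each event, expressed in their own coordinates, and sorts on that number, so they all arrive at the same ordering (ties are possible, just as they are when ordering by time alone).

  def interval2(ref, event):
      # Lorentz-invariant interval squared to the reference event: dt^2 - dx^2 (c = 1)
      (t0, x0), (t, x) = ref, event
      return (t - t0)**2 - (x - x0)**2

  reference = (0.0, 0.0)   # e.g. the 3rd pulsar pulse, in some observer's coordinates
  events = {"A": (4.0, 1.0), "B": (3.0, 2.5), "C": (6.0, 2.0)}

  ordering = sorted(events, key=lambda name: interval2(reference, events[name]))
  print(ordering)   # ['B', 'A', 'C'] for every observer who transforms the same coordinates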


I’m a little out of my depth, but I’d guess a lot of them would probably fall into one of two categories: something we believe should go on forever (and not halt) if the math problem is resolved the way we expect, but which theoretically could suddenly halt after some absurdly long number of steps. Or something where it halts for a given input after some number of steps unless some counterexample exists where it goes on forever.

In the first, you can’t really do anything but keep watching it not halt, but that isn’t telling you anything about the infinity still to go. (Say, a program that spits out twin primes: we expect an infinite number of them, but we don’t really know.)

And in the second case, we’d just have to keep trying larger and larger inputs, making this just an extension of the first category if we wrote a program to do that for us. And if we did find an input where it goes on forever without repeating states, how would you even know? It’d be like the first situation again.
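
A sketch of the kind of program being described (Python, with plain trial division just to keep it short; the input 100 below is arbitrary): "it halts for every n" is the twin prime conjecture in disguise, so running it only ever tells you about the inputs you've already tried.

  def is_prime(k):
      if k < 2:
          return False
      d = 2
      while d * d <= k:
          if k % d == 0:
              return False
          d += 1
      return True

  def next_twin_prime_after(n):
      # Halts iff some twin prime pair (p, p + 2) with p > n exists.
      p = n + 1
      while True:
          if is_prime(p) and is_prime(p + 2):
              return (p, p + 2)
          p += 1

  print(next_twin_prime_after(100))   # (101, 103), but nobody can promise an answer for every n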


Ah that makes a lot of sense!


Once we have scalable quantum computers, fusion power, time travel and an indestructible material, I figure we can bundle all that together with instructions to send a particle back after T+1 on termination. Some problems will stay unsolved as they go on to the heat-death of the universe, but maybe one or a few comes back with a useful result!

Certainly with the right investments we'll get there within the next 5 years if you ask Musk and Altman. While a time machine might sound uncertain in that timeframe, I'm sure AI will figure it out for us.


You say that like it’s even remotely feasible at the frontier of mathematics and not a monumental group effort to turn even established proofs into such.

Most groundbreaking proofs these days aren’t just cross-discipline but usually involve one or several totally novel techniques.

All that to say: I think you’re dramatically underestimating the difficulty involved in this, EVEN if the authors were experts in machine-readable mathematics, which is highly UNlikely given that they are necessarily deep experts in at LEAST one other field.


> You say that like it’s even remotely feasible at the frontier of mathematics and not a monumental group effort to turn even established proofs into such.

people on hn love making these kinds of declarative statements (the one you responded to, not yours itself) - "for X just do Y" as a kind of dunk on the implied author they're responding to (as if anyone asked them to begin with). they absolutely always grossly exaggerate/underestimate/misrepresent the relevance/value/efficacy of Y for X. usually these declarative statements briskly follow some other post on the frontpage. i work on GPU/AI/compilers and the number of times i'm compelled to say to people on here "do you have any idea how painful/pointless/unnecessary it is to use Y for X?" is embarrassing (for hn).

i really don't even get it - no one can see your number of "likes". twitter i get - fb i get - etc. but what are even the incentives for making shit up on here.


It feels good to be smarter than everyone else. You see your upvotes and that's good enough for an ego boost. Been there, done that.

I wish we were a bit more self-critical about this, but it's a tough problem when what brings the community together in the first place is a sense of superiority: prestigious schools, high salaries, impressive employers, supposedly refined tastes. We're at the top of the world, right?


HN is frequently fodder for satire on other forums. Nobody thinks HN users have "refined tastes", or even that they are "smart" for that matter.


Hey, do you mind sharing any of these other forums? I’m trying to make my way up the satire food chain.


> prestigious schools, high salaries, impressive employers, supposedly refined tastes. We're at the top of the world, right?

Being pompous and self obsessed requires none of those things.


> Being pompous and self obsessed requires none of those things.

Sufficient, but not necessary


Do selection dynamics require awareness of incentives? I would think that the incentives merely have to exist, not be known.

On HN, that might be as simple as display sort order -- highly engaging comments bubble up to the top, and being at the top, receive more attention in turn.

The highly fit extremes are -- I think -- always going to be hyper-specialized to exploit the environment. In a way, they tell you more about the environment than whatever their content ostensibly is.


isn't it a sufficient explanation that one has occasionally wasted a ton of time trying to read an article, only to discover after studying and interpreting half of a paper that one of the author's proof steps is wholly unjustified?

is it so hard to understand that after a few such events, you wish for authors to check their own work by formalizing it, saving readers countless hours, since a reader could then select a paper WITH a machine-readable proof over another author's paper without one?


If wishes were fishes, as they say.

To demonstrate with another example: "Gee, dying sucks. It's 2025, have you considered just living forever?"

To this, one might attempt to justify: "Isn't it sufficient that dying sucks a lot? Is it so hard to understand that having seen people die, I really don't want to do that? It really really sucks!", to which could be replied: "It doesn't matter that it sucks, because that doesn't make it any easier to avoid."


I understand where you're coming from but it's a bad analogy. Formal proofs are extremely difficult but possible. Immortality is impossible.


I don't think it matters, to be quite honest. Absolute tractability isn't relevant to what the analogy illustrates (that reality doesn't bend to whims). Consider:

- Locating water doesn't become more tractable because you are thirsty.

- Popping a balloon doesn't become more tractable because you like the sound.

- Readjusting my seat height doesn't become more tractable because it's uncomfortable.

The specific example I chose was for the purpose of being evocative, but is still precisely correct in providing an example of: presenting a wish for X as evidence of tractability of X is silly.

I object to any argument of the form: "Oh, but this wish is a medium wish and you're talking about a large wish. Totally different."

I hold that my position holds in the presence of small, medium, _or_ large wishes. For any kind of wish you'd like!


Those are all better analogies than the original one you gave, which didn't illustrate your point as clearly as they do.


Unavoidable: expecting someone else to make the connection isn't a viable strategy in semi-adversarial conditions, so it has to be bound into the local context, which costs clarity:

- Escaping death doesn't become more tractable because you don't want to die.

This is trivially 'willfully misunderstood', whereas my original framing is more difficult -- you'd need to ignore the parallel with the root level comment, the parallel with the conversation structure thus far, etc. Less clear, but more defensible. It's harder to plausibly say it is something it is not, and harder to plausibly take it to mean a position I don't hold (as I do basically think that requiring formalized proofs is a _practically_ impossible ask).

By your own reckoning, you understood it regardless. It did the job.

It does demonstrate my original original point though, which is that messages under optimization reflect environmental pressures in addition to their content.


I don't know why you can't accept that your analogy was bad. Learn from it and move on with your life.


Learn what? I don't agree and you haven't given reasons. I don't write for your personal satisfaction.


Wishes can be converted to incentives. What if the incentives changed such that formally verified proofs were rewarded more and informal "proofs" less?


If enough people who care about this can and will do something about it (making formalization easier for the average author), that happens over time. Today there's a gap, and in the figurative tomorrow, said gap shrinks. Who knows what the future holds? I'm not discounting that the situation might change.


It's super easy to change, imho: one could make a cryptocurrency using PoT (Proof of Theorem), as opposed to just proof of stake or proof of work.

What do Bitcoin etc. actually prove in each block? That a nonce was brute-forced until some hash had so many leading zeros? Comparatively speaking, which blockchain would be more convincing as a store of value: one that doesn't substantially attract mathematicians and cryptographers, or one that does attract verifiably correct mathematicians and cryptographers?
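
For reference, this is roughly all the "work" a proof-of-work block demonstrates (a Python sketch with a toy difficulty rule; real Bitcoin hashes a block header twice with SHA-256 and compares against a numeric target, not a leading-zeros check on a hex digest):

  import hashlib

  def mine(block_data: str, difficulty: int) -> int:
      # Brute-force a nonce until the hash of the data plus nonce starts with `difficulty` zeros.
      nonce = 0
      while True:
          digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
          if digest.startswith("0" * difficulty):
              return nonce
          nonce += 1

  nonce = mine("toy block", difficulty=4)
  print(nonce, hashlib.sha256(f"toy block{nonce}".encode()).hexdigest())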

Investors would select the formal verification chain as it would actually attract the attention of mathematicians, and mathematicians would be rewarded for the formalization of existing or novel proofs.

We don't need to wait for the magic constellation of the planets 20 years from now, nor wait for LLMs etc. to do all the heavy lifting (although they will quickly be used by mathematics "miners"); a mere alignment of incentives can do it.


I grossly underestimate the value of the time of highly educated people having to decode the arguments of another expert? Consider all the time saved if, for each theorem-proof pair, the proof were machine readable: you could let your computer verify the claimed proof as a precondition to studying it.

That would save a lot of people a lot of time, and it's not random people's time being saved, it's highly educated people's time. That would allow much more novel research to happen with the same number of expert-years.

If the population of planet A used formal verification, and planet B refused to, which planet do you predict would evolve faster?


You appear to be deliberately ignoring the point.

Currently, in 2025, it is not possible in most fields for a random expert to produce a machine checked proof. The work of everyone in the field coming together to create a machine checked proof is also more work than for the whole field to learn an important result in the traditional way.

This is a fixable problem. People are working hard on building up a big library of checked proofs, to serve as building blocks. We're working on having LLMs read a paper, and fill out a template for that machine checked proof, to greatly reduce the work. In fields where the libraries are built up, this is invaluable.

But as a general vision of how people should be expected to work? This is more 2035, or maybe 2045, than 2025. That future is visible, but isn't here.


It's interesting that you place it 10 or 20 years from now, given that MetaMath's initial release was... 20 years ago!

So it's not really about the planets not being in the right positions yet.

The Roman Empire lasted for centuries. If they had wanted to do rigorous science, they could have built cars, helicopters, ... But they didn't (in Rome, do as the Romans do).

This is not about the planets not being in the right position, but about Romans in Rome.


Let's see.

I could believe you, an internet stranger. And believe that this problem was effectively solved 20 years ago.

Or I could read Terry Tao's https://terrytao.wordpress.com/wp-content/uploads/2024/03/ma... and believe his experience that creating a machine checkable version of an informal proof currently takes something like 20x the work. And the machine checkable version can't just reference the existing literature, because most of that isn't in machine checkable form either.

I'm going with the guy who is considered one of the greatest living mathematicians. There is an awful lot that goes into creating a machine checkable proof ecosystem, and the file format isn't the hard bit.


20x the work of what? The work of staying vague? There is no upper limit to the "work" savings: why not be 5 times vaguer, so formal verification can be claimed to be 100x more work?

If ultimate readership (over all future) were less than 20 per theorem, or whatever the vagueness factor would be, the current paradigm would be fine.

If ultimate readership (not citation count) were higher than 20 per theorem, it's a net societal loss to have the readers guess what the actual proof is; it's collectively less effort for the author to formalize the theorem than it would be to have the readers reconstruct the actual proof. As mathematicians both read and author proofs, they would save themselves time, or would be able to move the frontier of mathematics faster. From a taxpayer perspective, we should precondition mathematics funding (not publication) on machine-readable proofs. This doesn't mean every mathematician would have to do it themselves: if some hypothetical person had crazy good intuitions and a high enough rate of results, this person could hire people to formalize them, to meet the precondition. As long as the results are successfully formalized, this team could continue producing mathematics.


Plus, mathematics isn't just a giant machine of deductive statements. And the proof-checking systems are in their infancy and require huge amounts of effort even for simple things.


> mathematics isn't just a giant machine of deductive statements

I know HN can be volatile sometimes, but I sincerely want to hear more about these parts of math that are not pure deductive reasoning.

Do you just mean that we must assume something to get the ball rolling, or what?


I think the point was that it's not a machine.

Stuff that we can deduce in math with common sense, geometric intuition, etc. can be incredibly difficult to formalize so that a machine can do it.


What do you mean with "do it" in

"...etc. can be incredibly difficult to formalize so that a machine can do it." ?

1. do it = search for a proof

2. do it = verify a purported proof?


Deduce. So your #2.


Of course a machine can verify each step of a proof, but that formal proof must be first presented to the machine.


Right. And I said it's incredibly difficult to formalize so that a machine can do it.

I don't understand what you're confused about.


There's nothing difficult about formalizing a proof you understand.

Formalizing hot garbage supposedly describing a proof can be arbitrarily difficult.

The problem is not a missing library. The number of definitions and lemmas indirectly used is often not that large. Most of the time wasted when formalizing is spent discovering, time and time again, that prior authors are wasting your time, sometimes with verifiably false assumptions, while the community keeps sending you around to yet another gap-filling approach.


> There's nothing difficult about formalizing a proof you understand.

What are you basing that on? It's completely false.

If that were true, we would have machine proofs of basically everything we have published proofs for. Every published mathematical paper would be accompanied by its machine-checkable version.

But it's not, because the kind of proof suitable for academic publication can easily take multiple years to formalize to the degree it can be verified by computer.

Yes of course a large part depends on formalizing prior authors' work, but both are hard -- the prior stuff and your new stuff.

Your assertion that there's "nothing difficult" is contradicted by all the mathematicians I know.


For one, some geometric proofs by construction can literally involve pictures rather than statements, right?


Sure, the history of mathematics used many alternative conceptions of "proof".

The problem is that such constructions were later found to be full of hidden assumptions. Like working in a plane vs on a spherical surface etc.

The advantages of systems like MetaMath are:

1. Prover and verifier are essentially separate code bases; indeed, the MM prover is essentially absent, and it's up to humans or other pieces of software to generate proofs. The database just contains explicit axioms, definitions, and theorem claims, with proofs for each theorem. The verifier is a minimalistic routine with a minimal number of lines of code (basically substitution maps, with strict conditions; see the toy sketch after this list). The proof is a concrete object, a finite list of steps.

2. None of the axioms are hardcoded or optimized, like they tend to be in proof systems where proof search and verification are intermixed, forcing axioms upon the user.
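
To give a flavor of point 1, here is a toy sketch (Python); it is nowhere near the real Metamath format, just the core move of a minimal verifier: apply a substitution map to a previously accepted assertion and check that its hypotheses literally match statements you already hold. All the symbols and statements below are made up.

  def substitute(expr, subst):
      # Replace each variable symbol with its assigned list of symbols.
      out = []
      for sym in expr:
          out.extend(subst.get(sym, [sym]))
      return out

  def apply_assertion(hypotheses, conclusion, subst, already_proved):
      # Accept the step only if every substituted hypothesis is a statement
      # we already hold; the substituted conclusion then becomes available.
      for hyp in hypotheses:
          if substitute(hyp, subst) not in already_proved:
              raise ValueError("step rejected: missing " + " ".join(substitute(hyp, subst)))
      return substitute(conclusion, subst)

  # Toy use: from "a" and "a -> b", conclude "b" via a modus-ponens-style assertion.
  proved = [["a"], ["a", "->", "b"]]
  step = apply_assertion(
      hypotheses=[["P"], ["P", "->", "Q"]],
      conclusion=["Q"],
      subst={"P": ["a"], "Q": ["b"]},
      already_proved=proved,
  )
  print(step)   # ['b']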


>Do you just mean that we must assume something to get the ball rolling

They're called "axioms"


> mathematics isn't just a giant machine of deductive statements

I think the subject at question here is mathematical truth, not "mathematics" whatever that means.


>You say that like it’s even remotely feasible at the frontier of mathematics and not a monumental group effort to turn even established proofs into such.

Is it really known to be the frontier as long as it's not verified? I would call the act of rigorous verification the acknowledgement of a frontier shift.

Consider your favorite dead-end in science, perhaps alchemy, the search for alcahest, the search for the philosophers stone, etc. I think nobody today would pretend these ideas were at the frontier, because today it is collectively identified as pseudoscience, which failed to replicate / verify.

If I were the first to place a flag on some mountain, that claim may or may not be true in the eyes of others, but time will tell and others replicating the feat will be able to confirm observation of my flag.

As long as no one can verify my claims they are rightfully contentious, and as more and more people are able to verify or invalidate my claims it becomes clear if I did or did not move the frontier.


One doesn't need to be an expert in machine-readable mathematics to understand how to formalize mathematics into a machine-readable form.

If one takes the time to read the free book accompanying the Metamath software, and reimplements it in about a weekend's time, one learns to understand how it works internally. Then, playing around a little with mmj2 or so, you quickly learn how to formalize a proof you understand. If you understand your own proof, it's easy to formalize it. One doesn't need to be "an expert in machine-readable mathematics".


Do you have the weekend free? Perhaps you can take this new proof and show us how it is done.


If one is given an incomplete proof (i.e. one where not every step is justified in terms of theorems completely justified before it), there is an amount of brute-force guessing involved in reconstructing which intermediate steps weren't jotted down. Of course it takes 20x or more effort if the prover refused to write down certain steps. It even happens that, when a gap is pointed out, it takes the original "prover" a lot of time to find a proof for it, hence it wasn't originally proven.

If I find my own proofs, or if the proof of someone else is clearly written, formalization is not hard at all.

Let us assume for the sake of this discussion that Wiles' latest proof for FLT is in fact complete, while his earlier proof wasn't. It took Wiles and his helper more than a weekend to close the gap. Imagine no one had challenged the proof or pointed out this gap. Anyone tasked with formalizing it would face the challenge of trying to figure out which result (incorrectly presumed to be already known) was used in a certain step. The formalizer is in fact finishing an unfinished proof.

After that gap was closed, who else was willing to point at the next gap? There is always some prestige lost when pointing at a "gap" and then watching the original prover(s) close it; in a sense, they saw how to prove it while the challenger did not. This dynamic is unhealthy. To claim a proof we should expect a complete proof; the burden of proof should lie on the claimant, not on the verifier.


I actually think math and the sciences should introduce what I call "synthesis" much earlier. I.e., I don't think it's unfair to give students all the ingredients and add a question on the exam to see if they can take those ingredients and apply them to a problem type they haven't seen before. (This is a great differentiator between C students and A students.) Or, for a science class, rather than perform an experiment, I think the students should have to actually DESIGN the experiment first. (I had one laboratory exam in a 2nd-semester undergrad chem class that did this and it was amazing! The students performed pretty well at it too. It consisted of being told to figure out how much zinc was in a lozenge. We were also maybe given a handy reaction formula, and that was it. You had to design your analysis procedure, figure out how to get the quantity you wanted out of it, and then actually perform your analysis, all within the exam period.)

I think not doing this starting in like middle school is a big part of the reason why people think math/science is useless. Unless the exact scenario they have been taught pops up, they can very rarely see the application. But the real world NEVER works this way. A problem is NEVER formulated as a straightforward, well-formed problem. Figuring out how to mold it into something you can apply the tools you know to is, in and of itself, a REALLY important skill to practice, and sadly, we almost NEVER practice it. Only in grad school does that type of thing come up.


If you follow the things that have been disclosed / leaked / confirmed once they're 20+ years out of date, then yes, the probability that this is true is significant.


Don't get excited.

The author also claims to have proved the twin prime conjecture. https://figshare.com/articles/dataset/The_Twin_Prime_Conject...

They don't seem to be affiliated with any university and don't seem to collaborate with anyone except this one person, Andrew Elliot.

My assessment of the probability that this is a real proof: Less than 0.1%.


Other red (or at least yellow) flags:

1. Typeset in (what appears to be) Microsoft Word. Anyone under the age of 90 who knows enough math to prove the Riemann Hypothesis will have learned LaTeX and will strongly prefer it.

2. Casually introduces novel terminology like "entropy-spiral coordinate" without explanation, inconsistent with norms of mathematical exposition.

3. Social absurdities characteristic of crankery. Nobody with enough knowledge to prove the Riemann hypothesis thinks they need to put "rights holder" after their name in the proof.


Riemann, twin primes, and a unified theory of physics in one year; busy couple. Amusing comment on the latter:

Comments: 335 Pages. (Note by viXra Admin: File size reduced by viXra Admin; please submit article written with AI assistance to ai.viXra.org)

https://figshare.com/articles/dataset/Structured_Determinism...


Is there any real expert opinion on this? The abstract itself reads rather dense.

That said, if there's any field in which "independent researchers" can excel, it should be math; it's not like you need an experimental group to crib off of.


  >  Each appendix isolates and resolves one of the classical obstacles to a self-contained Hilbert–Pólya formulation—self-adjointness, trace-class bounds, Paley–Wiener confinement, Weyl normalization, and uniqueness of the arithmetic weights.

Given the nature of the problem and the unsolicited mention of a "confinement manifold" earlier in the abstract, this summary gives very strong vibes of "My corral has six well closed gates, why do you keep asking about the fence?". This is in addition to not trusting complex and unclear proofs.


Pro tip: state the assumptions baked into your estimate. If one of them turns out to be wrong, you can renegotiate the price, although depending on the client you may not always want to, in order to show good will and whatnot.

