
You sound like you're writing from the future.

"or see a doctor, if that’s still a thing people do?"

Yep. People still do that.


If you just pick one of those subjects, you'll probably find a textbook just as long as his entire PDF trying to cover 13+ subjects.

Sorry to be negative Nancy over here, but you're going to need more than 54 pages to cover calculus. There is value in organizing the major theorems in the different disciplines. But, to be honest, this doesn't really serve the beginner.


Two thoughts here:

1. I don't think it is at all intended to serve the beginner.

It's geared towards readers with a reasonable amount of mathematical maturity already (it explicitly says that on the landing page).

2. Many, many of the pages of most introductory calculus textbooks are spent on exercises and on the specifics of computing integrals and derivatives of particular functions - none of this is necessary to understand the concepts themselves.

For example, Baby Rudin (the standard textbook for Analysis for math majors) covers Sequences, Series, Continuity, Differentiation, and the Riemann integral in less than 100 pages (including exercises).


So this is aimed at somebody who has mathematical maturity but prefers... less content and detail? The point is that you are losing something in a shortened presentation. You're not just losing "unnecessary exercises" as you put it.


From the book

> Philosophy behind the Napkin approach

> As far as I can tell, higher math for high-school students comes in two flavors:

> • Someone tells you about the hairy ball theorem in the form “you can’t comb the hair on a spherical cat” then doesn’t tell you anything about why it should be true, what it means to actually “comb the hair”, or any of the underlying theory, leaving you with just some vague notion in your head.

> • You take a class and prove every result in full detail, and at some point you stop caring about what the professor is saying.

> Presumably you already know how unsatisfying the first approach is. So the second approach seems to be the default, but I really think there should be some sort of middle ground here. Unlike university, it is not the purpose of this book to train you to solve exercises or write proofs, or prepare you for research in the field. Instead I just want to show you some interesting math. The things that are presented should be memorable and worth caring about. For that reason, proofs that would be included for completeness in any ordinary textbook are often omitted here, unless there is some idea in the proof which I think is worth seeing. In particular, I place a strong emphasis over explaining why a theorem should be true rather than writing down its proof.


As I said, intro calculus books will spend a large amount of time teaching you the mechanics of finding closed form solutions for integrals and derivatives of various kinds of functions. Look at https://ocw.mit.edu/courses/res-18-001-calculus-fall-2023/pa... for an example. Most of that content is not that important to understand the concepts.

And yes, with more mathematical maturity you definitely don't need as much detail. The proofs get terser as you're expected to be able to fill out the more straightforward details yourself.


My first calculus class in high school was about 10% "conceptual explanation of limits, derivatives, and integrals", 30% "techniques for evaluating derivatives", 50% "techniques for evaluating integrals", and maybe another 10% (or less) "justifications of the correctness of those techniques". (I guess I'm putting the Fundamental Theorem of Calculus in the last 10% here.)

The style of this textbook does seem to primarily skip the "techniques for evaluating" stuff, on the basis that you just want to understand what each branch of mathematics is about and what kinds of theorems it has that might relate to the larger edifice of mathematics.


I don't quite get how it's supposed to introduce calculus/analysis - the introductory chapters just start talking about metric spaces without even bothering to properly introduce the real numbers or their properties. I don't think that's quite sensible. For comparison, mathlib4 of course does it right by starting from topological spaces - and it manages to nicely simplify things throughout, by defining a basic "tends to" notion using set-theoretic filters.
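
If you're curious what that looks like in practice, here is a rough Lean 4 / mathlib4 sketch (paraphrased from memory, so exact module and declaration names may differ): "tends to" is literally just an inequality between filters, and continuity at a point is that same notion specialized to neighborhood filters.

    import Mathlib.Topology.Basic

    open Filter Topology

    variable {X Y : Type*} [TopologicalSpace X] [TopologicalSpace Y]

    -- "Tendsto f l₁ l₂" unfolds to an inequality of filters:
    -- the pushforward of l₁ along f is finer than l₂.
    example (f : X → Y) (l₁ : Filter X) (l₂ : Filter Y) :
        Tendsto f l₁ l₂ ↔ Filter.map f l₁ ≤ l₂ :=
      Iff.rfl

    -- Continuity at a point is the same notion specialized to
    -- neighborhood filters - no ε-δ required.
    example (f : X → Y) (a : X) :
        ContinuousAt f a ↔ Tendsto f (𝓝 a) (𝓝 (f a)) :=
      Iff.rfl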


Would anyone here want their kids to attend this school?


I think a big part of undergraduate math is theory/abstraction building, which is usually very different than the activity of programming. I've studied quite a bit of math, so I can appreciate both. But I can definitely see why the former drives people away.


I wonder how quickly you can load some of the modern, popular websites on a dial-up connection.


We have a whole generation of programmers that will justify 12MB of JavaScript bundles to output "Hello world".


Easy to see for yourself using the throttling option in the developer tools of popular browsers.
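
If you want numbers rather than a feel, you can also script it. Here's a rough sketch using Puppeteer and the DevTools Protocol's Network.emulateNetworkConditions to approximate a 56k modem; the throughput/latency figures and the target URL are just placeholders, and API details may vary by Puppeteer version.

    // throttle-dialup.ts - time a page load over simulated dial-up
    import puppeteer from 'puppeteer';

    const DIAL_UP = {
      offline: false,
      latency: 150,                    // ms, roughly modem-era round trips
      downloadThroughput: 56_000 / 8,  // ~7 KB/s down
      uploadThroughput: 33_600 / 8,    // ~4.2 KB/s up
    };

    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    const cdp = await page.createCDPSession();
    await cdp.send('Network.enable');
    await cdp.send('Network.emulateNetworkConditions', DIAL_UP);

    const start = Date.now();
    await page.goto('https://example.com', { waitUntil: 'load', timeout: 0 });
    console.log(`Loaded in ${((Date.now() - start) / 1000).toFixed(1)}s`);

    await browser.close();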


This orange site is fine but I wouldn’t hold my breath on any others


Something low-resource demand (like my blog) would probably be okay, save for a few large pics on some pages. Most people who run in the smolweb circles also like vintage computing, so creating webspaces using only HTML & CSS is common practice, which should do fine over a 56k connection.


Google homepage: two or three minutes

A Google SERP with rich content: about 20 minutes

A typical Facebook post: ten minutes

CNN home page: half an hour

YouTube: forget it


>CNN home page: half an hour

This should be much faster; it was created for people with limited network access.

https://lite.cnn.com/


Why would anyone want the non-lite version?


Wow! That's a great option for browsing CNN!


RealMedia: buffering


I’d bet a lot of them are using old computers too, with who knows what browser and OS. It’s probably hard to tell loading issues from rendering issues


You can save many hours by installing uBlock Origin.


Page loading times would probably be measurable with a sundial or calendar.


No need to wonder, just end up in an old building with thick brick walls that are only penetrated by a weak 2G signal and try to load something on your phone.


Not possible anymore in many areas, where 2G and 3G networks have been shut down to re-use spectrum for newer standards. The last time I was in a rural area with minimal signal strength, my phone was alternating between satellite-only messaging or 5G with 5-10 MB/s. I was actually able to download a movie in a quite reasonable amount of time, presumably because there wasn't anyone else doing much with the cell tower I was barely in range of.


Out in rural Michigan, there are plenty of spots where an LTE signal technically exists but you can't do much beyond calls and texts (and even those fail sometimes), and it's interesting to see what apps still work. For instance, YouTube will still load and play videos, albeit at an abysmal pace that's really not worth it (and it's interesting to see how the app prioritizes the video itself over its metadata, to the point that you could watch an entire video in 144p before the channel name and description load), while my bank's app just fails entirely despite ostensibly requiring less bandwidth than video playback.


You can test it yourself in the comfort of your gigabit connection. I wanted to test my barrage of very small images using lazy loading on a crappy connection. I learned that Chrome can easily pretend to suck. On Safari you somehow need to download a special tool but it works just as well.

Or just as badly, I guess.


Do you know if Firefox or Edge has a similar feature, and if so, what it's called?



I think Chrome dev tools has a button to simulate different internet speeds.

But I'm pretty sure the answer is really damn slow.


I know Firefox has it, since I used it to test my own website. Once you go past text and really small images, it starts taking minutes to load.


Seems like another distraction. If it permanently disappears, maybe then it's worth talking about?


You're posting this in a thread filled with stories which paint him as an asshole.


My point was that 'child prodigy' was not loose flattery or exaggeration by a friend, because he was at 'has newspaper articles being written about him' levels of child prodigy and was unarguably a child prodigy.


And child prodigies are notorious "arseholes". QED.


Reducibility is usually a goal of intellectual pursuits? I don't see that as a fault.


Ok. A lot of things are very 'reducible' but information is lost. You can't extend back from the reduction to the original domain.

Reduce a computer's behavior to its hardware design, state of RAM, and physical laws. All those voltages make no sense until you come up with the idea of stored instructions, division of the bits into some kind of memory space, etc. You may say, you can predict the future of the RAM. And that's true. But if you can't read the messages the computer prints out, then you're still doing circuits, not software.

Is that reductionist approach providing valuable insight? YES! Is it the whole picture? No.

This warning isn't new, and it's very mainstream. https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_differen...


'Reducibility' is a property that, if present, makes problems tractable, or possibly even practical.

What you are mentioning is called western reductionism by some.

In the western world it does map to Plato etc, but it is also a problem if you believe everything is reducible.

Under the assumption that all models are wrong, but some are useful, it helps you find useful models.

If you consider Laplacian determinism as a proxy for reductionism, Cantor diagonalization and the standard model of QM are counterexamples.

Russell's paradox is another lens into the limits of Plato, which the PEM assumption is based on.

Those common a priori assumptions have value, but are assumptions which may not hold for any particular problem.


What the person you are replying to is saying is that some things are not reducible, i.e. the vast array of complexity and detail is all relevant.


That's a really hard belief to justify. And what implications would that position have? Should biologists give up?


Concretely we know that there exist irreducible structures, at least in mathematics: https://en.wikipedia.org/wiki/Classification_of_finite_simpl...

The largest of the finite simple groups (themselves objects of study as a means of classifying other, finite but non-simple groups, which can always be broken down into simple groups) is the Monster Group -- it has order 808017424794512875886459904961710757005754368000000000, and cannot be reduced to simpler "factors". It has a whole bunch of very interesting properties which thus can only be understood by analyzing the whole object in itself.
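
(To be clear, the order itself factors into primes just fine - if I remember right it's 2^46 · 3^20 · 5^9 · 7^6 · 11^2 · 13^3 · 17 · 19 · 23 · 29 · 31 · 41 · 47 · 59 · 71, about 8 × 10^53. "Irreducible" here means the group is simple: it has no nontrivial normal subgroups, so unlike a non-simple group it can't be decomposed into smaller building blocks.)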

Now whether this applies to biology, I doubt, but it's good to know that limits do exist, even if we don't know exactly where they'll show up in practice.


That's not really true, otherwise every paper about it would be that many words long. The monster group can be "reduced" into its definition and its properties which can only be considered a few at a time. A person has a working memory of three to seven items.


I think that chemistry, physics, and mathematics, are engaged in a program of understanding their subject in terms of the sort of first principles that Descartes was after. Reduction of the subject to a set of simpler thoughts that are outside of it.

Biologists stand out because they have already given up on that idea. They may still seek to simplify complex things by refining principles of some kind, but it's a "whatever stories work best" approach. More Feyerabend, less Popper. Instead of axioms they have these patterns that one notices after failing to find axioms for a while.


Several different definitions are being bandied about. If you think of reduction as understanding a material system in terms of its components, biology is now reductionist, having abandoned vitalism.


On the other hand, bio is the branch of science with a single accepted "theory of everything": evolution.


Evolution is a theory of the origin of species via natural selection of heritable traits; evolution is not a theory of biogenesis, the origin of life itself.


Yeah, I almost wrote ‘nearly have a theory of everything’ for that reason, but decided it wasn’t worth the extra words. We have a few plausible outlines of how life started, and IMO it doesn’t really matter all that much which one(s) actually happened. When we ourselves are doing biogenesis, there’s no requirement that it has to happen the way it happened before. It would be interesting to know, though, so if in your estimation we don’t have a theory of everything because of that, I’m okay with that.


That's a fine counterexample to "theory of everything", and fertile ground for spirited debate. But I think it's a distinction that's relevant to <1% of the work that biologists do, so like... does it matter?


It would imply that when dealing with complex systems, models and conceptual frameworks are, at the very best, useful approximations. It would also imply that it is foolhardy to ignore phenomena simply because they are not comprehensible within your preferred framework. It does not imply biologists should give up.


"How reducible" is the question. If some particular events require a minimum amount of complexity, how do you reduce them below that?


Biologists don't try to reason everything from first principles.

Actually, neither do Rationalists, but instead they cosplay at being rational.


> Biologists don't try to reason everything from first principles.

What do you mean? The biologists I've had the privilege of working with absolutely do try to. Obviously some work at a higher level of abstraction than others, but I've not met any who apply any magical thinking to the actual biological investigation. In particular (at least in my milieu), I have found that the typical biologist is more likely to consider quantum effects than the typical physicist. On the other hand (again, from my limited experience), biologists do tend to have some magical thinking about how statistics (and particularly hypothesis testing) works, but no one is perfect.


Setting up reasoning from first principles vs magical thinking is a false dichotomy and an implicit swipe.


Ok, mea culpa. So what distinction did you have in mind?


Reasoning is one information-processing process, performed by humans, with bounds on what it can accomplish. It works in a limited context and is inherently incomplete and imperfect. Other non-logical processes, emergent processes, parallel processes such as evolution, process information in ways reasoning cannot. It perhaps should not be surprising that we may have internal systems of understanding that follow these principles, instead of only those of logic or reasoning.

Reasoning from first principles cannot span very far in reality, as for starters the complexity of the argument quickly overwhelms our capacity for it. Its numerous other limits have been well-documented.

Logicomix, Gödel Escher Bach are some common entry points.


>Gödel Escher Bach

I'm kinda new here but am surprised I haven't seen this book mentioned more. Maybe I just haven't seen it or it's old news, but it seems right up HN's alley.


"Reductionist" is usually used as an insult. Many people engaged in intellectual pursuits believe that reductionism is not a useful approach to studying various topics. You may argue otherwise, but then you are on a slippery slope towards politics and culture wars.


I would not be so sure. There are many fields where reductionism was applied in practice and it yielded useful results, thanks to computers.

Examples that come to mind: statistical modelling (reduction to nonparametric models), protein folding (reduction to quantum chemistry), climate/weather prediction (reduction to fluid physics), human language translation (reduction to neural networks).

Reductionism is not that useful as a theory building tool, but reductionist approaches have a lot of practical value.


> protein folding (reduction to quantum chemistry),

I am not sure in what sense folding simulations are reducible to quantum chemistry. There are interesting 'hybrid' approaches where some (limited) quantum calculations are done for a small part of the structure - usually the active site I suppose - and the rest is done using more standard molecular mechanics/molecular dynamics approaches.

Perhaps things have progressed a lot since I worked in protein bioinformatics. As far as I know, even extremely short simulations at the quantum level were not possible for systems with more than a few atoms.


That's a practical limitation. There's no barrier in principle to a complete reduction.


I meant that the word "reductionist" is usually an accusation of ignorance. It's not something people doing reductionist work actually use.


But that common use of the word is ignorant nonsense. So, yes, someone is wrong on the internet. So what?


The context here was a claim that reducibility is usually a goal of intellectual pursuits. Which is empirically false, as there are many academic fields with a negative view of reductionism.


'Reductionist' can be an insult. It can also be an uncontroversial observation, a useful approach, or a legitimate objection to that approach.

If you're looking for insults, and declaring the whole conversation a "culture war" as soon as you think you found one, (a) you'll avoid plenty of assholes, but (b) in the end you will read whatever you want to read, not what the thoughtful people are actually writing.


As opposed to?


I'm really confused by this comment section: is no one considering the people they'll have to work with, the industry, the leadership, the customers, the nature of the work itself, the skillset you'll be exercising... literally anything other than TC when selecting a job?

I don't get why this is a point of contention, unless people think Meta is offering $100M to a React dev...

If they're writing up an offer with a $100M sign on bonus, it's going to a person who is making comparable compensation staying at OpenAI, and likely significantly more should OpenAI "win" at AI.

They're also people who have now been considered to be capable of influencing who will win at AI at an individual level by two major players in the space.

At that point even if you are money motivated, being on the winning team when winning the race has unfathomable upside is extremely lucrative. So it's still not worth taking an offer that results in you being on a less competitive team.

(in fact it might backfire, since you do probably get some jaded folks who don't believe in the upside at the end of the race anymore, but will gladly let someone convert their nebulous OpenAI "PPUs" into cash and Meta stock while they coast)


> even if you are money motivated, being on the winning team when winning the race has unfathomable upside

.. what sort of valuation are you expecting that's got an expected NPV of over $100m, or is this more a "you get to be in the bunker while the apocalypse happens around you" kind of benefit?


$100M doesn't just get pulled out of thin air; it's a reflection of their current compensation: it's reasonable that their current TC is probably around 8 figures, with a good portion that will 10x on even the most miserable timelines where OpenAI manages to reach the promised land of superintelligence...

Also at that level of IC, you have to realize there's an immense value to having been a pivotal part of the team that accomplished a milestone as earth shattering as that would be.

-

For a sneak peek of what that's worth, look at Noam Shazeer: founded an AI chatbot app, fought his users on what they actually wanted, and let the product languish... then Google bought the flailing husk for $2.7 billion just so they could have him back.

tl;dr: once you're bought into the idea that someone will win this race, there's no way that the loser in the race is going to pay better than staying on the winning team does.


Something similar actually happened to me. Somebody called me about a job application I submitted through craigslist/email 6-7 years ago. This was while I was a student. I had to tell the lady that I don't even live in the same area.

Obviously this is nowhere near 48 years.


A number of years ago I was internally referred to a role at one of the FAANGs. When speaking with the recruiter or hiring manager they seemed pretty confused at why I was even qualified. Eventually I asked them what address was written at the top of the resume they were looking at.

"Ah yes. You seem to have a copy of the resume I entered into your system fresh out of high school."

