The motivation for "implementing" physics in code is that you can't "cheat": you have to spell out every step in a formal way. The motivating example in SICM is that the usual way the Euler-Lagrange equations are written ... doesn't make sense.
The authors explain: "Classical mechanics is deceptively simple. It is surprisingly easy to get the right answer with fallacious reasoning or without real understanding. Traditional mathematical notation contributes to this problem. Symbols have ambiguous meanings that depend on context, and often even change within a given context."
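To make that concrete (my paraphrase of SICM's point, with the book's notation reproduced from memory): in the traditional form

    % Traditional form: \dot{q} is treated as an independent variable
    % inside the partial derivative, but as a function of t under d/dt.
    \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0

the same symbol plays two incompatible roles. SICM's functional notation makes every object explicit:

    % L is a function of a tuple; \partial_i L is the partial derivative
    % with respect to the i-th argument; \Gamma[q](t) = (t, q(t), Dq(t));
    % D is the ordinary derivative of a one-argument function.
    D(\partial_2 L \circ \Gamma[q]) - \partial_1 L \circ \Gamma[q] = 0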
And why not just "code" but "functional code"? Well, "taking the derivative of a function" makes a lot more sense if that function has no side effects (etc.). There is a tighter correspondence between functions in the programming sense and functions in the mathematical sense.
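For instance (a toy Haskell sketch of my own, not from either book), a numerical derivative is just a higher-order function, and it only behaves sensibly because the function it receives is pure:

    -- Central-difference numerical derivative as a higher-order function.
    -- This is only well-defined because f is pure: f x returns the same
    -- value every time, so "the derivative of f" actually means something.
    derivative :: Double -> (Double -> Double) -> (Double -> Double)
    derivative h f = \x -> (f (x + h) - f (x - h)) / (2 * h)

    main :: IO ()
    main = print (derivative 1e-6 (\x -> x * x) 3.0)  -- prints ~6.0

If f instead read mutable state, its "derivative" would depend on evaluation order and the correspondence with the math would break down.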
I don't think the MIT guys have the same motivations as the author of this book. He (Walck) discusses the suitability of (a subset of) Haskell in this article: https://arxiv.org/abs/1412.4880
Maybe someone else can shed light on the MIT mindset. Certainly some of Walck's points apply to Scheme as much as to Haskell, but Scheme lacks Haskell's type system and the syntactic convenience of curried functions (quick sketch below). The basic strength of functional programming is the lack of complex imperative bookkeeping: your code looks more like math.
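To illustrate the currying point (a hypothetical Haskell snippet, not from Walck):

    -- Every multi-argument Haskell function is automatically curried,
    -- so specializing one is just leaving off trailing arguments.
    force :: Double -> Double -> Double
    force mass accel = mass * accel

    -- Partial application gives a new function with the mass baked in.
    forceOnCart :: Double -> Double
    forceOnCart = force 2.0

    main :: IO ()
    main = print (forceOnCart 9.8)  -- 19.6

In Scheme you'd write an explicit wrapper, e.g. (lambda (a) (force 2.0 a)), every time.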
My impression is that SICP and SICM are eccentric.
Yes, and that's like arguing that spaces between words are a syntactic distraction. They're clearly not; more syntax rules can make a language simpler to understand (for both humans and computers).
A very smart CS guy I know pitched functional programming for scientific computing: he said it would greatly speed up codes by not spending time computing results that were never going to be used.
Although that's not a terrible idea, I have never actually seen a major scientific code that was based on functional programming and was significantly faster than its non-FP competitors. My guess is that the folks writing these codes are already pretty smart, aren't doing any extra work that could easily be removed, and already take advantage of non-functional algorithms that give them significant speedups.
I've heard that before, usually from people with no experience in actual scientific computing. There's nothing wrong with using functional programming in scientific applications. I do. But I don't see how it's "specifically" good for scientific programming.
The thing about performance in scientific programming is that it's often binary: you either need the very best, or you don't care about it at all. Unlike other areas of programming, there is no middle ground. If you need your scientific code to be performant, you need to squeeze every last bit of performance out of your hardware, which you can only do with something like Fortran or C. If you don't care about performance, then it doesn't matter. That's why Python is so popular.
Ideally I would love for something like F# to replace Python in the scientific computing space, but the ecosystem is so much larger in Python. That's what matters to most scientists.
Generally agree, but: the idea for FP in scientific computing would be for the FP-optimizing compiler to elide any computation that doesn't contribute to the final result.
The analogy I think of is tree traversal. A smart person can write an optimal tree traversal algorithm and make their program finish quickly whether or not the user requested that part of the algorithm's results, but FP can realize the program never outputs the tree, so traversing it can be skipped entirely. OK, that's not a great analogy, but the point is that in principle, FP optimization could find a cheaper way to produce the same exact values as a simulation written in a non-functional language.
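A minimal Haskell sketch of that idea (mine, not drawn from any real scientific code): under lazy evaluation, a value that is never demanded is never computed.

    import Debug.Trace (trace)

    -- An "expensive" result that announces when it actually gets evaluated.
    expensive :: Int
    expensive = trace "computing expensive result..." (sum [1 .. 10000000])

    cheap :: Int
    cheap = 42

    main :: IO ()
    main = do
      -- Both components go into the pair, but only `cheap` is demanded,
      -- so the trace never fires and the big sum is never run.
      let results = (cheap, expensive)
      print (fst results)

A compiler for a strict language can often achieve the same elision with dead-code analysis; laziness just makes it the default semantics instead of an optimization.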
How often are there competing implementations in scientific computing? Most of the time people are doing just enough to publish a paper, or maybe maintaining a single library that everyone uses. Few people have the inclination, and even fewer the funding, to "rewrite everything".
In finance, which has a lot of parallels with scientific computing but tends to end up with semi-secret, parallel, competing implementations of the same ideas, functional programming has had significant (though by no means universal) success in doing exactly what you describe.
Let's see. The two big codes I worked with, BLAST and AMBER, have competitors. For BLAST there is a long history of codes that attempted to do better, and I don't think anybody really succeeded, except possibly HMMER2. Both BLAST and HMMER2 had decades of effort poured into them. BLAST was rewritten a few times by its supporting agency (NCBI), and the author(s) of HMMER rewrote it to become HMMER2. I worked with the guy who wrote the leading competitor to HMMER; he was an independently wealthy computer programmer (with a physics background). In the case of AMBER, there are several competitors: GROMACS, NAMD, and a few others are all used frequently. AMBER has been continuously developed for decades (I first used it in '95 and already it was... venerable).
All the major players in these fields read each other's code and papers and steal ideas.
In other areas there are no competitors; there's just "write the minimal code to get your idea that contributes 0.01% more to scientific knowledge, publish, and then declare code bankruptcy". And a long tail of low- to high-quality stuff that lasts forever and turns out to be load-bearing but also completely inscrutable and unmodifiable.
After typing that out, I realize I just recapitulated what you said in your first paragraph. My knowledge of finance doesn't extend much beyond knowing "Jane Street Capital has been talking about FP for ages", and most of the people I've talked to say their work in finance (mostly HPC) is C++ or hardware-based.
My impression is that it can be a very frustrating way to learn mechanics if you don't have much interest in functional programming.