> For any practical program, memory usage and number of operations are part of the engineering specification and no one will deem correct a program that exceeds those specifications. So you just confirmed “impractical”, “academic” and “niche” charges.

I've encountered few C programmers who can predict what instructions will be emitted by their compiler.

Update: You might be surprised, once optimizations are enabled, how similar the code emitted by gcc and GHC can be for similar programs.

Fewer still are those who can specify their pre- and post-conditions and loop invariants in predicate calculus in order to prove their implementation correct.

Most people wing it and rely on past experience or the wisdom of the crowd, what I like to call programming by folklore. It's useful for a lot of tasks, and I use it all the time, but it's not the only way.

The nice thing about Haskell here is that, while there is a lot you cannot prove (termination and so on; verification friends, please understand I'm generalizing), you can write a sufficient amount of your specification and reason about the correctness of the implementation in the same language.

This has a nice effect: you can write the specification of your algorithm in Haskell. It won't be efficient enough for real use at first, but you can usually apply some basic algebra to transform the program you know is correct into one that is performant, without changing its meaning.
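A textbook instance is list reversal: the naive definition is the specification, and a short equational argument (using the associativity of (++)) turns it into a linear-time version. A minimal sketch, with illustrative names:

    -- Specification: clearly correct, but O(n^2), because (++) is
    -- linear in the length of its left argument.
    reverseSpec :: [a] -> [a]
    reverseSpec []       = []
    reverseSpec (x : xs) = reverseSpec xs ++ [x]

    -- Derived version: introduce an accumulator with the invariant
    --   rev xs acc == reverseSpec xs ++ acc
    -- (provable by induction on xs). Same meaning, now O(n).
    reverseFast :: [a] -> [a]
    reverseFast xs = rev xs []
      where
        rev []       acc = acc
        rev (y : ys) acc = rev ys (y : acc)

And while you're doing the derivation, you can QuickCheck the two definitions against each other.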



> I've encountered few C programmers who can predict what instructions will be emitted by their compiler.

That's an irrelevancy. Being unable to predict the specific instructions does not preclude one from making reasonably accurate judgements about a program's performance.

It is a fact that reasoning about the performance of a Haskell program is virtually impossible unless you're an active GHC developer, and that's why the language remains unused for practical problems. Apart from a buggy pandoc and a few blockchain scams, that is.


That's simply not true. You can reason about time performance with the same tools we use for nearly every other program. Predicting memory performance is harder, due to optimizations and because untrained Haskell developers have a hard time spotting where their code leaves unevaluated thunks on the heap. However, the memory profiling tools are there and are great at catching these, so in practice, as in C++ and many other languages, it's a pain but not a huge deal.
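To make the thunk problem concrete, here's the classic example, assuming a GHC build (compile with -rtsopts, run with +RTS -s, and compare maximum residency; note that with -O2, strictness analysis will sometimes rescue the leaky version for you):

    import Data.List (foldl')

    -- Leaks: foldl piles up the thunk chain (((0 + 1) + 2) + ...)
    -- on the heap before anything forces it.
    leakySum :: Integer
    leakySum = foldl (+) 0 [1 .. 10000000]

    -- Runs in constant space: foldl' forces the accumulator each step.
    strictSum :: Integer
    strictSum = foldl' (+) 0 [1 .. 10000000]

    main :: IO ()
    main = print leakySum >> print strictSum

A heap profile points straight at the retained thunks.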

As for practical problems, I dunno. I work in Haskell full-time at a company that isn't doing blockchains, and I stream myself working in Haskell once a week on pretty practical things. I've made a couple of small games and a PostgreSQL logical replication client, and have been learning different algorithms. All pretty practical to me.


> You can use the same tools used to reason about performance in time as we do for nearly every program.

That's simply not true. Reasoning about the performance of imperative languages is fundamentally easier.


You can write imperative programs in Haskell, and many people do. You also get effect tracking, giving you perhaps "the world's best imperative language" :)
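For instance, a mutable-accumulator loop in the ST monad; the effects are visible in the types, and runST guarantees the mutation can't be observed from outside. A small sketch:

    import Control.Monad (forM_)
    import Control.Monad.ST (runST)
    import Data.STRef (modifySTRef', newSTRef, readSTRef)

    -- An imperative sum: allocate a mutable cell, loop over the input
    -- bumping it, then read the result back out. ST confines the
    -- mutation, so sumST is a pure function to any caller.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      ref <- newSTRef 0
      forM_ xs $ \x -> modifySTRef' ref (+ x)
      readSTRef ref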



