This varies by use case and by how many (and which) extensions you're using in your Haskell source, and, to a lesser extent, by which libraries the equivalent Java or Go source pulls in.

I haven't taken many measurements, and my production code is biased toward Go, C++, Python, and Java; most of my Haskell experience has been side projects and learning toys, plus writing a type unifier for a production project. I can summarize what I learned, but I'd be interested in seeing better measurements.

First, though, let's be more specific about what you mean by "writing code as fast." I think it should be refined to "time spent writing code" + "time compiling code" + "time spent in the language runtime" + "time spent debugging / reading code." Depending on your project, and on who (if anyone) is collaborating, each of these matters more or less. Sometimes runtime speed dwarfs development or even debugging time; sometimes fast compilation affords the quick feedback loop that keeps development in flow.

Within that framing, Haskell can excel at development time with small teams and limited scope. It lets you write a domain-specific language inside the code, including very custom operators, but that carries a risk of runaway complexity, one that grows roughly with the number of people on the team. Things can get complex fast, and heavy use of recursive type-level machinery can also add to compilation time. If the source is organized well and doesn't need to be very dynamic, that may not be much of a concern. As an underlying engine whose primary purpose is handling very dynamic data inputs, sufficiently complex numerical analysis, or IO management, it would probably do well.
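
To make "very custom operators" concrete, here's a minimal sketch of the kind of embedded DSL I mean. Everything in it (the |> operator, the Expr type) is invented for illustration, not taken from any particular library:

    -- Toy embedded DSL: a tiny expression type plus a custom
    -- left-to-right application operator, like a shell pipe.
    module Main where

    infixl 1 |>

    (|>) :: a -> (a -> b) -> b
    x |> f = f x

    data Expr
      = Lit Int
      | Add Expr Expr
      | Mul Expr Expr

    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    main :: IO ()
    main =
      Add (Lit 2) (Mul (Lit 3) (Lit 4))
        |> eval
        |> print   -- prints 14

A couple of operators like this stay readable; a few dozen of them, each with its own fixity, is where the team-size cost kicks in.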

The packaging system (Hackage) is pretty good, and that benefits development time considerably. Adding some modules can hurt compilation times for much the same reason as above: type inference and instance resolution can get expensive. Undisciplined source management can also lead to a lot of type implementations that are near copies of each other, which ties back into reading and debugging time.
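
For a sense of what "adding some modules" looks like in practice, here is a hypothetical .cabal stanza; the package names and version bounds are examples only, but heavyweight, instance-rich packages of this sort are the usual suspects when builds slow down:

    -- Hypothetical stanza from a .cabal file; names and bounds are examples only.
    executable my-tool
      main-is:          Main.hs
      build-depends:    base  >=4.14 && <5
                      , aeson >=2.0
                      , lens
      default-language: Haskell2010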

For runtime, though, yes: Haskell can be competitive with bare-metal C implementations. I think there have been a few papers about that going back a decade or so.
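
As a rough illustration (not a benchmark, and the function is arbitrary), this is the style of numeric code that GHC with -O2 tends to compile into a tight loop over unboxed Ints, which is where the near-C results come from:

    {-# LANGUAGE BangPatterns #-}

    -- Strict, tail-recursive sum of squares; with -O2 GHC typically
    -- turns the accumulator and counter into unboxed machine ints.
    module Main where

    sumSquares :: Int -> Int
    sumSquares n = go 0 1
      where
        go !acc !i
          | i > n     = acc
          | otherwise = go (acc + i * i) (i + 1)

    main :: IO ()
    main = print (sumSquares 1000000)

The flip side is that lazy, allocation-heavy code in the same language can be much slower, so "competitive with C" depends on writing in this strict style where it matters.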


