
If it's not too much trouble, could you create a minimal demonstration of a simple piece of code, structured for various goals - easy to extend, easy to debug, etc.? I can't defend my code from the best-practice people with a Pareto front Wikipedia article.


There are two aspects to GP's comment.

> Even in the best case, readability just becomes a Pareto frontier[0], given by expressive limits of the dominant programming paradigm

> People forget that readability isn't a function of a specific program - there is no one optimal readability. On the contrary, it's a function of the program and the goals of the reader.

The original Wiki at c2 has a great example [0] comparing the expressive capabilities of functional vs. object-oriented programming, and their suitability towards certain goals - namely, either extending the set of operations the system supports, or expanding the number of data (sub-)types the system models.

In spite of Turing equivalence, some paradigms are better suited than others for expressing certain classes of problems (even if only in terms of readability, dev ergonomics, etc.). Trying to solve a problem that's mismatched with a paradigm's approach to structuring and decomposing problems (e.g. into compositions of functions, or compositions of objects) introduces unnecessary friction.
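To make the tension concrete, here's a minimal sketch of my own (a hypothetical Shape example, not the one from the c2 page; assumes Java 16+ for records and pattern matching):

  // OO decomposition: adding a new shape is one self-contained class,
  // but adding a new operation (say, perimeter) means editing every class.
  interface Shape { double area(); }

  record Circle(double r) implements Shape {
    public double area() { return Math.PI * r * r; }
  }

  record Square(double s) implements Shape {
    public double area() { return s * s; }
  }

  // Functional-style decomposition: adding a new operation is one
  // self-contained function, but adding a new shape means editing every one.
  static double perimeter(Shape shape) {
    if (shape instanceof Circle c) return 2 * Math.PI * c.r();
    if (shape instanceof Square s) return 4 * s.s();
    throw new IllegalArgumentException("unknown shape: " + shape);
  }
Neither layout is wrong - they just pay the extension cost in different places.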

[0] https://wiki.c2.com/?ExpressionProblem


Think "lots of small functions" vs. "few fat functions" - one of those infamous code style holy wars. Look at the arguments people make for and against either.

There is no single answer there, because which style is better depends on what you're doing. For example, "lots of small functions" makes things easier to understand when you're working horizontally, e.g. trying to understand a module at a certain level of abstraction. However, in the same code, if you're trying to understand a single piece of functionality, e.g. to debug it, it's much easier when you have a vertical view - ideally a single fat function with all helper calls inlined. In the first case, all the little functions form a high-level language that aids your understanding; in the latter, they're just noise that burns your working memory on jumping-to-definition around the codebase.
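A minimal illustration of what I mean (a hypothetical Order example of mine, not from any real codebase):

  record Order(long paidCents, long totalCents, int stock, int wanted, String address) {}

  // "Lots of small functions": reads well horizontally, at the module level.
  static boolean canShip(Order o) { return isPaid(o) && inStock(o) && hasAddress(o); }

  static boolean isPaid(Order o)     { return o.paidCents() >= o.totalCents(); }
  static boolean inStock(Order o)    { return o.stock() >= o.wanted(); }
  static boolean hasAddress(Order o) { return o.address() != null && !o.address().isBlank(); }

  // "One fat function": the same logic with every helper inlined,
  // which reads better vertically when you're chasing a single bug.
  static boolean canShipInlined(Order o) {
    return o.paidCents() >= o.totalCents()
        && o.stock() >= o.wanted()
        && o.address() != null && !o.address().isBlank();
  }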

Our current programming paradigm forces you to make this style choice ahead of time. You can't have it both ways. This is the Pareto frontier - as you make your codebase easy for one type of work, it becomes hard to do other kinds of work in it. And this is a stupid state to be in, because you will be doing both horizontal and vertical tasks, and many others that benefit from yet different ways of slicing through the code, and you will be switching gears every few days or weeks.

Another concrete example: exceptions vs. algebraic return types (Expected/Result/Maybe/etc.). People love the latter for code locality, and mostly ignore the ridiculous amount of noise this method adds to all code, and/or invent ever more advanced math to paper over it. Exceptions were much better in this regard, if worse in others, but again, I posit that having to make that choice is dumb in the first place. Personally, I'm fine with Result return types. It's just that, 90% of the time, I don't give a damn about them, because I'm working on the success case/golden path, and they're just pure visual noise - something I'd like to just not see. But then, the remaining 10% of the time, I'd like everything other than the Result types and control flow to disappear, because when working on error handling, the success case becomes the visual noise.
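To make the noise tangible, here's a sketch of the same read written both ways (Java, with a minimal hand-rolled Result, since Java doesn't ship one; assumes Java 17+ for sealed interfaces):

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;

  sealed interface Result<T> permits Ok, Err {}
  record Ok<T>(T value) implements Result<T> {}
  record Err<T>(String message) implements Result<T> {}

  // Exception style: the golden path reads straight through,
  // while the failure path lives elsewhere, non-locally.
  static String readConfigOrThrow(Path p) throws IOException {
    return Files.readString(p);
  }

  // Result style: failure handling is local and explicit, but every
  // caller pays the unwrapping noise even if it only cares about success.
  static Result<String> readConfig(Path p) {
    try {
      return new Ok<>(Files.readString(p));
    } catch (IOException e) {
      return new Err<>(e.getMessage());
    }
  }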

Inventing new species of monads or syntax keywords to cram this, and async, and other cross-cutting concerns into the language isn't the solution. The solution is to stop working on raw plaintext source code, and instead work on representations tailored to specific needs, while treating the underlying source code like we treat assembly today.


I wish there were another Gang of Four with a book on codebase design decisions, some of which you outlined. I need the alternatives laid out clearly and, more importantly, given catchy names that I can refer to when real or self-appointed code reviewers show up.

How do I explain what my preferred "level of abstraction" is, and why it is superior to all the others? How can I be convincing with a fuzzy, subjective term like "visual noise"? Etc.


> How do I explain what my preferred "level of abstraction" is, and why it is superior to all the others?

That's my point: you shouldn't, because there isn't. Once you hit the Pareto front, there's no superior choice. There are just choices, each of which is better in different situations, and once taken, very expensive to back out of. The problem is being forced to make the choice in advance.

> How can I be convincing with a fuzzy, subjective term like "visual noise"?

That's another part of the problem. Different choices may be better for different people. There's no one-size-fits-all here. Which is why, again, it's dumb that we have to make this choice once and for everyone (part of what I mean by "working on single-source-of-truth code").

The way I see it, to stop running in circles on the Pareto front, and to move past it to more powerful ways of dealing with complexity, we need to make things like "lots of small functions vs. few fat ones" or "exceptions vs. expected" subjective, personal, local preferences - no more meaningful than your editor's color scheme or syntax highlighting; inlining needs to be as easy as code folding, etc.


Your arguments still lead me to the entirely opposite conclusion - that different choices would be better for different organizations and different projects. It would make sense for things like "lots of small functions vs. few fat ones" or "exceptions vs. expected" to be well-known tradeoffs, with the choice made in an objective manner according to the type of project, overriding personal preference: for this project we'll explicitly optimize for this type of reader, because we believe this codebase has XYZ properties and will be maintained at ABC level by DEF kind of people.


My argument is that those are fundamentally not project-level tradeoffs; they are "whatever your current ticket is about"-level tradeoffs. Your task may be such that you'll want opposite choices to have been made within a 5-minute span - e.g. "lots of small functions" to get the gist of what the module is doing, followed by inlining everything into a single fat function along a vertical, once you've found the piece of logic you want to debug. Both might also benefit from temporarily pretending error handling is done by unchecked exceptions, to remove visual clutter.

I.e. you're talking strategy and architecture, I'm talking not even tactics, but individual minute-to-minute execution.


IMHO at the ticket-to-ticket level you're not really making these tradeoffs but rather experiencing the consequences of them - it doesn't matter whether, for minute-to-minute execution right now, lots of small functions would be better or worse; you either have them or you don't. And whatever tradeoffs you make for the code you write during this one ticket should take into account the potential future readers.


> it doesn't matter whether, for minute-to-minute execution right now, lots of small functions would be better or worse; you either have them or you don't. And whatever tradeoffs you make for the code you write during this one ticket should take into account the potential future readers.

This is precisely the problem.

The way it should work is: these trade-offs should be purely local, minute-to-minute editor preferences. Inlining some helpers, or hiding error types in the code you see, should be no harder than folding a code block or pinning a peeked function definition. Nor should those "changes" have any effect visible to anyone else. You shouldn't take any "future readers" into account, because there won't be any - rather, you pick the representation you need right this second, and switch to a different one the moment you need it.
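As a sketch of what that could look like (a hypothetical loadUser; Result as in my earlier sketch; db, Row, User and parseUser are all made up), the stored form and the rendered form are one function seen through two lenses:

  // Stored form - what the repository and the error-propagation view see:
  static Result<User> loadUser(long id) {
    Result<Row> row = db.find(id);
    if (row instanceof Err<Row> e) return new Err<>(e.message());
    return new Ok<>(parseUser(((Ok<Row>) row).value()));
  }

  // "Golden path" lens - what my editor would render while I work on the
  // success case, with the error plumbing folded away (same function!):
  static User loadUser(long id) {
    Row row = db.find(id);
    return parseUser(row);
  }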


Ah ok, I was assuming that having this be a fully transparent & reversible matter of representation isn't really possible - but if it is, then sure, I'd agree: it can be treated just like indentation or syntax highlighting color schemes.

But for the major splits, like "lots of small functions" vs. "one big function", even if the IDE could inline the functions for readability, that feels a bit risky: it becomes difficult to talk about the code, or to document a function's interface, when another person is seeing a different semantic structure of the code - it's as if you had different names for the same things.


Until future tech allows reformatting code to a developer's preference (which might not arrive any time soon, or might introduce subtle new bugs, who knows), we could use some framework for making transparent decisions around that Pareto front, no?

(Of course there is a best way to do it, one that optimizes the codebase for some function of productivity, reliability and pleasure. The likelihood and the cost of changes are part of that function. Finding it and advocating for it is the task of the most experienced, wisest team member. Less experienced team members may (or may not) come to understand the wisdom of the better decision in time. But this is just my struggle with postmodernism and relativism, and beside the point.)


> Another concrete example: exceptions vs. algebraic return types (Expect/Result/Maybe/etc.). People love the latter for code locality, and mostly ignore the ridiculous amount of noise this method adds to all code, and/or invent ever more advanced math to paper over it. Exceptions were much better in this regard, if worse in others, but again, I posit that having to make that choice is dumb in the first place.

There's actually at least one programming language that allows you to choose as needed, and it's Visual Basic of all things.

The "On Error Goto" directive says on error you want to jump to an exception handler. "On Error Resume Next" means just keep forging ahead -- which is a terrible idea done badly, but if check the Err object, you can access the status.

It's not quite as good as algebraic return types, but it was an interesting idea, in that you could write each bit of code in whichever style works out best.


I like VB's idiosyncrasies for other reasons, but here my point is different. This is not about making choices at runtime, but about making them in your IDE, for yourself. It's about the lens you view your code through. Hiding or emphasizing error handling should be a visual operation, with no impact on semantics - just like e.g. folding or unfolding blocks of code in your editor.


I'm not sure this is even possible - all the value comes when the underlying application is no longer simple. If you try to make a minimal example it just looks over-engineered, like some sort of "enterprise hello world" joke.


How about just hello world?

  public class HelloWorld {
    public static void main(String[] args) {
      // Prints "Hello, World" in the terminal window.
      System.out.println("Hello, World");
    }
  }
This is the full code. This is what I care about on the success path:

  System.out.println("Hello, World");
  // Success
Or maybe even just:

  println("Hello, World");
Or maybe, because I forgot what is what in Java:

  java.lang.System - The System class contains several useful
  class fields and methods. It cannot be instantiated.
  |
  |       System.out : java.io.PrintStream - The "standard" output stream.
  |       | 
  |       |    PrintStream::println(String) -> void - Prints a String and then terminate the line.
  v       v    v   
  System.out.println("Hello, World");
This is what I care about when I'm looking at the module view:

  HelloWorld (public, class)
    main(String[] args) -> void (public, static, entry point)
    -- Prints "Hello, World" in the terminal window.
This is what I care about when I'm looking at the error propagation view:

  HelloWorld::main() - entry point
    Program end, return code 0 [default]
Or maybe, given sufficient metadata in the standard library:

  HelloWorld::main() - entry point
    System.out.println("Hello, World")
      -> failed if System.out.checkError() == true
    Program end, return code 0 [default]
Or I want a dependency view:

  HelloWorld
    java.lang.System
    [java.io.PrintStream]
Etc.

Now imagine being able to open the Hello World code and instantly switch between those views, and many others, with a single shortcut. Not tooltips above the code, but views that replace the code. That is what I'm talking about.

Oh, and views are editable where it makes sense. Even read-only views would improve the development experience a lot, but the magic is in mutable views, so that you don't ever have to touch the full, original form of the source.

EDIT: look at the first vs. third code block in this comment. People have built whole programming languages on the premise that Hello World should look like the third block instead of the first one. That's an extreme case of making a style choice ahead of time, and also a source of endless, pointless debates.


Interesting, and somewhat doable for IDEs in current languages, but I think the real question is: is it possible for what you can't see to hurt you?

That is, when typing in one of the restricted views, is it possible to create a bug because of something that's currently hidden from you? I feel that as soon as that happens to someone, they'll switch to the "show everything" view and never switch back.

> look at the first vs. third code-block in this comment. People built whole programming languages on the premise that Hello World should look like the third block instead of the first one.

Well that was silly of them. Microsoft managed to retrofit it: https://learn.microsoft.com/en-us/dotnet/csharp/tutorials/to...


> is it possible to create a bug because of something that's currently hidden from you?

Isn't this always the case?

I don't think there is a "full view" like you describe. Even if you're writing raw assembly, you're still working on top of the abstractions provided by the instruction set. There is no such thing as a full view where you can see every detail of everything that is going to happen.

I think the solution to this problem is tests and doing your due diligence in understanding what you're doing.

Having more control over what abstractions you're seeing should help with the understanding part, which should in turn reduce bugs resulting from lack of understanding.

Edit: related thought: it's about improving the tooling. Which I suppose is another thing for someone to learn and possibly misuse. But I don't think it's correct to say that firefighters shouldn't carry axes because they could accidentally kill someone with them.


Intentional Software was working on a system like this many years ago: https://youtu.be/tSnnfUj1XCQ?t=230

Part 2: https://m.youtube.com/watch?v=ZZDwB4-DPXE

You can project your programs into different views, add lots of metadata about the program, and use that data in various contexts. It also extends to source control and other use cases. It looked really neat.

I'm not sure what happened to it.


With enough work I think it is definitely possible, but your comment made me realise it would be pretty large - almost a research project.

One could start from a real case in a real company, document it thoroughly, including the different stakeholders, then rewrite the code to fit that organisation at that point in time.

Then, make two or three fictional changes to the organisation and circumstances, snapshot these (or follow an organisation longitudinally for long enough that this occurs naturally), and for each snapshot, rewrite and redocument the code to fit those circumstances.

From that, one could "dumb down" the whole thing until it stops making sense and see how simple one could make it.

Probably there are people smart and seasoned enough to write an entirely fictional account of all of this, and still have it make sense - that's not me though.



