I didn't claim that usability features were "important to the successes of languages designed like Ada". But human factors were at least explicitly considered in Ada's design (even if mostly on the basis of expert judgment and established principles rather than empirical studies), which also seems very appropriate given the criticality of most applications written in Ada. But as my ergonomics professor at ETH Zurich, Helmut Krüger (successor to the renowned Étienne Grandjean), used to say: people get used to even the most ergonomically terrible systems. The "level of suffering" experienced by most people is probably simply not great enough to systematically take such aspects into account. But there are still industries where it is important to reduce the human tendency to make mistakes by taking appropriate measures. Ada was created for such an industry from the very beginning.
> I didn't claim that usability features were "important to the successes of languages designed like Ada".
This seems to focus too much on my exact word choice and less on the actual intent of the question behind it. The question I have is why following established principles should matter. I don't think it should be particularly surprising that someone might assume that making a language more usable for humans would be related to the number of humans who end up deciding to use it; if that's not the case, I wanted to understand why my intuition is wrong.
> there are still industries where it is important to reduce the human tendency to make mistakes by taking appropriate measures. Ada was created for such an industry from the very beginning
This is a good point that I hadn't considered; it definitely makes sense to me that some domains might be less tolerant of human error than others, and that those domains would better reflect how well designed a language is for humans.
> The "level of suffering" experienced by most people is probably simply not great enough to systematically take such aspects into account. But there are still industries where it is important to reduce the human tendency to make mistakes by taking appropriate measures.
Reading this part a couple of times, I think this might be where the nuance lies. My colloquial understanding of what it means for something to be ergonomic (and even of what "level of suffering" would mean) isn't quite the same as a measurement of how likely something is to induce human error. This might just be a case where the common use of the term isn't the same as how it's used inside the field of study, but I would have expected that the ergonomics of a language and measurement of the "level of suffering" would be with respect to the programmer, not the one experiencing the use of the software that's developed as a result. That isn't to say I disagree with the idea that the end-user experience should ultimately be more important, but I think this might account for the disconnect between what you're describing here and what I would have expected from a discussion of "programming language ergonomics" (which might also explain the difference between Ada and the other languages mentioned in this thread).
> The question I have is why following established principles should matter
Apparently I still don't understand your question, sorry. From what I understand, following established principles is part of the engineering profession; it has proven to be the right thing to do over decades, and it is part of engineering education.
> I would have expected that the ergonomics of a language and measurement of the "level of suffering" would be with respect to the programmer, not the one experiencing the use of the software that's developed as a result.
Usually not the "level of suffering" is measured in human factors engineering, but the time needed and degree of fulfillment of typical tasks a typical representative of a test group is suppost to perform. You can do that with different designs and can then conclude which one meets the performance requirements best. Human factors typically enter a specification as performance requirements (what functions shall the system implement and how well). Given a programming language, you could measure how long a typical programmer requires to complete a specific task and how many errors the implementation has in the first version.
> Apparently I still don't understand your question, sorry. From what I understand, following established principles is part of the engineering profession; it has proven to be the right thing to do over decades, and it is part of engineering education.
I agree that following established principles is important, but my understanding is that the principles get established because they're better at leading to desirable outcomes. I'm trying to understand what the outcomes are that the principles you describe are intended to lead to. From your most recent two replies, my best interpretation is that it leads to fewer errors overall in the programs produced, but that wasn't as apparent to me from your first comment. I do think I understand now though.
> I'm trying to understand what the outcomes are that the principles you describe are intended to lead to.
The application of the principles of human factors engineering to the design of systems reduces human errors, increases productivity, and enhances safety, health, and comfort when interacting with those systems. For a programming language, taking human factors into account appropriately means that the target group of language users (i.e. programmers) is sufficiently capable of performing its tasks in all phases of a program's life cycle, e.g. the programmers are not cognitively overwhelmed, and the likelihood of misunderstandings or mistakes is reduced. However, they should neither be unnecessarily restricted nor hindered in their work, because this, too, creates extraneous cognitive load that exhausts the programmer's limited working memory capacity. Human working memory can hold only 3-5 "chunks" of information simultaneously; this is a well-documented constraint of human cognition, and when a programming language imposes excessive formalism, it forces programmers to juggle more mental "chunks" than working memory can handle.

Self-explanatory code (which includes avoiding incomprehensible abbreviations or confusing syntax) reduces the cognitive load on the programmer. An explicit human factors principle behind Ada is that code is read more than written: over a program's lifetime, especially in large, long-lived systems, code is read orders of magnitude more often than it is written, so Ada's formalism optimizes for the more frequent activity (reading and maintenance) at the expense of the less frequent one (initial writing). As a language designer, you therefore have to find the right balance, which of course is a function of your target audience and of the primary activities they will perform with the language.
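To make the readability point a bit more concrete, here is a minimal Ada sketch (the identifiers and values are invented for illustration, not taken from any real system): explicit range types and named parameter associations cost a few extra keystrokes when writing, but they make the intent visible to a later reader.

```ada
--  Minimal sketch only; identifiers and values are hypothetical.
procedure Readability_Sketch is

   --  An explicit range documents the legal values at the point of use.
   type Altitude_Feet is range 0 .. 60_000;

   procedure Set_Target (Altitude : Altitude_Feet;
                         Hold     : Boolean) is
   begin
      null;  --  Control logic omitted; this only illustrates the syntax.
   end Set_Target;

begin
   --  Named associations make the call self-explanatory at the call site,
   --  at the cost of a few extra keystrokes when writing.
   Set_Target (Altitude => 35_000, Hold => True);
end Readability_Sketch;
```

The call site carries its own documentation, which is exactly the trade-off described above: slightly more effort for the writer, much less guesswork for every subsequent reader.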