Seems to be dunking quite hard on Babbage for no reason. You can invent something and not be able to build it, and he clearly invented a computer in the quite reasonable modern sense of a device operating from an easily-changeable stored program, with an attached store, to perform arbitrary computations and output the results. The design is plausible and the limitations are mechanical, not fundamental. That doesn't make him the most important person in the history of computing, but the Analytical Engine quite clearly fits the modern definition of "computer".
> Seems to be dunking quite hard on Babbage for no reason. You can invent something and not be able to build it, and he clearly invented a computer in the quite reasonable modern sense of a device operating from an easily-changeable stored program, with an attached store, to perform arbitrary computations and output the results.
I'm not sure this is consistent with the way most people understand the term 'invent' -- the person credited as the inventor of a given technology is usually the person who first demonstrated a viable working example of it, not the first person to propose it as an idea, however detailed the idea might be. It's not enough to have a design you can't implement -- you have to implement it and refine it against the constraints of the usage context.
That's not to disparage Babbage in any way -- his work was extremely important and definitely contributed to the development of modern computers, but I don't know if it makes sense to say that he 'invented a computer' in the same way we say e.g. the Wright brothers invented their airplane.
I searched the article for the word "Antikythera" in vain. If I wanted to be uncharitable, I would guess it was hard to argue that the Antikythera mechanism was not a computer and that would have introduced too much nuance to the author's argument so it was left out. But is the charitable interpretation of the omission really that the author ignores the existence of the Antikythera mechanism? I'm not so sure.
The author argues that "computing devices" like the Antikythera mechanism, which allow specific formulaic calculations, have existed for thousands of years; these devices were purpose-made and allowed certain predetermined calculations to be performed automatically. This clearly applies to the Antikythera mechanism.
The author then argues that this categorization is unhelpful, as it obscures the line between a purpose-made computing device and a general computing device. He places the arrival of such general devices at the end of the 1930s with the Z1, which was not made to perform a specific algorithm but instead allowed for programmability. Rapid advances soon after created the "computer" as a general-purpose computing device.
The Antikythera mechanism is essentially irrelevant to the point the author makes.
I understand the point they're making, but it's wrong: a computer doesn't have to be universal to be a computer, nor does having a fixed program, like the Antikythera mechanism, disqualify a device from being one. In the same sense, Turing Machines are not necessarily universal; Universal Turing Machines are. But there's no official or formal definition of what a "computer" is, and the category is less crisp than the author is trying to make it.
In the same way, drawing a line between programmable and fixed-program computers is arbitrary, and in fact the author spends much of the article dismissing Babbage's Analytical Engine, which was a programmable computer, on the grounds that it wasn't actually built; which is, of course, irrelevant, because a physical machine is not necessary for computation, as Ada Lovelace demonstrated by writing programs for a machine that was never built.
In any case, the early days of modern computing (1930s-1940s) were replete with computing machinery of various architectures, many of which were called "computers" without necessarily being programmable; for example, the "Atanasoff-Berry Computer" [1].
Did you read the article? Universality is explicitly rejected as a criterion.
The point is that there is a fundamental difference between a computing device that performs a single algorithm and a computer that can perform programmed algorithms.
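To make that distinction concrete, here is a toy sketch in Python (everything in it is made up for illustration, not taken from the article): the first function stands in for a fixed-function device whose single calculation is baked in; the second is a minimal programmable machine whose behaviour depends entirely on the program it is handed as data.

    # Fixed-function device: the one algorithm is baked into the "hardware".
    def fixed_device(days_since_epoch):
        SAROS_DAYS = 6585.32  # illustrative constant: rough eclipse-cycle length
        return days_since_epoch % SAROS_DAYS

    # Programmable machine: the algorithm arrives as data.
    def run(program, accumulator=0):
        # program is a list of (opcode, operand) pairs
        for op, arg in program:
            if op == "add":
                accumulator += arg
            elif op == "sub":
                accumulator -= arg
            elif op == "mul":
                accumulator *= arg
        return accumulator

    # The same "hardware" performs two different computations,
    # depending on the program it is fed:
    print(run([("add", 2), ("mul", 3)]))   # 6
    print(run([("add", 10), ("sub", 3)]))  # 7

Whether the first kind of thing deserves the name "computer" at all is exactly the definitional question at stake here.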
How about a device that can perform more than one computation, then? For example, the Antikythera mechanism could calculate the motion of the planets and, separately, eclipses. Universality is the only characteristic that makes a real difference, but I think the author avoids it because they don't want to be caught up in debates about whether modern computers are truly universal.
The article is trying to make the distinction between what is, and isn't a "computer" sound crisp and exact, when it's arbitrary and ad-hoc.
Yes, more or less. To quote another commenter in another thread, "an open definition of what a computer is, is too inclusive and uninteresting if you want to talk about the history of actual computers".
«My point was simple: if a cuckoo clock or a differential gear qualifies as a “proto-computer”, what doesn’t meet the bar? Almost any man-made tool performs some sort of a calculation.»
It's about the author arguing that an open definition of what a computer is, is too inclusive and uninteresting if you want to talk about the history of actual computers.
I agree that there's a qualitative difference between a cuckoo clock and a programmable computer, but the difference between a computer and a calculator or a fixed-program computer is much less clear and much more open to arbitrary proclamations of what we can call a computer and what we can't.
Don't forget that modern computers include all sorts of special-purpose components, like ALUs etc. In the limit, a universal machine is the union of infinitely many non-universal machines, after all.
Aren’t those synonyms? Just before the arrival of the first actual programmable computers, the word was a job description for people who did calculations.
“Earlier words for "one who calculates" include computator (c. 1600), from Latin computator; computist (late 14c.) "one skilled in calendrical or chronological reckoning."”
I would argue that in modern parlance, no, they are not. We could pick multiple examples of things that do calculations, like cash registers. They sum up numbers. Clearly they do calculations. But are they computers?
Even now, would you say the kind of cash register where numbers are entered manually is a computer?
A classical cash register is not the same as the thing we call a computer, but that's not my point. My point was that "a computer is a thing that computes while a calculator is a thing that calculates" does not sound correct; the argument is basically the wrong way around. In my opinion, the verbs "to compute" and "to calculate" are synonyms. We only recently started to distinguish between a computer and a calculator, probably being a bit fuzzy and sloppy with the term "computer" (as natural languages are). But that doesn't mean we can transfer that implicit (and maybe altered) meaning back to the verb "to compute".
In Greek, my native language, the two words cross over: a calculator is specifically a "κομπιουτεράκι", literally a "small computer", whereas "computer" is "υπολογιστής", literally "calculator".
In English also the two words' literal meanings are the same, and it's only modern usage that distinguishes them, as sibling comments argue.
Fair point. However, being programmable was one of the requirements listed in the article. Was the Antikythera mechanism actually programmable or was its calculation set in stone through its physical design?
My problem with that is that it is an arbitrary distinction, given the title of the article, "there were no ancient computers". Of course, if you draw the line at programmable computers, then there were no ancient computers, because fixed-program computers are not computers; only, they actually are.
Music boxes weren't programmable in the sense of allowing a different algorithm to be performed; they only changed the timing of certain operations.
> Or perhaps only things that can have branch instructions count
The author identifies programmability as the most interesting criterion, the one that creates a "computer". Branching is necessary for that but not sufficient.
Music boxes can have input data. A drum with notches is input, just as cards with holes are input. They just do not have the other part: varied function.
It might not be fast to replace the input data, but it is possible.
Seems like a bit of a silly debate, but I'll play along ...
It mostly comes down to definitions. Being general purpose and programmable do seem like they should be part of the definition, but where do we set the bar for programmability? Should ENIAC really be considered programmable, or merely configurable (via patch-panel wiring)?
Things like the Jacquard loom, programmable/configurable via punched cards, or automatic "player pianos", programmable/configurable via changeable disks or barrels, really introduced the idea of a stored "program", although no one would consider them computers. On the other hand, it'd seem odd not to consider something like the Antikythera mechanism (which calculated future positions of the planets) from over 2000 years ago a computer, albeit an analog one, even though it was fixed-function.
Apart from functional definitions, perhaps the most compelling reason to consider machines like ENIAC as the origin of digital computers is that this was the start of using signal switching as the basis of computation, initially in the form of relays and vacuum tubes, then transistors leading to integrated circuits, with this being a continuous line of development.
Still, even if not part of this line of development, maybe we should at least give a nod to earlier computation and programmable machines as part of the inspiration for modern computers, even if only as part of our collective intelligence/memory and engineering progress in terms of conceiving and building ever more complex automated machines.
> A better definition of a computer would include not just the words “designed for calculation” but also “programmable.”
That would be a "programmable computer", or a "stored-program computer". The earliest digital computers[0] couldn't store a program; you had to re-wire them. Wikipedia says that ENIAC was "programmable", but programming it was a matter of jump-cables and plugboards, which I call re-wiring. In its earliest version, it had no main memory, just a set of registers/accumulators. You could use card decks for persistent storage, but you couldn't load a program from a card deck; they were just for data.
It’s fascinating to note that the ability to have stored programs was planned but not completed due to time and budget, so the extra memory and interface were added as an upgrade in ‘48.
Babbage may not have built his Analytical Engine, but Difference Engines (of slightly different design) were built: by Scheutz, by George B. Grant, heck, even Burroughs Co. built a couple of those things in the early 20th century.
This is irrelevant to the author's argument: the Analytical Engine would have been a computer, capable of being programmed. It was the idea of a computer.
The Difference Engine is not a computer, as it exists to perform a specific fixed algorithm.
The author also argues, in another article he links to, that Claude Shannon actually invented Boolean logic (which is misnamed, because what Boole invented is not what's today called by that name). Well, the problem is that I've seen scans of books on mathematical logic printed in the 1910s and 1920s, and they are pretty much the same as today, except maybe with less emphasis on Hilbert's axioms, but the truth tables, the AND/OR/NOT trinity, de Morgan's laws ― it's all there already. (A quick mechanical check of that part is sketched below.)
So, if Shannon was not aware of all that work and had to reinvent it on his own, do we really credit him with this invention?
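Just to underline how mechanical that propositional machinery already was by then, here is a quick Python sketch (purely illustrative, not from any of those books) that enumerates every truth-table row and checks De Morgan's laws:

    from itertools import product

    # De Morgan's laws:
    #   not (a and b)  ==  (not a) or (not b)
    #   not (a or b)   ==  (not a) and (not b)
    for a, b in product([False, True], repeat=2):
        assert (not (a and b)) == ((not a) or (not b))
        assert (not (a or b)) == ((not a) and (not b))
    print("De Morgan's laws hold for every truth-table row")

Shannon's contribution, as usually told, was noticing that relay and switching circuits obey exactly this algebra, not inventing the algebra itself.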
> but the truth tables, the AND/OR/NOT trinity, de Morgan's laws ― it's all there already.
Yes, the idea of a fully formal, symbolic logic was to my knowledge first worked out by Frege in his 1879 book "Begriffsschrift" (hard to translate; "Concept Script" or "Concept Notation" gets close), which is fascinating to read. He had developed a (from our perspective) very weird notation, but it actually expresses what we would nowadays call "formal logic". Importantly, it includes deductions, which is an important addition to the and/or/not operations.
Before Frege, philosophy and mathematics also had these notions, but instead of being algebraic they were linguistic notions. Those you can find even in ancient Greek texts.
If the author really does argue that Shannon invented formal logic, then he is definitely wrong, although I have not read his argument on that.
It’s easy to confuse computers with programs. A mechanical calculator isn’t a computer. It’s a piece of hardware built to execute a single program of limited scope. It’s why Turing’s Bombe, however impressive it was, is not a computer but rather a piece of hardware designed to run a program to crack Enigma.
TL;DR: The author argues that the first computer should be considered one that is programmable, because many non-programmable calculating devices have existed for a very long time. I guess he also means programmable AND performing tasks with data, because the Jacquard loom was programmable but did not perform data-manipulation tasks, and would otherwise be considered the first computer.
At this point, the entire raison d'être of a lot of mainstream historians is to say "the previous guy categorised this slightly wrong!!" and hope this is seen as controversial enough that it'll get talked about, but not controversial enough that they'll get branded a revisionist historian.
Take "The Dark Ages" as a case in point. The available evidence has broadly been covered, because there is very little, so historians, who have to be seen to be doing something, have decided a rebranding is in order, with the vague notion that "The Dark Ages" is offensive to the very dead people of that era.
They weren't called the Dark Ages until Renaissance humanists came along and wanted to cast their own era as enlightened by contrast. It's not so much about feelings as facts.
It is possible, in this way, to consider people to be computers. In any case, any process or activity should yield some return on the resources it consumes, as this is how things always work in the world of people. If there is no feedback from something, then the activity should be stopped and changed to a more productive one where there is a return. Any human activity, and any correct code, works exactly like this.