We know about every nuclear failure. We don't know about every time a serious nuclear risk existed but, by chance, didn't trigger an accident. Nuclear power plants are probably much safer on average, but it only takes one corner-cutting plant to cause a nuclear accident.
The people harmed by the externalities of your personal choices, since you live in a society where some degree of cooperation with the larger community is required.
The wage effect of the personal choice of what wage to work at is not an externality. An externality is when one party's actions deprive you of something you are entitled to, like your person or property. The opportunity offered by another individual is not yours to begin with, and that other individual has a right to withdraw it by contracting with someone else at a lower wage.
By way of analogy, imagine if your training to become physically more attractive made other male suitors less attractive to women in general. You did not, by virtue of making yourself more attractive, impose a negative externality on other men, as they were never entitled to the affection and engagement of women in the first place.
Moreover, the absence of labor regulations and social democratic policies is associated with more rapid economic growth and larger wage gains, so the premise behind the notion that negative externalities emanate from contract liberty is wrong. The free market is far and away the most efficient way to organize an economy, because the market is the ultimate coordinating tool, and helps raise productivity, which has enormous positive externalities.
These sound like the fundamentalist tenets of an ideology that just happen to further the interests of the rich, but the statistical/empirical evidence has consistently validated these assertions, as do rationalist deductions based on game theory and widely accepted economic axioms like supply and demand and the efficiency of the equilibria they establish.
> The free market is far and away the most efficient way to organize an economy,
By 'free' do you mean unregulated?
And efficiency (which you seem to equate with 'return on capital') isn't the only positive value, nor is efficiency necessarily a good proxy for all other positive values.
In any case, while a theoretical perfect market without any asymmetries might be most efficient, in practice we have markets that are imperfect in many ways that induce market failures, and regulation is needed to compensate and restore market efficiency. The market for labor is no exception, as the asymmetries in labor relations are huge.
>>And efficiency (which you seem to equate with 'return on capital') isn't the only positive value, nor is efficiency necessarily a good proxy for all other positive values
Efficiency is the only thing that matters. Take two countries: Pragmatia and Utopitia. Pragmatia solely pursues efficiency, in line with its pragmatic ideals, while Utopitia pursues social democratic ideals, in line with its utopianist ideals.
Pragmatia consequently sees GDP grow at 5% a year, while Utopitia sees its GDP grow at 2.5% a year.
From a starting point as equals in 2022, Pragmatia acquires a 3 to 1 per capita GDP advantage over Utopitia within 50 years, providing Pragmatia's residents with a vastly better standard of living than Utopitia's, irrespective of how large of a percentage of the latter's GDP was expropriated by the state for social democratic redistribution.
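The compounding arithmetic behind the 3-to-1 figure can be checked directly (the 5% and 2.5% rates are the hypothetical numbers from the example above):

```python
# Compound two hypothetical GDP growth rates over 50 years and compare.
pragmatia = 1.05 ** 50    # 5% annual growth
utopitia = 1.025 ** 50    # 2.5% annual growth

ratio = pragmatia / utopitia
print(f"Pragmatia grows {pragmatia:.1f}x, Utopitia {utopitia:.1f}x")
print(f"Per-capita GDP ratio after 50 years: {ratio:.2f} to 1")
```

A 2.5-point growth gap compounds to roughly a 3.3-to-1 advantage after 50 years.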
This was basically the story of Hong Kong vs. Mainland China from the 1950s to 1980.
>>In any case, while a theoretical perfect market without any asymmetries might be most efficient, in practice we have markets that are imperfect in many ways that induce market failures, and regulation is needed to compensate and restore market efficiency.
Imperfect markets don't imply that wages are below what they would be under free market conditions, or that a crude intervention, like granting labor unions collective bargaining monopolies over companies' hiring practices, will counteract the wage-inhibiting effect of some market inefficiency, let alone do so without introducing far larger and more significant inefficiencies of its own.
Basic supply and demand theory tells us that the employment mandates advocated by unions, like minimum wage controls, harm economic efficiency to the extent that they have an effect at all. In the absence of controlled experiments that could prove their effect definitively one way or the other, we should trust basic economic theory.
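The textbook claim can be sketched with a toy linear labor market (the curves and the floor value are made-up numbers, purely illustrative): a wage floor set above the equilibrium wage reduces the quantity of labor employed.

```python
# Toy linear labor market: demand falls with the wage, supply rises.
def demand(w):   # workers firms want to hire at wage w
    return 100 - 2 * w

def supply(w):   # workers willing to work at wage w
    return 3 * w

# Equilibrium: 100 - 2w = 3w  ->  w* = 20, employment = 60.
w_eq = 100 / 5
emp_eq = demand(w_eq)

# A wage floor above equilibrium: trades happen on the short side of
# the market (demand), so employment falls below the equilibrium level.
w_floor = 25
emp_floor = min(demand(w_floor), supply(w_floor))

print(f"equilibrium: wage={w_eq:.0f}, employment={emp_eq:.0f}")
print(f"with floor : wage={w_floor}, employment={emp_floor:.0f}")
```

In this toy model the floor also opens a gap between supply (75) and demand (50): workers who want jobs at the mandated wage but can't find them.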
>>The market for labor is no exception, as the asymmetries in labor relations are huge.
Asymmetries are irrelevant. Apple is worth over $2 trillion, yet cannot force me to buy a Macbook Pro. It can only induce me to do so by offering more value than competitors.
If we let Apple and its social justice PR firms convince us that we need the government to control the consumer electronics market, then Apple could, through regulatory barriers, keep competitors out, or, through taxation, force us to indirectly buy its products by funding the state that does.
I wouldn't use this personally, but to me it's pretty clear that the use of 'quote-unquote' is meant to denote sarcasm more strongly than quotation marks.
> Were such a car to exist, it is clear the dog would win in very very many environments (almost all). As would a mouse, let alone a dog.
This seems incredibly unlikely. AI vastly outperforms 99.99% of humans on various video games, and 100% on many others. I'll bet on a well-trained ML model over a dog every time.
> That it may be possible to rig a human environment to be replete with so many symbols (road signs, etc.) that an incredibly dumb automated system can follow them is hardly here-nor-there.
We already have above average human performance with just normal road signs, and could also simply use digital information.
> Self-driving cars may work on highways and motorways; I don't see there being any in cities. Not for centuries.
And yet computers continue to perform tasks that were talked about for years as something uniquely human / intelligence driven. This is a nice philosophical debate, but in practice I think it falls flat.
I dont see any single case of that. Rather in every case the goal posts were moved.
Can a computer play chess? No.
They search through many permutations of board states and in a very dumb way merely select the decision path that leads to a winning one.
That was never the challenge. The challenge was having them play chess; ie., no tricks, no shortcuts. Really evaluate the present board state, and actually choose a move.
And likewise everything else. A rock beats a child at finding the path to the bottom of a hill.
A rock "outperforms" the child. The challenge was never, literally, getting to the bottom of the hill: that's dumb. The challenge was matching the child's ability to do that anywhere via exploration, curiosity, planning, coordination, and everything else.
If you reduce intelligence to merely completing a highly specific task, then there is always a shortcut, which uses no intelligence, to solving that task. The ability to build tools which use these shortcuts was never in doubt: we have done that for millennia.
> They search through many permutation of board states and in a very dumb way merely select the decision path that leads to a winning one.
> That was never the challenge. The challenge was having them play chess; ie., no tricks, no shortcuts. Really evaluate the present board state, and actually choose a move.
Uh-huh. And how exactly do you play chess? Do you not, perhaps, think about future states resultant from your next move?
Also, Alpha Zero, with its tree search entirely removed, achieves an Elo rating greater than 3,000 in chess, even though that isn't the intended design of the algorithm.
A rock will frequently fail to get to the bottom of a hill by settling into a local minimum rather than the global minimum. A child will too, sometimes.
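The local-minimum point is easy to see with a purely greedy descender, the "rock": on a surface with two valleys (an arbitrary made-up function here), greedy gradient descent started in the wrong basin settles in the shallower valley and never finds the global minimum.

```python
# f has a global minimum near x = -1 and a shallower local one near x = +1.
def f(x):
    return (x * x - 1) ** 2 + 0.3 * x

def grad(x):
    return 4 * x * (x * x - 1) + 0.3

# A "rock": pure greedy descent, no exploration, no backtracking.
x = 2.0
for _ in range(2000):
    x -= 0.01 * grad(x)

print(f"settled at x = {x:.3f}, f(x) = {f(x):.3f}")
# The rock is stuck in the shallow valley near x = +1; the deeper
# global minimum near x = -1 (where f is negative) is never reached.
```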
> Uh-huh. And how exactly do you play chess? Do you not, perhaps, think about future states resultant from your next move?
Not quite. You'd need to look into how people play chess. It has vastly more to do with present positioning and making high-quality evaluations of present board configuration.
> A rock will frequently fail to get to the bottom of a hill due to local minima
Indeed. And what is a system which merely falls into a dataset?
A NN is just a system for remembering a dataset and interpolating a line between its points.
If you replace a tree search with a database of billions of examples, are you actually solving the problem you were asked to solve?
Only if you thought the goal was literally to win the game, or to find the route to the bottom of the hill. That was never the challenge -- we all know there are shortcuts to merely winning.
Intelligence is in how you win, not that you have won.
> Not quite. You'd need to look into how people play chess. It has vastly more to do with present positioning and making high-quality evaluations of present board configuration.
That is what Alpha Zero does when you remove tree search
> A NN is just a system for remembering a dataset and interpolating a line between its points.
Interpolating a line between points == making inferences on new situations based on past experience.
> If you replace a tree search with a database of billions of examples, are you actually solving the problem you were asked to solve?
The NN still performs well on positions it hasn't seen before. It's not a database. The fact that the NN learned from billions of examples is irrelevant. Age limits aside, a human could have billions of examples of experience as well.
> A NN is just a system for remembering a dataset and interpolating a line between its points.
So are human brains. That is the very nature of how decisions are made.
> Only if you thought the goal was literally to win the game; or to find the route to the bottom of the hill. That was never the challenge
So then why did you bring it up as an example other than to move goal posts yet again? I can build a bot to explore new areas too. Probably better than humans can. Any novel perspective that a human brings, is, by definition, learned elsewhere, just like a bot.
> Intelligence is in how you win, not that you have.
Sure, and being a dumbass is in how you convince yourself you're superior when you lose every game. There are many open challenges in AI. Making systems better at learning quickly and generalizing context is a very hard problem. But at the same time, intellectual tasks are being not only automated, but vastly improved by AI in many areas. Moving goalposts on what was clearly thought labor in the past is just handwaving philosophy to blind yourself from something real and actively happening. The DOTA bots don't adapt to unfamiliar strategies by their opponents, and yet, they're still good at DOTA.
Let’s say that you have the ability to know the state of every neuron, and the interconnect map between them, at all times. You watch a chess player make a move, determine what is going on, and define the process the brain follows as an algorithm. Now that you have an algorithm, you have a very powerful piece of silicon execute the algorithm. Does that piece of silicon have intelligence? You would probably say no, since simply executing a pre-defined algorithm is a shortcut. Intelligence means the ability to develop the algorithm intrinsically in your head.
So fine, we take a step back. Instead of tracing all the neurons as they determine a chess move, we trace all the neurons as they start, from a baby, and learn to see and to understand spatial temporal behavior and as they understand other independent entities that can think like they do and as they learn chess and how to make a move. Then we encode all of that into algorithms and run it on silicon. Is that intelligence? To me, it sounds like it is just a shortcut - we figured out what a brain does, reduced it to algorithms, and ran those algorithms on a computer.
What if we go back further and replay evolution. Is that a shortcut?
To be fair, you did claim that the ability to adapt and make tools is what distinguishes real intelligence. But I wonder if, ten years from now, you will be saying that a tool-making computer is just a shortcut.
I think intelligence is more generally how an agent optimizes to be successful, objectively and subjectively, across a wide variety of different situations.
> Can a computer play chess? No.
> They search through many permutation of board states and in a very dumb way merely select the decision path that leads to a winning one.
This is a perfect example of moving the goal posts. The objective was never to simulate a human playing chess.
If we understand something, we can describe it with an algorithm.
If algorithms for intelligence are by definition impossible, then understanding intelligence is by definition impossible.
So if true intelligence is something beyond our current understanding of the world (beyond algorithmic description). To me, this feels like god in the gaps applied to intelligence.
If a lookup table can predict human decisions with high accuracy given access to its senses and feelings, then either a human is just another can opener or intelligence isn't real.
The objective was to make a machine that could beat anybody at chess. Nobody on the Alpha Zero team believes Alpha Zero is an example of general AI. Teaching a system to understand a complex system is a necessary subcomponent of general intelligence.
Yeah, you can of course keep approximating it. The next adjustment would then probably be to use something like quality-adjusted life years (QALYs) based on the person's comorbidities, and then (if you want to) you could also take it the other way and reduce QALYs for survivors with long covid.
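A minimal sketch of that accounting, with entirely made-up numbers: QALYs lost to a death are the remaining life expectancy weighted by quality of life (so comorbidities shrink the count), and long-covid survivors contribute their remaining years times the quality-of-life drop.

```python
# Hypothetical figures, purely to illustrate the accounting.
# (age, expected remaining years, quality-of-life weight)
deaths = [
    (30, 50.0, 0.95),   # young, healthy: counts for a lot
    (80, 8.0, 0.60),    # elderly, comorbid: counts for less
    (85, 5.0, 0.50),
]
qalys_from_deaths = sum(years * weight for _, years, weight in deaths)

# Long-covid survivors: (age, remaining years, quality-of-life drop)
long_covid = [(40, 40.0, 0.10), (55, 25.0, 0.15)]
qalys_from_long_covid = sum(years * drop for _, years, drop in long_covid)

total_qaly_burden = qalys_from_deaths + qalys_from_long_covid
print(f"total QALY burden: {total_qaly_burden:.2f}")
```

With these invented weights, the three deaths contribute about 54.8 QALYs and the two long-covid cases about 7.8, so survivors' quality-of-life losses are not negligible next to the deaths.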
That's only true if a random sample of the population dies. If we assume (and I'm making this up) that in the steady state, the elderly comprise 90% of deaths, then if a war kills only young people, you'll expect a substantial increase in proportional death rate.
I was simplifying, the point is that the model should always reflect the current state of the population. You shouldn't expect the kind of "catch up" effects that OP is referring to, unless you have static predictions that don't take into account actual deaths and births.
And your model would reflect that after the day all the old people died. So you wouldn't see any negative excess deaths (= actual deaths - predicted deaths). Otherwise you would see massive excess deaths from the baby boomers getting old for example, or negative excess deaths from the WW2 generation in places where many young people died.
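A toy sketch of that point (all rates and numbers invented): if predicted deaths are recomputed each year from the current surviving population, a one-off die-off shows up as excess deaths once, and later years show neither "catch-up" excess nor negative excess.

```python
# Excess deaths = actual deaths - predicted deaths, where the
# prediction is recomputed each year from the *current* population.
rates = {"old": 0.10, "young": 0.01}     # baseline annual death rates
pop = {"old": 1000.0, "young": 9000.0}

def predicted(pop):
    return sum(pop[g] * rates[g] for g in pop)

excess = []
for year in range(3):
    pred = predicted(pop)
    shock = 500.0 if year == 0 else 0.0  # one-off die-off of the old
    actual = pred + shock
    excess.append(actual - pred)
    # Update the population with the deaths that actually happened.
    pop["old"] -= pop["old"] * rates["old"] + shock
    pop["young"] -= pop["young"] * rates["young"]

print(excess)  # the shock appears once; later years show zero excess
```

Because the model tracks the surviving population, the smaller old cohort in later years lowers the prediction in step with actual deaths, so no negative excess appears.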
That said, I'm also pro-nuclear.