I don't think Turing knew of the lambda calculus when he wrote the paper. It was only later that he decided to pursue his PhD under Church and moved to Princeton.
A year ago he said that language models would never be able to explain why, if you move a table, the bottle on top of it would not fall. He definitely had no idea what was coming with LLMs. And now he is arrogant enough to say "we all knew what capabilities LLMs would have". He has no intellectual humility or honesty at all. I at least admit openly that language models blew my mind.
Lost respect for his opinions massively after seeing how he lied about the topic.
He’s said it’s easy to generate this kind of question that tricks LLMs, because of the lack of physical grounding in models trained solely on text.
And that’s as true now as ever. I also heard him say that training multimodal models on text+image/video would mitigate the grounding issue, and that’s proven to be true too.
Can you cite a reference for that quote? I can’t find it. He has definitely underestimated language model capabilities, but he has been mostly right on the “language models can’t plan” stuff. He has been consistent and correct in his criticism of autoregressive models as well.
I have been thinking a lot about it recently and I do think they are significant.
If you look, for example, at GPT-4, it achieves a certain sort of intelligence. It gives you many sensible responses.
The thing is that this is basically what we humans do: give reasonable-sounding, so-so explanations about all sorts of stuff.
I'm confident that they will become evidently better than all of us: more complete arguments, with better data backing them up than the regular folk can muster. In some sense I feel they are almost above that threshold already.
The basic opinions of people are neither sophisticated nor deep. And LLMs are always improving toward becoming exactly that.
This should be required reading for everyone at OpenAI. If such simple programs can fool even intelligent people, I can't imagine what an infinitely more capable GPT-4 will be able to achieve socially.
As Yuval Harari correctly points out, with the current mastery of language, AIs are starting to hack the OS on which humans operate: words and ideas.
People would get hooked to that worse than heroin lol.
I recently fell for a girl, and being without her definitely felt like withdrawal. I was obsessively and relentlessly thinking about her every waking second.
Sure, as time goes on and the situation becomes more evident, it will become clearer that bringing a human being into a dying world is a deeply immoral thing to do. It will not be soon, though.
The world is not dying. It is only changing for the worse for us. Not having children, now or in the future, is suicide for mankind, and it doesn't solve the problems we have created.
what happened in the past few years to get so many people, especially ostensibly smart people in tech circles, to buy into this death cult nonsense? I genuinely do not understand what can lead to such a mindset.
Can't comment for the “tech circles” you’re referring to, but where I’m from, “the environment” was a topic taught in school from the get-go, and taking the stance that we aren’t “destroying the planet” was pretty much out of the question and frowned upon.
I specifically remember one of my teachers saying they wouldn’t have children because the future was going to be unbearable for them.
>bringing a human being into a dying world is a deeply immoral thing to do
What does this actually mean? What is a dying world, exactly?
Historically, the world has been an awful place to be. For most of human history, 200k years or so, we were essentially nomadic tribes killing and eating each other (sometimes literally). Since civilization began, we've been ostensibly more moral-- yet the majority of humans who have existed in the past 10,000 years were laborers we know very little about.
When you read about what people in Greece and Rome were like, that's mostly the well-off nobles. Rarely did anyone bother to write about the lives of slaves, and consequently we know next to nothing about them except that many of them had very harsh lives. We don't know what day-to-day life was like for the majority of Romans, for example.
Countless kingdoms have fallen-- Caesar published a book about some of his expeditions in genocide (or, read another way, in conquering the enemy). When a city was sacked by the enemy, all resistance was killed, women were used, and the remaining men, women, and children were sold into slavery to fund the winning kingdom. Their possessions were sold too, as you won't be surprised to hear. If they really hated the other side, as when Rome felled Carthage, they would salt the earth afterwards. While expensive, it guaranteed nothing would grow there for a long time-- a testament to the power of the conquerors to eliminate any opposition.
My point being: I don't know why it's more unethical to birth humans into this world than it was for any of our ancestors. Logically, every human experiences unhappiness. How is it ever moral to birth humans into a world where humans experience pain? Human existence itself isn't purely logical and cannot be understood that way.
Your argument only works when the general trend is toward the better. After 50+ years of not doing anything about it, it's now a near certainty that the next 50 years of "progress" will be more like going backwards 100+ years.
No it doesn't? My argument is that human life has been full of unavoidable difficulty for 200,000 years, so if bringing kids into a painful world is unethical, then having children has always been an unethical decision. Even if we as a society go back 200 years, the decision to have kids is still just as unethical as it ever was. Which is to say: having children isn't an ethical decision, it's an emotional decision.
Or I can put it this way-- the most ethical thing any human can really do is commit suicide, so as not to further consume finite resources. But the desire to live is a strong emotional desire, and even the most logical person will justify their own life while simultaneously consuming resources and contributing to the problem they lament.
No, it really does. Maybe you can emotionally justify bringing additional people into a problem created by people, but, I can't. And, if you can justify it, quite frankly, I think that makes you a bad person.
200 years ago, nobody had any reason to suspect that life would be significantly worse in 50 years. Today, we do. 200 years ago, we hadn't poisoned the planet with "forever chemicals" (PFAS) and microplastics. Oh, and let's not forget the economic mess we're leaving to future generations, either.
200 years ago, there was no reason to suspect the next generation would be worse off than this one. One could afford not to take ethical considerations into account when having children then, because there was no reason to suspect it would be unethical.
As for suicide, sure, I can justify that on ethical grounds. I'm not sure I want to continue much past whenever my dog dies. She's 7. OTOH, I can't see where I'm ethically required to commit suicide to solve a problem I didn't create.
>200 years ago, there was no reason to suspect the next generation would be worse off than this one
Wars, plagues, famines, economic crises, and tyranny all existed 200 years ago. Only someone in a very cushy position in the first world could be reasonably assured that life wouldn't be worse in the future. The future is always uncertain. Even now, for all we know, the COVID-19 virus may evolve into some incredibly virulent and deadly disease at any time. That was doubly so when the Black Death was still occurring, before antibiotics, along with a plethora of diseases we're only somewhat aware of today, such as polio and leprosy.
You have an inaccurate view of the world if you think the bottom 25% of humans ever had any real assurance life was going to get better rather than worse. Making moral judgements about them is an easy way to sound naive. For example, you're implying African slaves in the Americas were "bad people" (your words) because they had children despite knowing their children would be enslaved. It's incredibly simplistic to think that anyone ever had that kind of assurance, except for the very most privileged in society. We're not in a wholly novel situation; we're returning to the uncertainty our ancestors had to deal with. Forever chemicals and microplastics are a new poison, and one I really am concerned about, to tell you the truth, but poison itself isn't new.
And you're just as ethically required to commit suicide as you are to not have kids-- both are means of lowering the anguish in the world. How can I be ethically required to care about my kids' possible pain, but simultaneously not required to care about the billions of working-class people around the world working their fingers to the bone mining our lithium and making our food and goods, for pennies a day, just to survive? The lithium batteries in the phones we're using to discuss this are a testament to the way we are engaged in this awful system. I don't know why one human's anguish is acceptable to be a part of while another, unborn human's anguish would be a moral sin. Either contributing to the pain and blood inherent in the system is bad, or it's okay. It doesn't really matter whose kids are suffering, in the grand picture of morality-- you owe the same moral duty to do no harm to the kids other people created as you owe to your potential kids.
A being is conscious if and only if it feels like something to be that being. What you're referring to is self-consciousness. Goldfish are most surely conscious but not self-conscious.
That's all subjective. What is freedom to you? Which one is freer: choosing and working to buy a house or having the absolute certainty you'll always have a house? Which one is more liberating? If you had that certainty, would you abandon your current job and write poetry?
Throughout history, people have avoided naturalism almost at any cost. I think this has partly to do with the crude reality it implies; it's a very hard pill to swallow for anyone who understands it well. However, time and again, what has been attributed to non-natural, magical entities has turned out to be false. If I wanted to avoid becoming the next sun-worshipping, cow-worshipping idolater, I would be very wary of any supernatural claim.
(since you are juxtaposing naturalism and magic, I'm assuming by naturalism you mean materialism. If that is incorrect, ignore me.)
Yes, but also no. Materialism is a way of looking at the world that itself encompasses the scientific method. Saying that the scientific method always proves materialism correct is a tautology.
I think what's different this time around is that we are saying materialism is likely incomplete, as opposed to 'wrong', which seems like a safe bet given our advancing understanding of the universe. Give the materialists that which is theirs.
>Saying that the scientific method always proves materialism correct is a tautology
If, as a hypothetical example, we could run an experiment where we removed half of 100 people's nervous systems and most of them kept acting like normal, then the scientific method would "prove" that materialism isn't correct (unless, of course, someone came along and found that what actually makes people behave as they do isn't their nervous system).
There is nothing making the scientific method unable to ascertain whether there is more to the universe than the physical things in it. The scientific method just fails, again and again, to reach the opposite conclusion.
> If, as a hypothetical example, we could run an experiment where we removed half of 100 people's nervous systems and most of them kept acting like normal, then the scientific method would "prove" that materialism isn't correct
Ah, you yourself are falling into the old failing of combating materialism: using materialist methods and materialist measures. Trying to prove materialism wrong within materialist frameworks is a fool's errand. As mentioned above, it has failed time and time again, and I'm fairly confident it would fail in your example too.
If the argument is "X isn't encapsulated in the material", then surely removing the material should leave X intact. The other option is experimenting on what actually encapsulates X, but non-materialists have a tendency to say that what actually encapsulates X cannot be interacted with or observed.
Yet they somehow claim that this non-interactable, non-observable stuff is the actual mechanism by which things work. Which raises the question of how they reached that answer to begin with, since it's non-interactable and non-observable.
Unless you have good reason to believe there is more to it, you don't attach additional meta-properties that no one can investigate, even from first principles (and this part is important, because it could be that the investigation methods simply haven't gotten there yet).
There are none. Non-materialist sciences are horribly underdeveloped. We have two options for exploring solutions to problems we may suspect non-materialist answers to:
1) Try materialism over and over again anyway hoping it will eventually solve the problem.
2) Develop a new discipline starting with the axioms of the problem at hand.
Number 1 has been so successful and provided so much work for scientists that any problem it doesn't work for mostly gets ignored.
I'm not saying materialism is necessarily wrong or bad BTW, just that it has a limit. It starts and ends at the perimeter of shared *human* experience.
Perhaps the greatest trick that materialism has pulled off is conflating "natural" with "material." To the idealist, mind -- this experiential fabric that is directly and unmistakably apprehensible -- is perfectly natural, and so idealism is "naturalism." The in-principle-unobservable abstraction called "matter" is what's spooky and unnecessary.
Struggling hard to avoid a particular outcome ("*-worshipping") makes it harder to be completely unbiased and look where the raw data is pointing. That's why the Enlightenment, in setting itself in direct opposition to the Church, ended up with materialism (though it first made a foray into Cartesian dualism).
Examine your own experience. Pinch yourself. Attempt to deny the salience of that experience. Now attempt to explain that subjective experience arising from pure matter.
This has largely been solved by computation, for me and many others.
In this interpretation, the brain is a computing machine that decodes signals from the outside world into various internal forms, akin to, say, the in-memory representation of a data structure representing an image being observed by an image sensor. Subjective feelings are then the result of a certain part of the brain analyzing other parts of the brain.
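That picture can be caricatured in a few lines of Python. This is purely a toy sketch under my own naming (the `Agent` class, its `perceive` and `introspect` methods, and the "intensity" measure are all invented for illustration), showing one subsystem decoding a stimulus into an internal form and a second subsystem reporting on the first's state:

```python
# Toy caricature of "one part of the system analyzing another part".
# All names and structure here are illustrative, not anyone's actual model.

class Agent:
    def __init__(self):
        self.internal = {}  # decoded representations of the outside world

    def perceive(self, stimulus):
        # "Decode" an outside signal into an internal representation,
        # like an in-memory data structure built from sensor input.
        self.internal["last"] = {"raw": stimulus, "intensity": len(stimulus)}

    def introspect(self):
        # A second subsystem inspecting the first subsystem's state --
        # the analogue of one brain region analyzing other regions.
        state = self.internal.get("last")
        if state is None:
            return "I notice nothing."
        return f"I notice something of intensity {state['intensity']}."

a = Agent()
a.perceive("bright red flash")
print(a.introspect())  # prints: I notice something of intensity 16.
```

The point of the caricature is only structural: the "report about experience" is itself just more computation over internal state, not a separate ontological ingredient.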
All of the various quibbles about "qualia" and "p-zombies" and such seem to just be conceptions that beg the question. Sure, we can imagine or conceive of a being which reacts to stimuli and reasons without having internal feelings, but there is no reason to actually assert that such a being is actually possible. It is very possible that feelings/"qualia" are a necessary component/by-product of a computing system capable of general intelligence and self-reflection.
In the Mary's Room thought experiment, it's quite possible that if Mary knows everything that there is to know about the physics of light and the neuroscience of color perception, she can literally cause herself to imagine the color red, or ultra-violet, so that she will not be at all surprised when she encounters actual red for the first time.
In the Chinese Room thought experiment, the Room (homunculus + books) quite possibly understands Chinese in the same sense as a Chinese-speaking human does, even if the homunculus inside doesn't.
You assume that conscious experience arises "ex nihilo". You are saying that something of a different ontological category "emerges" from the mechanism. I'm afraid the onus is on you to describe the process of formation; vaguely waving your hands in the direction of strong emergence is nothing more than saying "and then there is magic".
You raise the Chinese room thought experiment, but it is orthogonal to the point at hand. I believe the machine in the Chinese room thought experiment is conscious and that says little about where I might imagine consciousness comes from.
There is nothing in need of explanation. Consciousness is what consciousness does.
Just like a computer can sort numbers, a human brain can produce thoughts and speech, and describe itself to itself, which we call consciousness.
A machine that would both (a) have enough information about the workings of the world, and (b) have the right algorithms for predicting how to influence human beings and other conscious animals would, I believe, be able to turn this same predictive ability on itself and come up with what we call "conscious experiences".
While I can't claim it's impossible that there is more to it than that (perhaps only beings imbued with transcendent souls by a god can actually have conscious experience - that is not ultimately disprovable, after all), I also don't see any reason to imagine that there MUST be something like "consciousness" that is apart from complex computation.
I used to agree. But when we examine the nature of our own experience there is clearly something additional that is not described by matter interacting with itself. The only solution that makes sense to me is that matter has intrinsic consciousness that varies by degree as we span from atom to brain. Otherwise you have to imagine that the feeling that accompanies our day to day experience arises from nothing out of the matter from which we are constructed. That seems more magical than adding consciousness in at the base level as an axiom.
Does one particle have a temperature as well? Does matter have transistor-ness or Linux-ness?
Computations emerge from physical laws. If consciousness is "just" a complex computation, then it can emerge by the exact same process as Linux emerges from electrons and rock.
I will also note that, whatever else it means, consciousness implies some kind of identity - this human vs that human, this rock not that rock. But, this means that there can be no intrinsic physical property related to consciousness at the elementary particle level, as all electrons are perfectly identical, all protons are perfectly identical and so on. If electrons were to have some property of identity, some minimal quantum of consciousness, and so if individual electrons were different from one another, quantum mechanical statistics would look entirely differently and so the world would be entirely different.
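The identical-particles point can be made concrete with standard textbook quantum mechanics (nothing here is specific to this thread). For two electrons in single-particle states \(\phi_a\) and \(\phi_b\), the joint wavefunction must be antisymmetric under exchange:

```latex
\psi(x_1, x_2) = \frac{1}{\sqrt{2}}
  \left[ \phi_a(x_1)\,\phi_b(x_2) - \phi_b(x_1)\,\phi_a(x_2) \right],
\qquad
\psi(x_2, x_1) = -\,\psi(x_1, x_2).
```

Swapping the two electrons only flips the sign, so \(|\psi|^2\) and every observable are unchanged: no measurement can tell "electron 1" from "electron 2". If electrons carried some extra distinguishing property, this antisymmetrization, and with it Fermi-Dirac statistics and the Pauli exclusion principle, would not hold.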
Of course, you can still ascribe some non-physical, transcendental concept of consciousness to each electron, a soul of its own, as arbitrarily complex as you want to believe it, and there will never be any way to prove or disprove its existence.
Temperature is an emergent phenomenon: a value that describes the bulk behaviour of atoms.
But the feeling of something is a different thing entirely; it is an entirely different category of thing. It isn't a number; it isn't a thing. Do what I said in the first instance: pinch yourself. Feel the sensation, and appreciate that there is something it is like to experience the sensation. Then try to understand how the experience can arise from the matter. When you argue this point entirely within the confines of the abstract machine of scientific reason, you lose connection with the only piece of information you can ever be certain of: that things feel like something; qualia are real. All other things you may think about the world are guesses.
On your second point, I believe that the physical world is entirely an expression of consciousness. So it follows that the study of physics is the study of consciousness from an external viewpoint. The action of consciousness is physics from the external viewpoint and what we feel is what physics feels like from the inside. So if quantum mechanics proves that there is no identity to this electron rather than that electron, then it follows that this electron has the same consciousness as that electron. Because fundamentally matter and consciousness are the same thing.
When you take a strong materialist line you are disagreeing with a great number of highly influential and deep thinkers with academic credentials as long as your arm. You are also arguing against one of the favoured viewpoints of contemporary philosophers.
The position you're describing is either solipsistic (I am the only thing that exists, all else is illusion before me) or it is religious (we are all Brahman, and the division of the world into things is Maya, illusion).
I fully agree that there is no way to prove this position is wrong, but I think you also have to accept that you can't prove that materialism is wrong. I would say materialism is more useful than solipsism or transcendentalism, but this is a subjective assessment.
Are you referring to the person sending its question to the Chinese Room as "the guy being duped"?
If so, if they pose a question in Chinese and obtain a meaningful answer in Chinese, in what way are they "being duped"? You would only call them "being duped" if you believe that the answer is somehow meaningfully different from what a real Chinese-speaking human would have given, which I and many others do not accept.
He is duped because the system has no understanding, yet he believes it does[0]. A counterfeiter who evades detection is not a mint.
[0] I believe that the original thought experiment was intended to lead to this conclusion, but in popular culture and in the above post, is marshaled towards the opposite end.
The whole point of Turing's original Imitation Game thought experiment, which Searle recast in terms of understanding Chinese rather than the more abstract notion of "thought", is that there is no reason to distinguish between what a person who answers questions does and what a machine that would give the same answers does: they are both "thought" by any possible measure, as long as we accept that they produce the exact same outcomes.
Similarly, as long as we accept that the person outside the room can't distinguish by any means whether inside the room there is a speaker of Chinese or a speaker of English following the magical Chinese-answering algorithm, the distinction is, by definition, meaningless. There is no 'duping', because the notion of 'understanding Chinese' as apart from 'running the algorithm' is meaningless.
I don’t know. To me, to assert that there is no meaningful difference between a thing and what we know to be an artificial imitation of that thing is to assert that there’s no meaningful difference between truth and lies, so long as the lies are convincing enough.
Well, what does "correct" mean here? Colors are a construct of the human mind, whichever way you put it.
Now, you could devise some tests where you look at a "white" piece of paper (you conduct a survey of 100 people to establish whether it is pure white or tinted) and you look at it through each eye, and now if one eye sees it as pure white and the other as reddish or blueish, you know that the eye that sees it as pure white is "correct"; possibly one eye sees it as reddish and the other as blueish, and then neither eye is "correct". Of course, this defines "correct" as "in agreement with the eye sight of most other people".
You could also choose to dig deeper and have many complex tests done to determine whether there are differences in the structure of the retinas of the two eyes that could explain the difference (e.g. perhaps one retina has some malformation), and then you can decide that the eye without the malformation, if any, is "correct". That eye could still be more skewed in your perception according to the first test, though, since the brain may have already adjusted.
Alternatively, you could study the neural architecture that is responsible for color perception and suss out the differences between the two images, find out what is the difference between them, and decide which is correct based on that (are they different output images for the same input, and is one receiving any other input that should not be related? are they receiving different inputs? how does your neural architecture differ from that of 100 other people? etc.)
Of course, we entirely lack the ability to do the third test, and mostly lack this ability for the second test as well, so from a purely practical point of view, you would be stuck with the first test to determine this.
The exact same question could be posed of a color-reporting computer system, by the way. Say you have two cameras and an image analyzer that can print out the color of the central pixel in the images from both cameras (in RGB). Pointing the two cameras at the same object, you get a print out that says `LEFT (R250,G255,B255); RIGHT (R255,G255,B250)`. Which of the two is correct?
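A minimal sketch of why that question is underdetermined (the reference value and the `deviation` helper are my own, for illustration): "correct" can only mean "closer to some agreed reference", e.g. a survey-established pure white.

```python
# "Correct" only relative to a reference -- here, pure white as
# established by something like the 100-person survey above.
REFERENCE = (255, 255, 255)

def deviation(rgb, reference=REFERENCE):
    """Sum of per-channel absolute differences from the reference."""
    return sum(abs(c - r) for c, r in zip(rgb, reference))

left = (250, 255, 255)   # LEFT camera's central pixel
right = (255, 255, 250)  # RIGHT camera's central pixel

print(deviation(left), deviation(right))  # prints: 5 5
```

Both cameras deviate from the reference by the same total amount, just in different channels, so without some further standard to appeal to, neither printout is privileged over the other.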