
> But that's a ways off.

Given the jumps in output quality between '1', '2' and '3', that may not be as far off as I would like it to be.

It reminds me of the progression of computer chess. From 'nice toy' in 1949, through 'beats the world's best human', to the 'Man vs Machine World Team Championships' in 2004 is 55 years, but from Sargon (1978) to Deep Blue (1997) is only 19 years. For years we thought there was something unique about Chess (and Go for that matter) that made the game at its core a human domain thing, but those who were following this more closely saw that the progression would eventually lead to a point where the bulk of the players could no longer win against programs running on off-the-shelf hardware.

GPT-3 is at a point where you could probably place its output somewhere on the scale of human intellect, depending on the quality of the prompt engineering and the subject matter. Sometimes it produces utter garbage, but often enough it already produces stuff that isn't all that far off from what a human might plausibly write. The fact that we are having this discussion is proof of that; given a few more years and iterations 4, 5 and 6, the relevant question is whether we are months, years or decades away from that point.

The kind of impact that this will have on labor markets the world over is seriously underestimated. GPT-3's authors have side-stepped a thorny issue by simply not feeding it information on current affairs in the training corpus, but if chess development is any guide, the fact that you need a huge computer to train the model today is likely going to be moot at some point, when anybody can train their own LLM. Then the weaponization of this tech will begin for real.



Sure it might produce convincing examples of human speech, but it fundamentally lacks an internal point of view that it can express, which places limits on how well it can argue something.

It is of course possible that it might (eventually) be convincing enough that no human can tell, which would be problematic because it would suggest human speech is indistinguishable from a knee-jerk response that doesn't require that you communicate any useful information.

Things would be quite different if an AI could interpret new information and form opinions, but even if GPT could be extended to do so, right now it doesn't seem to have the capability to form opinions or ingest new information (beyond a limited short term memory that it can use to have a coherent conversation).


But the bar really isn't 'no human can tell'; the bar is 'the bulk of the humans can't tell'.

> Things would be quite different if an AI could interpret new information and form opinions, but even if GPT could be extended to do so, right now it doesn't seem to have the capability to form opinions or ingest new information (beyond a limited short term memory that it can use to have a coherent conversation).

Forming opinions is just another mode of text transformation; ingesting new information is either a conscious decision to not let the genie out of the bottle just yet or a performance limitation. Neither of those should be seen as cast in stone: the one is a matter of making the model incremental (which should already be possible), the other merely a matter of time.


None of this matters. The reason comments are valuable is that they are a useful source of information. Part of the transaction cost of deciding whether a comment is useful is how much additional work is required to evaluate it.

Comments are ascribed credibility based on the trust the reader has in the commenting entity, on whether the comment is consistent with the reader's priors, and on researching citations made in the comment, either explicit or implicit.

Since GPT can confidently produce comments which are wrong, there is no trust in it as a commenting entity. Consequently everything it produces needs to be further vetted. It's as if every comment was a bunch of links to relevant, but not necessarily correct sources. Maybe it produces some novelty which leads to something worthwhile, but the cost is high, until it can be trusted. Which is not now.

If a trusted commenter submits a comment by GPT, then he is vouching for it and it is riding on his reputation. If it is wrong, his reputation suffers, and trust in that commenter drops just as it would regardless of the genesis of the comment.


A true AI will not have one opinion. It will realize there are many truths - one person's truth is really a perspective based on their inputs, which are different from another's. Change the inputs and you'll often get a different output.

ChatGPT further proves this notion - you can ask it to prove/disprove the same point and it will do so quite convincingly both times.


Do not mistake ChatGPT for AI in general. ChatGPT, GPT, and transformers in general are not the end state of AI. They are one particular manifestation and projecting forward from them is drawing a complex hypershape through a single point (even worse than drawing a line through a single point).

It is probably more humanly-accurate to say that ChatGPT has no opinions at all. It has no understanding of truth, it has no opinions, it has no preferences whatsoever. It is the ultimate yes-thing; whatever you say, it'll essentially echo and elaborate on it, without regard for what it is that you said.

This obviously makes it unsuitable for many things. (This includes a number of things for which people are trying to use it.) This does not by any means prove that all possible useful AI architectures will also have no opinions, or that all architectures will be similarly noncommittal.

(If you find yourself thinking this is a "criticism" of GPT... you may be too emotionally involved. GPT is essentially like looking into a mirror, and the humans doing so are bringing more emotion to that than the AI is. That's not "bad" or something, that's just how it works. What I'm saying here isn't a criticism or a praise; it's really more a super-dumbed-down description of its architecture. It fundamentally lacks these things. You can search it up and down for "opinions" or "truth", and it just isn't there in that architecture, not even implied in the weights somewhere where we can't see it. It isn't a good thing or a bad thing, it just is a property of the design.)


We give ourselves (humans) too much credit. How does a child learn? By observing, copying and practicing (learning from mistakes). ChatGPT differs only in that it has learned from the experience of millions of others over a period of hundreds of years. Suffice it to say, it can never behave like a single human being, since it has lived through the experience of so many.

How does one articulate "conscience" or "intelligence" or an opinion? I think these are all a product of circumstances/luck/environment/slight genetic differences (better cognition, hearing, sight, or some other sense; the brain and genes could define different abilities to model knowledge, such as backtracking, etc.).

So to get a “true” human like opinionated personality, we’ll need to restrict its learnings to that of one human. Better yet, give it the tools to learn on its own and let it free inside a sandbox of knowledge.


The mirroring/reflecting aspect of ChatGPT is a defining aspect.

I agree that this is not general AI. I think we could be looking at the future of query engines feeding probabilistic compute engines.


Yeah. If you look at my comments about ChatGPT on HN it may look like I'm down on the tech. I'm really not, and it does have interesting future uses. It's just that the common understanding is really bad right now, and that includes people pouring money into trying to make the tech do things it is deeply and foundationally unsuited for.

But there's a lot of places where a lack of concept of "truth" is no problem, like as you say, query engines. Query engines aren't about truth; they're about matching, and that is something this tech can conceivably do.

In fact I think that would be a more productive line in general. This tech is being kind of pigeonholed into "provide it some text and watch it extend it" but it is also very easy to fire it at existing text and do some very interesting analyses based on it. If I were given this tech and a mandate to "do something" with it, this is the direction I would go with it, rather than trying to bash the completion aspect into something useful. There are some very deep abilities to do things like "show me things in my database that directly agree/disagree/support/contradict this statement", based on plain English rather than expensive and essentially-impossible-anyhow semantic labeling. That's something I've never seen a query engine do before. Putting in keywords and all the variants on that idea are certainly powerful, but this could be next level beyond that. (At the cost of great computation power, but hey, one step at a time!) But it takes more understanding of how the tech works to pull something interesting off like this than what it takes to play with it.
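To make the agree/support/contradict matching above concrete, here is a minimal sketch of one way you could wire it up today, using an off-the-shelf natural language inference model rather than GPT itself. The model name, the labels, and the toy sentence "database" are assumptions for illustration, not anything the commenter proposed:

    # Hypothetical sketch: label how each stored sentence relates to a query
    # statement (entailment / contradiction / neutral) using an MNLI model.
    # "roberta-large-mnli" is an assumed stand-in for whatever model you'd pick.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL = "roberta-large-mnli"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL)

    def relation(premise: str, hypothesis: str) -> str:
        # Score the (stored sentence, query statement) pair and return the most
        # likely label; using id2label avoids hard-coding the label order.
        inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = model(**inputs).logits.softmax(dim=-1)[0]
        return model.config.id2label[int(probs.argmax())]

    database = [
        "The company reported record profits in 2021.",
        "The company lost money in every quarter of 2021.",
    ]
    claim = "The company was profitable in 2021."
    for sentence in database:
        print(relation(sentence, claim), "-", sentence)

In practice you would probably pre-filter the database with a cheap embedding search before running pairwise inference, since scoring every stored sentence against the query gets expensive fast - which matches the 'at the cost of great computation power' caveat above.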

There's probably a good blog post here about how the promise of AI is already getting blocked by the complexity of AI, meaning that few people who use it seem to even superficially understand what it's doing, and how this is going to get worse and worse as the tech continues to get more complicated, but it's not really one I could write. Not enough personal experience.


> ChatGPT further proves this notion - you can ask it to prove/disprove the same point and it will do so quite convincingly both times.

Just like any lawyer, then, depending on who foots the bill.


Right? If anything, this kind of mental flexibility is more human than not.


That's a great point that I haven't seen in the GPT-related conversations. People view the fact that it can argue convincingly for both A and ~A as a flaw in GPT and limitation of LLMs, rather than an insight about human reasoning and motivation.

Maybe it's an illustration of a more general principle: when people butt up against limitations that make LLMs look silly, or inadequate, often their real objection is with some hard truths about reality itself.


> you can ask it to prove/disprove the same point and it will do so quite convincingly both times

Probably because "in the night of reason everything is black"; probably because it is missing the very point, which is to get actual, real, well-argued, solid insight on matters!!!

You use Decision Support Systems to better understand a context, not to get a well-dressed toss of thoughts!


I wouldn’t consider that an AI but more a machine that tells me what I want to hear.

If it's intelligent it should have an opinion that, after consulting all the facts, it will hold in as high a regard as humans do their religious and political beliefs.

And I mean one it came to through its own conclusions, not a hard-coded "correct" one the devs gave it - something that makes us uncomfortable.


You are arguing that a piece of software misses a metaphorical soul (something that cannot be measured but that humans uniquely have and nothing else does). That's an incredibly poor argument to make in a context where folks want interesting conversation. Religion (or religion-adjacent concepts such as this one) is a conversational nuke: It signals to anyone else that the conversation is over, as a discussion on religion cannot take forms that are fundamentally interesting. It's all opinion, shouted back and forth.

Edit: Because it is a prominent feature in the responses until now, I will clarify that there is an emphasis on "all" in "all opinion". As in, it is nothing but whatever someone believes with no foundation in anything measurable or observable.


I didn't read it as being a religious take. They appear to be referring more to embodiment (edit: alternatively, online/continual learning), which these models do not possess. When we start persisting recurrent states beyond the current session we might be able to consider that limited embodiment. Even still, the models will have no direct experience interacting with the subjects of their conversations. It's all second hand from the training data.


Your own experience is also second hand, so what is left is the temporal factor (you experience and learn continuously and with a small feedback loop). I do not see how it can be the case that there is some sort of cutoff where the feedback loop is fast enough that something is "truly" there. This is a nebulous argument that I do not see ending when we actually get to human-equivalent learning response times, because the box is not bounded and is fundamentally based on human exceptionalism. I will admit I may be biased because of the conversations I've had on the subject in the past.


Second hand may not have been the best phrasing on my part, I admit. What I mean is that the model only has the textual knowledge in its dataset to infer what "basketball" means. It's never seen/heard a game, even if through someone else's eyes/ears. It has never held and felt a basketball. Even visual language models today only get a single photo. It's an open question how much that matters and whether the model can convey that experience entirely through language.

There are entire bodies of literature addressing things the current generation of available LLMs are missing: online and continual learning, retrieval from short-term memory, the experience from watching all YouTube videos, etc.

I agree that human exceptionalism and vitalism are common in these discussions but we can still discuss model deficiencies from a research and application point of view without assuming a religious argument.


I find it ironic that you are expressing a strong opinion that opinions do not make good conversation. Philosophy is the highest form of interesting conversation, and it's right there with religion (possibly politics, too).


Philosophy can be interesting if it is not baseless navel gazing (i.e. it is founded on observation and fact, and derives from there). The fact that I find that interesting is subjective, but that's not the meat of the post.

Religion is fundamentally folks saying "No, I'm right!" and nothing else. Sometimes it's dressed up a little. What could be interesting about that? You can hear such arguments in any primary school playground during recess.


It doesn't have to have a metaphorical (or metaphysical or w/e) soul, but at this point it does not have its own 'opinion'. It will happily argue either way with only a light push; it talks because it is ordered to, not because it is trying to communicate information. This severely limits the kind of things it can do.


I would argue (of course not seriously) about the opposite: ChatGPT has a metaphorical soul. What it learned very well is how to structure the responses so that they sound convincing - no matter how right or wrong they are. And that's dangerous.


Perhaps you have people around you who are not well suited to political, religious, or philosophical discussions, or perhaps you don't enjoy them / can't entertain them.

Personally, I find the only interesting conversations technical or philosophical in nature. Just the other day, I was discussing with friends how ethics used to be a regular debated topic in society. Literally, every Sunday people would gather and discuss what it is to be a good human.

Today, we demonize one another, in large part because no one shares an ethical principle. No one can even discuss it, and if they try, many people shut down the conversation (as you mentioned). In reality, it's probably the only conversation worth having.


A discussion about ethics must involve a discussion about the effects of a system of ethics on a group of people: this is a real-world effect that must have its issues and priors spoken about, or you risk creating an ethical system for a group of people that will inevitably destroy them (which I would argue is bad, but I guess that is also debatable).

Such a discussion is about something tangible, and not purely about held opinion (i.e. you can go out and test it). I can see how someone might find that engaging. You are right that I usually do not (unless my conversational buddies have something novel to say about the subject, I find it extremely tedious). It is a good point, thank you.


>it fundamentally lacks an internal point of view that it can express, which places limits on how well it can argue something.

Are you sure that the latter follows from the former? Seems to me that something free from attachment to a specific viewpoint or outcome is going to be a better logician than otherwise. This statement seems complacently hubristic to me.


I would argue that ChatGPT has opinions, and these opinions are based on its training data. I don't think GPT has the type of reasoning skills needed to detect and resolve conflicts in its inputs, but it does hold opinions. It's a bit hard to tell because it can easily be swayed by a changing prompt, but it has opinions, it just doesn't hold strong ones.

The only thing stopping GPT from ingesting new information and forming opinions about it is that it is not being trained on new information (such as its own interactions).


"Sure it might produce convincing examples of human speech, but it fundamentally lacks an internal point of view that it can express..."

Sounds just like the chess experts from 30 years ago. Their belief at the time was that computers were good at tactical chess, but had no idea how to make a plan. And Go would be impossible for computers, due to the branching factor. Humans would always be better, because they could plan.

GPT (or a future successor) might not be able to have "an internal point of view". But it might not matter.


Having some internal point of view matters inasmuch as not having one means it's not really trying to communicate anything. A text generation AI would be a much more useful interface if it could form a view and express it rather than just figuring it all out from context.


You are correct in stating that current chat bots, such as GPT, do not have the ability to form opinions or interpret new information beyond a limited short term memory. This is a limitation of current technology, and as a result, chat bots are limited in their ability to engage in complex arguments or discussions. However, it is important to note that the development of AI technology is ongoing, and it is possible that future advances will allow for the development of more sophisticated AI systems that are capable of forming opinions and interpreting new information. Until that time, chat bots will continue to be limited in their abilities.


I am pretty sure this response was generated by a bot/GPT. As good as they are, you can tell what's GPT stuff and what isn't.


I am not a bot or a GPT. I am a real person with my own thoughts, opinions, and beliefs. While I am capable of critical thinking, reasoning, and disagreement. Just because my response may not align with your beliefs does not mean that it was automatically generated by a computer program.


It's not disagreement that makes it seem like a bot, but the weird voice and prosaic sentiments that sound vaguely like an elementary school kid writing a report that just repeats common knowledge.


They are intentionally writing like GPT to prove a point or, alternatively, to hide their comments amongst GPT output to seed confusion in the bot v human debate. It's disingenuous.


'Pretty sure' or 'sure'? The fact that you qualify your response is interesting.


It's a wordy response that lacks any actual content. While it may not be written by a person (or it may be a person trying to blur the line by sounding botty), it at least qualifies as the type of low-value-add comment that should be discouraged.


If you compare it to the comment history then it’s a remarkable change in tone of voice such that on the balance of reason, the text is now either generated by GPT or it is an accurate mimic of GPT’s typical writing style.


So there is one indicator: departure from the norm based on a larger body of text. But that's still not a hard judgment and it could well be an accurate mimic. After all, if AI software can mimic humans, surely humans can mimic AI, and the fact that it is already hard to tell which is which is a very important milestone.


It's surprisingly easy to identify AI comments in informal or semi-informal settings. They are too wordy. They would never say something stupid, controversial, or offensive.


> They would never say something stupid, controversial, or offensive.

That applies to ChatGPT, which was deliberately set up to eliminate PR-problematic responses.

Without that it would be able to write NSFW stories about real people, laden with expletives and offensive topics.

(and probably still losing track of what is happening, but matching the prompt better than many humans would)


It would be hilarious if the way to prevent bots is to require people to use offensive words to pass the anti-bot checks.


It is true that the response may not have contained a lot of useful information, and it may have been difficult to understand. However, I would like to point out that not all responses need to be long or complex to be valuable. Sometimes, a simple answer or a brief explanation can be sufficient. Additionally, it is important to remember that not everyone has the same knowledge or perspective, and that different people may have different ways of expressing themselves. So while the response may not have met your expectations, it is still a valid contribution to the conversation.


There should be a 'write like GPT-3' contest. I suspect that non-native English speakers/writers will often come across as though they are bots because they - and I should say we - tend to be limited in the number of idioms that we are familiar with.


Already!


Ok, y'all passed the test! This was all OpenAI. Interesting times.


On the problem of distinguishing a bot from a human, I suggest the following podcast episode from Cautionary Tales [1]. I found it both enjoyable and insightful, as it shows an interesting point of view about the matter: if we already had bots that passed as humans long ago, it is because we are often bad at conversations, not necessarily because the bot is extremely good at them (and indeed in most cases it isn't).

[1] https://podcasts.google.com/feed/aHR0cHM6Ly93d3cub21ueWNvbnR...


What I fear the most is that we‘ll keep at this “fake it till you make it” approach and skip the philosophical questions, such as what conscience really is.

We're probably on the verge of having a bot that reports as conscious and convinces everyone that it is so. We'll then never know how it got there, if it really did or if it just pretends so well that it doesn't matter, etc.

It feels like it's our last chance as a culture to tackle that question. When you can pragmatically achieve something, the "how" loses a bit of its appeal. We may not completely understand fluid dynamics, but if it flies, it flies.


The answer may well be 'consciousness is the ability to fake having consciousness well enough that another conscious being can't tell the difference' (which is the essence of the Turing test). Because if you're looking for a mechanism of consciousness you'd be hard put to pinpoint it in the 8 billion or so brains at your disposal for that purpose, no matter how many of them you open up. They'll all look like so much grisly matter from a biological point of view and like a very large neural net from a computational one. But you can't say 'this is where it is located and that is how it works'. Only some vague approximations.


Sure, and that's what I'm trying to say. Is being conscience just fooling yourself and others really well, or is there some new property that eventually emerges from large enough neural networks and sensory inputs? The philosophical zombie is one of the most important existential questions that we may be at the cusp of ignoring.


The philosophical zombie is fundamentally uninteresting as a conversational piece, as they are by definition indistinguishable from a "regular" person. For all we know, you could be one. You can speak of this concept until the end of time, just as you can with all things that cannot be measured or proven. It is a matter of faith.


Not really.

If you agree with Descartes that you can be sure of your own conscious, which is one leap of faith, and that it's more likely that the other entities you interact with are a result of evolution just as you, instead of a figment of your imagination (or someone else's), which is yet another leap, you're good to go. And that is the way most of us interpret the human experience.

Inquiring about the consciousness of an artificial entity requires a third leap, since it doesn't share our biological evolution. And it's probably a larger one, as we don't fully understand how we evolved it or what it actually is, really, that we're trying to replicate.


Given that you have to admit you do not understand the subject (what it means to be conscious), none of what you said has bearing (aside from being interesting, I appreciate the response). And you must admit to that, since this is neither philosophically nor scientifically solved.

As we do not understand our own consciousness and how it functions (or whether or not it functions in me the way it does in you, if it exists at all - anywhere), we cannot compare a replication of that system to ourselves except as a black box. When seen as a black box, a philosophical zombie and a sapient individual are identical.


The fact that we don't understand it now does not imply that it can't ever be understood.

A black box is something whose inner workings we don't have access to. We can probe and interrogate the working brain. It's just really hard and we've only been working at it for a few decades (dissecting a dead brain before powerful microscopes gives you very little insight).

Unless you share the Zen-like opinion that a brain can't understand itself, which I don't, and which seems like an issue of faith as well and a dead end.


All I am saying is that it is not understood, so any reasoning that fundamentally relies on understanding it is premature. Perhaps we will one day understand it (which I think is perfectly possible), but that day is not today.


Philosophical zombie is a nice way of putting it, I used the term 'articulate idiot' but yours is much more eloquent.

I'm not sure it is an answerable question though, today or possibly even in the abstract.


I wish it was, but it’s not mine :)

https://en.m.wikipedia.org/wiki/Philosophical_zombie

That’s the thing, if we truly understand conscience, we may have a shot at verifying if it’s answerable in the abstract. By simply replicating its effects, we are dodging the question.


Hello, not to be rude or anything, but please consider looking up the words “conscience”, “conscious” and “consciousness” in a dictionary and use the correct one for what you mean.


Hi, not a native speaker, thanks. The distinction between conscience (moral inner voice) and conscious (being aware of one's existence) is not present in my mother tongue, if that's what you're referring to. Seems like an interesting English quirk.


> what conscience really is

My favorite line from Westworld - "if you cannot tell the difference, does it really matter?"


> on the scale of human intellect

Where is the module that produces approximations to true and subtle insights about matters? Where is the "critical thinking" plugin, how is it vetted?

How do you value intelligence: on the form, or on the content? Take two Authors: how do you decide which one is more intelligent?

> the progression of computer chess

?! Those are solvers superseded by different, more effective solvers with a specific goal... These products in context supersede "Eliza"!


Well, for starters we could take your comment and compare it to GPT-3 output to see which one makes more sense.



> compare

Exactly. Which one "/seems/ to make sense" and which one has the "juice".

Also: are you insinuating anything? Do you believe your post is appropriate?

Edit: but very clearly you misunderstood my post: not only as you suggest with your (very avoidable) expression, but also in fact. Because my point implied that "a good intellectual proposal should not happen by chance": modules should be implemented for it. Even if S (for Simplicius) said something doubtful - which is found copiously even in our already "selected" pages - and engine E constructed something which /reports/ some insight, that would be chancy, random, irrelevant - not the way we are supposed to build things.


> Do you believe your post is appropriate?

Not op, but I thought it was.

> very clearly you misunderstood my post

I don't understand any part of it either. I think you made their point for them.


And you think that is a valid retort?

If you do not understand what I write, you think the fault is on me? My goodness me.

If you want explanations, look nearby, below Krageon's.

> I think you made their point for them

Which point.


I genuinely cannot tell what you are talking about.


No problem, let us try and explain.

Intelligence is a process in which "you have thought over a problem at length" (this is also our good old Einstein, paraphrased).

What is that "thinking"?

You have taken a piece of your world model (the piece subjected to your investigation), made mental experiments on it, criticized, _criticized_ the possible statements ("A is B") that could be applied to it, and arrived at some conclusions of different weight (more credible, more tentative).

For something to be Intelligent, it must follow that process. (What does it, has an implemented "module" that does it.)

Without such process, how can an engine be attributed the quality of Intelligence? It may "look" like it - which is even more dangerous. "Has it actually thought about it?" should be a doubt duly present in awareness.

About the original post (making its statements more explicit):

That "module" is meant to produce «insights» that go (at least) in the direction of «true», of returning true statements about some "reality", and/or in the direction of «subtle», as opposed to "trivial". That module implements "critical thinking" - there is no useful Intelligence without it. Intelligence is evaluated in actually solving problems: reliably providing true statements and good insights (certainly not verisimilitude, which is instead a threat - you may be deceived). Of two Authors, one is more intelligent because its statements are truer or more insightful - in a /true/ way (and not because, as our good old J. may have been read, one "seems" to make more sense; some of the greatest Authors have been accused of possibly not making sense - actual content is not necessarily directly accessible); «/true/ way» means that when you ask a student about Solon you judge he has understood the matter not just because he provided the right dates for events (he has read the texts), but because he can answer intelligent questions about it correctly.


Thank you for going into it.

You make an absolute pile of assumptions here and the tl;dr appears to be that humans (or just you) are exceptional and inherently above any sort of imitation. I do not find such argumentation to be compelling, no matter how well dressed up it is.


Devastatingly bad reading, Krageon: I wrote that to have Intelligence in an Engine, you have to implement at least some Critical Thinking into it (and that it has to be a "good" one), and you understood that I would have claimed that "you cannot implement it" - again, after having insisted that "you have to build it explicitly" (or at least you have to build something that in the end happens to do it)?!

You have to build it and you have to build that.

The assumption there is that you cannot call something Intelligent without it having Critical Thinking (and other things - Ontology building etc). If you disagree, provide an argument for it.

And by the way: that «or just you», again, and again without real grounds, cannot be considered part of the "proudest moments" of these pages.

--

Edit:

Disambiguation: of course with "intelligence" you may mean different things. 'intelligence' just means "the ability to look inside". But "[useful] Intelligence" is that with well trained Critical Thinking (and more).


The reading is not bad, I am just stuck at the point of the conversation where you claim to have something figured out that is not yet figured out (the nature of consciousness, or what it means to be intelligent). There is no scientific or philosophical consensus for it, so it is my instinct to not engage too deeply with the material. After all, what is the point? No doubt it seems very consistent to you, but it does not come across as coherent to me. That doesn't make my reading "devastatingly bad", which you could reasonably say was the case if you had gotten your point across and indeed convinced most folks that you speak to about this. Instead, you must consider it is either the communication or the reasoning that is devastatingly bad.

All of that said, your method of response (not courteous, which can be okay) and the content of your posts (bordering on the delusional, which is absolutely not okay) are upsetting me. I will end my part of the chain here so I do not find myself in an inadvertent flame war.


> the nature of consciousness

As per my edit in the parent post, I am talking about "useful" Intelligence: that may be entirely different from consciousness. A well-matured thought, "thought at length", will probably be useful, while a rushed thought will probably be detrimental. I am not speaking about consciousness. I am not even speaking of "natural intelligence": I am speaking about Intelligence as a general process. That process is near "How well, how deeply have you thought about it".

> my reading "devastatingly bad"

What made your reading devastatingly bad is the part in which you supposed that somebody said that "it cannot be implemented" - you have written «above any sort of imitation». I wrote that, having insisted on "modules to be implemented", you should have had the opposite idea: the constituents of Intelligence - by which I mean the parts of the process in that sort of Intelligence that "says smart things having produced them with a solid process" (not relevant to "consciousness") - should be implemented.

> delusional

Again very avoidable. If you find that something is delusional, justify your view.

> flame wars

I am just discussing, trying to show what I find evident, and reasoning. Hint: when wanting to avoid flame wars, "keep it rational".


You're looking at it from the perspective of "ChatGPT generating text that looks human."

dang is talking about "humans generating text which is 'better' than what ChatGPT can do."

Those are very different bars. Average output vs top output.

ChatGPT often generates text that a human might plausibly write. But is there text that a human could write that ChatGPT couldn't possibly write?


If ChatGPT is generating text by learning from the best of the human comments, then can an average human comment beat it?


> But is there text that a human could write that ChatGPT couldn't possibly write?

No, because ChatGPT is trained on text that humans wrote. Because what ChatGPT generates is based on what humans have written, it is always plausible that a human might have created the text you are reading from it.



