Hacker News | dontupvoteme's comments

Throw it at an LLM with the simple command "fix", highlight every character that has a delta, and send it back to them.

Add a grade in red at the top if you're feeling extra cheeky
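The delta-highlighting step above can be sketched with Python's `difflib`; the `[[...]]` marker style is my own invention, not anything the comment specifies:

```python
import difflib

def highlight_deltas(original: str, fixed: str) -> str:
    """Return `fixed` with every run of characters that differs from
    `original` wrapped in [[...]] markers (marker style is arbitrary)."""
    out = []
    matcher = difflib.SequenceMatcher(a=original, b=fixed)
    for op, _i1, _i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            out.append(fixed[j1:j2])
        elif j1 != j2:  # insert/replace runs; pure deletions leave no text to mark
            out.append(f"[[{fixed[j1:j2]}]]")
    return "".join(out)
```

Stripping the markers back out always recovers the LLM's corrected text, so the annotated copy stays lossless with respect to the "fixed" version.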


It's really maddening just how right the board was.


The majority of employees didn’t care about a lying CEO or alignment research: they wanted the stock payoffs, and Sam was the person offering them. At the end of the day, that’s what the coup came down to.

Now Sam is seen as fucking with said stock, so maybe that isn’t panning out. Amazing surprise.


It's funny to me to read now about employees of OpenAI being coerced or tricked or whatever. Didn't they threaten to resign en masse a few months ago, in total unquestioned support of Sam Altman? They pretty much walked into it, in my opinion.

That's not saying anything OpenAI or Altman do is excusable, no way. I just feel like there are almost no good guys in this story.


While true, it doesn't mean they were offering a better alternative.


I wish more people didn't expect an alternative before getting rid of a bad situation. Sometimes subtraction, rather than replacement, is still the right answer.


Well, appropriately enough, it was an AI movie that taught us a valuable lesson, namely that sometimes the only correct move is not to play.

It's such an insidious idea that we ought to accept people giving up promises they explicitly made once those rules get in the way of doing exactly what they were supposed to prevent. That's not anyone else's problem; that was the point! The people who can't do that are supposed to align AI? They can't even align themselves.


I'll bet Emmett Shear would've been a fine CEO.


Doesn't really matter at this point because they gave literally zero info to the public or most of the company employees when they fired Altman. Almost no one sided with them because they never attempted to explain anything.


it's called "power" and money is merely a proxy of it.

we talk dollars all day long but we haven't quantified power nearly as well


Are they really the ones with the best chance now though?

They're basically owned by Microsoft, they're bleeding technical/ethical talent and credibility, and, most importantly, Microsoft Research itself is no slouch (especially post-DeepMind poaching): things like Phi are breaking ground on planets that OpenAI hasn't even touched.

At this point I'm thinking they're destined to become nothing but a premium marketing brand for Microsoft's technology.


Could they have made it look less like Midler vs Ford?

"Midler was asked to sing a famous song of hers for the commercial and refused. Subsequently, the company hired a voice-impersonator of Midler and carried on with using the song for the commercial, since it had been approved by the copyright-holder. Midler's image and likeness were not used in the commercial but many claimed the voice used sounded impeccably like Midler's."

As a mostly casual observer of AI, even I was aware of this precedent.


What was the result of that? Did Ford or Midler end up winning?


Midler won, it’s a cornerstone case in protecting image/likeness.

In tech we’re used to IP law. In entertainment, there is unsurprisingly a whole area of case law on image and likeness.

Tech will need to understand this—and the areas of domain specific case law in many, many other fields—if AI is really to be adopted by the entire world.


>Jason (star emoji) (uk flag emoji) SaaStr LDN June 4-5 (star emoji) Lemkin

This person is famous?

sounds like he's mad about being sloppy seconds `:)`


The innovative power of the hangover


Ironically Microsoft is the one that's notoriously terrible at checking their "AI" products before releasing them.

Besides the infamous Tay, there was that apparently un-aligned Wizard-2 [or something like that] model from them, which got released by mistake for about 12 hours.


As an MS employee working on LLMs, that entire saga is super weird. We need approval for everything! Releasing anything without approval is quite weird.

We can’t just drop papers on arXiv. There is no way running your own Twitter, GitHub, etc. as a separate group would be allowed.

I checked fairly recently to see if the model was actually released again, it doesn’t seem to be; I find this telling.


Sydney was their best "let's just release it without guardrails" bot.

Tay was trivially racist, but boy was Sydney a wacko.


I was able to download a copy of that before they took it down. Silly.


Yeah it was already mirrored pretty quickly. I expect enough people are now running cronjobs to archive whitelists of HF pages and auto-cloning anything that gets pushed out.
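The cronjob setup the comment imagines could look something like the sketch below; the watchlist entries, mirror directory, and naming scheme are all hypothetical, but Hugging Face model repos really are plain git remotes, so a `git clone --mirror` captures everything that gets pushed:

```python
# Hypothetical archiver: clone any watched Hugging Face repo that isn't
# already mirrored locally. Run it from cron to auto-capture new pushes.
import subprocess
from pathlib import Path

WATCHLIST = ["microsoft/some-model"]  # made-up repo id, for illustration
MIRROR_DIR = Path("mirrors")

def clone_cmd(repo_id: str) -> list[str]:
    """Build the git command that mirrors one HF repo into MIRROR_DIR."""
    dest = MIRROR_DIR / repo_id.replace("/", "__")
    return ["git", "clone", "--mirror",
            f"https://huggingface.co/{repo_id}", str(dest)]

def sync(dry_run: bool = True) -> list[list[str]]:
    """Return (and, unless dry_run, execute) clone commands for unmirrored repos."""
    cmds = []
    for repo in WATCHLIST:
        if not (MIRROR_DIR / repo.replace("/", "__")).exists():
            cmds.append(clone_cmd(repo))
            if not dry_run:
                subprocess.run(clone_cmd(repo), check=True)
    return cmds
```

A crontab entry invoking this every few minutes would catch anything that's public even briefly, which is presumably why pulled models keep resurfacing.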


One could argue that at this point OpenAI is being Extended and Embraced by Microsoft and is unlikely to have much autonomy or groundbreaking impact one way or another.


Turns out we already have alignment, it's called capitalism.


This is true and we do not talk about it enough. Moreover, capitalism is itself an unaligned AI, and understanding it through that lens clarifies a great deal.


oh no, it's just a real world reinforcement model


People experience existential terror from AI because it feels like massive, pervasive, implacable forces that we can't understand or control, with the potential to do great harm to our personal lives and to larger social and political systems, where we have zero power to stop it or avoid it or redirect it. Forces that benefit a few at the expense of the many.

What many of us are actually experiencing is existential terror about capitalism itself, but we don't have the conceptual framework or vocabulary to describe it that way.

It's a cognitive shortcut to look for a definable villain to blame for our fear, and historically that's taken the form of antisemitism, anti-migrant, anti-homeless, even ironically anti-communist, and we see similar corrupted forms of blame in antivax and anti-globalist conspiracy thinking, from both the left and the right.

While there are genuine x-risk hazards from AI, it seems like a lot of the current fear is really a corrupted and misplaced fear of having zero control over the foreboding and implacable forces of capitalism itself.

AI is hypercapitalism and that is terrifying.


Ted Chiang on the Ezra Klein podcast said basically the same thing:

AI Doomerism is actually capitalist anxiety.


Probably not even that specific, more like an underlying fear that 8 billion people interacting in a complex system will forever be beyond the human capacity to grasp.

Which is likely true.


So, this has happened multiple times. Its best-case example is eugenics, where "intellectuals" believed they could determine what the best traits are in a complex system and prune society to achieve some perfect outcome.

The problem, of course, is that the system is complex and filled with hidden variables, and humans will tend to focus entirely on the phenotypes which are easiest to observe.

These models will do the same human-biased selection and gravitate to a substantially vapid mean.


Well, we do have a conceptual framework and vocabulary for massive, pervasive and implacable forces beyond our understanding - it's the framework and vocabulary of religion and the occult. It has actually been used to describe capitalism essentially since capitalism itself, and it's been used explicitly as a framework to analyze it at least since Deleuze. Arguably, since Marx: as far as I'm aware, he was the first to personalize capital as an actor in and of itself.


Different words with different meanings mean different things. A communist country could and would produce AI, and it would still be scary.


That's because most communist countries are closer to authoritarian dictatorship than Starfleet


That's because most communist countries are closer to authoritarian dictatorship than hippie commune.


tl;dr: Fear of the unknown. The problem is that more and more people don't know anything about anything, and so are prone to rejecting and retaliating against what they don't understand, without making any effort to understand it before forming an emotionally based opinion.


This is a pretty old idea, which dates back to the study of capitalism itself. Here are some articles on it: https://harvardichthus.org/2013/10/what-gods-we-worship-capi... and https://ianwrightsite.wordpress.com/2020/09/03/marx-on-capit...


Nick Land type beat


You mean freedom is an unaligned AI?


How does capitalism work if there aren’t any workers to buy the products made by the capitalists? Not being argumentative here, I really want to know.


The way it works today in any country where workers can't afford to buy the products; so I imagine it would look like those countries that function most like the stereotypical African developing country.

That is, I imagine the result would be that industry devolves into the manufacturing of luxury products, in the style of the top-class goods of ancient Rome.


The machines can buy the products. We already have HFT, which obviously has little to do with actual products people are buying or selling. Just number go up/down.


If a machine buys a product from me and does not pay, whom should I sue?

That is, the person who actually made the purchase.


Transfer payments, rent, dividends would provide income. People would then use it to buy things just like they do now.


All that matters are quarterly profits.


Honestly don’t know if these kinds of people have thought that far ahead


Yes, this is definitely the signal that capitalism will determine the value of AI.

The same way Google search is now a steaming garbage pile.

