Yup, my boss at my old company had some similarities to Greg. He worked long hours, was aware of pretty much everything happening at the company both in the US and off-shore, could talk technical details with the tech peeps and business stuff with the non-tech folks. He was always on top of things and unblocking people all over the org. He also had a great ability to remember things, even after a few months had passed.
Even though he was CTO and incredibly busy, he would find time to spend with individual engineers. Once he spent an hour pair-programming with me on a difficult issue. Even though he obviously didn't spend his time coding (and hadn't for years), it was a very productive session.
The founder acknowledged he really couldn't have done it without him, or someone like him, on the team. I 100% agree they couldn't have built the org without him. He is just on a different level, and it was awesome seeing him in action.
That's impressive willpower.
P.S. "and hadn't for years" => He could not have helped you solve that issue if that were the case. No way he could. He obviously codes every single day (just not for long hours).
I don't think so. I don't code anymore because I am responsible for around 100 FTEs, but I can still pair program effectively. Coding is just the tip of the engineering iceberg.
That isn't true; sometimes what's needed is just talking through the problem and checking assumptions, or having another brain to work through it with. I guarantee you the issue wasn't one of the OP not knowing the language or framework well enough.
He said: "Once he spent an hour pair-programming with me on a difficult issue."
I quoted the definition of "pair programming" for you (and the two others): "One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed in. The two programmers switch roles frequently."
Near-100% certainty that Altman and Brockman cofound a new AI company in the coming days. The question is whether they'll be able to recruit a team that can actually build competitive models. Ilya Sutskevers don't grow on trees. Maybe they'll just get a team good enough to specialize Llama 2, since Altman/Brockman seem to think what's lacking in this space is glitzy products, app stores, b2b integrations, etc. Maybe OpenAI starts to open source everything and Altman/Brockman can have their cake and eat it, too.
You're massively overestimating the skills needed to build "competitive models". Researchers like Ilya were essential several years ago when OpenAI got started, because at that time nobody even knew what to build or how to go about it. It was all research, and OpenAI was solely focused on RL. But these days, LLM code is 95%+ data and infrastructure engineering and very little novel research. Even things like LLM speedups via new attention mechanisms are more engineering than they are research.
Above all else, what you need to build competitive models is a huge amount of money and access to compute. Money alone doesn't solve that either due to the global hardware shortage. You are competing for a globally limited pool of hardware with all other companies.
Google and Meta have more money and more compute than OpenAI. They have more data, better infrastructure, and more AI researchers. And they have been trying hard to catch up - so far unsuccessfully.
But they have the key problem of all established companies: they built the infrastructure for what is now the wrong thing.
This means doing the right thing at Google would require fighting battles on all sorts of fronts, as established mini-empires within the company attempt to latch on to this new perceived opportunity to expand.
Google and Meta have less compute infrastructure purpose-built for training and testing LLMs than OpenAI does. It's something that OpenAI has been investing in for several years, while Google/Meta/etc have just started playing catch up very recently.
> Researchers like Ilya were essential several years ago when OpenAI got started, because at that time nobody even knew what to build or how to go about it.
Nah, Sam will just use GPT-4 to code his vision. That's how it works now, we've been told.
It will be very ironic if their new startup gets dragged down by regulation because they lack the AI license that sama pushed so heavily for in Congress.
OAI still has people who are true leaders in this deeply technical space - I don't see research wins going to a competitor unless they cultivate similar talent at the top.
It's also possible that Altman was privy to a breakthrough which makes it possible for him to execute from scratch on the AGI goal without a strong research org.
> You know, years ago I wrote a little program to look at this, like how quickly our best founders — the founders that run billion-plus companies — answer my emails versus our bad founders. I don’t remember the exact data, but it was mind-blowingly different. It was a difference of minutes versus days on average response times
One thing to keep in mind is that this was the email response time to Sam Altman, the head of YC. What competent startup founder waits to reply to that?
Responsiveness as a general approach to all email is a bad idea. But one needs to know who the high-priority emailers are, and how much they value quick replies.
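For what it's worth, the "little program" Altman describes would be trivial to reproduce. A minimal sketch in Python, assuming the mail data has already been exported to a CSV; the file name, column names, and founder addresses are all hypothetical, not anything Altman described:

    # Minimal sketch: compare average email response times between two
    # founder groups. CSV layout (founder, sent_at, replied_at) is assumed.
    import csv
    from collections import defaultdict
    from datetime import datetime

    TOP_FOUNDERS = {"founder_a@example.com"}  # hypothetical billion-plus group

    response_minutes = defaultdict(list)
    with open("emails.csv", newline="") as f:
        for row in csv.DictReader(f):
            sent = datetime.fromisoformat(row["sent_at"])
            replied = datetime.fromisoformat(row["replied_at"])
            group = "top" if row["founder"] in TOP_FOUNDERS else "other"
            response_minutes[group].append((replied - sent).total_seconds() / 60)

    for group, samples in response_minutes.items():
        print(f"{group}: {sum(samples) / len(samples):.1f} min average response")

Crude as it is, that's enough to see a minutes-versus-days gap between groups, which is all the anecdote claims.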
If you answer every email from important people within minutes, then how can you meaningfully do other stuff?
It's perfectly fine to spend a day without answering the head of YC if your entire day has been full of hiring, talking to investors, etc. You can't do that effectively while being distracted by your phone all the time.
This is about the nature of email itself: if it conveyed more urgent information, then a phone call would've been better.
> You can't do that effectively while being distracted by your phone all the time.
I think the differentiator is that some people can and do remain effective at a primary task while handling a multitude of distractions, and that this trait strongly indicates high ability overall.
It also depends on what your task queue is. I'd expect an IC to have a few tasks to work on with complete focus for hours at a time. Senior management, however, having delegated properly, I'd expect to have a large set of tasks that each require only a limited amount of time to address.
Eg you'd expect a CEO to enter a meeting, receive a variety of reports from their staff, make a final call or two — and then move on to another meeting in an entirely different domain — and do so fluidly without interruption.
Which would make it a lot easier to be readily available for ad-hoc things like emails. Though five minutes feels a little much.
I assumed that the responses were something like "hey, not sure, I'll need to get back to you" (and then he researches/investigates after re-prioritizing other work and gets back to him).
Yes, that is the way to do it. Don’t leave the sender wondering. Immediately confirm that you read the email and are prioritizing a substantive response.
I would absolutely believe that founders who (go on to) run billion dollar companies make a point of replying to YC partners very quickly. "Successful executives are highly responsive to investor and advisor emails" seems eminently plausible. It doesn't suggest that they're equally responsive to all emails, but they've got a sense of who's important/needs to feel important.
I'm sure they do. But you can also interpret that as "bootlickers are the kind of people I like". And that is not necessarily equivalent to "good founders". So I think it is a bit of a thin basis to judge people by.
I think it is maybe best reframed as "good founders from the perspective of those who control capital".
Whether these people ultimately improve society, or create a better sense of purpose for their employees, or provide visionary direction for the company at a higher rate than other founders is kind of orthogonal (or perhaps anticorrelated) to being good stewards of invested capital.
Founders can, to some extent, get help with (or succeed financially despite the lack of) the other things, but I think it can be reasonably argued that if they're not personally regarded as good stewards of capital, then the whole enterprise is in doubt.
Precisely. Good founders, bad founders, in the eyes of the beholder.
I'm pretty sure I have a completely different opinion on what constitutes a good founder and what constitutes a bad one compared to Sam Altman; fortunately, I don't have enough clout to make authoritative statements on the subject.
Startup success is random enough that it'll be correlated with a great variety of interesting things. A good scientist makes a hypothesis based on a causal model first, then constructs a test for it. Data mining for correlated things is a recipe for apophenia.
Goodhart's law would tell you that any metric-driven understanding of the cause of success would cease to be useful as soon as it was announced.
It remains plausible that the connection is neither causal nor spurious, but after the announcement of the correlation, the correlation becomes useless.
A very stupid metric, or a very useful one if you want to know who just waits for emails instead of doing long stretches of deep work with notifications off or out of sight.
So my understanding from reading the drama over the past day is that Sam Altman was fired from OpenAI for being too inclined to "move fast and break things" by commercializing OpenAI technology, with Greg Brockman (cofounder/board member/close friend/ally) choosing to resign in solidarity. The board coup was organized by cofounder/Chief Scientist Ilya Sutskever, who apparently wants a return to OpenAI's original slow-moving, safety-first vision.
It's speculated Sam Altman and Greg Brockman may start a new AI company.
So now seems like a good time to mention a few very high-level points in case they read it:
1. I love Sam Altman's ship-early-and-often inclinations, even if that apparently got him fired. OpenAI was such a breath of fresh air compared to sclerotic companies like Google that can invent the Transformer architecture yet be organizationally incapable of shipping ChatGPT-level tools for years due to overly conservative safety concerns.
2. I hate OpenAI's (or Sam Altman's?) apparently puritanical aversion to anything considered Not Safe For Work, especially for paid API usage. Why not allow people to build and sell virtual-partner chat bots with explicit NSFW content?
3. I dislike his apparent inclination to build a regulatory moat to block others from developing advanced AI -- it's easy to interpret this as purely in the self-interest of OpenAI shareholders.
Without Sam Altman's inclination to move fast I imagine OpenAI may become slow, sclerotic and less capable of shipping early, like what Google has become.
> 2. I hate OpenAI's (or Sam Altman's?) apparently puritanical aversion to anything considered Not Safe For Work, especially for paid API usage. Why not allow people to build and sell virtual-partner chat bots with explicit NSFW content?
I don't think that's either of their faults; the US is just very puritanical, and a lot of it is because credit card companies and banks don't like it.
> it's easy to interpret this as purely in the self-interest of OpenAI shareholders
My guess is that this is probably what the board didn't like, Altman focused too much on profits in various ways.
An OpenAI which allowed NSFW content literally could not exist. It would be shut down in under a week. Maybe possible under a different regulatory regime (France?) but even then I doubt it... any model developed by a company and offered as a product will have some censorship which gets baked in.
Because current porn companies know the law, and toe the line if they want to stick around.
AI doesn’t. Image generators know what naked people look like, and they know what children look like. And they’re really good at mashing ideas together like that.
Text generators will happily compose any scenario for you.
I'm not here to argue ethics, but it's easy to see why an NSFW OpenAI would immediately run into major problems.
I think I understand the problem you're pointing at: for example, child porn via AI. My question is more epistemological, setting ethics aside: can you actually stop AI from producing those images when all the tools are available? I'd compare this (not in ethical terms but in feasibility) to the US government's push to control information about bomb-making and cryptography in the 90s. They couldn't, but they can prosecute people who take advantage of these technologies to commit a crime. I don't know how this applies to people viewing child porn produced by AI; here, I understand, the ethics question is about the person consuming it and not the technology.
But there are plenty of successful US-based sites that host both SFW and NSFW content: Reddit, Twitter, Tumblr (before Yahoo), DeviantArt, etc.
Even Patreon, it seems (which I've actually heard described as an "NSFW launderer"), is fundamentally built upon interactions with credit card companies and banks.
I don't know how true it is, but I've read that payment processors like Visa and Mastercard are actually agnostic -- it's the high rates of chargebacks that they have a problem with.
The profit focus being the problem seems so weird to me. Running GPT-4 at scale is an awfully expensive operation, and even now it's rumored they are running the services at a loss. I understand the philosophical side, sure, but you can't just disregard all those massive GPU farms and the staff tuning their models. AI is said to use as much energy as a whole new country, and OpenAI no doubt accounts for a large portion of that.
Certainly Microsoft's GPT-4 infrastructure is still eye-wateringly expensive:
> GitHub Copilot has reportedly been costing Microsoft up to $80 per user per month in some cases as the company struggles to make its AI assistant turn a profit.
> According to a Wall Street Journal report, the figures reportedly come from an unnamed individual familiar with the company, who noted that the Microsoft-owned platform was losing an average of $20 per user per month in the first few months of 2023.
I'm hacking on some GPT-for-long-form-text stuff right now, and it is _eye wateringly_ expensive once you start generating at anything close to "professional human" token outputs. $80 per month already sounds pretty optimized.
And the board was ridiculous thinking they could demote him and have him stick around. That was either weirdly short-sighted or strategic theater. I kind of think it might have been the former.
They knew he was going to leave. It's likely a combination of the following:
1. They couldn't fire him as an employee (or felt it was an overreach of their mandate)
2. They wanted to signal a clear distinction that they lost faith in him as Chairman, while not losing faith in his work as an employee.
3. They felt like it would play better with the company if his ultimate departure was his decision rather than theirs.
4. Mira, as the new acting CEO (and someone who had nothing to do with the actions), declined to fire him even though she knew it was ultimately futile.
I doubt they care about this. This move already signals they're not optimizing for financial outcomes, and the independent board members (3/4 involved in this decision) have no equity in OpenAI.
My take is, Sam and Greg are not the executives they want people to think they are. This was recognized, they got upset about it, and things shook out this way.
Over the past decade, I've never heard a single bad thing about Sam or Greg from anyone who has worked with them.
The board may know something nobody else does, but I think (given the current information) it's significantly more likely that they _are_ who they purport to be... it's just that the board wanted something different.
I've worked directly with Sam on a few projects, and was always impressed by how thoughtful and empathetic he was. I've worked with a number of smart, wealthy people, and Sam is the only one I'd say that about.
Ilya is the technical mastermind behind OAI. The technical breakthroughs needed for AGI are not there yet; Ilya, Yann, Demis, and many others are aware of this.
An aggressive push for applied research and commercialisation means fewer resources for technical breakthroughs.
The web-API-based licensing scheme is dumb and bad. It's "including a buggy-whip holder on a Model T" thinking. They should sell a license for use of the weights. They can also sell a SaaS that provides a web API over the weights. But the weights are the thing other businesses actually want, and it's controlling and obviously a monopoly play not to sell them. Other businesses have an obvious incentive to only work with companies that sell weights, so as to avoid being mere serfs on someone else's SaaS farm.
I read a lot. Saw many rumors. I'm aware of the various "insider scoops". I still maintain we really don't know what happened, more or less.
I'm certain of this though... when Greg Brockman walked out the door, they lost a major piece of talent.
That guy was a true believer. His enthusiasm was infectious. It traveled across the video link... You could feel how passionate he was about the future of artificial intelligence and its capacity to change humanity for the better.
I'm sure he'll throw himself at something very cool for the next run.
A lot of people ask me what the ideal bro-founder looks like. I now have an answer: Greg Bro-ckman.
OpenAI wouldn’t have happened without Greg. He commits quickly and fully to a fellow bro, and I fully trust that when the board of director dickheads fire me, Greg will dutifully bail within a day or two.
I read this and thought we collectively should remember to respect non-technical people at startups. Sometimes we forget. There are people behind the scenes who help drive things forward.
At one of the first startups I joined, I remember the engineers would routinely stay late and get shit done. The COO would make sure food was ordered (dietary needs were met), clean up, and make sure the gears stayed oiled. This significantly helped morale.
I don't know gdb very well, but I did get to chat with him a bit about what he was working on around the time of this article, which was mostly infrastructure grunt work, removing obstacles and procedural rough edges—basically anything to make the researchers and engineers as happy and productive as possible. It is so, so easy to undervalue that kind of work done by a totally brilliant and capable technologist. For an early stage startup it's gold. For a later stage startup it's gold.
It's gold, but it's not singular. There are many people who have been doing that work for decades and are able to step into the role. The same can't be said for the R&D work he's supporting, as comparatively few have deep insight into or experience working with the innovative tech yet.
So while Greg's work would have been extremely valuable, its value is on a lesser order of magnitude than that of many of the other researchers and engineers OpenAI had collected into its ranks. More essential innovative value will be lost to the bleed of loyalists and startup bettors who will peel off from those ranks.
I'm suggesting that there are not many people who have been doing that work, at least not at the same level or to the same effect. He did it with Stripe and OpenAI, back to back.
I think there is some kind of elitism around AI researchers. Yes, they are very valuable, but someone helping everyone else be more productive is absolutely critical.
Having a car might be critical and acquiring a car might be expensive, but there are a lot of them and they are ultimately replaceable. If yours is lost and you still have cash, you can generally go find a new one the same day and borrow a ride from someone if you really need to.
That's not necessarily true for (say) the rare high-end graphics board you use for running local inference. It's also expensive -- even if less expensive than the car -- but replacing it can be a bigger deal and cause a complete interruption.
There are countless experienced late-career generalists who can keep projects moving by contributing critical, smart support. I'm one of them. We're extremely valuable indeed.
But there really are far fewer people who were ahead of the curve and years-deep into the AI research central to OpenAI's entire existence. Those people are beyond critical, they're essential.
That doesn't make them better people, or smarter people, or in any other way elite. It just means that in the context of OpenAI those people are much harder to come by and can be much more disruptive when lost.
Apologies, I'm afraid I wasn't very awake reading your comment. You're right, my issue is more about the dismissive attitude that I often see: for some reason I mistook you as someone having that attitude as well.
I think it's fair to say that OpenAI retains its AI talent, so the loss of the founding team may not result in lost research progress.
The founding team does, however, bring tactical experience that is (maybe) unmatched. They also have experience solving the problems that turn cutting-edge AI models into usable products. It's easy to devalue non-research contributions, but they have legitimate value and were instrumental in OpenAI's success.
I feel like this is a point that is not being talked about enough. Yes, OpenAI gave us GPT and DALL-E. But had sama and gdb remained there, would we have gotten anything new that is as groundbreaking as the original GPT and DALL-E, or would we have continued getting GPT-12 and DALL-E 19? Sure, the iPhone 15 sells, but some may say Apple has stagnated since the iPhone was released.
OpenAI was releasing innovations in the GenAI space at a breakneck pace. Remember, GPT-1 didn't change the world; it was GPT-3.5/4 from earlier this year. OpenAI was at peak innovation when sama and gdb left.
And folks used to say Apple was stagnant, but after Apple Silicon completely upended the personal computer world (along with some other things) the dissidents have been mostly silent.
> Apple Silicon completely upended the personal computer world
How did it upend the personal computer world? Apple's chip developments are an amazing technological achievement, but they don't have anything innovative to put them in. Apple slaps them in grossly thermally limited form factors, where the chips can't operate anywhere close to their capability. It's kind of a silly exercise, in my opinion. At the end of the day, Apple has made the same computers, phones, and tablets for the past 10 years. I'm not sure where the innovation is.
Saying the chips haven't enabled any sort of innovative products is like driving down the interstate wearing goggles, err, blinders. Let's check back in four months.
You have! The Apple Vision Pro is slated to be released by March/April 2024. The device has only been made possible by the steady march of Apple Silicon.
Apple Silicon made x86 silicon look bad, but what has it really upended? Macs are taking over more of the personal computer market, but it's hard to say what the driving factor is there. I think it's mostly network effects, partially due to their shameless proprietary approach. PCs, Apple or otherwise, are generally good no matter the price or configuration, and they're disappearing at the same time; a lot can be done with just a browser on any foundation. Apple seems to be years behind, or nonexistent, where things are really changing: AI and cloud.
From the various sources it appears that they’re being fired because they were trying to push the envelope TOO HARD, not the other way around.
And, outside of the Cynicism-Is-Intelligence hackernews crowd, basically everyone has been fawning over the breakneck speed of progress coming out of OpenAI, even at the recent OpenAI devdays.
But now, are we even going to get GPT-4 or GPT-5 with the same level of polish that sama would have put into it?
I'd argue right now that we're at the "iPhone 3G" point on the technology curve, with significant improvements to come over the next few years as the tech gets polished.
I don't even know how people like this get valued so much. Why do people treat Silicon Valley "entrepreneurs" and investors as if they're made out of some sort of intellectual adamantium? Aren't they, generally speaking, just people looking to make a name and buck for themselves, primarily driven by ego rather than intellectual or philosophical pursuits? Most of them got lucky with some relatively dumb or straightforward product in the middle of a bubble and are not responsible for some major leap forward in technology.
So what did Sam actually do besides parlay a social media startup that was more or less a flop into being gifted a high position at Y Combinator, where he then had a few good investments during literally the easiest time in tech history to invest? I'm sure there's something I'm missing, but there's not much public info.
This is a common retort, but after his run at YC (hand-picked by Paul Graham) and OpenAI (taking on Google at AI is no mean feat, despite the backing), and his ongoing work with Helion Energy and WorldCoin, it is safe to say Sam has more than earned his place, perhaps twice over, among SV royalty. And he's not even 40.
"Hand-picked by so-and-so" used to have another name: "one of the good ole boys." Before that, in England, one was "sound." It's not a qualification, it's an anointing.
So we have "handing out money," OpenAI, a typical fusion outfit (breakeven next year, every year), and a cryptocurrency that has already been chased out of the one country that tried to adopt it.
I like Sam Altman and he seems to be a genuine person with laudable goals, but OpenAI is the only place where he really seemed to deliver, and even then there are a lot of people unhappy with the non-profit/private subsidiary surprise structure.
But the TL;DW version is that the environment required for fusing He-3 with deuterium also leads to deuterium fusing with itself, a reaction that creates neutron radiation that irradiates its surroundings.
Long story short: Helion plans a net-electricity demo in 2024 and to start selling to the grid in 2028. The timeline seems too good to be true, but no one says it's impossible. Many say they don't have enough publications and that there are many scientific unknowns. Failure is a possibility; so is success. Given the timeline, we'll know soon.
I haven't worked for Sam, and expect most people commenting on him haven't either, so they only have his interviews and his public commentary to judge him by. From that commentary he seems extremely ... vanilla? But that is probably good for an exec.
I haven't read any of his blog posts and thought "wow, how insightful" - rather, they read like the press releases I see constantly on LinkedIn. "You have to put something out there" type of stuff. Just doing it to do it, not to share insight.
That's my take, anyway, from basically all I've seen of him, and this gives a "not special" vibe, but my gut tells me that's very, very intentional ...
Maybe some want to celebrate BSers like Adam Neumann or Elizabeth Holmes, who are good at pretending to be important and conning investors, but it never really impressed me, sorry.
I’ll stick to celebrating the actual brains, like Ilya Sutskever.