I know that's true, but I find that the images on my Pixel are starting to have a bit of an eerie feel, with some of the details looking more and more like AI-generated images. I'd give back a bit of the quality for more "natural" looking images.
You're right, but commercial leases for offices are usually multi-year, and larger companies usually sign longer leases (20, 30 years or more). They can be costly, though not impossible, to wind down.
So for those large companies, the sunk cost is larger.
I'll let parent elaborate more on the intent, but the way I interpreted it was: saying that a startup will fail (i.e., being a naysayer) and being right about it is the most likely outcome, given the current "success" distribution (most businesses/startups fail).
Also, the most memorable ones are the cases where people were dismissive but ultimately wrong about the viability of the business (like the "Dropbox" comment).
I think there's a deeper implication: the naysayers _about the subject of the hype_ are usually right, rather than simply about anyone trying to exploit the hype. The metaverse was going to be the next big thing. Naysayers (correctly) laughed. Nobody talks about the metaverse now.
To be fair, I doubt Maestro will take off like Airflow did.
Airflow filled a void for an easier Big Data orchestrator with a prettier UI than the competitors of the time (Oozie, Luigi), implementing some UX patterns which had been tested at scale at Facebook with Dataswarm.
Seems like you have some experience with the orchestrator offerings. Airflow still the way to go, or would you recommend something else for someone just starting down the path of selecting and implementing a data orchestrator?
I haven't used Airflow for years, but it used to be quite clunky; I'm not sure how much it's improved since. I'd look into Prefect and/or Dagster first, both of which are more modern alternatives built with Airflow's shortcomings in mind.
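To give a feel for the difference, here's a minimal Prefect sketch (assuming a recent Prefect install; the task and flow names here are purely illustrative, not from any real pipeline):

    # Minimal Prefect example; "extract", "transform" and "daily_pipeline"
    # are made-up names for illustration.
    from prefect import flow, task

    @task(retries=2)
    def extract() -> list[int]:
        # placeholder for a real extraction step
        return [1, 2, 3]

    @task
    def transform(rows: list[int]) -> list[int]:
        return [r * 10 for r in rows]

    @flow
    def daily_pipeline():
        rows = extract()
        print(transform(rows))

    if __name__ == "__main__":
        daily_pipeline()

No scheduler boilerplate or operator classes: flows run as plain Python functions, which is a big part of the "built with Airflow's shortcomings in mind" pitch.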
Sounds like a really good move by Databricks, in particular because a lot of the main platforms had catalog implementations for the Iceberg spec, and several vendors, Snowflake included, were starting to support Iceberg as an external table format.
I have similar questions about the future of Delta Lake, but not really about the future of Iceberg, that's what the Apache Foundation is for after all. There are enough large enterprise players relying on this (Apple, Netflix, ...) to keep the project going for a while.
> It really seems like criticizing Sam is the new hot thing to do, with tons of people jumping on the bandwagon. Whether it's hiring a voice actor who sounds like ScarJo, having non-disparagement clauses in separation agreements (something basically all big companies and institutions tend to do), being associated with a crypto project (Worldcoin), "lying" to OpenAI board members, etc. No one is perfect, and when you are put under a microscope, just about anyone can look bad in the wrong light.
True, but it's hard to start something as big as OpenAI and not warrant a little scrutiny. At least, I think there is plenty of public interest here, in particular because of the chosen mission statement for the company.
> Ultimately, I ask myself, is my life better because Sam was born and did what he did? And the answer is 1,000 times "yes!" because the introduction of ChatGPT changed so much and enabled so much creation and learning for me personally.
Which is a very reasonable position, but does the fact that your life is better negate concerns that applications of ChatGPT may actually make other people's lives worse? And that the lack of transparency around conflicts of interest raises reasonable concerns about both judgement and the ability of the organization to deliver on its mission?
That's just it-- I really don't think ChatGPT and Sam have harmed anyone besides possibly a very few people who disagreed with Sam and tried to resist him and got outmaneuvered by him. But I think many tens of millions of people have greatly benefitted from him. And to ignore that in the calculus of "is Sam worthy of reproach?" seems silly.
And I also don't feel like I am somehow owed a huge amount of transparency around the exact details of how Sam may or may not benefit financially from his association with OpenAI, or the legal agreements they had with departing staff. Even if he does benefit, is that really so horrible? They have a for-profit division now so they are paying taxes. And the fortunes made from OpenAI stock will be taxed for sure. And the people who left are rich and got to work on a world changing product.
Where is all the harm? It's really hard to point at any real harm from my standpoint. But the benefits and gains are palpable, and they are obvious to anyone without an agenda to push or axe to grind.
Altman aims to be trusted to say what regulations should and should not be made. It should not surprise you that people consider evidence of dishonesty and suspicious coincidences relevant to trust.
People have lost jobs and likely careers to AI models trained on their works. You could assert that in the long run all individuals will be better off. You could assert that the benefits to others made the harms virtuous. You could assert they deserved it. I don't know how you could deny they were harmed. You could assert it was inevitable, but this would negate credit if it would negate blame. This is a distraction from the question of trust, however.
Unfortunately, I think this was totally inevitable, particularly now that there are powerful open models, like Llama 3, that can be fine-tuned. At this point, you can't stop it any more than you can stop piracy of books and movies. And I'm not even so sure that "access to their copyrighted works" was the primary reason for anyone being disrupted by AI.
I'm a huge fan of building minimal self-contained tools, so all of the C programs statically link the required parser libs (libavcodec/wuffs/freetype) and the resulting binaries don't require additional dependencies on the target machine. The Python wrapping code is rather straightforward as well, only about 300 lines of code.
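For illustration, here's a sketch of what that wrapping pattern can look like (the "decode_image" binary name and its CLI are assumptions for this example, not the actual tool's interface):

    # Hypothetical wrapper around a statically linked C binary;
    # "decode_image" and its arguments are made up for illustration.
    import subprocess
    from pathlib import Path

    BIN = Path(__file__).parent / "bin" / "decode_image"

    def decode(src: str, dst: str) -> None:
        """Run the self-contained binary; nothing extra to install on the host."""
        result = subprocess.run([str(BIN), src, dst], capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"decode_image failed: {result.stderr.strip()}")

Since the binary carries its parsers with it, the Python side stays thin: spawn the process, check the exit code, done.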
>> It’s an organization created by a national government.
> Why? What about this requires the power of "government?"
Budget, mostly. I don't think the power of government is strictly required. There are some private organizations which try to take care of the commons (Hiya, Mozilla!), but it's still by and large hard to fund. Why not use public funding for this?
> Contributor agreements are about to get way more parsimonious and annoying.
Why? I don't think the project necessarily needs to be owned by the organization, right? In which case, nothing about the contribution model changes.
> Nation states use software and knowledge of zero days to commit espionage against each other. He can't be serious with this.
That's true, but it's not as if there were no tension there. Significant backdoors could have impacts on the economy of some nations, which are therefore incentivized to keep things running smoothly. You can play offense and defense at the same time.