Even being uncharitable, a big off-by-default checkbox saying “make this discoverable in web searches” is roughly as explicit as you can possibly make this feature textually, assuming your users will be applying any reading comprehension.
If they’re not, no further warnings were going to save them; short of removing the feature, or gating it behind increasingly elaborate “if only you knew better!” emails or pop-up modals they presumably also wouldn’t read, this was the likely outcome.
At some point, I don’t feel bad saying this is a user-side PEBKAC, and that more alerting would be a waste of time.
I assume that he means "rather than pushing up each individual container for a project, it could take something like a compose file over a list of underlying containers, and push them all up to the endpoint."
That's an interesting idea. I don't think you can create a subcommand/plugin for compose but creating a 'docker composepussh' command that parses the compose file and runs 'docker pussh' should be possible.
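As a rough sketch of what that wrapper could look like: pull the `image:` entries out of a compose file and shell out to `docker pussh` for each one. Everything here is an assumption for illustration, including the `composepussh` name; the naive line-based parser below only handles simple `image: name` entries (no YAML anchors, extensions, or build-only services), which a real implementation would want to replace with a proper YAML load.

```python
# Hypothetical "docker composepussh" wrapper: extract image names from a
# compose file and run `docker pussh <image> <remote>` for each.
# Assumes the pussh plugin from the post is installed.
import re
import subprocess
import sys

def extract_images(compose_text: str) -> list[str]:
    """Return the values of `image:` keys found in the compose text, in order."""
    images = []
    for line in compose_text.splitlines():
        # Naive match: leading whitespace, `image:`, then the image reference.
        m = re.match(r"\s*image:\s*[\"']?([^\"'\s#]+)", line)
        if m:
            images.append(m.group(1))
    return images

def composepussh(compose_path: str, remote: str) -> None:
    with open(compose_path) as f:
        images = extract_images(f.read())
    for image in images:
        # e.g. runs: docker pussh myapp:latest user@host
        subprocess.run(["docker", "pussh", image, remote], check=True)

if __name__ == "__main__" and len(sys.argv) == 3:
    composepussh(sys.argv[1], sys.argv[2])
```

A real plugin would also need to decide what to do with services that have a `build:` but no `image:` key (probably build and tag first, then push).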
My plan is to integrate Unregistry in Uncloud as the next step to make the build/deploy flow super simple and smooth. Check out Uncloud (link in the original post), it uses Compose as well.
They have a lot of ways they could’ve built trust without a full negative burden: which of them, if any, are they doing?
Open-sourcing their watch-word and recording features specifically, so people can verify for themselves that they do what they say and aren’t doing anything sketchy?
Hardware lights such that any record functionality past the watch words is visible and verifiable by the end user, with recording impossible when the light is off?
Local streaming and auditable downloads of the last N hours of input as heard by Amazon after watch words, so you can check for misrecordings and compare “intended usage” times against observed times, confirming that you and Amazon see the same thing?
If you really wanna go all out: putting explicit no-train protections for passing utterances without intent into their TOS, or adding an SLA to their subscription that refunds subscription and legal costs and provides an explicit legal cause of action if they were recording when they said they weren’t?
If you explicitly want to promote trust, there are actually a ton of ways to do it, and one of them isn’t “remove even more of your existing privacy guardrails”.
On the first two, if you already think they're blatantly lying about functionality, why would you think the software in the device is the same as the source you got, or that it can't record with the light off?
Aspersions aside, the content is actually typically pretty involved and largely speaks for itself; it’s not the low-effort content one would typically associate with AI.
Laid bare, it’s generally a variety comedy show of a human host and AI riffing off each other, the AI and the chat arguing and counter-roasting each other with human mediation to either double down or steer discussions or roasts in more interesting directions, a platform for guest interviews and collaborations with other streamers, and a showcase of AI bots which were coded up by the stream’s creator to play a surprising variety of games. There’s a lot to like, and you don’t need to be on “that bit of the bell curve” to enjoy a skilled entertainer putting new tools to enjoyable use.
I think this optimism is extremely misplaced, as I think things are likely to get substantially less evenly distributed. It seems more likely we'll have a future where indies more successfully drown each other out in a sea of noise, while major studios continue to enjoy massive moats of content and marketing.
On the content front, studios have benefited massively from selling sure bets in a sea of noise: of the top 10 box office films of 2024, there isn’t a single original IP; every single film is an adaptation of or sequel to prior work made long ago. [1] I view this as part of a broader "flight towards quality" pattern in the internet age: even with tons of great content online (orders of magnitude more), the viewing public still ultimately values both studio curation and IP familiarity. This goes beyond the films themselves: the studio IP moat includes access to the famous big-name actors that fill seats, the rights to use their likenesses, and access to their personal platforms to push the film, all of which the studios control. Even if indies can generate "a person" or "a movie", the inability to legally generate "specific people that the public knows and likes, without their permission" or "specific movies set in universes they know and care about" is a moat that isn’t leaving the studios’ hands in a world of widespread AI.
Separate from IP rights, this also assumes a hyper-specific model of how AI specialization and use will look if it’s widely available. Even assuming AI that can generate anything you ask for, it’s likely that people will continue valuing significant sound, music, and visual post-processing to augment end-state AI generations, better matching and personalizing a final vision differentiated from the models themselves. That means labor costs, which indies at scale would continue to lack access to. It also assumes a world where AI reduces specialization, which isn’t guaranteed: even with superhuman production capabilities, such that no individual human touches some aspect of the film, someone still has to point the superhuman AI in productive directions that land somewhere good on the quality spectrum, and human skill at that task will likely vary significantly across domains. One can think of the superhuman AI as a tool consumed by a skilled collaborator, even if the AI is doing most of the work. If that’s the case, studios can and will keep using much bigger labor budgets to differentiate themselves on quality against indies trying to DIY the process.
And lastly, marketing is still king. About half of a modern big-budget picture’s budget gets spent on marketing; only the other half goes towards making the actual film. Even if state-of-the-art Hollywood-grade AI means anyone can produce a shot-for-shot reproduction of any current Hollywood film, people still need to find the content in order to watch it, and even in a world of widely available AI, it’s the indie’s marketing budget of ~$0 versus the major studio’s $50,000,000-$200,000,000. I would happily keep betting against indies winning that fight, especially when the low end of this market, already oversaturated and hard to meaningfully stand out in, is drowned in an order of magnitude more noise from low-end AI creators flooding the space with slop.
One of the best Sonic games ever made, Sonic Mania, wasn’t made in-house by Sonic Team, but by a handful of fans making fanwork.
Sega’s choice to take that fanwork seriously and make its creators second-party developers rather than shutting them down was not only an immeasurably good thing for the project and its fans, but a measurably, handsomely profitable move for Sega.
The number of ports of Doom that were made by fans and only licensed after the work was done so it could be sold is surprisingly high. Randy Linden on the SNES is an interesting story (and the very chip he used was also a similar story at its inception).
The author’s right about storytelling from day one, but then immediately throws cold water on the idea by saying it would have been a bad fit for this project.
This feels in error, as the big value of seeking feedback and results early and often on a project is that it forces you to confront whether you’re going to want, or be able, to tell stories in the space at all. It also gives you a chance to re-kindle waning interest, get feedback on your project from others, and avoid ratholing into something for five years without ever engaging with an audience.
If a project can’t emotionally bear day one scrutiny, it’s unlikely to fare better five years later when you’ve got a lot of emotions about incompleteness and the feeling your work isn’t relevant anymore tied up in the project.
Thinking, Fast and Slow is the result of some 20 years of regularly publishing and talking about those ideas with others.
Most really memorable works fit that same mold if you look carefully. An author spends years, even decades, doing small scale things before one day they put it all together into a big thing.
Comedy specials are the same. Develop material in small scale live with an audience, then create the big thing out of individual pieces that survive the process.
Hamming also talks about this, as open-door vs. closed-door researchers, in his famous You and Your Research essay.
It shouldn’t be surprising that a country with a war economy has a higher first derivative of materiel production; the questions of import are:
1. What is the difference in absolute quantities, compared against all relevant players,
2. How long would it take to bridge the gap at current production rates, and
3. Can that rate of production be sustained long enough for it to alter any fundamentals?
The point to rebut isn’t that Russia is making more; it’s whether they can keep doing so before Ukrainian advances, regime faltering or economic collapse, a US/China step-in, or internal unrest dramatically weakens the current Russian negotiating position or makes it untenable.
Love the "first derivative" view! One can take a snapshot of a good day, but if russia really was producing more weapons than USA + NATO for a prolonged time, having also more people, Ukraine would fall a long time ago.
It didn't. As we say in Poland "paper will accept everything". And russia is known for shameless propaganda.
So far, Russia is still making gains on the battlefield though, not Ukraine. At some point, that momentum would have to reverse.
Also, I don't think it's an "until" about China stepping in, they seem to be squarely on Russia's side, just presenting themselves slightly more moderate in public to appear suitable as a mediator. (Maybe sort of like the US does with Israel)
Finally, there is BRICS and some massive shifts of attitude in Africa that seem to work in Russia's favor.
> So far, Russia is still making gains on the battlefield though, not Ukraine.
This is again the first derivative. Russia annexed Crimea, sent unofficial troops to the Donbas, and in 2022 moved rapidly and captured a lot of territory... but it was later pushed back severely, and after that it gained terrain at a truly snail’s pace.
BTW, India and China are in an ongoing border conflict, and the tensions don’t end at the border, with India banning many Chinese apps, for example. They’re nowhere near as united as the EU or NATO; it’s more like the Visegrád Group.
Do Americans have the technical expertise for a higher curvature? The American primary/secondary schooling system sucks, and most of the top STEM students at university are not interested in working for the military.
Both sides are aching (very badly) for this thing to be over, or at least taken off the stove.
One can quibble further as to the details -- which are a matter of metrics, wildcards, politics. And (as recent events have shown) there are still many cards to be played.
But that's the fundamental equation we need to keep in mind.
Yes, by pushing to reduce the draft age, which the Ukrainians don’t want to do, because someone needs to raise families and Ukraine has a far smaller population, while Putin is already bringing in North Koreans and can also escalate with a general mobilisation at some point if cornered. So I don’t see the outcome as clear.
In an era of massive-scale companies and giant projects, it’s fascinating how often a project’s or company’s big successes ultimately hinge on the actions of a few key individuals in the right place at the right time - and it never fails to surprise me how little overlap those individuals share with the actual company org chart.
Org charts are structured around running the company day to day. The surprising one-off big boosts from an individual are by definition not reflected in an org chart.
> If Sega had held onto that Nvidia stock from then... ChatGPT tells me: "So, if you invested $5 million in Nvidia stock at its IPO in 1999, it would be worth approximately $3 billion today, assuming a current share price of $450 and accounting for stock splits."
Why not just do the math yourself, or not include it at all? I don’t find GPT math about realtime prices and such to be even slightly reliable, given market fluctuations, training-data cutoffs, and of course hallucinations.
I just put an asterisk on the number and moved on. Even without the exact figure, I was reminded that $5 million in 1999 value, for a company whose crown jewel was what, the RIVA TNT2, translates into a serious chunk of the company’s worth now.
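For the asterisk-on-the-number crowd, the sanity check doesn't need live prices at all; using only the figures quoted above ($5M in 1999, ~$3B claimed today) and assuming a roughly 25-year span (1999 to 2024), the implied growth rate falls out directly:

```python
# Back-of-envelope check on the quoted ChatGPT figure, using only the
# numbers from the thread; the 25-year span is an assumption (1999-2024).
initial = 5_000_000
claimed = 3_000_000_000
multiple = claimed / initial          # 600x overall
years = 2024 - 1999                   # ~25 years
cagr = multiple ** (1 / years) - 1    # implied compound annual growth rate
print(f"{multiple:.0f}x over {years} years -> {cagr:.1%}/yr")  # 600x over 25 years -> 29.2%/yr
```

A ~29%/yr compounded rate is aggressive but not absurd for Nvidia's run, which is about as much as an offline check can tell you; the exact dollar figure still deserves its asterisk.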
Privilege escalation and DevOps rot. Long-lived certs often get compromised when privilege escalations happen and someone gains access to an account or machine that has private keys on it.
One example scenario for privilege escalation: let's say a hacker gets access to one of your employee's or vendor's machines and associated accounts using a zero-day, or phishing, or some other method that goes undetected for some time. The attacker, as part of this attack, successfully gets access to your cert's private keys through some way or another without drawing attention to themselves.
Some time later, your firm makes several security updates. In doing so, you unknowingly patch the attacker out of your network. The attacker is now in a race against time if they want to do something with the cert before it expires, and in this kind of situation, the sooner that cert expires, the better, because the attacker gets less time to use it. In a perfect world, the cert would have expired exactly when they got patched out, but because we're not guaranteed to even know there's an attacker, things seem to be moving towards "keep the expiration time as short as reasonably possible without impacting service reliability", to limit the blast radius during access leaks.
As for DevOps rot, speed has a tendency to change requirements in favor of automation. Certificate rotations tend to be a pain point: they break management panes, take down websites, throw browser errors, don't get updated in pipelines, and cause other woes when they expire, all of which demands that people keep track of a ton of localized knowledge and deadlines that are easy to lose or forget. Paradoxically, though, the longer the time between rotations, the more painful they tend to be, because once rotations are sufficiently frequent, it becomes unmanageable to do them manually: demanding speed forces people to build anti-fragile rotation systems. Making the required lifetime shorter is in some sense an attempt to encode "you need to automate this" into managerial culture, as a bulwark against cert swaps being anything other than automated or one-click rotations.
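The automated-rotation logic the short lifetimes are pushing everyone towards can be sketched in a few lines: renew once a fixed fraction of the cert's lifetime has elapsed, so a missed run or two still leaves slack before expiry. The two-thirds threshold below is an illustrative assumption (it resembles what common ACME clients default to), not a mandated standard:

```python
# Minimal sketch of a renew-early rotation rule: rotate once `threshold`
# of the cert's validity window has elapsed. The 2/3 default is an
# assumption for illustration, not a spec requirement.
from datetime import datetime, timedelta

def should_rotate(not_before: datetime, not_after: datetime,
                  now: datetime, threshold: float = 2 / 3) -> bool:
    lifetime = not_after - not_before
    elapsed = now - not_before
    return elapsed >= lifetime * threshold

issued = datetime(2024, 1, 1)
expires = issued + timedelta(days=90)          # e.g. a 90-day cert
print(should_rotate(issued, expires, issued + timedelta(days=30)))  # False
print(should_rotate(issued, expires, issued + timedelta(days=61)))  # True
```

Run on a schedule, a check like this makes rotation a non-event: with a 90-day cert it triggers around day 60, leaving a month of slack for retries before anything user-visible breaks.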