Hacker News | PenguinCoder's comments

I know it goes against the grain here, but so what. It's the user's prerogative to do with their device what they wish. Nag for security updates, sure. But automatic updates of anything are user hostile and should be abolished. Especially when those automatic updates remove features or introduce a shit ton of new bugs.

Problem is the history of people failing to patch causing widespread Internet outages, such as via SQL Slammer; a SQL Server patch had been available for six months to protect against the vulnerability. Microsoft learned the lesson that users, even the "professional" ones who should know better, fail to patch, which brings us to the current automated patch situation.

> It's the user's prerogative to do with their device what they wish.

The problem is, users are still part of the Internet. And historically, users haven't acted on update nags; that's how we ended up with giant ass botnets.


It isn't about the commonality of the bug, but the level of access it gets you on the type or massive scale of the target. This bug on your blog? Who cares. This bug on Discord or AWS? Much more attractive and lucrative.

Yes, but this is not a particularly high access level bug.

Depending on the target, it's possible that the most damage you could do with this bug is a phishing attack where the user is presented a fake sign-in form (on a sketchy URL).

I think $4k is a fair amount. I've done HackerOne bounties too, and we got less than that years ago for a Twitter reflected XSS.


Why would that be the maximum damage? This XSS is particularly dangerous because you are running your script on the same domain where the user is logged in, so you can pretty much do anything you want under their session.

In addition this is widespread. It's golden for any attacker.


Because modern cookie directives and browser configs neuter a lot of the worst XSS outcomes/easiest exploit paths. I would expect all the big sites to be setting them, though I guess you never know.
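To be concrete about the directives I mean, here's a minimal sketch, assuming a Flask-style backend (the route and values are illustrative, not any particular site's code):

    # Hypothetical sketch: hardening a session cookie so injected script
    # can't read it and it isn't attached to most cross-site requests.
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/login", methods=["POST"])
    def login():
        resp = make_response("ok")
        resp.set_cookie(
            "session",
            "opaque-token-value",   # placeholder value
            httponly=True,          # JS, including XSS payloads, can't read it
            secure=True,            # only sent over HTTPS
            samesite="Lax",         # dropped from most cross-site requests
        )
        return resp

HttpOnly doesn't stop script already running on the page from making requests with the session, but it does block the classic "exfiltrate document.cookie" path.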

I would not be that confident, as you can see: in their first example, they show Discord, and the XSS code is directly executed on Discord.com under the logged-in account (some people actually use the web version of Discord to chat, or sign in on the website for whatever reason).

If you have a high-value target, it is a great opportunity to use such exploits, even for single shots (it would likely not be detected anyway since it's a drop in the ocean of requests).

Spreading it on the whole internet is not a good strategy, but for 4000 USD, being able to target a few users is great value.

Besides XSS, phishing has its own opportunity.

Example: Coinbase is affected too, though on the docs subdomain, and there is 2-step verification, so you cannot do transactions directly. But if you just replace the content with a "Sign in to Coinbase / Follow this documentation procedure / Download update", this can get very, very profitable.

Someone would pay 4000 USD to receive 500'000 USD back in stolen bitcoins.

Still, purely by executing things under the user's session there are interesting things to do.


> some people actually use the web version of Discord to chat, or sign in on the website for whatever reason

Beside this security blunder on Discord's part, I can see only upsides to using a browser version rather than an Electron desktop app. Especially given how prone Discord are to data mining their users, it seems foolish to let them out of the web sandbox and into your system.


Again, here you have not so much sold a vulnerability as you have planned a heist. I agree, preemptively: you can get a lot of money from a well-executed heist!

Do you want to execute actions as a logged-in user on high-value website XXX?

If yes -> very useful


Nobody is disputing that a wide variety of vulnerabilities are "useful", only that there's no market for most of them. I'd still urgently fix an XSS.

There is a market outside Zerodium; it's Telegram. Finding a buyer takes time and trust, but it definitely has a higher value than 4k USD because of its real-world impact, no matter if it is technically lower on the CVSS scores.

Really? Tell me a story about someone selling an XSS vulnerability on Telegram.

("The CVSS chart"?)

Moments later

Why do people keep bringing up "Zerodium" as if it's a thing?


I understand your perspective about the technical value of an exploit, but I disagree with the concept that technical value = market value.

There are unorganized buyers who may be interested if they see potential to weaponize it.

In reality, if you want to maximize revenue, yes, you need to organize your own heist (if that's what you meant)


Do you know this or do you just think it should be true?

> I understand your perspective about the technical value of an exploit

Going out on the world’s sturdiest limb and saying u/tptacek knows the technical and trading sides of exploits. (Read his bio.)


AIUI this feature is SSS, not XSS, so XSS protections don't apply.

How would you make money from this? Most likely via phishing. Not exactly a zero-click RCE.

What happens in all these discussions is that we stealthily transition from "selling a vulnerability" to "planning a heist", and you can tell yourself any kind of story about planning a heist.

Pretty sure dd is disk destroyer


I'm sure you have, but try bringing that up to Epic, not introducing AI slop and data gathering into HIPAA workflows.


Premature optimization. Not every single service needs or requires five nines.


What does that mean, though?

If I'm storing data on a NAS, and I keep backups on a tape, a simple hardware failure that causes zero downtime on S3 might take what, hours to recover? Days?

If my database server dies and I need to boot a new one, how long will that take? If I'm on RDS, maybe five minutes. If it's bare metal and I need to install software and load my data into it, perhaps an hour or more.

Being able to recover from failure isn't a premature optimization. "The site is down and customers are angry" is an inevitability. If you can't handle failure modes in a timely manner, you aren't handling failure modes. That's not an optimization, that's table stakes.

It's not about five nines, it's about four nines or even three nines.
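(Rough arithmetic on what those mean per year: three nines is about 8.8 hours of downtime, four nines about 53 minutes, five nines about 5.3 minutes.)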


You're confusing backup with high availability.

Backups are point-in-time snapshots of data, often created daily and sometimes stored on tape.

Their primary use case is giving admins the ability to e.g. restore partial data via export and similar. They can theoretically also be used to restore after a full data loss, but that's beyond rare. Almost no company has had that issue.

This is generally not what's used in high availability contexts. Usually, companies have at least one replica DB which is in read only and only needs to be "activated" in case of crashes or other disasters.

With that setup you're already able to hit 5 nines, especially in the context of B2E companies that usually exclude scheduled downtime via the SLA.
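To make the "activated" part concrete: with a managed database it can be a single call. A rough sketch, assuming an RDS read replica and boto3 (the instance name is made up):

    # Hypothetical failover sketch: promote a standby read replica to a
    # standalone writable instance after the primary is lost.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.promote_read_replica(
        DBInstanceIdentifier="orders-db-replica",  # made-up identifier
        BackupRetentionPeriod=7,                   # keep automated backups on the promoted instance
    )

    # Wait until the promoted instance is available, then repoint the app at it.
    waiter = rds.get_waiter("db_instance_available")
    waiter.wait(DBInstanceIdentifier="orders-db-replica")

Self-managed setups do the equivalent with a replica promotion plus a connection-string or DNS change.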


> With that setup you're already able to hit 5 nines

This is "five nines every year except that one year we had two freak hardware failures at the same time and the site was hard down for eighteen hours".

"Almost no company has this problem" well I must be one incredibly unlucky guy, because I've seen incidents of this shape at almost every company I've worked at.


I know one company that strove for five sixes.


You have to look at all the factors; a simple server in a simple datacenter can be very, very stable. When we were all doing bare metal servers back in the day, server uptimes measured in years weren't that rare.


This is true. Also some things are just fine, in fact sometimes better (better performing at the scale they actually need and easier to maintain, deploy, and monitor), as a single monolith instead of a pile of microservices. But when comparing bare metal to cloud it would be nice for people to acknowledge what their solution doesn't give, even if the acknowledgement comes with the caveat “but we don't care about that anyway because <blah>”.

And it isn't just about 9s of uptime, it is all the admin that goes with DR if something more terrible than a network outage does happen, and other infrastructure conveniences. For instance: I sometimes balk at the performance we get out of Azure SQL given what we pay for it, and in my own time you are safe to bet I'll use something else on bare metal, but while DayJob are paying the hosting costs I love the platform dealing with managing backup regimes, that I can do copies or PiT restores for issue reproduction and such at the click of a button (plus a bit of a wait), that I can spin up a fresh DB & populate it without worrying overly about space issues, etc.

I'm a big fan of managing your own bare metal. I just find a lot of other fans of bare metal to be more than a bit disingenuous when extolling its virtues, including cost-effectiveness.


It's true, but I'm woken up more frequently if there are fewer 9s, which is unpleasant. It's worth the extra cost to me.


Hence you can use AWS to host them.


and each additional nine increases complexity geometrically.


All of those are examples of overbloated, slow, horrible user experience apps.


Does their market share back up your take of them as horrible apps?

Are there Qt or GTK competitors crushing them?

I always hear how terrible Electron apps are, but the companies picking Electron seem to get traction that Qt or other apps don't, and seem to have a good cross platform story as well.


Users will happily deal with a suboptimal experience as long as there are other things attracting them to the product. That's why Microsoft can do whatever it wants with Windows without worrying their users will run off somewhere else. So if you care more about people than businesses, maybe it shouldn't be an excuse to pick "better dev experience" over the user's.


Be careful with that logic. You notice successful Electron apps because of how bloated they are. I suspect you use many Qt apps without even noticing.

One that comes to mind, which I use daily and only recently noticed was implemented in Qt, is the Telegram desktop app.


They said horrible user experience apps, not horrible apps. You can still deliver an app with a horrible user experience and build a profitable business. Ever done an expense report?

Companies aren't picking Electron due to inherent shortcomings in other platforms, they're picking it because it's easier (and cheaper) to find JavaScript devs who can get up to speed with it quickly.


Discord, VS Code, and Figma are all apps that individuals choose and are well liked despite many alternatives. Slack too I think, though I don’t have experience with it.

Your comment applies to Teams and I’m sure other electron apps. But the sweeping generalization that electron apps have terrible user experiences is pretty obviously incorrect.


They work great for me.


Oh yes, the good old "works for me". On yesterday's supercomputer, I presume? I live in a "developing" country (I have doubts it's really developing); most people are running laptops with no more than 8 GiB of RAM (sometimes 4 or less), and all this Electron nonsense runs like molasses, especially if you're trying to use a computer like a proper computer and do multitasking.

And most of the world is like that, very few of us (speaking globally) have $2k to drop on a new supercomputer every few years to run our chat applications.


Hey, I found the CEO of Discord


Chicken liver has more iron and selenium in it per oz than beef liver. Easier to eat a ton of, and not as harsh tasting. Make some dirty rice or just liver stew!


I prefer to turn that into pâté, personally. The goal is always getting people to actually eat the stuff.


There are quite a few reasons that should happen, but I won't hold my breath. And I think that issuance really won't do anything worthwhile, except be a footnote in a history book.


I don’t think there is any way in hell the US is going to be welcome back on the world stage as a partner to any of their former allies again unless, among many other things, they put themselves under the ICC's jurisdiction.


Unfortunately that has been forgotten in this era.


I'm proudly a 100% on-prem Linux sysadmin. There are no openings for my skills, and they do not pay as well as whatever cloud hotness is "needed".


Nobody is hiring generalists nowadays.

At the same time, the incredible complexity of the software infrastructure is making specialists more and more useless. To the point that almost every successful specialist out there is just a disguised generalist who decided to focus their presentation on a single area.


Maybe everyone is retaining generalists. I keep being given retention bonuses every year, without asking for a single one so far.

As mentioned below, never labeled "full stack", never plan on it. "Generalist" is what my actual title became back in the mid 2000s. My career has been all over the place... the key is being stubborn when confronted with challenges and being able to scale up (mentally and sometimes physically) to meet the needs, when needed. And chill out when it's not.


> Nobody is hiring generalists nowadays.

What?

I throw up in my mouth every time I see "full stack" in a job listing.

We got rid of roles... DBA's, QA teams, Sysadmins, then front and back end. Full Stack is the "webmaster" of the modern era. It might mean front and back end, it might mean sysadmin and DBA as well.


Even full stack listings come with a list of technologies that the candidate must have deep knowledge of.

> We got rid of roles... DBA's, QA teams, Sysadmins, then front and back end.

To a first approximation, those roles were all wrong. If your people don't wear many of those hats at the same time, they won't be able to create software.

But yeah, we did get rid of roles. And still require people to be specialized to the point it's close to impossible to match the requirements of a random job.


That's the crazy thing.

Most AWS-only Ops engineers I know are making bank and in high demand, and Ops teams are always HUGE in terms of headcount outside of startups.

The "AWS is cheaper" thing is the biggest grift in our industry.


I think this is driven by the market itself and the way cloud providers promote their products.

After being fully in the cloud for some time, we're moving to hybrid solutions. Upper management is happy with the costs and the cloud engineers have new toys.


1. large, homogenous domain where the budget for your department is large

2. niche, bespoke domain primarily occupied by companies looking to cut costs


I wonder how vibe coding will impact this.

You can easily get your service up by asking Claude Code or whatever to just do it.

It produces AWS YAML that's better than many devops people I've worked with. In other words, it absolutely should not be trusted with trivial tasks, but you could easily blow $100Ks per year for worse.


I've been contemplating this a lot lately, as I just did code review on a system that was moving all the AWS infrastructure into CDK, and it was very clear the person doing it was using an LLM, which created a really complicated, over-engineered solution to everything. I basically rewrote the entire thing (still pairing with Claude), and it's now much simpler and easier to follow.

So I think for developers who have deep experience with systems, LLMs are great -- I did a huge migration in a few weeks that probably would have taken many months or even half a year before. But I worry that people who don't really know what's going on will end up with a horrible mess of infra code.
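For a sense of scale, the "simpler" end state I mean is on the order of this kind of stack -- a toy sketch in CDK's Python flavor with made-up resource names, not the actual system:

    # Toy CDK stack: one bucket and one small Lambda, nothing else.
    from aws_cdk import App, Stack, aws_s3 as s3, aws_lambda as _lambda
    from constructs import Construct

    class IngestStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            bucket = s3.Bucket(self, "UploadsBucket", versioned=True)

            handler = _lambda.Function(
                self, "IngestHandler",
                runtime=_lambda.Runtime.PYTHON_3_9,
                handler="index.handler",
                code=_lambda.Code.from_asset("lambda"),
            )
            bucket.grant_read(handler)  # scoped grant instead of a hand-rolled IAM policy

    app = App()
    IngestStack(app, "IngestStack")
    app.synth()

The over-engineered version of the same thing tends to show up as custom constructs, nested stacks, and hand-written policies wrapping what the library already gives you.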


To me it's clear that most Ops engineers are vibe coding their scripts/yamls today.

The time to have a script ready has decreased dramatically in the last three years. The number of problems when deploying the first time has also increased in the same period.

The difference between the ones who actually know what they're doing and the ones who don't is whether they will refactor and test.

