chmod600's comments | Hacker News

How are these kinds of self-reinforcing systems usually brought down?

I doubt some shifts in the opinions of normal people are going to stop the elite flywheel -- there's just not enough connection between a machinist in Ohio or a nurse in Phoenix and the elite for their opinions to matter.

The Ivy League can fall apart in one of two ways:

1. The levers of power move around to new groups of people radically enough that the jobs that were once elite are now irrelevant and forgotten, like a guild for some obsolete craft. AI is obviously one way this might happen, where many elite professions might just collapse.

2. Internal bickering over identity that just makes it impossible for normal people to attend. Think crazy stuff like requiring graduates to share their income for life with other graduates, or requiring some weird kind of binding pledge about who you can work for. Colleges already have a weird possessiveness over their graduates, this just takes that to the logical next level.


> Think crazy stuff like

Who is doing this?


Idea: let's make it so all emergency powers have to be re-authorized every week by Congress at midnight on Friday with a 90% quorum of physically-present representatives.

If "emergency" action is needed because Congress is too slow, then let's make sure they are working through the process to create real law. Or if they aren't, I guess it wasn't an emergency, and there's no reason for administrative law to "fill in" using a non-democratic process.


Great! I'm looking forward to seeing this requirement applied to also dissolve the judicial branch entirely, so that Congress is entirely responsible for both enforcement and adjudication of the law. Let's work together to end separation of powers.


You seem to be suggesting that Congress making law is intruding on the power of an agency to make Administrative law? The latter is not (supposed to be) an actual branch of government. Congress has full power to rewrite all the administrative law as they see fit.


"These reforms include currency devaluation..."

How does that reduce inflation and cause a surge in bond prices?


Currency devaluation was the norm for Argentina for the last 30 years and NOW the squid-turds are worried about it?

He was never going to stop that train on a dime. Things take time; they have more pain before the restoration.


Please oh please tell me what a squid-turd is!


Some people have more pain before the restoration. Incidentally those are the same people screwed by inflation.


For some time, official and unofficial exchange rates had a mismatch.

The official rate was accelerated to catch up.


Sometimes hard choices need to be made. What's your alternative proposal?


The cuts Milei is making are purely ideological; not even the most ardent of budget hawks (including the IMF) ever considered these cuts necessary or sustainable. The man is insane and despises every public institution.

What they’re not saying here is that tax revenue is dropping and will wipe out any of these gains, and that installed industrial activity has dropped by over 30% since he assumed office.


The point is not to balance a budget. The point is to rein in persistently triple-digit inflation. Everyone wants to have their cake and eat it too, but the reality is that you fight inflation by cooling economic activity for a period.


A 30-40% real drop in economic activity and a reduction by half in purchases of medical consumer items isn't "cooling"; it is an active destruction of the real economy that has absolutely no rationale.


You literally just made that up. You probably think that everyone's salary got cut in half when the government stopped lying about the value of their currency.


What specifically would you propose to balance the budget?


Yeah, how do you become a wealthy country with a strong tax base if your citizens don’t get education?


The educational level was already declining, as seen in the PISA tests. Milei is not to blame for that.


"no political will in the US to build publicly-owned transport"

There's little faith that public projects have the expertise to actually get it done and make it work. It's hard for me to imagine the federal government succeeding at that for any reasonable cost, and I suppose you could blame some of that on partisan bickering. But I also can't imagine California succeeding for any reasonable cost, and it's a one-party state, so there's no excuse.

At the end of the day you need some people who actually know how to do the job rather than just argue over plans and subcontract twelve levels deep. My guess is that Brightline found a few such people, and that's their competitive advantage as a business.


> There's little faith that public projects have the expertise to actually get it done and make it work.

This ends up being self-fulfilling. People don't trust the government, so they suffocate the project in fixed payscales and low-bid rules and endless reviews, and so the government can't get anything done, and so people don't trust the government...

> At the end of the day you need some people who actually know how to do the job rather than just argue over plans and subcontract twelve levels deep.

Right - so you need to be able to hire those people and pay them something close to what they're worth, or build up that expertise over the long term by having a steady pipeline of projects and training people as you go. But voters don't trust these governments enough to empower them to do that.


"This ends up being self-fulfilling."

Perhaps. But once the expertise is lost, you can't get it back by throwing more money at the problem. You have incompetent people hiring people who check all the right boxes but still can't do it, and then you have a huge sunk cost that you don't want to cancel so it drags on forever, eroding trust even further.

Private companies have some advantages here. If they don't think the project will succeed, they will stop, because they know there's no payday. If it's due to bad laws, they will lobby (a bad word, I know) to change them. They'll fire people who don't perform. They'll get creative about finding people who can get the job done. They'll stop and think about who might actually ride it, because they need the ticket revenue, so they will build the lines in the right places with the right stops.

Maybe all of that could be true for some governments. But there's a long way to go before the US or the California government is able to do any of those things.


> But once the expertise is lost, you can't get it back by throwing more money at the problem. You have incompetent people hiring people who check all the right boxes but still can't do it

Maybe. Maybe there's no alternative to doing a pipeline of progressively bigger projects with in-house management and accepting that the first few will suck. But if you're not willing to pay what the expertise costs, then there's no way you'll make it work; you need to get that level of expertise in-house. I'd think that if you're willing to pay top dollar, then you have at least a chance of hiring the right people.


Maybe a dumb idea, but can schools use jamming or shielding?


Not without breaking several laws, and potentially putting people in danger - how do you call 911 in an emergency if the school is in a signal blackout?


Landlines, voip, panic buttons. People made calls to 911 before the iPhone was invented.

When I was in school every single classroom was wired for landlines and Ethernet. You can also set up your wifi networks in ways that limit certain things and not others.


> When I was in school every single classroom was wired for landlines and Ethernet.

Alright, but they're not wired now. Neither are bathrooms, cafeterias, hallways, or other places where someone might have a stroke, a heart attack, a seizure. And unfortunately, gunmen in schools are a thing, and if you're a teacher, scrambling for the wall phone while you can hear gunshots in the next room over is significantly more difficult than reaching into your pocket for your phone.


Maybe some kind of smart network could only allow 911 calls through, and no data?

If phones in schools really are bad for kids, it seems like we could probably sort out some of these objections?


Shielding is probably bad and expensive, but is it illegal?



That's jamming, not shielding.

https://en.wikipedia.org/wiki/Faraday_cage


I have long thought that there should be a network protocol that would achieve that in practice, but which would let emergency calls through. Transponders that could be deployed in settings such as schools, hospitals and movie theatres to tell devices to enter a limited mode. Support for the protocol would be compulsory.

But I would think that the telecom industry does not want that.
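
For illustration, a minimal sketch of what the handset side might look like. All of it is hypothetical -- the beacon format, the names, the trust model -- since no such standard exists today:

    # Hypothetical sketch: a venue transponder broadcasts a signed
    # "limited mode" beacon, and a compliant handset consults it
    # before placing a call. Emergency numbers always get through.
    from dataclasses import dataclass

    ALLOWED_NUMBERS = {"911", "112"}  # emergency numbers, always permitted

    @dataclass
    class LimitedModeBeacon:
        venue_id: str         # e.g. a school, hospital, or theatre
        restrictions: set     # capabilities to disable, e.g. {"data", "voice"}
        signature: bytes      # would be verified against a trusted authority

    def call_permitted(number: str, beacon: LimitedModeBeacon | None) -> bool:
        """Decide whether the handset should place this call."""
        if number in ALLOWED_NUMBERS:
            return True       # emergency calls bypass any restriction
        if beacon is None:
            return True       # no transponder in range: normal operation
        return "voice" not in beacon.restrictions

    # Inside a school broadcasting {"data", "voice"}, a 911 call goes
    # through but an ordinary call does not.
    beacon = LimitedModeBeacon("school-42", {"data", "voice"}, b"...")
    assert call_permitted("911", beacon)
    assert not call_permitted("555-0100", beacon)

The hard parts, of course, are the ones the sketch waves away: signing the beacons so a prankster can't broadcast one, and getting every handset vendor to honor the protocol.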


Phones have offline content and wired headphones exist.

People adapt


Forget all the Khan Academies of the world, let kids be bored on TI-83s and learn programming the ol' fashioned way!


Are the adaptations healthier?


But the article is about burnout. Surely that can't be great for existing doctors, either?


They choose more money over avoiding burnout.


You took the words right out of my mouth! Money is the elephant in the room here. Why doesn't the author quit surgery and start a small GP clinic? Oh, only half the pay? I see similar behaviour in law firms and investment banking.


Presumably, those who trained as surgeons want to be surgeons. Sunk cost fallacy might come into play. A better analogy is someone who wants to be a software engineer and gets burned out, and you say "hey, why not be a NOC technician if you can't handle it?"

Further, you assume a surgeon could just become a GP. They are different fields.

https://www.quora.com/Can-a-surgeon-also-practice-as-a-prima...

"Can a surgeon become a regular doctor?"

> They can try. But they would have no idea what they are doing. But legally, they could certainly practice as a primary care doctor. They would not be board certified, could not sit for the ABIM exam, and would not be able to pass it if they did.

> The professions are also very different; primary care is more allied to the work of a physician, whilst a surgeon is trained to do serious surgery, not the kind a primary care doctor would do. So I'm not sure, even if you could be legally certified in both specialties, that you wouldn't lose your surgical skills if you spent a lot of time in primary care.

> Knowing what I know about the medical world in general, I would advise against such a combination. A surgical residency is such a taxing one that you wouldn't have time to do anything else besides surgery; furthermore, the required mental approach to do the work well as a surgeon or a primary care physician is also quite different.


The enforced scarcity in the market is what makes sure the money is always enough to prevent you from relaxing. Imagine doctors were as common as McDonald's managers. At the margin doctors would frequently be taking a slight cut in pay to do something fun like sports medicine. Now imagine there was only one doctor in the world. Even if he longed to relax and do pharmacy, ailing kings would offer him mountains of gold until he almost had no choice but to see them.


You're not wrong about the financial aspect but remember surgeons are completely incapable of being GPs. Surgery is its own residency (5 years I think?), and to be an actual GP you have to be either a family (3 years) or internal (4 years) medicine residency graduate. So in addition to the pay cut in absolute terms, you're taking 3-4 years off and making $60k/yr working 80+ hour weeks to do that other residency. So the opportunity cost alone is a few million even ignoring the pay cut.
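
To put rough numbers on that, taking the $800k surgeon / $400k GP figures mentioned elsewhere in this thread as illustrative assumptions (not sourced data):

    # Back-of-the-envelope opportunity cost of a surgeon retraining as a GP.
    surgeon_pay = 800_000      # assumed surgeon salary, USD/yr
    resident_pay = 60_000      # pay during the second residency, USD/yr
    gp_pay = 400_000           # assumed GP salary afterward, USD/yr
    residency_years = 4        # family/internal medicine residency
    years_after = 20           # remaining career after retraining

    foregone = (surgeon_pay - resident_pay) * residency_years   # 2,960,000
    pay_cut = (surgeon_pay - gp_pay) * years_after              # 8,000,000
    print(f"Lost during residency:   ${foregone:,}")
    print(f"Cumulative pay cut:      ${pay_cut:,}")
    print(f"Total opportunity cost:  ${foregone + pay_cut:,}")  # $10,960,000

So the residency years alone cost a few million in foregone income, before the ongoing pay cut even starts.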


So the answer to being overworked in a hospital is to quit and be overwhelmed running your own business for half the pay?

That doesn't make any sense. And not everyone has the capital or capacity to start their own business.


Let's not pretend we're talking about the difference between $30k and $60k/yr here. We're talking about $400k vs. $800k. "I'll only make half a million dollars a year if I do this" does not set oneself up to be a sympathetic character.


In what world is your typical surgeon paid $800k/yr?


In the US a lot of surgeons make this much or more.


I don’t think many doctors have much choice in the matter. The MBAization of hospital companies and practices bought up by private equity is strip mining the productive capacity of providers to juice profits. This is the MO of financial capitalism: find an established business that someone else built and extract maximum profits until the business collapses, then leave the rubble for others to clean up.


Renters might be an interesting case. Though in the long term charging at home will be available for renters, too.


Yeah, I've always found that particular argument unconvincing. If road usage grows, that means more people are benefiting, right?

There are some fundamental problems with cars crowding out alternatives (either due to cost or space). But saying that utilization is evidence that something is bad seems backwards to me.


There are a number of problems with increasing car utilization. Cars tend to prevent other forms of transport from co-existing, as busy roads are not attractive to walk/cycle/scooter along. There's also the sizable amount of pollution from tire and brake wear, which is not solved by EVs (except electric scooters/bikes, which produce a negligible amount of tire particulates due to their reduced weight).

When you have increased car usage, there's also a demand for car parking and once you start assigning a large amount of space for car parking, you end up pushing facilities further apart. This means that more journeys then become impractical except via cars and the problems increase.


An AI should be helpful to the user, unless it's a horrible question, in which case just don't answer.

Bring in whatever values you want along the way. If you ask your aunt or uncle a question, they will answer helpfully and maybe spread their values in the process. If they aren't helpful, their values don't matter because nobody will hear them.


Who’s to decide what’s a horrible question and why?


Exactly, and I agree that this is where OpenAI too is still struggling with their arbitrary content-policy flagging based on the user's input, even when nothing "bad" is being asked or requested -- see my post earlier in the thread: https://news.ycombinator.com/item?id=39557183

I do also agree with @chmod600 that the only way to teach these models to be anti-fragile and suitable for all kinds of user queries is to have them decline only those requests that are _actually_ inappropriate or illegal.

In fact, it should be self-evident, and the way that almost all of these leading AI companies are currently handling these issues is just absurd. It feels poorly planned and executed, merely amplifying the existing distrust towards these AI models and the companies behind them.

The problem with OpenAI is that they're trying to offer a primarily NLP/LLM tool for, e.g., text analysis, summaries, and commentaries, but ChatGPT's content moderation, glued on top of the otherwise well-functioning system, goes into full meltdown mode whenever the flagging system perceives a "wrong word" or "sensitive topic" in the source/question material.

In OpenAI's case, it's downright ridiculous that the underlying model doesn't seem to have any grasp of the internal workings of the flagging system; in most cases, when asked what the offending content was, it could think of literally nothing.

Also, are we supposed to solve any actual issues with these types of AI "tools" that cannot handle real-world topics and at times even punish a paying customer for bringing those topics up for discussion? All of this is the modern day in a nutshell when it comes to addressing any real issues: just don't ask any questions, problem solved.

Anthropic's Claude has also been lobotomized into an absolute shadow of its former self within the past year. It raises the question of how much the guardrails are already hampering the reasoning faculties of various models. "But the AI might say something that doesn't fit the narrative!"

That being said, while GPT-4 in particular is still highly usable and seems to be less and less "opinionated" with each checkpoint, the flagging system over the user's input/question can subsequently result in an automated account warning and even account deletion, should the politburo-- I mean, OpenAI-- find the user having been extra naughty. Punishing the user in that manner for their _question_, especially when there's been no actual malice in the input, is not justifiable in my opinion. It immediately undermines, e.g., OpenAI's "ethical AI" mission statement and makes them look like absolute hypocrites. Their whole ad campaign was based on the user being able to ask questions of an AI -- not on pasting in a poem, asking what it's about, and getting flagged, or asking about politics or religion and getting a warning e-mail.

Punishing the user for their input is also, imho, not the proper way to build a truly anti-fragile AI system, let alone build any sort of trust in the "good will" of these AI companies -- especially when, in many cases, you're paying good money for the use of these models and get these kinds of wonky contraptions in return.

Also, should you get a warning e-mail over content policies from OpenAI, it's all automated, with no explanation of what the "offending content" was, no reply-to address, and no possibility of appeal. "Gee, no techno-tyranny detected!" Those who go through mountains of text material with, e.g., ChatGPT must find it really "uplifting" to know that their entire account can go poof if something trips the content policy filters.
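
As an aside, OpenAI does expose a public moderation endpoint, so a paying user worried about tripping the filters can at least pre-screen their own input before sending it -- with the caveat that this public endpoint is not necessarily the same system that triggers the account warnings (that part is an assumption on my part). A minimal sketch using the OpenAI Python SDK:

    # Pre-screen text against OpenAI's moderation endpoint before
    # submitting it, to reduce the chance of a surprise flag.
    # Assumes the openai SDK (>= 1.0) and OPENAI_API_KEY in the env.
    from openai import OpenAI

    client = OpenAI()

    def is_flagged(text: str) -> bool:
        """Return True if the moderation endpoint flags the text."""
        result = client.moderations.create(input=text)
        return result.results[0].flagged

    if is_flagged("Summarize the arguments on both sides of this debate."):
        print("Likely to be flagged; consider rephrasing.")
    else:
        print("Passed moderation.")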

That's not to say that OpenAI hasn't been making progress on the LLM side in terms of mitigating their models' biases over the last 1.5 years. Some might remember the earlier days of ChatGPT, when some of the worst aspects of Silicon Valley's ideological bubble were echoing all over the model; a lot of that has been smoothed out by now, especially with GPT-4, with the exception of the aforementioned flagging system, which is just glued on top of everything else, and it shows.

TL;DR: Nevermind the AI, beware the humans behind it.

