
There's definitely a big problem with entry-level jobs being replaced by AI. Why hire an intern or a recent college-grad when they lack both the expertise and experience to do what an AI could probably do?

Sure, the AI might require handholding and prompting too, but the AI is either cheaper or actually "smarter" than the young person. In many cases, it's both. I work with some people who I believe have the capacity and potential to one day be competent, but the time and resource investment to make that happen is too much. I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now. If I handed it off to them I would not get it fast, and I would need to also go through it with them in several back-and-forth feedback-review loops to get it to a state that's usable.

Given they are human, this would push back delivery times by 2-3 business days. Or... I can prompt and handhold an AI to get it done in 3 hours.

Not that I'm saying AI is a godsend, but new grads and entry-level roles are kind of screwed.



This is where the horrific disloyalty of both companies and employees comes to bite us in the ass.

The whole idea of interns is that they're training positions. They are supposed to be a net negative.

The idea is that they will either remain at the company after their internship, or move to another company, taking the priorities of their trainers with them.

But nowadays, with corporate HR actively doing everything it can to screw over employees, and employees so transient that they can barely remember the name of their employer, the whole thing is kind of a worthless exercise.

At my old company, we trained Japanese interns. They would often relocate to the US on 2-year visas, and became very good engineers upon returning to Japan. It was well worth it.


I agree that interns are pretty much over in tech, except maybe at an established company, as a semester/summer trial/goodwill period for students near graduation. You usually won't get work output worth the mentoring cost, but you might identify a great potential hire, and be on their shortlist.

Startups are less enlightened than that about "interns".

Literally today, in a job posting sent to a top CS department, a startup was looking for "interns" to bring (not learn) hot experience developing AI agents, for $20/hour, while still being called interns.

It's also normal for these startup job posts to be looking for experienced professional-grade skills in things like React, Python, PG, Redis, etc., and still calling the person an intern, with a locally unlivable part-time wage.

Those startups should stop pretending they're teaching "interns" valuable job skills, admit that they desperately need cheap labor for their "ideas person" leadership to do things they can't do themselves, and cut the "intern" in as a founding engineer with meaningful equity. Or, if they can't afford to pay a livable and plausibly competitive startup wage, maybe those people are technical cofounders.


>At my old company, we trained Japanese interns. They would often relocate to the US, for 2-year visas, and became very good engineers,

Damn, I wish that had been me. Having someone mentor you at the beginning of your career, instead of having to self-learn and fumble your way around never knowing whether you're on the right track, is a massive force multiplier that pays dividends over your whole career. It's like entering the stock market with $1 million in capital vs. $100. You're also less likely to build bad habits if somebody with experience teaches you early on.


I really think the loss of the mentor/apprentice type of experience is one of those baby-with-the-bathwater losses. There are definitely people with the personality type that they know everything and nothing can be learned from others; but for those of us who would much rather learn the hows and whys from those with more experience, rather than collecting all of those paper cuts ourselves, working with mentors is a much better way to grow.


Yup. It was a standard part of their HR policy. They are all about long, long-term employment.

They are a marquee company, and get the best of the best, direct from top universities.

Also, no one has less than a Master's, over there.

We got damn good engineers as interns.


>Also, no one has less than a Master's, over there.

I feel this is pretty much the norm everywhere in Europe and Asia. No serious engineering company in Germany even looks at your resume if there's no MSc listed, especially since education is mostly free for everyone, so not having a degree is seen as a "you problem." It also leads to degree inflation, where only PhDs or post-docs get taken seriously for some high-level positions. I don't remember ever seeing a senior manager/CTO without the "Dr." or even "Prof. Dr." title at the top German engineering companies.

I think it's mostly the US that has the concept of the cowboy self-taught engineer who dropped out of college to build a trillion-dollar empire in his parents' garage.


Graduate school assistantships in the US pay such shit wages compared to Europe that you would be eligible for food stamps. The opportunity cost is better spent getting your bachelor's degree, finding employment, and then using that salary to pay for grad school, or having your employer pay for it. I've worked in Europe with just my bac+3. I also had 3-4 years of applied work experience that a fresh-faced MSc holder was only just starting to acquire.


Possibly also because they don't observe added value from the additional schooling.

Also because US salaries are sky high compared to their European counterparts, so I could understand if the extra salary wasn’t worth the risk that they might not have that much extra productivity.

I’ve certainly worked with advanced degree people who didn’t seem to be very far along on the productivity curve, but I assume it’s like that for everything everywhere.


> horrific disloyalty of both companies and employees

There’s no such thing as loyalty in employer-employee relationships. There’s money, there’s work, and there’s [collective] leverage. We need to learn a thing or two from blue collars.


> We need to learn a thing or two from blue collars.

A majority of my friends are blue-collar.

You might be surprised.

Unions are adversarial, but the relationships can still be quite warm.

I hear that German and Japanese unions are full-force stakeholders in their corporations, and the relationship is a lot more intricate.

It's like a marriage. There's always elements of control/power play, but the idea is to maximize the benefits.

It can be done. It has been done.

It's just kind of lost, in tech.


>It's just kind of lost, in tech.

Because you can't offshore your clogged toilet or broken HVAC issue to someone abroad for cheap on a whim like you can with certain cases in tech.

You're dependent on a trained and licensed local showing up at your door, which gives him actual bargaining power, since he's only competing with the other locals to fix your issue and not with the entire planet in a race to the bottom.

Unionization only works in favor of the workers in the cases when labor needs to be done on-site (since the government enforces the rules of unions) and can't be easily moved over the internet to another jurisdiction where unions aren't a thing. See the US VFX industry as a brutal example.

There are articles discussing how LA risks becoming the next Detroit, with many of the successful blockbusters of 2025 now being produced abroad due to the obscene costs of production in California, caused mostly by the unions there. Like $350 per hour for a guy to push a button on a smoke machine, because only a union man is allowed to do it. Or that it costs more to move across a Cali studio parking lot than to film a scene in the UK. Letting unions bleed companies dry is only gonna result in them moving every job that can be moved abroad.


Almost every Hollywood movie you see that wasn’t filmed in LA was basically a taxpayer-backed project. Look at any film with international locations, and in the credits you’ll see a lot of state-backed loans, grants, and tax credits. A large part of the film crew and cast are flown out to those locations. And if you think LA is expensive, location pay is even more so. So production is flying out the most expensive parts of the crew to save a few dollars on craft services?


> Because you can't offshore your clogged toilet or broken HVAC issue to someone abroad for cheap on a whim like you can with certain cases in tech.

Yet. You can’t yet. Humanoids and VR are approaching the point quite rapidly where a teleoperated or even autonomous robot will be a better and cheaper tradesman than Joe down the road. Joe can’t work 24 hours a day. Joe realises that, so he’ll rent a robot and outsource part of his business, and will normalise the idea as quickly as LLMs have become normal. Joe will do very well, until someone comes along with an economy of scale and eats his breakfast.


All the Joes I know would spend serious time hunting these robots.

IMO, real actual people don’t want to live in the world you described. Hell, they don’t wanna live in this one! The “elites” have failed us. Their vision of the future is a dystopian nightmare. If the only reason to exist is to make 25 people at the top richer than gods? What is the fucking point of living?


> If the only reason to exist is to make 25 people at the top richer than gods?

You just described most medieval societies.

It's been done before, and those 25 people are hoping to make it happen again.


Hoping is the wrong word. They’re trying harder than ever.


I have been in union shops before, working in tech. In some places they are fine; in others, the union is where your worst employee goes to make everyone else on the team less effective.


I personally care a lot about people, but if I was running a publicly traded for-profit, I would have a lot of constraints about how to care for them. (A good place to start, by the way, is not bullshitting people about the financial realities.)

Employees are lucky when incentives align and employers treat them well. This cannot be expected or assumed.

A lot of people want a different kind of world. If we want it, we’re gonna have to build it. Think about what you can do. Have you considered running for office?

I don’t think it is helpful for people to play into the victim narrative. It is better to support each other and organize.


Interns and new grads have always been a net-negative productivity-wise in my experience, it's just that eventually (after a small number of months/years) they turn into extremely productive more-senior employees. And interns and new grads can use AI too. This feels like asking "Why hire junior programmers now that we have compilers? We don't need people to write boring assembly anymore." If AI was genuinely a big productivity enhancer, we would just convert that into more software/features/optimizations/etc, just like people have been doing with productivity improvements in computers and software for the last 75 years.


Where I have worked new grads (and interns) were explicitly negative.

This is part of why some companies have minimum terminal levels (often 5/Sr) before which a failure to improve means getting fired.


Isn't that every new employee? For the first few months you are not expected to be firing on all cylinders as you catch up and adjust to company norms.

An intern is much more valuable than AI in the sense that everyone makes micro-decisions that contribute to the business. An intern can remember what they heard in a meeting a month ago, or some important water-cooler conversation, and incorporate that into their work. AI cannot do that.


It's a monetary issue at the end of the day.

AI/ML and Offshoring/GCCs are both side effects of the fact that American new grad salaries in tech are now in the $110-140k range.

At $70-80k the math for a new grad works out, but not at almost double that.

Also, going remote first during COVID for extended periods proved that operations can work in a remote first manner, so at that point the argument was made that you can hire top talent at American new grad salaries abroad, and plenty of employees on visas were given the option to take a pay cut and "remigrate" to help start a GCC in their home country or get fired and try to find a job in 60 days around early-mid 2020.

The skills aspect also played a role to a certain extent - by the late 2010s it was getting hard to find new grads who actually understood systems internals and OS/architecture concepts, so a lot of jobs adjacent to those ended up moving abroad to Israel, India, and Eastern Europe where universities still treat CS as engineering instead of an applied math discipline - I don't care if you can prove Dixon's factorization method using induction if you can't tell me how threading works or the rings in the Linux kernel.

The Japan example mentioned above only works because Japanese salaries in Japan have remained extremely low and Japanese is not an extremely mainstream language (making it harder for Japanese firms to offshore en masse - though they have done so in plenty of industries where they used to hold a lead like Battery Chemistry).


> by the late 2010s it was getting hard to find new grads who actually understood systems internals and OS/architecture concepts, so a lot of jobs adjacent to those ended up moving abroad to Israel, India, and Eastern Europe where universities still treat CS as engineering instead of an applied math discipline

That doesn’t fit my experience at all. The applied math vs. engineering continuum mostly depends on whether a CS program at a given school came out of the engineering department or the math department. I haven’t noticed any shift along that spectrum in CS departments, except that people are more likely to start out programming in higher-level languages, where they are more insulated from the hardware.

That’s the same across countries though. I certainly haven’t noticed that Indian or Eastern European CS grads have a better understanding of the OS or the underlying hardware.


> I certainly haven’t noticed that Indian or Eastern European CS grads have a better understanding of the OS or the underlying hardware.

Absolutely, but that's if they are exposed to these concepts, and that's become less the case beyond maybe a single OS class.

> except that people are more likely to start out programming in higher level languages where they are more insulated from the hardware

I feel that's part of the issue, but also, CS programs in the US are increasingly making computer architecture an optional class. And network specific classes have always been optional.

---------

Mind you, I am biased towards Cybersecurity, DevOps, DBs, and HPC, because that is the industry I've worked in for over a decade now, and it legitimately has become difficult to hire new grads in the US with a "NAND-to-Tetris" mindset, because curriculums have moved away from that, aside from a couple of top programs.


ABET still requires computer architecture and organization. And they also require coverage of networking. There are 130 ABET accredited programs in the US and a ton more programs that use it as an aspirational guide.

Based on your domain, I think a big part of what you’re seeing is that over the last 15 years there was a big shift in CS students away from people who are interested in computers towards people who want to make money.

The easiest way to make big bucks is in web development, so that’s where most graduates go. They think of DBA, devops, and cybersecurity as low status. The “low status” of those jobs becomes a bit of a self fulfilling prophecy. Few people in the US want to train for them or apply to them.

I also think that the average foreign worker doing these jobs isn’t equivalent to a new grad in the US. The majority have graduate degrees and work experience.

You could hire a 30 year old US employee with a graduate degree and work experience too for your entry level job. It would just cost a lot more.


I just can't agree with this argument at all.

Today, you hire an intern and they need a lot of hand-holding, are often a net tax on the org, and they deliver a modest benefit.

Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more. Their total impact will be much higher.

The whole "entry level is screwed" view only works if you assume that companies want all of the drawbacks of interns and entry level employees AND there is some finite amount of work to be done, so yeah, they can get those drawbacks more cheaply from AI instead.

But I just don't see it. I would much rather have one entry level employee producing the work of six because they know how to use AI. Everywhere I've worked, from 1-person startup to the biggest tech companies, has had a huge surplus of work to be done. We all talk about ruthless prioritization because of that limit.

So... why exactly is the entry level screwed?


> Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more.

Maybe tomorrow's interns will be "AI experts" who need less hand-holding, but the day after that will be kids who used AI throughout elementary school and high school and know nothing at all, deferring to AI on every question, and have zero ability to tell right from wrong among the AI responses.

I tutor a lot of high school students and this is my takeaway over the past few years: AI is absolutely laying waste to human capital. It's completely destroying students' ability to learn on their own. They are not getting an education anymore, they're outsourcing all their homework to the AI.


It's worth reminding folks that one doesn't _need_ a formal education to get by. I did terribly in school and never went to college, and years later have reached a certain expertise (with many fortunate moments along the way).

What I had growing up though were interests in things, and that has carried me quite far. I worry much more about the addictive infinite immersive quality of video games and other kinds of scrolling, and by extension the elimination of free time through wasted time.


I mean, a lot of what you mentioned is an issue of critical thinking (and I'm not sure that's something that can be taught), which has always been an issue in any job market, and deskilling via automation (AI or traditional) was used to remediate that gap.

But if you deskill processes, it makes it harder to argue in favor of paying the same premium you did before.


They don't have the experience to tell bad AI responses from good ones.


True, but this becomes less of an issue as AI improves, right? Which is the 'happier' direction to see a problem moving, as if AI doesn't improve, it threatens the jobs less.


I would be worried about the eventual influence of advertising and profits over correctness


Why is the company who employs the intern paying for an AI service that corrupts its results with ads?


If AI improves to the point that an intern doesn’t need to check its work, you don’t need the intern.

You don’t need managers, or CEOs. You don’t even need VCs.


Too reductionist.


Exactly the right amount of reductionist.


> will need less hand-holding, will be able to leverage AI to deliver more

Well, maybe it'll be the other way around: Maybe they'll need more hand-holding since they're used to relying on AI instead of doing things themselves, and when faced with tasks they need to do, they will be less able.

But, eh, what am I even talking about? The _senior_ developers in many companies need a lot of hand-holding that they aren't getting, write bad code with poor practices, and teach the newbies to get used to doing the same. So that's why the entry-level people are screwed, AI or no.


You’ve eloquently expressed exactly the same disconnect: as long as we think the purpose of internships is to write the same kind of code that interns write today, sure, AI probably makes the whole thing less efficient.

But if the purpose of an internship is to learn how to work in a company, while producing some benefit for the company, I think everything gets better. Just as we don't measure today's knowledge workers by words per minute typed, I don't think we'll measure tomorrow's interns by lines of code written by hand.

So much of the doom here comes from a thought process that goes "we want the same outcomes as today, but the environment is changing, therefore our precious outcomes are at risk."


You’re right that AI is fast and often more efficient than entry-level humans for certain tasks — but I’d argue that what you’re describing isn’t delegation, it’s just choosing to do the work yourself via a tool. Implementation costs are lower now, so you decide to do it on your own.

Delegation, properly defined, involves transferring not just the task but the judgment and ownership of its outcome. The perfect delegation is when you delegate to someone because you trust them to make decisions the way you would — or at least in a way you respect and understand.

You can’t fully delegate to AI — and frankly, you shouldn’t. AI requires prompting, interpretation, and post-processing. That’s still you doing the thinking. The implementation cost is low, sure, but the decision-making cost still sits with you. That’s not delegation; it’s assisted execution.

Humans, on the other hand, can be delegated to — truly. Because over time, they internalize your goals, adapt to your context, and become accountable in a way AI never can.

Many reasons why AI can't fill your shoes:

1. Shallow context – It lacks awareness of organizational norms, unspoken expectations, or domain-specific nuance that’s not in the prompt or is not explicit in the code base.

2. No skin in the game – AI doesn’t have a career, reputation, or consequences. A junior human, once trained and trusted, becomes not only faster but also independently responsible.

Juniors and interns can also use AI tools.


You said exactly what I came here to say.

Maybe some day AI will truly be able to think and reason in a way that can approximate a human, but we're still very far from that. And even when we do, the accountability problem means trusting AI is a huge risk.

It's true that there are white collar jobs that don't require actual thinking, and those are vulnerable, but that's just the latest progression of computerization/automation that's been happening steadily for the last 70 years already.

It's also true that AI will completely change the nature of software development, meaning that you won't be able to coast just on arcane syntax knowledge the way a lot of programmers have been able to so far. But the fundamental precision of logical thought and mapping it to a desirable human outcome will still be needed, the only change is how you arrive there. This actually benefits young people who are already becoming "AI native" and will be better equipped to leverage AI capabilities to the max.


So what happens when you retire and have no replacement because you didn't invest in entry level humans?

This feels like the ultimate pulling up the ladder after you type of move.


IMO, comparing entry-level people with AI is very short-sighted. I was smarter than every dumb dinosaur at my first job; I was so eager to learn, and proactive, and positive. I was probably very lucky too, but my point is I don't believe this whole thing that a junior is worse than AI; I'd rather say the contrary.


I don't get this because someone has to work with the AI to get the job done. Those are the entry-level roles! The manager who's swamped with work sure as hell isn't going to do it.


It's not that entry-level jobs / interns are irrelevant. It's more that entry-level has been redefined and it requires significant uplevelling in terms of skills necessary to do a job at that level. That's not necessarily a bad thing. As others have said here, I would be more willing to hand-off more complex tasks to interns / junior engineers because my expectation is they leverage AI to tackle it faster and learn in the process.


I thought the whole idea of automation though was to lower the skill requirement. Everyone compares AI to the industrial revolution and the shift from artisan work to factory work. If this analogy were to hold true, then what employers should actually be wanting is more junior devs, maybe even non-devs, hired at a much cheaper wage. A senior dev may be able to outperform a junior by a lot, but assuming the AI is good enough, four juniors or like 10 non-devs should be able to outperform a senior.

This obviously not being the case shows that we're not in an AI-driven fundamental paradigm shift, but rather run-of-the-mill cost cutting. Suppose a tech bubble pops and there are mass layoffs (like the dot-com bubble): obviously people will lose their jobs, and AI hype merchants will almost certainly push the narrative that these losses come from AI advancements, in an effort to retain funding.


We've been doing the exact opposite for some positions.

I've been interviewing marketing people for the last few months (I have a marketing background from long ago), and the senior people were either way too expensive for our bootstrapped start-up, or not of the caliber we want in the company.

At the same time, there are some amazing recent grads and even interns who can't get jobs.

We've been hiring the younger group, and contracting for a few days a week with the more experienced people.

Combine that with AI, and you've got a powerful combination. That's our theory anyway.

It's worked pretty well with our engineers. We are a team of 4 experienced engineers, though as CEO I don't really get to code anymore, and 1 exceptional intern. We've just hired our 2nd intern.


> Why hire an intern or a recent college-grad when they lack both the expertise and experience to do what an AI could probably do?

1. Because, generally, they don't.

2. Because an LLM is not a person, it's a chatbot.

3. "Hire an intern" is that US thing when people work without getting real wages, right?

Grrr :-(


Interns make $75k+ in tech in the US. It's definitely not unpaid. In fact my school would not give course credit for internships if they were unpaid.


Companies reducing young hires because of AI are doing it backward. Returns on AI will be accelerated by early-career staff because they are already eagerly using AI in daily life, and have the least attachment to how jobs are done now.

You’re probably not going to transform your company by issuing Claude licenses to comfortable middle-aged career professionals who are emotionally attached to their personal definition of competency.

Companies should be grabbing the kids who just used AI to cheat their way through senior year, because that sort of opportunistic short-cutting is exactly what companies want to do with AI in their business.


If the AI can write code to a level that doesn’t need an experienced person to check the output, you don’t need tech companies at all.


This has always been the case, though. A 50x productivity factor between expert and novice is small. Consider how long it would take you to conduct foot surgery vs. a foot surgeon: close to a decade of medical school plus medical experience, just for a couple hours of work.

There have never been that many businesses able to hire novices for this reason.


This is a big part of why a lot of developers' first 1-3 jobs are small mom & pop shops of varying levels of quality, almost none of which have "good" engineering cultures. Market rate for a new-grad dev might be X, and it's hard to find an entry-level job at X, but the mom & pop business that needs 0.7 FTE of a developer is willing to pay 0.8X, and even though the owner is batshit insane, it's not a bad deal for the 22- and 23-year-olds willing to do it.


Sure. I mean, perhaps LLMs will accelerate a return to a more medieval culture in tech where you "have to start at 12 to be any good." Personally, I think that's a good (enough) idea. By 22, I'd had at least a decade of experience; my first job, at 20, was as a contractor for a major national/multinational.

Programming is a craft, and just like any other, the best time to learn it is when it's free to learn.


I think for a surgeon as an example, quality may be a better metric than time. I'll bet I could conduct an attempted foot surgery way faster than a foot surgeon, but they're likely to conduct successful foot surgeries.


Sure, but no one has found a good metric for actually quantifying quality for surgeons. You can't look at just the rate of positive outcomes because often the best surgeons take on the worst cases that others won't even attempt. And we simply don't have enough reliable data to make proper metric adjustments based on individual patient attributes.


> Sure, the AI might require handholding and prompting too, but the AI is either cheaper or actually "smarter" than the young person.

The AI will definitely require handholding. And that hand-holder will be an intern or a recent college-grad.


Are you honestly trying to tell us that the code you receive from an AI requires none of your time to review and tweak, and is 100% correct every time, ready to deploy into your code base with no changes whatsoever? You, my friend, must be a steely-eyed missile man of prompting.


Consider that there are no humans in existence that fulfill your requirements, not to mention $20/mo ones


Why would I consider that, when there absolutely are humans who can do that? Your dollar value is just ridiculous. If you're a hot-shit dev who no longer needs junior devs, then if you spend 15 minutes refactoring the AI output, you're underwater on that $20/mo value.


>Not that I'm saying AI is a god-send, but new grads and entry-level roles are kind of screwed.

A company that I know of has an L3 hiring freeze, and some people are being downgraded from L4 to L3 or L5 to L4 as well. More work for less cost.


"intern" and "entry level" are proxies for complexity with these comparisons, not actual seniority. We'll keep hiring interns and entry level positions, they'll just do other things.


> Why hire an intern or a recent college-grad when they lack both the expertise and experience to do what an AI could probably do?

AI can barely provide the code for a simple linked list without dropping NULL pointer dereferences every other line...

Been interviewing new grads all week. I'd take a high performing new grad that can be mentored into the next generation of engineer any day.

If you don't want to do constant hand holding with a "meh" candidate...why would you want to do constant hand holding with AI?

> I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now.

Not sure what you are working on. I would never prioritize speed over quality - but I do work in a public safety context. I'm actually not even sure of the legality of using an AI for design work but we have a company policy that all design analysis must still be signed off on by a human engineer in full as if it were 100% their own.

I certainly won't be signing my name on a document full of AI slop. Now an analysis done by a real human engineer with the aid of AI - sure, I'd walk through the same verification process I'd walk through for a traditional analysis document before signing my name on the cover sheet. And that is something a jr. can bring to me to verify.


I think it’s the other way around.

If LLMs continue to become more powerful, hiring more juniors who can use them will be a no-brainer.


Yup, apart from a few companies at the cutting edge the most difficult problems to solve in a work environment are not technical.


This is basically what happened after 2008. The entry-level jobs college grads did disappeared and didn't really come back for many years, so we lost half a generation. Those who missed out are the ones who weren't able to buy a house or start a family and are now in their 40s, destined to be permanent renters who can never retire.

The same thing will happen to Gen Z because of AI.

In both cases, the net effect of this (and the desired outcome) is to suppress wages. Not only of entry-level jobs but of every job. The tech sector is going to spend the next decade clawing back the high cost of tech people from the last 15-20 years.

The hubris here is that we've had an unprecedented boom, such that many in the workforce have never experienced a recession; I'd call them "children of summer" (to borrow a George R.R. Martin-ism). People have fallen into the trap of the myth of meritocracy. Too many people think that those who are living paycheck to paycheck (or are outright unhoused) are somehow at fault, when spiralling housing costs, limited opportunities and stagnant real wages are pretty much responsible for everything.

All of this is a giant wealth transfer to the richest 0.01% who are already insanely wealthy. I'm convinced we're beyond the point where we can solve the problems of runaway capitalism with electoral politics. This only ends in tyranny of a permanent underclass or revolution.


This is a big issue in the short term but in the long term I actually think AI is going to be a huge democratization of work and company building.

I spend a lot of time encouraging people to not fight the tide and spend that time intentionally experimenting and seeing what you can do. LLMs are already useful and it's interesting to me that anybody is arguing it's just good for toy applications. This is a poisonous mindset and results in a potentially far worse outcome than over-hyping AI for an individual.

I am wondering if I should actually quit a >500K a year job based around LLM applications and try to build something on my own with it right now.

I am NOT someone that thinks I can just craft some fancy prompt and let an LLM agent build me a company, but I think it's a very powerful tool when used with great intention.

The new grads and entry level people are scrappy. That's why startups before LLMs liked to hire them. (besides being cheap, they are just passionate and willing to make a sacrifice to prove their worth)

The ones with a lot of creativity have an opportunity right now that many of us did not when we were in their shoes.

In my opinion, it's important to be technically potent in this era, but it's now even more important to be creative - and that's just what so many people lack.

Sitting in front of a chat prompt and coming up with an idea is hard for the majority of people that would rather be told what to do or what direction to take.

My message to the entry-level folks that are in this weird time period. It's tough, and we can all acknowledge that - but don't let cynicism shackle you. Before LLMs, your greatest asset was fresh eyes and the lack of cynicism brought upon by years of industry. Don't throw away that advantage just because the job market is tough. You, just like everybody else, have a very powerful tool and opportunity right in front of you.

The amount of people trying to convince you that it's just a sham and hype means that you have less competition to worry about. You're actually lucky there's a huge cohort of experienced people that have completely dismissed LLMs because they were too egotistical to spend meaningful time evaluating it and experimenting with it. LLM capabilities are still changing every 6 months-1 year. Anybody that has decided concretely that there is nothing to see here is misleading you.

Even in the current state of LLMs, if the critics don't see the value and how powerful they are, it's mostly a lack of imagination at play. I don't know how else to say it. If I'm already able to eliminate someone's role by using an LLM, then it's already powerful enough in its current state. You can argue that those roles were not meaningful or important, and I'd agree - but we as a society are spending trillions on those roles right now and would continue to do so if not for LLMs.


what does "huge democratization of work" even mean? what world do you people live in? the current global unemployment rate on my planet is around 5% so that seems pretty democratised already?


I've noticed that when people use the term "democratization" in business speak, it makes sense to replace it with "commodification" 99% of the time.


What I mean by that is that you have even more power to start your own company or use LLMs to reduce the friction of doing something yourself instead of hiring someone else to do it for you.

Just as the internet was a democratization of information, LLMs are a democratization of output.

That may be in terms of production or art. There is clearly a lower barrier to achieving both now compared to pre-LLM. If you can't see this then you don't just have your head stuck in the sand, you have it severed and blasted into another reality.

The reason you reacted that way is, again, a lack of imagination. To you, "work" means "employment" and a means to a paycheck. But work is more than that. It is the output that matters, and whether that output benefits you or your employer is up to you. You now have more leverage than ever for making it benefit you, because it costs so little time/money to ask an LLM to do it for you.

Pre-LLM, most for-hire work was only accessible to companies with a much bigger bank account than yours.

There are an ungodly number of white-collar workers maintaining spreadsheets and doing bullshit jobs that LLMs can do just fine. And that's not to say all of those jobs have completely useless output; it's just that the number of bodies it takes to produce that output is unreasonable.

We are just getting started getting rid of them. But the best part of it is that you can do all of those bullshit jobs with an LLM for whatever idea you have in your pocket.

For example, I don't need an army of junior engineers to write all my boilerplate for me. I might have a protege if I am looking to actually mentor someone and hire them for that reason, but I can easily also just use LLMs to make boilerplate and write unit tests for me at the same time. Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.

The junior engineer can also do this too, albeit in most cases less effectively.

That's democratization of work.

In your "5% unemployment" world you have many more gatekeepers and financial barriers.


Just curious what area you work in? Python, or some kind of web service / JavaScript? I'm sure the LLMs are reasonably good for that - or for updating .csv files (you mention spreadsheets).

I write code to drive hardware, in an unusual programming style. The company pays for Augment (which is now based on o4, which is supposed to be really good?!?). It's great when I type print_debug( - at which point it often guesses right as to which local variables or parameters I want to debug, but not always. And it can often get the loop iteration part correct if I need to, for example, loop through a vector. The couple of times I asked it to write a unit test? Sure, it got the basic function call / lambda setup correct, but the test itself was useless. And a bunch of times, it brings back code I was experimenting with 3 months ago and never kept / committed, just because I'm at the same spot in the same file.

I do believe that some people are having reasonable outcomes, but it's not "out of the box" - and it's faster for me to write the code I need to write than to try 25 different prompt variations.


A lot of python in a monorepo. Mono repos have an advantage right now because the LLM can pretty much look through the entire repo. But I'm also applying LLM to eliminate a lot of roles that are obsolete, not just using it to code.

Thanks for sharing your perspective with ACTUAL details unlike most people that have gotten bad results.

Sadly hardware programming is probably going to lag or never be figured out because there's just not enough info to train on. This might change in the future when/if reasoning models get better but there's no guarantee of that.

> which is now based on o4

based on o4 or is o4, those are two different things. augment says this: https://support.augmentcode.com/articles/5949245054-what-mod...

  Augment uses many models, including ones that we train ourselves. Each interaction you have with Augment will touch multiple models. Our perspective is that the choice of models is an implementation detail, and the user does not need to stay current with the latest developments in the world of AI models to fully take advantage of our platform.
Which IMO is... a cop-out, a terrible take, and just... slimy. I would not trust a company like this with my money. For all you know they are running your prompts against a shitty open-source model running on a 3090 in their closet. The lack of transparency here is concerning.

You might be getting bad results for a few reasons:

  - your prompts are not specific enough
  - your context is poisoned. how strategically are you providing context to the prompt? a good trick is to give the llm an existing file as an example to how you want it to produce the output and tell it "Do X in the style of Y.file". Don't forget with the latest models and huge context windows you could very well provide entire subdirectories into context (although I would recommend being pretty targeted still)
  - the model/tool you're using sucks
  - you work in a problem domain that LLMs are genuinely bad at
Note: your company is paying a subscription to a service that isn't allowing you to bring your own keys. they have an incentive to optimize and make sure you're not costing them a lot of money. This could lead to worse results.
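The "Do X in the style of Y.file" trick from the list above is easy to make concrete. A minimal sketch in Python that just assembles the chat messages before you hand them to whatever client you use (the function name and system-prompt wording are my own illustration, not any particular tool's API):

```python
from pathlib import Path

def build_style_prompt(task: str, example_path: str) -> list:
    """Build a chat message list that pins the model to an existing
    file's style. example_path is whichever file in your repo best
    represents the output you want."""
    example = Path(example_path).read_text()
    return [
        {"role": "system",
         "content": "You are a code generator. Match the style of the "
                    "example file exactly: naming, structure, comments."},
        {"role": "user",
         "content": f"Do the following in the style of {example_path}:\n"
                    f"{task}\n\n--- example file ---\n{example}"},
    ]
```

The point is that the example file rides along in every request, so the model imitates something real in your repo instead of its training-set average.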

see here for Cline team's perspective on this topic: https://www.reddit.com/r/ChatGPTCoding/comments/1kymhkt/clin...

I suggest this as the bare minimum for the HN community when discussing their bad results with LLMs and coding:

  - what is your problem domain
  - show us your favorite prompt
  - what model and tools are you using?
  - are you using it as a chat or an agent? 
  - are you bringing your own keys or using a service?
  - what did you supply in context when you got the bad result? 
  - how did you supply context? copy paste? file locations? attachments?
  - what prompt did you use when you got the bad result?
I'm genuinely surprised when someone complaining about LLM results provides even 2 of those things in their comment.

Most of the cynics would not provide even half of this because it'd be embarrassing and reveal that they have no idea what they are talking about.


But how is AI supposed to replace anyone when you have either to get lucky or to correctly set up all these things you write about first? Who will do all that and who will pay for it?


So your critique of AI is that it can't read your mind and figure out what to do?

> But how is AI supposed to replace anyone when you have either to get lucky or to correctly set up all these things you write about first? Who will do all that and who will pay for it?

I mean... I'm doing it and getting paid for it, so...


Yes, because AGI is advertised (or reviled) as such: that you plug it in and it figures everything else out itself. No need for training and management like for humans.

In other words, did the AI actually replace you in this case? Do you expect it to? Because people clearly expect it, then we have such discussions as this.


You are incredibly foolish to get hung up on marketing promises while ignoring LLM capabilities that are a reality and useful right now.

good luck with that


Tell that to all these bloodbathers. I am trying it out myself and am in touch with reality.


You're trying it out with literally the expectation that it can read your mind and do what you want with no effort involved on your part.

So basically you're not trying it out. Please just put it down, you have nothing interesting to say here


Maybe. But are you aware that no one, at least in management, wants to hear "you must make the effort"?


> What I mean by that is that you have even more power to start your own company or use LLMs to reduce the friction of doing something yourself instead of hiring someone else to do it for you.

> Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.

this sounds like the death of employment and the start of plutocracy

not what I would call "democratisation"


> plutocracy

Well, I've said enough about cynicism here so not much else I can offer you. Good luck with that! Didn't realize everybody loved being an employee so much


not everyone is capable of starting a business

so, employee or destitute? tough choice


I spent a lot of time arguing the barrier to entry for starting one is lower than ever. But if your only options are employee or being destitute, I will again point right to -> cynicism.



