From a PR perspective, I wonder why YouTube is on the one hand forcing unwanted AI features down people's throats[1], a move that many companies now make to drum up their perceived AI competence, but then, when asked, downplaying this use of AI by splitting words.
The combination of the two confuses me. If this was about shareholders, they'd hype up the use of AI, not downplay it. And if this was about users, they'd simply disable this shit.
[1] I mean, they're sacrificing Google Search of all things to push their AI crap. Also, as a bilingual YouTube user, AI-translated titles and descriptions make the site almost unusable now. In addition to some moronic PM forcing this feature onto users, they somehow also seem to have implemented the worst translation model in the industry. The results are often utterly incomprehensible, and there's no way to turn this off.
The customers are advertisers and marketers. The product is access to the userbase. This is how we arrive at where we are today, where major clients have made significant investments into AI and expect further return on that investment through proliferation of the technology, while the users could not care less or even balk at it. But we are also at a time where there is no longer any viable alternative to the monopolized corners of the internet such as YouTube, so the userbase has nowhere to flee even if they wanted to.
If you think that's even close to good, then it's you who lives in a reality distortion field. But so are all of the PC laptop manufacturers, reviewers and buyers. I don't get it.
I desperately want to move to a Linux laptop (I run it on every desktop PC I own, and I hate that I have to deal with a locked-down system). I've tried more laptops than is probably financially healthy for me. There's no price point that buys you even close to what an entry-level Macbook Air offers, not only in terms of battery life, but also weight, screen quality and keyboard.
> > 8-10h battery life
>
> If you think that's even close to good, then it's you who lives in a reality distortion field. But so are all of the PC laptop manufacturers, reviewers and buyers. I don't get it.
>
That's a laptop from 2016; IIRC at the time that was about the same as you got out of a top-of-the-line MacBook. But I'm pretty certain that 2016 MacBook would not have that battery life now, while I can easily swap out the battery and be back to 10h of battery life.
My work-provided M1 Pro (which I got brand new out of the box) will last all day if idle, but if I'm doing literally anything like even light browsing, the battery life is around 8-10h. More like 5 when I'm running my full local dev setup.
That's because the architecture isn't built for it to know what it knows. As someone put it, LLMs always hallucinate, but for in-distribution data they mostly hallucinate correctly.
I've been refactoring a ton of my Pandas code into Polars and using ChatGPT on the side as a documentation search and debugging tool.
It keeps hallucinating things about the docs, methods, and args for methods, even after changing my prompt to be explicit about doing it only with Polars.
I've noticed similar behavior with other libraries that aren't the major ones. I can't imagine how much it gets wrong with a less popular language.
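For a concrete sense of the kind of translation involved, here's a minimal sketch with made-up column names (filter/group_by/agg are the current Polars spellings; older releases used groupby):

    import pandas as pd
    import polars as pl

    # Pandas version (hypothetical toy data)
    pdf = pd.DataFrame({"group": ["a", "a", "b"], "value": [1, 2, 3]})
    out_pd = pdf[pdf["value"] > 1].groupby("group")["value"].mean()

    # Rough Polars equivalent
    pldf = pl.DataFrame({"group": ["a", "a", "b"], "value": [1, 2, 3]})
    out_pl = (
        pldf.filter(pl.col("value") > 1)
            .group_by("group")
            .agg(pl.col("value").mean())
    )

It's exactly this sort of small API mapping where the model happily invents methods or argument names that don't exist.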
I mean, doesn't Ethereum, probably the most prominent proof-of-stake coin, roll back the consensus whenever something happens that they don't like? It's easy to claim your algorithm is safe when you're not actually running it.
You: "Post seems confused. A 51% attack doesn't allow the attacker to sign transactions with someone else's key."
Maybe you misread, the post says this: "With its current dominance, Qubic can rewrite the blockchain, enable double-spending, and censor any transaction."
All of which are possible if someone has that level of control, and none of which involve signing with other people's keys.
(As some people seem confused about the impact of 51% attacks: Of course you can't double-spend in a single blockchain, as that is prevented. But the nature of these attacks is that there's no longer one true blockchain. You can create one fork of the blockchain where you send the money to someone, receive goods in return, and then afterwards switch to a longer fork of the blockchain where the money was never sent.)
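A minimal toy sketch of longest-chain fork choice (not real node code; the block and transaction contents are made up) shows why a node following "longest chain wins" will abandon the fork that contained your payment:

    # Toy model: a "chain" is a list of blocks, each block a list of tx strings.
    def fork_choice(chains):
        # Longest chain wins (ties broken arbitrarily in this sketch).
        return max(chains, key=len)

    # Fork A: attacker pays the merchant, merchant ships the goods.
    fork_a = [["coinbase"], ["attacker -> merchant: 10"]]

    # Fork B: mined privately with majority hashrate; the payment never happened.
    fork_b = [["coinbase"], ["attacker -> attacker: 10"], ["block 3"], ["block 4"]]

    canonical = fork_choice([fork_a, fork_b])
    assert canonical is fork_b  # the merchant's payment is now on an orphaned fork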
> You can create one fork of the blockchain where you send the money to someone, receive goods in return, and then afterwards switch to a longer fork of the blockchain where the money was never sent.
Why would you do this double spend attack with goods (and I assume you mean physical goods) and not for example a swap to ETH?
Doing this requires massive tangible infrastructure, which is subject to seizure to pay your new bad debts as you become subject to arrest in a lot of the places one may want to spend time in.
This doesn't seem like as much of an actual risk. A better way to make money would be to create a perception that the value of the coin is at risk before buying it cheap.
Actually devaluing it doesn't seem worthwhile financially.
I mean, the candidate can design and build innovative custom hardware, but do they remember an obscure impractical algorithm from a second semester CS course? No? Obviously not a fit for this company.
Companies of that size are common. It would in isolation even be profitable to serve them. The problem is if you introduce a middle tier that includes SSO, many enterprises will go for that instead of the expensive enterprise tier you want them to buy. Basically, you sacrifice medium companies as customers in order to chase after that sweet enterprise money.
Companies of that size are served by the "enterprise call a salesperson" offering. If you really don't need all of the other features you can probably negotiate a discount.
That makes sense, but I still think there are other features that can be gated behind enterprise to help make sure that doesn't happen while still providing SSO for smaller companies.
You can have user limits on the non-enterprise plans (Microsoft does this, for example, with Business Premium locked at 300 users or less), or gate other features behind enterprise: have MFA across the board, but lock conditional access, more advanced audit logs & reporting, RBAC, data residency, custom security policies, API limits, etc. behind enterprise.
There are numerous other features that are non-negotiable for enterprises to help funnel them into the enterprise plan, while still being able to service medium companies with SSO.
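As a sketch of what that gating might look like in practice (plan names, limits, and feature flags here are entirely hypothetical, not any particular vendor's pricing):

    # Hypothetical plan matrix: SSO sits below enterprise, while the features
    # enterprises can't do without stay gated at the top tier.
    PLANS = {
        "team":       {"max_users": 50,   "sso": False, "conditional_access": False,
                       "advanced_audit": False, "rbac": False},
        "business":   {"max_users": 300,  "sso": True,  "conditional_access": False,
                       "advanced_audit": False, "rbac": False},
        "enterprise": {"max_users": None, "sso": True,  "conditional_access": True,
                       "advanced_audit": True,  "rbac": True},
    }

    def has_feature(plan, feature):
        return bool(PLANS[plan].get(feature, False))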
We as an industry need to seriously tackle the social and market dynamics that lead to this situation. When and why has "stable" become synonymous with "unmaintained"? Why is it that practically every attempt to build a stable abstraction layer has turned out to be significantly less stable than the layer it abstracts over?
So one effect I've seen over the last decade of working: if it never needs to change, and no one is adding features, then no one works on it. If no one works on it, and people quit / change teams / etc, eventually the team tasked with maintaining it doesn't know how it works. At which point they may not be suited to maintaining it anymore.
This effect gets accelerated when teams or individuals make their code more magical or even just more different than other code at the company, which makes it harder for new maintainers to step in. Add to this that not all code has all the test coverage and monitoring it should... It shouldn't be too surprising there's always some incentives to kill, change, or otherwise stop supporting what we shipped 5 years ago.
That's probably true, but you're describing incentives and social dynamics, not a technological problem. I notice that every other kind of infrastructure in my life that I depend upon is maintained by qualified teams, sometimes for decades, who aren't incentivized to rebuild the thing every six months.
If you're asking why software has more frequent rebuild cycles than, say, buildings, or roads, or plumbing, it's because it's way cheaper and easier, can be distributed at scale for ~free (compared to road designs which necessarily have to be backward compatible since you can't very well replace every intersection in a city simultaneously), and for all the computerification of the modern world, is largely less essential and less painful to get wrong than your average bridge or bus.
It's like the difference between building a bird house, dog house, people house, mansion, large building and a skyscraper... there are different levels of planning, preparation and logistics involved. A lot of software can (or at least should) be done at the bird or dog house level... pretty easy to replace.
For roads, bridges, wastewater, sewer, electricity - it's because these things are public utilities and ultimately there is accountability - from local government at least - that some of these things need to happen, and money is set aside specifically for it. An engineer can inspect a bridge and find cracks. They can inspect a water pipe and find rust or leaks.
It's much harder to see software showing lines of wear and tear, because most software problems are hard to observe and buried in the realities of Turing completeness making it hard to actually guarantee there aren't bugs; software is often easy to dismiss as good enough until it starts completely failing us.
A bridge is done when all the parts are in place and connected together. Much software is never really done because
- it's rare to be paid to keep refactoring until we have nothing more to refactor
- the software can only be as done as the requirements are detailed; if new details come to light or worse, change entirely, the software that previously looked done may not be. That would be insane for a bridge or road.
Maybe it's a documentation problem? It seems to me that for a piece of software to be considered good, one has to be able to grok how it works internally without having written it.
Not per Naur: in his seminal "Programming as Theory Building" he claims that documentation doesn't replace the mental model that the original authors had developed: https://pages.cs.wisc.edu/~remzi/Naur.pdf
> When and why has "stable" become synonymous with "unmaintained"?
Because the software ecosystem is not static.
People want your software to have more features, be more secure, and be more performant. So you and every one of your competitors are on an update treadmill. If you ARE standing (aka being stable) on the treadmill, you'll fall off.
If you are on the treadmill you are accumulating code, features, and bug fixes, until you either get too big to maintain or a faster competitor emerges, and people flock to it.
Solving this is just as easy as proving all your code is exactly as people wanted AND making sure people don't want anything more ever.
> People want your software to have more features, have fewer bugs, and not be exploited. So you and every one of your competitors are on an update treadmill. If you ARE stable, you'll probably fall off. If you are on the treadmill you are accumulating code, features, bug fixes, until you either get off or a faster competitor emerges.
Runners on treadmills don't actually move forward.
Kinda the point of the treadmill metaphor. If you are standing on a treadmill, you will fall right off. It requires great effort just to stay in one spot.
Honestly, I think it is. All software exists in a kind of evolutionary race to attract consumers/developers.
If I assume your point is true, wouldn't everyone then just switch to Paint for all 2D picture editing? I mean it's the fastest - opens instantly on my machine vs 3-4sec for Krita/Gimp/Photoshop. But it's also bare bones. So why isn't Paint universally used by everyone?
My assumption: what people want is to not waste their time. If a program is 3 seconds slower to start/edit, but saves you 45 minutes of fucking around in a less featureful editor, it's obvious which one is more expedient in the long run.
I use paint, because it starts fast so I don't waste my time. When I want a complex edit, I use paint.net - because it does everything yet starts a million times faster than Krita/Gimp/Photoshop. In fact paint.net 3 already had all the features back in 2008; after that it was vapid churn, and it's literally impossible to notice the difference.
That's kind of my point. Even you eschew Paint's simplicity when you need a more complex transformation. Nothing you do in Paint.Net is impossible in Paint, given enough calculation and preparation. So performance isn't the deciding factor. It's the speed of achieving thing X (of which startup/lag is a tiny cost).
Similarly, in Paint.Net you could emulate many Photoshop features (e.g., non-destructive editing), but doing so would be tedious (duplicate the layer, hide the original, adjust the copy, then edit until you get it where you want it).
Performance is a deciding factor, it's the reason I use paint and don't use Krita/Gimp/Photoshop. I use Ctrl+Z for non-destructive editing in paint. Also paint has a more reliable and predictable UI, you never know what those overly smart editors will do. Will they add too much antialiasing? Randomly switch to subpixel precision? Insert transparent background (antialiased)?
There are two things in play: first, just because it's a deciding factor for you doesn't mean it's a deciding factor for everyone else. Second, even for you, Paint isn't enough. You also got Paint.net. You can reproduce almost any effect from PS or Gimp or Krita in Paint/Imagemagick. Why not just use those two for everything?
It's the same thing as using an IDE vs notepad(++). Anything done in the IDE behemoth can be done in notepad. Albeit at a significant time penalty, and with way more CLI jousting.
> I use Ctrl+Z for non-destructive editing in paint
That's not really non-destructive editing - that's undo. Non-destructive editing means you can edit, change things, save, close the program. Reopen the file after X days and change the effect or thing you applied.
> People want your software to have more features, be more secure, and be more performant
I think it's worth noting that one reason hardware rots is because software seems to become slower and slower while still doing the same thing it did 15 years ago.
The core issue, in my humble opinion, is that it's not doing the same thing. But from a thousand miles away it looks like that, because everyone uses 20% of functionality, but everyone in aggregate uses 100% of functionality.
> I think it's worth noting that one reason hardware rots is because software seems to become slower and slower while still doing the same thing it did 15 years ago.
I'd like to see some actual proof of (/thoughts on) this. Like if you didn't patch any security issues/bugs, or add any features, or fixed any dependencies, how is the code getting slower?
Like I understand some people care about performance, but I've seen many non-performant solutions (from Unity, to Photoshop, to Rider, being preferred over a custom C# engine, Paint, Notepad++) being used nearly universally, which leads me to believe there is more than one value in play.
Well, Slack may have a few more features over IRC, but none of them should cause my laptop fans to start spinning. Many of the slowdowns in the modern web are because of trackers. Software may be more efficient, but all of that is negated by software trying to do more. And so much of that "more" isn't resulting in more features for the user.
And yes convenience, social adoption and flashy modern appearance are major factors in the decision making. Whether it contributes to obsolescence or is fast isn't a factor at all.
> Well, Slack may have a few more features over IRC, but none of them should cause my laptop fans to start spinning. Many of the slowdowns in the modern web are because of trackers. Software may be more efficient, but all of that is negated by software trying to do more. And so much of that "more" isn't resulting in more features for the user.
It's literally Electron based. Trackers are a rounding error in the ocean that is Chrome + Node.
Got one idling in memory on my Intel Mac rn. Let's see: 341 MB for Slack renderer, 198 MB for Slack Helper GPU, 72 MB for Slack itself + 50 MB for Slack Helper stuff. Literally eating 661 MB of memory doing practically nothing. Which means a huge web tracker (circa 10 MB) is 1.5% of that.
Electron itself is the culprit, more than any tracker. And the reason Electron is used is this: HTML/CSS/JS is the only cross-platform GUI that looks similar on all platforms, has enough docs/tutorials, and has enough frontend developers available.
post a link to a stable repository on github on this site. watch as several people pipe up and say "last commit 2020; must be dead"
source code is ascii text, and ascii text is not alive. it doesn't need to breathe, modulo dependencies, yes. but this attitude that "not active, must be dead and therefore: avoid" leads people to believe the opposite: that unproven and buggy new stuff is always better.
silly counter-example: vim from 10 years ago is just as usable for the 90% case as the latest one
I don't assume it's dead based on the last commit. I will look a little further and see where it stands. If there hasn't been a commit for a few years AND there are multiple pull requests that have been sitting unmerged for years and dozens/hundreds of really old issues... then, I'll assume it's dead, and often open another issue asking if it's dead, if there isn't one already.
At any given moment, there are 6 LTS versions of Ubuntu. Are you proposing that there should be more than that? The tradeoffs are pretty obvious. If you're maintaining a platform, and you want to innovate, you either have to deprecate old functionality or indefinitely increase your scope of responsibilities. On the other hand, if you refuse to innovate, you slide into obscurity as everyone eventually migrates to more innovative platforms. I don't want to change anything about these market and social dynamics. I like innovation.
They aren't still supporting 14.04. How do you get 6? There's one every other year, and they retire one shortly after a new one comes out. They're also pretty quick to shutter non-lts release support each lts generation.
I don't think those explanations are mutually exclusive.
Yes, there's a large cohort of "senior" software engineers who can't actually code. They bullshit their way into jobs until they're fired and then apply for the next one. These are the people you want to filter out with live coding.
But also, someone can fail a live coding interview for reasons other than belonging to that group.
I think there's a lot of developers who can ace a live-coding interview but who lack the understanding of engineering systems at scale so they'll make your whole codebase worse over time by introducing tech debt, anti-patterns, and inconsistencies. These are the people you really want to avoid, but very few interview processes are set up to filter them out.
There's an assumption that the company's existing senior architects and developers will stop a new person from making the code worse, but devs at every company think their codebase is terrible, so it obviously isn't working.
I've seen lots of devs who think their codebase is the only correct way to do things. Lots of overconfident people out there. Inconsistencies are fine as long as there's file level consistency. All that really matters is if you can relatively quickly understand what you are working with. What you really want to avoid is having functions doing 20 different things from 5 different contexts.
Work experience discussions are the best I’ve come up with.
What I’m suggesting is hire experienced people based on that and resume verification and behavior interviews like nearly every other job on the planet.
And if someone lies about their ability to actually code, fire them quickly.
I agree. Live coding always has a much smaller scope than real software, and after a few interviews it is easy to learn to read the room, even for the worst developers.
I think we can leave companies who don't care about quality out of the discussion, but for those who do, the time to detect those developers is in a probational period, which is not something that most companies really use in their favor.
The problem is this requires good management that actively pays attention to the developer's work. Which is often not in place, even in companies that want to prioritize quality :/
You could filter them out much more effectively by letting them sit in a room by themselves to write the code; that way you aren't missing out on good candidates who can't function under abnormal stress (which has nothing in common with the actual job).
I've had take home problems for job interviews that were given a few days before and during the actual interview I only had to explain my code. But I wouldn't be sure this still works as a useful candidate filter today, given how much coding agents have advanced. In fact, if you were a sr dev and had a bunch of guys to bounce this problem back and forth, it wouldn't even have filtered out the bad ones back in the old days. There is very little that is more telling than seeing a person work out a problem live, even if that sucks for smart people who can't handle stress.
I have found over the years that I learn more by asking easier questions and interacting with candidates as they think through problems out loud. Little things that test the critical ability to craft a Boolean expression to accurately characterize a situation can be explored in a brief interview where you have some assurance that they're working on their own, and not just getting an answer online or from a smart roommate. (Sample: given two intervals [a,b] and [c,d], write an expression that determines whether they intersect.). Candidates that have lots of trouble programming "in the small" are going to have trouble programming in the large as well.
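For the sample question, a minimal Python version of the expected answer (assuming closed intervals with a <= b and c <= d):

    def intervals_intersect(a, b, c, d):
        # [a, b] and [c, d] overlap iff each one starts before the other ends.
        return a <= d and c <= b

    assert intervals_intersect(0, 5, 3, 9)       # partial overlap
    assert not intervals_intersect(0, 2, 3, 9)   # disjoint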
What I find effective (on both sides of the interview table) is not only asking easier questions, but actively encouraging candidates to first talk out and work through the most fundamental, basic aspects of the problem and its simplest solutions before moving into more advanced stuff.
I think a lot of experienced people's brains lock up in an interview when asked simple questions because they assume that they're expected to skip straight past the "obvious" solution and go straight for some crazy algorithm or explain the fine points of memory/time tradeoff. This often doesn't present as intended - it looks to the interviewer like you don't even know the basics and are grasping at straws trying to sound smart.
If people can explain their decisions, I'd say it's fair game. It would be nice to know up front if someone used AI of course.
The other implication here is that if a candidate can use AI for a take home and ace the interview, then maybe the company doesn't have as tough of problems as it thought and it could fill this seat quickly. Not a bad problem to have.
I don’t use those LLM tools, but if someone can pass the test with LLM tools, then they can pass the test unless there’s something special about the environment that precludes the LLM tools they use.
> But I wouldn't be sure this still works as a useful candidate filter today, given how much coding agents have advanced.
Prior to ChatGPT coming out, I gave a take home test to sort roman numerals.
What before was "here's a take home that you can do in an hour, and I can check whether you wrote the code reasonably" is now 30 seconds in ChatGPT, with comments embedded in it that make an "explain how this function works" question less useful. https://chatgpt.com/share/688cd543-e9f0-8011-bb79-bd7ac73b3f...
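For reference, the whole take-home boils down to something like this minimal sketch (the exact task spec is my assumption), which is exactly why it's now trivial to generate:

    # Sort Roman numerals by their integer value (minimal sketch).
    ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

    def roman_to_int(s):
        total = 0
        for ch, nxt in zip(s, s[1:] + " "):
            value = ROMAN[ch]
            # Subtractive notation: IV = 4, IX = 9, XL = 40, ...
            total += -value if ROMAN.get(nxt, 0) > value else value
        return total

    print(sorted(["XIV", "IX", "MMXXIV", "XL"], key=roman_to_int))
    # ['IX', 'XIV', 'XL', 'MMXXIV']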
When next there's an interview for a programmer, strongly suggest that it be in person with a whiteboard instead to mitigate the risks of North Korean IT workers and developers who are reliant on an LLM for each task.
My solution for this was to propose a problem obscure enough that no LLM tool really knows how to deal with it. This involved some old Fortran code and an obscure Fortran file format.
You can have the AI explain it to you. There's also a middle ground between vibe coding and "I can code some things but never could have coded this without an AI".
Doesn't even have to be AI. Give me some random file from the Linux kernel and I could probably explain everything to you if you gave me a few hours. But that doesn't mean I would ever be able to write that code.
I don’t disagree, but in those interviews the explanation is also a bit of a q&a, so an effective interviewer can detect candidates who only memorized things. Someone who can ace a q&a about Linux code is already better than average.
Are you asking how they would get that info they didn't have / couldn't come up with? Because you can literally have a chatbot explain every line to you and why it is there and even ask it about things you don't know like a teacher. And for simple problems this will probably work better than with actual humans.
I assume questions to explain the code would be extremely specific about why you did something on a specific line, or why you chose to design this part that way instead of another, to detect plagiarism and vibe coding, not a request for a prepared monologue.
Isn't that a good thing? The fact that the candidate dumped out code that they didn't write is often called "cheating". The fact that candidates can't explain it (because they didn't write it) means it's a good test of something most interviewers find unacceptable.
Leaving aside that many companies have pulled back from remote to at least some degree, I'd always push for an in-person day for a variety of reasons. In general, the cost is nothing for a late-stage/end-stage confirmation. And, honestly, a candidate that just doesn't want to do that is a red flag.
While I don’t disagree with you I find it to be a slippery slope to some extent.
Would you screen out Linus Torvalds because he hypothetically doesn’t want to come in to a physical office for an interview?
Hiring managers should think long and hard in a data-driven way about whether the office presence is so necessary that you are willing to miss out on the best candidates who have the luxury of being picky.
Is it true scientifically that an in-person interview day results in better candidate quality or is that just a vibe?
I think eliminating top talent who refuse to step foot in an office and are rare enough to be able to maintain that demand is a lot of quality people being left out of your talent pool. I thought during the pandemic we already proved by numerous studies that in-office workers are less productive.
My company philosophy would be more like, put the burden of identifying quality talent on the employer rather than the employee. Put the candidate through the minimum effort required to screen them and identify standout talent. Then when you find that standout talent you roll out the red carpet and focus on convincing them to work at your company.
You can come up with outlier examples of course--though I'm not sure how relevant they are unless you're looking at hiring a "name" for some reason. But I'd still default to an in-person visit of some sort. I've never seen any data but then in-person was just assumed in most cases until a few years ago.
Read your last two sentences over again. That’s exactly my point: it’s all an old habit that isn’t based on outcomes.
I think it’s a human social instincts thing and not a quantitative thing. There might be a better way but we default to the social ritual because ape brain is most comfortable with it.
> In general, the cost is nothing for a late-stage/end-stage confirmation.
One in-person day costs a nearby candidate about 3 days, and a more remote candidate anything from a week to a couple of months depending mostly on where you are. And yeah, it also costs some money that maybe you will reimburse.
It doesn't cost you much, that's for sure. But if it's for a full-remote position, it's absolutely not a "the cost is nothing" situation and the candidate refusing it for some random company in a random stage of the interview is absolutely reasonable.
I don't know about that. Long ago I interviewed with someone that wanted some trivial C++ thing written on their laptop. I hadn't seen a Windows dev machine before and had no Internet access. I think I'd worked out the compiler was called visual studio and how to compile hello world by the time limit. Not sure that told either of us much.
I share your point of view, but live coding these days goes beyond just testing programming skills. You must know by heart the most common algorithms out there and design solutions that might involve two or three of them to solve a problem in 30 minutes.
Sometimes you spend the whole time trying to figure out how to solve the puzzle and don't even have time to show that you can - actually - code.
> You must know by heart the most common algorithms out there and design solutions that might involve two or three of them to solve a problem in 30 minutes.
You're not going to pass every interview. Some of them are truly outlandish, and require all of the above.
What you need is the technical knowledge to translate requirements into a loose pattern like "This looks like a search problem", then have the charisma (or more accurately, practice) to walk the interviewer through how each search algorithm you know could apply there. Then of course be able to actually write some code.
I've passed interviews where I had never heard of the datastructure they wanted me to solve it with; I just walked them through the tradeoffs of every data structure that I knew applied to it instead.
I’ve been doing this for 20 years. I’ve never worked with a single one of those people. I don’t think I’ve ever even interviewed one where I couldn’t have screened them out based on their resume and a 15 minute conversation.
I’ve worked with plenty of people who passed a whiteboard interview and then spent years actively reducing aggregate productivity.
Why do you need arbitrary (and very short) deadlines, and for someone to stand up at a whiteboard while simultaneously trying to solve a problem and "walk you through their thought process" to filter out people who can't write code on the job?
The short deadlines are because neither the company nor the candidate wants to spend a month on an extended interview. Solving a problem and walking through the thought process are because that's what "coding" is.
I don't know about you, but I've never had to live code a PR and explain to my reviewer what I was thinking while writing the code. By "deadlines" I'm referring to the length of the interview. Take home problems theoretically solve both these issues, but they need to be properly scoped and specified to be valid assessments.
I sit down with juniors and sketch out designs or code for them while talking through the thought process at least once a week, and even when solo coding, I expect everyone produces work that explains itself. For particularly complex/nuanced changes, people do hop on a call to talk through it.
Like I said the deadlines work for both sides. If a company wants to give homework instead of having their own senior engineers spend time talking to me, that tells me what I need to know about how they value my time.
> I sit down with juniors and sketch out designs or code for them while talking through the thought process at least once a week, and even when solo coding, I expect everyone produces work that explains itself. For particularly complex/nuanced changes, people do hop on a call to talk through it.
That's not equivalent to what I said, nor is it live coding.
Again, those deadlines are artificially short compared to real world scenarios, and completely arbitrary. They are so short, in fact, that they render the interview an invalid test of real working ability. A work sample has been proven time and again to be the most valid measure of whether a candidate can actually perform the job, but the conditions under which a live coded "work sample" is performed in an interview render it invalid.
It's not artificial: the company has a day of my time. I have a day of their time. We both want me to meet several people on the team to see if it's a good fit. Because of the constraint, we keep it to relatively simple discussions around toy problems that can be solved in an hour.
Yes, it is artificial. Everything about a live coding interview is artificial. Code doesn't get written in 1 hour blocks while someone's watching over one's shoulder, all the while asking questions to interrupt one's thought process, in any company I've ever worked for.
Like I said, this is literally a thing I do all the time. I have standing 1 hour blocks for each of my team members every week and it's not uncommon for us to build out the skeleton of a problem solution together. I literally did what you said on Wednesday for someone for a gitlab change because I don't expect they know how secret injection works, but I want them to know. And absolutely I've encouraged them to ask questions, and I ask them questions to check their understanding.
In most of the western world, firing employees is a high risk, high cost task. Ideally companies would hire quickly and fire poor matches just as quickly once they've been evaluated in the real world environment of the company. For this to work, on the employee side there needs to be knowledge that this is the company's process, financial depth to deal with the job not being stable, and a savviness to not relocate into a job that's risky. On the employer side, there needs to be a legal and social environment that doesn't punish removing non-productive employees.
The legal environment is what it is and unlikely to change. The social environment is fickle and trend driven. Workers can't always fully evaluate their odds of success or the entirety of risk of leaving a job that's valuable for the employee and employer for one that might end up as a poor match, even if both sides have been transparent and honest. It's a difficult matchmaking problem with lots of external factors imposed and short term thinking on all sides.
Ideally young workers would have an early career period that involves a small number of short lived jobs, followed up by a good match that lasts decades, providing value to both the employee and employer. Much like finding a spouse used to be a period of dating followed by making a choice and sticking with it so a life could be built together, employment ideally should result in both sides making the other better. Today however everyone seems focused on maximizing an assortment of short term gains in search for the best local timescale deal at the expense of the long term. It's interesting how the broken job market and broken family formation process in the western world mirror each other so much.
There was a lot of social pressure in the past for permanent marriages. That doesn't mean they were happy marriages. With the social changes in the west in the 1960s, divorce became more socially acceptable. Legal changes meant women had the ability to join the workforce and support themselves. People in unhappy marriages had options to seek happiness elsewhere. Those options didn't exist before.
For job retention, the problem is that changing jobs is often the only way to advance. I lost my best worker because the suits wouldn't give him a raise. He now makes more than I do at a different company. He liked his job with us, but he tripled his pay by leaving. My coworkers all tell the same story. I'm one of the lucky ones that managed to move up in the company, and that's only because I had the boss over a barrel.
1. Make sure you pick every good candidate, but some bad candidates will slip through as well.
2. Make sure you reject every bad candidate, but some good candidates will fail as well.
Candidates want process #1, but companies have no reason to push for it. The cost of accidentally hiring a bad employee is simply too high, way more than rejecting a good employee. The current system in place prioritizes #2. Yes they are rejecting great candidates, and they are aware of it.
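A toy expected-cost comparison (the numbers are completely made up, purely to illustrate the asymmetry) shows why companies lean toward #2:

    # Made-up costs and rates, only to illustrate the asymmetry.
    cost_bad_hire    = 150_000   # salary, lost time, morale damage, firing overhead
    cost_missed_good =  15_000   # extra sourcing/interviewing to find another good hire

    processes = {
        "#1 accept-leaning": {"bad_hire_rate": 0.10, "missed_good_rate": 0.02},
        "#2 reject-leaning": {"bad_hire_rate": 0.01, "missed_good_rate": 0.30},
    }

    for name, p in processes.items():
        expected = (p["bad_hire_rate"] * cost_bad_hire
                    + p["missed_good_rate"] * cost_missed_good)
        print(name, expected)   # #1: 15300.0, #2: 6000.0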
Also I don't think being artificially picky is a better filter than just going with some gut feeling after weeding out candidates with fake credentials.
Being picky gives the illusion of choosing when in practice you are bound by the process.
> Yes, there's a large cohort of "senior" software engineers who can't actually code. They bullshit their way into jobs until they're fired and then apply for the next one. These are the people you want to filter out with live coding.
Genuinely, are there any of these at any significant scale in a place like Silicon Valley? I'm not sure I've ever met someone who couldn't code at any of the places I've worked.
Senior engineers are heavily evaluated for their ability to pump out code. If you're not coding, what the hell are you doing at a startup that needs features built to generate any revenue?
Raspberry Pi gets a lot of negative comments these days, with unfavorable comparisons to mini PCs at similar price points, which is certainly justified. But I don't know, it's not completely rational, I still love my Raspberry Pis. Especially a Pi 5 with an NVMe SSD is a beast in terms of performance. They use very little power, they are tiny, the programmable GPIO pins are awesome. There's still a sense of magic, which for hobby use, is more important than the raw numbers. I just don't get the same "sense of tinkering" when booting a PC.
> Raspberry Pi gets a lot of negative comments these days, with unfavorable comparisons to mini PCs at similar price points, which is certainly justified.
It entirely depends on the purpose you use them for. Mini-PCs are good for PC things, meaning raw power, storage, just running software. But they fall flat if you tinker with them. They usually don't have GPIO, nor a community built around hacking and tinkering with them (AFAIK).
But here is the thing: many people were using raspis for those software jobs, as a NAS, home server, media center, game station. They have no need for tinkering and GPIO. So this group of people is totally fine with a mini-PC, and maybe even should stay with them, giving the raspi room to focus on its original purpose again.
I’m a long time user of raspberry pis for various tinkering projects. I think the GPIO and camera interface are important, but also the size. The pi zero I would consider to be generally the most functional format of the pis.
Hardware has also evolved over the years. I had been using a pi to run pihole, but an incident one day that caused my SD card to burn up made me go looking around at other options.
There is now a whole stockpile of used “thin clients” which can be had with case, power supply and more RAM for less than the cost of a pi, with other niceties like an extra SODIMM slot and M.2 with a few more lanes than a pi.
These are also fanless systems that idle at a few watts and generally serve that purpose better in nearly every way. That said, the sticker price on one of those systems is not competitive, and only the somewhat recent turnover of supply from call centers and other places with low computational needs has really brought them into the market (and also driven the continued development of the Atom chips used in mini PCs).
> But I don't know, it's not completely rational, I still love my Raspberry Pis.
Feelings over facts, at least you acknowledge it.
The success of (and the issues with) the Raspberry Pi mainly derive from it being mistaken for a good home-server platform. It's not; it's awful for that use case. As an embedded systems platform (either for prototyping or to later target the compute modules for production usage), sure, it's great.
It's all fine as long as the (computing) needs are low and budget is not an issue.
The problem, imho, is that it's amazing right up to a point where it isn't. It's tiny, noiseless, sips power....while providing what you need it to. Until one day the service you set up is not available anymore, the Pi isn't responding on the network, and then you check and the microSD is corrupt and everything you've set up is gone. Hope you had a good backup because the only way to fix it is to set it up again from scratch.
> Hope you had a good backup because the only way to fix it is to set it up again from scratch.
You can get that from any homelab setup though. Personally, I long since went the route of regularly setting up my Pis from scratch using Ansible - that way I at least know that I didn't forget to commit any manual changes made.
Pi-specific, my recommendation is to have a serious power supply. For the old Pis with Micro USB, Meanwell makes good ones; pair that with a good wire gauge (18 AWG or thicker) and off you go. For new Pis with USB-C, an Anker power supply and a decent USB-C cable... that solves a lot of microSD corruption issues, because the power regulation to the card isn't that good and just passes through brownouts/undervoltage conditions.
And the second recommendation, use "industrial" microSD cards, preferably those that are SLC. Grab them from Mouser, yes they are a bit more expensive than "normal" microSD cards but will live so much longer.
I didn't say that. It's just a reply based on my personal experience - I've set up probably 10-12 Raspberry Pis around my home for various projects, and they all died due to SD corruption within a year. My Intel-based NAS worked fine for 8 years with no issues, then I finally replaced it with a newer one, which has now been running for 6 years. Obviously, anecdotes, the Intel server is a lot more expensive, yes yes yes. But like OP said, Pis are not a great choice for anything like a home server because they aren't very reliable (imho) - maybe that works for your use case, or maybe for most people's use cases. I'm personally steering away from them except for some hobby tinkering.
I would be curious to see if some qualcomm/snapdragon mini PCs could embrace the tinker-ability and power consumption approach and add some nice competition there