Especially when they aren't even sure whether they will commit to offering this long term? Who would be insane enough to build a product on top of something that may not be there tomorrow?
Those products require extensive work, such as fine-tuning a model on proprietary data. Who is going to invest time and money into something like that when OpenAI says right out of the gate that they may not support this model for very long?
Basically OpenAI is telegraphing that this is yet another prototype that escaped a lab, not something that is actually ready for use and deployment.
But it focuses too much on the big companies. Many indie hackers have figured out how to make a profit with AI:
1. No free tier. Just provide a good landing page.
2. Ship fast. Ship iteratively. Employ no one besides yourself.
3. Profit.
The old Silicon Valley idea that you need to raise a bunch of money, hire a bunch of devs, and scale massively to satisfy investors is dying rapidly for software. You can write the code and make millions as a single-person company, especially in the age of Cursor.
Ah, weird. Here in Europe, if you apply for a job in another city or country, they won't fly you out there. You can come on your own dime, and usually you wouldn't even tell them you don't live there yet (after all, if you don't even live there, why would they bother with you? It's only extra hassle for them when you start). C-suite roles are probably an exception to this, but they're an exception to pretty much everything. Roles requiring particular foreign languages (e.g. support) too, but that's also an edge case.
Moving personnel between countries when they are already working for the company does happen. They did it for me. But at that point they already know what they have.
> Why are you on a thread about Google-style interviews?
For the same reason you wrote "Google-style". Because this thread is specifically about those interviews happening not at Google.
Oh, maybe you misunderstood their question. When they suggested Google wasn't relevant, they meant the company culture at Google itself because that's what you were talking about.
>Interview coding questions aren't like the day-to-day job, because of the nature of an interview.
You have missed his point. If the interview questions are such that an AI can solve them, then by definition the wrong questions are being asked. Unless the company is trying to hire a robot, of course.
One option is missing from the list of non-solutions the author presents: ditch the idiotic whiteboard/"coding exercise" interview style. Voilà, the AI (non)problem solved!
This sort of comp-sci-style exam with quizzes and whatnot may help somewhat when hiring a junior with zero experience, fresh out of school.
But why are people with 20+ years of easily verifiable experience (picking up a phone and asking for references is still a thing!) being asked to invert trees and implement stuff like quicksort, or some contrived BS assignment the interviewer uses to boost their own ego, with zero relevance to the day-to-day job they will be doing?
Why are we still wasting time with this? Why is the default assumption always that the applicants are all crooked impostors lying on their resumes?
99% of jobs come with a probationary period anyway, during which the person can be fired on the spot without justification and with no strings attached. That should be more than enough time to see whether the person knows their stuff, after they have passed one or two rounds of oral interviews.
It is good enough for literally every other job - except for software engineering. What makes us the special snowflakes that have to put up with this crap?
But if you are the only technical person around, who is going to show you what a good or bad practice *in your specific field* is? That you won't find on Stack Overflow or by asking ChatGPT.
Being able to talk to an experienced mentor who knows the field you are working in is invaluable. Unlike learning some framework or design patterns or whatnot, this is information you won't find anywhere else.
You'd be surprised at how useful Stack Overflow and ChatGPT can be at helping to illuminate knowledge gaps.
I've found that one of the harder aspects of being unguided is figuring out the unknown unknowns.
You might stumble into a solution of sorts that mirrors a best practice but not know there's a "name" for that solution -- until you see it spelled out after googling around. That discovery can lead you down a rabbit hole where you gain fuller context.
Sure, having more experienced people around can help expedite that process in some cases, but then again you're limited by what that person has experienced. There's always some level you reach where you need to be curious enough in your explorations to seek out the next layer of knowledge in a self-directed manner, and the tools today are immensely better at supporting that process than 10-15 years ago.
I think the OP you are replying to is pointing at "you don't know what you don't know". SO and ChatGPT can be useful if you already suspect that what you are doing is fishy and ask for directions.
Everyone hates hearing this one:
Documentation, documentation, documentation.
Programming is a social task. Therefore, everything else related to software development best practices branches off from that.
What percent of developers do you think are actively using fuzzing? I would be shocked if it were more than 1%. Please do not read this as saying I think fuzzing is unimportant! It is very important for system-level software.
I often include valgrind tests before beta releases, as they usually point out suspect areas needing inspection.
Fuzzing is only really useful for a very narrow range of analysis scenarios. If people understand threading properly, code should be able to take getting hammered, exit gracefully, and get cleanly re-instantiated.
Also, banning hosts/accounts with an error-rate quota system is more common these days. =3
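For the fuzzing point, here is a minimal sketch of what a harness can look like, assuming clang's libFuzzer (built with -fsanitize=fuzzer,address); parse_record() is a hypothetical stand-in for whatever system-level code is under test:

    // Minimal libFuzzer-style harness (assumes clang with -fsanitize=fuzzer,address).
    // parse_record() is hypothetical, standing in for the real code under test.
    #include <stddef.h>
    #include <stdint.h>

    int parse_record(const uint8_t *buf, size_t len);  // hypothetical parser under test

    // libFuzzer calls this entry point repeatedly with mutated inputs; any crash,
    // hang, or sanitizer report becomes a reproducible failing test case.
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_record(data, size);
        return 0;
    }

The same harness can usually be reused for regression tests by replaying the saved crash inputs.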
many languages gracefully handle errors, making those errors invisible to automated detection -- our crashes are now silent correctness failures
this trend in programming culture reduces our ability to do automated error detection!
you make a good point, and a good case for crash early and crash often -- with a choice of erlang-style recovery, or fuzzing-style hard-nosed correctness enforcement
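a rough sketch of that contrast, with made-up names (parse_port_quiet / parse_port_strict are not from the thread), assuming plain C:

    #include <stdio.h>
    #include <stdlib.h>

    /* "graceful" handling: the error is swallowed, so automated detection sees
       nothing and the process keeps running in a silently wrong state */
    int parse_port_quiet(const char *s) {
        int port = atoi(s);          /* returns 0 on garbage input, reports no error */
        return port ? port : 8080;   /* silent fallback */
    }

    /* crash early: a bad input becomes a loud, catchable failure that fuzzers,
       CI, and monitoring all notice immediately */
    int parse_port_strict(const char *s) {
        char *end;
        long port = strtol(s, &end, 10);
        if (*end != '\0' || port <= 0 || port > 65535) {
            fprintf(stderr, "fatal: invalid port '%s'\n", s);
            abort();
        }
        return (int)port;
    }

    int main(void) {
        printf("%d\n", parse_port_quiet("not-a-number"));  /* prints 8080, hides the bug */
        printf("%d\n", parse_port_strict("8080"));          /* prints 8080, legitimately */
        return 0;
    }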
If you want to grow you will need to change jobs. Small companies are not a good fit for a fresh grad with little to no experience. In a small company you are by necessity a jack of all trades because there are so few of you.
If you aren't experienced already, it is a very hard position to be in, with huge responsibility and the potential to screw up due to inexperience - and to be promptly thrown under the bus when the proverbial shit hits the fan. Code reviews by ChatGPT can't compensate for the lack of more experienced colleagues.
The best way to grow as a fresh grad is to join a medium-sized, established business. There you are going to be part of a team where you will learn the ropes of your field (something you won't find in ChatGPT or at university), and at the same time the pressure won't be so high. And while you aren't going to get the perks of working at huge companies like Google or Facebook, you likely won't have to deal with the assorted corporate BS that comes with them either.
Only once you have a few years under your belt should you think about startups, small companies, etc.
> If you want to grow you will need to change jobs. Small companies are not a good fit for a fresh grad with little to no experience. In a small company you are by necessity a jack of all trades because there are so few of you. [...]
> The best way to grow as a fresh grad is to join a medium-sized, established business. There you are going to be part of a team where you will learn the ropes of your field (something you won't find in ChatGPT or at university), and at the same time the pressure won't be so high.
My experience differs: there is already a lot of corporate bullshit at medium-sized, established companies. Also, at small companies you know your colleagues better, so the culture is often more "humane". And exactly because at a small company you are a "jack of all trades", you by necessity learn a lot more; on the other hand, be aware that you won't specialize as deeply in any one topic. So if you want to specialize deeply in a topic, you are better off looking at a medium or large company (but be aware that there may be few or no jobs there that let you specialize in the topic you are into).
Good luck with that when you are not yet at the stage of your career where you have enough experience to judge what is good practice - and what is hype, BS, and liable to cause problems for your project.
Having a good mentor or two is pretty essential, because most of that knowledge isn't written down and retrievable by LLMs, and isn't about some framework or tool. It is the experience of people who have been there before, done it, got burned, and learned not to make the same mistake again.