Hacker News | msbarnett's comments

If you’re a top performer you bail as soon as there are layoffs anyways. I certainly do. It’s rarely a sign that anything good is in your future, layoffs are often performed poorly, and the work environment post-layoffs is incredibly bleak and disheartening.

If you have options there’s no good reason to stay.


Where would you go in this environment? Most of the top-name companies have had layoffs themselves, or at the very least are at the hiring freeze stage of the layoff routine.


The "we're in a hiring freeze" press releases all have fairly broad asterisks. New people are being onboarded daily in all of the companies that announced layoffs.


That’s true, but those same companies still wound up doing layoffs later on. So if your criterion is not working anywhere that does layoffs, you’re still SOL.


I'd rather join a company post-layoff than pre-layoff.


Say you're at company A. Company A and B do layoffs of similar size. Why is switching to company B after the layoffs your preferred action?


Of course. But in a recession there can be multiple rounds of layoffs.


You’ll laugh but bear with me,

A Meta recruiter reached out to me just this week, so even parts of Facebook are hiring, and in my relevant skill set. Of course I don’t care about VR goggles, but: you just had large layoffs, why would I work for you? (Plus Zuck is ruining your core business, but that aside.)

Layoffs hurt inside and outside too.


Hire to fire is fairly common.

If a layoff is inevitable, it is common in many setups for managers to hire new people whom they can then let go instead of the top performers who've been rocking their projects for years.


Nah, this role has been open too long; it survived recruiting cuts and, it appears, even layoffs. I wonder if FB will get so desperate they even stop trying to downlevel.


If meta is hiring for VR, they are probably taking advantage of the recent Microsoft layoff anyways to pick up talent.


Do you mean the Microsoft VR layoffs literally announced only 1 or 2 days ago? That's giving BigCo a lot of credit in the agility department, skirting over the edge into conspiracy theory territory.


If they were still hiring for VR, why would the recruiters wait? It’s not like they are very busy.


This role's been open for a while, probably just recruiting getting re-organized after being cut to the bone.


There’s tons of very profitable companies hiring right now - the “top-name” companies doing layoffs were all “growth-first, profit-never” types. We’re not in an industry-wide downturn and there’s a hell of a lot more out there than FAANG or MANGA or whatever we’re calling it now.


Google, Microsoft, Facebook, and Amazon have made an absolute shitload of money. They are among the most profitable companies in the history of humanity. Definitely not "profit-never."


> the “top-name” companies doing layoffs were all “growth-first, profit-never” types

What companies specifically are you referring to? In the last few months we've seen layoffs from Microsoft, Google, Facebook, and Amazon. These rank among the most profitable tech companies in the world. https://companiesmarketcap.com/tech/most-profitable-tech-com...


In my case, I did the math on my savings and I’m taking 1-3 years working on side projects, by myself and with groups of collaborators.


I have that kind of cushion and I am considering taking a few months off working on side projects. But 1-3 years? I would need a very, very, very solid project to consider doing that!


Wow, good on you. I wish you all the best. I wish I had such a financial safety net to change jobs.


Reading a bunch of responses, I feel like I should qualify my initial comment slightly - I really meant it as being specific to these circumstances, in which the largest/most successful companies are doing layoffs.

I generally work at early-stage startups, and I 100% agree with you - at those, if layoffs start, barring some extraordinary circumstances, it's time to get the hell out because your options are about to be worth bupkis.

When it's FAANG, etc. I don't think that's true. Amazon and Microsoft can lay off swathes of people and still have plausible outcomes in which their stock prices are higher in a year or two (perhaps even more plausible than they were pre-layoff).


Not if you are at a top company. Also, it’s not like any company gets to cut 10% from what it offers new hires… it’s hard enough recruiting at market rates; it will be harder 10% below market rates.


Bail off to... where? Every single big tech company has done layoffs except Apple. Which, fair, you could go to Apple, but then you have to deal with Cupertino and inflated Bay Area housing.


For one thing, zips are far less likely to destroy data than SourceSafe, which was famous for doing just that.


> SVN had a great feature set, and a great way to handle merges, and sane defaults.

Are you kidding? Subversion had a notoriously awful way of handling merges, which was a huge driver of people onto Git as soon as it appeared. You truly had to have been there to believe it, but in all but the simplest of merge scenarios, declaring branch bankruptcy and manually moving things back into the target branch by hand was your only real option. Early to mid 2000s, the most common team branching strategy I saw with subversion was "there's only one branch and everybody does all the development in it because god help you if you try to put it back together after branching for something".

It wasn't until well after the momentum was clearly in Git's favour and a huge chunk of the user base was gone that Subversion finally fixed it to not be complete dogshit.


It was great: it basically forced developers into the equivalent of a git rebase. Maybe the CLI wasn't the best, but that was a problem TortoiseSVN solved quite well, in 2002.


You’ll have to wait until 3rd parties do those comparisons?

Apple has literally never posted “here’s a ream of benchmarks vs the Dell XPS XYZ and HP modelnumbersneeze738462” — pretending this is indicative of some grand conspiracy to hide performance deficiencies of these new M2 models the way GP is, is silly.


Realistically, it’s expected to perform relative to today’s competitors about as the old models did relative to theirs.

More realistically, this is one of the more drab ways apple announces products. They didn’t even have a presentation or anything. So they’re clearly not marketing with the same goals in mind as a presentation. I’d say the intended audience here are people already going to buy the latest model because they’re in the apple ecosystem.

Not that I’m inclined to defend a multi trillion dollar company, of course. It’s just petty to hop into this sort of thread and grind the old axe against apple.


No but look what they did for the M1 press release:

https://www.apple.com/ca/newsroom/2020/11/introducing-the-ne...

They talk all about how it's faster than 98% of PC laptops sold. When they are in the lead, or feel they are, they do post comparisons.


> Finally, doesn’t the fact that apple has a fundamentally different rendering pipeline relevant?

Is it still all that fundamentally different? All of the RDNA parts are tile-based renderers (I think even the Vega series GCN parts made that switch?)


It's pretty different alright. First, there is the tile size. For the current crop of desktop GPUs, tiling is primarily about cache locality (if you keep your processing spatially local you are also less likely to thrash caches), but they still have very fast RAM and want to keep the triangle-binning overhead to a minimum. So the tile size for desktop GPUs is much larger (if I remember correctly, it was about 128x128 pixels or something like that when I last tested it on Navi). Mobile GPUs really want to keep all of the relevant processing in local memory entirely, so they use much smaller tiles (32x32 or even 16x16) at the expense of more involved and costly binning.

Apple (inherited from PowerVR) adds another twist on top: the rasterised pixels are not shaded immediately but instead collected in a buffer. Once all fragments in a tile are rasterised you basically have an array with visible-triangle information for each pixel. Pixel shading is then simply a compute pass over this array. This can be more efficient as you only need to shade visible pixels, and it might utilise the SIMD hardware better (as you are shading 32x32 blocks containing multiple triangles at once rather than shading triangles separately), plus it radically simplifies dealing with pixels (there are never any data races for a given pixel, pixel data write-out is just a block memcpy, programmable blending is super easy and cheap to do) — in fact, I don't believe that Apple even has ROPs. There are of course disadvantages as well — it's very tricky to get right and requires specialised fixed-function hardware, you need to keep transformed primitive data around in memory until all primitives are processed (because shading is delayed), and there are tons of corner cases you need to handle which can kill your performance (transparency, primitive buffer overflows, etc.). And of course, many modern rendering techniques rely on global memory operations and there is an increasing trend to do rasterisation in a compute shader, where this rendering architecture doesn't really help.
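To make the deferred-shading idea above concrete, here is a toy Python sketch: rasterize first while doing only depth tests, keep the winning fragment per pixel, then shade each pixel exactly once in a second pass. All the data structures and names are invented for illustration, not any real GPU's API.

```python
TILE = 4  # real mobile GPUs use tiles like 16x16 or 32x32

def render_tile(triangles, background=(0, 0, 0)):
    # Visibility buffer: per pixel, the nearest depth and the fragment that won.
    depth = {(x, y): float("inf") for x in range(TILE) for y in range(TILE)}
    visible = {}

    # Pass 1: rasterize. Only depth tests run here; nothing is shaded yet,
    # so fragments that end up occluded never cost any shading work.
    for z, color, pixels in triangles:
        for p in pixels:
            if z < depth[p]:
                depth[p] = z
                visible[p] = color

    # Pass 2: shading is "simply a compute pass over this array".
    # Each pixel is shaded exactly once; data races are impossible.
    return {p: visible.get(p, background) for p in depth}

# Two overlapping "triangles": the nearer one (z=1.0) wins the shared pixel.
frame = render_tile([
    (2.0, (255, 0, 0), {(0, 0), (1, 0)}),
    (1.0, (0, 255, 0), {(1, 0), (2, 0)}),
])
```

Note how the occluded red fragment at (1, 0) is discarded before shading, which is exactly the efficiency argument made above.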


They might rasterize fragments inside tiles to reduce blending costs, but they still very much behave like immediate renderers: single-pass, with vertex shading results passed continuously into fragment shaders. Apple's GPU is a tile-based deferred renderer: the vertex stage runs first, storing results into an intermediate buffer; then each tile is processed by running the fragment shader, at the end flushing results to the framebuffer. This reduces memory bandwidth but might require multiple passes when e.g. the intermediate vertex output buffer overflows.


And there are GPUs that have both operating modes: Adreno.


Does Adreno really have a deferred mode? The documentation I could find only describes tiled immediate rendering.

Edit: I just had another look, pretty sure this is standard Tile-Based Immediate Rendering. The documentation sometimes refers to this as "deferred" probably because copying of the final image values to the RAM is deferred. But "deferred" in TBDR means "deferred shading", not just "deferred memory copy". Adreno does not do deferred shading.


You could equally conclude that one of the reasons that very few people used it was because of the input lag.


The major reason Stadia failed was sentiment.

Why buy games at full price? All my gamer friends say it's crap!

Google shuts things down, why invest?

The tech was pretty good for casual gamers, it also pushed linux gaming in the AAA studios.

I think a lot of negative sentiment also came from gamers thinking that Stadia intended to replace their gaming setups.


I will tell you I 100% refused to buy anything from stadia because I was sure Google would cancel it in a year or two. You can attribute sentiment wherever you want, but Google failed because it was Google, not because it was a bad idea.


Nit: Zen 3 increased the CCX size to 8 cores.


It’s not just the number of points that matter, it’s points over time. 3 points very quickly will rocket you over 20 points in 4 hours or something. And comments factor in too, so commenting “why is this on the front page” just helps it be on the front page
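The "points over time" part matches the widely circulated approximation of HN's ranking formula. This is a sketch only: the real algorithm is unpublished and has additional penalties and adjustments, and the function and parameter names here are illustrative.

```python
# Oft-quoted public approximation of HN front-page ranking (not the real
# code; the production algorithm has more factors and penalties, and per
# the moderators, comment count is not one of them):
def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
    # Score decays with age, so a fresher post beats an older one
    # with the same points.
    return (points - 1) / (age_hours + 2) ** gravity

newer = rank_score(10, 1.0)  # same points, 1 hour old
older = rank_score(10, 4.0)  # same points, 4 hours old
```

Under this approximation `newer > older`, which is the "points over time" effect described above.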


That bit about comments is not true, and is rather important, because adding noise comments damages the signal/noise ratio of discussion.


I believe a high point-to-comment ratio ranks a post higher, and that a low ratio is considered a sign of a contentious or low-quality post.


Not to mention this was at 23 points just 3 minutes after the GP post.

It's a technology and business story of general interest. Right in line with HN.


>> Right in line with HN.

It is unquestionably that.


It was heavily flagged by users, so I'd question that. Sensational partisan stories are nearly always off topic.


An extremely partisan voting bloc heavily flagging something is a piss-poor signal for on-topicness, come on. That kind of sloppy logic is a great way to let organized voting rings suppress uncomfortable news, though.

Incredibly embarrassing statement for someone moderating this place to be making.


Sorry, I'm not following you here. The sensational-indignant OP, just like anything sensational-indignant of whatever political flavor, is obviously not in line with HN. That should be obvious both in theory (https://news.ycombinator.com/newsguidelines.html) and in practice (https://news.ycombinator.com/front). So it was correct for users to flag the submission. If people hadn't been flagging such things for well over a decade, HN wouldn't still exist.

I mean, if you want to argue that the story is actually a substantive and intellectually interesting one, containing significant new information (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...), and it's just the title that's sensational-indignant, that would be fine. But that bar is necessarily pretty high when it comes to garden-variety political pieces. If it weren't, then HN would just be a political site.

Edit: btw, adding comments definitely does not help a post stay on the front page - see https://news.ycombinator.com/item?id=34041016.


> I mean, if you want to argue that the story is actually a substantive and intellectually interesting one, containing significant new information

Why would I want to argue that? It certainly doesn’t describe the million identical ChatGPT circlejerks that rocket to the front page without getting flagged out of existence.

This NFT story was no less tech adjacent than them, and no less vapid and inane than them, but it had the appearance of being embarrassing for a certain hyperpartisan political segment, so it was flagged by a bunch of those supporters, and you in your infinite fecklessness are sitting here back-inventing justifications for this flag based on “well no I’m sure all those flags were about the low intellectual quality of the article” as if 97% of the front page wasn’t idiotic pablum aimed at potted plants.

It’s hard to take seriously the suggestion that you actually believe those flags weren’t politically motivated and were just a comment on its indistinguishable-from-any-other-garbage-that-gets-popular quality.


Sorry but I really think you're off base here. It's true that politically minded users (or partisans of any strong cause) tend to flag stories from the side they don't like. But it's also true that many HN users flag stories that they think don't belong on HN, given the site guidelines. You can tell the two categories apart by their flagging histories. I looked at a bunch of the flags on the OP—not all, because there was a massive number of them—and saw plenty of examples of the latter, including some from users who have been high-quality HN contributors for many years and never shown signs of politically motivated flagging.


Were there any acceptable stories about the Trump trading card NFTs? Maybe OP was too biased, but it seemed like an interesting topic in general.


Trump and NFTs are both such well-trodden flamebait that it's hard to imagine one, but maybe it's possible to get a -1 x -1 = +1 effect. It's not intrinsically offtopic, just probably so.


Fair enough. Thank you.


You only have to look at Japan for the last few decades to see what the long-term effects of deflation look like. Negative economic growth, few job opportunities for young people, investment and lending are disincentivized so innovation and industry growth move to other countries, while owning property becomes a liability, etc. Managed decline is about the best-case scenario for a deflationary economy.


At this point I’ve only got one non-arm64 container, and it’s solely optional support for testing some obscure cases that our CI infrastructure handles more robustly anyways.

Your information is pretty out of date.

