
Most people probably never upgrade their machines at all. In my case I used the same PC from 2009 until about a month ago. Over its 16-year lifespan it saw 3 GPUs, the memory was doubled from 6GB to 12GB, a Wi-Fi card was added (and then got flaky after about 7 years, but I was able to switch to Ethernet over coax with MoCA), and an SSD was added to host the OS and most apps (the original HDD was relegated to additional storage).

If you're planning for a 10-12 year lifespan, I have this advice. CPUs have surprising longevity these days since most workloads don't significantly tax them; go a little above mid-range on core count and it should last. GPUs are a throwaway item; plan to replace them every 3-5 years to stay current. Storage can be worth adding over a long lifespan, depending on usage: photos, video, and games use more storage than they used to, but personal photos and videos largely live in the cloud now. RAM you might need to upgrade if you go mid-range, but might not if you aim higher than standard in the initial build. The buses and interfaces become the main limiting factors to longevity: RAM technology will advance, and PCIe and USB will get new versions. There may be new standards you can't take advantage of; for example, I was still on SATA II when the world had long since moved on to SATA III and then NVMe.

Sometimes it's more about repairability than upgradability. My stuff lasted but I've had HDDs, PSUs, and fans die in the past. It's nice to be able to replace a dead part and move on.

I will also say that I'm a little surprised that the enthusiast market is still mostly these big ATX mid-tower cases. They feel massive and unnecessary today, when 5.25" bays are obsolete and storage is not 3.5" HDDs but M.2 sticks that sit flush with the motherboard. The smaller form factors are still the exception. Is it all to support the biggest and baddest high-end GPUs that cost more than the rest of the system?


> Is it all to support the biggest and baddest high-end GPUs that cost more than the rest of the system?

I think it's more to have a big window with lots of RGB LEDs to show off on the internet.

Newer SFF cases from Ncase/Formd/Louqe are designed with perforations or mesh on every exterior surface to maximize air flow. They can support an air-cooled 5090 and an AIO or massive tower cooler for the CPU. Put a 1000W SFX PSU in there and I don't know if you'd really be wanting for anything spec-wise.


> Is it all to support the biggest and baddest high-end GPUs that cost more than the rest of the system?

For me, no. I bought a big case, Fractal North XL. It sits on the floor next to my desk so the size really doesn't matter at all to me, except I want it tall enough so that it's convenient to turn on.

It's a nice bonus that building and maintenance are also easier, but frankly it's the reaching convenience that matters most. It could even be a bit larger still.


Nice case! I actually do the same thing; PC lives on the floor next to my desk. I definitely rejected cases that didn't have the buttons and front I/O on top for that reason.


A small platform to elevate the case off the floor can help lower dust accumulation and may also help it reach the perfect button pushing height for you!


Hehe never thought of that but might not be a bad idea!


Capital gains receive favorable treatment under the US tax code, but they are also realized gains by definition. That is, you actually have to sell the asset, and you're taxed on any profit earned.

An increase in the estimated value of your real estate holdings does not trigger a capital gain. Your municipality, however, may use it as an excuse to increase its assessment of the value of your property, which is used to calculate the tax it charges.


So you admit that many people do pay unrealized gains taxes on their largest asset (their house)?


Yeah it functions like a wealth tax, but the claim was that it was a capital gains tax, which it isn't.


His net worth increased due to asset appreciation. Nobody physically transferred him any money and it can fall back down tomorrow. Should he get a refund if Oracle stock tanks?


He pays less next year because Oracle stock is worth less. Just like property taxes on people's houses.

The math on taxing unrealized gains or losses doesn't work out for the reasons you pointed out. Property taxes, on the other hand, have been working for a long time.


> He pays less next year because Oracle stock is worth less. Just like property taxes on people's houses.

Does he get a refund if he loses money or is it just tax if you win, tax if you lose, tax if it doesn't move?

I'll give a few feelings about property taxes. They are known up front when the purchase is made, and there's an expectation that they remain reasonably consistent year over year. In that way they can be consistently planned for, enough that they're seen more as a maintenance expense for the upkeep of local services than as a wealth tax. If my neighbor sells their comparable property for double what they paid for it a few short years ago, I don't expect my tax bill to take a massive jump. In my experience the city's assessed values tend to lag the true market value pretty significantly; the goal appears to be to use the assessed value as a way to give the property tax some graduated component. Being a local tax, any significant jumps seem to be avoided by design, lest they trigger angry residents showing up at town hall meetings.

With a wealth tax it can be highly variable year to year and out of one's control. If stocks go way up, you're on the hook for paying those taxes. Especially if you're Larry Ellison with a controlling stake in Oracle, you could find yourself having to liquidate assets to pay taxes, thereby reducing your control of your own company.

My main objection to a wealth tax is that many of its proponents see it as a means of reducing inequality and "leveling the playing field". I find these positions come from a place of envy and reject them on those grounds. Many arguing in favor also assume that federal confiscation of wealth inherently benefits the public, as if it's some benevolent charity. The reality is more mixed. There is seemingly no limit to politicians' ability to squander money on nice-sounding projects that give them good headlines while enriching cronies and delivering questionable actual value. It's nice to imagine that all that money is going to roads, bridges, schools, and research, but a whole lot is also going to spying on the populace, subverting foreign governments, and blowing people up.


> With a wealth tax it can be highly variable year to year and out of one's control.

It could be designed to be closer to property tax.

> you could find yourself in the situation of having to liquidate assets to pay taxes

Maybe. There are many other ways: the stock pays enough in dividends to cover the tax, the owner has other sources of income, the owner borrows against the stock to pay tax, and so on. In many dual-class structures the privileged class stock becomes common stock when sold so some founders could maintain control even after selling.

Private companies are trickier but still manageable. I don't want to turn this into a long post though.

> many of its proponents see it as a means of reducing inequality and "leveling the playing field".

I see it as a way to reduce income taxes. Welfare states are currently funded by income and payroll taxes aka taxes on labor. For the math to work out you need higher and higher tax rates or more and more workers. And you're fighting an uphill battle because improving productivity constantly reduces the need for workers.

Instead let improved productivity pay for the welfare state. Stop penalizing people for working by taxing them more.


> Should he get a refund if Oracle stock tanks?

Presumably it would function the same way as realized capital gains taxes (no refund on tax already paid)?


Proto2 let you do this, and the "required" keyword was removed because of the problems it introduces when evolving the schema in a system with many users that you don't necessarily control. Say you want to add a new required field: if your system receives messages from clients, some clients may be sending you old data without the field, and now the parse step fails because it detects a missing field. If you ever want to remove a required field you have the opposite problem: there will be components that have to have those fields present just to satisfy the parser, even if they're only interested in some other fields.

Philosophically, checking whether a field is required is data validation and doesn't have anything to do with serialization. You can't specify that an integer falls into a certain valid range, or that a string has a valid number of characters or is in the correct format (e.g. if it's supposed to be an email or a phone number). The application code needs to do that kind of validation anyway. If something really is required, then it should be the application's responsibility to deal with it appropriately if it's missing.
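
A minimal sketch, in Python, of what that application-level validation might look like (the message object and its `email`/`retries` fields are hypothetical, not from any real schema):

    import re

    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def validate_contact(msg) -> list[str]:
        """Return a list of validation errors; empty means valid."""
        errors = []
        if not msg.email:                    # presence: the proto default is ""
            errors.append("email is required")
        elif not EMAIL_RE.match(msg.email):  # format: the IDL can't express this
            errors.append("email is malformed")
        if not (0 <= msg.retries <= 10):     # range: nor this
            errors.append("retries out of range")
        return errors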

The Cap'n Proto docs also describe why being able to declare required fields is a bad idea: https://capnproto.org/faq.html#how-do-i-make-a-field-require...


> Philosophically, checking that a field is required or not is data validation and doesn't have anything to do with serialization

But Protocol Buffers is not just a serialization format; it is an interface definition language. And not being able to communicate whether a field is required is very limiting. Sometimes things are required to process a message. If you need to add a new field but still be able to process older versions of the message where the field wasn't required (or didn't exist), then you can just add it as optional.

I understand that in some situations you have very hard compatibility requirements and it makes sense to make everything optional and deal with it in application code, but adding a required attribute to fields doesn't stop you from doing that. You can still just make everything optional. You can even add a CI lint that prevents people from merging code with required fields. But making required fields illegal at the interface definition level just strikes me as killing a fly with a bazooka.
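
A sketch of such a CI lint, as a hypothetical standalone script rather than anything from the protobuf toolchain:

    #!/usr/bin/env python3
    """Fail CI if any .proto file in the repo declares a proto2 required field."""
    import re
    import sys
    from pathlib import Path

    REQUIRED = re.compile(r"^\s*required\s+\w", re.MULTILINE)

    def main() -> int:
        offenders = [p for p in Path(".").rglob("*.proto")
                     if REQUIRED.search(p.read_text(encoding="utf-8"))]
        for p in offenders:
            print(f"{p}: 'required' fields are banned by team policy")
        return 1 if offenders else 0

    if __name__ == "__main__":
        sys.exit(main())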


> Philosophically, checking that a field is required or not is data validation and doesn't have anything to do with serialization.

My issue is that people seem to like to use protobuf to describe the shape of APIs rather than just to handle serialization. I think it's very bad at describing API shapes.


I think it is somewhat a natural failure of DRY taken to the extreme? People seem to want to describe the API once, in a form from which both clients and implementations are then generated.

It is amusing, in many ways. This is specifically part of what WSDL aspired to, but people were betrayed by the big companies not having a common ground for what shapes they would support in a description.


> Let's say you want to add a new required field, if your system receives messages from clients some clients may be sending you old data without the field and now the parse step fails because it detects a missing field.

A parser doesn't inherently have to fail on a missing required field (compatibility mode), lose a new field it doesn't recognize (passthrough mode), or allow divergence (strict mode). The fact that Cap'n Proto/parser authors don't realize that the same single protocol can operate in three different scenarios at the same time (strictly speaking: at boundaries vs. in middleware) should not lead you to think there are problems with required fields in protocols. This is one of the most bizarre kinds of FUD in the industry.
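
A minimal sketch of those three modes, using plain dicts as stand-ins for wire messages (the schema and field names are hypothetical):

    from enum import Enum

    class Mode(Enum):
        STRICT = 1       # at a boundary: reject messages missing required fields
        COMPAT = 2       # tolerate old senders that omit newer required fields
        PASSTHROUGH = 3  # in middleware: preserve fields you don't know about

    REQUIRED = {"id", "email"}          # hypothetical schema
    KNOWN = {"id", "email", "nickname"}

    def parse(raw: dict, mode: Mode) -> dict:
        missing = REQUIRED - raw.keys()
        if mode is Mode.STRICT and missing:
            raise ValueError(f"missing required fields: {sorted(missing)}")
        if mode is Mode.PASSTHROUGH:
            return dict(raw)  # unknown fields survive re-serialization
        return {k: v for k, v in raw.items() if k in KNOWN}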


Hi, I'm the apparently-FUD-spreading Cap'n Proto author.

Sure! You could certainly imagine extending Protobuf or Cap'n Proto with a way to specify validation that only happens when you explicitly request it. You'd then have separate functions to parse vs. to validate a message, and then you can perform strict validation at the endpoints but skip it in middleware.

This is a perfectly valid feature idea which many people have entertained and even implemented successfully. But I tend to think it's not worth trying to have this in the schema language, because in order to support every kind of validation you might want, you end up needing a complete programming language. Plus, different components might have different requirements and therefore need different validation (e.g. middleware vs. endpoints). In the end I think it is better to write any validation functions in your actual programming language. But I can certainly see where people might disagree.


It gets super frustrating to have to empty/null check fields everywhere you use them, especially for fields that are effectively required for the message to make sense.

A very common example I see is Vec3 (just x, y, z). In proto2 you should be checking for the presence of x, y, z every time you use them, and when you do that in math equations, the incessant existence checks completely obscure the math. Really, you want to validate the presence of these fields during the parse. But in practice, what I see is either just assuming the fields exist and crashing on null, or admitting that protos are too clunky to use directly and immediately converting every proto into a mirror internal type. It really feels like there's a major design gap here.
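
The "mirror internal type" approach might look like this sketch: validate presence once at the parse boundary, then do the math on a plain type. It assumes a protoc-generated proto2 message `msg` with optional x, y, z fields:

    from dataclasses import dataclass

    @dataclass
    class Vec3:
        x: float
        y: float
        z: float

        def dot(self, other: "Vec3") -> float:
            # No existence checks cluttering the math.
            return self.x * other.x + self.y * other.y + self.z * other.z

    def vec3_from_proto(msg) -> Vec3:
        # Validate presence exactly once, at the parse boundary.
        for f in ("x", "y", "z"):
            if not msg.HasField(f):
                raise ValueError(f"Vec3 message missing field {f!r}")
        return Vec3(msg.x, msg.y, msg.z)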

Don't get me started on the moronic design of proto3, where every time you see Vec3(0,0,0) you get to wonder whether it's the right value or mistakenly unset.


> It gets super frustrating to have to empty/null check fields everywhere you use them, especially for fields that are effectively required for the message to make sense.

That's why Protobuf and Cap'n Proto have default values. You should not bother checking for presence of fields that are always supposed to be there. If the sender forgot to set a field, then they get the default value. That's their problem.

> just assuming the fields exist in code and crashing on null

There shouldn't be any nulls you can crash on. If your protobuf implementation is returning null rather than a default value, it's a bad implementation, not just frustrating to use but arguably insecure. No implementation of mine ever worked that way, for sure.


Sadly, the default values are an even bigger source of bugs. We just caught another one at $work where a field was never being filled in, but the default values made it look fine. It caused hidden failures later on.

It's an incredibly frustrating "feature" to deal with, and causes lots of problems in proto3.


You can still verify presence explicitly if you want, with the `has` methods.

But if you don't check, it should return a default value rather than null. You don't want your server to crash on bad input.
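
A sketch of the difference, assuming `order_pb2` was generated by protoc from a proto2 schema containing `optional int32 quantity = 1;` (names hypothetical):

    from order_pb2 import Order  # hypothetical protoc-generated module

    order = Order()
    print(order.quantity)              # 0: the field's default, never null/None
    print(order.HasField("quantity"))  # False: the sender never set it
    order.quantity = 0
    print(order.HasField("quantity"))  # True: set explicitly, even to the default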


Tim Sweeney of Epic is up there for me too.


I'd really love to hear his story from the beginning. I believe his first published game was a Blue Disk one, ZZT, in 1991, and he went on to write the Unreal engine, which was released in 1998. People like Tim and John really could bag a huge amount of knowledge in half a decade.


Agreed, as an OO scripting language it's lovely, especially compared to Perl, where the OO features never meshed quite right. Going back 10 years, it had a number of things that were novel compared to other languages: literals for common data structures, context managers (the "with" statement), first-class functions, comprehensions, generators.

On the other hand, duck typing is largely a joke. Letting functions take anything as an argument and then just assuming it's a duck with the methods you want is no way to write a robust system. Performance-wise you're boxed into a corner: the slow runtime will have you writing native code in another language, complicating your project, and the GIL will have you jumping through hoops to work around it.
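
To illustrate the duck-typing complaint with a toy example (hypothetical classes):

    class Circle:
        def __init__(self, r):
            self.r = r
        def area(self):
            return 3.14159 * self.r ** 2

    class Square:
        def __init__(self, s):
            self.s = s
        def area(self):
            return self.s ** 2

    def total_area(shapes):
        # Accepts anything; just assumes every item quacks like a shape.
        return sum(s.area() for s in shapes)

    print(total_area([Circle(2), Square(3)]))  # fine
    print(total_area([Circle(2), "square"]))   # AttributeError, but only at runtime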

As an offline data exploration and research tool limited to an individual or a small team, or for writing small utilities, I totally get it. For anything mission critical I'd much rather be in something like Java or C# where the typing situation is stronger, the performance is going to be much better, I have better access to threads if I need them, the reasons for dropping into native code are fewer, and the cross platform support works more seamlessly without additional layers like Docker.


>>Agreed, as an OO scripting language it's lovely, especially compared to Perl where the OO features never meshed quite right.

Back then the whole OO crowd was with Java though.

Python's moat was beginner friendliness for writing simple scripts. At the time, Perl was more like a thermonuclear scripting language. Most people who never wanted to graduate to the advanced stage of writing mega millions of lines of Perl code in a week (which was literally what Perl was used for then) realised they needed something simpler and easier to learn and maintain for smaller scripts, and kind of moved to Python.

But then of course simplicity only takes you so far, and it's logically impossible to have a simple, clean interface to problems that require several variables tweaked. Python had to evolve, and it now has the same bloat and complications that plague any other language.

Having said that, I would still use Java if I had to start a backend project.


When Perl ruled the Earth, the OO crowd was with Smalltalk, Object Pascal, Delphi, Clipper 5, FoxPro, Actor, CA Objects, VB, C++.

In fact, many books that the OOP haters attribute to Java, predate its existence.


Why not Go? I don't understand starting new backend projects in a JVM language when Go exists and is both faster and simpler. People love to proclaim Java's ability to handle "big data", but I have programs parsing TBs of data daily in Go without breaking a sweat. And it was much faster to write, and to teach to new engineers, than Java.


> Letting functions take anything as an argument and then just assuming it's a duck with the methods you want is no way to write a robust system

Everyone keeps harping on type safety, but it just doesn't play out in reality. The Linux kernel is incredibly robust, has never been a broken mess, and has basically no type enforcement, as you can cast pointers into other stuff.

In general, all typing does is move error checking into the compiler/preprocessor instead of testing. And time spent on designing and writing type safe code is almost equivalent to time spent writing tests that serve as an end-to-end contract.

There is a reason why NodeJS was the most used language before all the AI stuff arrived with Python.

>Performance wise you're boxed into a corner. The slow runtime will have you writing native code in another language, complicating your project.

Most of these performance arguments are like arguing that your commuter car needs to be a track-spec Ferrari, made by people who have very little experience with cars.

Plenty of fast/performant stuff runs on Python. PyPy is a thing also. So is launching a small compiled executable as a subprocess, which takes literally one line of code in Python.
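
For instance, with a hypothetical `./fast_tool` binary (a sketch, not any specific tool):

    import subprocess

    # Hand the hot path to a compiled tool and collect its output.
    result = subprocess.run(["./fast_tool", "--input", "data.bin"],
                            capture_output=True, check=True)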

>The GIL will have you jumping through hoops to work around.

This is just laughable. Clearly you have extremely little experience with Python.


> Linux Kernel is incredibly robust, has never been a broken mess, and has basically no type enforcement

Yes, as a result of strong development practices, a high skill floor, and work being done by a small number of people knowledgeable in the domain. These things are not mutually exclusive with type checking.

> In general, all typing does is move error checking into the compiler/preprocessor instead of testing

Which is an enormously powerful thing. Having those things be enforced by tests requires that a test exist for every case, which makes your tests responsible not only for testing behaviour but also for checking types.

Compile-time type checking can essentially eliminate the need for type-based tests, and it does so automatically for any code that exists, whereas checking types with tests has to be opted into case by case.

You also get compile-time checks automatically rejecting the types you don't support (likely far more than the ones you do), whereas covering that at test time is practically impossible; you can only really test the valid path. I've never seen a codebase that checks the negative paths for the hundreds upon thousands of types that aren't supported.

None of this is to say I'm against tests as end-to-end contracts, but moving type checking to compile time gives you a lot of extra kinds of assertions for free that you likely don't get from having tests to check types.

> There is a reason why NodeJS was the most used language

And the reason was? AFAIK Node came along as a runtime option for using a familiar language outside of the browser. Coupled with a single-threaded, event-driven concurrency model out of the box, it was an enormously practical and easy choice, both in terms of language familiarity for developers and for the workloads it was given.


I dunno where this idea comes from that code erroring out is somehow catastrophic. If you pass a wrong object type to a function, that mistake is very easy to fix. If you're structuring your code with crazy inheritance such that this error can get hidden, that's solely a you problem.


> I dunno where this idea comes from that a code erroring out is somehow catastrophic.

I mean if it's a medical device it might not be great?

> If you pass a wrong object type to a function, that mistake is very easy to fix.

And with compile time checks you can avoid ever having to get to the point where you have to fix it.


> And time spent on designing and writing type safe code is almost equivalent to time spent writing tests that serve as an end-to-end contract.

Do you write tests for every third-party function that interacts with your code, so that it never fails at runtime after a version bump?

How do you guarantee that your own refactoring is exhaustively covered by the prior tests you've written for the old version?


You don't need a test for every function. You probably want every function call covered by a test, though, otherwise you have untested code.

The exact granularity is a debate that has gone on for a long time. Nowadays, people seem to prefer larger tests that cover more code in one go so as to avoid lots of mocking / stubbing. Super granular tests tend to be reserved for libraries with no internal state.


While what you say could be argued, this is both an insufficient argument against, and irrelevant to, the post you’re commenting on.


> Everyone keeps harping on type safety, but it just doesn't play out in reality.

If you ignore the vast number of cases where it does, and cherry pick an extraordinarily non-representative example like the Linux kernel.

> This is just laughable. Clearly you have extremely little experience with Python.

Or you have extremely little experience with the use cases where it applies, extremely little knowledge of the ongoing effort by the Python developers to address it, and think that ignorant mocking is an argument.


See the 2024+ Dodge Charger Daytona.


I recently built a new PC and faced the dilemma of paying Microsoft for their experience or rolling the dice with Linux (nothing to lose). I've been running Kubuntu and while not perfect it certainly works well enough.

The sad part is that I'd gladly pay Microsoft double what they currently charge for something that basically works like Windows 7 did. Instead it's like a theme park where I pay admission for the privilege of being upsold various add-ons. So now I just don't pay them anything.


Unless you're in a big city you probably only have one library. At least near me, municipalities have occasionally built new, bigger libraries and/or put additions onto existing ones in the last 30 years. The number of libraries would remain flat in those cases even though they have actually expanded.


I saw both a Rivian and a Cybertruck at an RV park just a month ago. No idea what kind of range they get towing but I was impressed someone was actually using them as real trucks. The vast majority of vehicles were three-quarter-ton or better trucks.

