mike_d's comments

IPv4 isn't perfect, but it was designed to solve a specific set of problems.

IPv6 was designed by political process: go around the room, solve each engineer's pet peeve, and in turn rally enough support to move the proposal forward. Once a bunch of computer people realized how hard politics is, they swore never to do it again and made the address size so laughably large that it was "solved" once and for all.

I firmly believe that if they had adopted any other strategy where addresses could be meaningfully understood and worked with by the least skilled network operators, we would have had "IPv6" adoption 10 years ago.

My personal preference would have been to open up class E space (240-255.*) and claw back the 6 /8s Amazon is hoarding, be smarter about allocations going forward, and make fees logarithmic based on the number of addresses you hold.


> IPv4 isn't perfect, but it was designed to solve a specific set of problems.

IPv4 was not designed as such, but as an academic exercise. It was an experiment. An experiment that "escaped the lab". This is per Vint Cerf:

* https://www.pcmag.com/news/north-america-exhausts-ipv4-addre...

And if you think there weren't politics in IPv4, you're dead wrong:

* https://spectrum.ieee.org/vint-cerf-mistakes

> IPv6 was designed by political process.

Only if by "political process" you mean a bunch of people got together (physically and virtually) and debated the options and chose what they thought was best. The criteria for choosing IPng were documented:

* https://datatracker.ietf.org/doc/html/rfc1726

There were a number of proposals, and three finalists, with SIPP being chosen:

* https://datatracker.ietf.org/doc/html/rfc1752

> I firmly believe that if they had adopted any other strategy where addresses could be meaningfully understood and worked with by the least skilled network operators, we would have had "IPv6" adoption 10 years ago.

The primary reason for IPng was >32 bits of address space. The only way to make addresses shorter is to have fewer bits, which completely defeats the purpose of the endeavour.

There was no way to move from 32-bits to >32-bits without every network stack of every device element (host, gateway, firewall, application, etc) getting new code. Anything that changed the type and size of sockaddr->sa_family (plus things like new DNS resource record types: A is 32-bit only; see addrinfo->ai_family) would require new code.
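To make the sockaddr point concrete, here is a minimal Python sketch (my own illustration, not from the thread; it performs a live DNS lookup) showing how the address family leaks all the way up into application code:

    import socket

    # Even in Python, the sockaddr shape differs by family: AF_INET
    # yields (host, port), AF_INET6 yields (host, port, flowinfo,
    # scope_id). Code that assumes a 2-tuple breaks the moment it
    # sees a v6 result.
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            "example.com", 443, proto=socket.IPPROTO_TCP):
        if family == socket.AF_INET:
            host, port = sockaddr                      # 2-tuple
        elif family == socket.AF_INET6:
            host, port, flowinfo, scope_id = sockaddr  # 4-tuple
        print(family.name, sockaddr)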


This is a lot of basically sharpshooting, but I will address your last point:

> There was no way to move from 32-bits to >32-bits without every network stack of every device element (host, gateway, firewall, application, etc) getting new code. Anything that changed the type and size of sockaddr->sa_family (plus things like new DNS resource record types: A is 32-bit only; see addrinfo->ai_family) would require new code.

That is simply not true. We had one bit left (the reserved/"evil" bit) in IPv4 headers that could have been used to flag that the first N bytes of the payload were an additional "IPv4.1" header carrying extended routing information. Packets would continue to transit existing networks, and "4.1"-capable boxes at the edges could read the extra header to make further routing decisions inside a network. It would have effectively used IPv4 as the core transport network, with each connected network (think ASN) having a handful of routed /32s.
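To be clear, such a shim is entirely hypothetical; a sketch of what it might look like, with a field layout invented purely for illustration:

    import struct

    # Hypothetical "IPv4.1" shim: if the reserved bit is set, assume
    # the payload starts with 8 bytes of extended addressing (4 bytes
    # each of extra source/destination routing info, network byte
    # order). This layout is made up; no such protocol exists.
    def pack_shim(ext_src: int, ext_dst: int, payload: bytes) -> bytes:
        return struct.pack("!II", ext_src, ext_dst) + payload

    def unpack_shim(data: bytes):
        ext_src, ext_dst = struct.unpack("!II", data[:8])
        return ext_src, ext_dst, data[8:]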

Overlay networks are widely deployed and have very minor technical issues.

But that would have only addressed the numbering exhaustion issues. Engineers often get caught in the "well if I am changing this code anyway" trap.


An explicit goal of IPv6, considered as important as the address expansion, was the simplification of the packet header: fewer fields, all correctly aligned (unlike in the IPv4 header), in order to enable faster hardware routing.

The scheme you describe fails to achieve this goal.


I am glad you brought this up, that is another big issue with IPv6. A lot of the problems it was trying to solve literally don't exist anymore.

Header processing and alignment were an issue in the 90s when routers repurposed generic components. Now we have modern custom ASICs that can handle IPv4 inside of a GRE tunnel on a VLAN over MPLS at line rate. I have switches in my house that do 780 Gbps.


It is irrelevant what we can do now.

At the time when it was designed, IPv6 was well designed, much better than IPv4, which was to be expected after all the experience accumulated while using IPv4 for many years.

The designers of IPv6 made only one mistake, but it was a huge one. The IPv4 address space should have been included in the IPv6 space, allowing transparent intercommunication between any IP addresses, regardless of whether they were old IPv4 addresses or new IPv6 addresses.

This is the mistake that has made the transition to IPv6 so slow.


> The IPv4 address space should have been included in the IPv6 space […]

See IPv4-mapped ("IPv4-compatible") IPv6 addresses from RFC 1884 § 2.4.4 (from 1995) and follow-on RFCs:

* https://datatracker.ietf.org/doc/html/rfc1884

* https://en.wikipedia.org/wiki/IPv6#IPv4-mapped_IPv6_addresse...
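For the curious, the mapping is visible from Python's ipaddress module (a quick sketch of my own):

    import ipaddress

    # IPv4-mapped addresses embed the 32-bit v4 address in the low
    # bits of ::ffff:0:0/96.
    mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
    print(mapped.ipv4_mapped)  # 192.0.2.1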


> The IPv4 address space should have been included in the IPv6 space, allowing transparent intercommunication between any IP addresses, regardless whether they were old IPv4 addresses or new IPv6 addresses.

How would you have implemented it that is different from the NAT64 that actually exists, including shoving all IPv4 addresses into 64:ff9b::/96?
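For reference, the well-known-prefix embedding (RFC 6052) is mechanical; a quick sketch of mine:

    import ipaddress

    # NAT64 well-known prefix: the v4 address becomes the low 32 bits
    # of 64:ff9b::/96.
    def nat64(v4: str) -> ipaddress.IPv6Address:
        prefix = int(ipaddress.IPv6Address("64:ff9b::"))
        return ipaddress.IPv6Address(prefix | int(ipaddress.IPv4Address(v4)))

    print(nat64("192.0.2.1"))  # 64:ff9b::c000:201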


Ideally, 464XLAT should have been there from the beginning, and its host part (CLAT) should have been a mandatory part of the IP stack.

> That is simply not true. We had one bit left (the reserved/"evil" bit) in IPv4 headers […]

Great, there's an extra bit in the IPv4 packet header.

I was talking about the data structures in operating systems: are there any extra bits in the sockaddr structure to signal things to applications? If not, an entirely new struct needs to be deployed.

And that doesn't even get into having to deploy new DNS code everywhere.


But v6 did do what you're describing here?

They didn't use the reserved bit, because there's a field that's already meant for this purpose: the next protocol field. Set that to 0x29 and it indicates that the first bytes of the payload contain a v6 address. Every v4 address has a /48 of v6 space tunnelled to it using this mechanism, and any two v4 addresses can talk v6 between them (including to the entire networks behind those addresses) via it.
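That mechanism is 6to4 (RFC 3056), and the /48 derivation is simple arithmetic; a sketch of my own, not from the thread:

    import ipaddress

    # 6to4: append the 32-bit v4 address to the 2002::/16 prefix to
    # get that address's /48, carried over protocol 41 (0x29)
    # encapsulation.
    def six_to_four(v4: str) -> ipaddress.IPv6Network:
        bits = int(ipaddress.IPv4Address(v4))
        return ipaddress.IPv6Network(((0x2002 << 112) | (bits << 80), 48))

    print(six_to_four("192.0.2.1"))  # 2002:c000:201::/48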

If doing basically exactly what you suggested isn't enough to stop you from complaining about v6's designers, how could they possibly have done any better?


Imo they should have just clawed 1 or 2 bits out of the IPv4 header for additional routing and called it good enough.

This would require new software and new ASICs on all hosts and routers and wouldn't be compatible with the old system. If you're going to cause all those things, might as well add 96 new bits instead of just 2 new bits, so you won't have the same problem again soon.

IPv6 is literally just IPv4 + longer addresses + really minor tweaks (like no checksum) + things you don't have to use (like SLAAC). Is that not what you wanted? What did you want?

And what's wrong with a newer version of a thing solving all the problems people had with it...?

There are more people than IPv4 addresses, so the pigeonhole principle says you can't give every person an IPv4 address, never mind when you add servers as well. Expanding the address space by 6% does absolutely nothing to solve anything, and I'm confused about why you think it would.
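The arithmetic, for anyone who wants to check it (the world population figure is a rough assumption):

    total   = 2**32            # all possible IPv4 addresses: 4,294,967,296
    class_e = 16 * 2**24       # 240.0.0.0-255.255.255.255: ~268M addresses
    people  = 8_000_000_000    # rough world population (assumption)
    print(class_e / total)     # 0.0625 -- the ~6% expansion
    print(total < people)      # True: pigeonhole, even before servers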


Have a look at cdb. The more you read about its simple design, the more you realize it is damn near the perfect solution for static and semi-static datasets. Fetches are either 1 or 2 disk reads, depending on whether the key exists.

https://cdb.cr.yp.to/
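The layout behind the 1-or-2-read guarantee: a fixed 2048-byte header of 256 table pointers, indexed by djb's hash. A sketch of just the hash, per the spec at the link above:

    # cdb's hash: start at 5381, then for each byte
    # h = (h*33 XOR c) mod 2^32. The low 8 bits pick one of the 256
    # hash tables from the fixed header (read 1); the remaining bits
    # pick the slot to probe within that table (read 2).
    def cdb_hash(key: bytes) -> int:
        h = 5381
        for c in key:
            h = (((h << 5) + h) ^ c) & 0xFFFFFFFF
        return h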


There’s a Best Buy a few miles from my house. Why aren't I allowed to put my own products on their shelves, or set up a little folding table next to the phone accessories to sell my own cases?

It is not fair to me as a merchant that everyone who wants to buy a phone case goes to Best Buy. That's where all the foot traffic is. It's clearly anti-competitive that they expect me to pay for shelf space I benefit from.

And now they want to charge me to verify that the USB-C cables I'm selling actually work? How is that remotely reasonable? Just because most of my cables are faulty and customers will inevitably go complain to their customer service desk, why should I bear that cost?

Consumers deserve the right to choose accessories from multiple independent merchants inside Best Buy. Suggesting otherwise is anti-consumer, anti-choice, and proof that you hate open and accessible ecosystems.


For this analogy to be comparable, you would first have to consider that Best Buy, together with Walmart, owns 99.9999% of all store real estate in the world. You would also have to consider that the "shelf space" in this case is free and comes at zero cost to Best Buy; in fact, giving you virtual shelf space increases the amount of traffic that comes into their stores, resulting in a benefit to themselves.

Your analogy as presented was so lacking in merit you might as well have been talking about cats and leprechauns for how completely nonsensical it was to bring it up in the context of Apple.


Simon and GGP combined do own an overwhelming percentage of all retail square footage in the US, but let's at least consider the rest of the argument here.

Apple's "shelf space" is not free. There are constant R&D expenses involved in introducing new sensors and screens that make the underlying apps better. They take on the support load of on-boarding users, managing the relationship, and dealing with any problems. Advertising, carrier validation, third party hardware ecosystem, etc.

Epic wants to sidestep all of the costs of building a platform, and offload support costs onto Apple.


> Simon and GGP combined do own an overwhelming percentage of all retail square footage in the US

This is factually incorrect, and not only incorrect, but so wildly far from being correct that one wonders if this statement was made in bad faith. They only have around 300 million sqft out of an estimated 12 billion sqft, around 2.5%. That is not an overwhelming percentage, nor is it anywhere near "99.9999% of all store real estate in the world", which was not a hyperbolic figure. Competitors in retail can obtain their own shelf space. You cannot obtain your own shelf space for mobile software. The network effects of hardware+OS centralization are too strong, so there are not, and never will be, any viable competitors to iOS and Android.

> Apple's "shelf space" is not free. There are constant R&D expenses involved in introducing new sensors and screens that make the underlying apps better.

The R&D expenses do not change regardless of whether there are 1 million or 10 million apps available for iOS. Allowing people to distribute their own software comes at no cost to Apple.

> They take on the support load of on-boarding users, managing the relationship, and dealing with any problems.

Apple absolutely does not do any of this as it pertains to individual apps.

> Epic wants to sidestep all of the costs of building a platform, and offload support costs onto Apple

Nobody is asking for Apple's support; really, what the world needs is less of Apple's involvement in the hardware the people own, not more. Epic is clearly willing to spend money on building platforms, since it has a documented $600 million in losses in its effort to build a competitor to Steam. This, however, is not a case where it is possible to build a platform.


> It is not fair to me as a merchant

You absolutely can sell your product as a merchant! Best Buy doesn't force you to pay them a fee if you are selling electronics. You are perfectly within your rights to ship the electronics to the customer yourself, and Best Buy doesn't take a dime!

The same is not true for Apple. For Apple, a customer can want to make a direct agreement with an app store developer, without the involvement of Apple in any way, on the phone that they completely own, and Apple wasn't allowing this to happen.

It would be like if it were illegal to set up competing stores next to Best Buy that don't involve Best Buy in any way. That would be absurd.


Best Buy owns their store. I own my phone. You can open a store next door to Best Buy; that's what Epic wants to be allowed to do on iOS.

Apple pays 100% of the tax on the service road to the stores and pays for the parking lot, though. They deserve some fee and that's what the courts said, right?

You call it a tax, most others would call it the cost of doing business.

But yes, that's built into the product's price. Devs are paying for a license to work with iOS and need to own hardware only Apple sells to develop for iOS. So I think those costs are covered.

We'll see what the "reasonable" price is. If nothing else, we know 27% was too much even for appeals.


Payment processing is worth 3%. I assume the other stuff is somewhere within an order of magnitude of that, so maybe 9-12% total is fair?

> They deserve some fee

Not if the only way to get to the store was through that road. In that case, there are public access laws and it is literally illegal for people who "own" a road to charge people money, if there is an easement.

That's probably a simplification, but these are called "easement by necessity" rights. So even in your example of the roadway, that's also wrong. They get zero dollars.


Isn't that only to get somewhere else?

My point is that in the real world, sharing an area would mean the other store also contributes tax-wise. It's not equivalent to bring up real life if the real-life paying part isn't also adhered to; the lack of symmetry is notable. I don't think they deserve to set their own price, though (30% is way too high).


> would mean the other store also contributes tax wise.

No, land parcel A does not pay land parcel B any amount of dollars at all.

The fact that the government gets tax revenue says nothing about the fact that land parcel B receives nothing, even if they are required to open their street to the public for an easement.


What?? Replace DRAM with peaches or table legs and you have the exact example they give you during management training to explain implicit collusion.


That's when you raise prices without a change in the market conditions.

OpenAI is creating more demand, therefore the price must go up; if it didn't, there'd be shortages.


I can see how it can be confusing.

If you know demand will go up because Microsoft announced that each new Xbox will have 2TB of RAM, that is perfectly fine. Or if OpenAI issues a press release that they intend to buy half the world's RAM.

If you know demand will go up because you learn the volume your customer intends to purchase from your competitor during confidential negotiations, that is not ok.


This actually points the opposite direction, to doubling down on commercial GPUs.

NVIDIA recently told their board partners that they will need to source their own RAM and will not be bundling it with chips anymore.

If there is a supply crunch on DRAM, commercial GPU production lines will start having idle downtime. That is literally the worst possible thing that can happen to a company that has invested heavily in tooling, and they will negotiate at-or-below-cost production runs to fill the gaps if a customer can bring their own DRAM to the table.


I understand all of the benefits with regards to compromise and pushing automation, but I really hope they don't push the maximum lower.

It is already getting dangerously close to the duration of holiday freeze windows, compliance/audit enforced windows, etc.

Not to mention the undue bloat of CT logs.


> It is already getting dangerously close to the duration of holiday freeze windows, compliance/audit enforced windows, etc.

How do those affect automated processes though? If the automation were to fail somehow during a freeze window, then surely that would be a case of fixing a system and thus not covered by the freeze window.

> Not to mention the undue bloat of CT logs.

I'm not sure what you mean by "CT logs", but I assume it's something to do with the certificate renewal automation. I can't see that you'd be creating GBs of logs that would be difficult to handle. Even a home-based selfhosted system would easily cope with certificate logs from running it hourly.


"CT Logs" are Certificate Transparency Logs, which are cryptographically provable append-only data structures hosted by trusted operators. Every certificate issued is publicly logged in two or more CT Logs, so that browsers can ensure that CAs aren't lying about what certs they have or have not issued.

Reducing the lifetime of certificates increases the number of certificates that have to be issued, and therefore the number of certs that are logged to CT. This increases the cost to CT operators, which is unfortunate since the set of operators is currently very small.

However, a number of recent improvements (like static-ct-api and the upcoming Merkle Tree Certs) are making great strides in reducing the cost of operating a CT log, so we think that the ecosystem will be able to keep up with reductions in cert lifetime.
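For readers unfamiliar with the structure: a minimal sketch of the RFC 6962 Merkle Tree Hash that CT logs are built on (my illustration only; real logs add signed tree heads, inclusion and consistency proofs, etc.):

    import hashlib

    # RFC 6962 Merkle Tree Hash: leaves are H(0x00 || entry),
    # interior nodes H(0x01 || left || right), splitting at the
    # largest power of two strictly less than n. Append-only growth
    # is what makes the log cryptographically provable.
    def mth(entries: list) -> bytes:
        if len(entries) == 1:
            return hashlib.sha256(b"\x00" + entries[0]).digest()
        k = 1
        while k * 2 < len(entries):
            k *= 2
        return hashlib.sha256(
            b"\x01" + mth(entries[:k]) + mth(entries[k:])).digest()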


90% of people aren't using "SQL" anyway. They are just doing basic CRUD operations on a data store with SQL syntax, usually abstracted through an ORM. The only reason they want a SQL database in the first place is for ACID.

If you find yourself caring about data types or actually writing a query, you should probably set up an actual database server.


User opens DevTools and loads pretty much any website on the internet, film at 11.


Not sure why this is downvoted, this is exactly the case on any commercial website. They often whitewash it under the pretext of “legitimate interest” or “fraud protection”.


JA3/JA4 are useless now.

At best they identify the family of browser, and spoofing it is table stakes for bad actors. https://github.com/lwthiker/curl-impersonate


Slight correction: Spoofing it is table stakes for ever so slightly capable actors.

These will still help against the masses of dumb actors flooding your stuff.


The article rants about how turning off JavaScript is actually harmful because it makes you more fingerprintable, then in the same breath recommends switching to an obscure browser nobody else uses?

If you want to avoid being uniquely identifiable stick to Chrome, signed into a Google account, running on a PC from Best Buy.

