You can't. When my ISP switched me to CGNAT, I spent days upgrading everything to IPv6, only to discover that Gmail didn't even support it! (Mail server to mail server, not the web app.) I gave up, asked my ISP for IPv4 back and, fortunately, got a new IPv4 address. But I fear the day that option will disappear…
What year was this? While I can't find a source, I believe Gmail has supported IPv6 for sending and receiving since World IPv6 Day back in 2011. I've certainly been doing it since 2017.
Your issue might rather be that Gmail actually enforces all of its guidelines on IPv6 instead of silently degrading your reputation behind the scenes like it does for IPv4. So proper rDNS, SPF, and DKIM are table stakes, with DMARC and MTA-STS strongly recommended.
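For anyone debugging a similar setup, it's worth verifying those records directly before blaming Gmail. A minimal sketch using the dnspython package (2.x); the domain, DKIM selector, and mail server address below are hypothetical placeholders:

    # Sanity-check the DNS records Gmail cares about for IPv6 delivery.
    # Requires: pip install dnspython. Domain/selector/IP are placeholders.
    import dns.resolver
    import dns.reversename

    DOMAIN = "example.org"             # hypothetical sending domain
    DKIM_SELECTOR = "mail"             # hypothetical DKIM selector
    MAIL_SERVER_IPV6 = "2001:db8::25"  # hypothetical MX address

    def txt(name):
        try:
            return [r.to_text() for r in dns.resolver.resolve(name, "TXT")]
        except Exception as e:
            return [f"lookup failed: {e}"]

    print("SPF:  ", txt(DOMAIN))                                  # should contain v=spf1 ... including your IPv6 range
    print("DKIM: ", txt(f"{DKIM_SELECTOR}._domainkey.{DOMAIN}"))  # should contain v=DKIM1; p=...
    print("DMARC:", txt(f"_dmarc.{DOMAIN}"))                      # should contain v=DMARC1; p=...

    # Reverse DNS (PTR) for the mail server's IPv6 address must resolve,
    # and ideally match the HELO/EHLO hostname.
    ptr_name = dns.reversename.from_address(MAIL_SERVER_IPV6)
    try:
        print("rDNS: ", [r.to_text() for r in dns.resolver.resolve(ptr_name, "PTR")])
    except Exception as e:
        print("rDNS:  lookup failed:", e)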
either by paying a few bucks for a vps with a static v4 address, or by trying techniques like "nat hole punching" to keep the cgnat state machine happy. but tbf it isn't meant to support that.
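to illustrate the second option: the usual trick is to keep sending small packets so the cgnat's udp mapping doesn't expire. a rough sketch of just the keepalive half, in python (the peer address and interval are made up; real hole punching also needs a rendezvous server so both sides learn each other's public ip:port):

    # keepalive sketch: refresh the CGNAT's UDP mapping so an already-punched
    # hole stays open. PEER address and the 25s interval are made-up placeholders.
    import socket
    import time

    PEER = ("203.0.113.10", 40000)   # hypothetical peer's public address
    KEEPALIVE_INTERVAL = 25          # seconds; must be shorter than the NAT's UDP timeout

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 40000))    # reuse the same local port every time
    sock.settimeout(1.0)

    while True:
        sock.sendto(b"keepalive", PEER)      # outbound packet refreshes the mapping
        try:
            data, addr = sock.recvfrom(1500) # anything the peer sends back
            print("got", len(data), "bytes from", addr)
        except socket.timeout:
            pass
        time.sleep(KEEPALIVE_INTERVAL)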
>billions of ppl access the internet thru nat everyday
A caveat is that a lot of people are knowingly or unknowingly relying on things like UPnP and NAT-PMP to have services operating normally under NAT. That has conveniently masked a lot of the issues with NAT in P2P use cases such as online gaming and torrenting.
Unfortunately, even that is broken under CGNAT.
The more layers of NAT you put on your connection, the more things you break.
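For reference, the UPnP path mentioned above boils down to asking the home router to forward a port. A minimal sketch using the miniupnpc Python bindings; the port number and description are arbitrary examples, and under CGNAT this only opens the inner NAT, which is exactly why it breaks:

    # Ask the local IGD (home router) to forward a port via UPnP.
    # Requires: pip install miniupnpc. Port/description are arbitrary examples.
    # Under CGNAT this still only opens the *inner* NAT; the carrier-grade
    # layer in front of it is untouched, so inbound traffic never arrives.
    import miniupnpc

    upnp = miniupnpc.UPnP()
    upnp.discoverdelay = 200          # ms to wait for devices to answer
    found = upnp.discover()           # broadcast SSDP discovery
    print("found", found, "UPnP device(s)")
    upnp.selectigd()                  # pick the Internet Gateway Device

    print("local IP:   ", upnp.lanaddr)
    print("external IP:", upnp.externalipaddress())  # under CGNAT this is itself a shared/private address

    # Map external TCP port 25565 to the same port on this machine.
    upnp.addportmapping(25565, "TCP", upnp.lanaddr, 25565,
                        "example game server", "")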
interestingly, i religiously disable upnp/nat-pmp on all residential CPEs that i configure due to their glaring security implications. never heard of a problem.
though i do defend the v4-nat internet as the way it was meant to be, being jailed behind a cgnat with no recourse would push me to another isp.
In gaming communities, e.g. Minecraft, you regularly get people asking port-forwarding questions. Some game devs automate that process using UPnP; I believe EVE is one of them.
Neither solution works for me though, as someone whose IPv4 connectivity is behind a CGNAT.
ALL ISPs in my country have deployed CGNAT, so there's no "changing ISP" for me either. IPv6 is the only solution left unless I want to pay a premium for one of those public IPv4 addresses. Really, single-layered IPv4 NAT can't last forever. The address space of IPv4 is simply too limited.
the push for p2p comms in gaming was never a good idea, but i can totally see how it was sold. apart from that, i don't know why any game would need incoming connections.
the upnp cargo cult in gaming is real though, despite the prevalence of cgnat.
i agree that you should have a choice, but am not yet ready to accept that ~11B ppl cannot manage with ~3B addresses, given the typical ratio of users per v4 address with nat.
Using "11 billion" as an estimate of total needed addresses is a bad idea (TM).
Both sides of the internet (provider and user) need an IP address. An average human may well require two or more addresses simultaneously (phone, laptop, office PC, and maybe IoT) in the future. And internet infrastructure like routers and managed switches, although never visible to end users, needs IP addresses too. And don't get me started on containerization.
Furthermore, there are internal networks running out of RFC1918 addresses to use, so even internal IPv4 has a real limit. Comcast is one of them; T-Mobile is another. I believe Facebook moved to an IPv6 core because of this too.
People constantly find new ways to use more IP addresses. 4.3B is just too small, even with NAT.
The fact that we are deploying CGNAT everywhere should have made that obvious enough.
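To put rough numbers on that back-of-envelope argument, here's a sketch; every per-person and infrastructure figure below is an illustrative assumption, not a measurement:

    # Back-of-envelope: can ~11B people fit behind ~3.7B usable IPv4 addresses?
    # All per-person/infrastructure figures are illustrative guesses.
    people             = 11e9   # upper-bound future population from the thread
    devices_per_person = 2.5    # phone + laptop/PC, some IoT (assumption)
    server_side_share  = 0.15   # extra service-side endpoints per user device (assumption)
    infrastructure     = 1e9    # routers, switches, containers, etc. (assumption)

    endpoints = people * devices_per_person * (1 + server_side_share) + infrastructure
    usable_v4 = 3.7e9           # IPv4 space minus reserved ranges, roughly

    print(f"endpoints needed: {endpoints:.2e}")
    print(f"usable IPv4:      {usable_v4:.2e}")
    print(f"endpoints per IPv4 address: {endpoints / usable_v4:.1f}")
    # Under these assumptions you'd need roughly 8-9 endpoints per address on
    # average, and NAT only helps on the client side of connections.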
MIMO, active beam steering and paths with reflections. In theory. In practice, we'll see, but you'll probably have to hold your phone high to provide a clear line of sight between the antennas if you want those blazing-fast speeds.
I'd recommend watching this video of a real-world 5G test: https://youtu.be/_CTUs_2hq6Y - speeds are >1 Gbps when you're effectively next to the 5G nodes, but drop below 400 Mbps once walls get in the way, and the phone jumps between LTE and 5G (in an attempt to stay on the fastest network).
I'm not sure I understand your reasoning: why should Google honor noindex everywhere but on .gov websites? What about other countries' government TLDs? What about publicly traded companies? What about personal websites of elected officials? What about accounts of elected officials on third-party websites?
That seems like a can of worms not really worth opening.
This might be controversial, but everything is fair game everywhere. If you can crawl it, tough luck. It's there and everyone can get to it anyway, so why not a crawler?
Because the rules a well-functioning society runs by are more nuanced than "Is it technically possible to do this?"
If you'd like a specific example of why people might seek this courtesy, someone might have a page or group of pages on their site that works fine when used by the humans who would normally use it, but which would keel over if bots started crawling it, because bot usage patterns don't look like normal human patterns.
A society is composed of humans. But there are (very stupid) AIs loose on the Internet that aren't going to respect human etiquette.
By analogy: humans drive cars and cars can respond to human problems at human time-scales, and so humans (e.g. pedestrians) expect cars to react to them the way humans would. But there are other things on, and crossing, the road, besides cars. Everyone knows that a train won't stop for you. It's your job to get out of the way of the train, because the train is a dumb machine with a lot of momentum behind it, no matter whether its operator pulls the emergency brake or not.
There are dumb machines on the Internet with a lot of momentum behind them, but, unlike trains, they don't follow known paths. They just go wherever. There's no way to predict where they'll go; no rule to follow to avoid them. So, essentially, you have to build websites so that they can survive being hit by a train at any time. And, for some websites, you have to build them to survive being hit by trains once per day or more.
Sure, on a political level, it's the fault of whoever built these machines to be so stupid, and you can and should go after them. But on a technical, operational level—they're there. You can't pre-emptively catch every one of them. The Internet is not a civilized place where "a bolt from the blue" is a freak accident no one could have predicted, and everyone will forgive your web service if it has to go to the hospital from one; instead, the Internet is a (cyber-)war-zone where stray bullets are just flying constantly through the air in every direction. Customers of a web service are about the same as shareholders in a private security contractor—they'd just think you irresponsible if you deployed to this war-zone without properly equipping yourself with layers and layers of armor.
Honestly, that is the site owner's problem. If it can be found by a person, it's fair. I genuinely respect the concept of courtesy, but I don't expect it. People can seek courtesy, but they should have realistic expectations about whether or not it will happen.
Techies forget how laws work. A DoS has intent. A bot crawling a poorly designed website and accidentally causing the site owner problems does not have malicious intent. The owner can choose to block the offender, just like a restaurant can refuse service. But intent still matters.
This thread is about what behavior we should design crawlers to have. One person said crawlers should disregard noindex directives on government sites, and you replied that they should ignore all robots.txt directives and just crawl whatever they can. If you intentionally ignore robots.txt, that has intent, by definition.
Not intentionally ignoring it by going out of their way to override it, just not being required to implement a feature in their crawler. Apparently parsing those files is tricky, with edge cases. Ignoring the file is absolutely on the table. People can of course adhere to it, but it's not required, and in my opinion it shouldn't even be paid attention to.
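For what it's worth, a crawler that does want to honor robots.txt doesn't have to write a parser: Python's standard library ships one. A minimal sketch (the URL and user-agent string are made-up examples):

    # Check whether a given user-agent may fetch a URL, per the site's robots.txt.
    # Uses only the standard library; example.com and "ExampleBot" are placeholders.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()   # fetches and parses the file

    for url in ("https://example.com/", "https://example.com/private/page"):
        allowed = rp.can_fetch("ExampleBot", url)
        print(f"{url} -> {'allowed' if allowed else 'disallowed'} for ExampleBot")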
In my younger years the only time I ever dealt with robots.txt was to find stuff I wasn't supposed to crawl.
If you don’t want something public, don’t allow a crawler to find it or access it. The people you want to hide stuff from are just going to use search engines that ignore robots.txt
And if you don't want to switch, there are things like Privacy Badger[1] and Ghostery[2]? Although, strangely, Privacy Badger does not seem to mention its Chrome extension[3].
These aren't really a replacement when the browser itself is compromised and the corporation behind it is leveraging it to try to take control of the web standards.
From the article: In the centre, at Lujiazui, the city has given up on the street level entirely and created a huge, circular, above-grade walkway accessible by escalator and connecting to all surrounding buildings.
But that's probably not what you meant by "solved".
Interesting. I didn't know this was a problem. Kanji uses Chinese characters, and I'm assuming the existence of Chinese dictionaries means sorting has been solved for Chinese characters, so why can't the same method be used for kanji?
Scanning isn't going to work if it's an IPv6 network. I would just use Wireshark; the busy device would probably be your camera, if it's on the network.
And probably don't use an IPv6 network if you're the host unless you want to hide a device on the network.
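If Wireshark feels heavyweight, the same idea (find the chattiest device) can be scripted. A rough sketch with scapy; the interface name and 30-second capture window are assumptions, and it needs root:

    # Passively watch the network for a bit and rank devices by traffic volume;
    # the camera will usually be among the top talkers. Needs root privileges
    # and: pip install scapy. Interface name and 30s window are assumptions.
    from collections import Counter
    from scapy.all import Ether, sniff

    bytes_by_src = Counter()

    def tally(pkt):
        # Count bytes per source MAC so it works for both IPv4 and IPv6 traffic.
        if pkt.haslayer(Ether):
            bytes_by_src[pkt[Ether].src] += len(pkt)

    sniff(iface="eth0", prn=tally, store=False, timeout=30)

    for mac, nbytes in bytes_by_src.most_common(10):
        print(f"{mac}  {nbytes} bytes")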
Use case? You're in the middle of a protest. Where to next?