
> We're on 400Mbps, and even then I manage to block internet for others when I download a large file at full speed.

That's more of a router/QoS issue. A large download shouldn't degrade quality for other uses, especially VoIP. If the place you're downloading from has a bigger pipe than yours and can saturate your bandwidth, you're going to need to implement some kind of rate limiting/queue management.
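
For illustration, here's a toy sketch of the token-bucket shaping that router QoS/SQM features boil down to. The numbers and names are placeholders, not any particular router's implementation:

    /* Minimal token-bucket shaper: lets traffic through at a steady rate so a
       bulk download can't starve interactive flows like VoIP or Zoom. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        double tokens; /* bytes currently available to send */
        double rate;   /* refill rate, bytes per second */
        double burst;  /* bucket capacity, bytes */
        double last;   /* time of the last refill, seconds */
    } token_bucket;

    /* Returns true if a packet of len bytes may be sent now; otherwise the
       caller queues (or drops) it until enough tokens have accumulated. */
    bool tb_allow(token_bucket *tb, double now, uint32_t len) {
        tb->tokens += (now - tb->last) * tb->rate;
        if (tb->tokens > tb->burst) tb->tokens = tb->burst;
        tb->last = now;
        if (tb->tokens >= (double)len) {
            tb->tokens -= (double)len;
            return true;
        }
        return false;
    }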


I've seen a lot of people complaining that their 100M+ connections are "slow" for that reason; they upgrade to 1G and complain again. It's not like they're doing much on that 1G line either - say, a Steam game download at full speed alongside a stuttering Zoom call. You'd be surprised how far you can push a 10M DSL connection with good QoS (assuming you don't have multiple users streaming video and such).


Steam will saturate the downstream link, so it's no surprise that, without QoS, it kills Zoom calls.


> included alpha channels

I would be very surprised by this. Raster ops, yes. Alpha-channel image compositing operations, no way - that's a whole other level of complexity.
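
For what it's worth, the gap in a nutshell: a "transparent blt" raster op is just a per-pixel select, while real compositing needs a multiply-add per channel. A toy C sketch (not how the Cirrus hardware actually implements either path):

    #include <stdint.h>

    /* Raster-op style binary transparency: copy src unless it matches the
       key colour, in which case the destination shows through. */
    uint8_t keyed_blt(uint8_t src, uint8_t dst, uint8_t key) {
        return (src == key) ? dst : src;
    }

    /* Porter-Duff "over" for a single channel: dst' = src*a + dst*(1-a),
       with alpha in 0..255. This per-pixel arithmetic is what binary
       transparency avoids. */
    uint8_t alpha_over(uint8_t src, uint8_t dst, uint8_t a) {
        return (uint8_t)((src * a + dst * (255 - a)) / 255);
    }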


Not sure why you are being downvoted; it's a somewhat fair point.

So I looked it up: the 542x series in this article does support "transparency" blt operations, but it might be fair to call them raster ops rather than full blending. They would be sufficient for the parallax scrolling Duke Nukem here was implementing, which was sorta my original point.

OTOH, full 32-bit ARGB support shows up in the Cirrus line of adapters with the next revision, the 543x - mostly for video overlay, although the way it's wired seems to leave a lot of doors open for interesting effects too.

And full-blown alpha blending shows up one generation later, in the 546x series.

So there is full hardware support by the mid-1990s, in fairly low-end hardware/PCs at that point. I've said before on this board that alpha blending was going on all through the 1990s, and there are various ways to "cheat" and speed up what is presented in the '84 paper (https://dl.acm.org/doi/10.1145/964965.808606). Pulling my copy of CGPnP off the shelf, it says under compositing "since it is fairly easy to do". I give you a demo from '92 with real-time apparent blended transparency: https://www.youtube.com/watch?v=pLJhtefcoPE - see about 4 minutes in. I sorta doubt this is the first case, but it was one I vaguely remembered, since I was hacking these kinds of things with my schoolmates around that time as we tried to emulate what we saw others doing. And none of us went into computer graphics or the gaming-oriented parts of the technology fields.

BTW: that demo notes it needs a 386 + VGA, so we are talking late-1980s PC hardware.


Binary transparency :). The 1992 Cirrus Logic GD5402, aka AVGA2, supported masking.

http://www.vgamuseum.info/index.php/cpu/item/130-cirrus-logi... "D3 Selects the memory read data latches to be eight bytes wide, instead of the normal four bytes. This bit can be used in Write Mode 1, in order to rewrite 8 latched pixels (64 bits) back into display memory. This bit should be used in X8 addressing mode only."

= Set/Reset and Compare registers extended to a full 8 bits, and read/write mode 1 extended to 8 bytes at a time with extra foreground/background masks. If I'm calculating correctly, this means you can perform internal copies at 12-20 MB/s and line/pattern fills at 24-40 MB/s.
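
Rough sanity check of those figures - the per-access time is my assumption (anything around 200-330 ns per latched access lands in the stated ranges), not a documented spec:

    #include <stdio.h>

    int main(void) {
        /* Assumed time for one latched 8-byte video memory access. */
        double access_s = 250e-9;
        /* A copy needs a read plus a write per 8 bytes; a fill only writes. */
        double copy_mb_s = 8.0 / (2.0 * access_s) / 1e6;
        double fill_mb_s = 8.0 / access_s / 1e6;
        printf("copy ~%.0f MB/s, fill ~%.0f MB/s\n", copy_mb_s, fill_mb_s);
        return 0;
    }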


Forgot to add: the biggest problem was the lack of a universal DOS graphics API. VESA came out with VBE/AF way too late: https://en.wikipedia.org/wiki/VESA_BIOS_Extensions#VBE/accel...

Things might have looked different with VBE/AF fully defined in early 1992 and subsequent graphics products all providing at least partial support (8/16-bit panning and sprites would have been enough).

Instead everything was too late. Even the VESA LFB never truly got implemented on ISA cards (AFAIK only the ATI Mach64, and maybe experimental support in the ET4000?), and only started working with VBE and PCI.


What block layer cache?


I think I was wrong - typed too quickly. There's (potentially) a file buffer/block cache available, but it's for caching filesystem blocks of content, not raw disk blocks.


> Programmers are not commodities

But usually, they are treated as such. American companies have a hard time treating most of their white collar workforce as anything but. On the other hand, Stripe has been seemingly well managed up to this point - but they have only existed in happy times so far. Many companies change their tune when the chips are down.

> It's also very expensive and difficult to hire new ones, even if you're hiring them at a cheaper salary than the last ones.

This may be true - but the average tenure of a tech worker shows most firms are not able to act on this.

> Stripe 100% wants to retain its employee base, just like any company would.

I wouldn't put it past Stripe, but "just like any company would" is pretty naive. Serious retention efforts are, in my observation, very much the exception. This also weakens your argument - is Stripe actively working to retain talent, or are they just like "any company"? If it's really the latter, then they are fucked.


> and not when the console was still in production?

The "Atari 8-bit" refers to a line of 6502 based personal computers manufactured from 1979 until 1992! There was a commercially unsuccessful video game console, the 5200, derived from this general architecture - it's wild that for their illustrious place in video game history, Atari was a one hit wonder - they never really succeeded in a console release after the 2600.

Atari also had a line of 68k-based 32/16-bit computers that was sadly also discontinued in 1992 so that Atari could focus on their pathetic attempts to break back into the console market.


> it's wild that for their illustrious place in video game history, Atari was a one hit wonder

They had many, many popular coin-op machines.


I think the part of the sentence that you took out made it clear the point was about consoles.

The Atari Pong console was also successful, but I tend not to count it, as I was only considering programmable consoles.


> 219 bytes per second and with 98 percent accuracy.

Perhaps I'm a boomer, but 219 Bps is damn fucking fast - faster than the first few modems I used.

> where do you draw the line?

Probably somewhere far fucking below any point where human communication was deemed practical in the past 150 years.

Somewhat pathetic to me that people can't imagine 200 Bps as a usable bandwidth.


Yeah, it's between 1200 baud and 2400 baud - which would be right around where dialup was in the 80s. Plus the attack runs at 3.9KB/s on AMD processors - over 10 times faster!


Hell it’s not much slower than the UART comms we’re using in our product at work.


> Yeah, it's between 1200 baud and 2400 baud - which would be right around where dialup was in the 80s.

Eh?

219 Bps ~= 2100 bps (effective)

Assuming QAM that'd be something under 600 baud.

Baud and bps are not the same thing.
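
Worked out, with my own assumptions for framing and modulation (8N1 and 16-QAM, just to show where the numbers land):

    #include <stdio.h>

    int main(void) {
        double bytes_per_s = 219.0;
        double wire_bps = bytes_per_s * 10.0; /* 8N1 framing: start + 8 data + stop = 10 bits/byte */
        double baud = wire_bps / 4.0;         /* 16-QAM carries 4 bits per symbol */
        printf("~%.0f bps on the wire, ~%.0f baud under 16-QAM\n", wire_bps, baud);
        return 0;
    }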


Fair! Thanks for the correction. I intended to say bits per second. Luckily I think I’m still in the right ballpark for speeds (at least in the early 80s).

Did any of those old modems use QAM? It seems like many of the older protocols were FSK/PSK-based (yes, special cases of QAM, but not decoded the same way).


Good question. I've long since tossed my various modems from that era. I started using dial-up systems around 1987 or 1988, working through 300, 1200/75, and 2400 bps, etc.

Referring to Wikipedia [0], I note V.22bis was released in 1984, and while they're a bit vague, they suggest 2 or 3 bits per signalling change in those 1200 bps modems.

I also recall, though perhaps later, that vendors were often champing at the bit with "proposed spec"-compliant hardware available prior to the formal specifications dropping, so it was not uncommon to have some of those new-speed devices earlier (obviously with some risk attached). I think that was more around the 14400 bps era, though.

Anyway, my extremely solid Telecom 2400 bps modem was almost definitely running at 600 baud (produced in the late 1980s, though in my possession from the very early 1990s).

Depressingly, a lot of ostensibly technical articles (e.g. [1]) conflate baud with bps, making them low-trust historical references.

[0] https://en.wikipedia.org/wiki/Modem#1980s

[1] https://www.techradar.com/au/news/internet/getting-connected...


It's a fair point, but back in the day "everybody" (especially the younger BBS and hobbyist crowd) incorrectly referred to them as 1200, 2400, or even 9600 baud modems - as evidenced by the numerous historical references.


Sure, but we're on HN, not at a 1980s breakfast club.


That's like AOL dialup territory right there.


Wow, people like to shit on Apple, but this is yet another example of something that just works.


I have a QR app because I need advanced options, but I just tested my stock camera app: when presented with a barcode or a QR code, it reads them without any action needed. It offers to open the link in the browser, to connect to the WiFi, or to search for the barcode's number.

My guests don't ask for the WiFi password. They see the QR code hanging and ask "what is this QR for?" - "It connects to my WiFi" - and they immediately try it (successfully) out of curiosity: "It's cool, how is it done?" They don't understand the magic behind it, but they don't need a guide to use it.
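
For the "how is it done" part: the QR code just encodes a small text payload that camera apps recognise. Roughly (the SSID and password below are placeholders):

    /* De-facto Wi-Fi QR payload format (popularised by the ZXing project):
       WIFI:T:<auth>;S:<ssid>;P:<password>;H:<hidden>;;
       "MyNetwork" and "hunter2" are placeholders, not real credentials. */
    const char *wifi_qr_payload = "WIFI:T:WPA;S:MyNetwork;P:hunter2;;";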


Android phones might all do this now for all I know, I have no idea. My phone is a few years old, so it's hard to say.


A QR code is not particularly dense (compared to, say, a hard drive) - why waste space that could instead be put towards more redundancy (error correction)?


> But 40% of non-obese Americans are sick with metabolic syndrome.

40% of the non-obese adult population in the US? More BS, dude.

It should be easy to point to some relatively reputable public health agency or peer reviewed publication where this is substantiated.

Among non-overweight individuals I'm going to peg it more around 5%.


Sucrose is hydrolyzed in the gut by the enzyme α-glucosidase into glucose and fructose. Without any other absorptive buffer, it's basically the same.


> it's basically the same

Not really. Fructose is JUST fructose; sucrose is half fructose.


"high fructose" corn syrup--what's typically used to replace sucrose--is 55% fructose, not 100% fructose. IOW, only 5% more fructose. (It can even be less than sucrose, as low as 42%, but for sake of argument 55% seems fair.)


~55%, plus another 50% of the sucrose content once it's broken down in the gut, which means it's effectively somewhere around 75% sucrose for the body to break down~

EDIT: never mind - I assumed the rest was sucrose rather than glucose.


I'm not following. HFCS is 55% fructose and 45% glucose, not 45% sucrose.


Fair point. I incorrectly assumed the parent was comparing fructose to sucrose directly.


"Basically the same" as fructose and glucose separately.

In soda, the ingredients say sugar, but it has already been hydrolyzed into fructose and glucose in the bottle.


If we're talking about sugared soft drinks in the US, this is invariably HFCS.


There is no point in quibbling.

"High fructose corn syrup" is the same as table sugar, to your body.


So why do people make a big deal out of it? Does it just taste bad?


Hipsterism. You cannot trust the taste of anybody who touches either of them.

