RicoElectrico's comments | Hacker News

This is a clown economy... or maybe it always has been, ever since we financialized it.


     Not being a programmed microcontroller, the LM8560 is also a virtually eternal component. Many modern microcontrollers incorporate flash memory to store the software that lets the controller work and execute the desired functions. Flash memories do not retain their content for an unlimited lifespan. It may be several decades, but sooner or later the day comes when they begin to lose their content, and the microcontroller stops working. This can't happen to the LM8560, because it doesn't contain any flash memory.
That's a strawman, as the cheapest devices using microcontrollers use mask ROM.


Mask ROM is actually starting to become less common as the price of OTP flash has dropped significantly, and changes can be implemented without paying for a new set of masks.


    Collaboration fuels innovation: Apple's reliance on specialized third-party suppliers, such as TSMC and Bosch, enables it to focus on refining design and user experience while leveraging the technical expertise and R&D of its partners for advanced components.
    Outsourcing is a strategic necessity: The complexity and cost of producing certain components, such as chips and sensors, make it impractical for Apple to develop everything in-house. By outsourcing, Apple taps into specialized knowledge while staying competitive.
    Suppliers are key to Apple's success: Companies like SiTime and Texas Instruments provide essential components that allow the Apple Pencil Pro to deliver high performance, showcasing the importance of Apple's global network of partners.
Excuse me, this is how almost every high tech company works? Feels like an LLM wrote it, especially considering how empty it is, reiterating the same thing 3 times with different buzzwords.


    At the time, Intel believed the mobile phone processor market was too small to justify the immense R&D investment required to deliver on Apple's request. Designing chips is an incredibly costly endeavor, and Intel assumed that the volumes wouldn't be large enough to cover those costs. In hindsight, this decision would go down as one of the biggest missed opportunities in business history. Otellini later admitted that Intel had drastically miscalculated, underestimating potential demand by 100x.
Admittedly this one is funny given the "Intel lifecycle", as my colleagues who are Intel alumni put it: acquire a company for a mountain of cash (much more than it would have cost to develop an SoC back in the 2000s), only to fumble its potential. And the current CEO might be continuing this lunacy, if the rumors about an AI startup acquisition are true.


Likewise with DRM.


Being a nerdy kid in the 80’s, I can’t see the acronym MCP without thinking, “You’re in trouble program. Why don’t you make it easy on yourself. Who’s your user?”


Well that one at least has appreciable parallels :)

Letting an LLM loose on a real system without containing it in a sandbox sounds about as predictably disastrous as letting a glorified chess program run all ENCOM operations…


And your mom who grew up in the 1960s might have yet another interpretation in mind ( https://www.ebay.com/itm/305272862225 ). MCP is definitely an overloaded acronym at this point.


Well, my mom was in her mid-twenties by the time that phrase came into usage, but point still well taken.


They should have called it OCP, the Omni Control Protocol.


Over Current Protection


Digital Radio Mondiale?


Interesting, because I do detect many BLE beacons in a residential building. I can infer that these aren't bona fide beacons, but I'm not sure what devices they are.


They could be anything from indoor location augmentation to hundreds of TV, headphone, pacemaker and other media/medical/anything-you-can-imagine devices.


In the USA, the trial itself is a punishment.


Kafka would be proud/horrified.


> A driver developer noticed that it was possible to turn off the built-in vertical and horizontal blank suppression

Aren't you confusing that with Fresco Logic USB to VGA?


Maybe not only reference software, but also reference RTL should be provided? Yes, that's more work, but it should speed up adoption immensely.


There's no point having reference RTL. The point of reference software is to demonstrate the correct behaviour for people implementing production grade libraries and RTL. Having an RTL version of that wouldn't add anything - it should have identical behaviour.

Providing a production grade verified RTL implementation would obviously be useful but also entire companies exist to do that and they charge a lot of money for it.


Could help people on the hobby or lower-budget FPGA side. H.264/5/etc. never really made it there.


There is absolutely no way an FPGA would make sense. The requirements for AV1 and H265 far exceed the hardware resources of lower budget FPGAs. For the same process, FPGA logic density is about 40x lower than ASIC, and lower budget FPGAs use older processes.

An H.265 or AV1 decoder requires millions of logic gates (and DRAM memory bandwidth). Only high-end FPGAs provide that.


There's mention that the decode could get a lot easier. Here's an H.264 core that runs on older Lattice chips and takes only 56k LUTs: https://www.latticesemi.com/products/designsoftwareandip/int... . Microchip's PolarFires have a soft H.264 core as well, taking under 20k. If AV2 will really be easier for hardware to implement, it might work out. Here's another example: H.264 decode on an Artix-7 that can do 1080p60: https://www.cast-inc.com/compression/avc-hevc-video-compress... . So with all due respect, what in the world are you talking about?


I didn't mention h264 for a reason. It's a codec that was developed 25 years ago.

The complexity of video decoders has been going up exponentially and AV2 is no exception. Throwing more tools (and thus resources) at it is the only way to increase compression ratio.

Take AV1. It has CTBs that are 128x128 pixels. For intra prediction, you need to keep track of 256 neighboring pixels above the current CTB and 128 to the left. And you need to do this for YUV. For 420, that means you need to keep track of (256+128 + 2x(128+64)) = 768 pixels. At 8 bits per component, that's 8x768=6144 flip-flops. That's just for neighboring pixel tracking, which is only a tiny fraction of what you need to do, a few % of the total resources.

These neighbor tracking flip-flops are followed by a gigantic multiplexer, which is incredibly inefficient on FPGAs and it devours LUTs and routing resources.

A Lattice ECP5-85 has 85K LUTs. The FFs alone consume 8% of the FPGA. The multiplexer probably consumes another conservative 20%. You haven't even started to calculate anything and your FPGA is already almost 30% full.

FWIW, for h264, the equivalent of that 128x128 pixel CTB is a 16x16 pixel MB. Instead of 768 neighboring pixels, you only need 16+32+2*(8+16)=96 pixels. See the difference? AV2 retains the 128x128 CTB size of AV1, and if it adds something like the MRL of h.266, the number of neighbors will more than double.
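The arithmetic in the last few paragraphs is easy to sanity-check with a quick sketch (the 2x-width "above" row, 4:2:0 subsampling, and 8-bit components are taken from the comment itself, not derived from the specs):

```python
# Neighbor pixels tracked for intra prediction, per the estimates above:
# 2*size pixels above + size to the left, for luma plus two 4:2:0
# chroma planes at half resolution, 8 bits per component.

def neighbors(size):
    luma = 2 * size + size                      # above + left, luma plane
    chroma = 2 * (2 * (size // 2) + size // 2)  # two half-resolution planes
    return luma + chroma

print(neighbors(128), 8 * neighbors(128))  # AV1 128x128 CTB: 768 px, 6144 FFs
print(neighbors(16))                       # H.264 16x16 MB: 96 px
```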

H264 is child's play compared to later codecs. It only has a handful of angular prediction modes, it has barely any pre-angular filtering, it has no chroma-from-luma prediction, it only has a weak deblocking filter and no loop filtering. It only has one DCT mode. The coding tree is trivial too. Its entropy decoder and syntax processing are low in complexity compared to later codecs. It doesn't have intra-block copy. Etc., etc.

Working on a hardware video decoder is my day job. I know exactly what I'm talking about, and, with all due respect, you clearly do not.


Hmmm, so you're ignoring the crux of my argument because it's convenient for you (H.264 is comfortably small, AV1 is maybe too big, so something between them might work). So anything related to why AV1 won't fit is pointless. They know that and are improving on it.

Your argument about the large number of flops is odd. You would only store data that way if you needed all of it in the same cycle. You say there's a multiplexer after that. Data storage plus a multiplexer is just a memory. You could use a BRAM or LUTRAM, which would cut that down dramatically; whether you can is a big "if" that depends on the later processing, which you haven't defined. And even then, that's for AV1, which isn't AV2 and may change.
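For what it's worth, the RAM idea at least works capacity-wise. Assuming ECP5-style 18 Kbit EBR blocks (my assumption; check the family datasheet), the 6144 bits of neighbor state estimated upthread fit in a single block:

```python
# Capacity check only -- this says nothing about the latency cost of
# turning single-cycle flip-flop access into block-RAM reads.
ebr_bits = 18 * 1024      # one ECP5 EBR block (assumed 18 Kbit)
neighbor_bits = 768 * 8   # the 6144-bit neighbor estimate from upthread

blocks_needed = -(-neighbor_bits // ebr_bits)   # ceiling division
print(blocks_needed)
```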


I’m ignoring h264 because it’s irrelevant in a discussion about AV2, for the reasons that I already brought up in my earlier reply. It’s like having a discussion about a Zen CPU and bringing up the 8088 architecture.

Let’s cut to the chase. AV2 will not be smaller than AV1 at all. The linked article doesn’t say that. The slides don’t say that either.

The only thing that could make somebody think it's smaller is the claim that all tools have been validated for hardware efficiency. The goal of that process is to make sure that none of the new tools makes the HW unreasonably explode in size, not to make the codec smaller than before, because everyone knows that's impossible if you want to increase the compression ratio.

Let’s look at 2 of those new tools. MRLS: this adds multiple reference lines, just like I expected there would be. Boom! Much more complexity for neighbor handling. I also see more directions (more angles.) That also adds HW. The article mentions improved chroma from luma. Not unexpected because h266 already has that, and AV2 needs to compete against that. AV1 has a basic 2x2 block filter. I expect AV2 to have a more complex FIR filter, which makes things significantly harder for a HW implementation.

You are delusional if you think AV2 will be smaller than AV1.

The reason I brought up neighbor handling is because it’s so easy to estimate its resource requirements from first principles, not because it’s a huge part of a decoder. But if neighbors alone already make a smaller FPGA nearly impossible, it should be obvious that the whole decoder is ridiculous.

So… as for storing neighbors in RAM: if I brought this up at work, they'd probably send me home to take a mental health break or something.

Neighbor processing lives right inside the critical latency loop. Every clock cycle that you add in that loop impacts performance. You need to update these neighbors after predicting every coding unit. Oh, and the article mentions that the CTB size (“super block” in AV2 parlance) has been increased from 128x128 to 256x256. Good luck area reducing that. :-)


I think when they talk about AV2 being more hardware friendly they mean compared to AV1 not H.264.


Yeah, so if H.264 fits comfortably and AV1 is maybe too big, then being better than AV1 could mean it's possible.


Can a "lower budget" FPGA really outperform a consumer-grade CPU for this?

And what hobbyist is sending off decoding chips to be fabbed? If this exists, it sounds interesting if incredibly impractical.


It’s not possible on any but the largest $$$ FPGAs… and even then we often need to partition over multiple FPGAs to make it fit. And it will only run at a fraction of the target clock speed.


Guess the corporate development team needed to justify its existence. We've seen many dubious acquisitions in the tech sector over the last 5 years or so.


The "press y to start" is a nice touch, regardless of validity of the whole experience.

(It's probably meant as an OCD trigger; barely anyone uses anything but space/enter/any key to dismiss the splash screen)

