So? It is still a pretty popular and useful piece of software even if your circle doesn't use it.
One of the big barriers to having more people use Linux is having the software packages they use to actually do work available on the platform. Image editing is the most popular software category that doesn't really have a Linux equivalent to the commercial package everyone uses.
The point is that if none of the hundreds or thousands of people I know use it, then it seems safe to say that the vast majority of people don't use it either. So it's not a strong argument against Linux becoming the standard OS, which is what is happening now regardless of how much some people don't want it to happen.
A large part of it is that for most people, the vast majority of their computer use is in a web browser. Even "standalone" programs are often just an Electron app, so they don't even have to use their computer differently than they are used to. Yes, Windows has gotten bad, and Linux no longer has some of the major issues people would frequently run into (e.g. hardware compatibility is largely a non-issue, audio just works, etc.), but I think it is mostly that things are just way more platform agnostic today.
I was annoyed recently because I replaced my GPU and I had to boot into Windows for the first time in months and install drivers just to turn off the RGB on the card because OpenRGB wouldn't find it.
>"you can't appreciate good playback until you've heard awful playback on shitty record players like I had to.". My eldest is now plotting a complete hifi system
This has strong energy of "Teach your kids how to play Magic, they won't have money for drugs."
I think the worry about power consumption is a bit overblown in the article. My NAS has an i5-12600 + Quadro P4000 and uses maybe 50% more power than the one in this article under normal conditions. That works out to maybe $4/month more cost. Given the relatively small delta, I'd encourage picking hardware based on what services you want to run.
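For reference, the back-of-envelope math (the ~50 W delta and the electricity rate here are just my assumptions; plug in your own numbers):

    # extra cost of roughly 50 W of additional continuous draw
    extra_watts = 50
    price_per_kwh = 0.12                              # USD, varies a lot by region
    extra_kwh = extra_watts / 1000 * 24 * 30          # ~36 kWh per month
    print(f"${extra_kwh * price_per_kwh:.2f}/month")  # ~$4.32/month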
Less power, less heat. Less heat, less cooling required. At some point that allows you to go fanless, and that's very beneficial if you have to share a room with the device.
Since this is about NAS, you very likely have a bunch of HDDs connected to it. And if you do, I feel like they'll "out-noise" a lot of cooling solutions as long as the fans are not spinning at max by default.
I'm with you, but my "NAS" is also really just a server, running tons of other services, so that justifies the power consumption (it's my old 2700X gaming rig, sans GPU).
But I do have to acknowledge that the US has relatively low power costs, and my state in particular has lower costs than that even, so the equation is necessarily different for other people.
Indeed, I always compare it with what I'd pay if I ran it via cloud services, and the electricity cost pales in comparison.
My NAS is around 100W (6-year-old parts: i3 9100 and C246M), which comes to $25/£18 per month (electricity is expensive), but I can justify it as I use many services on the machine and it has been super reliable (running 24/7 for nearly 6 years).
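For the curious, that works out as follows (the ~£0.25/kWh rate is just back-figured from my bill; yours will differ):

    # 100 W running 24/7 at an assumed ~£0.25/kWh
    kwh_per_month = 100 / 1000 * 24 * 30   # 72 kWh
    print(kwh_per_month * 0.25)            # ~£18/month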
I will try to see if I can build a more performant/efficient NAS from a mix of spare parts and new parts this coming month (still only Zen 3: 5950X and X570), but it is more of a fun project than a replacement.
This is why I stated that the important part is sizing the machine for your use case. I use my NAS as far more than just a storage server; it also runs a couple VMs and about 20 docker containers all the time. Plus I've got my Windows VM that I boot up for the few programs I use that don't have a Linux equivalent (which is also the only time the P4000 is working). That's a very different comparison than just cloud storage.
The article buries the lede on the only point that matters with these "AI" hardware devices. They need to solve a problem their customers have, and all the devices these companies have released so far don't do anything that a smartphone can't easily do.
An "AI Hardware" device that needs a network connection, and makes API calls to the mother ship, in order to accomplish anything, is not interesting. It's a Raspberry Pi.
On the other hand, a device that could work offline is interesting. One that could work in a zombie apocalypse is even more interesting. Especially if it was solar powered and contained the knowledge needed to rebuild society.
> An "AI Hardware" device that needs a network connection, and makes API calls to the mother ship, in order to accomplish anything, is not interesting. It's a Raspberry Pi.
Kind of an unnecessary dig at the Raspberry Pi, no? Modern Pis and SBCs in general are good at lots of things. I use mine for self-hosting some apps I use, and I've definitely seen them used in little compute clusters for AI inference.
> On the other hand, a device that could work offline is interesting. One that could work in a zombie apocalypse is even more interesting. Especially if it was solar powered and contained the knowledge needed to rebuild society.
This is kind of interesting in an abstract sense; it's fun to imagine burying a solar-powered oracle in a hardshell case in a bunker somewhere so that some hypothetical person can use it to bootstrap civilization after the end, but that's all it really is: hypothetical. Fun to imagine. A project for hackers or maybe a non-profit. It certainly fails the "toothbrush test" mentioned in the article; no one will be consulting their doomsday box once or twice a day (absent a doomsday, anyway).
If I can be really reductionist for a second, I think there's a lot of AI cart-before-horse happening with these hardware products. Smartphones changed the world a decade and a half ago because they took something that people wanted--the internet, but mobile--and finally made it work. Since then they've dramatically changed the landscape of the internet and social media etc, but the idea--that people already had the internet but wanted to interact with it in a different way--should probably be the foundation for how we think about AI hardware products. What can they do for people better than what they already have? We should not need the benefit of hindsight to see why something like the Humane AI Pin, which doesn't really do anything and does it badly, failed.
RPis just aren't interesting. It's a full-fat computer on a small board that does everything a normal computer does, just small.
PCs just aren't interesting because they're all fundamentally the same thing and are capable of the same set of tasks.
RPis aren't interesting because 99.9999% of projects they're put in are better served by a microcontroller and not an entire Linux system. If not just a 555. It's boring to throw an entire Linux computer into a project. You've utterly given up on the hardware and have assumed you can do everything with software.
RPis (and similar) are interesting because they're a tiny full-fat Linux computer with user-accessible GPIOs. That's the only special thing about them. There are other tiny Linux computers, but RPi-like computers have GPIOs, so they can use communication protocols that don't have standard external headers like I2C & SPI, switch relays, etc. They're interesting precisely because they can do what a microcontroller can do, while still being a full Linux system.
I mostly use them for situations where I'd otherwise need a small Linux computer and a microcontroller together, e.g. for CAN + DoIP simulation with SCPI control of test equipment over Ethernet & relay control of a vibration table.
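As a rough illustration of that combo (GPIO 17, the 0x48 I2C address, and the gpiozero/smbus2 libraries are just example choices on my part, not the only way to do it):

    # switch a relay from a GPIO pin and read a byte over I2C,
    # all from normal Linux userland (requires gpiozero and smbus2)
    import time
    from gpiozero import OutputDevice
    from smbus2 import SMBus

    relay = OutputDevice(17)   # relay module wired to GPIO 17
    relay.on()                 # the kind of thing you'd otherwise hand to a microcontroller
    time.sleep(1)
    relay.off()

    with SMBus(1) as bus:      # I2C bus 1, no standard external header needed
        print(bus.read_byte_data(0x48, 0x00))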
Your market is going to be doomsday preppers. Can you imagine starting a business in that market? I bet the trade shows are filled with bunker developers, underground infrastructure dealers and arms dealers.
I'm imagining a stratified market with two distinct customer personas: very rich and paranoid, and very poor and paranoid.
Well, I wouldn't put it past that. It could literally feed Android into an LLM context window and probably redesign it to be even better than it already is.
Android still has weird laggy jumps and just is not that smooth. Even on the new pixel devices.
Just wait until we can just feed a serialization of our "coworker's" mind into the context and get an improved coworker who has more practical skill in applying LLMs!
Agreed, I've run into just enough installers that don't work with Ventoy that I've defaulted back to using Etcher when I need access. The 5-minute wait is worth it over the frustration of booting into Ventoy and finding it doesn't work with the ISO I'm trying to use.
Really? I don't go out and photograph nearly as much as I used to, but nobody has ever reacted with anything other than interest at what I'm doing. I was recently traveling to a couple cities I had last been to 5-10 years ago and was shocked at how packed places were with people getting their photos taken; I have photos that would be impossible to take again because there would be people in the way.
This is assuming that "AI" isn't already being used extensively on manufacturing lines. Computer vision has used "AI" neural networks for years for various tasks. The issue is that automated assembly takes a lot of investment to implement, and there are still enough places in the world where labour is cheap enough to make it not worth it. As I said to one of my suppliers recently when they asked how their factory compared to others: "Automation is nice to have, but at the end of the day I'm choosing a vendor based on who can get me the product cheapest, quickest, and with high quality."
The main argument would be that if you rely on other countries and can't produce anything yourself, then you need those countries to remain good trading partners. If the relationship with those trading partners fails, your economy is in trouble.