I would argue that intelligence is the ability to survive in your environment, i.e. to maintain homeostasis: sleeping, eating, drinking, reproducing and avoiding pain. Most of that does require prediction.
I think that depends on the type of animal. There are plenty of animals (insects, fish, reptiles) that don't rely on intelligence, but instead survive just by being well adapted to their environment. Behavior in these animals is largely hardwired via genetic coding.
Other classes of animals (birds, mammals) have evolved to be generalists, which required them to evolve intelligence in order to adapt to diverse environments.
So, seeing as evolutionary success isn't inherently tied to intelligence, it seems better not to define it that way. One could still "define" it as the capability that helps these generalist classes of animals meet some of their survival needs, but that's really only saying what the benefits of intelligence are, not what it actually is.
I still think my predictive definition of intelligence is hard to beat, since it seems about as fundamental a definition as is possible.
Looks like it has a clip to attach to the front of a pair of glasses.
Obviously suboptimal, really needs to be integrated into some old-timey welding goggles for the steampunk vibe.
At $349 it's almost in the range where I'd buy it to play around with, but I suspect it'd end up in a drawer like a bunch of other things that looked cool but I never got around to hacking on. Once the technology gets good enough that they can fit it into a regular pair of glasses, then we can talk, though looking at the Bluetooth Aftershockz headset I use, that day probably isn't too far off.
I suspect that fitting something like this into what looks like standard prescription glasses will require some advances in manufacturing optical materials. I think it's already possible to make the appropriate lenses to sufficient precision, but the cost is a bit prohibitive.
The other issue is that the projection display is a bit low-res for what I'd really like to see, but 1 pixel works out to ~0.5mm at 1m, so it might be ok. That might be precise enough for (for instance) identifying particular through-hole connections on a circuit board at a rework station. It's certainly adequate for identifying components on an engine.
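Rough back-of-envelope for anyone who wants to sanity-check that ~0.5mm figure; the resolution and field-of-view numbers below are placeholder assumptions, not the device's actual specs:

    import math

    # Hypothetical display specs -- placeholders, not the real device's numbers.
    h_pixels = 640       # assumed horizontal resolution
    h_fov_deg = 20.0     # assumed horizontal field of view, in degrees
    distance_m = 1.0     # distance to whatever you're looking at

    # Angular size of one pixel, then its projected size at that distance.
    pixel_angle_rad = math.radians(h_fov_deg) / h_pixels
    pixel_size_mm = math.tan(pixel_angle_rad) * distance_m * 1000

    print(f"{pixel_size_mm:.2f} mm per pixel at {distance_m} m")
    # ~0.55 mm with these assumed numbers, consistent with ~0.5mm per pixel at 1m.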
This is wonderful, no doubt about it, but the bigger problem is making this usable on commodity hardware. Stable Diffusion only needs ~4 GB of RAM to run inference, but all of these large language models are far too big to run on commodity hardware. BLOOM from Hugging Face is already out and no one is able to use it. If ChatGPT were given to the open source community, we couldn't even run it…
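Some napkin math on why the weights alone don't fit; parameter counts are approximate and the fp16 assumption is mine:

    # Rough memory needed just to hold the weights for inference.
    # Parameter counts are approximate; assumes fp16 (2 bytes per parameter).
    models = {
        "Stable Diffusion (~1B params)": 1e9,
        "GPT-3-class model (175B params)": 175e9,
        "BLOOM (176B params)": 176e9,
    }

    bytes_per_param = 2  # fp16; int8 quantization would roughly halve this

    for name, params in models.items():
        gib = params * bytes_per_param / 1024**3
        print(f"{name}: ~{gib:.0f} GiB of weights")

    # Stable Diffusion comes out around 2 GiB, which is why it runs on a
    # consumer GPU, while the 175B+ models need hundreds of GiB before you
    # even count activations or the KV cache.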
> BLOOM from Hugging Face is already out and no one is able to use it.
The RLHF dataset being collected by Open Assistant is just the kind of data that will turn a rebel LLM into a helpful assistant. But BLOOM itself is still huge and expensive to run.
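For anyone who hasn't looked at this kind of data, a single preference example is roughly this shape; this is a generic illustration, not Open Assistant's actual schema:

    # Rough shape of one RLHF preference example: a prompt, two candidate
    # replies, and a human judgment about which is better. A reward model is
    # trained on many of these and then used to fine-tune the base model.
    # Generic illustration only, not the real Open Assistant schema.
    example = {
        "prompt": "How do I safely update my motherboard firmware?",
        "chosen": "Back up your current settings first, then ...",
        "rejected": "Just grab any firmware file you find and flash it.",
    }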