No, it's provided as part of the Android OS. It's been very simple and intuitive to use for the past 10 years, ever since I started using it. The only thing that was annoying initially was that you couldn't pass through the WiFi network your phone was connected to, but I think that was corrected in later versions of Android. For a time I was using one of my older Pixel phones as a WiFi extender to improve signal in my home's basement. Worked like a charm. I'm honestly surprised this isn't available on iOS.
You are in a hotel, you have a wife and two kids. So assume 4 phones, 3 laptops, an iPad, and maybe a Chromecast. It is faster, easier, and more private to use a travel router, connect it to the hotel WiFi, and create a private network than to connect and authenticate (and possibly pay fees) for every device.
Memory prices will rise short term and generally fall long term; even with the current supply hiccup, the answer is just to build out more capacity (which will happen if there is healthy competition). I mean, I expect the other mobile chip providers to adopt a unified memory architecture with beefy GPU cores on chip and lots of bandwidth connecting them to memory (at the Max or Ultra level, at least). I think AMD is already doing unified memory, at least?
> Memory prices will rise short term and generally fall long term; even with the current supply hiccup, the answer is just to build out more capacity (which will happen if there is healthy competition)
Don't worry! Sam Altman is on it. Making sure there never is healthy competition, that is.
Do you not think that some DRAM producer is going to see the high margins as a signal to create more capacity and get ahead of the other DRAM producers? That's how it has always worked before, but somehow it's different this time?
> Do you not think that some DRAM producer is going to see the high margins as a signal to create more capacity and get ahead of the other DRAM producers?
They took the bait during COVID and got burned, so there's still fear of oversupply.
It only works if they collude on keeping supply steady. If anyone gets greedy for a bigger share of the AI pie, then it implodes quickly. Not all DRAM is made in South Korea, so some nationalism will muddy the waters as well.
High margins are exactly what should create a strong incentive to build more capacity. But that dynamic has been tamped down so far because we're all scared of a possible AI bubble that might pop at any moment.
In the end, there's not all that much point in having more memory than you can compute on in a reasonable time. So I think the useful amount probably tops out in the 128GB range, where you can still run a 70B model and get a useful token rate out of it.
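To make that intuition concrete, here's a back-of-envelope sketch (my own assumed numbers, not from the comment above): a 70B model at 4-bit quantization occupies roughly 35 GB, and since decoding streams the full weight set through the compute units per token, the token rate is roughly bounded by memory bandwidth divided by model size.

```python
def quantized_size_gb(params_billion: float, bits: int) -> float:
    """Approximate weight footprint in GB (ignores KV-cache overhead)."""
    return params_billion * 1e9 * bits / 8 / 1e9

def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    """Bandwidth-bound upper limit on decode speed for a dense model."""
    return bandwidth_gb_s / model_gb

# Assumed figures: 70B params at 4-bit, ~400 GB/s unified-memory bandwidth.
size = quantized_size_gb(70, 4)        # ~35 GB, fits in 128 GB with room to spare
rate = max_tokens_per_sec(400, size)   # ~11 tok/s upper bound

print(f"{size:.0f} GB weights, ~{rate:.1f} tok/s upper bound")
```

Under those assumptions, even doubling memory to hold a 140B model would halve the ceiling to ~5-6 tok/s, which is the "more memory than you can compute on" point.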
They're running on custom ASICs, as far as I understand; it may not be possible to run them effectively at lower clock speeds. That, and/or the market for it doesn't exist in the volume required to be profitable. OpenAI has been aggressively slashing its token costs, not to mention all the free inference offerings you can take advantage of.
TBH when I hit the Claude daily limit I just take that as a sign to go outside (or go to bed, depending on the time).
If the project management is on point, it really doesn't matter. Unfinished tasks stay as is; if something is unfinished in the context, I leave the terminal open, come back some time later, type "continue", hit enter, and go away.