Hacker News | allovertheworld's comments

More like: now Nvidia wants to release their own ASIC to combat Google

Umm... no one tell them, okay?

Yeah, if this had 5G it would be worth it

Yeah, you're not gonna get a cellular modem under $100 USD. Sorry

This is for those not aware of that setup lmao

AKA it's the next stage of Stack Overflow / Google search

What's the point of this? I've got WireGuard on my phone connected to my home network (also UniFi).

If this device had a 5G SIM slot, then I could see the point, but it doesn't have one.
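For anyone curious what that phone-side setup looks like, here's a minimal sketch of a WireGuard client config that tunnels everything back to a home network. All keys, addresses, and the hostname are placeholders, not the commenter's actual setup:

```ini
# wg0.conf — client-side peer tunneling all traffic back home
# (every key, address, and hostname below is a placeholder)
[Interface]
PrivateKey = <phone-private-key>
Address = 10.0.0.2/32
DNS = 10.0.0.1

[Peer]
PublicKey = <home-router-public-key>
Endpoint = home.example.com:51820
AllowedIPs = 0.0.0.0/0, ::/0      # route everything through the tunnel
PersistentKeepalive = 25          # keep NAT mappings alive on mobile
```

With `AllowedIPs = 0.0.0.0/0, ::/0` the phone behaves as if it were on the home LAN; narrowing it to just the home subnet (e.g. `10.0.0.0/24`) would tunnel only LAN traffic instead.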


The main benefit of a travel router is creating a private network, and sharing a wifi connection. An iPhone can't do that, though Android phones can.

> though Android phones can

Interesting, as someone who has always used iPhones, wouldn't mind getting an Android phone for this.

Is there some app?


No, it's provided as part of the Android OS. Very simple and intuitive to use, and it has been for the past 10 years since I started using it. The only thing that was annoying initially was that you couldn't pass through the WiFi that your phone is connected to, but I think that was corrected in later versions of Android. For a time I was using one of my older Pixel phones as a WiFi extender to improve signal in my home's basement. Worked like a charm. I'm honestly surprised this isn't available on iOS.

Some third party WiFis limit the number of devices. This gets around that limit.

Sharing where? All my devices can connect directly through the WireGuard VPN on my home network: iPad, iPhone, MBP, etc.

A 5G phone tethering to your WireGuard-connected MBP blows this out of the water


You are in a hotel, and you have a wife and two kids. So assume 4 phones, 3 laptops, an iPad, and maybe a Chromecast. It is faster, easier, and more private to use a travel router, connect it to the wifi, and create a private network than to connect and authenticate (and possibly pay fees) for every device.

Only because of Apple's unified memory architecture. The groundwork is there; we just need memory to be cheaper so we can fit 512+ GB now ;)

Memory prices will rise short term and generally fall long term; even with the current supply hiccup, the answer is to just build out more capacity (which will happen if there is healthy competition). I mean, I expect the other mobile chip providers to adopt a unified architecture, with beefy GPU cores on chip and lots of bandwidth connecting them to memory (at the Max or Ultra level, at least). I think AMD is already doing unified memory, at least?

> Memory prices will rise short term and generally fall long term; even with the current supply hiccup, the answer is to just build out more capacity (which will happen if there is healthy competition)

Don't worry! Sam Altman is on it. Making sure there never is healthy competition, that is.

https://www.mooreslawisdead.com/post/sam-altman-s-dirty-dram...


We’ve been through multiple DRAM scarcity/surplus cycles in the last couple of decades. Why do we think it will be different now?

> Why do we think it will be different now?

Margins. AI usage can pay a lot more. Even if they sell less, they can still be more profitable.

In the past there wasn’t a high-margin use. Servers didn’t charge such a high premium.


Do you not think that some DRAM producer is going to see the high margins as a signal to create more capacity and get ahead of the other DRAM producers? This is how it has always worked before, but somehow it is different this time?

> Do you not think that some DRAM producer is going to see the high margins as a signal to create more capacity and get ahead of the other DRAM producers?

They took that bet during COVID and it failed, so there's still fear of oversupply.


It only works if they collude on keeping supply steady. If anyone gets greedy for a bigger share of the AI pie, then it implodes quickly. Not all DRAM is made in South Korea so some nationalism will muddy the waters as well.

High margins are exactly what should create a strong incentive to build more capacity. But that dynamic has been tamped down so far because we're all scared of a possible AI bubble that might pop at any moment.

In the end there's not all that much point having more memory than you can compute on in a reasonable time. So I think the useful amount probably tops out in the 128 GB range, where you can still run a 70B model and get a useful token rate out of it.
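The "useful token rate" intuition can be roughed out with arithmetic: local decoding is usually memory-bandwidth-bound, since every weight gets read once per generated token. A back-of-envelope sketch, with hypothetical numbers (a 70B model at 4-bit quantization on a machine with ~400 GB/s of bandwidth, roughly M-series class):

```python
# Back-of-envelope: decode speed of a local LLM is roughly
# memory_bandwidth / bytes_read_per_token, because all weights
# are streamed from memory once per generated token.

def tokens_per_second(params_b: float, bits_per_weight: int,
                      bandwidth_gbs: float) -> float:
    """Estimate a memory-bandwidth-bound decode rate."""
    bytes_per_token = params_b * 1e9 * bits_per_weight / 8  # weight bytes read per token
    return bandwidth_gbs * 1e9 / bytes_per_token

# Hypothetical: 70B params at 4-bit (~35 GB of weights), 400 GB/s bandwidth.
rate = tokens_per_second(70, 4, 400)
print(f"{rate:.1f} tok/s")  # → 11.4 tok/s
```

This ignores KV-cache reads and compute overhead, so it's an upper bound, but it shows why stacking far more memory than your bandwidth can feed doesn't buy a usable token rate.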

How cheap is GLM at Cerebras? I can't imagine why they can't tune the token rate to be lower but drastically reduce the power, and thus the cost, of the API

They're running on custom ASICs as far as I understand, so it may not be possible to run them effectively at lower clock speeds. That, and/or the market for it doesn't exist in the volume required to be profitable. OpenAI has been aggressively slashing its token costs, not to mention all the free inference offerings you can take advantage of.

It's a lot more expensive than normal, $2.25/$2.75 I think. Their subscription is a lot cheaper, though.
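Taking the per-million-token rates quoted above at face value ($2.25 in / $2.75 out, the commenter's figures, not verified), a quick sketch of what a single request costs:

```python
# Rough per-request cost at $/1M-token rates; the default rates are
# the (unverified) figures from the comment above.

def job_cost(in_tokens: int, out_tokens: int,
             in_rate: float = 2.25, out_rate: float = 2.75) -> float:
    """Dollar cost of one request at the given $/1M-token rates."""
    return in_tokens / 1e6 * in_rate + out_tokens / 1e6 * out_rate

# e.g. a hypothetical coding-agent turn: 50k tokens of context, 2k out
print(f"${job_cost(50_000, 2_000):.4f}")  # → $0.1180
```

At those rates the input side dominates for agent-style workloads with large contexts, which is why daily limits and API pricing matter more than the headline output rate.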

This doesn’t mean much if you hit daily limits quickly anyway. So the API pricing matters more.

TBH when I hit the Claude daily limit I just take that as a sign to go outside (or go to bed, depending on the time).

If the project management is on point, it really doesn't matter. Unfinished tasks stay as is; if something is unfinished in the context, I leave the terminal open, come back some time later, type "continue", hit enter, and go away.


Eh, Sonnet 4.5 was better at Rust for me

If only C++ had a fully supported Cargo-like package system


