
Tokens will cost the same on a Mac as on an API, because electricity is not free

And you can only generate like $20 worth of tokens a month

Cloud tokens made on TPU will always be cheaper and waaay faster than anything you can make at home


This generally isn't true. Cloud vendors have to make back the cost of electricity and the cost of the GPUs. If you already bought the Mac for other purposes, also using it for LLM generation means your marginal cost is just the electricity.

Also, vendors need to make a profit! So tack a little extra on as well.

However, you're right that it will be much slower. Even just an 8xH100 can do 100+ tps for GLM-4.7 at FP8; no Mac can get anywhere close to that decode speed. And for long prompts (which are compute constrained) the difference will be even more stark.
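Back-of-the-envelope sketch of the marginal-cost point (all numbers here are assumptions, not measurements):

    // Marginal cost of local generation = electricity only, if the Mac is already paid for.
    // Assumed: ~60 W draw under load, ~30 tok/s decode, $0.15/kWh.
    const watts = 60, tps = 30, dollarsPerKWh = 0.15;
    const hoursPerMtok = 1e6 / tps / 3600;                        // ~9.3 h per million tokens
    const costPerMtok = hoursPerMtok * (watts / 1000) * dollarsPerKWh;
    console.log(costPerMtok.toFixed(2));                          // ~$0.08 per million tokens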


A question on the 100+ tps - is this for short prompts? For large contexts that generate a chunk of tokens at context sizes of 120k+, I was seeing 30-50 tps - and that's with a 95% KV cache hit rate. Am wondering if I'm simply doing something wrong here...


Depends on how well the speculator predicts your prompts, assuming you're using speculative decoding — weird prompts are slower, but e.g. TypeScript code diffs should be very fast. For SGLang, you also want to use a larger chunked prefill size and larger max batch sizes for CUDA graphs than the defaults IME.
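For example, something along these lines when launching SGLang (values are illustrative, not tuned recommendations; check the flags against your SGLang version):

    # Larger chunked prefill and bigger CUDA-graph batch sizes than the defaults:
    python -m sglang.launch_server --model-path <your-model> \
      --chunked-prefill-size 16384 --cuda-graph-max-bs 256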


When other models would grep, then read results, then use search, then read results, then read 100 lines from a file, then read results, Composer 1 is trained to grep AND search AND read in one round trip. It may read 15 files, and then make small edits in all 15 files at once
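Illustratively, the difference is one assistant turn carrying a whole batch of tool calls instead of one (OpenAI-style parallel tool calling; the tool names and arguments below are hypothetical):

    const assistantTurn = {
      role: "assistant",
      tool_calls: [
        { id: "1", type: "function", function: { name: "grep",      arguments: JSON.stringify({ pattern: "useAuth" }) } },
        { id: "2", type: "function", function: { name: "search",    arguments: JSON.stringify({ query: "auth hook" }) } },
        { id: "3", type: "function", function: { name: "read_file", arguments: JSON.stringify({ path: "src/auth.ts" }) } },
      ],
    };  // all three results come back in a single round trip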


Just ask an LLM to write one on top of OpenRouter, the AI SDK and Bun, to take your .md input file and save the outputs as .md files (or whatever you need). Take https://github.com/T3-Content/auto-draftify as an example
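A minimal sketch of such a script (assumes `bun add ai @openrouter/ai-sdk-provider` and an OPENROUTER_API_KEY env var; the file names and model id are placeholders):

    import { generateText } from "ai";
    import { createOpenRouter } from "@openrouter/ai-sdk-provider";

    const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY! });
    const prompt = await Bun.file("input.md").text();      // your .md input file
    const { text } = await generateText({
      model: openrouter("anthropic/claude-sonnet-4"),      // any OpenRouter model id
      prompt,
    });
    await Bun.write("output.md", text);                    // save the result as md

Run it with `bun run draft.ts`.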


$30 in API pricing

> I was running this against my $20/month ChatGPT Plus account


refined title:

ArXiv CS requires peer review for surveys amid flood of AI-written ones

- nothing happened to preprints

- "summarization" articles always required it; they're just saying it out loud now


DeepSeek on GPUs is like 5x cheaper than GPT

And TPUs are like 5x cheaper than GPUs, per token

Inference is very much profitable


You can do almost anything profitably if you ignore the vast majority of your input costs.


Except 12:01 is written in the 24-hour clock, which doesn't have the 12:00 problem in the first place


Can you make a variant for relative passing time?

You probably barely remember anything up to around 10, and then each doubling of age adds one logarithmic unit

So 10 is 1, 20 is 2, 40 is 3 and 80 is 4 (or maybe 0, 1 and 2?)

20 is already half of life passed by -_-
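As a sketch (my reading of the scheme; the base age of 10 and the log base of 2 are assumptions):

    // Perceived "age units": 1 at age 10, +1 for each doubling after that.
    const perceived = (age: number) => 1 + Math.log2(age / 10);
    [10, 20, 40, 80].map(perceived);  // [1, 2, 3, 4]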


I think that's a bit too simplistic, unless someone can testify that the 20 years from 20 to 40 feel as long as the 40 years from 40 to 80.

Here's an interesting graph and discussion on reddit: https://www.reddit.com/r/dataisbeautiful/comments/1e18fmz/pe...

Still looking to see if anyone has a study of (life-long/long-term) time perception with graph(s).


Aka VSCode DevContainer?

Could work I think (be wary of sending .env to the web though)
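Something like this as a starting point (a minimal sketch; the image tag and command are examples, and secrets stay out of the config):

    // .devcontainer/devcontainer.json
    {
      "name": "sandboxed-dev",
      "image": "mcr.microsoft.com/devcontainers/typescript-node:20",
      "postCreateCommand": "npm install"
      // don't bake .env contents into the image or into containerEnv
    }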


One way of doing it, yes. Why would your dev repo have any credentials in .env?


Edit: it got fixed, thanks to the author

I think that with the majority of TypeScript projects using Prettier, 2 is more likely to be the default [0]

The linked page literally says to ignore it [1]

> STOP READING IMMEDIATELY
> THIS PAGE PROBABLY DOES NOT PERTAIN TO YOU
> These are Coding Guidelines for Contributors to TypeScript. This is NOT a prescriptive guideline for the TypeScript community.

4 is a historical default for all languages in VSCode [2] (see the settings sketch below the links)

[0] https://prettier.io/docs/options#tab-width

[1] https://github.com/microsoft/TypeScript/wiki/Coding-guidelin...

[2] https://github.com/Microsoft/vscode/issues/41200
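If you want VSCode to match Prettier's default rather than its own, something like this in settings.json (a sketch; `esbenp.prettier-vscode` is the standard Prettier extension id):

    // settings.json
    "editor.tabSize": 2,                                 // VSCode's own default is 4
    "editor.defaultFormatter": "esbenp.prettier-vscode"  // let Prettier do the formatting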

Edit: found the TS style guide at https://github.com/basarat/typescript-book/blob/master/docs/... , it should be the correct link

P.S. I sent a mail to the author, hopefully they fix it

