Hacker News | jaigupta's comments

When I said "forget", I meant that it does not follow the instructions in .claude/agents/chrome-tester.md and instead starts working on whatever was going on in the main context.

I am not sure whether I need to write a better CLAUDE.md (main context) or a better agent file (.claude/agents/chrome-tester.md) for this case.

I get what you were trying to say, but AI does have memory: fixed memory files that we can modify. We can also write creative prompts for dynamic memory, e.g. while troubleshooting, keep appending learnings to a learning.md file so that you know what has already been tried. It also has built-in memory in the sense that it ships with a lot of built-in knowledge. Okay, I admit I am incorrect from a purely technical point of view, but in simpler terms it holds.
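The "dynamic memory" idea above can be sketched as a tiny helper that a prompt could tell the agent to use; the function name and entry format here are made up for illustration:

```python
import datetime
from pathlib import Path

# Hypothetical helper for the "keep adding learnings" pattern: each
# troubleshooting step appends a timestamped bullet to learning.md so a
# fresh session can see what has already been tried.
def log_learning(note: str, path: str = "learning.md") -> None:
    stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(f"- [{stamp}] {note}\n")

log_learning("Restarting chromedriver did not fix the flaky test.")
```

A CLAUDE.md instruction like "after every failed attempt, append one line to learning.md" is enough to drive this pattern without any tooling.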


If only I could figure out how to use it. I have been using Claude Code and enjoying it. I sometimes also try Codex, which is not bad either.

Trying to use the Gemini CLI is such a pain. I bought the Google Developer Program (GDP) Premium subscription, configured GCP, set up environment variables, enabled preview features in the CLI, and did the whole dance around it, and it still won't let me use Gemini 3. Why the hell am I even trying so hard?


Have you tried OpenRouter (https://openrouter.ai)? I’ve been happy using it as a unified API provider with great model coverage (including Google, Anthropic, OpenAI, Grok, and the major open models). They charge 5% on top of each model’s API costs, but I think it’s worth it to have one centralized place to put my money and monitor my usage. I like being able to swap out models without having to change my tools, and being able to easily compare Claude/Gemini/GPT head-to-head when I get stuck on a tricky problem.

Then you just have to find a coding tool that works with OpenRouter. AFAIK Claude Code/Codex/Cursor don’t, at least not without weird hacks, but several of the OSS tools do — Cline, Roo Code, opencode, etc. I recently started using opencode (https://github.com/sst/opencode), which is like an open version of Claude Code, and I’ve been quite happy with it. It’s a newer project, so There Will Be Bugs, but the devs are very active and responsive to issues and PRs.
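The "one place, many models" point is concrete because OpenRouter exposes an OpenAI-compatible endpoint, so switching models is a one-string change in the request. A minimal sketch of the request body (the model slug and prompt are just examples):

```python
import json

# Request payload for OpenRouter's OpenAI-compatible chat endpoint.
# Swapping providers means changing only the "model" string, e.g.
# "google/gemini-2.0-flash" instead of an Anthropic slug.
payload = {
    "model": "anthropic/claude-3.5-sonnet",
    "messages": [{"role": "user", "content": "Explain tail calls in one line."}],
}
body = json.dumps(payload)
# This body would be POSTed to https://openrouter.ai/api/v1/chat/completions
# with an "Authorization: Bearer <OPENROUTER_API_KEY>" header (not done here).
```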


Why would you use OpenRouter rather than some local proxy like LiteLLM? I don't see the point of sharing data with more third parties and paying for the privilege.

Not to mention that for coding, it's usually more cost efficient to get whatever subscription the specific model provider offers.
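For reference, a local LiteLLM proxy is driven by a YAML config that maps friendly aliases onto provider models; a minimal sketch (the aliases, model slugs, and env-var names are examples, not a recommendation):

```yaml
model_list:
  - model_name: sonnet                # alias your tools request
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: gemini
    litellm_params:
      model: gemini/gemini-1.5-pro
      api_key: os.environ/GEMINI_API_KEY
```

Started with `litellm --config config.yaml`, this serves an OpenAI-compatible endpoint on localhost, so no extra third party sees your traffic.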


Thanks, I didn't know about LiteLLM!

OpenRouter has some interesting providers, like Cerebras, which delivers 2,300 tokens/s on gpt-oss.


I have used OpenRouter before, but in this case I was trying to use it like Claude Code (agentic coding with a simple fixed monthly subscription). I don't want to pay per use via direct APIs, as I am afraid of surprise bills. My point was: why does Google make it so damn hard even for paid subscriptions, where it was supposed to just work?

Have you tried Google Antigravity? I use that and GitHub Copilot when I want to use Gemini for coding tasks.

Use Cursor. It allows you to choose any model you want.

Yes. I noticed this in Claude Code after enabling the documents skill, and then had to disable it for this reason.


We have been colocating servers for decades, but there is too much "you" involved. Compared to that, Hetzner does a lot for us (hardware inventory, replacement, remote hands, networking, etc.). We are slowly moving away from colocation to renting at Hetzner. It is so much better.


Same here. Hetzner found no issues with the hardware in diagnostics and insisted it was on the OS/software side, but at my request they replaced the hardware, which fixed the issue.


I was given the choice between diagnostics and hardware replacement and decided to run the diagnostics first, which turned up nothing. I then reinstalled the OS, and when that didn't fix it, I was sure it had to be something hardware-related that the diagnostics didn't catch. If you have a server mentioned here and problems turn up, just get the hardware replaced immediately.

https://docs.hetzner.com/robot/dedicated-server/general-info...


To us, 750 operations/s per bucket and 50,000,000 objects per bucket are deal breakers.

Maybe xx operations/s per TB of storage would have been better, since that way large buckets would be able to scale.
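The proposed per-TB scheme is easy to sketch; the 10 ops/s-per-TB rate and the 750 ops/s floor below are made-up numbers, standing in for the "xx" above:

```python
# Hypothetical per-TB scaling of the request cap: small buckets keep a
# flat floor, large buckets earn proportionally more operations.
def ops_limit(bucket_tb: float, per_tb: float = 10.0, floor: float = 750.0) -> float:
    return max(floor, per_tb * bucket_tb)

print(ops_limit(1))    # 1 TB bucket keeps the floor -> 750.0
print(ops_limit(500))  # 500 TB bucket scales up -> 5000.0
```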


Yes, the number of operations per second is too small for serving files directly, e.g. for hosting. (We didn't hit this issue since we proxy/cache all files via dedicated hardware.)

Though it's still too small if a HEAD request counts as an operation, since we need to check whether files have been updated.


We use a cache too, but we will still hit this limit because our cache is not very large compared with the bucket size, and, as you said, we need to make HEAD requests.

I do not know if this is a hard limit or if some kind of burst allowance is available for spikes. A constant per-bucket limit that ignores bucket size makes no sense. At least charge for operations as well.
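Absent any documented burst behavior, the safe move is to shape traffic client-side. A minimal token-bucket sketch (the 750/s rate matches the published cap; the burst size is an assumption):

```python
class TokenBucket:
    """Client-side throttle to stay under a per-bucket request cap while
    still allowing short bursts. Whether the provider itself tolerates
    bursts is unknown; this only shapes our side of the traffic."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate        # tokens refilled per second
        self.capacity = burst   # maximum tokens held
        self.tokens = burst
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

In real code `now` would come from `time.monotonic()`; passing it in explicitly keeps the logic testable.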

Maybe they just can't scale quickly enough, or maybe they see object storage as being only for backup/archiving purposes.


It doesn't make sense to limit HEAD requests though, as all S3 implementations store this metadata in a KV database, which is _built_ to handle far more than 750 reads per second.

Edit: There's an ongoing discussion here https://forum.hetzner.com/index.php?thread/31569-will-per-bu...


What is the durability? I can't find this information.


https://docs.hetzner.com/storage/object-storage/faq/general/

What kind of redundancy does Object Storage offer? How resilient is the product to failures?

"Each uploaded data object is divided into chunks, which are distributed across multiple servers within the cluster. Using erasure coding, the system can ensure data integrity even if up to three storage servers fail.

As always, each of our products can only be one part of a secure backup strategy."

What location is data stored in?

"The entire data of a Bucket is stored in the location you selected. In that location, the data is stored in a single data center. The power and network infrastructure is designed with built-in redundancy for high availability."

So it is a single data center, and they don't claim a specific durability percentage.


In India, credit cards and even debit cards (which have much lower transaction fees) were never a hit. UPI changed everything: it is easy, convenient, and has zero cost for either party.

12.20 billion UPI transactions in a month (January 2024).


They will have to increase transparency (ranking algorithm, pricing, etc.), allow data portability for users (and hotels), avoid self-preferencing their own services over competitors', and face stricter regulatory oversight from the EU.


Seems like good things to me.


It is about a $40/month plan in India for a 1 Gbps connection. I am also pretty limited by wireless speeds.

