I'm building a Supermaven competitor for JetBrains. We use the JetBrains PSI (basically JetBrains' version of the LSP) to pull definitions into context to make the autocomplete smarter. My colleague wrote a blog post on this here: https://blog.sweep.dev/posts/autocomplete-context.
We also found a lot of cases where caching actually ends up being slower than just doing the operation. The 100% solution would probably be to use a SQL database the way diskcache does, but this is easier for us to use.
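Not our exact code, but a rough sketch of the kind of timing comparison we mean, using the diskcache library (the parse function, inputs, and cache path are made up):

```python
import time

import diskcache

cache = diskcache.Cache("/tmp/defn-cache")  # SQLite-backed on-disk cache

def parse_definitions(source: str) -> list[str]:
    # Hypothetical "cheap" operation: for small inputs, recomputing this
    # can beat a disk round-trip to the cache.
    return [line for line in source.splitlines() if line.startswith("def ")]

source = "def foo():\n    pass\n" * 50

# Time the raw operation.
start = time.perf_counter()
parse_definitions(source)
raw = time.perf_counter() - start

# Time a cache get/set round-trip for the same work.
start = time.perf_counter()
result = cache.get(source)
if result is None:
    result = parse_definitions(source)
    cache.set(source, result)
cached = time.perf_counter() - start

print(f"raw: {raw * 1e6:.0f}us, cache round-trip: {cached * 1e6:.0f}us")
```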
We're building a webhook service on FastAPI + Celery + Redis + Grafana + Loki, and the experience of setting up every service incrementally was miserable. Even then it feels like logs are being dropped and we run into reliability issues. Felt like something like this should exist already, but I couldn't find anything at the time. Really excited to see where this takes us!
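For anyone curious, this is roughly the shape of that stack; a simplified sketch rather than our actual service, with the endpoint, task name, and retry settings made up:

```python
import requests
from celery import Celery
from fastapi import FastAPI
from pydantic import BaseModel

# Celery worker with Redis as the broker; FastAPI just enqueues deliveries.
celery_app = Celery("webhooks", broker="redis://localhost:6379/0")
app = FastAPI()

class WebhookEvent(BaseModel):
    url: str
    payload: dict

@celery_app.task(bind=True, max_retries=5)
def deliver_webhook(self, url: str, payload: dict):
    try:
        resp = requests.post(url, json=payload, timeout=10)
        resp.raise_for_status()
    except Exception as exc:
        # Exponential backoff; once max_retries is exhausted the delivery is
        # lost, which is exactly where the reliability worries come from.
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)

@app.post("/events")
def enqueue_event(event: WebhookEvent):
    deliver_webhook.delay(event.url, event.payload)
    return {"status": "queued"}
```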
That's exactly why we built Svix[1]. Building webhook services, even with amazing tools like FastAPI, Celery, and Redis, is still a big pain. So we just built a product to solve it.
Hatchet looks cool nonetheless. Queues are a pain for many other use-cases too.
The new Sweep assistant is also interactive! You can request a plan, then approve it to have Sweep run through each of the proposed file edits. We found that it was really difficult to iterate with an agent that only responds every 20 minutes.
We have tried many of the current open-source models, but the only one whose capability comes close to GPT-4 is DeepSeek, and unfortunately DeepSeek can't follow our specified format and is very sensitive to prompt changes.
I feel like, at least for me, when I want to build fast, it's easier to write the implementation myself and have AI generate the unit tests to double-check it and catch edge cases. I also don't use the unit tests as verification, just to ensure the util functions don't deviate.
Writing unit tests is also more boring, so I prefer writing the implementation.
How has your experience with function calls been? We tried doing code generation by having ChatGPT generate diffs, and it seems to perform worse than the March version.
It's hit or miss: 8 times out of 10 it will do what I ask, but a lot of the time even the JSON output isn't parseable, so you definitely need to add retry logic for that in your product.
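The retry logic can be pretty dumb; a sketch below, where `call_model` is a hypothetical stand-in for whatever function-calling client you use:

```python
import json

def call_model(messages: list[dict]) -> str:
    """Hypothetical stand-in for a function-calling API request that
    returns the raw JSON arguments as a string."""
    raise NotImplementedError

def get_parsed_arguments(messages: list[dict], max_attempts: int = 3) -> dict:
    last_error = None
    for _ in range(max_attempts):
        raw = call_model(messages)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as exc:
            # Feed the parse error back so the model can correct itself.
            last_error = exc
            messages = messages + [
                {"role": "user",
                 "content": f"Invalid JSON ({exc}). Please resend valid JSON only."}
            ]
    raise ValueError(f"Model never returned parseable JSON: {last_error}")
```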
This makes me believe simulation theory even more, tbh. Quantum mechanics exists to fuse operations, altogether making our universe less computationally expensive to simulate.
There's an even deeper way to think about it: if you actually want to parallelize the simulation of multiple scenarios, or if you're running something that needs to compute something in >4d, "quantum mechanics + parallel universes" might be the computationally optimal way to do it!
...we don't think about it this way often because we'd be thinking about computational problems so huuuuge that we'd be like the quarks inside the atoms inside the transistors inside planet-sized clusters spanning galaxies just to even fathom computing it ...and it's not necessarily a feel-good perspective :)
I mean, even the speed-of-light limit and general relativity seem like optimizations you'd do in order to better parallelize something you need to compute on some unfathomable "hardware" in some baseline-reality that might not have the same constraints...
...and to finish the coffee-high rant: if you want FTL you probably can't get it "inside" because it would break the simulation; you'd need to "get out" ...or more like "get plucked out" by some thing/god :P (ergo, when we see alien artifacts, UFOs, etc. that seem to have done FTL... we kind of need to start assuming MORE than _their_ existence and more than them just being 'more advanced' than us)
People write this sort of thing a lot, and I don't really understand it. Simulating quantum systems is dramatically (formally speaking, exponentially) more expensive than simulating classical ones (at least as far as our current understanding of complexity theory goes). If you're going to simulate a universe and you want to cheap out on compute, you should simulate a classical one.
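To put rough numbers on the exponential gap, here's a back-of-the-envelope sketch (assuming naive full state-vector simulation, the straightforward classical approach):

```python
# Memory for a full state vector of n qubits (2**n complex128 amplitudes,
# 16 bytes each) vs. n classical bits.
for n in (10, 30, 50):
    amplitudes = 2 ** n
    print(f"{n} qubits: {amplitudes:e} amplitudes "
          f"(~{amplitudes * 16 / 2**30:.1f} GiB) vs. {n} classical bits")
```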