Hacker News



I found it interesting because they:

- Made a RAG in ~50 lines of ruby (practical and efficient)

- Perform authorization on chunks in 2 lines of code (!!)

- Offload retrieval to Algolia. Since a RAG is essentially LLM + retriever, the retriever typically ends up being most of the work. So using an existing search tool (rather than setting up a dedicated vector db) could save a lot of time/hassle when building a RAG.
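A hedged sketch of that loop, with hypothetical stand-ins: `index` plays the role of the Algolia client, `llm` the model client, and the chunk shape (`:resource`, `:text`) is invented for illustration.

```ruby
# Minimal RAG loop: retrieve, authorize, generate.
# `index`, `llm`, and the chunk fields are hypothetical stand-ins.
def answer(question, current_user, index:, llm:)
  # 1. Retrieval: delegate full-text search to an existing search index
  chunks = index.search(question).map { |hit| hit[:chunk] }

  # 2. Authorization on chunks, in two lines: keep only what
  #    the current user is allowed to read
  chunks = chunks.select { |c| current_user.can_read?(c[:resource]) }

  # 3. Generation: stuff the surviving chunks into the prompt
  context = chunks.map { |c| c[:text] }.join("\n---\n")
  llm.complete("Answer using only this context:\n#{context}\n\nQ: #{question}")
end
```

The nice property is that step 2 runs inside the app, where the user's permissions already live, so nothing the user can't read ever reaches the prompt.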


I built a similar system for PHP, and I can tell you what the smart thing here is: accessing data through tools.

Of course tool calling and MCP are not new. But the smart thing is that by defining the tools in the context of an authenticated request, one can easily enforce the security policy of the monolith.
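A sketch of that pattern (in Ruby, to match the thread's example): tool handlers are closures created inside an authenticated request, so every call the model makes inherits that request's authorization. `Invoice` and its fields are hypothetical.

```ruby
# Hypothetical in-memory model, standing in for the monolith's data layer
Invoice = Struct.new(:id, :owner)
INVOICES = [Invoice.new(1, "alice"), Invoice.new(2, "bob")]

# Tools are built per-request, closing over the authenticated user,
# so the monolith's security policy runs inside each handler.
def build_tools(current_user)
  {
    "list_invoices" => lambda do |_args|
      # Policy is enforced here, not left to the model
      INVOICES.select { |inv| inv.owner == current_user }.map(&:id)
    end
  }
end

# Dispatch a tool call requested by the model
def dispatch(tools, name, args)
  tools.fetch(name).call(args)
end
```

Because the closure captures `current_user`, a prompt-injected tool call can still only see what that user could see anyway.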

In my case (maybe we'll write a blog post one day), it's even neater: the agent is written in Python, so the PHP app talks to it over local HTTP (we're considering turning this into a central microservice), and the tool calls are encoded as JSON-RPC. And yet it works.
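For concreteness, a hedged sketch of what encoding a tool call as JSON-RPC 2.0 might look like (shown in Ruby for consistency with the thread; the method and parameter names are illustrative). The payload would then be POSTed to the agent over local HTTP, e.g. with `Net::HTTP`.

```ruby
require "json"

# Encode a tool call as a JSON-RPC 2.0 request object.
# Tool and argument names here are made up for illustration.
def tool_call_payload(id, tool, args)
  JSON.generate({ jsonrpc: "2.0", id: id, method: tool, params: args })
end
```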


I had to do something similar. Ruby is awful and very immature compared to Python, so I "outsourced" the machine learning / LLM interaction to Python. The Rails service talks to it through gRPC/protobuf and it works wonderfully.


While I agree that the training/learning ecosystem is pretty heavily centered in Python, going from that to "Ruby is awful" seems like a very drastic jump, especially if we are talking about the LLM interaction only.

I probably wouldn't write a training system in Ruby (not because it isn't doable, just because it's not a good use of time to rewrite stuff that's already available in the Python ecosystem)... but hooking a Ruby system up to LLMs for interaction is eminently doable with very little effort.

I am assuming your situation had some specific constraints that made it harder, but it would be nice to understand what they were - right now your comment describes a more complicated solution and I am curious why you needed it.


While I agree that Python is where most of the implementation action is, one of the great things about building applications with LLMs is that almost all API providers offer a rich REST interface, and I have found it simple to use LLM services from Haskell, various Lisp dialects, etc. It is nice having very old code in various languages and being able to add new functionality with LLMs.
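To illustrate the REST path (in Ruby, matching the thread): any language with an HTTP client can call an LLM provider. This sketch builds a request in the common OpenAI-style chat-completions shape; the endpoint, model name, and payload are assumptions to adjust for your provider.

```ruby
require "json"
require "net/http"
require "uri"

# Build (but don't send) a chat-completions HTTP request.
# Endpoint and model name are illustrative assumptions.
def build_chat_request(api_key, prompt, model: "gpt-4o-mini")
  uri = URI("https://api.openai.com/v1/chat/completions")
  req = Net::HTTP::Post.new(uri)
  req["Authorization"] = "Bearer #{api_key}"
  req["Content-Type"] = "application/json"
  req.body = JSON.generate({
    model: model,
    messages: [{ role: "user", content: prompt }]
  })
  req
end

# To actually send it:
#   Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
```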

Not all cool code is in new greenfield projects.


By the guidelines as written, it isn't. By the guidelines as mysteriously and generously interpreted by the hall monitors, it's the most beautiful thing to ever exist on God's green earth. Nothing says "hacker spirit" like waffle-stomping AI into software that was already working just fine.



