
Curious how this stacks up against some of the other AI copilots; I think Make and Zapier kinda have something similar?


Our AI goes much further than Zapier/Make towards giving you a complete, ready-to-run workflow.

Make's copilot is pretty much limited to generating an outline of the flow by selecting the right nodes; it does not actually configure them. You still need to click into each one manually and set it up.

Zapier goes a bit further than Make, but it still leaves a lot of configuration work for the user to pick up.

In both Make and Zapier, you really need to prompt the AI copilot in a very specific way to get good results. Our AI, in contrast, is designed to act like a business analyst/consultant: it extracts the information it needs so it can go from very general, unclear, or ambiguous instructions to a clearly defined workflow/process to build.

The fact that our AI can edit the workflow at any time (including on top of your own manual changes) also means you can have a continuous, iterative dialog with the copilot rather than a one-off interaction at the start. Both Make's and Zapier's AI copilots lack this, or are very weak at reliably editing existing workflows.


What exactly is "your AI"? From how you describe it, it sounds like a GPT model with a "You're a business consultant" prompt.

I'm sorry to be rough, but from your description it just sounds like your AI somehow does a better job with prompts than Zapier & Make, which is highly subjective.

Besides that, this looks very cool and IMO is the future of how we interface with AI automation in work environments.


Valid point. At its core, our AI is indeed an LLM prompted to produce an output. A lot of the work, however, is in (1) prompting it in a way that lets it actually understand the user's instructions and the current state of the workflow, and (2) getting it to reliably output a response that acts as a set of instructions for what needs to be done within the platform, e.g. add this, remove this, change this question text to this, write this code, etc.
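
To make (2) concrete, here's a rough Python sketch of the idea; the operation names and fields are made up for illustration and aren't our actual format:

    # Illustrative only: the LLM emits structured edit instructions that get
    # applied to the workflow, instead of free text. Ops/fields are hypothetical.
    import json
    from dataclasses import dataclass, field

    @dataclass
    class Workflow:
        nodes: dict = field(default_factory=dict)  # node_id -> node definition

    def apply_instruction(workflow, instr):
        """Apply one edit instruction of the form {"op": ..., ...}."""
        op = instr["op"]
        if op == "add_node":
            workflow.nodes[instr["id"]] = {"type": instr["type"],
                                           "config": instr.get("config", {})}
        elif op == "remove_node":
            workflow.nodes.pop(instr["id"], None)
        elif op == "update_config":
            workflow.nodes[instr["id"]]["config"].update(instr["changes"])
        else:
            raise ValueError(f"unknown op: {op}")

    # A (made-up) LLM response: edits expressed as data.
    llm_response = json.dumps([
        {"op": "add_node", "id": "form_1", "type": "form",
         "config": {"title": "Complaint intake", "fields": ["customer", "summary"]}},
        {"op": "update_config", "id": "form_1",
         "changes": {"title": "Customer complaint intake"}},
    ])

    wf = Workflow()
    for instruction in json.loads(llm_response):
        apply_instruction(wf, instruction)
    print(wf.nodes)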

One example of what we had to do to achieve this was to develop an "intermediary language" that defines how the current state of the workflow is represented to the AI and how the AI responds back - it needed to capture enough detail about the workflow without overwhelming the model with too much context. We also developed techniques for structuring the prompting: the process of building a workflow is actually split into 3 stages - a pre-build planning stage, a build stage where the overall structure of the workflow is set, and then a build-node stage where each individual node is configured. There are a bunch of other techniques we developed to get LLMs to do what they currently do, but these are just some examples of how it's a bit more than just a "You're a business consultant" prompt.
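
As a rough illustration of that staged structure (the function names, the compact summary format, and the stubbed-out LLM call below are all made up for this example):

    # Illustrative only: plan -> structure -> per-node config, with a compact
    # "intermediary" summary of workflow state passed to the model each time.
    import json

    def call_llm(prompt):
        """Placeholder for a real LLM call; returns canned output here."""
        return "{}"

    def summarize_workflow(workflow):
        """Compact intermediary view: node ids, types and config sizes, so the
        model sees the structure without being flooded with full detail."""
        lines = [f"{nid}: {node['type']} ({len(node.get('config', {}))} settings)"
                 for nid, node in workflow.get("nodes", {}).items()]
        return "\n".join(lines) or "(empty workflow)"

    def plan_stage(user_request):
        # Stage 1: pre-build planning - clarify requirements, produce a plan.
        return call_llm(f"Plan a workflow for: {user_request}")

    def structure_stage(plan):
        # Stage 2: lay out the overall structure (nodes, connections), unconfigured.
        raw = call_llm(f"Propose nodes and connections for this plan:\n{plan}")
        return json.loads(raw) if raw.strip().startswith("{") else {"nodes": {}}

    def node_stage(workflow, node_id):
        # Stage 3: configure one node at a time against the compact summary.
        raw = call_llm(f"Configure node {node_id}. Current workflow:\n"
                       f"{summarize_workflow(workflow)}")
        return json.loads(raw) if raw.strip().startswith("{") else {}

    def build(user_request):
        workflow = structure_stage(plan_stage(user_request))
        for node_id in list(workflow.get("nodes", {})):
            workflow["nodes"][node_id]["config"] = node_stage(workflow, node_id)
        return workflow

    print(build("triage customer complaints"))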

One thing I'd encourage people to do is test these copilots head-to-head on the same prompt. If you asked Zapier or Make to "build me a process for triaging customer complaints", I'd expect them not to get very far, perhaps producing an outline of some apps you could connect together to achieve it. If you asked our AI the same thing, it would deliver a complete workflow with fully configured forms, tables, branching logic, tasks, etc.




