> Is it the prompt that Anthropic has been polishing in Claude code for so long?
I think so.
The opencode TUI is very good, but whenever I try it again the results are subjectively worse than Claude Code. Supporting many more models puts them at a disadvantage when it comes to refining prompts and tool usage.
The Claude Code secret sauce seems to be running evals on real-world performance and then tweaking the prompts and the models themselves to make it work better.