nishilpatel's comments | Hacker News

This resonates. A lot of debugging time isn’t spent finding where the bug is, but unlearning the assumption that “this part is obviously correct.”

The LIBR logic is straightforward, but OCR quality, auditability, and evidence integrity are what make this usable in the real world.

That lawyers care about chain of custody, auditability, and immutability makes this less of an "AI app" and more of a compliance-workflow tool, which might matter a lot for positioning.

On B2C vs B2B: individuals feel this once, lawyers feel it every case — which usually determines who actually pays.

The biggest risk seems less about accuracy and more about how courts classify the output (calculator vs expert opinion). That likely drives both liability and pricing.

Have you run this past a practicing family lawyer or forensic accountant yet, even informally?


OP here – thanks for the feedback. I just pushed an update to address the chain-of-custody concerns.

The system now generates immutable forensic reports with SHA-256 integrity hashes for every document. I also added a regression suite to verify the tracing algorithm against known edge cases. The focus is definitely shifting from just "AI wrapper" to "audit compliance tool."

Case: https://exitprotocols.com/static/documents/Sterling_Forensic...
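For anyone curious about the hashing step, here's a minimal sketch of per-document SHA-256 integrity records. The function and field names are hypothetical (the actual implementation isn't shown); the idea is just: hash each source document at ingest, store the digest in the custody record, and re-verify before the report is served.

    import hashlib
    from datetime import datetime, timezone
    from pathlib import Path

    def sha256_of_file(path: Path) -> str:
        """Stream the file in chunks so large scanned PDFs don't blow up memory."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_custody_record(doc_path: Path) -> dict:
        """One entry per source document: digest plus UTC capture time.
        (Hypothetical schema, for illustration only.)"""
        return {
            "document": doc_path.name,
            "sha256": sha256_of_file(doc_path),
            "hashed_at": datetime.now(timezone.utc).isoformat(),
        }

    def verify(record: dict, doc_path: Path) -> bool:
        """Re-hash before serving the report; any single-byte change fails."""
        return sha256_of_file(doc_path) == record["sha256"]

Any tampering with a source document after ingest changes its digest, so the report's embedded hashes act as the tamper-evidence layer lawyers ask about.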


I’m cautiously optimistic about AI, but less about the hype cycle around it.

AI (especially LLMs) will likely stay top-of-mind in 2026, but I expect costs to drop meaningfully and capabilities to feel more "infrastructure-like" than magical. SME adoption will bring AI to the masses.

If AI doesn’t meet near-term revenue and productivity promises, we may see pressure or stagnation in tech valuations, even as the underlying technology continues to improve. In other words, the market may cool before the tech does.

On the macro side, I wouldn’t be surprised if we see more market stability or mild declines, driven by a re-rating of expectations rather than a systemic collapse. Capital might rotate from speculative growth into cash-flow-positive businesses that actually deploy AI profitably.

More broadly, I think 2026 will reward reliability over flashy innovation in AI, engineering depth over marketing narratives, and systems thinking over isolated "features."

Less “what’s possible?” and more “what actually works at scale?”


That’s true. This kind of writing usually shows up as post-implementation retrospectives, so it’s inherently fragmented.

I’m trying to surface and study those scattered examples—especially the ones that explain why decisions were made, not just what was built.


Fair observation. HN is mostly software people, which biases "engineering" toward meaning software engineering by default.

But being specific matters; imo, engineering means "solving problems at scale," regardless of industry.


Perhaps. Sometimes the scale is "one": the amount of engineering that goes into bespoke space missions is very large, and very little of that work is reused for anything other than direct follow-up missions.
