Hacker News

Any self-hosted open source solution? I would like to digitize my paper notebooks but I do not want to use anything proprietary or that uses external services. What is the state of the art on the FOSS side?

Ideally something that I can train with my own handwriting. I had a look at Tesseract, wondering if there’s anything better out there.



For regular handwriting there are many options.

For historical handwriting, Gemini 3 is the only one that gave a decent result on 19th-century minutes from a town court in Northern Norway (Danish Gothic handwriting with bleed-through). I'm not 100% sure it's correct, but that's because it's so dang hard to read that I can't verify it myself. At least I can see it gets many names, dates and locations right.

I've been waiting a long time for this.


> For regular handwriting there are many options.

Please share. I am out of the loop, and my searches have not pointed me to the state of the art, which has seen major steps forward in the past 3 or 4 years, but most of it seems to be closed or attached to larger AI products.

Is it even still called OCR?


Totally not what you asked, but building an OCR model is a common learning exercise for AI research students. Using the Kaggle-hosted dataset https://www.kaggle.com/datasets/landlord/handwriting-recogni... and a tutorial, e.g. https://pyimagesearch.com/2020/08/17/ocr-with-keras-tensorfl... you can follow along and train your own OCR model!
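As a taste of what such a tutorial involves: a typical first step before feeding scans to any model is binarizing the page (separating dark ink from light paper). Here's a minimal pure-Python sketch of Otsu's thresholding method on a toy list of grayscale values (no real image I/O; `page` is a made-up pixel list for illustration):

```python
def otsu_threshold(pixels):
    """Pick the threshold that best separates ink from paper by
    maximizing between-class variance over 8-bit grayscale values."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0
    for t in range(256):
        w_bg += hist[t]          # pixels at or below t (candidate "ink" class)
        if w_bg == 0:
            continue
        w_fg = total - w_bg      # pixels above t (candidate "paper" class)
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Dark ink (~30) on light paper (~220): threshold lands between the clusters.
page = [30, 35, 40, 28] + [210, 215, 220, 225, 230, 218]
t = otsu_threshold(page)
binary = [0 if p <= t else 255 for p in page]
```

Real pipelines use OpenCV or scikit-image for this in one call, but seeing the histogram logic spelled out is half the point of the exercise.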


The best open source OCR model for handwriting in my experience is surya-v2 or nougat; which is better really depends on the docs. Each got about 90% accuracy (cosine similarity) in my tests. I haven't tried DeepSeek-OCR yet, but mean to at some point.
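For anyone wondering what "90% accuracy (cosine similarity)" means in practice: one simple way to score an OCR transcript against ground truth is cosine similarity over character n-gram counts. A self-contained sketch (the exact metric the commenter used isn't specified; bigram counts are just one reasonable assumption):

```python
import math
from collections import Counter

def char_ngrams(text, n=2):
    """Character-bigram counts of a case- and whitespace-normalized string."""
    t = " ".join(text.lower().split())
    return Counter(t[i:i + n] for i in range(len(t) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between the two texts' bigram-count vectors."""
    va, vb = char_ngrams(a), char_ngrams(b)
    dot = sum(va[k] * vb[k] for k in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

truth = "Meeting held on the 4th of March"
ocr   = "Meeting held on the 4th ot March"  # one misread character
score = cosine_similarity(truth, ocr)
```

A single misread character only perturbs a few bigrams, so the score stays high; for per-character error rates people usually reach for edit distance (CER) instead.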


Try the various downloadable model weights that have vision support; they're all good at different examples. Running multiple of them and then something to aggregate the outputs and pick the right one usually does the trick. Some recent ones to keep on the list: ministral-3-14b-reasoning, qwen3-vl-30b, magistral-small-2509, gemma-3-27b.

Personally I found magistral-small-2509 to be the most accurate overall, but it completely fails on some samples where qwen3-vl-30b doesn't struggle at all. So it seems the training data is really uneven depending on what exactly you're trying to OCR.

And the trade-off, of course, is that these are LLMs, so they're not exactly lightweight or fast on consumer hardware, but at least the multi-model approach greatly increases accuracy.
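The aggregation step described above can be as simple as a per-line majority vote across the models' transcripts. A toy sketch (assumes the models emit the same number of lines, which real outputs often won't without an alignment step; `aggregate_transcripts` is a hypothetical helper):

```python
from collections import Counter

def aggregate_transcripts(candidates):
    """Majority-vote each line across several model transcripts.
    Ties fall back to the first model's reading (Counter preserves
    insertion order on Python 3.7+)."""
    merged = []
    for lines in zip(*(c.splitlines() for c in candidates)):
        best, _ = Counter(lines).most_common(1)[0]
        merged.append(best)
    return "\n".join(merged)

# Three hypothetical model outputs; the second misreads "meeting".
outputs = [
    "Dear Sir,\nthe meeting was held",
    "Dear Sir,\nthe mecting was held",
    "Dear Sir,\nthe meeting was held",
]
merged = aggregate_transcripts(outputs)
```

A more robust version would align lines with edit distance first, or hand all candidates to one more model to adjudicate, which is closer to what the comment suggests.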



