Cumulative is a good term too. I come from the browser world where it's typically called incremental parsing, e.g. when web browsers parse and render HTML as it streams in over the wire. I was doing the same thing with JSON from LLMs.
Incremental JSON parsing is key for LLM apps, but safe progressive UIs also need to track incompleteness and per-chunk diffs. LangDiff [1] would help with that.
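A toy sketch of that second point (not LangDiff's actual API; `feed` is a hypothetical helper): as chunks stream in, try parsing the accumulated buffer after each one, so the UI always knows whether it is holding a complete value or an in-flight fragment.

```typescript
// Accumulate streamed chunks of a JSON response and track completeness.
// Until the buffer parses, the caller only has an incomplete value —
// exactly the state a safe progressive UI needs to be aware of.
function feed(
  buffer: string,
  chunk: string
): { buffer: string; complete: boolean; value?: unknown } {
  const next = buffer + chunk;
  try {
    // Parses only once the full JSON document has arrived.
    return { buffer: next, complete: true, value: JSON.parse(next) };
  } catch {
    // Still mid-stream: keep the raw buffer, expose no parsed value.
    return { buffer: next, complete: false };
  }
}
```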
I imagine if you reason about incomplete strings as a sort of “unparsed data” that you might store, transport, or render raw (a string analogue of printing response.data instead of calling response.json()), but never act on (compare, concatenate, etc.), it’s a reasonably safe model?
In my mental model it’s typed “unknown”: anything that prevents accidental use as if it were a whole string. I imagine a richer type with an “isComplete” flag of sorts would be more powerful, but a bit of a blunderbuss.
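A minimal sketch of that mental model, assuming TypeScript (all names here are hypothetical): a discriminated union lets you store or render the in-flight text, while the compiler rejects accidental use of a partial value as if it were a whole string.

```typescript
// An in-flight string can be stored, transported, or rendered raw,
// but functions that act on the text require a CompleteString.
type PartialString = { readonly raw: string; readonly isComplete: false };
type CompleteString = { readonly raw: string; readonly isComplete: true };
type StreamedString = PartialString | CompleteString;

function chunk(raw: string): PartialString {
  return { raw, isComplete: false };
}

function finalize(s: PartialString): CompleteString {
  return { raw: s.raw, isComplete: true };
}

// Acting on the value requires the complete variant; the compiler
// rejects equals(chunk("he"), "hello") until the stream is finalized.
function equals(s: CompleteString, other: string): boolean {
  return s.raw === other;
}
```

The "isComplete" flag doubles as the discriminant, so narrowing with `if (s.isComplete)` is enough to unlock the string operations.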
Scratching an itch. The intention is that it's a map, centered on the user, that shows all (configurable) things of interest nearby. Think of Atlas Obscura but much more local - e.g. AO doesn't list every prehistoric burial mound on the planet, but I want to know where they are ;)
For dates etc - you got it. I think from memory it would be 'Wednesday' + 'the 18th' + 'of' + 'may...' + '20' + '22'
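That fragment-per-clip scheme can be sketched like so (hypothetical names, assuming TypeScript, and assuming each fragment maps to one pre-recorded clip):

```typescript
// Assemble a spoken date from word fragments, as described above:
// weekday + "the Nth" + "of" + month + century + two-digit year.
const WEEKDAYS = ["sunday", "monday", "tuesday", "wednesday",
                  "thursday", "friday", "saturday"];
const MONTHS = ["january", "february", "march", "april", "may", "june",
                "july", "august", "september", "october", "november",
                "december"];

// 18 -> "18th", 1 -> "1st", 22 -> "22nd", 11 -> "11th", etc.
function ordinal(n: number): string {
  const suffixes = ["th", "st", "nd", "rd"];
  const v = n % 100;
  return n + (suffixes[(v - 20) % 10] || suffixes[v] || suffixes[0]);
}

// Each returned fragment would correspond to one audio clip on disk.
function dateFragments(d: Date): string[] {
  const year = d.getFullYear();
  return [
    WEEKDAYS[d.getDay()],
    "the " + ordinal(d.getDate()),
    "of",
    MONTHS[d.getMonth()],
    String(Math.floor(year / 100)),      // "20"
    String(year % 100).padStart(2, "0"), // "22"
  ];
}
```

For 18 May 2022 this yields the clip sequence wednesday / the 18th / of / may / 20 / 22, matching the concatenation above.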
For the narrative speech it would be more words per file. There are plenty of files (EDIT: just checked - 350ish files cover all the variations of script that can be generated at the moment).
In general, the TTS part of the project is the 'art of the almost possible' (if TTS engines sounded really good, I'd have just used one off the shelf).