Hacker News

Hi Jeff, are there any plans to support dual-channel audio recordings (e.g., Twilio phone call audio) for speech-to-text models? Currently, we have to either process each channel separately and lose conversational context, or merge channels and lose speaker identification.


This has been coming up often recently. Nothing to announce yet, but when enough developers ask for it, we'll build it into the model's training.

Diarization is also a feature we plan to add.


Glad to hear it's on your radar. I'd imagine phone call transcription is a significant use case.


I'm not entirely sure what you mean, but Twilio recordings already support dual channels.


I mean transcribing Twilio's dual-channel recordings with OpenAI's speech-to-text while preserving channel identification.
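For context, splitting a Twilio stereo recording into per-channel files only needs the stdlib `wave` module. A minimal sketch (function name and paths are illustrative, and it assumes uncompressed PCM, which is what Twilio's dual-channel WAV recordings use):

```python
import wave

def split_channels(stereo_path, left_path, right_path):
    """Split a 2-channel WAV (e.g. a Twilio call recording) into two mono files."""
    with wave.open(stereo_path, "rb") as src:
        assert src.getnchannels() == 2, "expected a dual-channel recording"
        width = src.getsampwidth()  # bytes per sample (Twilio uses 16-bit PCM)
        frames = src.readframes(src.getnframes())
        mono_params = (1, width, src.getframerate(), 0, "NONE", "not compressed")

    # Frames are interleaved [L0 R0 L1 R1 ...]; take alternating samples.
    step = 2 * width
    left = b"".join(frames[i:i + width] for i in range(0, len(frames), step))
    right = b"".join(frames[i:i + width] for i in range(width, len(frames), step))

    for path, data in ((left_path, left), (right_path, right)):
        with wave.open(path, "wb") as dst:
            dst.setparams(mono_params)
            dst.writeframes(data)
```

Each mono file can then be sent to the transcription endpoint separately, which is exactly where the loss of cross-channel context comes in.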


Oh, I see what you mean; that would be a neat feature. Assuming you can get timestamps, though, shouldn't it be trivial to work around the issue?


There are two options that I know of:

1. Merge both channels into one (this is what Whisper does with dual-channel recordings), then map transcription timestamps back to the original channels. This works only when speakers don't talk over each other, which is often not the case.

2. Transcribe each channel separately, then merge the transcripts. This preserves perfect channel identification but removes valuable conversational context (e.g., Speaker A's question disambiguates Speaker B's otherwise unclear answer) that helps the model's accuracy.

So yes, there are two technically trivial solutions, but you either get somewhat inaccurate channel identification or degraded transcription quality. A better solution would be a model trained to accept an additional token indicating the channel ID, preserving it in the output while benefiting from the context of both channels.
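To make option 2 concrete, here's a minimal sketch of the merge step, assuming each channel's transcript is a list of segments shaped like Whisper's verbose_json output (`start`, `end`, `text`); the speaker labels are hypothetical:

```python
def merge_transcripts(segs_a, segs_b, label_a="agent", label_b="caller"):
    """Interleave two channels' transcript segments by start time.

    Each segment is a dict with 'start', 'end', and 'text' keys, the shape
    of segments in Whisper's verbose_json response. Overlapping speech is
    simply ordered by start time, which is part of why this is lossy.
    """
    tagged = ([(s["start"], label_a, s) for s in segs_a] +
              [(s["start"], label_b, s) for s in segs_b])
    return [f"{who}: {s['text']}" for _, who, s in sorted(tagged, key=lambda t: t[0])]
```

Note this only reorders segments after the fact; neither channel's transcription ever saw the other side of the conversation, which is the accuracy loss described above.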


Option (2) is also significantly harder with these new models, since they don't support word-level timestamps like Whisper does.

From the docs: > Other parameters, such as timestamp_granularities, require verbose_json output and are therefore only available when using whisper-1.




