I think a lot of people have heard of OpenAI’s local-friendly Whisper model, but I don’t see enough self-hosters talking about WhisperX, so I’ll hop on the soapbox:
Whisper is extremely good when you have lots of audio with one person talking, but fails hard in a conversational setting with people talking over each other. It’s also hard to sync up transcripts with the original audio.
Enter WhisperX: WhisperX is an improved Whisper implementation that automatically tags who is talking (speaker diarization) and timestamps each line of speech.
I’ve found it great for DMing TTRPGs — simply record your session with a conference mic, run a transcript with WhisperX, and pass the output to a long-context LLM for easy session summaries. It’s a great way to avoid slowing down the game by taking notes on minor events and NPCs.
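For anyone wondering what "pass the output to a long-context LLM" looks like mechanically, here's a minimal stdlib sketch. The segment field names (`start`, `speaker`, `text`) are assumptions based on WhisperX's JSON output; check your own output files, since names can vary between versions.

```python
# Sketch: turn WhisperX diarized segments into a summary prompt for an LLM.
# Segment shape ({"start", "end", "speaker", "text"}) is an assumption --
# verify against the JSON your WhisperX version actually emits.

def transcript_to_prompt(segments):
    """Render diarized segments as speaker-tagged, timestamped lines."""
    lines = []
    for seg in segments:
        stamp = f"[{seg['start']:.0f}s]"
        lines.append(f"{stamp} {seg.get('speaker', 'UNKNOWN')}: {seg['text'].strip()}")
    transcript = "\n".join(lines)
    return (
        "Summarize this TTRPG session. Track NPCs, locations, and plot "
        "threads the party touched:\n\n" + transcript
    )

segments = [
    {"start": 0.0, "end": 4.2, "speaker": "SPEAKER_00", "text": " Okay, you enter the crypt."},
    {"start": 4.2, "end": 6.0, "speaker": "SPEAKER_01", "text": " I check for traps."},
]
print(transcript_to_prompt(segments))
```

From there it's just one API call (or a local model) with the prompt string.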
I’ve also used it in a hacky script pipeline to bulk download podcast episodes with yt-dlp, create searchable transcripts, and scrub ads by having an LLM sniff out timestamps to cut with ffmpeg.
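The ad-scrubbing step is mostly range math: once the LLM hands back ad ranges, invert them into keep-ranges and feed those to ffmpeg. A sketch of just that math, plus building an `atrim`/`concat` filter string (filenames, and how you prompt the LLM, are left out):

```python
# Sketch: given ad ranges the LLM flagged as (start, end) seconds,
# compute the segments to keep and an ffmpeg -filter_complex string
# that trims and concatenates the remaining audio.

def keep_ranges(ads, duration):
    """Invert sorted, non-overlapping ad ranges into keep-ranges."""
    keeps, cursor = [], 0.0
    for start, end in sorted(ads):
        if start > cursor:
            keeps.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < duration:
        keeps.append((cursor, duration))
    return keeps

def ffmpeg_filter(keeps):
    """Build a filter that trims each keep-range and concatenates them."""
    parts = [
        f"[0:a]atrim={s}:{e},asetpts=PTS-STARTPTS[a{i}];"
        for i, (s, e) in enumerate(keeps)
    ]
    joins = "".join(f"[a{i}]" for i in range(len(keeps)))
    return "".join(parts) + f"{joins}concat=n={len(keeps)}:v=0:a=1[out]"
```

Then something like `ffmpeg -i episode.mp3 -filter_complex "<filter>" -map "[out]" clean.mp3` does the cut in one pass.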
Privacy-friendly, modest hardware requirements, and good at what it does. WhisperX, apply directly to the forehead.
This is genius. Could you appify this and I’ll pay you in real or pretend currency as you prefer
Okay that’s just crazy. ;)
Probably not that hard to build a simple flask frontend around it.
Automatically processing files in an S3/WebDAV directory would also be useful.
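The watch-folder part doesn't need much: poll a directory (e.g. an rclone/davfs mount of the S3 or WebDAV share) and shell out for each new audio file. A stdlib-only sketch; the `whisperx` invocation and its flags are placeholders to adjust for your install:

```python
# Sketch: poll a mounted directory and run a transcriber on new audio
# files. The whisperx command line here is an assumption -- check the
# flags your installed version supports.
import subprocess
import time
from pathlib import Path

AUDIO_EXTS = {".mp3", ".wav", ".m4a", ".flac", ".ogg"}

def scan(folder, seen):
    """Return audio files in folder that haven't been processed yet."""
    return [
        p for p in Path(folder).iterdir()
        if p.suffix.lower() in AUDIO_EXTS and p.name not in seen
    ]

def watch(folder, interval=30, run=subprocess.run):
    seen = set()
    while True:
        for path in scan(folder, seen):
            run(["whisperx", str(path), "--output_format", "json"], check=True)
            seen.add(path.name)
        time.sleep(interval)
```

A cron job calling a one-shot version of this would work just as well as the loop.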
Nice. I learned about different applications of whisper because I’m a degenerate.
Can’t say I’ve ever wanted to turn on the subtitles for porn lol
Sometimes in JAV you really just get curious what the fuck is happening.
What would be some use cases for WhisperX? I’m struggling to envision how I would use that in a selfhosting/homelabbing environment.
Likely everyday stuff… Meeting minutes, phone or video conferences and such…
I guess that’s why I am having difficulty coming up with a use case. I mean, I walk around the lab talking to myself all day long, but I think it’d be a bad idea to have a record of all those conversations. lol
If you don’t have to sit through a bunch of ‘meetings that could have been emails’ on a daily basis, you likely won’t have a use case for it.
But in my last job I was a systems engineer for a web development company. I had to be included on all of the dev calls in case an infrastructure question came up that I needed to answer, and so I was vaguely aware of what the devs were doing.
This software would have been a lifesaver, because my ADHD doesn’t let me listen to stuff like that for a straight hour or two.
I’m personally looking at setting up whisper or whisperx with bazarr, to get subtitles for movies and series that I can’t find any to download.
Long videos or voice notes where you’re usually just looking for a small snippet.
Now that’s an interesting angle. I am a mediocre musician on my best day, but sometimes I incorporate phrases and lyric snippets in a piece. I wonder if I could use WhisperX to find those words or phrases from a stack of songs. For instance, I did a piece that used a line from Jimi Hendrix’s ‘If 6 Was 9’ where he says ‘I’m the one who’s gotta die when it’s time for me to die. So let me live my life the way I want to.’ I wonder if WhisperX could pick that out of a stack of Jimi Hendrix songs.
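The search half of that is easy once WhisperX has produced timestamped transcripts per song. A stdlib sketch, assuming a segment shape of `{"start", "text"}` (check your JSON), that normalizes case/punctuation and reports which song and timestamp a phrase appears at:

```python
# Sketch: search WhisperX-style transcripts for a phrase across a music
# library. Limitation: a phrase split across two segments won't match --
# good enough for a first pass at lyric-hunting.
import re

def normalize(text):
    """Lowercase and strip punctuation so lyrics match loosely."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower())

def find_phrase(library, phrase):
    """library: {song: [{"start": sec, "text": str}, ...]} -> [(song, start)]"""
    target = normalize(phrase)
    hits = []
    for song, segments in library.items():
        for seg in segments:
            if target in normalize(seg["text"]):
                hits.append((song, seg["start"]))
                break
    return hits
```

With the timestamps back, you can jump straight to the spot in the original track.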
It might take a while, but while your PC is working on it, you aren’t, and searching for words might be easier ^^
I’m excited to hear how well it works ^^
I’m always excited to try new stuff. You never know. A use case might develop that you didn’t think of.
You should be able to get decent results if you pipe your tracks through demucs first to isolate the vocals.
https://github.com/adefossez/demucs
Vanilla whisper will probably be better than whisperX for that use case though.
Depending on how esoteric your music library is, you can also build a lyrics DB with beets: https://beets.readthedocs.io/en/stable/plugins/lyrics.html
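For the demucs-then-whisper chain, it's basically two subprocess calls per track. A sketch that just builds the commands; the flags and the `separated/<model>/<track>/vocals.wav` output path are assumptions from the tools' defaults, so verify against your installed versions before trusting the paths:

```python
# Sketch: isolate vocals with demucs, then transcribe the vocals stem
# with vanilla whisper. Flags and output layout are assumptions --
# check `demucs --help` and `whisper --help` on your machine.
from pathlib import Path

def build_commands(track, model="htdemucs"):
    """Return the two commands to run for one track: demucs, then whisper."""
    track = Path(track)
    vocals = Path("separated") / model / track.stem / "vocals.wav"
    demucs_cmd = ["demucs", "--two-stems", "vocals", "-n", model, str(track)]
    whisper_cmd = ["whisper", str(vocals), "--output_format", "txt"]
    return demucs_cmd, whisper_cmd
```

Loop that over the library with `subprocess.run` and you've got a folder of searchable lyric transcripts.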
I use UVR for vocal isolation. It just works, so that shouldn’t be a problem. I’ll check it out; at worst, I’ll learn something.
That is cool! I’ve been wanting to use a model like this but haven’t really looked into it.
Are you self-hosting the long-context LLM, or what are you using?
Context lengths are what kill a lot of my local llm experiments.
I did a lot of my exploration back when GPT4 128K over API was the only long-context game in town.
I imagine the options are much better these days between Llama 3/4, Deepseek, and Qwen — but haven’t tried them locally myself.
Man, where was this post when I was DMing? lol.
This is super cool though. Rn I’m doing some film editing work for my friend, and this could probably be useful for subtitles too. Thanks for sharing.
Just finished a thesis. I used OtterAI, which was user-friendly but expensive. It got the job done, but required some revisions and corrections.