This weekend, I’ve been attending PyCon UK in Cardiff. This is my first time at a PyCon (or indeed, at any tech conference), and one nice surprise has been the live captioning of the talks.
At the front of the main room, there are two speech-to-text reporters transcribing the talk in real-time. Their transcription is shown as live, scrolling text on several large screens throughout the room, and shows up within seconds of the speaker finishing a word.
Here’s what one of those screens looks like:
I’m an able-bodied person. I appreciate the potential value of live captioning for people with hearing difficulties – but my hearing is fine. I wasn’t expecting to use the transcription.
Turns out – live captioning is really useful, even if you can already hear what the speaker is saying!
Maintaining complete focus for a long time is remarkably hard. Inevitably, my focus slips and I miss something the speaker says – a momentary distraction, a wandering mind, a cough at the wrong moment. Without the transcript, I have to fill in the blank myself, and there are a few seconds of confusion before I get back into the talk. With the transcript, I can see what I missed and jump straight back in, without losing my place. I’ve come to rely on the transcript, and I miss it when I’m in talks without it. (Unfortunately, live captioning is only in one of the three rooms running talks.)
And I’m sure I wasn’t the only person who found the captions helpful. I saw and heard comments from lots of other people about the value of the live captioning, and it was great to see the speech-to-text reporters get a call-out in Saturday’s opening remarks. This might be pitched as an accessibility feature, but it can help everybody.
If you’re running a conference (tech or otherwise), I’d strongly recommend providing this service.