
Technology Enhanced Learning Team


Captioning

Digital accessibility legislation introduced in 2018 requires that the digital content we create for teaching, including live lectures and pre-recorded video, is fully accessible to students. However, live lectures and video recordings present several challenges, particularly for hearing-impaired students, such as:

  • audio that is too poor to hear clearly,
  • video quality that is insufficient for lip-reading,
  • staff not always facing, or enabling, the camera whilst speaking.

Hence, captions are required for pre-recorded videos, live lectures and recordings of lectures and events. QMUL supports several systems for online teaching and video streaming, and most of them offer automatic captioning. However, automatic captions have been reported to be noticeably less accurate in disciplines that use a lot of technical vocabulary.

Existing services and options
QMUL supports MS Teams and Zoom for delivering live online lectures, while Kaltura (QMplus Media) and Echo360 (Q-Review) are primarily used to store and stream pre-recorded videos and recordings of live lectures. All four systems offer some level of ASR (Automatic Speech Recognition) captioning, and they work well in most situations.

Service              ASR technology used
Echo360 (Q-Review)   Speechmatics
Kaltura              Verbit
MS Teams             Microsoft Speech Translation, powered by Azure Cognitive Services
Zoom                 Automated Captions (used Otter.ai until 2022, when it switched to Zoom's native captioning)
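
These services typically produce a timed caption file (commonly WebVTT or SRT) that can be downloaded and edited. As a rough illustration of what that output looks like, the short Python sketch below assembles a caption file in the WebVTT format from a list of timed transcript segments; the segment text and file name are invented for the example and do not come from any of the services above.

    # A rough sketch of what ASR caption output looks like: assembling a caption
    # file in the WebVTT format from timed transcript segments. The segments and
    # file name below are invented sample data, not output from any QMUL service.

    def to_timestamp(seconds):
        """Format a time in seconds as a WebVTT timestamp (HH:MM:SS.mmm)."""
        hours, remainder = divmod(seconds, 3600)
        minutes, secs = divmod(remainder, 60)
        return f"{int(hours):02d}:{int(minutes):02d}:{secs:06.3f}"

    def build_webvtt(segments):
        """Build a WebVTT document from (start_seconds, end_seconds, text) tuples."""
        lines = ["WEBVTT", ""]
        for index, (start, end, text) in enumerate(segments, start=1):
            lines.append(str(index))                                        # cue number
            lines.append(f"{to_timestamp(start)} --> {to_timestamp(end)}")  # cue timing
            lines.append(text)                                              # caption text
            lines.append("")                                                # blank line ends the cue
        return "\n".join(lines)

    # Invented example segments for a short lecture clip.
    segments = [
        (0.0, 3.2, "Welcome to this week's lecture on digital accessibility."),
        (3.2, 7.5, "Today we will look at why captions matter for all students."),
    ]

    with open("lecture_captions.vtt", "w", encoding="utf-8") as handle:
        handle.write(build_webvtt(segments))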

 

QMplus Media (Kaltura) allows captions to be edited, but users need to check and correct them manually to make them accurate, which is time-consuming, particularly for busy academics. Left uncorrected, inaccurate captions can make videos less fit for purpose and less useful for students, particularly those with disabilities. Google Chrome's built-in Live Caption feature, which is free, can also help overcome this problem.
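
Where the automatic captions repeatedly mis-hear the same technical terms, part of that manual correction can be scripted. The Python sketch below applies a simple list of known corrections to a caption file downloaded from QMplus Media; the file names and the correction pairs are made-up examples, and the corrected file would still need to be checked and re-uploaded in the usual way.

    # A rough sketch of scripting caption corrections. The file names and the
    # correction pairs are invented examples; a real list would be built from the
    # technical terms that the automatic captions mis-recognise in your discipline.

    corrections = {
        "I gen vector": "eigenvector",
        "I gen value": "eigenvalue",
        "Q mull": "QMUL",
    }

    # Read a caption file previously downloaded from QMplus Media.
    with open("lecture_captions.vtt", "r", encoding="utf-8") as handle:
        captions = handle.read()

    # Apply each known correction in turn.
    for wrong, right in corrections.items():
        captions = captions.replace(wrong, right)

    # Write the corrected file, ready to be checked and re-uploaded.
    with open("lecture_captions_corrected.vtt", "w", encoding="utf-8") as handle:
        handle.write(captions)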
