While gesture triggers exist that activate when a given user is speaking on voice (/voicelevel1, /voicelevel2, /voicelevel3), there appears to be no LSL analog for detecting these states (or any voice-use state at all).
Adding an AGENT_SPEAKING flag to llGetAgentInfo would give voice the same kind of detection that text chat has long had through AGENT_TYPING.
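For illustration, here is a minimal sketch of how a script might use the proposed flag. Note that AGENT_SPEAKING does not exist in LSL today; the constant name follows this request and the bit value is a placeholder, not an official assignment.

// Hypothetical: AGENT_SPEAKING is the flag proposed by this request.
// The bit value below is a placeholder for illustration only.
integer AGENT_SPEAKING = 0x40000000;

default
{
    state_entry()
    {
        llSetTimerEvent(0.5); // poll the wearer's state twice per second
    }

    timer()
    {
        integer info = llGetAgentInfo(llGetOwner());
        if (info & AGENT_SPEAKING)
        {
            // React to voice activity here, e.g. play a mouth
            // animation, rez a microphone, or cue a camera change.
            llOwnerSay("Owner is speaking on voice.");
        }
    }
}

This mirrors the pattern scripts already use to detect AGENT_TYPING from the llGetAgentInfo bitfield, so creators would not need to learn anything new.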
The creative uses for such a trigger are nearly endless:
- a scripted multi-camera setup for an interview program, where a camera automatically moves to whoever is currently speaking (useful for vlogs like Lab Gab!)
- debate and open mic performance timers
- recording cues for editing machinima
- a mesh head with custom bones or other nontraditional 'mouth' designs that animates the mouth while the wearer is speaking
- even silly things like a microphone that appears in your hand when you speak (similar to the classic magic keyboards)
This seems like a simple feature to implement, and it could add a wide variety of creative options.