feat: add facial expressions to speaker stats (#1724)
* feat: facial expression in speaker stats
* feat: send xmpp message with facial expression to server
* fix: rebase conflicts
* feat: facial expression in speaker stats
* feat: facial expression in speaker stats
* chore(facial-expressions): remove the send facial expression call from update facial expression function
* feat(facial-expressions): store expressions as a timeline
* refactor(facial-expressions): store expressions by counting them in a map
* feat(facial-expressions): camera time tracker
* add(facial-expressions): increase a facial expression's stored duration with the duration parameter from the payload (see the sketch after this list)
* add(facial-expressions): the disgusted expression
* refactor(facial-expressions): remove camera time tracker
* refactor(facial-expressions): move the data channel message handler for facial expressions and rename some types
* fix(facial-expressions): move facial expression endpoint message handler from statistics.js to ConnectionQuality.js
* fix(facial-expressions): remove unused type
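A minimal sketch of the counting approach from the commits above: each detected expression accumulates its reported duration in a map instead of being stored as a timeline of individual detections. The class and method names here are illustrative, not the actual SpeakerStats API.

```typescript
type FacialExpression =
    'happy' | 'neutral' | 'sad' | 'surprised' | 'angry' | 'fearful' | 'disgusted';

class SpeakerStatsSketch {
    // expression -> total detected duration, in seconds
    private _facialExpressions = new Map<FacialExpression, number>();

    // Called for each detection payload, which carries a duration field.
    addFacialExpression(expression: FacialExpression, duration: number): void {
        const current = this._facialExpressions.get(expression) ?? 0;

        this._facialExpressions.set(expression, current + duration);
    }

    getFacialExpressions(): { [expression: string]: number } {
        return Object.fromEntries(this._facialExpressions);
    }
}
```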
fix(stats): Use promise-based getStats on all browsers.
Get rid of the browser specific keys and use the standard spec-compliant fields for stats.
Get the resolution/fps for remote streams from 'inbound-rtp' stats. Use the 'track' stats for the local resolution/fps since these take the active simulcast streams into account.
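As a rough illustration of the promise-based approach (not the library's StatsCollector code), remote video resolution/fps can be read from the spec-compliant 'inbound-rtp' entries:

```typescript
// Sketch only: read spec-compliant fields from the promise-based getStats()
// instead of browser-specific report keys. `pc` is an existing RTCPeerConnection.
async function logRemoteVideoStats(pc: RTCPeerConnection): Promise<void> {
    const report = await pc.getStats();

    report.forEach(stats => {
        // Remote streams: resolution and frame rate come from 'inbound-rtp' stats.
        if (stats.type === 'inbound-rtp' && stats.kind === 'video') {
            const { frameWidth, frameHeight, framesPerSecond } = stats;

            console.debug(`remote video: ${frameWidth}x${frameHeight}@${framesPerSecond}fps`);
        }
    });
}
```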
feat(stats): Get audio levels for the top 5 speakers only.
Capture the audio levels only for the top 5 speakers as RTCRtpReceiver#getSynchronizationSources can be expensive when we have too many audio receivers in the call.
Also, capture the audio levels only for tracks that are unmuted if RTCRtpReceiver#getSynchronizationSources is not supported.
Switch Safari to using getStats since it reports erroneous values, e.g., 0.000001 as the audio level for all remote audio tracks.
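A hedged sketch of the capping idea; names like `topSpeakerReceivers` are illustrative, not the library API:

```typescript
const MAX_SPEAKERS = 5;

// Poll levels only for the receivers of the loudest speakers; calling
// getSynchronizationSources() on every audio receiver is too expensive in large calls.
function pollTopSpeakerAudioLevels(
        topSpeakerReceivers: Map<string, RTCRtpReceiver>): Map<string, number> {
    const levels = new Map<string, number>();

    for (const [ endpointId, receiver ] of [ ...topSpeakerReceivers ].slice(0, MAX_SPEAKERS)) {
        // Each synchronization source exposes an audioLevel in the 0..1 range.
        const [ source ] = receiver.getSynchronizationSources();

        if (source?.audioLevel !== undefined) {
            levels.set(endpointId, source.audioLevel);
        }
    }

    return levels;
}
```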
feat(browser-support): Add support for WKWebView based browsers.
Apple added getUserMedia support for WKWebView-based browsers like Chrome and Firefox on iOS 14.3. These browsers behave as Safari does on iOS. Therefore, extend the Safari checks to these WebKit-based browsers as well.
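A sketch of the detection idea, assuming user-agent sniffing (the actual browser-capabilities logic may differ): Chrome (CriOS) and Firefox (FxiOS) on iOS are WebKit shells, so the Safari code paths should apply to them too.

```typescript
// Illustrative only: treat WebKit-based iOS browsers the same as Safari.
function isWebKitBasedIosBrowser(userAgent: string = navigator.userAgent): boolean {
    const isIos = /iPad|iPhone|iPod/.test(userAgent);
    // CriOS = Chrome on iOS, FxiOS = Firefox on iOS; both are WKWebView/WebKit shells.
    const isWebKitShell = /AppleWebKit/.test(userAgent) && /CriOS|FxiOS|Safari/.test(userAgent);

    return isIos && isWebKitShell;
}
```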
Add a performance stat around long tasks. Chrome supports the PerformanceObserver API, which lets us
register for long task events. Any task that takes longer than 50 ms is considered a long task.
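Roughly, the long-task stat can be collected like this; the counter below is illustrative, not the actual stat field:

```typescript
let longTaskCount = 0;

if (typeof PerformanceObserver !== 'undefined') {
    const observer = new PerformanceObserver(list => {
        for (const entry of list.getEntries()) {
            // Every 'longtask' entry is a main-thread task that blocked for > 50 ms.
            longTaskCount++;
            console.debug(`long task of ${Math.round(entry.duration)}ms`);
        }
    });

    // 'longtask' entries are currently reported by Chromium-based browsers only.
    observer.observe({ type: 'longtask', buffered: true });
}
```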
fix: Scale remote audio levels reported on receiver to getStats levels
The audio levels reported on the audio receivers are lower when compared to the values reported by getStats.
Values reported by getStats on Chrome do not follow the spec, and since we have a combination of clients using both getStats and getSynchronizationSources,
let's stick to one scale to make them look uniform.
Also, the receivers seem to keep reporting an audio level for a little while after the remote user has muted. Make sure the track is unmuted
before setting the audio level on the track.
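A minimal sketch of the adjustment; the scale factor below is an assumption for illustration only, not the value the library uses:

```typescript
const AUDIO_LEVEL_SCALE_FACTOR = 2; // assumed factor, for illustration only

// `track` is any object exposing mute state and an audio-level setter.
function setRemoteAudioLevel(
        track: { isMuted(): boolean; setAudioLevel(level: number): void },
        receiverAudioLevel: number): void {
    // Receivers keep reporting a level briefly after the remote side mutes,
    // so only forward the rescaled level while the track is actually unmuted.
    if (!track.isMuted()) {
        track.setAudioLevel(Math.min(1, receiverAudioLevel * AUDIO_LEVEL_SCALE_FACTOR));
    }
}
```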
feat: use getSynchronizationSources on the receiver for remote audio levels (#1245)
* feat: use getSynchronizationSources on the receiver for remote audio levels
Use getSynchronizationSources if it is supported, and fall back to using getStats otherwise.
* feat/ref: Use the local audio levels from LocalStatsCollector
When using getSynchronizationSources, use the audio levels from LocalStatsCollector for NoAudioSignalDetection.js.
Remove obsolete code: the TalkMutedDetection feature using audio levels is not used anymore.
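The support check behind the fallback could look roughly like this (the helper name is illustrative):

```typescript
// Prefer receiver-based audio levels when getSynchronizationSources() exists,
// otherwise keep polling getStats() for the levels.
function supportsReceiverAudioLevels(): boolean {
    return typeof RTCRtpReceiver !== 'undefined'
        && typeof RTCRtpReceiver.prototype.getSynchronizationSources === 'function';
}

const useReceiverLevels = supportsReceiverAudioLevels();
```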
Partially reverts 24bda8e and uses domain/roomname to report to callstats.
This is to be sure we always report the room name in lowercase: mobile and jigasi already report it this way, and it would take time for them to adopt a change, which would lead to wrong stats.
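A tiny sketch of the reporting rule (the helper name is illustrative): the conference id sent to callstats is built from the domain and a lowercased room name.

```typescript
// Illustrative helper: keep web, mobile and jigasi stats under the same key
// by always lowercasing the room name part of the conference id.
function buildConferenceId(domain: string, roomName: string): string {
    return `${domain}/${roomName.toLowerCase()}`;
}
```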