* feat: Adds events for session-add, session-remove and session-accept.
* feat: Refactor updating presence for audio/video mute and video type.
This brings a few changes and fixes. The initial presence will always omit the audioMuted and videoMuted values; those are added on session-accept. All presence updates happen on session-accept, source-add or source-remove; the only change not signalled is when a camera track is replaced by a video track and vice versa. (A minimal sketch of this flow follows the squash list below.)
This change produces more presence updates when the number of participants is below the startAudioMuted/startVideoMuted thresholds, but fewer presence updates once the participant count is above them. This is important for big meetings.
Fixes a wrong videoMuted state: replace-track and mute both run asynchronously, and sometimes the replace-track promise finishes first, so by the time the mute promise resolves there is no conference object on the track to update the presence with (we hit this when passing the startAudioMuted threshold).
* squash: Put back the tracks for _setTrackMuteStatus and _setNewVideoType.
* squash: Fix line length.
* squash: Fix skip sending presence twice.
* squash: Adds the error to the error callback.
* squash: Fix newJingleErrorHandler.
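A minimal sketch of the presence flow described above, under the assumption of hypothetical names (PresenceSignaling, MuteState); this is not the actual lib-jitsi-meet implementation:

```ts
// Hypothetical sketch: presence is updated only on session-accept,
// source-add and source-remove, not on every local mute toggle.
interface MuteState { audioMuted: boolean; videoMuted: boolean; videoType?: string; }

class PresenceSignaling {
  private lastSent?: MuteState;

  // Called from the session-accept, source-add and source-remove handlers.
  update(state: MuteState, send: (s: MuteState) => void): void {
    // Skip sending the same presence twice (see "Fix skip sending presence twice").
    if (this.lastSent
        && this.lastSent.audioMuted === state.audioMuted
        && this.lastSent.videoMuted === state.videoMuted
        && this.lastSent.videoType === state.videoType) {
      return;
    }
    this.lastSent = { ...state };
    send(state);
  }
}
```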
feat(conference) Implement audio/video mute disable when sender limit is reached.
Jicofo sends a presence when the audio/video sender limit is reached in the conference. The client can then proceed to disable the audio and video mute buttons when this occurs.
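A hedged sketch of how a client might react to that presence; the payload shape and function names below are assumptions for illustration, not the actual Jicofo presence extension:

```ts
// Hypothetical sketch: disable the mute/unmute controls once Jicofo signals
// that the audio/video sender limit has been reached.
type SenderLimitPresence = { audioSendersLimitReached: boolean; videoSendersLimitReached: boolean };

interface MuteButtons {
  setAudioMuteDisabled(disabled: boolean): void;
  setVideoMuteDisabled(disabled: boolean): void;
}

function onJicofoPresence(p: SenderLimitPresence, ui: MuteButtons): void {
  ui.setAudioMuteDisabled(p.audioSendersLimitReached);
  ui.setVideoMuteDisabled(p.videoSendersLimitReached);
}
```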
Breakout rooms are companion rooms created in a separate MUC. The room relationship is
maintained by a Prosody plugin.
All signalling happens through the breakout rooms MUC component.
Co-authored-by: Saúl Ibarra Corretgé <saghul@jitsi.org>
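A conceptual sketch only, since all signalling goes through the breakout rooms MUC component; the interface and callback names below are hypothetical, not the plugin's real API:

```ts
// Hypothetical sketch: the client only talks to the breakout rooms MUC
// component; the main-room/breakout-room relationship lives in Prosody.
interface BreakoutRoomsApi {
  createBreakoutRoom(subject: string): void;   // sends a message to the MUC component
  onRoomsUpdated(handler: (rooms: Record<string, { jid: string; name: string }>) => void): void;
}

function logRooms(api: BreakoutRoomsApi): void {
  api.onRoomsUpdated(rooms => {
    for (const [id, room] of Object.entries(rooms)) {
      console.log(`breakout room ${id}: ${room.name} (${room.jid})`);
    }
  });
}
```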
feat: add facial-expressions in speaker stats (#1724)
* feat: facial expression in speaker stats
* feat: send xmpp message with facial expression to server
* fix: rebase conflicts
* feat: facial expression in speaker stats
* feat: facial expression in speaker stats
* chore(facial-expressions): remove the send facial expression call from the update facial expression function
* feat(facial-expressions): store expressions as a timeline
* refactor(facial-expressions): store expressions by counting them in a map (see the sketch after this list)
* feat(facial-expressions): camera time tracker
* add(facial-expression): increase facial expression with duration parameter from payload
* add(facial-expressions): the disgusted expression
* refactor(facial-expressions): remove camera time tracker
* refactor(facial-expressions): change the data channel message handler position for facial expressions and rename some types
* fix(facial-expressions): move facial expression endpoint message handler from statistics.js to ConnectionQuality.js
* fix(facial-expressions): remove unused type
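A short sketch of the counting approach referenced above (expressions counted in a map, incremented by the duration from the payload); the type and class names are hypothetical:

```ts
// Hypothetical sketch: expressions are accumulated per participant as a
// map of expression name -> total duration, as described in the commits above.
type FacialExpression = 'happy' | 'sad' | 'surprised' | 'angry' | 'fearful' | 'disgusted' | 'neutral';

class SpeakerStatsEntry {
  private readonly expressions = new Map<FacialExpression, number>();

  // Called for each { expression, duration } payload received for this participant.
  addExpression(expression: FacialExpression, duration: number): void {
    this.expressions.set(expression, (this.expressions.get(expression) ?? 0) + duration);
  }

  getExpressions(): Record<string, number> {
    return Object.fromEntries(this.expressions);
  }
}
```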
This is a stopgap measure. Ideally we'd support it, but the interactions between
the JVB and P2P sessions are complex and very timing-sensitive, and there are a
number of corner cases that lead to not having media after toggling e2ee.
fix(e2ee) fix race condition when restarting media sessions
Make sure the P2P tracks are not added to the JVB session when we are restarting
the media sessions, since the PC has not been created with the encoded streams
constraint yet.
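A hedged sketch of the guard described above; the flag and session shape are assumptions, not the real JingleSession API:

```ts
// Hypothetical sketch: while the media sessions are being restarted the JVB
// peerconnection has not yet been created with the encoded-streams constraint,
// so P2P tracks must not be added to it.
interface SessionLike { addTrack(track: MediaStreamTrack): void; }

function maybeAddTrackToJvb(jvbSession: SessionLike | null, restartInProgress: boolean, track: MediaStreamTrack): void {
  if (restartInProgress || !jvbSession) {
    // The track will be added once the new session exists with the
    // encoded-streams constraint in place.
    return;
  }
  jvbSession.addTrack(track);
}
```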
fix: Avoid sending two presences if start muted and then screen share. (#1771)
* fix: Avoid sending two presences if start muted and then screen share.
* squash: Drop some changes and simplify sending presence.
* squash: Always send presence when sourceNameSignalingEnabled.
...to advertise the track's muted state and the video type.
For now, if the source name signaling flag is enabled, both the legacy
elements and the new <SourceInfo> element will be used at the same time.
This makes it possible to interoperate with legacy clients while testing
the new format. Whenever possible <SourceInfo> will be used as the main
source of truth, with a fallback to the legacy <audiomuted/>,
<videomuted/> and <videotype/> elements.
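A hedged sketch of a presence payload built this way; the element layout and JSON shape are simplified assumptions, not the exact wire format:

```ts
// Hypothetical sketch: when source-name signaling is enabled, advertise both the
// new <SourceInfo> element and the legacy <audiomuted/>, <videomuted/> and
// <videotype/> children so legacy clients keep working.
interface TrackPresence { sourceName: string; muted: boolean; videoType?: string; }

function buildPresenceChildren(audio: TrackPresence, video: TrackPresence, sourceNameSignalingEnabled: boolean): string[] {
  const children: string[] = [];

  if (sourceNameSignalingEnabled) {
    const sourceInfo = {
      [audio.sourceName]: { muted: audio.muted },
      [video.sourceName]: { muted: video.muted, videoType: video.videoType }
    };
    children.push(`<SourceInfo>${JSON.stringify(sourceInfo)}</SourceInfo>`);
  }

  // Legacy elements, kept as the fallback for older clients.
  children.push(`<audiomuted>${audio.muted}</audiomuted>`);
  children.push(`<videomuted>${video.muted}</videomuted>`);
  children.push(`<videotype>${video.videoType ?? 'camera'}</videotype>`);

  return children;
}
```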
ref(JitsiConference) Remove remote tracks from conf before reneg is done.
We do not have to wait for the SSRCs to be removed from the remote description before removing the remote tracks associated with a participant that left the call. This speeds up the removal of the participant from the call even if the JingleSession modification queue is backed up.
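A minimal sketch of that ordering, assuming hypothetical track and callback names:

```ts
// Hypothetical sketch: when a participant leaves, their remote tracks are removed
// from the conference right away instead of waiting for the SSRCs to disappear
// from the remote description after renegotiation.
interface RemoteTrack { getParticipantId(): string; }

function onParticipantLeft(participantId: string, remoteTracks: RemoteTrack[], emitTrackRemoved: (t: RemoteTrack) => void): RemoteTrack[] {
  const remaining: RemoteTrack[] = [];
  for (const track of remoteTracks) {
    if (track.getParticipantId() === participantId) {
      emitTrackRemoved(track);   // fire the track-removed event immediately
    } else {
      remaining.push(track);
    }
  }
  // Renegotiation (queued on the JingleSession modification queue) can clean
  // up the SSRCs later without blocking participant removal.
  return remaining;
}
```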
fix(connectionstatus) Increase the rtc mute timeout for p2p.
Increase the RTC mute timeout from 500ms to 2500ms for p2p connections. This fixes an issue with Chrome tab sharing where the application keeps switching between the avatar and the share continuously because of a Chrome bug: https://bugs.chromium.org/p/chromium/issues/detail?id=1258034
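The values come straight from the commit; the constant and function names are assumptions for illustration:

```ts
// Hedged sketch of the timeout selection described above.
const DEFAULT_RTC_MUTE_TIMEOUT_MS = 500;
const P2P_RTC_MUTE_TIMEOUT_MS = 2500;   // larger to ride out the Chrome tab-share glitch

function getRtcMuteTimeout(isP2P: boolean): number {
  return isP2P ? P2P_RTC_MUTE_TIMEOUT_MS : DEFAULT_RTC_MUTE_TIMEOUT_MS;
}
```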
fix(iOS15) fix not being able to unmute if "everyone starts muted" is set
We need to make sure the audio track is added to the JVB connection, or we won't
be able to unmute.
Why this happens is a mystery wrapped in an enigma; it started happening with
iOS 15.
The same applies to Safari on macOS.
Fixes: https://github.com/jitsi/jitsi-meet/issues/10104
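A hedged sketch of the workaround, using the standard MediaStreamTrack API; the peer connection wrapper and function name are assumptions:

```ts
// Hypothetical sketch: on iOS 15 / Safari the local audio track is added to the
// JVB peerconnection even when starting muted, otherwise unmuting later fails.
interface PeerConnectionLike { addTrack(track: MediaStreamTrack): void; }

function addStartMutedAudio(pc: PeerConnectionLike, audioTrack: MediaStreamTrack, startAudioMuted: boolean): void {
  if (startAudioMuted) {
    audioTrack.enabled = false;   // keep it muted, but keep it on the connection
  }
  pc.addTrack(audioTrack);
}
```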
fix(JitsiConference): 2 instances for the same room
If a second JitsiConference instance was created for the same room we threw an
error, but some listeners had already been attached. This commit makes sure the
error is thrown as soon as possible so that no listeners are added.
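A minimal sketch of the ordering this commit enforces; the class and registry names are hypothetical:

```ts
// Hypothetical sketch: fail before any listeners are attached, so a rejected
// second conference instance for the same room leaves no handlers behind.
const activeRooms = new Set<string>();

class Conference {
  constructor(private readonly room: string) {
    if (activeRooms.has(room)) {
      // Throw first: nothing has been registered yet, so nothing leaks.
      throw new Error(`A conference for room "${room}" already exists`);
    }
    activeRooms.add(room);
    this.addEventListeners();   // only reached for the first instance
  }

  private addEventListeners(): void { /* attach XMPP/RTC listeners here */ }
}
```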
ref(JitsiConference): don't crash on wrong oldTrack (#1709)
If oldTrack was not previously in the conference, this leads to a crash in
onLocalTrackRemoved, where oldTrack doesn't have the `muteHandler` and
`audioLevelHandler` listeners defined.
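A hedged sketch of the guard; the track shape and event names are assumptions, only `muteHandler` and `audioLevelHandler` come from the commit:

```ts
// Hypothetical sketch: only detach the handlers if oldTrack was actually part of
// the conference; otherwise muteHandler/audioLevelHandler are undefined and
// removing them would crash.
interface LocalTrackLike {
  muteHandler?: () => void;
  audioLevelHandler?: (level: number) => void;
  off(event: string, handler: (...args: unknown[]) => void): void;
}

function onLocalTrackRemoved(conferenceTracks: Set<LocalTrackLike>, oldTrack: LocalTrackLike): void {
  if (!conferenceTracks.has(oldTrack) || !oldTrack.muteHandler || !oldTrack.audioLevelHandler) {
    return;   // replaceTrack() was called with a track the conference never owned
  }
  oldTrack.off('mute', oldTrack.muteHandler);
  oldTrack.off('audioLevel', oldTrack.audioLevelHandler);
  conferenceTracks.delete(oldTrack);
}
```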
fix(browser-support): Add audio track to pc always on mobile Safari.
On mobile Safari, if a user joins with audio and video muted, the browser doesn't decode the incoming audio. The workaround is to always add the audio track to the peer connection and mute it if needed.
feat(BridgeChannel): Signal a new videoType for high fps screenshare.
This lets the bridge adjust the bitrate allocation for this source so that layers with higher fps are prioritized over layers with higher resolution.
As a result, endpoints with restricted downlink will receive a high fps low resolution share as opposed to a high resolution low fps screenshare.
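A hedged sketch of signalling the new video type over the bridge channel; the message shape, including the colibriClass value, is an assumption for illustration:

```ts
// Hypothetical sketch: tell the bridge about the source's new video type so it
// can prefer frame rate over resolution for high-fps screenshare.
interface BridgeChannelLike { sendMessage(msg: object): void; }

type VideoType = 'camera' | 'desktop' | 'desktop_high_fps';

function signalVideoTypeChanged(channel: BridgeChannelLike, sourceName: string, videoType: VideoType): void {
  channel.sendMessage({
    colibriClass: 'VideoTypeMessage',   // assumed name for illustration
    sourceName,
    videoType
  });
}
```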
feat(RTC): Add the ability to change desktop share fps.
Provide a method for changing the capture fps for desktop tracks during the call. These changes to the lib are needed for making it configurable from the UI.
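One way to change the capture frame rate of an existing desktop track is the standard `applyConstraints()` API on MediaStreamTrack; the wrapper function name below is an assumption, not the lib's actual method:

```ts
// Hedged sketch: cap the capture fps of a live desktop track mid-call.
async function setDesktopShareFps(desktopTrack: MediaStreamTrack, maxFps: number): Promise<void> {
  await desktopTrack.applyConstraints({ frameRate: { max: maxFps } });
}
```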
fix(moderation): Unmuting after av moderation and no track.
When we start muted, no tracks are created and we were muted by focus (AV moderation), we need to unmute the channels on the bridge after the tracks are eventually created.
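A hedged sketch of that order of operations; the channel interface and function names are hypothetical:

```ts
// Hypothetical sketch: when we started muted without creating tracks and were
// muted by focus (AV moderation), the bridge-side channels still need to be
// unmuted once the tracks are finally created.
interface MediaChannels {
  setAudioMuted(muted: boolean): Promise<void>;
  setVideoMuted(muted: boolean): Promise<void>;
}

async function onTracksCreatedAfterModeration(channels: MediaChannels, startedWithoutTracks: boolean): Promise<void> {
  if (!startedWithoutTracks) {
    return;
  }
  // The bridge still thinks we are muted from the AV-moderation period,
  // so lift the mute now that real tracks exist.
  await channels.setAudioMuted(false);
  await channels.setVideoMuted(false);
}
```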