Get rid of all wrappers and use navigator.mediaDevices.getUserMedia, since all
supported platforms have it by now.
Also use the unprefixed versions of WebRTC APIs.
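A minimal example of the unprefixed, promise-based API this switches to
(the constraints are illustrative):

    navigator.mediaDevices.getUserMedia({ audio: true, video: true })
        .then(stream => {
            // Use the MediaStream, e.g. attach it to a video element.
        })
        .catch(error => console.error('getUserMedia failed:', error));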
* Enable adapter for Edge.
We were not correctly filtering out all unsupported iceServers, so an error
was thrown and no connection was established.
* Enable desktop sharing for Edge.
Currently, replacing the video track with the desktop sharing one doesn't work because of: https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/17528460/
If the Edge user joins first and enables desktop sharing, it works once others join. We also tried using replaceTrack/setTrack as a workaround (see the sketch after this list), but again hit an error, this time an InvalidAccessError.
* Add the helper function usesAdapter to BrowserCaps.
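A minimal sketch of the attempted replaceTrack workaround (standard
RTCRtpSender API; pc and desktopTrack, the peer connection and the
screen-share track, are assumptions):

    const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
    sender.replaceTrack(desktopTrack)
        .then(() => console.log('video track replaced'))
        .catch(error => console.error('replaceTrack failed:', error)); // InvalidAccessError on Edge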
ref(video-quality): cache max frame height and send on channel open (#785)
* ref(video-quality): cache max frame height and send on channel open
Currently it is possible to try to change the max receive video
frame height before the data channel is open. In that case an error
will be thrown. This change makes it so that the desired frame height
is saved and sent on channel open, avoiding the thrown error when the
max receive video frame height logic is exercised through the
JitsiConference API. (A sketch of the pattern follows after this list.)
* squash: do the second part of the actual fix
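A minimal sketch of the caching pattern described above (the class,
method, and message names are illustrative, not the repo's actual code):

    class BridgeChannel {
        constructor() {
            this._isOpen = false;
            this._pendingMaxFrameHeight = undefined;
        }

        setReceiverVideoConstraint(maxFrameHeight) {
            if (this._isOpen) {
                this._send({ colibriClass: 'ReceiverVideoConstraint', maxFrameHeight });
            } else {
                // Channel not open yet: cache the value instead of throwing.
                this._pendingMaxFrameHeight = maxFrameHeight;
            }
        }

        _onChannelOpen() {
            this._isOpen = true;
            if (typeof this._pendingMaxFrameHeight === 'number') {
                this.setReceiverVideoConstraint(this._pendingMaxFrameHeight);
                this._pendingMaxFrameHeight = undefined;
            }
        }
    }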
Change layer suspension to use parameters in RTPSender (#786)
* Change layer suspension to use parameters in RTPSender
We no longer suspend unused simulcast layers via a bandwidth cap in the
SDP; instead we use the new parameters in RTPSender to enable and
disable streams explicitly. The main advantage is that the RTPSender
method ramps up immediately when we re-enable the layers (as opposed to
the SDP bandwidth cap, which took 30+ seconds). A sketch follows after
this list.
* Fix linter issues
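A minimal sketch of the parameters approach (standard
RTCRtpSender.getParameters/setParameters API; pc and the choice of
which layers to suspend are assumptions):

    const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
    const parameters = sender.getParameters();

    // Suspend all simulcast layers except the lowest one.
    parameters.encodings.forEach((encoding, index) => {
        encoding.active = index === 0;
    });

    sender.setParameters(parameters)
        .then(() => console.log('layers suspended'))
        .catch(error => console.error('setParameters failed:', error));

Re-enabling is symmetric: set active back to true and call setParameters
again; the layers ramp up immediately instead of waiting out the SDP
bandwidth cap.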
core: refactor initialization not to return a Promise (continued)
1. The example was using the Promise return value of JitsiMeetJS.init,
which is no longer possible/correct after commit "core: refactor
initialization not to return a Promise".
2. We went back and forth on the value returned by JitsiMeetJS.init:
we initially didn't return a value, then we started returning a Promise,
and now we're back to not returning a value. Whether we'll return to
returning a value is up in the air. In any case, the return value is
practically determined by the last in a chain of function calls:
JitsiMeetJS, RTC, RTCUtils. Since the chain is not really documented,
it will not hurt to make the chain easier to refactor by "composing"
the functions, as sketched below.
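A minimal sketch of the composition idea (module names from the message
above; the bodies are illustrative):

    // Each init simply returns the result of the next link in the chain,
    // so the eventual return value is decided in exactly one place.
    JitsiMeetJS.init = options => RTC.init(options);
    RTC.init = options => RTCUtils.init(options);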
core: refactor initialization not to return a Promise
There is nothing asynchronous about the initialization process (anymore), thus
turn it into a synchronous method.
In addition, WebRTC support is absolute: it cannot change from not being
supported to being supported (as it previously could, thanks to Temasys), so get
rid of the ancillary logic to support that.
Lastly, introduce a way to check whether WebRTC is supported in the current
environment: JitsiMeetJS.isWebRtcSupported().
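A minimal sketch of the new synchronous usage (based on the API
described above; options is an assumption):

    if (!JitsiMeetJS.isWebRtcSupported()) {
        throw new Error('WebRTC is not supported in this environment');
    }

    // No longer returns a Promise; initialization is synchronous.
    JitsiMeetJS.init(options);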
fix(muting): do not re-assign value of local track containers (#781)
RTCUtils.attachMediaStream was changed to no longer return elements;
instead it returns undefined by default. When mapping over the
containers and calling RTCUtils.attachMediaStream, the containers
would therefore be replaced with undefined.
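A minimal sketch of the bug and the fix (variable names are
illustrative):

    // Before: map() stored the return value, so every container became
    // undefined once attachMediaStream stopped returning the element.
    this.containers = this.containers.map(
        container => RTCUtils.attachMediaStream(container, this.stream));

    // After: keep the original container references and just attach.
    this.containers.forEach(
        container => RTCUtils.attachMediaStream(container, this.stream));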
In the case where we had created recvonly streams (start muted on FF) and we are unmuting, this creates a sendrecv stream and adds an msid. We need to signal the msid so that listeners are notified and can create the appropriate audio/video element and start receiving the stream.
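For illustration, after unmuting the video m-line carries roughly these
attributes (the identifiers are placeholders):

    a=sendrecv
    a=msid:streamId trackId

It is the newly added msid line that must be signaled so listeners can
create the matching element.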
* Update reference to Prosody bugtracker.
* Update reference to Prosody module.
* Remove unused child element.
The child element in the query suggests that a specific host is being requested. Neither
the Prosody implementation nor the XEP-0215 specification defines this element. Its
inclusion is ignored by the XEP-0215 server implementation.
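A minimal sketch of the cleaned-up XEP-0215 request using Strophe's
builder (the namespace is from XEP-0215; the repo's actual code may
differ):

    const iq = $iq({ type: 'get', to: domain })
        .c('services', { xmlns: 'urn:xmpp:extdisco:2' });
    connection.sendIQ(iq, onSuccess, onError);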
feat(RN): don't use local O/A for mute on React Native
With the update of react-native-webrtc to M67, the ability to stop the camera
when a track is disabled was introduced, so there is no longer a need to do
a local O/A to remove the track for video muting.
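A minimal sketch of what muting reduces to under M67+ (localVideoTrack
is an assumption):

    // Disabling the track now also stops the camera on react-native-webrtc,
    // so no renegotiation (local offer/answer) is needed.
    localVideoTrack.enabled = false;

    // Unmuting restarts the camera.
    localVideoTrack.enabled = true;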
When receiving presence, the XML is converted to JSON. In
the process, Strophe.getText is used, which calls Strophe's
xmlescape utility function. This is not desired, as we want
the raw value, especially for the display name.
Note: this issue only affects presence, as presence is the only
place that calls the helper packet2JSON, which is the only
place that calls Strophe.getText.
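A minimal sketch of the difference (Strophe.xmlunescape is a real
Strophe utility; whether the fix uses it or reads the text node
directly is an assumption):

    // Strophe.getText returns XML-escaped text, e.g. '&' becomes '&amp;'.
    const escaped = Strophe.getText(node);

    // Two ways to recover the raw value:
    const raw = Strophe.xmlunescape(escaped);
    const rawDirect = node.textContent;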
ref(recording): change implementation to match VideoSIPGW (#769)
* ref(recording): change implementation to match VideoSIPGW
VideoSIPGW takes in a chat room and uses instance variables
on the chat room. RecordingManager has been changed to
mirror this approach because of the case where Jitsi is
deployed on a domain requiring authentication. In that
case, the initial chat room is created and fails, a
new chat room (connection) is created for authentication,
authentication is put onto the failed chat room, and the
failed chat room is used. As such, RecordingManager
should not be tied to chat room creation itself but
rather to the first chat room.
* squash: use git mv to detect capitalization change