In TraceablePeerConnection: we no longer inject a recvonly SSRC
when the local video track is muted, so it is expected that no
SSRC is found in the local SDP while the track is muted.
About RTPStatsCollector: at the time this log statement was added,
a case was missed in which the local audio track can be replaced
on the P2P connection when a new audio device is selected.
* fix(permissions): remove space from requesting camera
Chrome throws an error when permissions are queried with "camera "
(trailing space) because the name does not match an expected enum value.
* fix(permissions): check value of returned PermissionStatus
A permissions query returns a PermissionStatus object whose state
indicates whether or not permission has been granted. Check that
value in addition to the existence of the object (see the sketch
after this list).
* fix(permissions): prevent permission being set to undefined
* ref(permissions): move permissions strings to constants
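A minimal sketch of the pattern described in the bullets above, using
the standard Permissions API; the constant and helper names here are
illustrative, not the actual lib-jitsi-meet ones:

    // Permission names must match the Permissions API enum exactly,
    // so no trailing space.
    const PERMISSION_CAMERA = 'camera';
    const PERMISSION_MICROPHONE = 'microphone';

    function isPermissionGranted(name) {
        return navigator.permissions.query({ name })
            .then(status => status.state === 'granted')
            .catch(() => false); // the query itself may be unsupported
    }

    isPermissionGranted(PERMISSION_CAMERA)
        .then(granted => console.log(`camera granted: ${granted}`));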
fix(screenshare): specify source type for fake and proxy (#878)
Spot consumes screensharing both through a camera input and
through a proxy connection. To differentiate which one is
active, declare a source type on the created "desktop" stream.
fix(screenshare): do not limit resolution for fake screenshare (#877)
A camera source can be used as a screenshare source.
Normally, resolution is not capped for screenshare, and the
same will be true for the fake screenshare source.
feat(screenshare): support remote wireless screenshare (#857)
* feat(screenshare): support remote wireless screenshare
- ProxyConnectionService is exposed and meant to be the
facade for creating and updating a peer connection with
another peer, outside of the MUC.
- ProxyConnectionPC wraps JingleSessionPC so the peer
connection handling can be reused.
* attempt to make more configurable
feat(screenshare): use camera as a screenshare source
This feature is intended for Spot. Spot can have an
HDMI -> USB adapter hooked up to it; in that case,
attempting to screenshare should use that adapter
as the screensharing source.
Changes the behavior to actively open a new WebRTC data channel
instead of waiting for the JVB to do it.
Adds an ICE_RESTART_SUCCESS event used to re-initialize the data
channel after an ICE restart, in which case the conference could
have been moved to another bridge.
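A rough sketch of actively opening the channel and re-opening it after
an ICE restart; createDataChannel is standard WebRTC, while the event
wiring and names below are illustrative:

    // Open the bridge data channel from our side instead of waiting
    // for the JVB to do it.
    function openBridgeChannel(peerConnection, onMessage) {
        const channel = peerConnection.createDataChannel('JVB data channel');

        channel.onmessage = onMessage;

        return channel;
    }

    // Re-initialize the channel when ICE restart succeeds, since the
    // conference could have been moved to another bridge.
    // ('eventEmitter' and 'onBridgeMessage' are placeholders.)
    eventEmitter.on('ICE_RESTART_SUCCESS', () => {
        bridgeChannel = openBridgeChannel(peerConnection, onBridgeMessage);
    });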
fix(remote-description): remove the default id of "-" (#845)
Starting in Chrome 71, tracks without a stream are
given the msid "-", which in Plan B incorrectly
triggers an onaddstream event with a default local
stream.
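An illustrative guard for a Plan B style onaddstream handler, assuming
the placeholder/default stream ids are simply skipped (the handler name
is hypothetical):

    peerConnection.onaddstream = event => {
        const streamId = event.stream.id;

        // In Chrome 71+ a track without a stream surfaces with the
        // msid "-"; treat it like the Plan B "default" stream and do
        // not create a remote stream for it.
        if (streamId === '-' || streamId === 'default') {
            return;
        }
        onRemoteStreamAdded(event.stream);
    };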
fix(RTC): Pass options allowing GUM to work with config.
The 'options' identifier is used for different objects, which has led to a bug
where the object that passes along the data provided by 'config.js' is lost
along the way. Code that's further down the execution stack still expects to
process that configuration, which indicates that not passing it along is
unintentional.
With this change, an Android device will adhere to the 'resolution' value
defined in config.js, something it currently does not.
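A hedged sketch of how the 'resolution' value can end up in the
getUserMedia constraints; the actual option plumbing in RTCUtils is more
involved and the helper below is hypothetical:

    // Translate the config.js 'resolution' value into video constraints
    // so that devices (e.g. Android) actually honour it.
    function buildConstraints(options = {}) {
        const resolution = options.resolution || 720;

        return {
            audio: true,
            video: {
                height: { ideal: resolution },
                width: { ideal: Math.round(resolution * 16 / 9) }
            }
        };
    }

    navigator.mediaDevices.getUserMedia(buildConstraints(config))
        .then(stream => { /* create local tracks from the stream */ });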
fix(TPC): ignore "ontrackadded" for existing track
Until M69, Chrome used to consistently emit "onstreamadded" when an
audio/video stream was added for the first time and then emit track
events only if the stream was modified afterwards (e.g. the video track
replaced). However, it looks like it can now first emit "onstreamadded"
and then an additional "ontrackadded" for a MediaStreamTrack that was
already included in the MediaStream signalled by the "onstreamadded"
event. I have not managed to figure out what is causing this, and the
only difference in the SDP is that the remote peer (JVB) includes IPv6
candidates. Anyway, it doesn't hurt to have such a safeguard.
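The safeguard boils down to remembering which tracks were already
processed for a given stream; a simplified sketch (the real
TraceablePeerConnection keeps richer state):

    // Ids of MediaStreamTracks already handled, keyed by MediaStream id.
    const handledTracks = new Map();

    function remoteTrackAdded(stream, track) {
        const tracks = handledTracks.get(stream.id) || new Set();

        if (tracks.has(track.id)) {
            // "ontrackadded" fired for a track already delivered with
            // "onstreamadded": ignore it instead of creating a duplicate
            // remote track.
            return;
        }
        tracks.add(track.id);
        handledTracks.set(stream.id, tracks);

        createRemoteTrack(stream, track); // placeholder for actual handling
    }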
The code for handling device availability has been disabled for a long time,
plus it's ill-named, since it conflates two abstractions: lack of permissions
and lack of devices.
Time for it to rest in the git graveyard.
fix(device-list): workaround for devicechange being fired twice
Chrome fires devicechange twice. This causes the DEVICE_LIST_CHANGED
event to fire twice. The listener in jitsi-meet has async logic
to handle the event, including creation of new tracks. So it
can happen that two devicechange events fire, the jitsi-meet
listener fires twice, creates duplicate tracks, and uses both
tracks. Diffing the device list for an actual change is a workaround
against extra devicechange events.
Note: this does not solve the issue in jitsi-meet of it not
handling quick successive DEVICE_LIST_CHANGED events.
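A sketch of the diffing workaround, comparing the enumerated lists by
deviceId (the real comparison may look at more fields, and the emit is a
placeholder):

    let currentDevices = [];

    function sameDeviceList(a, b) {
        if (a.length !== b.length) {
            return false;
        }
        const ids = new Set(a.map(d => d.deviceId));

        return b.every(d => ids.has(d.deviceId));
    }

    navigator.mediaDevices.addEventListener('devicechange', () => {
        navigator.mediaDevices.enumerateDevices().then(devices => {
            if (sameDeviceList(currentDevices, devices)) {
                // Duplicate devicechange: the list did not actually change.
                return;
            }
            currentDevices = devices;
            eventEmitter.emit('DEVICE_LIST_CHANGED', devices);
        });
    });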
Get rid of all wrappers and use navigator.mediaDevices.getUserMedia, since all
supported platforms have it by now.
Also use the unprefixed versions of WebRTC APIs.
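For reference, the unwrapped call is now just the standard API:

    // No webkit/moz prefixes and no plugin wrappers any more.
    navigator.mediaDevices.getUserMedia({ audio: true, video: true })
        .then(stream => {
            // wrap stream.getTracks() into local tracks
        })
        .catch(error => console.error('getUserMedia failed', error));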
* Enables adapter for Edge.
We were not correctly filtering out all unsupported iceServers, so an error
was thrown and no connection was established.
* Enable desktop sharing for Edge.
Currently, replacing a video track with a desktop sharing one doesn't work
because of: https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/17528460/
If an Edge user joins first and enables desktop sharing, it will work when
others join. We also tried to use replaceTrack/setTrack as a workaround, but
again we hit an error, this time InvalidAccessError.
* Adds the helper function usesAdapter in BrowserCaps.
ref(video-quality): cache max frame height and send on channel open (#785)
* ref(video-quality): cache max frame height and send on channel open
Currently it is possible to try to change the max receive video
frame height before the data channel is open. In that case an error
will be thrown. This change makes it so that the desired frame height
is saved and sent on channel open, avoiding the thrown error when the
max receive video frame height logic is exercised through the
JitsiConference API (see the sketch after this list).
* squash: do the second part of the actual fix
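A hedged sketch of the caching behaviour; the class, method and channel
API names below are illustrative rather than the exact lib-jitsi-meet
ones:

    class ReceiverConstraints {
        constructor(channel) {
            this._channel = channel;
            this._maxFrameHeight = undefined;

            channel.on('open', () => {
                // Flush the cached value once the channel becomes usable.
                if (typeof this._maxFrameHeight !== 'undefined') {
                    channel.sendReceiverVideoConstraint(this._maxFrameHeight);
                }
            });
        }

        setReceiverVideoConstraint(maxFrameHeight) {
            this._maxFrameHeight = maxFrameHeight;

            // Only send immediately when the channel is open; otherwise
            // the cached value is sent from the 'open' handler above, so
            // no error is thrown.
            if (this._channel.isOpen()) {
                this._channel.sendReceiverVideoConstraint(maxFrameHeight);
            }
        }
    }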
Change layer suspension to use parameters in RTPSender (#786)
* Change layer suspension to use parameters in RTPSender
We no longer suspend unused simulcast layers via a bandwidth cap in the
SDP; instead, we use the new parameters in RTPSender to enable and
disable streams explicitly. The main advantage is that the RTPSender
method ramps up immediately when we re-enable the layers (as opposed to
the SDP bandwidth cap, which took 30+ seconds). See the sketch after
this list.
* Fix linter issues
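A sketch of toggling simulcast layers through the standard RTCRtpSender
parameters API (which layers get suspended is simplified here):

    // Enable/disable individual simulcast encodings on the video sender
    // instead of capping bandwidth in the SDP.
    function setSimulcastLayersActive(videoSender, activeFlags) {
        const parameters = videoSender.getParameters();

        if (!parameters.encodings || !parameters.encodings.length) {
            return Promise.resolve();
        }
        parameters.encodings.forEach((encoding, index) => {
            encoding.active = Boolean(activeFlags[index]);
        });

        return videoSender.setParameters(parameters);
    }

    // e.g. keep only the lowest layer while the higher ones are suspended:
    setSimulcastLayersActive(videoSender, [ true, false, false ]);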
core: refactor initialization not to return a Promise (continued)
1. The example was using the Promise return value of JitsiMeetJS.init
which is no longer possible/correct after commit "core: refactor
initialization not to return a Promise".
2. We went back and forth with the value returned by JitsiMeetJS.init:
we initially didn't return a value, then we started returning a Promise,
and now we're not returning a value. Whether we'll go back to returning
a value is up in the air. Anyway, the return value is practically
determined by the last in a chain of function calls: JitsiMeetJS, RTC,
RTCUtil. Since the chain is not really documented, it will not hurt much
to make it easier to refactor the chain by "composing" the functions.
core: refactor initialization not to return a Promise
There is nothing asynchronous about the initialization process (anymore), thus
turn it into a synchronous method.
In addition, WebRTC support is absolute: it cannot change from not being
supported to being supported (as it previously could, thanks to Temasys), so
get rid of the ancillary logic to support that.
Last, introduce a way to check if WebRTC is supported in the current
environment: JitsiMeetJS.isWebRtcSupported().
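Usage after the refactor looks roughly like the following; the init
options are placeholders:

    if (JitsiMeetJS.isWebRtcSupported()) {
        // init() is synchronous now and no longer returns a Promise.
        JitsiMeetJS.init(/* optional init options */);
    } else {
        console.warn('WebRTC is not supported in this environment');
    }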