Release date: 2024-09-27
New Features
When the media player plays transparent-effect files, a new ZegoAlphaLayoutType > ZegoAlphaLayoutTypeRightTop enumeration is added to support Alpha data concatenated above the RGB data, on the right side. When this enumeration is set, only a 0.5x scaling factor is supported.
For related API, please refer to loadResourceWithConfig, ZegoAlphaLayoutTypeRightTop
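A minimal Swift sketch of this usage is shown below; the file path is a placeholder and the exact Swift-imported property and method names (for example alphaLayout and loadResourceWithConfig) are assumptions that may differ slightly between SDK versions.

```swift
import ZegoExpressEngine

// Illustrative sketch: play a transparent-effect file whose Alpha data is stitched
// at the top right of the RGB data. Property names follow this changelog entry.
func playTransparentEffect() {
    guard let player = ZegoExpressEngine.shared().createMediaPlayer() else { return }

    let resource = ZegoMediaPlayerResource()
    resource.filePath = "/path/to/alpha_effect.mp4"   // placeholder path
    resource.alphaLayout = .rightTop                  // ZegoAlphaLayoutTypeRightTop

    player.loadResourceWithConfig(resource) { errorCode in
        if errorCode == 0 { player.start() }
    }
}
```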
Supports setting the stream resource type separately for before and after the audience joins the stage, making the streaming method more flexible. It can be set to any of the following: RTC streaming, ultra-low latency live streaming (L3), or CDN streaming. For example, it can be used to implement a live streaming scenario where the audience defaults to L3 streaming before joining the stage, switches to RTC streaming during the interaction, and resumes L3 streaming after leaving the stage.
For related API, please refer to startPlayingStream, ZegoStreamResourceModeCustom, ZegoPlayerConfig > customResourceConfig
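A minimal Swift sketch of this configuration is shown below; the ZegoCustomPlayerResourceConfig field names follow this changelog entry and the enum case spellings are assumptions.

```swift
import ZegoExpressEngine

// Illustrative sketch: the audience plays via L3 by default, switches to RTC while on
// stage, and falls back to L3 after leaving the stage.
func playWithCustomResourceMode(streamID: String, canvas: ZegoCanvas?) {
    let custom = ZegoCustomPlayerResourceConfig()
    custom.beforePublish = .l3      // before the audience member joins the stage
    custom.publishing = .rtc        // while the audience member is on stage
    custom.afterPublish = .l3       // after leaving the stage

    let config = ZegoPlayerConfig()
    config.resourceMode = .custom   // ZegoStreamResourceModeCustom
    config.customResourceConfig = custom

    ZegoExpressEngine.shared().startPlayingStream(streamID, canvas: canvas, config: config)
}
```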
Enhancements
When calling the loginRoom interface to log in to a room, the userName parameter, which was previously required, is now optional.
For related API, please refer to loginRoom
For related API, please refer to setVoiceChangerParam
Bug Fixes
Release date: 2024-08-14
Bug Fixes
Release date: 2024-07-26
New Features
Note:
This feature is only available for iOS 13 and above. Please contact ZEGOCLOUD technical support to enable this feature.
This feature consumes a lot of phone performance, please use it with caution.
Developers can use the interface of [AVCaptureMultiCamSession.multiCamSupported] in the system library to determine if the simultaneous use of front and rear cameras is supported.
The new enumeration value ZegoVideoSourceTypeSecondaryCamera is added to identify the video from the second camera. You can use the [setVideoSource] interface to publish the video from each camera separately, and use [useFrontCamera] to switch between the front and rear views corresponding to ZegoVideoSourceTypeCamera and ZegoVideoSourceTypeSecondaryCamera. This capability can be applied to scenarios such as dual-camera live streaming.
For related API, please refer to setVideoSource, useFrontCamera, ZegoVideoSourceType > ZegoVideoSourceTypeSecondaryCamera, ZegoVideoSourceType > ZegoVideoSourceTypeCamera
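A minimal Swift sketch of dual-camera publishing is shown below; it assumes iOS 13+, that the feature has been enabled by ZEGOCLOUD technical support, and that the stream IDs are placeholders. Exact Swift signatures may differ by SDK version.

```swift
import AVFoundation
import ZegoExpressEngine

// Illustrative sketch: publish the front and rear cameras on two channels.
func startDualCameraPublishing() {
    guard AVCaptureMultiCamSession.isMultiCamSupported else { return }

    let engine = ZegoExpressEngine.shared()

    // Main channel uses the default camera source, aux channel uses the second camera.
    engine.setVideoSource(.camera, channel: .main)
    engine.setVideoSource(.secondaryCamera, channel: .aux)

    // Front view on the main channel, rear view on the aux channel.
    engine.useFrontCamera(true, channel: .main)
    engine.useFrontCamera(false, channel: .aux)

    engine.startPublishingStream("front_cam_stream", channel: .main)
    engine.startPublishingStream("rear_cam_stream", channel: .aux)
}
```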
Note:
This feature is only available for iOS 14 and above. Please contact ZEGOCLOUD technical support if you wish to use this feature.
You can use the system interface [AVCaptureDeviceDiscoverySession] to determine in advance whether the device supports 3 rear cameras.
When the device has 3 rear cameras (ultra-wide, main, and telephoto), after enabling the relevant configuration, if you update the zoom factor through [setCameraZoomFactor], the SDK will automatically select the clearest camera for video capture based on the zoom factor.
For related API, please refer to setCameraZoomFactor
Note:
Please contact ZEGOCLOUD technical support if you wish to use this feature.
Configure before initializing video super-resolution.
Upscaling video beyond the resolution a device can support can introduce additional artifacts and degrade video quality. Therefore, the video super-resolution capability now also supports magnification factors of 1.33x and 1.5x to achieve the best effect on different devices.
Note: Please contact ZEGOCLOUD technical support if you wish to use this feature.
Maintains clean noise reduction and high-fidelity voice quality at a 10 ms delay, making it suitable for latency-sensitive scenarios such as in-game voice chat, party voice in games, and real-time singing. AI noise reduction currently supports balanced mode, low-latency mode, and lightweight mode.
For related API, please refer to setANSMode
Note: If a stream is set to allow review, it will not be submitted for review if the developer does not initiate a review task.
When the review interface is called, it reviews all streams in the room by default. If the client wants a certain stream not to be submitted for review, set the review flag parameter [streamCensorFlag] to 1 (not allowed) when calling the [startPublishingStream] interface to start publishing the stream.
For related API, please refer to startPublishingStream, ZegoPublisherConfig > streamCensorFlag
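A minimal Swift sketch is shown below; the streamCensorFlag field name and its value follow this changelog entry and should be confirmed against your SDK headers.

```swift
import ZegoExpressEngine

// Illustrative sketch: exclude one stream from content review.
func publishWithoutReview(streamID: String) {
    let config = ZegoPublisherConfig()
    config.streamCensorFlag = 1   // 1 = this stream is not submitted for review

    ZegoExpressEngine.shared().startPublishingStream(streamID, config: config, channel: .main)
}
```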
The media player's playback speed range has been expanded from [0.5, 4.0] to [0.3, 4.0].
For related API, please refer to setPlaySpeed
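A minimal Swift sketch is shown below; it assumes a media player that has already loaded a resource.

```swift
import ZegoExpressEngine

// Illustrative sketch: play a loaded media file at 0.3x, the new lower bound of the
// supported speed range [0.3, 4.0].
func playAtSlowestSpeed(player: ZegoMediaPlayer) {
    player.setPlaySpeed(0.3)
    player.start()
}
```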
Note: Please contact ZEGOCLOUD technical support for using this feature.
When using ZEGO ultra-low latency live streaming (L3) for playback, it supports smooth switching between different bitrates based on the user's network bandwidth, ensuring a smooth playback experience.
[ZegoPlayerConfig] added the adaptiveSwitch and adaptiveTemplateIDList parameters to support adaptive bitrate switching based on the network environment in OnlyL3 playback mode.
For related API, please refer to startPlayingStream, adaptiveSwitch, adaptiveTemplateIDList
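A minimal Swift sketch is shown below; the adaptiveSwitch and adaptiveTemplateIDList field names follow this changelog entry, the template IDs are placeholders issued by ZEGOCLOUD, and the feature must be enabled by technical support.

```swift
import ZegoExpressEngine

// Illustrative sketch: enable adaptive bitrate switching for OnlyL3 playback.
func playL3WithAdaptiveBitrate(streamID: String, canvas: ZegoCanvas?) {
    let config = ZegoPlayerConfig()
    config.resourceMode = .onlyL3
    config.adaptiveSwitch = 1                  // enable bitrate adaptive switching
    config.adaptiveTemplateIDList = [1, 2]     // placeholder transcoding template IDs

    ZegoExpressEngine.shared().startPlayingStream(streamID, canvas: canvas, config: config)
}
```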
Note: Please contact ZEGOCLOUD technical support for using this feature.
Added the [switchPlayingStream] interface, which is used to smoothly switch to another CDN stream while playing a CDN stream: the old stream is stopped only after the new stream has been pulled successfully.
For example, when the video screen switches from a small window to a large window, the video needs to be switched to a higher bitrate and resolution stream. Only after successfully pulling the new stream will the old stream be stopped, in order to achieve a smooth transition effect.
For related API, please refer to switchPlayingStream
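A minimal Swift sketch is shown below; the Swift signature of switchPlayingStream is an assumption based on this changelog entry.

```swift
import ZegoExpressEngine

// Illustrative sketch: smoothly switch from a low-bitrate CDN stream to a high-bitrate
// one; the old stream is stopped only after the new one is pulled successfully.
func switchToHighBitrateStream(from fromStreamID: String, to toStreamID: String) {
    let config = ZegoPlayerConfig()
    config.resourceMode = .onlyCDN   // this capability targets CDN playback

    ZegoExpressEngine.shared().switchPlayingStream(fromStreamID, toStreamID: toStreamID, config: config)
}
```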
Note: Please contact ZEGOCLOUD technical support if you need to use this feature.
The scope of the local client's encoding compatibility negotiation can be either all publishing users in the room or all users. If any user within the specified scope does not support H.265, the local client's encoding dynamically falls back.
For related API, please refer to loginRoom, startPublishingStream, ZegoPublisherConfig > codecNegotiationType, ZegoRoomConfig > capabilityNegotiationTypes
Bug Fixes
Deleted
Removed the interfaces related to CDN Plus live streaming.
For related API, please refer to ZegoStreamResourceModeCDNPlus
Release date: 2024-06-05
Bug Fixes
Release date: 2024-05-29
New Features
Note: Please contact ZEGOCLOUD technical support if you need to use this feature.
The audio used for in-ear monitoring can now also be played from the speaker.
The media player has added the [enableVoiceChanger] interface, which supports enabling voice changer effects for the audio output of the media player, and allows selection of the desired pitch-shifting effects.
For related API, please refer to ZegoMediaPlayer > enableVoiceChanger
Enhancements
Optimized the noise reduction effect of "Balanced Mode" in AI scene-based noise reduction, further enhancing the clarity and stability of the human voice without compromising performance and achieving cleaner noise suppression.
Bug Fixes
Deleted
To enhance the playback experience during voice changing, the media player [ZegoMediaPlayer.setVoiceChangerParam] interface is deprecated. Please use [ZegoMediaPlayer.enableVoiceChanger] instead.
For related API, please refer to ZegoMediaPlayer > enableVoiceChanger
Release date: 2024-05-06
New Features
Note: Starting with this release, iOS 11.0 and earlier versions are no longer supported.
Starting from 2024-04-29, all apps listed on the App Store must support iOS 17.0. For details, please refer to the official instructions on the Apple Developer website.
Release date: 2024-04-23
New Features
ZegoVoiceChangerPreset added enumeration values for two voice-changing effects, Autobot and OutOfPower, to enrich the voice-changing effects.
For related API, please refer to setVoiceChangerPreset
Note: Please contact ZEGOCLOUD technical support if you need to use this feature.
For related API, please refer to enableViewMirror
ZegoMixerTask adds a new parameter, mixImageCheckMode, which controls whether a mixing task can still be initiated normally when the backgroundImageURL, inputList.imageInfo.url, or watermark.imageURL image resources fail verification.
This function is disabled by default (the default value of mixImageCheckMode is 0), meaning image verification is performed strictly: the original parameter restrictions must be met before the mixing task can be initiated normally.
For related API, please refer to startMixerTask
The AI voice changing function has certain requirements on the performance of the running device. You can use the [isAIVoiceChangerSupported] interface to determine in advance whether the device can support the AI voice changing function.
For related API, please refer to isAIVoiceChangerSupported
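A minimal Swift sketch of this capability check is shown below.

```swift
import ZegoExpressEngine

// Illustrative sketch: check device capability before creating an AI voice changer.
func createAIVoiceChangerIfSupported() -> ZegoAIVoiceChanger? {
    let engine = ZegoExpressEngine.shared()
    guard engine.isAIVoiceChangerSupported() else {
        return nil   // device performance is insufficient for AI voice changing
    }
    return engine.createAIVoiceChanger()
}
```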
Enhancements
Optimized the super-resolution effect by reducing the sharpening applied by the algorithm, improving subjective quality when the original picture is noisy or contains faces. For example, flaws on the anchor's face are not highlighted, and the anchor's hairline does not become more obvious.
Bug Fixes
Release date: 2024-03-29
Enhancements
Updated the privacy manifest file PrivacyInfo.xcprivacy within the iOS SDK.
Note: If you have integrated an SDK version prior to 3.13.2 and intend to publish to the App Store, download the latest SDK and copy the PrivacyInfo.xcprivacy file to the corresponding location in the older SDK version.
Please upgrade the privacy manifest file PrivacyInfo.xcprivacy within the iOS SDK to the new version. For detailed instructions, refer to the PrivacyInfo.xcprivacy file located in the ZegoExpressEngine.framework directory within the SDK package.
Bug Fixes
Release date: 2024-03-14
New Features
Note:
This feature is only available for use during internal video capture.
When this feature is enabled, there may be delays or cropping of the image, so please use it accordingly.
The new video stabilization feature is added to reduce the impact of camera shake during internal video capture and improve the quality of video capture.
For related API, please refer to setCameraStabilizationMode
Note:
The security of this feature is slightly lower compared to traditional methods. Please use it with caution.
When using this feature, set ZegoCDNConfig.protocol to quic.
[ZegoCDNConfig] adds the [quicConnectMode] attribute, which allows developers to use the QUIC protocol for CDN streaming. Set quicConnectMode to 1 for QUIC connection mode, enabling 0-RTT connection and fast service activation. Currently compatible with CDN live streaming products from Huawei, Wangsu, Tencent, and other vendors.
This feature is not enabled by default (quicConnectMode is set to 0, indicating normal connection establishment).
For related API, please refer to ZegoCDNConfig > quicConnectMode
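A minimal Swift sketch is shown below; the quicConnectMode field follows this changelog entry, the CDN URL is a placeholder, and the feature must be enabled by ZEGOCLOUD technical support.

```swift
import ZegoExpressEngine

// Illustrative sketch: pull a CDN stream over QUIC with 0-RTT connection establishment.
func playCdnStreamOverQuic(streamID: String, canvas: ZegoCanvas?) {
    let cdnConfig = ZegoCDNConfig()
    cdnConfig.url = "https://example.com/live/stream.flv"   // placeholder CDN URL
    cdnConfig.`protocol` = "quic"                            // required for QUIC connect mode
    cdnConfig.quicConnectMode = 1                            // 1 = QUIC connection, 0 = normal (default)

    let config = ZegoPlayerConfig()
    config.resourceMode = .onlyCDN
    config.cdnConfig = cdnConfig

    ZegoExpressEngine.shared().startPlayingStream(streamID, canvas: canvas, config: config)
}
```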
Note: This feature only takes effect when initiating a CDN relay (re-publishing a stream to CDN). If a disconnection occurs during the relay, the SDK maintains its retry logic and no callback notification is sent in that case.
When initiating a relay task, you can set a timeout for the relay CDN through the [addPublishCdnUrl] interface to monitor whether the stream exists. For example, if the developer has initiated a relay task but the stream has not started publishing yet, once the set timeout expires the SDK returns a callback notification indicating that the stream does not exist.
This callback notification is sent only to the relay initiator, not to the publishing initiator. If the relay initiator and the publishing initiator are not the same user, it is recommended that developers initiate the relay from the server side and receive this notification there.
For related API, please refer to addPublishCdnUrl
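A minimal Swift sketch is shown below; the timeout parameter and its Swift signature are assumptions based on this changelog entry, and in production the relay is usually initiated server-side.

```swift
import ZegoExpressEngine

// Illustrative sketch: relay (re-publish) a stream to CDN with a timeout that monitors
// whether the stream exists.
func relayStreamToCdn(streamID: String, cdnURL: String) {
    ZegoExpressEngine.shared().addPublishCdnUrl(cdnURL, streamID: streamID, timeout: 30) { errorCode in
        // A non-zero errorCode can indicate that the stream did not start publishing
        // within the timeout, i.e. the stream does not exist.
        print("addPublishCdnUrl result: \(errorCode)")
    }
}
```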
[ZegoDataRecordProgress] adds the [quality] attribute, which can be used to callback the quality data of the recorded file, such as frame rate and bit rate, during the local recording process.
For related API, please refer to onCapturedDataRecordProgressUpdate
Custom video rendering supports independent channel control. For example, for a specified stream ID, only SDK internal rendering is performed without executing custom rendering.
For related API, please refer to enableCapturedVideoCustomVideoRender, enableRemoteVideoCustomVideoRender
Note:
This feature may cause performance impact, please use it judiciously.
Developers need to manage the CVPixelBuffer data returned by the SDK interface through CVPixelBufferPool.
After obtaining the video data processed by the Express SDK, you can perform additional video pre-processing operations (such as beautification, implemented by the developer) or use the processed video data directly for preview or publishing.
For related API, please refer to sendCustomVideoProcessedCVPixelBuffer
Note: The external capture function and pre-processing function cannot be used at the same time, otherwise abnormal images may occur when playing streams.
After enabling the external capture function, you can use the [setLowlightEnhancement] and [enableColorEnhancement] interfaces to separately enable low-light enhancement and color enhancement to adjust the captured images according to your business needs.
For related API, please refer to setLowlightEnhancement, enableColorEnhancement
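A minimal Swift sketch is shown below; the ZegoColorEnhancementParams field shown is an assumption and should be confirmed against your SDK headers.

```swift
import ZegoExpressEngine

// Illustrative sketch: brighten and color-enhance externally captured video before publishing.
func enhanceExternallyCapturedVideo() {
    let engine = ZegoExpressEngine.shared()

    // Automatically brighten dark frames from the external capture source.
    engine.setLowlightEnhancement(.auto, channel: .main)

    // Boost color saturation while keeping skin tones natural.
    let params = ZegoColorEnhancementParams()
    params.intensity = 0.5
    engine.enableColorEnhancement(true, params: params, channel: .main)
}
```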
Note: Please contact ZEGOCLOUD technical support if you need to use this feature.
When some users in the room do not support the H.265 format, publishing ends that support it will fall back to the H.264 format and republish the stream.
Enhancements
Optimized the callback notification logic of the media streaming engine and added error callbacks for unsupported audio sampling rates (for example, a 24K sampling rate), helping developers quickly locate problems.
Optimized color enhancement algorithm performs better than previous versions in scenes with high color saturation.
To adapt to the new release rules of Apple applications, the iOS SDK provides a list of privacy files. For details, please refer to the PrivacyInfo.xcprivacy in the ZegoExpressEngine.framework folder in the SDK package.
Bug Fixes
Release date: 2024-01-16
Bug Fixes
Release date: 2024-01-05
Bug Fixes
Release date: 2024-01-03
Bug Fixes
Release date: 2023-12-27
New Features
Note:
Please contact ZEGOCLOUD technical support if you need to use this feature.
The plugin cannot be used alone and must be used with Express SDK.
The copyrighted-music function now supports pluginization. When a developer's business scenario only requires copyrighted-music-related updates, the plugin can be integrated and updated independently without updating the Express SDK, enabling a smooth migration.
Note: The function retrieves a real-time stream list inside the room. If the room service is disconnected, the results obtained may not be accurate.
Developers are supported to obtain the stream list inside the room from the client, which can be used to handle related business logic.
For related API, please refer to getRoomStreamList
Note: Please contact ZEGOCLOUD technical support if you need to use this feature.
Support is provided for adding silent frames to the audio and video streams that are pushed to the CDN. This can be used to avoid issues such as stuttering or audio-video synchronization problems caused by timestamp discrepancies.
Support for obtaining frame rate statistics of the currently playing media file, which can be used for data display, anomaly monitoring, etc.
For related API, please refer to getPlaybackStatistics
Support local caching of network resources, so that if the same network resource needs to be played, cached data will be prioritized, enhancing user experience.
For related API, please refer to enableLocalCache
Note: Please contact ZEGOCLOUD technical support if you need to use this feature.
Supports triggering system vibration feedback normally when vibration-related APIs are called while using media volume.
Note: Please contact ZEGOCLOUD technical support if you need to use this feature.
Support obtaining the brightness value captured by the camera, which can be used for related logic processing on the business side, such as determining whether the camera is blocked.
Enhancements
Note: Please contact ZEGOCLOUD technical support if you need to use this feature.
Optimize the picture-in-picture function, which allows continued playback in picture-in-picture mode after closing the system menu.
Bug Fixes
Release date: 2023-11-29
New Features
Note: If you need to use this feature, please contact ZEGOCLOUD business personnel.
By applying leading coding and decoding algorithms and other video pre-processing capabilities in the cloud transcoding service, we continuously optimize the smoothness and clarity of video playback, significantly improving the image quality. This feature is suitable for the following scenarios:
Showroom live streaming scenes with high viewership. It ensures stable video transmission and high quality while saving bandwidth costs; without affecting the image quality, it can reduce the bitrate by about 30%.
Danmaku game live streaming, sports live streaming, and other scenes with rich color and texture details in the video content. Under the same bitrate conditions, it can provide a higher definition viewing experience.
For related API, please refer to ZegoMixerOutputVideoConfig > enableLowBitrateHD
For various cameras and other devices that capture images, if the colors appear grayish or have low saturation, we support enhancing the colors while preserving the natural skin tones. This will make the images more vibrant and brighter, creating a more realistic visual experience for the human eye.
For related API, please refer to enableColorEnhancement
Support sending real-time room messages to specified clients or client servers; message types are divided into normal and ordered, with the latter ensuring that messages are received strictly in order. This feature is suitable for scenarios where the anchor needs to manage the microphone positions in the room, for example:
Send messages to users who need to mute through the anchor client, and the receiving client will mute accordingly.
When the anchor wants to kick a user out of the room, send a message to the client server of the other party through the anchor client, and kick out the user.
For related API, please refer to sendTransparentMessage
Note: This feature only supports pre-processing of screenshots and does not support other processing such as rotation or watermarking.
When the video format output by the capture device is MJPEG, hardware decoding acceleration is enabled by default to prevent issues such as insufficient frame rate due to insufficient device performance.
This feature is mainly intended for capture devices with 4K resolution.
Note:
This feature is not enabled by default, meaning the server uses the default configuration values.
This feature may increase latency, so use it judiciously.
The automatic stream mixing interface supports setting a buffer low-water mark to control the lower limit of the adaptive adjustment range of the mixing server's stream cache. This helps maintain a balance between mixing latency and video stuttering caused by unstable publishing from the source. This feature only takes effect on new input streams and does not affect input streams that have already started mixing.
For example, in a real-time karaoke (KTV) scenario, slight fluctuations in the publishing network at the source may cause mixing stutter, which in turn increases the likelihood of stuttering for viewers. By adjusting this lower limit, you can reduce viewer stuttering, but this will increase latency.
For related API, please refer to ZegoAutoMixerTask > minPlayStreamBufferLength
Newly added support for using live streams as input streams for mixing; the URL of the live input stream supports both RTMP and HTTP-FLV protocols. This feature is suitable for mixing the RTC video streams of hosts' interactive broadcasting with cloud sports live streams, game live streams, etc., to achieve scenarios such as game or sports commentary in live broadcasting.
When using custom audio and video capture function and the corresponding audio capture sources have inconsistent delays, you can customize the audio offset value during mixing to achieve audio-video synchronization after mixing output, ensuring a better experience for the audience.
For related API, please refer to ZegoMixerInput > advancedConfig
The media player supports throwing relevant callback notifications to developers when the video resolution changes. This feature is suitable for scenarios where the resolution of the streaming screen changes multiple times and requires adjusting the encoding resolution on the streaming end and matching the rendering view size on the receiving end.
For related API, please refer to [videoSizeChanged]
The sound effect player supports setting the streaming volume and local playback volume separately, ensuring that the volume on both ends, local and remote, is within an appropriate range.
For related API, please refer to ZegoAudioEffectPlayer > setPublishVolume, ZegoAudioEffectPlayer > setPlayVolume, ZegoAudioEffectPlayer > setPublishVolumeAll, ZegoAudioEffectPlayer > setPlayVolumeAll
Enhancements
Optimize server-side mix streaming and single-stream transcoding capabilities to improve encoding efficiency and achieve a 5% or more increase in subjective and objective video quality at the same bitrate.
After the user successfully logs in on device A, device A loses network connection. Then, the user logs in successfully on device B using the same userID. If the network connection on device A is restored and a reconnection is attempted, it will fail and throw error code 1002086, indicating that the userID is already logged in another device.
Bug Fixes
Release date: 2023-11-18
Bug Fixes
Release date: 2023-11-09
Bug Fixes
Release date: 2023-10-13
New Features
Note:
The AI Voice-Changing function is a paid function. If you need to apply for a trial or inquire about the official charging standards, please contact ZEGOCLOUD business personnel.
The current official website SDK does not include this function. If necessary, please contact ZEGOCLOUD technical support for special packaging.
The new AI voice-changing function works like Conan's bow tie voice changer in real-time calls: it faithfully reproduces the timbre and rhythm of the target character while retaining the user's speech speed, emotion, and intonation, and timbres can be switched at will. With ultra-low latency, users can enjoy it in social chat, live streaming, game voice, and other scenarios.
For related API, please refer to createAIVoiceChanger, destroyAIVoiceChanger
Note:
The current official website SDK does not include this function. If necessary, please contact ZEGOCLOUD technical support for special packaging.
The video filling method of the virtual background is centered and proportionally scaled. When the video is too large, the excess part will be cropped.
When using the subject segmentation function, the virtual background supports the use of video materials. The final frame rate of the video materials will be consistent with the encoding frame rate and played in a loop.
For related API, please refer to enableVideoObjectSegmentation
The media player supports accompaniment sound quality enhancement, which improves the sound quality of the accompaniment and the atmosphere of the scene. It is suitable for chat rooms, karaoke and other scenes.
For related API, please refer to enableLiveAudioEffect
Note: Since audio dump files are sensitive privacy data of users, developers must read ZEGOCLOUD Privacy Policy carefully when implementing this capability. In addition, when collecting audio Dump files, please indicate the purpose of Express SDK collection when obtaining user authorization and consent.
Supports saving and uploading audio data before and after processing, which can be used to locate audio-related problems, improve troubleshooting efficiency, and shorten access time.
For related API, please refer to startDumpData, stopDumpData, uploadDumpData, removeDumpData, onRequestDumpData, onStartDumpData, onStopDumpData, onUploadDumpData
Supports the extraction, encoding, and transmission of Alpha channel data in the RGBA channel collected by developers, thereby rendering the subject with a transparent background on the streaming side to achieve a more immersive and realistic video scene.
For related API, please refer to enableAlphaChannelVideoEncoder
Enhancements
In the automatic low-light enhancement mode, the dynamic adjustment of brightness is now smoother, improving the user's visual experience.
For related API, please refer to setLowlightEnhancement
Optimized the upper limit of the expected publish and play bitrates for network speed testing, increasing it to 15 Mbps. Developers can check how well the audio and video quality matches the current network before publishing and playing streams to ensure stable call quality.
For related API, please refer to startNetworkSpeedTest
Note: The new interfaces [muteAllPlayAudioStreams], [muteAllPlayVideoStreams] and the old interfaces [muteAllPlayStreamAudio], [muteAllPlayStreamVideo] cannot be mixed.
New interfaces [muteAllPlayAudioStreams] and [muteAllPlayVideoStreams] are added to receive the audio and video data of all remote users when playing streams; at the same time, the [mutePlayStreamAudio] and [mutePlayStreamVideo] interfaces are used to individually control the specified streams.
After the old interfaces [muteAllPlayStreamAudio] and [muteAllPlayStreamVideo] are called, the receiving status of the specified stream cannot be controlled individually.
For related API, please refer to muteAllPlayAudioStreams, muteAllPlayVideoStreams, mutePlayStreamAudio, mutePlayStreamVideo
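A minimal Swift sketch of the new interfaces is shown below.

```swift
import ZegoExpressEngine

// Illustrative sketch: mute the audio of all played streams, then re-enable a single
// stream individually, which the older muteAllPlayStreamAudio interface did not allow.
func listenToOneStreamOnly(streamID: String) {
    let engine = ZegoExpressEngine.shared()
    engine.muteAllPlayAudioStreams(true)                   // stop receiving audio from all remote users
    engine.mutePlayStreamAudio(false, streamID: streamID)  // keep receiving this one stream
}
```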
Note: During playback, if the media stream type is modified, it will take effect the next time it is played.
When using a media player to play audio and video files, the [setPlayMediaStreamType] interface can be used to set it to Audio-only or Video-only, which does not consume audio and video decoding performance.
For related API, please refer to setPlayMediaStreamType
Bug Fixes
For related API, please refer to onAudioMixingCopyData
Release date: 2023-09-08
New Features
For related API, please refer to onPlayerRecvMediaSideInfo
Note:
If you need to use this function, please contact ZEGOCLOUD technical support.
Transcoding introduces additional delay; it is not recommended in live streaming scenarios where streams are played via RTC.
When playing streams via RTC, single-stream transcoding tasks can be triggered through preset transcoding templates, outputting transcoded streams at different resolutions.
This function can be used in scenarios such as live broadcasts, where viewers can choose streams of different resolutions based on network quality, terminal device, etc., to ensure smooth playback.
For related API, please refer to ZegoPlayerConfig > codecTemplateID
For related API, please refer to onLocalDeviceExceptionOccurred
For related API, please refer to onPublisherDummyCaptureImagePathError
For related API, please refer to enablePublishDirectToCdn
Note: The current official SDK does not include this function. If necessary, please contact ZEGOCLOUD technical support for special packaging.
Supports a balanced AI noise reduction mode. Compared with the original mode, with the same human-voice fidelity, the noise suppression effect is significantly improved and can reach a clean, noise-free, or non-disturbing level, while performance consumption increases slightly. Suitable for noisy (low signal-to-noise ratio) outdoor environments such as streets, roads, and markets.
For related API, please refer to ZegoANSModeAIBalanced
Enhancements
The life cycle of [setLogConfig] is expanded to the App life cycle, and its priority is higher than the configuration in [setEngineConfig].
For related API, please refer to setLogConfig, setEngineConfig
Optimized the retry rules when the app is sleeping. During room login and publishing/playing, the time the app spends sleeping is also counted toward the maximum allowed retry time.
Bug Fixes
Release date: 2023-08-16
Bug Fixes
Release date: 2023-08-09
New Features
Note: If you need to use this function, please contact ZEGOCLOUD technical support.
After the developer sets the smart cloud proxy mode, when publishing or playing streams via RTC or L3, the SDK first tries the direct-connection network mode. If the direct connection is unavailable and the current network is cellular, it stays in direct-connection mode and retries; if the direct connection is unavailable and the current network is not cellular, it switches to cloud proxy mode.
For related API, please refer to ZegoVideoBufferTypeNV12CVPixelBuffer
Note: If you need to use this function, please contact ZEGOCLOUD technical support.
Added low-frame-rate warning callbacks for encoding and hardware decoding. In 1v1 chat, live streaming, and other scenarios, developers can adjust the publishing resolution or trigger transcoding based on these callbacks.
For related API, please refer to onPlayerLowFpsWarning, onPublisherLowFpsWarning
Added the [onPlayerSyncRecvVideoFirstFrame] callback, which reports the first video frame received from the network on a non-UI thread. This callback is not affected by UI freezes and can measure the first video frame more accurately.
For related API, please refer to onPlayerSyncRecvVideoFirstFrame
The media player supports setting HTTP headers for network resources. Based on this configuration, developers can customize and restrict how network resources are accessed, strengthening resource security.
For related API, please refer to setHttpHeader
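A minimal Swift sketch is shown below; the header key and resource URL are placeholders and the exact Swift signature of setHttpHeader may vary by SDK version.

```swift
import ZegoExpressEngine

// Illustrative sketch: attach a custom HTTP header (for example an auth token) before
// loading a network resource.
func playProtectedNetworkResource(player: ZegoMediaPlayer, url: String, token: String) {
    player.setHttpHeader(["Authorization": "Bearer \(token)"])
    player.loadResource(url) { errorCode in
        if errorCode == 0 { player.start() }
    }
}
```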
Enhancements
Bug Fixes
Release date: 2023-07-13
New Features
Note:
Before using this function, you need to call the [setVideoConfig] interface to specify the video codecID as ZegoVideoCodecIDH264DualStream.
The aspect ratios of the resolutions set for the big stream and the small stream must be consistent; otherwise, calling the interface will result in an error.
When specifying the codecID as ZegoVideoCodecIDH264DualStream, you can separately set the resolution, frame rate, and bitrate for the big stream and the small stream.
For related API, please refer to ZegoExpressEngine > setVideoConfig, setPublishDualStreamConfig
In scenarios such as large-scale audio and video and game voice, the attenuation range [min, max] of the 3D sound effect distance can be set. When the distance is less than min, the volume does not decrease as the distance increases; when the distance is greater than max, the other party cannot be heard.
For related API, please refer to setReceiveRange, setAudioReceiveRange
Added error codes for the three modules of voice detection (1018xxxxx), audio and video for 10,000 people (1019xxxxx), and screen capture (1020xxxxx).
Enhancements
For related API, please refer to onNetworkQuality
For related API, please refer to submitLog
Bug Fixes
Release date: 2023-06-09
New Features
Note: The current official website SDK does not include this function. If necessary, please contact ZEGOCLOUD technical support.
In real-world or green-screen scenes, developers can use this function to blur the user's background or replace it with a custom picture background.
This function can be used in video conferences, 1v1 audio and video calls and other scenarios to help users better protect personal privacy and improve the fun of calls.
For related API, please refer to enableVideoObjectSegmentation
Added an enhanced KTV reverb effect to achieve a more concentrated and brighter KTV vocal effect. Compared with the previous KTV reverb sound effect, the Enhanced KTV reverb effect shortens the reverb duration and improves the dry-wet ratio.
The original KTV reverb effect is only suitable for users with obvious vocal defects, and the enhanced KTV reverb effect is suitable for most professional users and ordinary users.
For related API, please refer to setReverbPreset
Developers can realize 3D sound effects of local audio and online audio resources by setting the position and orientation of media players and sound effect players. This function can be used to set the sound effect of the item in the virtual scene, as well as the background music of the specified location, etc.
For related API, please refer to ZegoMediaPlayer > updatePosition, ZegoAudioEffectPlayer > updatePosition
For the video file being played by the media player, the developer can actively obtain information such as the resolution and frame rate of the video.
For related API, please refer to getMediaInfo
The maximum speed of the media player has been increased to 4x. For example, when the user is playing an audio and video file, if it has been set to play at 2x, it can be accelerated to 4x when long pressing the screen.
For related API, please refer to ZegoMediaPlayer > setPlaySpeed
Enhancements
Note: The current official website SDK does not include this function. If necessary, please contact ZEGOCLOUD technical support.
When the microphone partially overlaps with the human body area, the shape of the microphone in the overlapping area can be preserved to maintain the complete shape of the human body area.
For related API, please refer to enableVideoObjectSegmentation
Bug Fixes
Release date: 2023-05-11
New Features
Note: Please contact ZEGOCLOUD technical support if you need to use this feature.
When the publisher no longer sends new video frames, the player's screen would otherwise turn black. With this feature, developers can make the player's view stay on the last frame of the publisher's video, improving the user experience.
For related API, please refer to setEngineConfig
When publishing audio and video stream, monitor the release timing of the "first frame of audio" or "first frame of video" through [onPublisherSendAudioFirstFrame] and [onPublisherSendVideoFirstFrame] callbacks. This function can be used to count the time consumption of audio and video streaming, or update UI performance, etc.
For related API, please refer to onPublisherSendAudioFirstFrame, onPublisherSendVideoFirstFrame
When rendering audio and video through the media player, use the [firstFrameEvent] callback to monitor the release timing of the "first frame of audio" or "first frame of video" after rendering. This function can be used to count the time consumption of audio and video rendering, or update UI performance, etc.
Note: If you need to use this function, please contact ZEGOCLOUD technical support.
When using the external capture function, the NTP timestamp can be actively offset through an experimental API interface. This function can be used in KTV chorus, accompaniment, lyrics alignment, and other scenarios.
In the multi-room mode, the [switchRoom] interface is supported to quickly and conveniently realize the function of switching rooms.
For related API, please refer to switchRoom
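A minimal Swift sketch is shown below; the room IDs are placeholders and the token is only needed when login authentication is enabled.

```swift
import ZegoExpressEngine

// Illustrative sketch: switch rooms in multi-room mode; a new token is passed through
// ZegoRoomConfig when login authentication is enabled.
func switchRoom(from fromRoomID: String, to toRoomID: String, token: String?) {
    let config = ZegoRoomConfig()
    config.isUserStatusNotify = true
    if let token = token { config.token = token }

    ZegoExpressEngine.shared().switchRoom(fromRoomID, toRoomID: toRoomID, config: config)
}
```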
Note:
If you want to use this function, please contact ZEGOCLOUD technical support.
Calling this interface will take effect only after streaming is started.
Developers can input the sound to be cancelled (that is, the reference signal) through the [sendReferenceAudioPCMData] interface, and the SDK will cancel it directly.
This function can be used in custom capture and rendering scenarios. For example, if the user plays background music through the speaker while speaking into the microphone, and the background music does not go through custom or external rendering, this function can be used to cancel the echo of the background music contained in the published stream.
For related API, please refer to sendReferenceAudioPCMData
Enhancements
This optimization takes effect from version 3.5.0 and does not require additional interfaces.
Bug Fixes
Release date: 2023-04-23
Bug Fixes
Deleted
For specific instructions, please refer to App Store submission requirement starts April 25 and the Xcode 14 Release Notes.
For specific instructions, please refer to the Xcode 14 Release Notes.
Release date: 2023-04-14
New Features
Note:
To use this feature, please contact ZEGOCLOUD technical support.
Please configure geofencing information before creating the engine.
Restricts access to audio, video, and signaling data to a specific region, limiting users' access to audio and video services to that region in order to meet regional data privacy and security regulations.
For related API, please refer to setGeoFence
The status synchronization and 10,000-person range audio and video functions support actively publishing or playing a specified stream by stream ID. This enables gameplay where a stream is maintained regardless of distance, suitable for scenes where, when there is a large screen or a host in the virtual world, audiences anywhere in the virtual world can receive the large screen or the host's voice by playing that stream.
For copyrighted music protection in online players, the media player supports calling back the encrypted binary data while downloading; the developer decrypts it and returns it to the media player for playback. No files or cache files are generated during the process.
For related API, please refer to setBlockDataHandler
Note:
To use this feature, please contact ZEGOCLOUD technical support.
Developers who previously accessed the subject segmentation function through experimental APIs need to migrate to the formal API interfaces.
For related API, please refer to enableVideoObjectSegmentation
Supports dynamic switch flow control function, as well as setting flow control attributes.
For related API, please refer to enableTrafficControl, setMinVideoBitrateForTrafficControl, setMinVideoFpsForTrafficControl, setMinVideoResolutionForTrafficControl
Enhancements
The iOS interface for setting the adaptive adjustment interval range of the play-stream buffer was not aligned with the Android one, which was prone to usage errors. The iOS interface is now aligned with its Android counterpart.
For related API, please refer to setPlayStreamBufferIntervalRange
Removed some unnecessary memory allocations within the SDK and optimized the SDK's memory usage. Compared with the previous version, memory usage has decreased by about 10%.
Bug Fixes
Deleted
Note: There may be compatibility issues with interface replacement, please check the description of [onPlayerRecvSEI].
To avoid data synchronization exceptions, the [onPlayerRecvSEI] interface will be discontinued in versions 3.4.0 and above. If you need to collect SEI content from remote streams, please use the [onPlayerSyncRecvSEI] interface instead.
For related API, please refer to onPlayerSyncRecvSEI, onPlayerRecvSEI
Release date: 2023-03-10
New Features
In loudspeaker scenarios, the device's microphone is too close to the speaker, which can easily make the voice sound blurred or muffled. In this scenario, speech enhancement can effectively improve voice clarity and reduce the muffled feeling, so it is recommended to enable this function in loudspeaker scenarios.
To achieve the speech enhancement effect in loudspeaker scenarios, you can enable speech enhancement and set the enhancement level. It can be used in KTV loudspeaker scenarios to finely control the voice effect; the recommended enhancement level is 4.
For related API, please refer to enableSpeechEnhance
Media player supports the function of playing transparent special effect files through rendering alpha channel.
For related API, please refer to loadResourceWithConfig
Game voice supports custom settings for the speaking mode and listening mode, which can be used, for example, to block the voice of same-team players outside the range after joining a team.
For related API, please refer to setRangeAudioCustomMode
Note: To use this function, please contact ZEGOCLOUD technical support.
Note: When playing the transcoded stream through CDN, the stream must be published to CDN. To use this function, please contact ZEGOCLOUD technical support.
Single-stream transcoding refers to converting each original stream into transcoding streams with different encoding formats and different resolutions in the cloud. The transcoding template ID needs to be passed in to pull the transcoding stream. In live broadcast and other scenes, viewers can choose streams of different resolutions to watch based on the quality of the access network, terminal equipment, etc., to ensure the smoothness of playback.
Note:
At present, one mixing task can output up to four video streams with different resolutions, and only server mixing is supported.
To use this function, please contact ZEGOCLOUD technical support.
The same mixing task supports the output of multiple resolution video streams, which can be used to meet the transcoding requirements in the mixing scenario.
In the mixed-stream function, the operation content in the whiteboard can be converted into real-time video, and the whiteboard configuration information can be set, for example, the whiteboard ID, the whiteboard aspect ratio, and whether dynamic PPT loading is supported.
For related API, please refer to setWhiteboard, startMixerTask
A StandardVoiceCall scenario is added to the scenario-based audio and video configuration, applicable to 1v1 voice-only call scenarios.
For related API, please refer to setRoomScenario
Enhancements
Note: The calling time of [enableVideoSuperResolution] has changed; it can now only be called after [initVideoSuperResolution].
For related API, please refer to initVideoSuperResolution, uninitVideoSuperResolution
Deep AEC optimization for KTV scenes achieves the following:
The vocal sound quality in loudspeaker scenarios is greatly improved, making the human voice higher fidelity.
While cancelling echo, it effectively avoids occasional swallowed words or voice fluctuations.
In the application project, developers can start AppGroup configuration through the new [ZegoExpressEngine>setAppGroupID] and [ZegoReplayKitExt>setupWithDelegate:appGroup] interfaces to obtain better performance and stability.
For related API, please refer to setAppGroupID, setupWithDelegate
Release date: 2023-02-23
Bug Fixes
Release date: 2023-01-13
New Features
The range scene supports setting the publish/play stream mode, including whether to play streams within the range and whether to publish streams to the world.
For related API, please refer to enablePlayInRange, enablePublishToWorld
Video streams can now be layered using dual-stream encoding (H.264 DualStream). Compared with layered video coding (H.264 SVC), dual-stream encoding supports hardware encoding; the [ZegoVideoCodecIDH264DualStream] value has been added to [ZegoVideoCodecID].
For related API, please refer to ZegoVideoCodecIDH264DualStream
Note: This function is an internal test function. For access experience, please contact ZEGO business personnel.
Android, iOS, Windows, and macOS (currently only Apple silicon is supported) support live-scene segmentation and green-screen segmentation.
Internal rendering supports the alpha channel, so developers no longer need custom rendering to composite the subject with the background.
Enhancements
Note: The default size of customized signaling configuration is 1KB. If you need to expand to 4KB, please contact ZEGOCLOUD technical support for processing.
The SDK integration has been renamed from ZegoExpressEngine/Video to ZegoExpressEngine; see the Integrate the SDK documentation for more details.
Bug Fixes
Release date: 2022-12-09
New Features
Note: If you need to use this function, please contact ZEGOCLOUD technical support.
With this capability, you can implement interactive gameplay such as quickly moving and placing items or seizing items. Take a chair-grabbing game as an example:
First, you need to create a chair in your field of vision through the [createItem] interface in advance.
When you are near the chair, seize it through [bindItem] to obtain the right to use it.
If you only allow one user to preempt the chair, other users will not be able to preempt it until you release the permission through [unbindItem].
When you sit on the chair, you can update the status/command of the chair through [updateItemStatus] and [updateItemCommand] to notify other users that you are sitting on the chair.
For related API, please refer to createItem, bindItem, unbindItem, updateItemStatus, updateItemCommand
Note: If you need to use this function, please contact ZEGOCLOUD technical support.
In virtual scenes, because each scene differs in map size, audio and video interaction methods, and scale, configuration needs to be customized per scene. Since version 3.1.0, 10,000-person range audio and video and multi-person real-time status synchronization support specifying the scenario through the SDK interface using a template ID. The configuration items corresponding to a template ID can only be configured through the server API.
For related API, please refer to templateID
Note: If you need to use this function, please contact ZEGOCLOUD technical support.
When logging in to the scenario, users can take the Token parameter to verify the validity.
For related API, please refer to ZegoSceneParam > token, ZegoRangeScene > renewToken
For a variety of interactive scenarios of audio and video sources such as online KTV, watching movies together, watching competitions, video conferences, and online education, multi-source acquisition provides flexible and easy-to-use audio and video acquisition sources and channel management capabilities, greatly reducing developers' development and maintenance costs.
The multi-source capture capability shortens, optimizes, and standardizes the implementation path of common capabilities such as screen sharing and stream mixing. Since version 3.1.0, you no longer need to implement these complex capabilities through custom capture.
The main capabilities and characteristics are as follows.
Streaming channel supports setting or switching multiple audio and video sources.
Common capabilities such as screen sharing and mixing are supported.
Note: If you need to use this function, please contact ZEGOCLOUD technical support.
By setting the SDK's cloud proxy interface, all SDK traffic is forwarded through the cloud proxy server to communicate with the RTC service.
For related API, please refer to setCloudProxyConfig
Enhancements
The ZEGO self-developed dispatching system has been deeply optimized for areas with poor network quality.
Bug Fixes
Release date: 2022-11-25
Bug Fixes
Release date: 2022-11-15
Bug Fixes
Release date: 2022-10-31
Bug Fixes
For related API, please refer to enableHardwareDecoder
Release date: 2022-10-28
This version contains breaking changes, please refer to v3.0.0 Upgrade Guide for details.
New Features
Note: If you need to use this function, please contact ZEGOCLOUD technical support.
The new [enableVideoSuperResolution] interface supports super-resolution processing of a video stream to achieve better image quality. Super resolution is a technology in which the client multiplies the width and height of the played video stream in real time, for example from 640x360 to 1280x720.
For related API, please refer to enableVideoSuperResolution, onPlayerVideoSuperResolutionUpdate
Note: If you need to use this function, please contact ZEGOCLOUD technical support.
Building on the existing noise reduction for all non-human-voice sounds, the scene-based AI noise reduction function now supports noise reduction in music scenarios, restoring music sound quality by recognizing music and intelligently adjusting the noise reduction effect. The SDK performs music detection on the microphone input in real time and automatically adjusts the noise reduction level in sound card, playing-and-singing, or near-field music scenarios to ensure high-fidelity music quality.
Note: If you need to use this function, please contact ZEGOCLOUD technical support.
The SDK provides orderly, high-frequency, low latency, and large-scale status synchronization cloud services for virtual scenes, helping customers quickly achieve real-time information synchronization capabilities such as player location, action, and image. At the same time, 10000 users can be online simultaneously in a single scene.
In a large virtual world, users generally do not need to perceive distant scenes or remote users. ZEGO provides AOI (Area Of Interest) capabilities to reduce information outside the user's visible range and greatly reduce customer traffic costs, user traffic and performance consumption.
For related API, please refer to createRangeScene
Note: If you need to use this function, please contact ZEGOCLOUD technical support.
The 10000 person range audio and video function supports large-scale audio and video interaction. The cloud service dynamically routes users based on their location, maintaining an immersive interactive experience in a large virtual scene while significantly reducing customer audio and video costs.
Relying on the multi-person real-time status synchronization service, it automatically plays remote audio and video within the listening range according to user locations in the cloud and provides spatial audio effects. In a single scene, up to 10,000 users can turn on the microphone and camera at the same time; by default, a user plays the nearest 12 channels (configurable) of audio and video.
For related API, please refer to createRangeScene
In order to facilitate the rapid access of developers and reduce the access threshold for developers, the SDK provides a variety of preset scenarios. Developers can select the corresponding room mode [ZegoScenario] according to the desired scenario, and the SDK will automatically apply audio and video codecs, audio and video parameters, flow control strategies and other configurations suitable for the scenario, so as to quickly achieve the best effect in the scenario.
Currently supported scenarios include live show, KTV, standard 1v1 audio and video calls, high quality 1v1 audio and video calls, standard chat rooms, and high quality chat rooms.
For related API, please refer to setRoomScenario
The SDK can report whether the current device supports a specified video codec and codec backend, helping developers choose the encoder/decoder and mode to use for better results.
The hardware or software encoding support of the current encoder can be obtained through the [isVideoEncoderSupported] interface.
The hardware or software decoding support of the current decoder can be obtained through the [isVideoDecoderSupported] interface. Both interfaces cover three enumerated values: support hardware or software, support hardware, and support software.
Taking the Android side as an example, isVideoEncoderSupported(ZegoVideoCodecID.H265, ZegoVideoCodecBackend.HARDWARE) checks whether the current device supports H.265 hardware encoding; if so, it returns true.
For related API, please refer to isVideoEncoderSupported, isVideoDecoderSupported
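A minimal Swift sketch of the same check on iOS is shown below; the enum case spellings and the codecBackend argument label are assumptions that may differ slightly between SDK versions.

```swift
import ZegoExpressEngine

// Illustrative sketch: query whether the device can hardware-encode and hardware-decode
// H.265 before choosing a codec.
func checkH265HardwareSupport() {
    let engine = ZegoExpressEngine.shared()

    let encodeSupport = engine.isVideoEncoderSupported(.h265, codecBackend: .hardware)
    let decodeSupport = engine.isVideoDecoderSupported(.h265, codecBackend: .hardware)

    print("H.265 hardware encode: \(encodeSupport), hardware decode: \(decodeSupport)")
}
```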
Note: This function is enabled by default. If you need to disable it, please contact ZEGOCLOUD technical support.
If the app has access to the geographical location, the developer can choose whether to allow the ZEGO SDK to obtain the GPS information cached by the system; it is obtained by default. Developers who want to disable this function need to contact ZEGOCLOUD technical support.
Supports a callback each time the remote camera is turned on, after the SDK plays the stream and renders the first frame of the remote camera's video data. Developers can use this callback to measure the first-frame time or update the playback UI components.
For related API, please refer to onPlayerRenderCameraVideoFirstFrame
Enhancements
Note: If you need to use this function, please contact ZEGOCLOUD technical support.
It is optimized for 1v1 call scenarios and is applicable to pure RTC scenarios.
The spatial audio capability has been optimized so that users can distinguish audio sources in front of and behind them, achieving a better sense of immersion.
The AGC (automatic gain control) algorithm has been optimized so that an excessively high capture volume no longer causes distorted sound.
The SDK optimizes the internal strategy. In the audio and video scenarios, it supports a minimum downlink of 50 kbps without congestion, ensuring a better experience in extremely weak networks.
Bug Fixes
Deleted
Deprecated the three scenarios [General], [Communication], and [Live] in the [ZegoScenario] enumeration.
Since version 3.0.0, bitcode is no longer supported in the iOS SDK. Please refer to Xcode 14 Release Notes for more details.
Release date: 2022-09-09
New Features
Because of the angle, resolution, rotation, and other characteristics of mobile cameras, developers previously needed to do many complex adaptations.
The SDK now encapsulates these configurations and provides simple mode selection: in addition to the original custom mode, a fixed-ratio mode, an adaptive mode, and an alignment mode have been added, effectively reducing developers' integration cost.
For related API, please refer to setAppOrientationMode
Enhancements
For the convenience of cross-platform framework developers, we have attached the C++ interface header files to the Objective-C SDK; since the C++ API is in the form of Header-Only, only using the Objective-C API will not increase the package size after integrating the SDK; in addition, do not use both sets of APIs at the same time to avoid confusion of the SDK life cycle.
Bug Fixes
Release date: 2022-08-09
New Features
In an intranet or firewall scenario, you can interact with the public network through a proxy server by setting the proxy server address via [setEngineConfig], ensuring that ZEGO's cloud-based RTC service works normally. Currently, only the SOCKS5 protocol is supported.
For related API, please refer to setEngineConfig
Note: Low-light enhancement uses OpenGL by default, if you need to specify Metal, please contact ZEGOCLOUD technical support.
Note: It is recommended to set a GOP of about 2 s, and each I-frame must carry the SPS and PPS, placed at the beginning. When calling [enableCustomVideoCapture], the type must be set to [ZegoVideoBufferTypeEncodedData].
For related API, please refer to enableCustomVideoCapture
Add [setAudioDeviceMode] to dynamically modify the audio mode of the device. This configuration determines the volume mode, preprocessing mode and Mic occupation logic of the device. You can choose according to specific scenarios.
For related API, please refer to setAudioDeviceMode
Note: 1. When using the media player to play the accompaniment, you also need to use the [enableAux] interface. 2. After the [enableAlignedAudioAuxData] interface is enabled, the media player's data will no longer be published.
If you need to tune the accompaniment and align it with the vocals in a recording-and-singing scenario, first mix the accompaniment into the main channel through the [enableAux] interface, then turn on the switch through the [enableAlignedAudioAuxData] interface, and finally obtain the media player's PCM data through the [onAlignedAudioAuxData] callback. At that point, the data collected by the media player and the microphone are aligned, and the data frames correspond one to one.
For related API, please refer to enableAlignedAudioAuxData, onAlignedAudioAuxData, enableAux
Since the SDK supports feature trimming, some features may be trimmed; you can use this function to quickly determine whether the current SDK supports the specified features.
For related API, please refer to isFeatureSupported
Enhancements
When the remote user is abnormal, [onNetworkQuality] will call back the quality unknown state (ZegoStreamQualityLevelUnknown state) every 2s. When the user remains in this state for 8s, the remote user is considered to be abnormally disconnected, and the quality abnormal state (ZegoStreamQualityLevelDie state) will be called back.
For related API, please refer to onNetworkQuality
The push-pull stream quality callback will call back the result with the worst quality every 3s. When serious jitter or packet loss occurs during the period, the poor stream quality can be immediately reported.
For related API, please refer to onPlayerQualityUpdate, onPublisherQualityUpdate, onNetworkQuality
Optimize the log reporting strategy, improve log upload efficiency.
The newly improved harmonic detection algorithm in AGC had a crash issue, so the SDK has rolled back to the previous version of the harmonic detection algorithm.
Bug Fixes
Release date: 2022-07-14
Bug Fixes
Release date: 2022-07-08
New Features
The default position update frequency of the SDK is changed from 1s to 100ms, which is enough to give most developers a smooth distance-attenuation effect when using range voice, making the attenuation sound smoother and more natural.
If you need to match your actual business requirements more closely, you can call the [setPositionUpdateFrequency] interface to adjust the frequency yourself.
For related API, please refer to ZegoRangeAudio > setPositionUpdateFrequency
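A minimal Objective-C sketch of adjusting the position update frequency on a range audio instance. Selector names follow the APIs referenced above; exact signatures and valid value ranges should be confirmed against the API reference.
```objc
#import <ZegoExpressEngine/ZegoExpressEngine.h>

// Create the range audio instance from the engine, then lower the position
// update frequency to 100 ms (the new default) instead of 1 s.
ZegoRangeAudio *rangeAudio = [[ZegoExpressEngine sharedEngine] createRangeAudio];
[rangeAudio setPositionUpdateFrequency:100];
```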
Note: The [setLowlightEnhancement] interface should be called after calling the [createEngine] interface to create an engine.
When the environment around the stream-publishing user is dark, or the camera frame rate is set high so that the live video appears dim and the subject cannot be displayed or recognized clearly, you can call the [setLowlightEnhancement] interface to enable low-light enhancement and brighten the video. The low-light enhancement function includes three modes: 1: disable low-light enhancement (default); 2: enable low-light enhancement; 3: automatically switch low-light enhancement on/off.
You can choose the mode that fits your business scenario: switch between modes 1 and 2 when you want to decide yourself whether enhancement is needed, or enable mode 3 to let the SDK judge the lighting environment and turn low-light enhancement on or off automatically.
For related API, please refer to setLowlightEnhancement
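A hedged sketch of the automatic mode (mode 3 above). The enum member and channel parameter are assumed from recent API references and may differ slightly in your SDK version.
```objc
#import <ZegoExpressEngine/ZegoExpressEngine.h>

// Let the SDK decide when to brighten the picture on the main publish channel.
// Call after createEngine; enum/selector names assumed, check the API reference.
[[ZegoExpressEngine sharedEngine] setLowlightEnhancement:ZegoLowlightEnhancementModeAuto
                                                 channel:ZegoPublishChannelMain];
```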
When calling the [startMixerTask] interface to mix streams, you can set [cornerRadius] through the [ZegoMixerInput] class to give the video border rounded corners. The unit of [cornerRadius] is px, and the value must not exceed half of the shorter side (width or height) of the video.
For related API, please refer to startMixerTask
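A rough Objective-C sketch of rounding one input's corners in a mix task. The task ID, stream IDs, output URL, layout rectangle, and radius are illustrative only; initializer and property names are assumed from the API reference.
```objc
#import <ZegoExpressEngine/ZegoExpressEngine.h>

ZegoMixerTask *task = [[ZegoMixerTask alloc] initWithTaskID:@"demo-mix-task"];

// One video input with a 20 px corner radius (must not exceed half the shorter side).
ZegoMixerInput *input = [[ZegoMixerInput alloc] initWithStreamID:@"streamA"
                                                     contentType:ZegoMixerInputContentTypeVideo
                                                          layout:CGRectMake(0, 0, 360, 640)];
input.cornerRadius = 20;

task.inputList = @[input];
task.outputList = @[[[ZegoMixerOutput alloc] initWithTarget:@"rtmp://example.com/live/mixed"]];

[[ZegoExpressEngine sharedEngine] startMixerTask:task callback:^(int errorCode, NSDictionary *extendedData) {
    // errorCode == 0 means the mix task started successfully.
}];
```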
Note: If you want to control the stream-playing mode from the cloud by more criteria such as region and user, please contact ZEGOCLOUD technical support for related configuration.
[ZegoStreamResourceMode] adds CDN_PLUS as a new ZegoResourceType. You can enable CDN Plus stream playing yourself based on your stream criteria. CDN Plus stream playing is cost-effective: its quality is higher than ordinary CDN stream playing at a similar price.
For related API, please refer to startPlayingStream
Enhancements
Added 1002074, 1002075, 1002076, 1002077, 1002078, 1002079, 1002080 and other error codes. After enabling mandatory login authentication, if the Token is incorrect, these error codes will be returned. For details, please refer to Error codes.
Bug Fixes
For related API, please refer to enableCustomVideoProcessing
Fixed an issue where SDK versions 2.20.0 ~ 2.20.2 might fail to play a stream over L3 if the stream was published by SDK version 2.15.0 or earlier.
Release date: 2022-06-20
Bug Fixes
Release date: 2022-06-18
Bug Fixes
Release date: 2022-06-09
New Features
After calling the [createEngine] interface to initialize the engine and the [createMediaPlayer] interface to create a media player, you can call the [setActiveAudioChannel] interface to set the left channel, right channel or all channels. When initialized, the media player defaults to all channels.
For related API, please refer to setActiveAudioChannel
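A small sketch of restricting playback to one channel. The enum member names are assumed from the API reference; by default the player uses all channels.
```objc
#import <ZegoExpressEngine/ZegoExpressEngine.h>

// Create a media player and play only the left channel of the loaded resource.
ZegoMediaPlayer *player = [[ZegoExpressEngine sharedEngine] createMediaPlayer];
[player setActiveAudioChannel:ZegoMediaPlayerAudioChannelLeft];
```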
Note: You must wait for the media player to finish playing before the API call takes effect.
Call the [createEngine] interface to initialize the engine, call the [createMediaPlayer] interface to create a media player, and call [clearView] to clear the last remaining frame.
For related API, please refer to ZegoMediaPlayer > clearView
Note: When the frame rate set by [setVideoConfig] is less than the minimum expected frame rate of [enableCameraAdaptiveFPS], the frame rate value set by [setVideoConfig] will be used. Due to the different hardware and algorithm strategies of different mobile phone manufacturers, the effect of this interface is different on different models or on the front and back cameras of the same model.
When the frame rate set by the user on the streaming end is high, and the ambient light is low and the subject cannot be displayed or recognized normally, you can call the [enableCameraAdaptiveFPS] interface to automatically reduce the frame rate within a certain range to increase exposure time, so as to improve the brightness of the video picture. This function is often used in live broadcast scenes with high exposure requirements. The [enableCameraAdaptiveFPS] interface needs to be called after calling the [createEngine] interface to initialize the engine and before starting the camera.
For related API, please refer to enableCameraAdaptiveFPS
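A hedged sketch of enabling adaptive camera FPS. The parameter names, order, and the 15–25 fps range below are assumptions for illustration; call it after createEngine and before the camera starts, per the note above.
```objc
#import <ZegoExpressEngine/ZegoExpressEngine.h>

// Allow the camera to drop as low as 15 fps (but no higher than 25 fps)
// in dim environments so exposure time can be lengthened.
[[ZegoExpressEngine sharedEngine] enableCameraAdaptiveFPS:YES
                                                   minFPS:15
                                                   maxFPS:25
                                                  channel:ZegoPublishChannelMain];
```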
Note: The length of the image address must not exceed 1024 bytes, otherwise error code 1005034 is returned; the image format must be JPG or PNG, otherwise error code 1005035 is returned; the image must not exceed 1 MB, otherwise error code 1005036 is returned.
You can set the image address through the [ZegoMixerImageInfo] type parameter of the [startMixerTask] interface to set the content of a single input stream as an image, which is used to replace the video, that is, when the image is used, the video is not displayed. This function is mainly used in a video call when a video caller may need to temporarily turn off the camera to display the image, or in a call between a video caller and a voice caller when the image of the voice caller may need to be displayed.
For related API, please refer to startMixerTask
NOTE: To use this feature, please contact ZEGOCLOUD technical support.
When you find that a stream publisher violates the regulations, you can call the [mutePlayStreamVideo] interface to stop players from pulling the video stream of the violating user and ask that user to make corrections. Using this function also avoids the compliance risk of the player's video view still showing the last rendered frame.
NOTE: To use this feature, please contact ZEGOCLOUD technical support.
A new volume gain method is provided, and you can choose an appropriate volume gain method according to actual needs.
Note: To use this function, please contact ZEGOCLOUD technical support to activate the background service.
When calling the [startPublishingStream] API to start streaming, you can set the [ZegoStreamCensorshipMode] parameter to conduct automatic audio and video censorship at the stream level, and automatically identify sensitive content, thus reducing the integration difficulty and business maintenance costs.
For related API, please refer to startPublishingStream
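A minimal sketch of publishing with stream-level censorship enabled. The property name streamCensorshipMode and the enum member below are assumptions based on the description above; the backend service must be activated by ZEGOCLOUD technical support first.
```objc
#import <ZegoExpressEngine/ZegoExpressEngine.h>

// Publish stream "streamA" with automatic audio and video censorship.
ZegoPublisherConfig *config = [[ZegoPublisherConfig alloc] init];
config.streamCensorshipMode = ZegoStreamCensorshipModeAudioAndVideo; // assumed enum member
[[ZegoExpressEngine sharedEngine] startPublishingStream:@"streamA"
                                                  config:config
                                                 channel:ZegoPublishChannelMain];
```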
Enhancements
From v2.20.0, there is no longer any API difference between the Express-Video SDK and the Express-Audio SDK, so you can easily switch from the Video SDK to the Audio SDK and vice versa at any time. The only difference is that in the Audio SDK, some video-related APIs (such as video encoding parameter settings or the ZegoCanvas parameter in startPlayingStream) have no effect after being set, and no error is reported. Upgrading from an old version may introduce a few incompatibilities that need to be fixed; please refer to the FAQ How to solve the compilation error after upgrading to Express v2.20.0 or above?.
It indicates that the message input length exceeds the limit. When this error code appears, please check the input content length or contact ZEGOCLOUD technical support to extend the message content length.
When the copyrighted music is initialized, the authentication fails because the AppSign or Token is not set, and this error code will appear. At this time, if you use AppSign for authentication, please fill in AppSign when initializing the SDK; if you use Token authentication, before calling the [initCopyrightedMusic] interface, please call the [loginRoom] interface and pass in Token for authentication.
For related API, please refer to initCopyrightedMusic, loginRoom
Bug Fixes
Release date: 2022-05-11
New Features
When pushing CDN directly, without changing the push mode, the SDK pulls the stream from the customer's CDN source site, distributes the audio and video content to the audience through L3, and controls the source site resources through [ZegoResourceType]. This function is often used in live broadcast scenarios.
For related API, please refer to startPlayingStream
Note: Currently, only RTC scenarios are supported; this is invalid for direct CDN publishing and CDN re-publishing scenarios.
Starting from version 2.19.0, SEI (Media Supplemental Enhancement Information) can be sent synchronously with audio frames in audio and video scenarios. This function is often used in video scenarios where SEI is strongly related to audio, such as real-time KTV.
In versions before 2.19.0, the SEI data was sent along with the video frame data. Generally, the video frame rate is much lower than the audio frame rate, resulting in insufficient SEI accuracy/frequency in mixed stream alignment and accompaniment alignment scenarios.
For related API, please refer to onPlayerRecvAudioSideInfo, sendAudioSideInfo
Enhancements
Bug Fixes
Release date: 2022-04-13
Bug Fixes
Release date: 2022-04-09
New Features
Note: AI noise reduction will currently cause great damage to the music collected by the microphone, including the sound of people singing through the microphone. To use this feature, please contact ZEGOCLOUD technical support.
AI noise reduction means that the SDK performs noise reduction on the sound collected by the microphone. In addition to handling ordinary steady-state noise, it also suppresses non-steady-state noise, mainly non-human sounds such as mouse clicks, keyboard typing, tapping, air conditioners, clattering dishes, noisy restaurants, ambient wind, coughing, and blowing. The AI noise reduction mode is set through the ZegoANSMode parameter of the [setANSMode] interface, and the mode can be adjusted in real time.
This function is often used in calls, conferences, and other scenarios without background music, such as ordinary voice chat rooms, voice conferences, in-game team voice chat, and one-to-one video calls.
For related API, please refer to setANSMode
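A short sketch of switching to the AI noise reduction mode. The exact enum member for the AI mode is an assumption; check ZegoANSMode in the API reference for the value that matches your SDK version.
```objc
#import <ZegoExpressEngine/ZegoExpressEngine.h>

// Enable noise suppression, then select the AI mode for a voice-only scenario.
[[ZegoExpressEngine sharedEngine] enableANS:YES];
[[ZegoExpressEngine sharedEngine] setANSMode:ZegoANSModeAI]; // assumed enum member
```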
After playing the sound effect, you can call the [setPlaySpeed] API in [ZegoAudioEffectPlayer] class to set four playback speeds for the sound effect (the local playback speed and the streaming speed will be set at the same time), which are 0.5x, 1.0x, 1.5x and 2x respectively, and the default is the original speed (1.0x).
For related API, please refer to ZegoAudioEffectPlayer > setPlaySpeed
QUIC-based publishing and playing is mainly used to improve the unstable quality of CDN live streaming in weak network environments, although the improvement is limited; for high-quality, low-latency live streaming, the low-latency live streaming service is recommended instead. Currently, QUIC publishing and playing is supported with two CDN live streaming products: Tencent Cloud and Wangsu.
Configure the publishing protocol and QUIC version through the [ZegoCDNConfig] parameter of the [enablePublishDirectToCDN] interface. To play a custom CDN stream over QUIC, configure the playing protocol and QUIC version through the [ZegoPlayerConfig] parameter of [startPlayingStream].
For related API, please refer to enablePublishDirectToCDN
After the push stream is initiated, you can monitor the push stream status in real time through the [onPublisherStreamEvent] callback, which will return the current push stream address, resource type, and protocol-related information.
After initiating the streaming, you can monitor the streaming status in real time through the [onPlayerStreamEvent] callback, which will return the current streaming address, resource type, and protocol-related information.
For related API, please refer to onPublisherStreamEvent, onPlayerStreamEvent
Call startMixerTask to start or update a mixing task. It supports setting the mixed-stream background image and the mixed-stream input volume through [backgroundUrl] and [inputVolume] respectively.
For related API, please refer to startMixerTask
The [loginRoom] interface adds a [callback] parameter, which supports returning the login room result from [callback].
The [logoutRoom] interface adds a [callback] parameter, which supports returning the result of exiting the room from [callback].
For related API, please refer to loginRoom, logoutRoom
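A hedged sketch of the new callback-based room login and logout. Selector shapes and the callback signature are assumed from the description above; a real app would also populate ZegoRoomConfig (for example with a Token) as needed.
```objc
#import <ZegoExpressEngine/ZegoExpressEngine.h>

ZegoUser *user = [ZegoUser userWithUserID:@"user1"];
ZegoRoomConfig *roomConfig = [[ZegoRoomConfig alloc] init];

// Log in and observe the result directly from the callback.
[[ZegoExpressEngine sharedEngine] loginRoom:@"room1" user:user config:roomConfig
                                   callback:^(int errorCode, NSDictionary *extendedData) {
    // errorCode == 0 means the login succeeded.
}];

// Later, log out and observe the result the same way.
[[ZegoExpressEngine sharedEngine] logoutRoom:@"room1" callback:^(int errorCode, NSDictionary *extendedData) {
    // errorCode == 0 means the logout succeeded.
}];
```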
When the connection state of the room changes, the [onRoomStateChanged] callback will be triggered, and the [ZegoRoomStateChangedReason] parameter will provide more detailed connection state and the reason for the state change.
For related API, please refer to onRoomStateChanged
Enhancements
Call the startMixerTask interface, use the [border] property in [ZegoFontStyle] to set whether the font has a border, and use the [borderColor] property to set the font border color.
For related API, please refer to startMixerTask
Added error code 1005000, indicating that the stream mixing service has not been activated. When this error code occurs, please contact ZEGOCLOUD technical support to activate the stream mixing service.
For related API, please refer to startMixerTask
Bug Fixes
Release date: 2022-03-11
Bug Fixes
Release date: 2022-03-09
New Features
Added the [setMinVideoFpsForTrafficControl] and [setMinVideoResolutionForTrafficControl] interfaces, which can be used to set the minimum video frame rate and resolution by calling the interface when the user's network is poor and the flow control is turned on, helping the user to comprehensively control the video display effect.
For related API, please refer to setMinVideoFpsForTrafficControl, setMinVideoResolutionForTrafficControl
The default detection period for steady-state voice is 3 seconds. If users need to modify the default detection period, they can customize the detection period parameters through the [startAudioVADStableStateMonitor] interface.
For related API, please refer to startAudioVADStableStateMonitor
Added the enum ZegoRangeAudioModeSecretTeam (secret team mode). In this mode, users in the same room can not only talk with members of their own team, but also hear all users within the audio reception range who are speaking in world mode, for example in a space werewolf game scenario.
For related API, please refer to setRangeAudioMode
Note: This function is only used in the development stage, please do not enable this function in the online version.
Added the [enableDebugAssistant] interface. The developer calls this interface to enable the debugging assistant function. The SDK will print the log to the console, and the UI will pop up an error message when other functions of the SDK are called abnormally.
For related API, please refer to enableDebugAssistant
Enhancements
For version 2.17.0 and above, AppSign can be left blank or omitted when creating the engine; in that case you must pass in a Token when logging in to the room, and the real-time audio and video functions become available after authentication succeeds. For details, please refer to the user privilege control documentation.
For versions below 2.17.0, pass in AppSign when creating the engine and use the real-time audio and video functions after authentication succeeds.
Bug Fixes
Release date: 2022-02-10
Bug Fixes
Release date: 2022-01-26
Bug Fixes
Release date: 2022-01-20
Bug Fixes
Release date: 2022-01-14
New Features
The [muteUser] interface has been added to the range audio (game voice) module. After initializing range audio with [createRangeAudio], local users can call [muteUser] to choose whether to receive audio data from a specified remote user.
This function is often used in game scenarios, for example when the speaker is blocked by a wall and the listener should not hear the sound.
For related API, please refer to muteUser
The [onPlayerQualityUpdate] callback adds a [mos] parameter, which indicates the rating of the streaming quality. When developers are more concerned about audio quality, they can use this parameter to know the current audio quality.
For related API, please refer to onPlayerQualityUpdate
Note: Currently only specific video encoders support this function, if you want to use it, please contact ZEGOCLOUD technical support.
Developers can call the [setCustomVideoCaptureRegionOfInterest] interface to set the region of interest (ROI) of the custom video capture encoder for the specified push channel. Under the same bit rate, the image quality in the ROI region is clearer.
This feature is often used in scenarios such as remote control and face detection.
For related API, please refer to setCustomVideoCaptureRegionOfInterest
NOTE: To use this feature, please contact ZEGOCLOUD technical support.
In order to allow the streaming end to push higher-quality video streams in a weak network environment, the SDK supports streaming based on the rtmp over quic protocol.
This function is often used in single-host live CDN and live PK scenarios.
NOTE: To use this feature, please contact ZEGOCLOUD technical support.
Version 2.15.0 and earlier: when the SDK uses [startPlayingStream] to play an H.265-encoded stream and the decoding frame rate is insufficient because of limited hardware performance on the playing end, the SDK cannot downgrade automatically; the user has to stop playing the H.265-encoded stream and re-play the H.264-encoded stream instead.
Version 2.16.0 and above: an automatic downgrade policy for H.265 playback has been added. When [startPlayingStream] is used to play an H.265-encoded stream, the SDK evaluates the hardware performance of the playing end based on the stream quality; if the decoding frame rate is insufficient, it automatically downgrades to the H.264-encoded stream.
For related API, please refer to startPlayingStream
Enhancements
ZEGO provides a new basic beauty function, showing users a good skin condition and creating a natural beauty effect. Developers need to call the [startEffectsEnv] interface to initialize the beauty environment before pushing the stream, and then call the [enableEffectsBeauty] interface to enable the beauty function. Through the [setEffectsBeautyParam] interface, you can adjust the degree of whitening, smoothing, sharpening, and ruddy as needed to achieve basic beauty capabilities.
This function is often used in video calls, live broadcasts and other scenarios.
For related API, please refer to startEffectsEnv, stopEffectsEnv, enableEffectsBeauty, setEffectsBeautyParam
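A rough sketch of the basic beauty flow described above. The ZegoEffectsBeautyParam property names and the 0–100 intensity values are assumptions for illustration; confirm them in the API reference.
```objc
#import <ZegoExpressEngine/ZegoExpressEngine.h>

// Initialize the beauty environment before publishing, then enable the effect.
[[ZegoExpressEngine sharedEngine] startEffectsEnv];
[[ZegoExpressEngine sharedEngine] enableEffectsBeauty:YES];

// Tune whitening, smoothing, sharpening, and ruddiness (property names assumed).
ZegoEffectsBeautyParam *param = [[ZegoEffectsBeautyParam alloc] init];
param.whitenIntensity   = 50;
param.smoothenIntensity = 50;
param.sharpenIntensity  = 50;
param.rosyIntensity     = 50;
[[ZegoExpressEngine sharedEngine] setEffectsBeautyParam:param];
```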
The [onVideoFrame] and [onVideoFramePixelBuffer] callbacks of the media player support returning the timestamp corresponding to the video frame.
When calling the [getNetworkTimeInfo] interface to obtain synchronized network time information, the SDK will regularly update the NTP time to reduce the error of the obtained NTP time.
For related API, please refer to getNetworkTimeInfo
Bug Fixes
Deleted
The old beauty function was relatively simple and did not meet developers' expectations. Therefore, the [enableBeautify] interface is deprecated in version 2.16.0 and above; please use the [enableEffectsBeauty] interface instead. The [setBeautifyOption] interface is also deprecated; please use [setEffectsBeautyParam] instead.
For related API, please refer to enableBeautify, enableEffectsBeauty, setBeautifyOption, setEffectsBeautyParam
Release date: 2021-12-09
New Features
Added the [setCustomVideoCaptureDeviceState] interface. When using custom video capture, developers can set the capture device status of the specified channel for custom video capture, and the remote can get the state change of the push stream through the [onRemoteCameraStateUpdate] callback. This function is often used in live show scenes.
For related API, please refer to setCustomVideoCaptureDeviceState
The media player adds a sound level and frequency spectrum callback, together with switch interfaces that control whether the callbacks are enabled and how often they fire, so you can obtain the media player's current sound level and spectrum. When playing resources through the media player, such as watching a movie together or in a game chat room, this can be used to drive sound-wave or spectrum animations and make the experience more engaging.
After creating the media player, call the [enableSoundLevelMonitor] interface to enable sound monitoring. After enabling it, you can use the [onMediaPlayerSoundLevelUpdate] callback to monitor the sound changes.
After creating the media player, call the [enableFrequencySpectrumMonitor] interface to enable spectrum monitoring. After enabling it, you can use the [onMediaPlayerFrequencySpectrumUpdate] callback to monitor the spectrum changes.
For related API, please refer to enableSoundLevelMonitor, enableFrequencySpectrumMonitor
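A minimal sketch of turning on both monitors. The millisecond parameter name and the 500 ms interval are assumptions; the resulting data arrives through the onMediaPlayerSoundLevelUpdate and onMediaPlayerFrequencySpectrumUpdate callbacks on the media player's event handler.
```objc
#import <ZegoExpressEngine/ZegoExpressEngine.h>

// Create the player, then enable sound level and spectrum monitoring at 500 ms intervals.
ZegoMediaPlayer *player = [[ZegoExpressEngine sharedEngine] createMediaPlayer];
[player enableSoundLevelMonitor:YES millisecond:500];
[player enableFrequencySpectrumMonitor:YES millisecond:500];
```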
When using the custom video capture function, calling the [sendSEISyncWithCustomVideo] interface lets you send media supplemental enhancement information (SEI) alongside the published video stream, so that additional information stays synchronized with the current video frame. This function is often used when the playback content must be tightly synchronized with the video frame, such as video karaoke where the video must stay in sync with the lyrics.
For related API, please refer to sendSEISyncWithCustomVideo
Added support for omnidirectional virtual stereo. A monophonic signal is processed algorithmically to simulate a spatially immersive sound. This function is often used in KTV scenes to make the singing sound more three-dimensional.
When the [enableVirtualStereo] interface is called and the angle parameter is set to -1, it means that the stereo effect is omnidirectional stereo.
For related API, please refer to enableVirtualStereo
Through the [onLocalDeviceExceptionOccurred] callback, you can set the device type to be detected, such as camera, speaker, microphone, etc. Developers can handle the error callbacks of different device types accordingly.
For related API, please refer to onLocalDeviceExceptionOccurred
Developers can port iOS applications to macOS through the Mac Catalyst framework.
Enhancements
When the custom video capture video frame data type [ZegoVideoBufferType] is the PixelBuffer type, it is supported to call the [setCustomVideoCaptureRotation] interface to set the custom picture rotation angle.
For related API, please refer to setCustomVideoCaptureRotation
Mixed-stream output video configuration [ZegoMixerOutputVideoConfig] Added encodeProfile and encodeLatency parameters, which are used to set the mixed-stream output video encoding specifications and the mixed-stream output video encoding delay respectively.
Logging in to the room causes the network test to stop. As the network test consumes bandwidth, please do it before logging in to the room.
If the user is in the server blacklist when logging in to the room, this error code will be returned, indicating that the room is forbidden to log in.
When using the SDK to lower the latency of live streaming, this error code will be returned if you have not subscribed to the low latency live streaming service.
Bug Fixes
Deleted
In order to allow developers to intuitively understand the type of abnormal device and the specific abnormal situation, the [onDeviceError] callback is abolished in 2.15.0 and above. Please use the [onLocalDeviceExceptionOccurred] callback instead.
For related API, please refer to onLocalDeviceExceptionOccurred
Release date: 2021-11-16
New Features
When developers need to distribute instructions for remote control, cloud gaming, and similar scenarios, they can use real-time signaling to deliver data from the publisher with low latency.
For related API, please refer to createRealTimeSequentialDataManager
Note: To use this function, please contact ZEGOCLOUD technical support.
Support the copyright music function to obtain copyrighted songs or accompaniment resources, and combine with the media player for local playback control. It can be used in chorus or background music scenes such as online KTV and language chat rooms.
Added a new alarm callback for insufficient H.265 decoding performance, which is used to prompt the user whether to perform downgrade processing in a scenario where the stream is pulled through the CDN. If the developer receives a low frame rate callback [onPlayerLowFpsWarning] during the process of pulling the H.265 stream, it is recommended that the developer stop pulling the H.265 stream and switch to the H.264 stream.
H.265 codec error notifications have been added to the push stream state callback [onPublisherStateUpdate] and the pull stream state callback [onPlayerStateUpdate].
For related API, please refer to onPlayerLowFpsWarning, onPublisherStateUpdate, onPlayerStateUpdate
Note: To use this function, please contact ZEGOCLOUD technical support.
Allows developers to customize the callback notifications for the arrival of audio and video frames, including the first audio frame arrival callback, the first video frame arrival callback, and the first video frame rendering callback.
For related API, please refer to callExperimentalAPI
The media player has a new [loadResourceWithPosition] interface, which supports specifying the start playback progress when loading media resources, in milliseconds.
For related API, please refer to ZegoMediaPlayer > loadResourceWithPosition
Note: This function is only valid when the custom video capture frame data type [bufferType] is set to GLTexture2D.
After receiving the [onStart] callback, the developer can call the [setCustomVideoCaptureRotation] interface to set the clockwise rotation angle of the custom capture screen of the specified push channel.
For related API, please refer to setCustomVideoCaptureRotation
The SDK supports setting the camera focus and exposure mode, which is often used in antique-appraisal live streaming scenes to zoom in and focus on the details of an object.
After starting the local preview, you can call the [isCameraFocusSupported] interface to check whether the camera supports focus. Through the [setCameraFocusPointInPreview] and [setCameraExposurePointInPreview] interfaces, you can set the focus point and exposure point in the preview view (both settings become invalid every time the camera restarts capture and need to be set again). Call the [setCameraFocusMode] and [setCameraExposureMode] interfaces to set the camera focus mode and exposure mode respectively.
For related API, please refer to isCameraFocusSupported, setCameraFocusMode, setCameraExposureMode
This function is often used in scenes that require mixed stream alignment, such as KTV. When playing at the streaming end, use the [setPlayStreamsAlignmentProperty] interface to control whether the real-time audio and video streams need to be accurately aligned. If necessary, all the streams that contain precise alignment parameters will be aligned; if not, all streams are not aligned.
For related API, please refer to setPlayStreamsAlignmentProperty
Two new audio device modes, GENERAL3 and COMMUNICATION4, have been added. In GENERAL3 mode, the system's audio pre-processing is turned off, the microphone is always occupied, and the media volume is used throughout. In COMMUNICATION4 mode, the system's audio pre-processing is turned on, the microphone is occupied only while on the mic and released when off the mic, and the call volume is used throughout.
For related API, please refer to setEngineConfig
This function can be used to determine whether someone is speaking into the microphone within a certain period of time, and is used to detect whether the audio data after collection or audio pre-processing is human voice or noise.
For related API, please refer to startAudioVADStableStateMonitor, stopAudioVADStableStateMonitor, onAudioVADStateUpdate
Note: To use this function, please contact ZEGOCLOUD technical support.
Support generating Tokens with a second secret key to enable smooth migration of the ServerSecret. Two ServerSecrets can be enabled at the same time through backend configuration; if one ServerSecret is leaked, you can migrate smoothly to the other.
Note: If you need to use the Token to add to the blacklist function, please contact ZEGOCLOUD technical support.
Tokens can be added to a blacklist: to prevent an old Token from being abused after a new one is issued, Tokens can now be blacklisted. A blacklisted Token cannot be used under the AppID even within its validity period.
Tokens support authentication bound to a stream ID: to prevent the same Token from being reused to publish other streams after passing authentication, Tokens can now be generated bound to a specific stream ID.
In order to ensure the successful downloading of the client, the server has added batch prohibition of RTC streaming and batch restoration of RTC streaming capabilities.
Call the batch forbid RTC streaming interface to prohibit the specified stream IDs from being published to the RTC service in batches; the operation sends a publish-forbidden notification to the clients that are publishing and playing those streams. Call the batch resume RTC streaming interface to restore, in batches, the stream IDs that were forbidden from being published to the RTC media service.
For related APIs, please refer to Batch forbid a RTC stream / Batch resume a RTC stream
Enhancements
The processing logic for versions 2.10.0 to 2.13.1 is: 1. You receive your own network quality callback only if you both publish and play streams. 2. When you play a stream, you receive that user's network quality only if the publishing end is also playing a stream and is in the same room as you.
The processing logic for version 2.14.0 and above is: 1. You receive your own network quality callback as long as you publish or play a stream. 2. When you play a stream, you receive the publisher's network quality as long as the publisher is in the same room as you.
For related API, please refer to onNetworkQuality
Prior to 2.14.0, the default maximum number of publish channels was 2; supporting more required a special package from ZEGOCLOUD technical support. To work with the real-time signaling function, the default maximum number of publish channels is increased to 4 in this version.
For related API, please refer to startPublishingStream
Completed the comment optimization of all API interfaces and error codes. Information such as "Available since", "Description", "Use cases", "When to call", "Restrictions", and "Caution" has been added to the API comments so that developers can understand each API's function more clearly. "Possible causes" and "Recommendations" have been added to the error codes to help developers locate and solve problems.
To reduce the cost of understanding environments, ZEGO has unified the environment concept. Starting from this version, the test environment is abandoned and the formal (production) environment is used uniformly. Developers who integrated the SDK before version 2.14.0 can refer to the Test Environment Disposal Instructions for SDK upgrade and code adjustments.
The length limit of the mixed stream forwarding address is extended from 512 bytes to 1024 bytes.
Deleted
To reduce the cost of understanding environments, the test environment has been abandoned and the formal (production) environment is used uniformly. The original [createEngine] interface is deprecated in 2.14.0 and above; please use the same-named interface without the [isTestEnv] parameter instead.
Release date: 2021-10-15
Bug Fixes
Release date: 2021-10-15
New Features
Electronic sound effects process speech and singing so that the voice takes on an electronic tone. This function is often used in KTV and voice chat room scenes.
Before [createEngine] initializes the SDK, call the [setElectronicEffects] interface to turn on the electronic sound effects, and set different modes of electronic tones and the corresponding starting pitches as needed. When this interface is not called, the electronic sound effects are turned off by default.
Developers can also apply preset electronic sound effects through the [setVoiceChangerPreset] interface. Currently, presets for C major, A minor, and harmonic minor electronic sound effects are supported.
For related API, please refer to setElectronicEffects
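A hedged sketch of enabling an electronic voice effect, assuming [setElectronicEffects] is exposed on the engine instance with a mode enum and a starting-pitch parameter; the enum member and tonal value below are assumptions, and the note above on call timing should be checked against the API reference.
```objc
#import <ZegoExpressEngine/ZegoExpressEngine.h>

// Enable an electronic sound effect in a major key with starting pitch index 0.
[[ZegoExpressEngine sharedEngine] setElectronicEffects:YES
                                                  mode:ZegoElectronicEffectsModeMajor // assumed enum member
                                                 tonal:0];
```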
Note: To use this function, you need to upgrade the SDK and contact ZEGOCLOUD technical support for configuration.
For different business scenarios, time-limited permissions for logging in to a room, publishing streams, and other operations can be enforced through the Token.
When the Token expires, the server proactively revokes the user's permissions: the client user is kicked out of the room and stops streaming. This mechanism makes permission management more secure and is often used in KTV and chat room scenarios.
For related API, please refer to renewToken, onRoomTokenWillExpire
When live streaming uses mixed streaming, the watermark can be updated in real time on the mixed streaming output screen, so that the watermark can be refreshed synchronously when the mixed streaming is updated in real time. This function is often used in online education scenarios, such as marking the name of the teacher or class corresponding to each input stream on the mixed stream screen during class.
[ZegoMixerTask]'s mixed stream input list [ZegoMixerInput] has added a [label] field to set the relevant text watermark information on the mixed stream input video screen. Each stream of the mixed stream only supports one watermark.
For related API, please refer to startMixerTask
When mixing streams, you can set the rendering mode for each stream. When the resolution ratio of the mixed stream input stream is inconsistent with the layout ratio of the corresponding input stream on the mixed stream output screen, different rendering modes can be selected according to different business scenarios.
[ZegoMixerTask]’s mixed stream input list [ZegoMixerInput] has added a [renderMode] field to set the rendering mode of the mixed stream input video screen, supporting "filling mode" and "adaptation mode".
For related API, please refer to startMixerTask
When the iOS platform captures the phone screen, App audio, and microphone audio through ReplayKit and shares them through the SDK, the App audio volume and the microphone audio volume can be adjusted separately, so that either the host's voice or the App audio can be emphasized flexibly in different scenarios. This function is often used in game live streaming scenes.
After you start the preview or push the stream successfully, you can set the audio volume collected by the microphone through the [setReplayKitMicrophoneVolume] interface, and set the audio volume collected by the App through the [setReplayKitApplicationVolume] interface. The volume range of both is 0 ~ 200. The default value is 100.
For related API, please refer to setReplayKitMicrophoneVolume, setReplayKitApplicationVolume
Starting from this version, the arm64 simulator architecture is supported to facilitate developers to use the iOS simulator to develop and debug on the Mac with M1 chip (Apple Silicon).
Enhancements
On the push end, you can specify certain streams for precise alignment when configuring mixed streams.
For related API, please refer to setStreamAlignmentProperty
Call the [startPublishingStream] interface and set the [forceSynchronousNetworkTime] value in [ZegoPublisherConfig] to 1, then the SDK will wait until the NTP network time synchronization is completed before pushing the stream, and then call the [setStreamAlignmentProperty] interface to enable the mixed stream precise alignment function.
For related API, please refer to startPublishingStream, setStreamAlignmentProperty, onNetworkTimeSynchronized
Bug Fixes
Deleted
Because a capture timestamp parameter has been added to the callback, the [onProcessCapturedAudioData] callback is deprecated in 2.13.0 and above; use the same-named callback with the timestamp parameter instead.
For related API, please refer to onProcessCapturedAudioData
Because a capture timestamp parameter has been added to the callback, the [onProcessRemoteAudioData] callback is deprecated in 2.13.0 and above; use the same-named callback with the timestamp parameter instead.
For related API, please refer to onProcessRemoteAudioData
Because a capture timestamp parameter has been added to the callback, the [onProcessPlaybackAudioData] callback is deprecated in 2.13.0 and above; use the same-named callback with the timestamp parameter instead.
For related API, please refer to onProcessPlaybackAudioData
Release date: 2021-09-09
New Features
The H.265 codec complete solution is launched, which is suitable for single-anchor live broadcast and multi-person interactive live broadcast scenarios. Developers can output H.265 format video streams during encoding or mixing. H.265 saves 30% of traffic compared to H.264 under the same image quality. Before using this function, you need to contact ZEGOCLOUD technical support to activate it.
For related API, please refer to isVideoEncoderSupported, isVideoDecoderSupported, enableH265EncodeFallback, onPublisherVideoEncoderChanged
Support obtaining and modifying the audio data to be played after mixing. After initializing the SDK, and before [startPublishingStream], [startPlayingStream], [startPreview], [createMediaPlayer], or [createAudioEffectPlayer], call the [enableCustomAudioPlaybackProcessing] interface to enable custom post-processing of mixed playback audio, and call [setCustomAudioProcessHandler] to set the custom audio processing callbacks.
For related API, please refer to enableCustomAudioPlaybackProcessing, setCustomAudioProcessHandler, onProcessPlaybackAudioData
After successfully connecting with a remote user, when the status of the remote speaker device changes (for example, the speaker is turned on or off), you can monitor the change through the [onRemoteSpeakerStateUpdate] callback.
For related API, please refer to onRemoteSpeakerStateUpdate
After loading the resources, call the [setPlaySpeed] interface to set the video playback speed of the media player, which supports 0.5 ~ 2.0 times, and the default is 1.0, which is the normal speed.
For related API, please refer to ZegoMediaPlayer > setPlaySpeed
When using the mixing function, it is supported to set the spatial audio effect of each audio stream through the audioDirection parameter in [ZegoMixerInput].
For related API, please refer to startMixerTask
Enhancements
Optimized the in-ear monitoring logic to reduce the monitoring delay to just over 50 ms.
Starting from this version, broadcast messages and barrage messages support sending longer messages (the default limit is 1 KB). If necessary, please contact ZEGOCLOUD technical support for configuration.
Starting from this version, no special package is required.
For related API, please refer to ZegoMediaPlayer > loadResource
Bug Fixes
Release date: 2021-08-27
New Features
Range audio is a scene-based voice interactive product developed for social and chicken-eating games. The product provides range voice, 3D sound effects and team voice functions.
Range audio: the listener in the room has a range limit on the receiving distance of the audio. If the distance between the speaker and himself exceeds this range, the sound cannot be heard. To ensure a clear voice, when there are more than 20 people nearby, you can only hear the 20 speakers closest to you.
3D sound effect: The sound has a sense of 3D space and is attenuated by distance.
Team mode: players can choose to join a team, and can switch freely between World mode and Team mode in the room.
For related API, please refer to createRangeAudio, destroyRangeAudio, setEventHandler, setAudioReceiveRange, updateSelfPosition, updateAudioSource, enableSpatializer, enableMicrophone, enableSpeaker, setRangeAudioMode, setTeamID
Enhancements
Release date: 2021-08-20
Bug Fixes
For related API, please refer to loginRoom
Release date: 2021-08-10
New Features
The SDK can specify a room, and the ZEGO real-time audio and video server automatically mixes all audio streams in the room (currently only supports mixed audio streams), which is often used in pure language chat scenes. Compared with manual mixing, this function reduces the complexity of developer access and does not need to manage the life cycle of the audio stream in the specified room.
For related API, please refer to startAutoMixerTask, stopAutoMixerTask
Added [setBackgroundColor] to the mixed-flow task object [ZegoMixerTask] to set the mixed-flow background color.
For related API, please refer to startMixerTask
Developers often only pay attention to the human voice when monitoring sound wave callbacks. They can call the [startSoundLevelMonitor] interface and pass in [ZegoSoundLevelConfig] to enable VAD human voice detection. The SDK also adds parameters for whether to include human voice detection in the local sound wave callback [onCapturedSoundLevelInfoUpdate] and remote audio sound wave callback [onRemoteSoundLevelInfoUpdate].
For related API, please refer to startSoundLevelMonitor, onCapturedSoundLevelInfoUpdate, onRemoteSoundLevelInfoUpdate
After loading the playback data into memory, developers can play it directly with the media player without first writing it to a file.
For related API, please refer to ZegoMediaPlayer > loadResourceFromMediaData
Developers can call the [setCameraExposureCompensation] interface to set the camera exposure compensation value after opening the preview. The value range is [-1, 1]. The smaller the value, the darker the picture, and the larger the value, the brighter the picture.
For related API, please refer to setCameraExposureCompensation
Enhancements
When using multiple video or audio devices, the [deviceID] parameter can be used to accurately identify the device reporting the error and troubleshoot the problem more efficiently.
When developers need to support more than 12 streams, please contact ZEGOCLOUD technical support for configuration.
Bug Fixes
Deleted
Because its parameter definition was inaccurate, the [onNetworkQuality] callback is deprecated in 2.10.0 and above; use the same-named callback with the ZegoStreamQualityLevel enumeration instead.
For related API, please refer to onNetworkQuality
Release date: 2021-07-13
Bug Fixes
Release date: 2021-07-09
New Features
When the camera is turned off, it supports continuous push of still pictures in JPEG/JPG, BMP and HEIF formats. For example, when the anchor exits the background, the camera will be actively turned off. At this time, the audience side needs to display the image of the anchor temporarily leaving.
After initializing the SDK, set the path of the still image through the [setDummyCaptureImagePath] interface before turning off the camera. Once publishing has started normally, calling the [enableCamera] interface to turn off the camera starts publishing the still image, and calling [enableCamera] again to turn the camera back on stops publishing it.
For related API, please refer to setDummyCaptureImagePath
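A minimal sketch of showing a placeholder image while the camera is off. The bundled file name "host_away.jpg" and the channel parameter are illustrative assumptions.
```objc
#import <ZegoExpressEngine/ZegoExpressEngine.h>

// Register the still image to publish whenever the camera is disabled.
NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"host_away" ofType:@"jpg"];
[[ZegoExpressEngine sharedEngine] setDummyCaptureImagePath:imagePath
                                                   channel:ZegoPublishChannelMain];

// Turning the camera off starts publishing the still image;
// calling enableCamera:YES later resumes normal camera capture.
[[ZegoExpressEngine sharedEngine] enableCamera:NO];
```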
Added the [onNetworkQuality] callback for the uplink and downlink network quality of local and remote users. By default it reports the network status of the local user and of each remote user (unknown, excellent, good, medium, poor, or disconnected). Developers can use this function to analyze the network conditions on the link or to understand the network status of local and remote users.
For related API, please refer to onNetworkQuality
When performing multi-terminal synchronization behaviors or time-consuming statistics, network time synchronization is required. The SDK adds the function of obtaining the NTP time. You can obtain the NTP time stamp through the [getNetworkTimeInfo] interface. Please contact ZEGOCLOUD technical support before using this function.
For related API, please refer to getNetworkTimeInfo
Based on the NTP time of the ZEGO server, the playback time of each stream is automatically aligned when the stream is mixed. Please contact ZEGOCLOUD technical support before using this function.
Enhancements
The same user can join multiple rooms at the same time (currently up to 5 rooms by default) and publish, play, send real-time messages, and receive message callbacks in all of them simultaneously. This function isolates the messages and callbacks of each room and enables more flexible co-hosting services. It is recommended for scenarios such as cross-room interaction and online-education super small classes.
You need to call [setRoomMode] to set the multi-room mode before initializing the SDK, and then call the [loginRoom] interface to log in to multiple rooms.
For related API, please refer to setRoomMode, loginRoom
Developers can call the [logoutRoom] interface to log out of the current room without filling in the roomID. If the multi-room function is used, calling this interface will exit all rooms.
For related API, please refer to logoutRoom
Added audioCumulativeBreakCount, audioCumulativeBreakTime, audioCumulativeBreakRate and other parameters in the pull-stream quality callback, which provide more detailed data of pull-stream stuck.
For related API, please refer to onPlayerQualityUpdate
When calling the [startNetworkSpeedTest] interface to start the network speed test, you can set the callback period (3000 ms by default).
For related API, please refer to startNetworkSpeedTest
Bug Fixes
Deleted
In order to improve the multi-room function and remove the old master-slave room concept, the [loginMultiRoom] interface has been abandoned in 2.9.0 and above. If you need to implement a new multi-room function, please call the [setRoomMode] function to set the multi-room mode before the engine is initialized, and then use [loginRoom] to log in to the multi-room. If you call the [loginRoom] function to log in to the multi-room, please make sure to pass in the same User Info.
For related API, please refer to setRoomMode, loginRoom
Release date: 2021-06-11
New Features
User permission control means that when a user logs in to a room or performs operations such as publishing or playing streams in the room, the ZEGO server checks whether the user has the corresponding permission based on the Token carried at login, avoiding risks caused by missing permission control or improper operations. Currently, only two permissions are verified: logging in to a room and publishing streams in that room.
For related API, please refer to loginRoom, renewToken, onRoomTokenWillExpire
Open the ability of the auxiliary stream to replicate the video data of the main stream. With this capability, developers can use different protocols to push the same data in the main and auxiliary streams. If developers need this ability, please contact ZEGOCLOUD technical support to provide a trial API.
For related API, please refer to callExperimentalAPI
Spatial audio lets listeners perceive the position of sounds 360° around them. Developers can use spatial audio to create a more realistic "seating" effect in an audio/video room: users perceive the direction a sound comes from, reproducing the offline experience. It is suitable for scenarios such as voice chat rooms, murder-mystery (script-based role-play) games, and online meetings.
For related API, please refer to enablePlayStreamVirtualStereo
Enhancements
Bug Fixes
Release date: 2021-04-29
New Features
ZEGO provides some technical previews or special customized functions in the RTC business through this API. If you need to obtain the use of this function or its details, please consult ZEGOCLOUD technical support.
For related API, please refer to callExperimentalAPI
Deleted
The [enableAudioDataCallback] interface is obsolete in 2.7.0 and above, please use the [startAudioDataObserver] interface instead.
For related API, please refer to startAudioDataObserver
Release date: 2021-04-15
New Features
When mixing streams, the developer can mark a target stream to be highlighted on the publishing side, so that a specific user's voice stands out in a noisy environment where multiple people are talking at the same time; for example, in a meeting scenario the voice of key participants can be guaranteed. When calling the [startMixerTask] interface to start mixing, set [mixMode] in the [ZegoMixerAudioConfig] audio configuration to [focused], and set [isAudioFocus] of the [ZegoMixerInput] of the stream to be highlighted to [true]; that stream is then treated as the focus voice stream.
For related API, please refer to startMixerTask
Bug Fixes
Release date: 2021-04-01
New Features
Developers can set the maximum cache duration and maximum cache size of the media player's network resources through the [setNetWorkResourceMaxCache] interface before loading resources (the two values cannot both be 0), and can obtain the playable duration and size of the data currently buffered in the queue through the [getNetWorkResourceCache] interface.
For related API, please refer to ZegoMediaPlayer > setNetWorkResourceMaxCache, ZegoMediaPlayer > getNetWorkResourceCache
When the network status is poor and the media player has finished playing the cached network resources, it will stop playing. Only when the cached network resource is greater than the threshold set by the SDK (the default value is 5000 ms, and the valid value is greater than or equal to 1000 ms), the media player will automatically resume playback at the original paused position.
For related API, please refer to ZegoMediaPlayer > setNetWorkBufferThreshold
Release date: 2021-03-18
New Features
Take a screenshot of the screen currently playing in the media player.
For related API, please refer to ZegoMediaPlayer > takeSnapshot
The audio encoding type, bit rate, and audio channel combination value can be set and obtained as required.
For related API, please refer to setAudioConfig, getAudioConfig
When the flow control of the designated push channel is enabled through the [enableTrafficControl] interface, the [setTrafficControlFocusOn] interface can be used to control whether to start the flow control due to poor remote network conditions.
For related API, please refer to setTrafficControlFocusOn
This callback will be received after the first frame of video data is rendered.
For related API, please refer to onPublisherRenderVideoFirstFrame
With normal seek, the specified timestamp may not be an I-frame, so the player returns the I-frame near the specified timestamp, which is not very accurate. With accurate seek, when the specified timestamp is not an I-frame, the player uses the nearby I-frame to decode the frame at the exact specified timestamp.
For related API, please refer to ZegoMediaPlayer > enableAccurateSeek
This function can be used when you need to mute the audio data of all the playing streams at one time.
For related API, please refer to muteAllPlayStreamAudio
This function can be used when you need to mute the video data of all the playing streams at one time.
For related API, please refer to muteAllPlayStreamVideo
Enhancements
Bug Fixes
Release date: 2021-03-05
Bug Fixes
Release date: 2021-03-04
New Features
When developers need to customize the log file size and path, they can call the [setLogConfig] interface; it must be called before [createEngine] to take effect. If it is called after [createEngine], it takes effect at the next [createEngine] following [destroyEngine]. Once [setLogConfig] has been called, the old way of setting the log size and path through [setEngineConfig] becomes invalid for the rest of the engine's life cycle (until [destroyEngine]). It is recommended that once you use this interface, you always use it to set the log path and size.
For related API, please refer to setLogConfig
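A small sketch of customizing the log path and size before the engine is created. The directory and the 5 MB cap are illustrative; property names follow the ZegoLogConfig description in the API reference.
```objc
#import <ZegoExpressEngine/ZegoExpressEngine.h>

// Route SDK logs to a custom directory with a 5 MB size cap, before createEngine.
ZegoLogConfig *logConfig = [[ZegoLogConfig alloc] init];
logConfig.logPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"zego_logs"];
logConfig.logSize = 5 * 1024 * 1024; // bytes
[ZegoExpressEngine setLogConfig:logConfig];
```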
After setting [setApiCalledCallback], you can get the detailed information of the execution result of the ZEGO SDK method through the [onApiCalledResult] callback.
For related API, please refer to setApiCalledCallback
When the publisher has set [codecID] to [SVC] through [setVideoConfig], the player can dynamically select a different stream type (the small-resolution layer is half the size of the standard layer); this can be set before or after starting to play. When the network is weak or the rendered UI window is small, you can choose the small-resolution video to save bandwidth.
For related API, please refer to setPlayStreamVideoType
Via [setAudioRouteToSpeaker], you can set the audio route to the speaker. When you choose not to use the built-in speaker to play the sound, that is, when it is set to [false], the SDK will select the audio output device with the highest current priority to play the sound according to the system schedule.
For related API, please refer to setAudioRouteToSpeaker
The local user can control the playback volume of all audio streams.
For related API, please refer to setAllPlayStreamVolume
Before publishing or playing a stream, you can use the network probe to detect and locate possible network problems.
For related API, please refer to startNetworkProbe, stopNetworkProbe
Enhancements
Bug Fixes
Deleted
To avoid misunderstanding, the [setPlayStreamVideoLayer] interface is deprecated in 2.3.0 and above; the new [setPlayStreamVideoType] interface uses more general and easier-to-understand type parameters and clearly describes switching a played stream between large and small layers.
For related API, please refer to setPlayStreamVideoType
For related API, please refer to setLogConfig
Switch the console printing function by setting [key] to [set_verbose] and [value] to [true] or [false].
For related API, please refer to setEngineConfig
To follow naming conventions and make the interface clearly describe the concept of audio routing, avoiding misunderstanding for developers, the [setBuiltInSpeakerOn] interface is deprecated in 2.3.0 and above. Please use [setAudioRouteToSpeaker] to achieve the original functionality.
For related API, please refer to setAudioRouteToSpeaker
Release date: 2021-02-04
Bug Fixes
For related API, please refer to setPublishWatermark
For related API, please refer to sendCustomAudioCaptureAACData, sendCustomAudioCapturePCMData
For related API, please refer to onRemoteVideoFrameEncodedData
Release date: 2021-01-28
New Features
When developers need to pre-process the captured video, for example with a third-party beauty SDK, they can use the custom video processing function to perform video pre-processing easily. Compared with the custom video capture function, it does not require the developer to manage the device input source: you only need to manipulate the raw data provided by the SDK and send it back to the SDK.
For related API, please refer to enableCustomVideoProcessing
Multiple users conduct audio and video communication in the room, and each audio and video communication will have a unique RoomSessionID, which identifies the continuous communication from the first user in the room to the end of the audio and video communication. It can be used in scenarios such as call quality scoring and call problem diagnosis.
For related API, please refer to onRoomStateUpdate
Use a media player to play a media file. When the media player parses that the media file contains SEI, it will trigger the [onMediaPlayerRecvSEI] callback.
Add [advancedConfig] parameter for [ZegoMixerTask] to support advance configuration for mixer task. if you need to use it, please contact ZEGOCLOUD technical support.
For related API, please refer to startMixerTask
Added Prism related quality reports to facilitate developers to discover, locate, and solve problems in a timely manner, so as to better and more comprehensively improve user experience.
Enhancements
Release date: 2021-01-21
Bug Fixes
Release date: 2021-01-14
New Features
Please download the XCFramework from the Download SDK page in the developer center.
This function is used to specify the range of adaptive adjustment of the playback buffer, and the developer can set it according to the scene.
For related API, please refer to setPlayStreamBufferIntervalRange
Bug Fixes
Release date: 2021-01-07
Bug Fixes
Release date: 2020-12-31
New Features
Low latency live streaming focuses on providing stable and reliable live streaming services. Compared with standard live video products, it offers lower audio and video latency, stronger synchronization, and better resistance to weak networks, providing users with a millisecond-level live streaming experience. Typical usage scenarios include online education classes, live show broadcasts, e-commerce live streaming, watch-together experiences, and online auctions. For more details, please refer to Low Latency Live.
For related API, please refer to startPlayingStream
Added support for H.265 encoding, which can reduce the bit rate at the same resolution and frame rate.
For related API, please refer to setVideoConfig
Bug Fixes
Deleted
The legacy configuration followed the life cycle of the [setEngineConfig] function and was not flexible enough. Now that custom video capture can be set before the engine is started, the Express SDK has added an independent [enableCustomVideoCapture] function to configure custom video capture.
For related API, please refer to enableCustomVideoCapture
The legacy configuration followed the life cycle of [setEngineConfig] and was not flexible enough. Now that custom video rendering can be set before the engine is started, the Express SDK has added an independent [enableCustomVideoRender] function to configure custom video rendering.
For related API, please refer to enableCustomVideoRender
Added a new destroy engine function with a [callback] parameter. If the developer needs to switch between multiple audio/video SDKs, the ZEGO SDK can be assumed to have freed up the device's hardware resources when the callback is received. If there is no need to listen for this, just pass null for the [callback] parameter.
For related API, please refer to destroyEngine
The name of the legacy callback did not match its actual function, so it has been deleted and renamed.
For related API, please refer to onPlaybackAudioData
The naming style and semantics of the legacy function were not clear. [muteSpeaker] uses the term Speaker to correspond to Microphone.
For related API, please refer to muteSpeaker
The life cycle of the media player follows the engine, so it has been changed to an instance method of the same name on the [ZegoExpressEngine] class.
For related API, please refer to createMediaPlayer
The engine now provides separate functions for getting the media player's publish volume and local playback volume, enabling accurate volume acquisition, so the original unified acquisition interface is deprecated.
For related API, please refer to ZegoMediaPlayer > setPublishVolume, ZegoMediaPlayer > setPlayVolume
Release date: 2020-12-24
Bug Fixes
Release date: 2020-12-17
New Features
This function supports uplink/downlink network speed measurement, and can be used to detect whether the network environment is suitable for pushing/pulling streams with specified bitrates. Call [startNetworkSpeedTest] to start the network speed test, configure its parameters [ZegoNetworkSpeedTestConfig] to control the speed test process. The speed test result will be called back through [onNetworkSpeedTestQualityUpdate].
For related API, please refer to startNetworkSpeedTest, stopNetworkSpeedTest, onNetworkSpeedTestQualityUpdate
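A minimal sketch, assuming the speed test runs before streaming; the field names on ZegoNetworkSpeedTestConfig and the quality field read in the callback are assumptions and may differ in your SDK version.

```objc
// Start an uplink/downlink speed test with the bitrates you intend to use.
- (void)startSpeedTestBeforeStreaming {
    ZegoNetworkSpeedTestConfig *config = [[ZegoNetworkSpeedTestConfig alloc] init];
    config.testUplink = YES;
    config.expectedUplinkBitrate = 1500;    // target publish bitrate (kbps), assumed field
    config.testDownlink = YES;
    config.expectedDownlinkBitrate = 1500;  // target play bitrate (kbps), assumed field
    [[ZegoExpressEngine sharedEngine] startNetworkSpeedTest:config];
}

// Results arrive through the event handler; stop the test before real streaming.
- (void)onNetworkSpeedTestQualityUpdate:(ZegoNetworkSpeedTestQuality *)quality
                                   type:(ZegoNetworkSpeedTestType)type {
    NSLog(@"speed test rtt: %u ms", quality.rtt);  // `rtt` field is an assumption
    [[ZegoExpressEngine sharedEngine] stopNetworkSpeedTest];
}
```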
This callback will be called when the device's network mode changes, such as switching from WiFi to 5G, or when the network is disconnected.
For related API, please refer to onNetworkModeChanged
Set the zoom factor of the camera through the SDK to achieve the effect of zooming in on distant objects during shooting. For detailed function implementation, please refer to [Advanced Features - Zoom].
For related API, please refer to getCameraMaxZoomFactor, setCameraZoomFactor
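A minimal sketch of how the two APIs combine, assuming the engine is created and camera capture has started:

```objc
// Query the maximum zoom factor supported by the current camera, then apply a
// zoom level within that range (1.0 means no zoom).
float maxFactor = [[ZegoExpressEngine sharedEngine] getCameraMaxZoomFactor];
float target = MIN(2.0f, maxFactor);
[[ZegoExpressEngine sharedEngine] setCameraZoomFactor:target];
```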
This callback will be called when there are changes in audio routing such as earphone plugging, speaker and receiver switching, etc.
For related API, please refer to onAudioRouteChange
Bug Fixes
For related API, please refer to startRecordingCapturedData, stopRecordingCapturedData
Release date: 2020-12-10
Enhancements
After logging in to the room, if you log out or switch rooms, and the incoming RoomID is empty or the RoomID does not exist, the 1002002 error code will be thrown.
For related API, please refer to loginRoom, logoutRoom, switchRoom
Bug Fixes
For related API, please refer to ZegoAudioEffectPlayer > seekTo
Release date: 2020-12-03
New Features
Developers can start monitoring after [createEngine], and support setting the monitoring callback interval (the default is 2s), which can generally be used to compare the memory growth before and after publish/play stream.
For related API, please refer to startPerformanceMonitor, stopPerformanceMonitor
Support the use of AES-128/192/256 to encrypt streaming media data.
For related API, please refer to setPublishStreamEncryptionKey, setPlayStreamDecryptionKey
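A minimal sketch; both ends must use the same key, and the channel parameter shown on the publisher side is an assumption.

```objc
// AES key must be 16/24/32 bytes for AES-128/192/256.
// Publisher side:
[[ZegoExpressEngine sharedEngine] setPublishStreamEncryptionKey:@"0123456789abcdef"
                                                        channel:ZegoPublishChannelMain];
// Player side (per stream):
[[ZegoExpressEngine sharedEngine] setPlayStreamDecryptionKey:@"0123456789abcdef"
                                                    streamID:@"stream_1"];
```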
A value less than 0 means the video leads the audio by that number of milliseconds, a value greater than 0 means the video lags the audio, and 0 means there is no difference. When the absolute value is less than 200 ms, audio and video can basically be regarded as synchronized; when the absolute value stays above 200 ms for 10 consecutive seconds, it can be regarded as abnormal.
For related API, please refer to onPlayerQualityUpdate
When the publisher has set the codecID of [setVideoConfig] to SVC, the player can call the [setPlayStreamVideoLayer] API to select the standard layer or the base layer (the resolution of the base layer is one-half of the standard layer) to save bandwidth.
Enhancements
For related API, please refer to loginRoom
For related API, please refer to sendBroadcastMessage
For related API, please refer to startRecordingCapturedData
If you need to use it, please contact ZEGOCLOUD technical support.
For related API, please refer to ZegoMediaPlayer > loadResource
Deleted
Release date: 2020-11-24
Bug Fixes
Release date: 2020-11-19
New Features
Advanced reverberation parameters can be used to adjust finer reverberation effects as needed. Effects such as studio, KTV, rock, and concert have been added to the preset reverbs, and magnetic male voice and fresh female voice effects have been added to the preset voice changer, making real-time voice more interesting and adapting to more scenes.
For related API, please refer to setReverbAdvancedParam, setReverbPreset, setVoiceChangerPreset
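A minimal sketch, assuming the engine is created; the specific enumeration values used here (KTV reverb, robot voice) are illustrative assumptions and may be named differently in your SDK version.

```objc
// Apply a preset reverb and a preset voice changer to the captured voice.
[[ZegoExpressEngine sharedEngine] setReverbPreset:ZegoReverbPresetKTV];
[[ZegoExpressEngine sharedEngine] setVoiceChangerPreset:ZegoVoiceChangerPresetRobot];
```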
By setting the SEI type, the developer can correctly parse the SEI when decoding with other decoders.
For related API, please refer to setSEIConfig
For related API, please refer to onRoomStreamUpdate
For related API, please refer to onPlayerAudioData
Bug Fixes
For related API, please refer to startPlayingStream
For related API, please refer to mutePlayStreamAudio
For related API, please refer to mutePlayStreamVideo
Deleted
In order to support more reverberation parameters and richer effects, the [setReverbParam] interface is deprecated in version 1.18.0 and above. Please use the [setReverbAdvancedParam] interface with ZegoReverbAdvancedParam type parameters instead.
For related API, please refer to setReverbAdvancedParam
The [onRoomStreamUpdate] callback is deprecated in version 1.18.0 and above. Please use the callback with the same name that has the extended data parameter extendedData instead. extendedData is used to carry messages attached to stream updates, such as the reason for stream deletion, so the old interface is discarded.
For related API, please refer to onRoomStreamUpdate
Release date: 2020-11-05
Enhancements
For related API, please refer to loginRoom, sendBroadcastMessage, sendBarrageMessage, sendCustomCommand
Bug Fixes
For related API, please refer to sendSEI
For related API, please refer to handleReplayKitSampleBuffer
Release date: 2020-10-22
New Features
Added 4 voice changer effects - Foreigner, Optimus Prime, Robot, and Ethereal - to easily create unique sound effects and make users' voices more interesting, creating a playful atmosphere in voice scenes with friends and enhancing entertainment.
For related API, please refer to setVoiceChangerPreset
Users can set the reverberation echo parameters according to their needs, allowing up to 7 echoes (delays) to be set, and supporting individually setting the delay and attenuation of each echo as well as the overall input and output gain values. It can also be used together with the voice changer and reverb to achieve a variety of customized sound effects.
For related API, please refer to setReverbEchoParam
By default, adaptive screen rotation is processed on the publisher side; processing rotation on the player side is also supported to reduce the memory usage of the publisher. For specific usage, please refer to the document [Advanced Video Processing] -> [Custom Video Capturing].
For related API, please refer to prepareForReplayKit, handleReplayKitSampleBuffer
By changing the user's pitch, the output sound becomes perceptually different from the original sound, enabling effects such as changing a male voice into a female voice.
For related API, please refer to ZegoMediaPlayer > setVoiceChangerParam
It supports taking snapshots of the video during publishing or playing streams, which can be used in scenarios such as content review (for example, pornography identification).
For related API, please refer to takePublishStreamSnapshot, takePlayStreamSnapshot
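A minimal sketch for the publishing side; the callback signature (error code plus an optional UIImage) is an assumption.

```objc
// Take a snapshot of the currently published video frame.
[[ZegoExpressEngine sharedEngine] takePublishStreamSnapshot:^(int errorCode, UIImage * _Nullable image) {
    if (errorCode == 0 && image != nil) {
        // Hand the image to your content-review pipeline, save it, etc.
    }
}];
```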
Can be used to suppress transient noises such as keyboard and desk knocks.
For related API, please refer to enableTransientANS
When the media file contains multiple audio tracks (such as original sound and accompaniment), it supports switching audio tracks for playback.
For related API, please refer to ZegoMediaPlayer > setAudioTrackIndex
Enhancements
For related API, please refer to onPlayerQualityUpdate
Bug Fixes
Deleted
The param parameter of the [setVoiceChangerParam] function is deprecated. This function is now only used to fine-tune the pitch value. If you need to use a preset enumeration to set the voice changer, please use the newly added [setVoiceChangerPreset] function.
For related API, please refer to setVoiceChangerPreset, setVoiceChangerParam
The param parameter of the [setReverbParam] function is deprecated. This function is now only used to fine-tune specific reverb parameter values. If you need to use a preset enumeration to set the reverberation, please use the newly added [setReverbPreset] function.
For related API, please refer to setReverbPreset, setReverbAdvancedParam
Release date: 2020-10-15
Enhancements
Bug Fixes
Release date: 2020-09-24
New Features
Sound effects are short audio clips played to enhance the sense of reality or the atmosphere of a scene, such as applause, gift sound effects, and prompt sounds during a live broadcast, or bullet and collision sounds in a game.
The sound effect player supports functions such as sound effect playback (multiple sound effects can be overlapped), playback control (such as pause playback, volume adjustment, set playback progress), pre-loaded sound effects and other functions.
For related API, please refer to createAudioEffectPlayer, destroyAudioEffectPlayer
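A minimal sketch; the loadResource and start method signatures on ZegoAudioEffectPlayer are assumptions based on this entry, and the file path is illustrative.

```objc
// Create a sound effect player, pre-load a short effect, then play it on demand.
ZegoAudioEffectPlayer *player = [[ZegoExpressEngine sharedEngine] createAudioEffectPlayer];
unsigned int effectID = 1;  // an ID you choose for this sound effect

[player loadResource:effectID path:@"applause.mp3" callback:^(int errorCode) {
    if (errorCode == 0) {
        // nil path/config assumes the pre-loaded resource and default playback settings.
        [player start:effectID path:nil config:nil];
    }
}];
```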
Bug Fixes
For related API, please refer to onRoomStreamExtraInfoUpdate
Release date: 2020-09-17
Bug Fixes
For related API, please refer to onRoomUserUpdate
Release date: 2020-09-10
New Features
Allows setting and getting the local playback volume and publish volume of the media player separately.
For related API, please refer to ZegoMediaPlayer > setPublishVolume, ZegoMediaPlayer > setPlayVolume
Dual channel means two sound channels (stereo). When you hear a sound, you can determine the specific location of the sound source based on the phase difference between the left and right ears. When the developer turns on dual-channel capture and uses a dual-channel capture device, dual-channel audio data can be collected and streamed (for publishing streams, the dual-channel audio encoding function must also be enabled via the setAudioConfig API).
For related API, please refer to setAudioCaptureStereoMode, setAudioConfig
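A minimal sketch; the stereo mode and audio channel enumeration values are assumptions and may be named differently in your SDK version.

```objc
// Capture in stereo and publish with dual-channel (stereo) encoding.
[[ZegoExpressEngine sharedEngine] setAudioCaptureStereoMode:ZegoAudioCaptureStereoModeAlways];

ZegoAudioConfig *audioConfig = [[ZegoAudioConfig alloc] init];
audioConfig.channel = ZegoAudioChannelStereo;  // enable stereo encoding
audioConfig.bitrate = 128;                     // kbps, a typical stereo bitrate
[[ZegoExpressEngine sharedEngine] setAudioConfig:audioConfig];
```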
Developers can control the callback interval of sound level / audio spectrum monitoring. The default is 100 ms and the value range is [100, 3000].
For related API, please refer to startSoundLevelMonitor, startAudioSpectrumMonitor
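A minimal sketch showing a custom callback interval:

```objc
// Report sound levels and the audio spectrum every 500 ms (valid range: [100, 3000] ms).
[[ZegoExpressEngine sharedEngine] startSoundLevelMonitor:500];
[[ZegoExpressEngine sharedEngine] startAudioSpectrumMonitor:500];
```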
Switching rooms allows developers to configure the properties of the next room, such as login authentication.
For related API, please refer to switchRoom
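A minimal sketch; the config variant of switchRoom and the token field on ZegoRoomConfig are assumptions based on this entry, and the room IDs and token are placeholders.

```objc
// Switch rooms and pass the authentication configuration for the next room.
ZegoRoomConfig *roomConfig = [ZegoRoomConfig defaultConfig];
roomConfig.token = @"<token for the next room>";
[[ZegoExpressEngine sharedEngine] switchRoom:@"room_old"
                                    toRoomID:@"room_new"
                                      config:roomConfig];
```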
Enhancements
Bug Fixes
Deleted
The onRemoteAudioData enum value is deleted, please use onPlaybackAudioData instead.
The ZegoAudioDataCallbackBitMaskRemote enum value is deleted, please use ZegoAudioDataCallbackBitMaskPlayback instead.
Release date: 2020-08-27
New Features
This method quickly switches rooms. After it is called, the publishing and playing streams of the previous room are stopped and the new room is entered. If the room is switched successfully, the callback for successfully logging in to the new room will be received. Compared with the previous approach of calling the two interfaces to log out of the original room and log in to the new room, it is easier to use and more efficient.
For related API, please refer to switchRoom
When using custom video capture with encoded data, the SDK will notify the developer that traffic control is required when the network environment changes.
For related API, please refer to onEncodedDataTrafficControl
Enhancements
When a user enters a room that already contains a stream with stream extra info, onRoomStreamExtraInfoUpdate will be triggered. Therefore, developers only need to handle this callback to process the stream extra info logic.
For related API, please refer to onRoomStreamExtraInfoUpdate
Bug Fixes
For related API, please refer to enableCustomVideoCapture, enableCustomVideoRender, enableCustomAudioIO
For related API, please refer to onPlayerRecvSEI
Fixed an issue where the currentProgress getter of ZegoMediaPlayer was incorrect.
Release date: 2020-08-13
New Features
This feature supports the same user joining multiple rooms at the same time; the total number of rooms that can currently be joined is up to two. After joining multiple rooms, a user can only publish streams in the main room but can play streams in all rooms, and receives signaling and callbacks from each room normally. This feature is usually used in scenarios such as "Super-Small Class"; please contact ZEGOCLOUD technical support if you need to enable it.
For related API, please refer to [loginMultiRoom]
This function sets an extra message at the room level. The message follows the life cycle of the entire room, and every user who logs in to the room can synchronize it. Developers can use it to implement various business logic, such as room announcements. At present, only one key-value pair is allowed in the room extra message; the maximum length of the key is 10 bytes, and the maximum length of the value is 100 bytes.
For related API, please refer to setRoomExtraInfo
This function allows developers to process audio data after it is captured, or to process remote audio data before it is played for rendering. It is usually used in scenarios such as voice changing and sound beautification.
For related API, please refer to enableCustomAudioCaptureProcessing, enableCustomAudioRemoteProcessing
This feature allows the developer to flip the video image when using custom video capture with the incoming texture ID (Texture) type. This setting is valid only if the bufferType property in the ZegoCustomVideoCapture parameter of the enableCustomVideoCapture API is GLTexture2D.
Enhancements
The publishing stream capture volume and playing stream playback volume range is expanded from 0 ~ 100 to 0 ~ 200, and the default value is 100.
For related API, please refer to setCaptureVolume, setPlayVolume
Release date: 2020-08-06
Bug Fixes
Release date: 2020-07-30
New Features
Supports adjusting the gain values of 10 frequency bands to adjust the tone.
For related API, please refer to setAudioEqualizerGain
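A minimal sketch; the band index range, the second argument label, and the dB values shown here are assumptions.

```objc
// Boost the lowest band and slightly cut the highest band (gain in dB, assumed).
[[ZegoExpressEngine sharedEngine] setAudioEqualizerGain:0 gain:6.0f];
[[ZegoExpressEngine sharedEngine] setAudioEqualizerGain:9 gain:-3.0f];
```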
Release date: 2020-07-23
Bug Fixes
Fixed an issue where AudioDataCallback may not be called back.
Fixed an issue related to destroyEngine.
Release date: 2020-07-15
New Features
Enhancements
Release date: 2020-06-30
New Features
Developers can build related audio preprocessing effects into their apps.
For related API, please refer to enableVirtualStereo, setVoiceChangerParam, setReverbAdvancedParam
It allows developers to record and save audio/video streams to local files for future playback.
For related API, please refer to startRecordingCapturedData, stopRecordingCapturedData
With this feature, developers can listen for the audio data callback to obtain PCM audio data, which can be used for further processing, such as obscene content detection or subtitle generation.
For related API, please refer to [enableAudioDataCallback]
Developers can do custom decoding and rendering. The bufferType attribute of the ZegoCustomVideoRenderConfig object can now be set to 'EncodedData' to use this feature.
Supports setting the noise suppression (ANS) mode; available modes include Medium.
For related API, please refer to setANSMode
Developers can use this function to capture audio data and send it to the SDK, and obtain the audio data of the remote playing stream to process or play.
For related API, please refer to enableCustomAudioIO
Enhancements
Added error code 1001011. When developers set up the custom video rendering configuration with an unsupported bufferType, this error will be reported to remind the developer that the setting is incorrect.
Release date: 2020-06-28
Bug Fixes
Release date: 2020-06-20
Bug Fixes
Fixed an issue with the onMixerSoundLevelUpdate callback.
Fixed an issue where the bufferType field in the enableCustomVideoRender configuration did not take effect.
Release date: 2020-06-17
Bug Fixes
Fixed an issue with the onAudioMixingCopyData callback.
Release date: 2020-06-15
New Features
Added the enableCustomVideoCapture and enableCustomVideoRender APIs, allowing users to choose whether to use the custom video capture/render functions after createEngine and before previewing or starting to publish/play a stream.
Added audio mixing capabilities: starting audio mixing, muting the mixed audio locally, setting the audio mixing output volume, and a callback for copying audio mixing data to the SDK.
The enableHeadphoneMonitor API was added to enable headphone monitoring.
The sendCustomVideoCaptureEncodedData API was introduced to send encoded video data to the SDK.
Bug Fixes
Fixed an issue with the rotation parameter passed from the MediaPlayer video frame callback.
Fixed an issue where setEventHandler cannot be set to empty.
Fixed an issue related to destroyEngine.
Deleted
Deleted the customVideoCaptureMainConfig, customVideoCaptureAuxConfig, and customVideoRenderConfig members in class ZegoEngineConfig; please use the new enableCustomVideoCapture and enableCustomVideoRender interfaces mentioned above.
Release date: 2020-06-11
Bug Fixes
Fixed an issue where setting advancedConfig fails when the config includes special characters.
Release date: 2020-05-31
New Features
Added getAudioConfig and getVideoConfig for obtaining the current audio configuration (i.e., audio bitrate, number of audio channels, etc.) and video configuration (i.e., resolution, bitrate, frame rate, etc.).
Added isMicrophoneMuted and isSpeakerMuted for obtaining the audio device status.
Bug Fixes
Deleted
Release date: 2020-05-18
Bug Fixes
Fixed an issue where using setEngineConfig to explicitly set the log path does not take effect.
Release date: 2020-05-15
New Features
Added error code 1000008.
To change the event handler by calling setEventHandler again, it needs to be explicitly set to null first.
Enhancements
The timeStamp parameter of the sendCustomVideoCapturePixelBuffer and sendCustomVideoCaptureTextureData interfaces is renamed to timestamp.
Release date: 2020-04-30
New Features
Added the onRoomOnlineUserCountUpdate callback, which will be triggered when the number of online users in the room changes. Developers can use this callback to monitor the current online user count of the room and display the number in the app's UI.
Bug Fixes
Release date: 2020-04-15
New Features
The onPublisherQualityUpdate callback now provides the following information, which developers can use to perform statistics on data related to stream publishing: totalSendBytes (the total bytes sent), audioSendBytes (the number of bytes of audio data sent), and videoSendBytes (the number of bytes of video data sent).
The onPlayerQualityUpdate callback now provides the corresponding information, which developers can use to perform statistics on data related to stream playing and the latency of stream playing.
Enhancements
Release date: 2020-03-31
Enhancements
Bug Fixes
Release date: 2020-03-19
Bug Fixes
Release date: 2020-03-14
New Features
Developers can select a scenario when calling createEngine, and the SDK will perform optimal pre-configuration for real-time communication and live broadcast scenarios.
This feature can be used when developers need to publish one stream with source video from the camera and the other stream with source video from the screen. It can be applied to many scenarios, such as online education and video conferencing.
When calling the setVideoConfig API to set up the video configuration for stream publishing, developers can set the video codecID to multi layer to enable multi-bitrate encoding.
Developers can call the enableTrafficControl:property: API to enable this feature and choose one or more of the following traffic control properties: adjusting the video bitrate (the basic property), adjusting the video frame rate, and adjusting the video resolution.
Added the enablePublishDirectToCDN API. Developers can directly push audio/video streams to CDN from the client by specifying the CDN URL or using the related ZEGO backend configuration before starting to publish.
For related API, please refer to enablePublishDirectToCDN
When streams are directly published or relayed to CDN, they have to be played via URL. Developers can set up the CDN configuration for playing streams via URL. When multi-bitrate encoding is enabled on the publishing side, the stream player can play the stream at a lower resolution by setting the videoLayer to base.
For related API, please refer to startPlayingStream
When a viewer playing a mixed stream needs to know the sound level of each input stream, developers can call the enableSoundLevel API to enable sound levels and set a unique sound level ID for each input stream when starting the stream mixing task; the stream player can then obtain the sound level of each input stream from the onMixerSoundLevelUpdate callback.
This feature can be used when it is required to send messages to a room with more than 500 users, with the assumption that the delivery of each message to every user does not need to be guaranteed (i.e., some users might miss some messages).
For related API, please refer to sendBarrageMessage
With this feature, developers do not have to render the preview by themselves when using custom video capture.
Enhancements
Added a destroyEngine interface with the same name as the parameterless destroyEngine interface but with a callback parameter. With the new API, the callback is triggered when the action of destroying the ZegoExpressEngine is completed and the resources used by the engine are released; developers can switch to other SDKs at this point.
Release date: 2020-02-13
New Features
Release date: 2020-01-17
Bug Fixes
Release date: 2019-12-27
New Features
Added setAudioConfig for setting up audio parameters (codec, audio bitrate, number of channels) before starting to publish a stream.
Release date: 2019-12-13
New Features
Enhancements
Release date: 2019-11-27
New Features
Release date: 2019-11-11
New Features
Bug Fixes
Release date: 2019-11-01
New Features
System, Room, Stream publishing, Stream playing, Preprocess, Devices.