Building a Video Call Application with WebRTC and Firebase in Kotlin

Project Overview

The video calling application presented in this article consists of two main parts:

  • WebRTC integration: handles peer-to-peer video communication.
  • Firebase integration: manages signaling and user state updates.

We will use Hilt for dependency injection and Kotlin coroutines for asynchronous programming, following a clean architecture approach to keep the code modular and maintainable.

Setting Up the Project

First, create a new Kotlin project and add the necessary dependencies for WebRTC, Firebase, and Hilt (for dependency injection).

Dependencies

Add the following dependencies to your build.gradle file:

dependencies { 
    // ...
    implementation 'com.google.code.gson:gson:2.10.1'
    implementation 'com.mesibo.api:webrtc:1.0.5'
    implementation 'com.google.dagger:hilt-android:2.51'
    kapt "com.google.dagger:hilt-compiler:2.51"
}
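
For Hilt to generate its dependency graph, the Hilt Gradle plugin must be applied and the Application class annotated. A minimal sketch (the class name is illustrative; remember to register it in AndroidManifest.xml):

import android.app.Application
import dagger.hilt.android.HiltAndroidApp

// Triggers Hilt's code generation for the whole app
@HiltAndroidApp
class VideoCallApp : Application()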

Key Components

  • MainService: manages the overall flow of a video call, including initializing the service, handling incoming and outgoing calls, and managing UI elements.
  • MainRepository: acts as the intermediary between the service and the WebRTC/Firebase clients, handling data flow and state management.
  • NSWebRTCClient: a custom client for managing the WebRTC connection, including media streams, ICE candidates, and session descriptions.
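MainService and MainRepository are @Singleton classes with @Inject constructors (shown later), so Hilt can build them automatically. Gson and NSWebRTCClient, however, need explicit providers. A minimal module sketch; the module name, and the assumption that FirebaseClient is also constructor-injected, are illustrative:

import android.content.Context
import com.google.gson.Gson
import dagger.Module
import dagger.Provides
import dagger.hilt.InstallIn
import dagger.hilt.android.qualifiers.ApplicationContext
import dagger.hilt.components.SingletonComponent
import javax.inject.Singleton

// Provides the two dependencies that are not constructor-injected elsewhere:
// Gson (a third-party type) and NSWebRTCClient (constructed with a Context).
@Module
@InstallIn(SingletonComponent::class)
object AppModule {

    @Provides
    @Singleton
    fun provideGson(): Gson = Gson()

    @Provides
    @Singleton
    fun provideWebRTCClient(@ApplicationContext context: Context): NSWebRTCClient =
        NSWebRTCClient(context)
}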

Data Flow

  1. UI interaction:
  • The user interacts with the UI (Activity/Fragment) to start or answer a call.
  2. Service communication:
  • The UI sends a request to MainService to start or manage the call (see the sketch after this list).
  3. Business logic:
  • MainService delegates the request to MainRepository, which handles call setup and updates.
  4. Data handling:
  • MainRepository interacts with WebRTCClient to manage the video call, and with FirebaseClient to synchronize call-related data.
  5. Real-time operations:
  • WebRTCClient manages the real-time video and audio streams.
  • FirebaseClient ensures that call-related events are synchronized and updated between clients in real time.
  6. Updates:
  • MainRepository receives updates from WebRTCClient and FirebaseClient and notifies MainService.
  • MainService updates the UI (Activity/Fragment) according to the latest call state and data.
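
To make steps 1 and 2 concrete, here is a minimal sketch of how a launcher Activity might register for incoming calls and start the service. The activity name, field injection, and the "username" extra are illustrative assumptions:

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import dagger.hilt.android.AndroidEntryPoint
import javax.inject.Inject

@AndroidEntryPoint
class MainActivity : AppCompatActivity(), MainService.IncomingCallListener {

    @Inject lateinit var mainService: MainService

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Register for incoming-call callbacks exposed via MainService's companion object
        MainService.incomingCallListener = this
        // Start the service for the signed-in user; the "username" extra is an assumption
        mainService.startService(intent.getStringExtra("username") ?: return)
    }

    override fun onCallReceived(model: NSDataModel) {
        // Show an incoming-call UI (accept/decline) and, on accept, open the call screen
    }
}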

MainService

MainService manages the application's core functionality, such as starting and stopping the service, setting up views, and handling call events.

@Singleton
class MainService @Inject constructor(private val mainRepository: MainRepository) : MainRepository.MainRepositoryListener {

    companion object {
        var incomingCallListener: IncomingCallListener? = null
        var endCallListener: EndCallListener? = null
        var localSurfaceView: SurfaceViewRenderer? = null
        var remoteSurfaceView: SurfaceViewRenderer? = null
    }

    interface EndCallListener {
        fun onCallEnded()
    }

    interface IncomingCallListener {
        fun onCallReceived(model: NSDataModel)
    }

    /**
     * Initializes the WebRTC client and Firebase client, sets the repository listener,
     * and starts the service for the given username.
     *
     * @param username The username of the current user.
     */
    fun startService(username: String) {
        mainRepository.mainRepositoryListener = this
        CoroutineScope(Dispatchers.IO).launch {
            mainRepository.initWebrtcClient(username)
            mainRepository.initFirebase()
        }
    }

    /**
     * Sets up the views for displaying video streams, initializes the local and remote
     * surface views, and starts the call if not initiated by the caller.
     *
     * @param videoCall Indicates if the call is a video call.
     * @param caller Indicates if the current user is the caller.
     * @param target The target username to connect with.
     */
    fun setupViews(videoCall: Boolean, caller: Boolean, target: String) {
        mainRepository.setTarget(target)
        mainRepository.initLocalSurfaceView(localSurfaceView!!, videoCall)
        mainRepository.initRemoteSurfaceView(remoteSurfaceView!!)
        if (!caller) {
            mainRepository.startCall()
        }
    }

    /**
     * Receives and handles the latest event data, such as incoming call requests.
     *
     * @param data The data model containing event information.
     */
    override fun onLatestEventReceived(data: NSDataModel) {
        if (data.isValid()) {
            when (data.type) {
                NSDataModelType.StartVideoCall, NSDataModelType.StartAudioCall -> {
                    incomingCallListener?.onCallReceived(data)
                }
                else -> Unit
            }
        }
    }

    /**
     * Ends the call and notifies the listener.
     */
    override fun endCall() {
        mainRepository.endCall()
        endCallListener?.onCallEnded()
    }
}
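
The events above travel as NSDataModel objects, a class the article does not show. The following is a plausible minimal sketch inferred from its usage; the exact fields and the isValid() rule are assumptions:

// Event types referenced throughout the article
enum class NSDataModelType {
    StartVideoCall, StartAudioCall, Offer, Answer, IceCandidates, EndCall
}

// Signaling payload exchanged through Firebase; this field set is an assumption
data class NSDataModel(
    val sender: String? = null,
    val target: String? = null,
    val type: NSDataModelType? = null,
    val data: String? = null
) {
    // Plausible validity rule: an event needs at least a type and a sender
    fun isValid(): Boolean = type != null && sender != null
}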

Key methods:

  • startService: initializes the WebRTC client and Firebase, preparing the app to receive and place calls.
  • setupViews: configures the local and remote views that display the video streams (see the sketch after this list).
  • onLatestEventReceived: handles events such as incoming calls so the UI can respond appropriately.
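
For instance, a call screen can hand its renderers to MainService before calling setupViews. A hedged sketch; the layout, view IDs, and intent extras are assumptions, and CAMERA/RECORD_AUDIO runtime permissions must already be granted:

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import dagger.hilt.android.AndroidEntryPoint
import org.webrtc.SurfaceViewRenderer
import javax.inject.Inject

@AndroidEntryPoint
class VideoCallScreen : AppCompatActivity() {

    @Inject lateinit var mainService: MainService

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_video_call) // hypothetical layout

        // MainService reads the renderers from its companion object
        MainService.localSurfaceView = findViewById<SurfaceViewRenderer>(R.id.local_view)
        MainService.remoteSurfaceView = findViewById<SurfaceViewRenderer>(R.id.remote_view)

        // Extras are illustrative; see setupViews for the parameter semantics
        mainService.setupViews(
            videoCall = intent.getBooleanExtra("isVideoCall", true),
            caller = intent.getBooleanExtra("isCaller", false),
            target = intent.getStringExtra("target") ?: return
        )
    }
}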

MainRepository

MainRepository is responsible for interacting with the WebRTC client and Firebase. It contains the logic for call initiation, termination, and state changes.

@Singleton
class MainRepository @Inject constructor(
    private val firebaseClient: FirebaseClient, 
    private val webRTCClient: NSWebRTCClient, 
    private val gson: Gson
) : NSWebRTCClient.Listener {

    private var target: String? = null
    private var remoteView: SurfaceViewRenderer? = null
    var mainRepositoryListener: MainRepositoryListener? = null

    interface MainRepositoryListener {
        fun onLatestEventReceived(data: NSDataModel)
        fun endCall()
    }

    /**
     * Stores the username of the remote peer for subsequent signaling calls.
     * (Assumed helper; called from MainService.setupViews.)
     */
    fun setTarget(target: String) {
        this.target = target
    }

    /**
     * Keeps a reference to the remote renderer so onAddStream can attach the
     * remote video track to it. (Assumed helper; the local-view initializer
     * called from MainService is analogous and delegates renderer setup to
     * the WebRTC client.)
     */
    fun initRemoteSurfaceView(view: SurfaceViewRenderer) {
        remoteView = view
    }

    /**
     * Initializes the WebRTC client with the given username and sets up peer connection observers.
     *
     * @param username The username of the current user.
     */
    fun initWebrtcClient(username: String) {
        webRTCClient.listener = this
        webRTCClient.initializeWebrtcClient(username, object : NSPeerObserver() {
            override fun onAddStream(p0: MediaStream?) {
                // When a remote media stream is added, set it to the remote view
                p0?.videoTracks?.get(0)?.addSink(remoteView)
            }

            override fun onIceCandidate(p0: IceCandidate?) {
                // Send the ICE candidate to the remote peer
                p0?.let {
                    webRTCClient.sendIceCandidate(target!!, it)
                }
            }

            override fun onConnectionChange(newState: PeerConnection.PeerConnectionState?) {
                // Handle connection state changes
                if (newState == PeerConnection.PeerConnectionState.CONNECTED) {
                    firebaseClient.clearLatestEvent()
                }
            }
        })
    }

    /**
     * Initializes the Firebase client to listen for events and handles them based on event type.
     */
    fun initFirebase() {
        firebaseClient.subscribeForLatestEvent { event ->
            mainRepositoryListener?.onLatestEventReceived(event)
            when (event.type) {
                NSDataModelType.Offer -> {
                    webRTCClient.onRemoteSessionReceived(
                        SessionDescription(SessionDescription.Type.OFFER, event.data.toString())
                    )
                    webRTCClient.answer(target!!)
                }
                NSDataModelType.Answer -> {
                    webRTCClient.onRemoteSessionReceived(
                        SessionDescription(SessionDescription.Type.ANSWER, event.data.toString())
                    )
                }
                NSDataModelType.IceCandidates -> {
                    // Parse and add ICE candidates received from the remote peer
                    val candidate = gson.fromJson(event.data.toString(), IceCandidate::class.java)
                    candidate?.let { webRTCClient.addIceCandidateToPeer(it) }
                }
                NSDataModelType.EndCall -> {
                    mainRepositoryListener?.endCall()
                }
                else -> Unit
            }
        }
    }

    /**
     * Initiates a call to the target user.
     */
    fun startCall() {
        webRTCClient.call(target!!)
    }

    /**
     * Ends the current call and cleans up resources.
     */
    fun endCall() {
        webRTCClient.closeConnection()
    }
}
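
FirebaseClient itself is not shown in the article. As an illustration only, here is one plausible way to implement subscribeForLatestEvent and clearLatestEvent on top of the Firebase Realtime Database; the database path, the login helper, and the JSON-string storage format are all assumptions:

import com.google.firebase.database.DataSnapshot
import com.google.firebase.database.DatabaseError
import com.google.firebase.database.FirebaseDatabase
import com.google.firebase.database.ValueEventListener
import com.google.gson.Gson
import javax.inject.Inject
import javax.inject.Singleton

@Singleton
class FirebaseClient @Inject constructor(private val gson: Gson) {

    private val dbRef = FirebaseDatabase.getInstance().reference
    private var currentUsername: String? = null

    // Assumed to be called once when the service starts for the signed-in user
    fun login(username: String) {
        currentUsername = username
    }

    // Listen for the latest signaling event written under this user's node
    fun subscribeForLatestEvent(listener: (NSDataModel) -> Unit) {
        val username = currentUsername ?: return
        dbRef.child("users").child(username).child("latestEvent")
            .addValueEventListener(object : ValueEventListener {
                override fun onDataChange(snapshot: DataSnapshot) {
                    val json = snapshot.getValue(String::class.java) ?: return
                    // Deserialize the JSON payload into the shared data model
                    runCatching { gson.fromJson(json, NSDataModel::class.java) }
                        .getOrNull()?.let(listener)
                }

                override fun onCancelled(error: DatabaseError) = Unit
            })
    }

    // Remove the latest event once the call is connected
    fun clearLatestEvent() {
        val username = currentUsername ?: return
        dbRef.child("users").child(username).child("latestEvent").setValue(null)
    }
}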

NSWebRTCClient

NSWebRTCClient handles WebRTC operations, including media streams and connection management.

For a list of publicly available STUN/TURN servers, see: https://gist.github.com/sagivo/3a4b2f2c7ac6e1b5267c2f1f59ac6c6b
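
For development you can fall back to a public STUN server instead of a paid TURN deployment. A sketch (Google's public STUN server is shown; the TURN entry is a placeholder you must fill in with your own deployment's credentials):

import org.webrtc.PeerConnection

// A STUN server suffices for peers on open networks; a TURN server is needed
// to relay media when direct connectivity fails (symmetric NAT, firewalls).
val iceServers = listOf(
    PeerConnection.IceServer.builder("stun:stun.l.google.com:19302")
        .createIceServer(),
    // Placeholder TURN entry; replace with your own server and credentials
    PeerConnection.IceServer.builder("turn:YOUR_TURN_HOST:3478")
        .setUsername("YOUR_USERNAME")
        .setPassword("YOUR_PASSWORD")
        .createIceServer()
)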

class NSWebRTCClient(private val context: Context) {

    private val peerConnectionFactory: PeerConnectionFactory by lazy { createPeerConnectionFactory() }
    private var peerConnection: PeerConnection? = null
    private val iceServer = listOf(
        PeerConnection.IceServer.builder("YOUR_URL")
            .setUsername("YOUR_USERNAME")
            .setPassword("YOUR_PASSWORD").createIceServer()
    )
    private val localVideoSource by lazy { peerConnectionFactory.createVideoSource(false) }
    private val localAudioSource by lazy { peerConnectionFactory.createAudioSource(MediaConstraints()) }
    private val videoCapturer = getVideoCapturer(context)
    private var localStream: MediaStream? = null
    private var localAudioTrack: AudioTrack? = null
    private var localVideoTrack: VideoTrack? = null

    init {
        // Initialize the PeerConnectionFactory with the application context
        PeerConnectionFactory.initialize(
            PeerConnectionFactory.InitializationOptions.builder(context)
                .setFieldTrials("WebRTC-H264HighProfile/Enabled/")
                .createInitializationOptions()
        )
    }

    /**
     * Initializes the WebRTC client with the specified username and sets up the peer connection.
     *
     * @param username The username of the current user.
     * @param observer The observer for peer connection events.
     */
    fun initializeWebrtcClient(username: String, observer: PeerConnection.Observer) {
        peerConnection = createPeerConnection(observer)
        startLocalStreaming()
    }

    /**
     * Creates a peer connection with the specified observer.
     *
     * @param observer The observer for peer connection events.
     * @return The created peer connection.
     */
    private fun createPeerConnection(observer: PeerConnection.Observer): PeerConnection? {
        return peerConnectionFactory.createPeerConnection(iceServer, observer)
    }

    /**
     * Starts a call by creating an SDP offer and setting it as the local description.
     *
     * @param target The target user to call.
     */
    fun call(target: String) {
        peerConnection?.createOffer(object : MySdpObserver() {
            override fun onCreateSuccess(desc: SessionDescription?) {
                // Set the local description with the created SDP offer
                peerConnection?.setLocalDescription(object : MySdpObserver() {
                    override fun onSetSuccess() {
                        // The offer must now be delivered to the remote peer over the
                        // signaling channel (e.g., via FirebaseClient); omitted here
                    }
                }, desc)
            }
        }, MediaConstraints().apply {
            // Set media constraints for video and audio
            mandatory.add(MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"))
            mandatory.add(MediaConstraints.KeyValuePair("OfferToReceiveAudio", "true"))
        })
    }

    /**
     * Creates an SDP answer in response to an offer from the target user.
     *
     * @param target The target user.
     */
    fun answer(target: String) {
        peerConnection?.createAnswer(object : MySdpObserver() {
            override fun onCreateSuccess(desc: SessionDescription?) {
                // Set the local description with the created SDP answer
                peerConnection?.setLocalDescription(object : MySdpObserver() {
                    override fun onSetSuccess() {
                        // The answer must now be delivered to the caller over the
                        // signaling channel (e.g., via FirebaseClient); omitted here
                    }
                }, desc)
            }
        }, MediaConstraints().apply {
            // Set media constraints for video and audio
            mandatory.add(MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"))
            mandatory.add(MediaConstraints.KeyValuePair("OfferToReceiveAudio", "true"))
        })
    }

    /**
     * Receives and sets the remote session description.
     *
     * @param sessionDescription The remote session description.
     */
    fun onRemoteSessionReceived(sessionDescription: SessionDescription) {
        peerConnection?.setRemoteDescription(MySdpObserver(), sessionDescription)
    }

    /**
     * Adds an ICE candidate to the peer connection.
     *
     * @param iceCandidate The ICE candidate to be added.
     */
    fun addIceCandidateToPeer(iceCandidate: IceCandidate) {
        peerConnection?.addIceCandidate(iceCandidate)
    }

    /**
     * Sends an ICE candidate to the target user.
     *
     * @param target The target user.
     * @param iceCandidate The ICE candidate to be sent.
     */
    fun sendIceCandidate(target: String, iceCandidate: IceCandidate) {
        addIceCandidateToPeer(iceCandidate)
        // Besides applying the candidate locally, it must be serialized and
        // forwarded to the target user over the signaling channel
        // (e.g., via FirebaseClient); that send step is omitted here.
    }

    /**
     * Closes the peer connection and releases resources.
     */
    fun closeConnection() {
        try {
            videoCapturer.stopCapture() // stop the camera before tearing down
        } catch (e: InterruptedException) {
            Thread.currentThread().interrupt()
        }
        localStream?.dispose()
        peerConnection?.close()
    }

    /**
     * Starts local media streaming by creating a local media stream and adding video and audio tracks.
     */
    private fun startLocalStreaming() {
        localStream = peerConnectionFactory.createLocalMediaStream("localStream")
        localAudioTrack = peerConnectionFactory.createAudioTrack("localAudio", localAudioSource)
        localStream?.addTrack(localAudioTrack)

        val surfaceTextureHelper = SurfaceTextureHelper.create(Thread.currentThread().name, EglBase.create().eglBaseContext)
        videoCapturer.initialize(surfaceTextureHelper, context, localVideoSource.capturerObserver)
        videoCapturer.startCapture(720, 480, 30)

        localVideoTrack = peerConnectionFactory.createVideoTrack("localVideo", localVideoSource)
        localStream?.addTrack(localVideoTrack)

        // Attach the local stream to the peer connection so it is sent to the remote peer
        peerConnection?.addStream(localStream)
    }

    /**
     * Retrieves a video capturer suitable for the platform.
     *
     * @param context The application context.
     * @return The video capturer.
     */
    private fun getVideoCapturer(context: Context): VideoCapturer {
        // Implementation to retrieve a video capturer (e.g., using Camera1 or Camera2 API)
        // For simplicity, this is left as a placeholder
        throw NotImplementedError("Video capturer retrieval is not implemented.")
    }
}
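
The article leaves getVideoCapturer as a placeholder. A common approach, sketched below, uses Camera2Enumerator to select the front-facing camera; treat it as one possible implementation rather than the author's:

import android.content.Context
import org.webrtc.Camera2Enumerator
import org.webrtc.CameraVideoCapturer

// Picks the first front-facing camera via the Camera2 API and creates a capturer.
// Throws if no front camera exists; adjust the selection logic for your devices.
private fun getVideoCapturer(context: Context): CameraVideoCapturer {
    val enumerator = Camera2Enumerator(context)
    val frontCamera = enumerator.deviceNames.firstOrNull { enumerator.isFrontFacing(it) }
        ?: throw IllegalStateException("No front-facing camera found")
    return enumerator.createCapturer(frontCamera, null)
}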

Summary

This article walked through building a video calling application with Kotlin, WebRTC, and Firebase, with an emphasis on modular, maintainable code design. By implementing the MainService, MainRepository, and NSWebRTCClient classes, you can manage video call functionality and interactions efficiently. The code samples and comments are intended to clarify the purpose and behavior of each part of the application, making it easier to understand and to implement a robust video calling solution.

Author: Enes Algan

This article was contributed by the author, and the copyright belongs to the original author. For reprints, please credit the source: https://www.nxrte.com/jishu/webrtc/51400.html
