
(Banner image: the LiveKit icon, the repository name, and some sample code in the background.)

livekit-react-native

Use this SDK to add real-time video, audio and data features to your React Native app. By connecting to a self- or cloud-hosted LiveKit server, you can quickly build applications like interactive live streaming or video calls with just a few lines of code.

Installation

NPM

npm install @livekit/react-native @livekit/react-native-webrtc

Yarn

yarn add @livekit/react-native @livekit/react-native-webrtc

This library depends on @livekit/react-native-webrtc, which has additional platform-specific installation instructions of its own; follow those before continuing.

Once the @livekit/react-native-webrtc dependency is installed, one last step is needed to finish the installation:

Android

In your MainApplication.java file:

Java

import com.livekit.reactnative.LiveKitReactNative;
import com.livekit.reactnative.audio.AudioType;

public class MainApplication extends Application implements ReactApplication {

  @Override
  public void onCreate() {
    // Place this above any other RN related initialization
    // When AudioType is omitted, it'll default to CommunicationAudioType.
    // Use MediaAudioType if user is only consuming audio, and not publishing.
    LiveKitReactNative.setup(this, new AudioType.CommunicationAudioType());

    //...
  }
}

Or, in your MainApplication.kt file if you are using RN 0.73+:

Kotlin

import com.livekit.reactnative.LiveKitReactNative
import com.livekit.reactnative.audio.AudioType

class MainApplication : Application(), ReactApplication {
  override fun onCreate() {
    // Place this above any other RN related initialization
    // When AudioType is omitted, it'll default to CommunicationAudioType.
    // Use MediaAudioType if user is only consuming audio, and not publishing.
    LiveKitReactNative.setup(this, AudioType.CommunicationAudioType())

    //...
  }
}

iOS

In your AppDelegate.m file:

Objective-C

#import "LivekitReactNative.h"

@implementation AppDelegate

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
  // Place this above any other RN related initialization
  [LivekitReactNative setup];

  //...
}

Expo

LiveKit is available on Expo through development builds. See the instructions found here.
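
As a rough sketch only, an Expo development build registers the required config plugins in app.config.ts (or the equivalent plugins entry in app.json). The plugin package names below, @livekit/react-native-expo-plugin and @config-plugins/react-native-webrtc, are assumptions; verify them against the linked instructions.

// app.config.ts -- plugin names are assumptions, check the Expo instructions.
export default {
  expo: {
    name: 'my-livekit-app',
    slug: 'my-livekit-app',
    plugins: [
      '@livekit/react-native-expo-plugin',
      '@config-plugins/react-native-webrtc',
    ],
  },
};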

Example app

We've included an example app that you can try out.

Usage

In your index.js file, set up the LiveKit SDK by calling registerGlobals(). This sets up the required WebRTC libraries for use in JavaScript, and is needed for LiveKit to work.

import { registerGlobals } from '@livekit/react-native';

// ...

registerGlobals();

A Room object can then be created and connected to.

import { useEffect, useState } from 'react';
import { Room, Track } from 'livekit-client';
import { useRoom, AudioSession, VideoView } from '@livekit/react-native';

/*...*/

// Create a room state
const [room] = useState(() => new Room());

// Get the participants from the room
const { participants } = useRoom(room);

useEffect(() => {
  let connect = async () => {
    await AudioSession.startAudioSession();
    await room.connect(url, token, {});
    console.log('connected to ', url, ' ', token);
  };
  connect();
  return () => {
    room.disconnect();
    AudioSession.stopAudioSession();
  };
}, [url, token, room]);

const videoView = participants.length > 0 && (
  <VideoView
    style={{ flex: 1, width: '100%' }}
    videoTrack={participants[0].getTrack(Track.Source.Camera)?.videoTrack}
  />
);

API documentation is located here.

Additional documentation for the LiveKit SDK can be found at https://docs.livekit.io/references/client-sdks/

Audio sessions

As seen in the example above, we've introduced an AudioSession class that helps manage the audio session on native platforms. It wraps AudioManager on Android and AVAudioSession on iOS.

You can customize the configuration of the audio session with configureAudio.

Android

Media playback

By default, the audio session is set up for bidirectional communication. In this mode, the audio framework exhibits the following behaviors:

  • The volume cannot be reduced to 0.
  • Echo cancellation is available and is enabled by default.
  • A microphone indicator can be displayed, depending on the platform.

If you're leveraging LiveKit primarily for media playback, you have the option to reconfigure the audio session to better suit media playback. Here's how:

useEffect(() => {
  let connect = async () => {
    // configure audio session prior to starting it.
    await AudioSession.configureAudio({
      android: {
        // currently supports .media and .communication presets
        audioTypeOptions: AndroidAudioTypePresets.media,
      },
    });
    await AudioSession.startAudioSession();
    await room.connect(url, token, {});
  };
  connect();
  return () => {
    room.disconnect();
    AudioSession.stopAudioSession();
  };
}, [url, token, room]);

Customizing audio session

Instead of using our presets, you can further customize the audio session to suit your specific needs.

await AudioSession.configureAudio({
  android: {
    preferredOutputList: ['earpiece'],
    // See [AudioManager](https://developer.android.com/reference/android/media/AudioManager)
    // for details on audio and focus modes.
    audioTypeOptions: {
      manageAudioFocus: true,
      audioMode: 'normal',
      audioFocusMode: 'gain',
      audioStreamType: 'music',
      audioAttributesUsageType: 'media',
      audioAttributesContentType: 'unknown',
    },
  },
});
await AudioSession.startAudioSession();

iOS

For iOS, the most appropriate audio configuration may change over time as local and remote audio tracks are published and unpublished in the room. To adapt to this, the useIOSAudioManagement hook is advised over configuring the audio session only once for the entire session.
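
A minimal sketch, assuming the hook is called from a component that has access to the Room instance:

import { useIOSAudioManagement } from '@livekit/react-native';

/*...*/

// Keeps the iOS audio session configuration in sync as audio tracks
// are published and unpublished in the room.
useIOSAudioManagement(room);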

Screenshare

Enabling screenshare requires extra installation steps:

Android

Android screenshare requires a foreground service with type mediaProjection to be present.

The example app uses @voximplant/react-native-foreground-service for this. Ensure that the service is declared with the mediaProjection foreground service type in your AndroidManifest.xml, like so:

<service android:name="com.voximplant.foregroundservice.VIForegroundService" android:foregroundServiceType="mediaProjection" />

Once set up, start the foreground service prior to using screenshare, as sketched below.
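
A rough sketch of that flow follows; the notification channel and notification fields are illustrative, and the exact @voximplant/react-native-foreground-service API may vary between versions.

import VIForegroundService from '@voximplant/react-native-foreground-service';

const startScreenShare = async () => {
  // Android 8+ requires a notification channel for the foreground notification.
  await VIForegroundService.getInstance().createNotificationChannel({
    id: 'screen_capture', // illustrative channel id
    name: 'Screen Capture',
    enableVibration: false,
  });

  // Start the mediaProjection foreground service before capturing.
  await VIForegroundService.getInstance().startService({
    channelId: 'screen_capture',
    id: 1, // illustrative notification id
    title: 'Screen sharing',
    text: 'Sharing your screen',
    icon: 'ic_launcher',
  });

  // With the service running, enable screen share on the local participant.
  await room.localParticipant.setScreenShareEnabled(true);
};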

iOS

iOS screenshare requires adding a Broadcast Extension to your iOS project. Follow the integration instructions here:

https://jitsi.github.io/handbook/docs/dev-guide/dev-guide-ios-sdk/#screen-sharing-integration

It involves copying the files found in this sample project to your iOS project, and registering a Broadcast Extension in Xcode.

It's also recommended to use CallKeep to register a call with CallKit (as well as turning on the voip background mode). Due to background app processing limitations, screen recording may be interrupted if the app is restricted in the background. Registering with CallKit allows the app to continue processing for the duration of the call.
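
A hedged sketch of that setup with react-native-callkeep (option values and the call handle below are illustrative, and CallKeep requires its own native setup):

import RNCallKeep from 'react-native-callkeep';

// One-time setup, e.g. at app startup.
RNCallKeep.setup({
  ios: { appName: 'MyLiveKitApp' },
  android: {
    alertTitle: 'Permissions required',
    alertDescription: 'This app needs access to your phone accounts',
    cancelButton: 'Cancel',
    okButton: 'OK',
  },
});

// When joining the room, register the call with CallKit so the app keeps
// processing in the background for the duration of the call.
const callUUID = '2d2e8a6c-0000-0000-0000-000000000000'; // generate a real UUID per call
RNCallKeep.startCall(callUUID, 'livekit-room', 'LiveKit Room');

// When leaving the room:
RNCallKeep.endCall(callUUID);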

Once set up, iOS screenshare can be initiated like so:

import * as React from 'react';
import { findNodeHandle, NativeModules, Platform, View } from 'react-native';
import { ScreenCapturePickerView } from '@livekit/react-native-webrtc';

/*...*/

const screenCaptureRef = React.useRef(null);
const screenCapturePickerView = Platform.OS === 'ios' && (
  <ScreenCapturePickerView ref={screenCaptureRef} />
);
const startBroadcast = async () => {
  if (Platform.OS === 'ios') {
    // Show the iOS broadcast picker, then enable screen share.
    const reactTag = findNodeHandle(screenCaptureRef.current);
    await NativeModules.ScreenCapturePickerViewManager.show(reactTag);
    room.localParticipant.setScreenShareEnabled(true);
  } else {
    room.localParticipant.setScreenShareEnabled(true);
  }
};

return (
  <View style={styles.container}>
    {/* ... */}
    {/* Make sure the ScreenCapturePickerView exists in the view tree. */}
    {screenCapturePickerView}
  </View>
);

Note

You will not be able to publish camera or microphone tracks on iOS Simulator.

Troubleshooting

Cannot read properties of undefined (reading 'split')

This error can occur if you are using Yarn and your lockfile contains duplicate or incompatible versions of livekit-client's dependencies.

To fix this, you can either:

  • use another package manager, like npm
  • use yarn-deduplicate to deduplicate dependencies (see the commands below)
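
With Yarn v1, the deduplication step typically looks like this (a sketch assuming the yarn-deduplicate CLI is run via npx):

npx yarn-deduplicate yarn.lock
yarn install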

Contributing

See the contributing guide to learn how to contribute to the repository and the development workflow.

License

Apache License 2.0


LiveKit Ecosystem
Real-time SDKs: React Components · JavaScript · iOS/macOS · Android · Flutter · React Native · Rust · Python · Unity (web) · Unity (beta)
Server APIs: Node.js · Golang · Ruby · Java/Kotlin · Python · Rust · PHP (community)
Agents Frameworks: Python · Playground
Services: LiveKit server · Egress · Ingress · SIP
Resources: Docs · Example apps · Cloud · Self-hosting · CLI
