
Voice

:microphone: React Native Voice Recognition library for iOS and Android (Online and Offline Support)

Install / Use

/learn @react-native-voice/voice
README

⚠️ This package is deprecated and archived.

Please use this actively maintained alternative:


<h1 align="center">React Native Voice</h1> <p align="center">A speech-to-text library for <a href="https://reactnative.dev/">React Native.</a></p> <a href="https://discord.gg/CJHKVeW6sp"> <img src="https://img.shields.io/discord/764994995098615828?label=Discord&logo=Discord&style=for-the-badge" alt="chat on Discord"></a>
yarn add @react-native-voice/voice

# or

npm i @react-native-voice/voice --save

Link the iOS package

npx pod-install


<h2 align="center">Linking</h2> <p align="center">Manually or automatically link the NativeModule</p>
react-native link @react-native-voice/voice

Manually Link Android

  • In android/settings.gradle
...
include ':@react-native-voice_voice', ':app'
project(':@react-native-voice_voice').projectDir = new File(rootProject.projectDir, '../node_modules/@react-native-voice/voice/android')
  • In android/app/build.gradle
...
dependencies {
    ...
    implementation project(':@react-native-voice_voice')
}
  • In MainApplication.java

import android.app.Application;
import com.facebook.react.ReactApplication;
import com.facebook.react.ReactPackage;
...
import com.wenkesj.voice.VoicePackage; // <------ Add this!
...

public class MainApplication extends Application implements ReactApplication {
...
    @Override
    protected List<ReactPackage> getPackages() {
      return Arrays.<ReactPackage>asList(
        new MainReactPackage(),
        new VoicePackage() // <------ Add this!
        );
    }
}

Manually Link iOS

  • Drag Voice.xcodeproj from the @react-native-voice/voice/ios folder into the Libraries group of your project in Xcode.

  • Click on your main project file (the one that represents the .xcodeproj) select Build Phases and drag the static library, lib.Voice.a, from the Libraries/Voice.xcodeproj/Products folder to Link Binary With Libraries

<h2 align="center">Prebuild Plugin</h2>

This package cannot be used in the "Expo Go" app because it requires custom native code.

After installing this npm package, add the config plugin to the plugins array of your app.json or app.config.js:

{
  "expo": {
    "plugins": ["@react-native-voice/voice"]
  }
}

Next, rebuild your app as described in the "Adding custom native code" guide.

Props

The plugin provides props for extra customization. Every time you change the props or plugins, you'll need to re-run prebuild and rebuild the native app. If no extra properties are added, defaults will be used.

  • speechRecognition (string | false): Sets the message for the NSSpeechRecognitionUsageDescription key in the Info.plist. When undefined, a default permission message will be used. When false, the permission will be skipped.
  • microphone (string | false): Sets the message for the NSMicrophoneUsageDescription key in the Info.plist. When undefined, a default permission message will be used. When false, the android.permission.RECORD_AUDIO will not be added to the AndroidManifest.xml and the iOS permission will be skipped.

Example

{
  "plugins": [
    [
      "@react-native-voice/voice",
      {
        "microphonePermission": "CUSTOM: Allow $(PRODUCT_NAME) to access the microphone",
        "speechRecognitionPermission": "CUSTOM: Allow $(PRODUCT_NAME) to securely recognize user speech"
      }
    ]
  ]
}
<h2 align="center">Usage</h2> <p align="center"><a href="https://github.com/react-native-voice/voice/blob/master/example/src/VoiceTest.tsx">Full example for Android and iOS.</a></p>

Example

import Voice from '@react-native-voice/voice';
import React, {Component} from 'react';

class VoiceTest extends Component {
  constructor(props) {
    super(props);
    Voice.onSpeechStart = this.onSpeechStartHandler.bind(this);
    Voice.onSpeechEnd = this.onSpeechEndHandler.bind(this);
    Voice.onSpeechResults = this.onSpeechResultsHandler.bind(this);
  }
  componentWillUnmount() {
    // Release the native recognizer and clear the listeners set above.
    Voice.destroy().then(Voice.removeAllListeners);
  }
  onStartButtonPress(e) {
    Voice.start('en-US');
  }
  ...
}
<h2 align="center">API</h2> <p align="center">Static access to the Voice API.</p>

All methods now return a new Promise for async/await compatibility.

| Method Name | Description | Platform |
| --- | --- | --- |
| Voice.isAvailable() | Checks whether a speech recognition service is available on the system. | Android, iOS |
| Voice.start(locale) | Starts listening for speech for a specific locale. Returns null if no error occurs. | Android, iOS |
| Voice.stop() | Stops listening for speech. Returns null if no error occurs. | Android, iOS |
| Voice.cancel() | Cancels the speech recognition. Returns null if no error occurs. | Android, iOS |
| Voice.destroy() | Destroys the current SpeechRecognizer instance. Returns null if no error occurs. | Android, iOS |
| Voice.removeAllListeners() | Cleans/nullifies overridden Voice static methods. | Android, iOS |
| Voice.isRecognizing() | Returns whether the SpeechRecognizer is currently recognizing. | Android, iOS |
| Voice.getSpeechRecognitionServices() | Returns a list of the speech recognition engines available on the device. (Example: ['com.google.android.googlequicksearchbox'] if Google is the only one available.) | Android |
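Since every method returns a promise, a recognition session reads naturally with async/await. Below is a minimal sketch of that flow; a hypothetical in-memory stub stands in for the native module (the real one is the default export of '@react-native-voice/voice') so the sequence can be followed outside a React Native app:

```typescript
// The subset of the Voice API used in this sketch.
interface VoiceLike {
  isAvailable(): Promise<boolean>;
  start(locale: string): Promise<void>;
  stop(): Promise<void>;
  destroy(): Promise<void>;
}

// Hypothetical stub standing in for the native module.
const stub: VoiceLike = {
  isAvailable: async () => true,
  start: async (_locale: string) => {},
  stop: async () => {},
  destroy: async () => {},
};

// One start/stop/destroy cycle; resolves false when no recognizer exists.
async function recognizeOnce(voice: VoiceLike, locale: string): Promise<boolean> {
  if (!(await voice.isAvailable())) return false; // no recognition service on this device
  await voice.start(locale); // begins emitting the onSpeech* events
  await voice.stop();        // stop listening; results arrive via onSpeechResults
  await voice.destroy();     // release the native recognizer instance
  return true;
}
```

In an app you would call these on the imported Voice object inside component lifecycle handlers, as the usage example above shows.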

<h2 align="center">Events</h2> <p align="center">Callbacks that are invoked when a native event is emitted.</p>

| Event Name | Description | Event | Platform |
| --- | --- | --- | --- |
| Voice.onSpeechStart(event) | Invoked when .start() is called without error. | { error: false } | Android, iOS |
| Voice.onSpeechRecognized(event) | Invoked when speech is recognized. | { error: false } | Android, iOS |
| Voice.onSpeechEnd(event) | Invoked when the SpeechRecognizer stops recognition. | { error: false } | Android, iOS |
| Voice.onSpeechError(event) | Invoked when an error occurs. | { error: Description of error as string } | Android, iOS |
| Voice.onSpeechResults(event) | Invoked when the SpeechRecognizer has finished recognizing. | { value: [..., 'Speech recognized'] } | Android, iOS |
| Voice.onSpeechPartialResults(event) | Invoked when any interim results are computed. | { value: [..., 'Partial speech recognized'] } | Android, iOS |
| Voice.onSpeechVolumeChanged(event) | Invoked when the recognized pitch changes. | { value: pitch in dB } | Android |
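The results events carry candidate transcripts in a `value` array, which Android's recognizer commonly orders with the most confident candidate first. A small helper along these lines (illustrative only, not part of the library API) keeps handler code tidy:

```typescript
// Shape of the onSpeechResults / onSpeechPartialResults payload.
type SpeechResultsEvent = { value?: string[] };

// Returns the top candidate transcript, or null when the event carries none.
function bestTranscript(e: SpeechResultsEvent): string | null {
  return e.value && e.value.length > 0 ? e.value[0] : null;
}
```

A handler assigned to Voice.onSpeechResults could then do `const text = bestTranscript(event);` and ignore the lower-ranked alternatives.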

<h2 align="center">Permissions</h2> <p align="center">Arguably the most important part.</p>

Android

While the included VoiceTest app works without explicit permission checks and requests, it may be necessary to add a permission request for RECORD_AUDIO in some configurations. Since Android M (6.0), users need to grant permissions at runtime rather than at install time. By default, calling the startSpeech method shows the RECORD_AUDIO permission popup to the user. This can be disabled by passing REQUEST_PERMISSIONS_AUTO: true in the options argument.
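The runtime-versus-install-time distinction boils down to a single platform check. The helper below is an illustrative sketch (an assumption, not part of this library) of when a runtime RECORD_AUDIO request applies:

```typescript
// Runtime permission requests apply on Android starting with Android M
// (API level 23); earlier Android versions grant permissions at install
// time, and iOS microphone access is handled by its own prompt.
function needsRuntimeAudioPermission(platform: string, apiLevel: number): boolean {
  return platform === 'android' && apiLevel >= 23;
}
```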

If you're running an ejected Expo/ExpoKit app, you may run into permission issues on Android and see the following error: "host.exp.exponent.MainActivity cannot be cast to com.facebook.react.ReactActivity startSpeech". This can be resolved by prompting for permission with the expo-permissions package before starting recognition.

import { Permissions } from "expo";

async componentDidMount() {
  // Ask for microphone access up front, before calling Voice.start().
  const { status } = await Permissions.askAsync(Permissions.AUDIO_RECORDING);
  if (status !== "granted") {
    // Recognition cannot start without the audio permission.
    return;
  }
}