Adding Voice Search to a React Application
Voice commands aren't just for virtual assistants like Google Assistant or Alexa. They can significantly enhance mobile and desktop applications, adding both functionality and a fun user experience. Integrating voice commands or voice search is surprisingly straightforward. This article demonstrates building a voice-controlled book search application using the Web Speech API within a React framework.
The complete code is available on GitHub, and a working demo is provided at the end.
Key Concepts:
- Building a reusable custom React hook (useVoice) to encapsulate voice recognition logic.
- Building a second custom hook (useBookFetch) that interacts with an external API (Open Library) to retrieve data based on voice input.

Web Speech API Introduction:
The Web Speech API has limited browser support. Ensure you're using a compatible browser (check MDN for up-to-date compatibility information).
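Since support varies, it's worth feature-detecting before using the API. A minimal check might look like this (Chrome and Safari expose the prefixed webkitSpeechRecognition; the unprefixed name is checked as well for safety):

```javascript
// Grab whichever constructor the browser provides, if any
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;

if (!SpeechRecognition) {
  console.warn("Speech recognition is not supported in this browser.");
}
```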
A simple example of using the Web Speech API:
<code class="language-javascript">const SpeechRecognition = webkitSpeechRecognition; const speech = new SpeechRecognition(); speech.onresult = (event) => { console.log(event); }; speech.start();</code>
This code instantiates SpeechRecognition, adds an onresult event listener to handle speech transcription, and starts listening. The browser will request microphone access; once you finish speaking, onresult provides the transcribed text.
The onresult event delivers a SpeechRecognitionEvent object containing a results list. Each entry in that list is a recognition result, and the result's first alternative holds the transcribed text.
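For example, pulling the transcript out of the event looks like this:

```javascript
speech.onresult = (event) => {
  // event.results is a list of results; each result holds one or
  // more alternatives, and each alternative carries a transcript
  const transcript = event.results[0][0].transcript;
  console.log(transcript);
};
```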
This basic code can run in Chrome DevTools or a JavaScript file. Let's integrate this into a React application.
Using Web Speech in React:
Create a new React project:
<code class="language-bash">npx create-react-app book-voice-search cd book-voice-search npm start</code>
Replace the default App.js with the following, which incorporates the Web Speech API:
<code class="language-javascript">// App.js import React, { useState, useEffect } from "react"; import "./index.css"; import Mic from "./microphone-black-shape.svg"; // Import your microphone image let speech; if (window.webkitSpeechRecognition) { const SpeechRecognition = webkitSpeechRecognition; speech = new SpeechRecognition(); speech.continuous = true; // Enable continuous listening } else { speech = null; } const App = () => { const [isListening, setIsListening] = useState(false); const [text, setText] = useState(""); const listen = () => { setIsListening(!isListening); if (isListening) { speech.stop(); } else { speech.start(); } }; useEffect(() => { if (!speech) return; speech.onresult = (event) => { setText(event.results[event.results.length - 1][0].transcript); }; }, []); // ... (rest of the component remains the same) }; export default App;</code>
This enhanced component manages the listening state (isListening), stores the transcribed text (text), and handles the microphone click event (listen). The useEffect hook sets up the onresult listener; because continuous listening is enabled, the last entry in event.results holds the most recent phrase.
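The render elided above might look something like this minimal sketch (the CSS class names and layout here are placeholders, not prescribed by the original):

```javascript
return (
  <div className="app">
    <h2>Book Voice Search</h2>
    {speech ? (
      <>
        {/* Clicking the microphone toggles listening on and off */}
        <img
          className={isListening ? "mic listening" : "mic"}
          src={Mic}
          alt="Start listening"
          onClick={listen}
        />
        <p>{text}</p>
      </>
    ) : (
      <p>Voice recognition is not supported in this browser.</p>
    )}
  </div>
);
```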
Reusable Custom React Voice Hook:
To improve code reusability, extract this logic into a custom hook, useVoice.js:
<code class="language-javascript">const SpeechRecognition = webkitSpeechRecognition; const speech = new SpeechRecognition(); speech.onresult = (event) => { console.log(event); }; speech.start();</code>
This hook encapsulates the voice recognition logic. Now, update App.js to consume it; a minimal version might look like this:
<code class="language-bash">npx create-react-app book-voice-search cd book-voice-search npm start</code>
This simplifies App.js and promotes code reuse.
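Because the hook is self-contained, it can be dropped into any other component. For instance, a hypothetical dictation component (not part of this app) could consume it the same way:

```javascript
// VoiceNote.js (hypothetical example of reusing the hook)
import React from "react";
import { useVoice } from "./useVoice";

const VoiceNote = () => {
  const { text, isListening, listen, voiceSupported } = useVoice();

  if (!voiceSupported) return <p>Voice recognition unavailable.</p>;

  return (
    <div>
      <button onClick={listen}>
        {isListening ? "Stop dictating" : "Start dictating"}
      </button>
      <textarea value={text} readOnly />
    </div>
  );
};

export default VoiceNote;
```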
Book Voice Search Functionality:
Create another custom hook, useBookFetch.js, to handle the book search. A sketch against Open Library's public search API might look like this (the state names and loading logic here are illustrative assumptions):
<code class="language-javascript">// App.js import React, { useState, useEffect } from "react"; import "./index.css"; import Mic from "./microphone-black-shape.svg"; // Import your microphone image let speech; if (window.webkitSpeechRecognition) { const SpeechRecognition = webkitSpeechRecognition; speech = new SpeechRecognition(); speech.continuous = true; // Enable continuous listening } else { speech = null; } const App = () => { const [isListening, setIsListening] = useState(false); const [text, setText] = useState(""); const listen = () => { setIsListening(!isListening); if (isListening) { speech.stop(); } else { speech.start(); } }; useEffect(() => { if (!speech) return; speech.onresult = (event) => { setText(event.results[event.results.length - 1][0].transcript); }; }, []); // ... (rest of the component remains the same) }; export default App;</code>
This hook fetches book data from Open Library based on the author's name the user spoke. Finally, integrate both hooks into App.js to display the search results; a minimal version might be:
<code class="language-javascript">// useVoice.js import { useState, useEffect } from 'react'; // ... (SpeechRecognition setup remains the same) const useVoice = () => { // ... (state and listen function remain the same) useEffect(() => { // ... (onresult event listener remains the same) }, []); return { text, isListening, listen, voiceSupported: speech !== null }; }; export { useVoice };</code>
This completes the voice-controlled book search application.
Demo:
[Insert CodeSandbox or similar demo link here]
Conclusion:
This example showcases the power and simplicity of the Web Speech API for adding voice interaction to React applications. Keep in mind the API's limited browser support and potential transcription accuracy issues. The full code is available on GitHub.