1. Add an endpoint to the controller that returns byte[]
Set produces = "application/octet-stream"
Set the return type to ResponseEntity<byte[]>
@PostMapping(value = "/v/voice", produces = "application/octet-stream")
public ResponseEntity<byte[]> voice(@RequestBody JSONObject param, HttpServletResponse response) throws IOException {
    String text = param.getString("text");
    // Call the Alibaba Cloud API to convert the text to speech
    byte[] voice = SpeechRestfulUtil.text2voice(text);
    // Return the audio as byte[]
    return ResponseEntity.ok().body(voice);
}
This example calls the Alibaba Cloud TTS API to convert text to speech.
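The article does not show `SpeechRestfulUtil.text2voice` itself. Below is a minimal sketch of what such a helper could look like, assuming Alibaba Cloud's RESTful TTS gateway and a pre-fetched access token; the endpoint URL, the `appkey`/`token` parameters, and the request-body fields are assumptions based on that service, not code from the original article:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TtsClientSketch {

    // Builds the JSON body for the (assumed) RESTful TTS call.
    static String buildRequestBody(String appkey, String token, String text) {
        return "{\"appkey\":\"" + appkey + "\",\"token\":\"" + token
                + "\",\"text\":\"" + text + "\",\"format\":\"wav\",\"sample_rate\":16000}";
    }

    // Hypothetical stand-in for SpeechRestfulUtil.text2voice: POSTs the text
    // and returns the raw audio bytes from the response body.
    static byte[] text2voice(String appkey, String token, String text) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://nls-gateway.cn-shanghai.aliyuncs.com/stream/v1/tts"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(buildRequestBody(appkey, token, text)))
                .build();
        HttpResponse<byte[]> resp = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofByteArray());
        return resp.body();
    }
}
```

In a real project the token is obtained separately from the cloud provider's auth API and the audio format should match what the frontend decoder expects (WAV or MP3 both work with `decodeAudioData`).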
2. Register a converter in configureMessageConverters
ByteArrayHttpMessageConverter
@Override
public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
    MappingJackson2HttpMessageConverter jackson2HttpMessageConverter =
            new MappingJackson2HttpMessageConverter(objectMapper());
    converters.add(jackson2HttpMessageConverter);
    converters.add(new ByteArrayHttpMessageConverter());
}
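The ByteArrayHttpMessageConverter matters because without it the byte[] return value falls through to the Jackson converter, whose default is to serialize byte arrays as Base64-encoded JSON strings rather than raw bytes. A small illustration of what the client would then receive, using the JDK's Base64 encoder to mimic that default:

```java
import java.util.Base64;

public class ByteArrayJsonDemo {

    // Mimics Jackson's default handling of byte[]: a quoted Base64 string,
    // which the browser cannot decode as audio.
    static String asJacksonWould(byte[] audio) {
        return "\"" + Base64.getEncoder().encodeToString(audio) + "\"";
    }
}
```

With ByteArrayHttpMessageConverter registered, the raw bytes are written straight to the response body instead, matching the application/octet-stream content type declared on the endpoint.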
3. Call the backend endpoint with axios, setting responseType = 'blob'
1) Get the browser's AudioContext playback object
2) Read the returned byte[] with a FileReader
3) Bind the FileReader's load event, and play the audio once the byte[] has been read
function doVoice() {
    axios({
        method: 'post',
        url: req.voice,
        responseType: 'blob',
        data: { text: data.info } // text to be spoken
    }).then(function (response) {
        console.log(response);
        if (response.status === 200) {
            // 1) Get the browser's AudioContext playback object
            let audioContext = new (window.AudioContext || window.webkitAudioContext)();
            let reader = new FileReader();
            reader.onload = function (evt) {
                if (evt.target.readyState === FileReader.DONE) {
                    // 3) Play the audio once the byte[] has been read
                    audioContext.decodeAudioData(evt.target.result, function (buffer) {
                        // Decode into a PCM stream
                        let audioBufferSourceNode = audioContext.createBufferSource();
                        audioBufferSourceNode.buffer = buffer;
                        audioBufferSourceNode.connect(audioContext.destination);
                        audioBufferSourceNode.start(0);
                    }, function (e) {
                        console.log(e);
                    });
                }
            };
            // 2) Read the returned byte[] with a FileReader
            reader.readAsArrayBuffer(response.data);
        }
    }).catch(function (error) {
        // handle error
        console.log(error);
    }).finally(function () {
        // always executed
    });
}