Android speech recognition: passing data back to Xamarin Forms
I'm really stuck right now, and I'm new to Xamarin. I'm building an app with a speech-recognition feature in Xamarin Forms.
I created a simple UI with a button and an Entry.
Working:
- pressing the button opens the speech-recognition popup
- the spoken words are read into a var

Not working:
- passing the data back to the Xamarin Forms UI (the Entry)
StartPage.xaml.cs:
private void BtnRecord_OnClicked(object sender, EventArgs e)
{
    WaitForSpeechToText();
}

private async void WaitForSpeechToText()
{
    EntrySpeech.Text = await DependencyService.Get<ISpeechToText>().SpeechToTextAsync();
}
ISpeechToText.cs:
public interface ISpeechToText
{
    Task<string> SpeechToTextAsync();
}
This calls into the native implementation.
SpeechToText_Android.cs:
public class SpeechToText_Android : ISpeechToText
{
    private const int VOICE = 10;

    public SpeechToText_Android() { }

    public Task<string> SpeechToTextAsync()
    {
        var tcs = new TaskCompletionSource<string>();
        try
        {
            var voiceIntent = new Intent(RecognizerIntent.ActionRecognizeSpeech);
            voiceIntent.PutExtra(RecognizerIntent.ExtraLanguageModel, RecognizerIntent.LanguageModelFreeForm);
            voiceIntent.PutExtra(RecognizerIntent.ExtraPrompt, "Sprechen Sie jetzt");
            voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputCompleteSilenceLengthMillis, 1500);
            voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputPossiblyCompleteSilenceLengthMillis, 1500);
            voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputMinimumLengthMillis, 15000);
            voiceIntent.PutExtra(RecognizerIntent.ExtraMaxResults, 1);
            voiceIntent.PutExtra(RecognizerIntent.ExtraLanguage, Java.Util.Locale.Default);
            try
            {
                ((Activity)Forms.Context).StartActivityForResult(voiceIntent, VOICE);
            }
            catch (ActivityNotFoundException a)
            {
                tcs.SetResult("Device doesn't support speech to text");
            }
        }
        catch (Exception ex)
        {
            tcs.SetException(ex);
        }
        return tcs.Task;
    }
}
MainActivity.cs:
protected override void OnActivityResult(int requestCode, Result resultVal, Intent data)
{
    if (requestCode == VOICE)
    {
        if (resultVal == Result.Ok)
        {
            var matches = data.GetStringArrayListExtra(RecognizerIntent.ExtraResults);
            if (matches.Count != 0)
            {
                string textInput = matches[0].ToString();
                if (textInput.Length > 500)
                    textInput = textInput.Substring(0, 500);
            }
            // RETURN
        }
    }
    base.OnActivityResult(requestCode, resultVal, data);
}
At first I thought I could return the result to the UI via

return tcs.Task;

but then I noticed that this return happens as soon as the speech-recognition popup has finished rendering. At that moment not a single word has been spoken yet.
The spoken words end up in the string "textInput" inside the OnActivityResult method, but how do I pass this string back to the Xamarin.Forms UI?
Thanks a lot, everyone!
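For what it's worth, the core problem is that the `TaskCompletionSource` in the question is never completed with the recognized text. A minimal sketch of that idea, outside Android (names like `SpeechBridge` and `pendingTcs` are hypothetical, and a direct method call stands in for the activity callback): keep the pending `TaskCompletionSource<string>` somewhere `OnActivityResult` can reach, and complete it there instead of returning early.

```csharp
using System;
using System.Threading.Tasks;

public static class SpeechBridge
{
    // A pending completion source that the activity callback can resolve later.
    static TaskCompletionSource<string> pendingTcs;

    public static Task<string> SpeechToTextAsync()
    {
        pendingTcs = new TaskCompletionSource<string>();
        // In the real app you would call StartActivityForResult here;
        // the returned task stays incomplete until the callback fires.
        return pendingTcs.Task;
    }

    // Stand-in for MainActivity.OnActivityResult delivering the result.
    public static void OnActivityResult(string textInput)
    {
        pendingTcs?.TrySetResult(textInput);
    }

    public static void Main()
    {
        var task = SpeechToTextAsync();
        OnActivityResult("hello world");   // callback arrives later
        Console.WriteLine(task.Result);    // prints "hello world"
    }
}
```

The key point is that awaiting `pendingTcs.Task` suspends the caller without blocking any thread until `TrySetResult` is called.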
I would use an AutoResetEvent to pause the return until OnActivityResult is called, i.e. until the user records something, cancels, or the AutoResetEvent times out.
Return a Task<string> from the SpeechToTextAsync method:
public interface ISpeechToText
{
    Task<string> SpeechToTextAsync();
}
Add an AutoResetEvent to pause execution:

Note: AutoResetEvent.WaitOne is wrapped in a Task.Run so that it does not hang the application's looper (the UI thread).
public class SpeechToText_Android : Listener.ISpeechToText
{
    public static AutoResetEvent autoEvent = new AutoResetEvent(false);
    public static string SpeechText;
    const int VOICE = 10;

    public async Task<string> SpeechToTextAsync()
    {
        var voiceIntent = new Intent(RecognizerIntent.ActionRecognizeSpeech);
        voiceIntent.PutExtra(RecognizerIntent.ExtraLanguageModel, RecognizerIntent.LanguageModelFreeForm);
        voiceIntent.PutExtra(RecognizerIntent.ExtraPrompt, "Sprechen Sie jetzt");
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputCompleteSilenceLengthMillis, 1500);
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputPossiblyCompleteSilenceLengthMillis, 1500);
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputMinimumLengthMillis, 15000);
        voiceIntent.PutExtra(RecognizerIntent.ExtraMaxResults, 1);
        voiceIntent.PutExtra(RecognizerIntent.ExtraLanguage, Java.Util.Locale.Default);
        SpeechText = "";
        autoEvent.Reset();
        ((Activity)Forms.Context).StartActivityForResult(voiceIntent, VOICE);
        // Wait off the UI thread until OnActivityResult signals, or time out after 2 minutes
        await Task.Run(() => { autoEvent.WaitOne(new TimeSpan(0, 2, 0)); });
        return SpeechText;
    }
}
MainActivity OnActivityResult:
const int VOICE = 10;

protected override void OnActivityResult(int requestCode, Result resultCode, Intent data)
{
    base.OnActivityResult(requestCode, resultCode, data);
    if (requestCode == VOICE)
    {
        if (resultCode == Result.Ok)
        {
            var matches = data.GetStringArrayListExtra(RecognizerIntent.ExtraResults);
            if (matches.Count != 0)
            {
                var textInput = matches[0];
                if (textInput.Length > 500)
                    textInput = textInput.Substring(0, 500);
                SpeechToText_Android.SpeechText = textInput;
            }
        }
        SpeechToText_Android.autoEvent.Set();
    }
}
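The signal-and-wait pattern above can be exercised outside Android as a plain console sketch (names here are hypothetical; a background thread stands in for OnActivityResult firing):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class AutoResetEventDemo
{
    public static readonly AutoResetEvent AutoEvent = new AutoResetEvent(false);
    public static string SpeechText;

    public static async Task<string> WaitForResultAsync()
    {
        SpeechText = "";
        AutoEvent.Reset();
        // WaitOne blocks a thread-pool thread, so the awaiting (UI)
        // thread stays responsive; give up after 2 minutes.
        await Task.Run(() => AutoEvent.WaitOne(TimeSpan.FromMinutes(2)));
        return SpeechText;
    }

    public static void Main()
    {
        // Stand-in for OnActivityResult delivering a result on another thread.
        new Thread(() =>
        {
            Thread.Sleep(100);
            SpeechText = "Sprechen Sie jetzt";
            AutoEvent.Set();
        }).Start();

        Console.WriteLine(WaitForResultAsync().GetAwaiter().GetResult());
    }
}
```

Because `WaitOne` runs inside `Task.Run`, the caller can `await` the result exactly as the Xamarin page does with `SpeechToTextAsync`.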
Note: this implementation uses a couple of static variables to keep the example simple... some developers would call that a code smell, and I half agree, but you cannot have more than one Google speech recognizer running at a time anyway...
Hello World example:
public class App : Application
{
    public App()
    {
        var speechTextLabel = new Label
        {
            HorizontalTextAlignment = TextAlignment.Center,
            Text = "Waiting for text"
        };
        var speechButton = new Button();
        speechButton.Text = "Fetch Speech To Text Results";
        speechButton.Clicked += async (object sender, EventArgs e) =>
        {
            var speechText = await WaitForSpeechToText();
            speechTextLabel.Text = string.IsNullOrEmpty(speechText) ? "Nothing Recorded" : speechText;
        };
        var content = new ContentPage
        {
            Title = "Speech",
            Content = new StackLayout
            {
                VerticalOptions = LayoutOptions.Center,
                Children =
                {
                    new Label
                    {
                        HorizontalTextAlignment = TextAlignment.Center,
                        Text = "Welcome to Xamarin Forms!"
                    },
                    speechButton,
                    speechTextLabel
                }
            }
        };
        MainPage = new NavigationPage(content);
    }

    async Task<string> WaitForSpeechToText()
    {
        return await DependencyService.Get<ISpeechToText>().SpeechToTextAsync();
    }
}