2016-05-02

I am developing a UWP app and want to use SpeechRecognizer. When I run the application, the first attempt works correctly, but the next attempt throws a System.InvalidOperationException at this line:

SpeechRecognitionResult speechRecognitionResult = await speechRecognizer.RecognizeAsync();

The error message is "The operation is not valid due to the current state of the object."

There is also another situation: when I click the button just as I start speaking, RecognizeAsync() does not seem to get called at all; I immediately get a MessageDialog with blank content, and the program exits with return value 1 (0x1). No exception is thrown in that case, but if I click the button quickly, the exception above is thrown.

I have searched many pages online, but none of them solved this problem. Any help would be greatly appreciated. Thanks.

Here is my complete code:

public sealed partial class VoiceMainPage : Page 
    { 
     public VoiceMainPage() 
     { 
      InitializeComponent(); 
     } 

     private async void OnListenAsync(object sender, RoutedEventArgs e) 
     { 
      // Create an instance of SpeechRecognizer. 
      var speechRecognizer = new SpeechRecognizer(); 

      // Compile the dictation grammar by default. 
      await speechRecognizer.CompileConstraintsAsync(); 

      // Start recognition. 
      SpeechRecognitionResult speechRecognitionResult = await speechRecognizer.RecognizeAsync(); 

      var messageDialog = new MessageDialog(speechRecognitionResult.Text, "Text spoken"); 
      await messageDialog.ShowAsync(); 
     } 
    } 
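The symptoms are consistent with a second recognition session starting while the previous one (or its recognizer) is still active. A minimal guard, assuming nothing beyond the code above, is to block re-entry into the handler and dispose each recognizer when the session ends (this is only a sketch; the answer below restructures the page more thoroughly):

```csharp
private bool isListening;

private async void OnListenAsync(object sender, RoutedEventArgs e)
{
    // Re-entrancy guard: ignore clicks while a session is already in flight.
    if (isListening) return;
    isListening = true;

    try
    {
        // Dispose the recognizer when done so it releases the audio device.
        using (var speechRecognizer = new SpeechRecognizer())
        {
            await speechRecognizer.CompileConstraintsAsync();
            SpeechRecognitionResult result = await speechRecognizer.RecognizeAsync();
            await new MessageDialog(result.Text, "Text spoken").ShowAsync();
        }
    }
    finally
    {
        isListening = false;
    }
}
```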

Have you tried running it on a thread? – Ouarzy


Do you mean using the RunAsync() function? I looked at MSDN and tried it, but it still doesn't work. –


Yes, Thread.Run() or something like that. I know that some UWP APIs need to run from a background thread, but I haven't used SpeechRecognizer myself yet. – Ouarzy

Answers

Check this official sample code. Here is an adapted version that reuses part of the GitHub sample:

public sealed partial class VoiceMainPage : Page 
{ 
    private SpeechRecognizer speechRecognizer; 
    private CoreDispatcher dispatcher; 
    private IAsyncOperation<SpeechRecognitionResult> recognitionOperation; 

    public VoiceMainPage() 
    { 
     InitializeComponent(); 
    } 

    /// <summary> 
    /// When activating the scenario, ensure we have permission from the user to access their microphone, and 
    /// provide an appropriate path for the user to enable access to the microphone if they haven't 
    /// given explicit permission for it. 
    /// </summary> 
    /// <param name="e">The navigation event details</param> 
    protected async override void OnNavigatedTo(NavigationEventArgs e) 
    { 
     // Save the UI thread dispatcher to allow speech status messages to be shown on the UI. 
     dispatcher = CoreWindow.GetForCurrentThread().Dispatcher; 

     bool permissionGained = await AudioCapturePermissions.RequestMicrophonePermission(); 
     if (permissionGained) 
     { 
      // Enable the recognition buttons.     
      await InitializeRecognizer(SpeechRecognizer.SystemSpeechLanguage); 
      buttonOnListen.IsEnabled = true; 
     } 
     else 
     { 
      // Permission to access capture resources was not given by the user; please set the application setting in Settings->Privacy->Microphone. 
      buttonOnListen.IsEnabled = false; 
     } 
    } 

    private async void OnListenAsync(object sender, RoutedEventArgs e) 
    { 
     buttonOnListen.IsEnabled = false; 

     // Start recognition. 
     try 
     { 
      recognitionOperation = speechRecognizer.RecognizeAsync(); 
      SpeechRecognitionResult speechRecognitionResult = await recognitionOperation; 
      // If successful, display the recognition result. 
      if (speechRecognitionResult.Status == SpeechRecognitionResultStatus.Success) 
      { 
       // Access to the recognized text through speechRecognitionResult.Text; 
      } 
      else 
      { 
       // Handle speech recognition failure 
      } 
     } 
     catch (TaskCanceledException exception) 
     { 
      // TaskCanceledException will be thrown if you exit the scenario while the recognizer is actively 
      // processing speech. Since this happens here when we navigate out of the scenario, don't try to 
      // show a message dialog for this exception. 
      System.Diagnostics.Debug.WriteLine("TaskCanceledException caught while recognition in progress (can be ignored):"); 
      System.Diagnostics.Debug.WriteLine(exception.ToString()); 
     } 
     catch (Exception exception) 
     { 

      var messageDialog = new Windows.UI.Popups.MessageDialog(exception.Message, "Exception"); 
      await messageDialog.ShowAsync(); 
     } 

     buttonOnListen.IsEnabled = true; 
    } 

    /// <summary> 
    /// Ensure that we clean up any state tracking event handlers created in OnNavigatedTo to prevent leaks. 
    /// </summary> 
    /// <param name="e">Details about the navigation event</param> 
    protected override void OnNavigatedFrom(NavigationEventArgs e) 
    { 
     base.OnNavigatedFrom(e); 
     if (speechRecognizer != null) 
     { 
      if (speechRecognizer.State != SpeechRecognizerState.Idle) 
      { 
       if (recognitionOperation != null) 
       { 
        recognitionOperation.Cancel(); 
        recognitionOperation = null; 
       } 
      } 

      speechRecognizer.StateChanged -= SpeechRecognizer_StateChanged; 

      this.speechRecognizer.Dispose(); 
      this.speechRecognizer = null; 
     } 
    } 

    /// <summary> 
    /// Initialize Speech Recognizer and compile constraints. 
    /// </summary> 
    /// <param name="recognizerLanguage">Language to use for the speech recognizer</param> 
    /// <returns>Awaitable task.</returns> 
    private async Task InitializeRecognizer(Language recognizerLanguage) 
    { 
     if (speechRecognizer != null) 
     { 
      // cleanup prior to re-initializing this scenario. 
      speechRecognizer.StateChanged -= SpeechRecognizer_StateChanged; 

      this.speechRecognizer.Dispose(); 
      this.speechRecognizer = null; 
     } 

     // Create an instance of SpeechRecognizer. 
     speechRecognizer = new SpeechRecognizer(recognizerLanguage); 

     // Provide feedback to the user about the state of the recognizer. 
     speechRecognizer.StateChanged += SpeechRecognizer_StateChanged; 

     // Add a web search topic constraint to the recognizer. 
     var webSearchGrammar = new SpeechRecognitionTopicConstraint(SpeechRecognitionScenario.WebSearch, "webSearch"); 
     speechRecognizer.Constraints.Add(webSearchGrammar); 

     // Compile the constraint. 
     SpeechRecognitionCompilationResult compilationResult = await speechRecognizer.CompileConstraintsAsync(); 
     if (compilationResult.Status != SpeechRecognitionResultStatus.Success) 
     { 
      buttonOnListen.IsEnabled = false; 
      } 
     } 

    /// <summary> 
    /// Handle SpeechRecognizer state changed events by updating a UI component. 
    /// </summary> 
    /// <param name="sender">Speech recognizer that generated this status event</param> 
    /// <param name="args">The recognizer's status</param> 
    private async void SpeechRecognizer_StateChanged(SpeechRecognizer sender, SpeechRecognizerStateChangedEventArgs args) 
    { 
      await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => 
     { 
      MainPage.Current.NotifyUser("Speech recognizer state: " + args.State.ToString(), NotifyType.StatusMessage); 
     }); 
    } 
} 
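Note that AudioCapturePermissions is not a framework class; it is a helper that ships with the sample repository. A sketch of the idea, assuming the sample's approach of initializing an audio-only MediaCapture so that Windows shows the consent prompt on first use:

```csharp
public static class AudioCapturePermissions
{
    public static async Task<bool> RequestMicrophonePermission()
    {
        try
        {
            var settings = new MediaCaptureInitializationSettings
            {
                // Audio-only initialization is enough to trigger the mic prompt.
                StreamingCaptureMode = StreamingCaptureMode.Audio,
            };
            using (var capture = new MediaCapture())
            {
                await capture.InitializeAsync(settings);
            }
            return true;
        }
        catch (UnauthorizedAccessException)
        {
            // The user has denied microphone access for this app.
            return false;
        }
    }
}
```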

The buttonOnListen click handler must guard against re-entrancy: its IsEnabled state is set to false while speech recognition is in progress. In addition, it is best to initialize the SpeechRecognizer object once when navigating to the page (OnNavigatedTo) and dispose of it when leaving the page (OnNavigatedFrom).
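Also make sure the app declares microphone access in Package.appxmanifest; without this capability the permission request fails outright:

```xml
<Capabilities>
  <DeviceCapability Name="microphone" />
</Capabilities>
```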


It works. I learned a lot, thanks! –
