Android Speech Recognition: Repeated calling of SpeechRecognizer.startListening() fails on JB 4.1.2
Try to use a single instance of SpeechRecognizer; there is no need to recreate it in the stop() method. Call getSpeechRecognizer() in onCreate() and forget about it, but don't forget to destroy it in onDestroy().

Categories : Android

Can I use an SRGS grammar and speech recognition to develop my software for Vietnamese speech?
You can check out this link for language support on Windows Phone 8: http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202918(v=vs.105).aspx I would also add that you can create an SRGS grammar for Vietnamese and use it for speech recognition: http://msdn.microsoft.com/en-us/library/hh361675(v=office.14).aspx

Categories : Windows Phone 8

Vb script for Speech to text (speech recognition)?
I think you are asking about speech recognition, but I'm not sure whether it works in VBScript. I know it works in Google Chrome, or in HTML connected to Google Chrome. Visit this site: http://www.labnol.org/software/add-speech-recognition-to-website/19989/ You could also try Microsoft speech recognition.

Categories : Vbscript

Speech Recognition using LPC and ANN in Matlab
"I used 14 LPC coefficients of the first period (20 ms) of the recordings as features."

So did you ignore almost all of the audio data except the first 20 ms? That doesn't sound right; you should at least calculate an average over all frames.

"What is wrong here?"

You started coding without understanding the theory. You probably want to read some introduction first: at least this, and ideally this. To understand why the ANN doesn't work, calculate how many parameters are required to map 10 features to 4 classes, then calculate how many training vectors you have for every parameter. Take into account that for every parameter you need at least 10 samples for an initial estimate. That means your training data is not enough.

Categories : Matlab

Choices in speech recognition
I'm going to borrow Raymond Chen's Psychic Debugging Talents (tm) and say that your problem is here:

```csharp
_recognizer.LoadGrammarAsync(servicesGrammar);
_recognizer.LoadGrammarAsync(lookingGrammar);
```

In particular, I suspect that the recognizer can have only one async grammar load in flight at a time. If you change your code to

```csharp
_recognizer.LoadGrammar(servicesGrammar);
_recognizer.LoadGrammar(lookingGrammar);
```

or put the second LoadGrammarAsync call in a LoadGrammarCompleted handler, your problems should go away. But seriously, you need to include the error.

Categories : C#

speech recognition times out too soon in android
```java
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Voice recognition!");
startActivityForResult(intent, REQUEST_CODE);
```

This just works fine for me!

Categories : Java

Can't start speech recognition in service
You have to either override onStartCommand in the service to send a listening message, or bind to the service and send a listening message.

```java
@Override
public int onStartCommand(Intent intent, int flags, int startId) {
    Message msg = Message.obtain(null, MSG_RECOGNIZER_START_LISTENING);
    try {
        mServerMessenger.send(msg);
    } catch (RemoteException e) {
    }
    return START_STICKY;
}
```

For the bind implementation, see Android Speech Recognition Continuous Service.

Categories : Android

Speech recognition in Windows Phone 8
Speech recognition requires access to Microsoft cloud services, and many people have problems getting their emulator to work with internet-enabled apps. Here's the MSDN article on troubleshooting that issue. If I were you, I'd verify that you can actually access the internet on the emulator by using a simple WebBrowser control and navigating to a site of your choosing. If you can't reach an external site, voice recognition will not work on your emulator.

Categories : C#

how to open my computer through speech recognition
Yes, I have done this; it now works perfectly:

```csharp
string resultText = e.Result.Text.ToLower();
if (resultText == "computer")
{
    string myComputerPath = Environment.GetFolderPath(Environment.SpecialFolder.MyComputer);
    System.Diagnostics.Process.Start("explorer", myComputerPath);
    //System.Diagnostics.Process.Start("explorer", "::{20d04fe0-3aea-1069-a2d8-08002b30309d}");
}
```

But if you still find an answer better than this, please comment here. Thanks, guys!

Categories : C#

Speech recognition library in C++ for XCode
Okay, after some searching I found that the Apple Carbon API has a SpeechRecognition.h framework! The bad news is that it seems quite old and the documentation/help on the internet is quite poor... Does anyone have experience with this framework? Thanks for your help!

Categories : C++

Free API for speech recognition system
I would suggest CMU's Sphinx system. See http://cmusphinx.sourceforge.net. It has tools to tune language and acoustic models, which can increase accuracy. There are multiple versions, but I would begin with PocketSphinx.

Categories : Java

System.Speech recognition error
The problem is that you are calling RecognizeAsync from the SpeechRecognized event handler. It is throwing the exception because the previous recognition has not yet completed. The event handler is blocking it from completing. Try starting a different task/thread to call RecognizeAsync.

Categories : C#

control mouse with speech recognition
Paste the following code into a WinForms project and run the project:

```csharp
public Form1()
{
    InitializeComponent();
    this.KeyPreview = true;
    this.KeyDown += new System.Windows.Forms.KeyEventHandler(this.Form1_KeyDown);
    var button1 = new Button();
    button1.Location = new Point(50, 50);
    button1.Text = "Hover mouse over and press a key to simulate mouse click";
    button1.AutoSize = true;
    button1.Click += new EventHandler(button1_Click);
    this.Controls.Add(button1);
}

[System.Runtime.InteropServices.DllImport("user32.dll")]
public static extern void mouse_event(int dwFlags, int dx, int dy, int cButtons, int dwExtraInfo);

public const int MOUSEEVENTF_LEFTDOWN = 0x02;
public const int MOUSEEVENTF_LEFTUP = 0x04;

private void Form1_KeyDown(object sender, KeyEventArgs e)
{
    // The original snippet was cut off here; presumably it simulated the click:
    mouse_event(MOUSEEVENTF_LEFTDOWN | MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
}

private void button1_Click(object sender, EventArgs e)
{
    MessageBox.Show("Button clicked");
}
```

Categories : C#

iPhone speech recognition for my application?
OpenEars makes it simple for you to add speech recognition and synthesized speech/TTS to your iPhone app quickly and easily. Check the link. Hope it helps you :)

Categories : Iphone

Android Speech Recognition Continuous Service
Class members:

```java
private int mBindFlag;
private Messenger mServiceMessenger;
```

Start the service in onCreate():

```java
@Override
protected void onCreate(Bundle savedInstanceState)
{
    super.onCreate(savedInstanceState);
    Intent service = new Intent(activityContext, VoiceCommandService.class);
    activityContext.startService(service);
    mBindFlag = Build.VERSION.SDK_INT < Build.VERSION_CODES.ICE_CREAM_SANDWICH ? 0 : Context.BIND_ABOVE_CLIENT;
}
```

Bind the service in onStart():

```java
@Override
protected void onStart()
{
    super.onStart();
    bindService(new Intent(this, VoiceCommandService.class), mServiceConnection, mBindFlag);
}

@Override
protected void onStop()
{
    super.onStop();
    if (mServiceMessenger != null)
    {
        unbindService(mServiceConnection);
        mServiceMessenger = null; // the original snippet was cut off here
    }
}
```

Categories : Android

Getting Word Recognition in Windows Phone Speech
According to the documentation RecognizedPhrase.Text Property should contain the display text format which is what you are asking for. As part of the speech recognition process, the speech recognizer performs speech-to-text normalization of the recognized input into a display form. For example, the spoken input, "twenty five dollars", generates a recognition result where the Words property contains the words, "twenty", "five", and "dollars", and the Text property contains the phrase, "$25.00". For more information about text normalization, see ReplacementText.
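As a rough illustration of what this display-form normalization does (not the engine's actual algorithm, which is far more sophisticated), here is a toy Python sketch that turns the recognized words "twenty five dollars" into the display text "$25.00". The word table and function name are invented for the example:

```python
# Toy illustration of speech-to-text display normalization.
# Only handles a few number words followed by "dollars".
WORD_VALUES = {"one": 1, "two": 2, "five": 5, "ten": 10, "twenty": 20}

def normalize_currency(words):
    """Turn e.g. ["twenty", "five", "dollars"] into "$25.00"."""
    if words and words[-1] == "dollars":
        amount = sum(WORD_VALUES[w] for w in words[:-1])
        return "${:.2f}".format(amount)
    return " ".join(words)  # no normalization rule matched

print(normalize_currency(["twenty", "five", "dollars"]))  # $25.00
```

The real recognizer gives you both views at once: the Words property keeps the spoken form while Text carries the normalized display form.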

Categories : Misc

C# system.speech.recognition alternate words
This MSDN page handles what you're asking quite nicely. For reference, I'll post the included code; the final foreach loop is what prints the alternates.

```csharp
// Handle the SpeechRecognized event.
void SpeechRecognizedHandler(object sender, SpeechRecognizedEventArgs e)
{
    // ... code handling the result ...

    // Display the recognition alternates for the result.
    foreach (RecognizedPhrase phrase in e.Result.Alternates)
    {
        Console.WriteLine(" alt({0}) {1}", phrase.Confidence, phrase.Text);
    }
}
```

Using e.Result.Alternates is the official way to obtain other possible words. If that isn't giving you enough results, this MSDN page gives you the required information: you need to call UpdateRecognizerSetting on your SpeechRecognitionEngine to change the confidence rejection level. Setting it to 0 will make the engine return even very low-confidence alternates.

Categories : C#

how to disable windows speech recognition commands?
Well, what you need is an in-process recognition engine, and PySpeech uses a shared recognition engine. So you'll need to modify PySpeech a bit. Change

```python
_recognizer = win32com.client.Dispatch("SAPI.SpSharedRecognizer")
```

to

```python
_recognizer = win32com.client.Dispatch("SAPI.SpInprocRecognizer")
```

and in startlistening(phraselist, callback), you need to attach an audio stream (via _recognizer.SetInput) and a reco engine (via _recognizer.SetRecognizer). Unfortunately, I'm not familiar enough with Python to translate the SAPI helpers SpGetDefaultTokenFromCategoryId (to get the default audio stream) and SpGetDefaultSharedRecognizerToken (to get the default reco engine) to Python.

Categories : Python

Restricting speech recognition results on Android
No, there are no such parameters; Google speech recognition is not flexible enough. You can use an external speech recognition toolkit like CMUSphinx.

Categories : Java

c# Kinect speech and gesture recognition not working together
I know nothing about Kinect, but - InitializeKinect looks like it's finding a Kinect sensor and initializing the SR engine (most likely using some Kinect information). I would remove the InitializeKinect call and add speechRecognizer = CreateSpeechRecognizer(); just before this.sensorChooser.Start();

Categories : C#

Constantly-on speech recognition listening for just one keyword
From my research, there is no way to do this using the standard Google voice recognition service. The way it works is that once a sound/word is recognized, the recognizer returns a list of what it thinks it heard, with an associated confidence score. To do what you are asking, you would:

- have to keep re-activating the recognition service every time it fires a recognition event, until it matches the word you want;
- have to 'keep awake' the recognition service; you could do this by creating a service that periodically wakes up your handset and resumes the service/activity.

I would not recommend either of these options, considering that battery life is severely reduced by keeping the voice recognition service constantly on.

Categories : Android

Fuzzy EmulateRecognize on Windows Speech Recognition
When you send audio to the recognizer, the SR engine does a lot of work to create a set of phonemes (via acoustic modeling) and then a set of strings (via phoneme modeling). During that process, much of the ambiguity gets eliminated. EmulateRecognize doesn't generate audio that gets processed via the SR engine; it skips all the modeling and just does a string match. There's no way to work around this that doesn't involve a lot of work (e.g., implementing a SAPI-compatible SR engine that only does EmulateRecognize).
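If exact string matching in EmulateRecognize is too strict for your input, one lightweight workaround outside the SR engine is to fuzzy-match the incoming text against your known phrases first, and only pass the best match on. A minimal sketch using Python's standard difflib (the phrases and cutoff are made up for the example; this is not part of SAPI):

```python
import difflib

def best_match(heard, phrases, cutoff=0.8):
    """Return the known phrase most similar to `heard`, or None
    if nothing reaches the similarity cutoff (0.0 to 1.0)."""
    matches = difflib.get_close_matches(heard, phrases, n=1, cutoff=cutoff)
    return matches[0] if matches else None

phrases = ["open notepad", "close notepad", "shut down"]
print(best_match("opne notepad", phrases))  # open notepad
print(best_match("sing a song", phrases))   # None
```

This only approximates fuzziness at the text level; it cannot recover the acoustic and phoneme ambiguity that the real engine resolves.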

Categories : C#

Inproc speech recognition engine in Python
Well, as I mentioned, in-process recognizers don't have default input sources or recognition engines set up. In order to get the in-process recognizer to listen, you need to set these via _recognizer.SetInput (to set the input source) and _recognizer.SetRecognizer (to set the recognition engine) The challenge for you is to get the default input source and recognition engine, respectively. If you were using C++, this would be straightforward; there's a helper function in sphelper.h that gets the default input source: SpGetDefaultTokenFromCategoryId(SPCAT_AUDIOIN, &cpToken), and I published a function on my blog that gets the default recognition engine. But I don't know how to translate those functions into Python; perhaps you do.

Categories : Python

Offline Speech Recognition In Android (JellyBean)
In short, I don't have the implementation, but I do have the explanation. Google did not make offline speech recognition available to third-party apps; offline recognition is only accessible via the keyboard. Ben Randall (the developer of utter!) explains his workaround in an article at Android Police:

"I had implemented my own keyboard and was switching between Google Voice Typing and the user's default keyboard, with an invisible edit text field and a transparent Activity to get the input. Dirty hack! This was the only way to do it, as offline Voice Typing could only be triggered by an IME or a system application (that was my root hack). The other type of recognition API … didn't trigger it and just failed with a server error. … A lot of work wasted for me on the workaround!"

Categories : Android

Is it possible to see remaining valid choices during speech recognition?
Depending on your grammars, System.Speech.Recognition.RecognitionResult has an Alternates property that provides an ordered list of recognitions that are alternatives to the recognition with the highest confidence. Many speech-enabled applications use this to disambiguate possible mis-recognitions.

```csharp
// Handle the SpeechRecognized event.
void SpeechRecognizedHandler(object sender, SpeechRecognizedEventArgs e)
{
    if (e.Result == null) return;

    // Display the recognition alternates for the result.
    foreach (RecognizedPhrase phrase in e.Result.Alternates)
    {
        Console.WriteLine(" alt({0}) {1}", phrase.Confidence, phrase.Text);
    }
}
```

Another possible solution (that works well in some scenarios) is to use the garbage rule in your grammars to act as a wildcard (this would not, however, show you the remaining valid choices directly).

Categories : Dotnet

Allow Spelling Letters using Speech Recognition Engine
You need to use another GrammarBuilder, with the constructor overload for repeats, to construct a grammar which matches repeated sequences:

```csharp
private void letterGrammar()
{
    GrammarBuilder letterGb = new GrammarBuilder();
    Choices letterChoices = new Choices("A", "B", "C", "D");
    GrammarBuilder spellingGb = new GrammarBuilder((GrammarBuilder)letterChoices, 1, 50);
    Grammar grammar = new Grammar(spellingGb);
}
```

See the documentation on MSDN for details.

Categories : C#

record audio data while using speech recognition?
Have a look at the Windows APIs. I am sure you can register a handler/event handler/interceptor there to get the audio data. Check out the following link; it may be helpful.

Categories : C#

Use Android Speech Recognition so that it stops only at the press of a button
This is a JB problem, maybe by design. As a workaround, you can implement the voice recognition in a service and then send updates to your UI based on the results. For an implementation of the service workaround, see Android Speech Recognition as a service on Android 4.1 & 4.2.

Categories : Android

Using Android Speech Recognition APIs from Google Glass
To use the standard Android speech recognition you have to install/deploy the com.google.android.voicesearch APK package. I don't know if there is an official way to get this; I just googled the APK file. Install it using adb install < apk-file >, and then you should be able to use the Android voice recognition feature on your Glass device. Another way is to use the very cool features of Google Glass, e.g. just saying "okay glass" to activate voice recognition. But for that you have to root your device and activate this so-called lab feature. This site is a good starting point for activating lab features: glassxe. I have not tried it myself, but I am going to.

Categories : Android

Battery consumption needed for continuous speech recognition
Predicting the battery consumption will be near impossible, as it depends on several factors:

- The device's processing power
- The device's screen size, type and brightness
- The internet connection speed on the device (most speech recognition services send the data to a server)
- The efficiency of the hardware microphone
- Other background processes running on the device

Even if everything was in ideal conditions, the simple fact that different devices have different screens, processors and battery capacities makes it impossible to predict the consumption.

Categories : Android

Audio Comparison using Microsoft speech recognition engine
You're using a command grammar (i.e., a set of choices). With a command grammar, the engine tries its best to find a match, which can easily result in false positives (as you've seen). You might want to investigate a dictation grammar, particularly the pronunciation grammar, as I've outlined in my answer to this question. Note that the solution I outlined uses some interfaces that aren't available in C# (or at least exposed via System.Speech.Recognition).

Categories : C#

Creating Acoustic models for Microsoft speech recognition engine
Acoustic adaptation can be done via the Windows Control Panel; search for "speech", and you will find the Speech Recognition control panel, which has an item 'Train your computer to better understand you'. Running this will result in an improved acoustic model.

Categories : C#

How to start voice recognition by codes on Chrome via HTML5 Speech Input API?
You can't really manage x-webkit-speech input with JavaScript. In any case, this feature will be deprecated in the future; you may have a look at this link. Now you should use the Web Speech API. This API is an unofficial draft proposed by Google on the W3C lists; at the moment, there is no plan to add this API to the W3C HTML5 specs. If you want to use the Web Speech API, have a look at this link; there are good explanations and examples of how to use it.

Categories : HTML

Python having trouble accessing usb microphone using Gstreamer to perform speech recognition with Pocketsphinx on a Raspberry Pi
So I finally got this working. A couple of key things I needed to realize:

1. Even if you're using PulseAudio on your Raspberry Pi, as long as ALSA is still installed you're still able to use it. (This might seem like a no-brainer to others, but I honestly didn't realize I could use both of these at the same time.) Hint via syb0rg.

2. When it comes to sending large amounts of raw audio data (.wav format in my case) to Pocketsphinx via GStreamer, queues are your friend. After messing around with gst-launch-0.10 on the command line for a while, I came across something that actually worked:

```shell
gst-launch-0.10 alsasrc device=hw:1 ! queue ! audioconvert ! audioresample ! queue ! vader name=vader auto-threshold=true ! pocketsphinx lm=/home/pi/dev/scarlettPi/config/speech/lm/scarlett
```

Categories : Python

Developing a simple voice driven web app using web speech API
I may have a solution for you:

1. Go into Google Chrome settings.
2. Click "Show advanced settings".
3. Under "Privacy", click "Content settings".
4. Go to the media section (near the bottom) and click "Allow all sites to use a plugin on my computer".

Really hope this helps! If you need extra help, check out this link for more information.

Categories : Google Chrome

android.speech package to just determine whether there is speech rather than using the full process of also converting to text
onBeginningOfSpeech is called by the speech recognition provider once it thinks speech has begun. If you are not interested in the content of the speech, or even in when it ends, then call stopListening() immediately. Both of these methods are in android.speech.

Categories : Android

web speech api speech synthesis - getting voice list
You should use speechSynthesis.getVoices() to get a list of all voices. This is the output from Google Chrome 33:

```javascript
[
  { "default": true,  "localService": false, "lang": "en-US", "name": "Google US English",        "voiceURI": "Google US English" },
  { "default": false, "localService": false, "lang": "en-GB", "name": "Google UK English Male",   "voiceURI": "Google UK English Male" },
  { "default": false, "localService": false, "lang": "en-GB", "name": "Google UK English Female", "voiceURI": "Google UK English Female" },
  { "default": false, "localService": false, "lang": "es-ES", "name": "Google Español",           "voiceURI": "Google Español" },
  { "default": false, "localService": false, "lang": "fr-FR", "name": "Googl
```

Categories : Javascript

Speech or no speech detection in Python
I think your issue is that at the moment you are trying to record without recognizing the speech, so it is not discriminating; recognizable speech is anything that gives meaningful results after recognition, so it's a catch-22. You could simplify matters by looking for an opening keyword. You can also filter on the voice frequency range, as the human ear and the telephone companies both do, and you can look at the mark/space ratio; I believe there were some publications on that a while back, but watch out: it varies from language to language. A quick Google search can be very informative. You may also find this article interesting.
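As a starting point for "speech or no speech" without running a full recognizer, a classic trick is a short-time energy gate: split the audio into frames and flag frames whose energy exceeds a threshold. A minimal pure-Python sketch on synthetic samples (the threshold and frame length are made-up values you would tune for real microphone audio):

```python
import random

def frame_energies(samples, frame_len=160):
    """Mean squared amplitude per frame (160 samples = 10 ms at 16 kHz)."""
    return [sum(s * s for s in samples[i:i + frame_len]) / frame_len
            for i in range(0, len(samples), frame_len)]

def detect_activity(samples, threshold=0.01, frame_len=160):
    """True for frames loud enough to plausibly contain speech."""
    return [e > threshold for e in frame_energies(samples, frame_len)]

# Synthetic test signal: 10 quiet frames followed by 10 loud frames.
random.seed(0)
quiet = [random.uniform(-0.01, 0.01) for _ in range(1600)]
loud = [random.uniform(-0.5, 0.5) for _ in range(1600)]
flags = detect_activity(quiet + loud)
print(flags)  # 10x False, then 10x True
```

An energy gate will also trigger on music or door slams, which is exactly why the keyword and frequency-band filters mentioned above help narrow things down.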

Categories : Python

Change Speech Voice in Speech API
The SpeechSynthesizer has a GetInstalledVoices method which returns a ReadOnlyCollection of the voices installed on your system (InstalledVoice type). To change the synthesizer voice, call the SelectVoice method, which takes the voice name (String type):

```csharp
SpeechSynthesizer synt = new SpeechSynthesizer();
IReadOnlyCollection<InstalledVoice> InstalledVoices = synt.GetInstalledVoices();
InstalledVoice InstalledVoice = InstalledVoices.First();
synt.SelectVoice(InstalledVoice.VoiceInfo.Name);
synt.Speak("This is how you select an installed voice");
```

To see which voices are installed on your computer, go to: Control Panel -> Speech Recognition -> Text to Speech. You can specify more settings there as well, like voice speed, if you want to add more voices to your computer.

Categories : C#

Simple Linux Signal Handling
"[Q-3] Does the terminate variable in my example have to be volatile? I've seen many examples where this variable is volatile, and others where it is not."

The flag terminate should be declared volatile sig_atomic_t, because handler functions can be called asynchronously; a handler might run at any point in the program, unpredictably, and if two signals arrive within a very short interval, one handler can run inside another. sig_atomic_t is always accessed atomically, which avoids uncertainty about an access to the variable being interrupted, and volatile tells the compiler not to optimize the variable into a register. (Read: Atomic Data Access and Signal Handling for a detailed explanation.) One more reference: 24.4.7 Atomic Data Access and Signal Handling.
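The same "set a flag in the handler, check it in the main loop" pattern carries over to other languages. As an illustration, here is a Python analogue (names are my own); a plain bool is safe here because CPython runs Python-level signal handlers between bytecode instructions, playing the role that volatile sig_atomic_t plays in C. This sketch is POSIX-only, since it uses SIGUSR1:

```python
import os
import signal

terminate = False  # plays the role of the volatile sig_atomic_t flag

def handle_signal(signum, frame):
    # Keep the handler trivial: just set the flag; the main loop reacts.
    global terminate
    terminate = True

signal.signal(signal.SIGUSR1, handle_signal)

# Simulate an external signal by sending SIGUSR1 to our own process.
os.kill(os.getpid(), signal.SIGUSR1)

# A real program would loop "while not terminate: do_work()" here,
# then clean up once the flag flips.
print(terminate)
```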

Categories : C++


