Go to <[url removed, login to view]>
I need something like that.
My data management software needs a lightweight Speech Recognition component that lets users move between window frames hands-free.
Little or no voice training should be required.
Full compatibility with C# (.NET Framework 2.0) is required.
**The DLL I need must:
run in the _background_ and _listen to the microphone_;
_receive a list of words_: each word can be a .wav file or a specific number that identifies it. At most 20 words will be sent, all very different from one another (for example: one, two, three, exit, enter, teeth, mouth, etc.);
_Hear event_: raised whenever something is spoken;
_Hear handling_: compare the spoken word against the list;
_WordChecked event_: raised when the comparison succeeds, passing the matched word as a parameter.
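The requirements above can be sketched as a public interface. This is only a hypothetical shape, not the required design: the event names Hear and WordChecked come from this spec, but the delegate signatures and method names are my assumptions.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the requested DLL surface (.NET 2.0 style).
public delegate void HearHandler(string rawInput);           // assumed signature
public delegate void WordCheckedHandler(string matchedWord); // assumed signature

public interface IVoiceRecognizer
{
    // Raised whenever anything is spoken into the microphone.
    event HearHandler Hear;

    // Raised when the spoken input matches a word in the list.
    event WordCheckedHandler WordChecked;

    // Load the list of recognizable words (.wav paths or identifying numbers).
    void LoadWords(IList<string> words);

    // Start/stop background listening on the default microphone.
    void StartListening();
    void StopListening();
}
```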
**The language used to test the speech recognition will be Italian, but any sound a person can reproduce should be recognizable, by comparing it with a number sequence that identifies its waveform and any other required features.
The project should be written in C# 2.0. Any other language is acceptable, provided it can be used in the attached test through .NET Framework 2.0 with _managed code only_.
The project will be tested on Windows XP Home/Pro with .NET Framework 2.0.
Your functions will be called by the sample application attached here, at the points suggested in the comments, using managed code only. Any modification to the structure, application flow, or library access must be approved by me: if I do not approve your suggested modification, your job will not be considered complete.
The attached VoiceCommand solution contains a Windows application whose Forms inherit from VoiceCommandForm, defined in VoiceCommandDLL.
VoiceCommandDLL has one main class: VoiceCommand.
VoiceCommand is a Singleton class that should encapsulate your code.
It has two public properties: Status and Actions.
Status is mainly either Working or Waiting: when it is Working, the voice recognition should accept Waiting and all the voice commands listed in Actions; when it is Waiting, it accepts only the Working voice command.
When Status changes, the StatusChanged event is raised.
When the voice recognition detects an accepted voice command, the NewAction event is raised.
These events are overridden in VoiceCommandForm objects.
While debugging the application, you will find comments that explain the workflow and suggest the points at which to call into VoiceRecognition.
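To illustrate the Status state machine and the two events, here is a minimal sketch of what VoiceCommand might look like. Only the names Status, Actions, StatusChanged, and NewAction come from this spec; the enum values, member types, and the OnWordRecognized helper are my assumptions.

```csharp
using System;
using System.Collections.Generic;

public enum VoiceCommandStatus { Waiting, Working }

// Hypothetical sketch of the VoiceCommand Singleton described in the spec.
public class VoiceCommand
{
    private static readonly VoiceCommand instance = new VoiceCommand();
    public static VoiceCommand Instance { get { return instance; } }
    private VoiceCommand() { }

    private VoiceCommandStatus status = VoiceCommandStatus.Waiting;
    private List<string> actions = new List<string>();

    public event EventHandler StatusChanged; // raised when Status changes
    public event EventHandler NewAction;     // raised on a recognized command

    public List<string> Actions { get { return actions; } }

    public VoiceCommandStatus Status
    {
        get { return status; }
        set
        {
            if (status == value) return;
            status = value;
            if (StatusChanged != null) StatusChanged(this, EventArgs.Empty);
        }
    }

    // Called by the recognizer when a word has been matched.
    public void OnWordRecognized(string word)
    {
        if (status == VoiceCommandStatus.Waiting)
        {
            // While Waiting, only the "Working" command is accepted.
            if (word == "Working") Status = VoiceCommandStatus.Working;
        }
        else
        {
            // While Working, accept "Waiting" plus anything listed in Actions.
            if (word == "Waiting") Status = VoiceCommandStatus.Waiting;
            else if (actions.Contains(word) && NewAction != null)
                NewAction(this, EventArgs.Empty);
        }
    }
}
```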
The required code includes both:
- VoiceRecognition, applied to this test, which will be used by VoiceCommand to raise the NewAction or StatusChanged event when it recognizes the user speaking;
- and VoicePrintGenerator, a simple app that produces a number from a wave file or from a live spoken word. This number should be associated with the corresponding VoiceCommandActions item, so that anyone saying that word will be understood by VoiceRecognition.
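The spec does not define how the voice-print number is computed. Purely as an illustration of the idea (a word reduced to one number), the generator could hash a coarse energy envelope of the PCM samples; a real implementation would use proper acoustic features. Everything here is an assumption.

```csharp
using System;

// Illustrative only: not the required algorithm.
public static class VoicePrintGenerator
{
    // Derives a single number from 16-bit PCM samples by hashing the
    // average energy of a fixed number of bands across the recording.
    public static long ComputePrint(short[] samples)
    {
        const int bands = 16;
        long print = 17;
        int bandSize = Math.Max(1, samples.Length / bands);
        for (int b = 0; b < bands; b++)
        {
            long energy = 0;
            int start = b * bandSize;
            int end = Math.Min(samples.Length, start + bandSize);
            for (int i = start; i < end; i++)
                energy += Math.Abs((int)samples[i]);
            long avg = (end > start) ? energy / (end - start) : 0;
            // Quantize each band's average energy into the running hash.
            print = print * 31 + (avg >> 8);
        }
        return print;
    }
}
```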
Your work will be tested on Windows XP with .NET Framework 2.0 (or the latest available beta). During the test the speaker will chat with a friend, and the application should recognize when the user says a recognized word (spoken clearly, of course). The test should run successfully for 4 consecutive hours on a P3 with 512 MB of RAM, a generic sound card, and a cheap microphone. A lag of at most 1-2 seconds between the end of the spoken voice command and the execution of the corresponding action is allowed: that is, within a maximum of 2 seconds after a recognized word is spoken into the microphone, the test application should open a Windows Form or allow moving within a listbar.