VOICE OPERATED WHEELCHAIR
ABSTRACT
Many disabled people depend on others in their daily life, especially for getting from one place to another. Wheelchair users often need someone to help them continuously just to keep the wheelchair moving. A wheelchair control system helps handicapped persons become more independent. The system described here is a wireless wheelchair control system which employs voice recognition for triggering and controlling all of its movements. The wheelchair responds to voice commands from its user to perform the movement functions. It integrates a microcontroller, a wireless microphone, a voice recognition processor and a motor control interface board to move the wheelchair. Using the system, the user is able to operate the wheelchair simply by speaking into the wheelchair microphone. The basic movement functions include forward and reverse motion, left and right turns, and stop. A Microchip PIC16F877A controller manages the system operation: it communicates with the voice recognition processor to detect the spoken word and then determines the corresponding output command to drive the left and right motors. To accomplish this task, an assembly language program is written and stored in the controller's memory. In order to recognize the spoken words, the voice recognition processor HM2007 must first be trained with the words spoken by the user who is going to operate the wheelchair.
Chapter 1:
INTRODUCTION
1.1 GENERAL OVERVIEW:
A wheelchair is a wheeled mobility device in which the user sits. The device is propelled either manually, by pushing the wheels with the hands, or via various automated systems. Wheelchairs are used by people for whom walking is difficult or impossible due to illness, injury, or disability. People with a walking disability often need to use a wheelchair. The "World report on disability", jointly presented by the World Health Organization (WHO) and the World Bank, says that there are 70 million handicapped people in the world. Unfortunately, the number of handicapped people keeps increasing day by day due to road accidents as well as diseases such as paralysis. A handicapped person is dependent on others for day-to-day activities such as transport, food and orientation. Therefore a voice operated wheelchair is developed which operates automatically on the spoken commands of the handicapped user for movement.
1.2 LITERATURE SURVEY:
• There are many scientists and researchers who have developed computer software that can recognize human voice commands in many languages, such as English, Japanese and Thai. Many techniques are used to recognize voice commands. [1]
• Researchers transform the sound wave into a digital signal using a computer. This digital signal is then used to control different electronic equipment, for example 1) controlling robot arm movement, 2) helping the handicapped to move a wheelchair, etc. [2]
• According to the IJRET paper "Voice Operated Intelligent Wheelchair", Matlab software is used for input signal processing and the processed signal is given to the ARM processor LPC2138. [3]
• In a recent IJRET paper, the input is given to the IC HM2007. The HM2007 IC is used for voice recognition and generates the output signal depending on the input from the user. [4]
1.3 THEORETICAL BACKGROUND:
Voice enabled devices basically use the principle of speech recognition. Speech recognition is the process of electronically converting a speech waveform (as the realization of a linguistic expression) into words (as a best-decoded sequence of linguistic units). Converting a speech waveform into a sequence of words involves several essential steps:
i. A microphone picks up the speech signal to be recognized and converts it into an electrical signal. A modern speech recognition system also requires that the electrical signal be represented digitally by means of an analog-to-digital (A/D) conversion process, so that it can be processed by a digital computer or microprocessor.
ii. The speech signal is then analyzed (in the analysis block) to produce a representation consisting of salient features of the speech. The most prevalent feature of speech is derived from its short-time spectrum, measured successively over short-time windows of 20–30 milliseconds, overlapping at intervals of 10–20 ms. Each short-time spectrum is transformed into a feature vector, and the temporal sequence of such feature vectors forms a speech pattern. (A minimal framing sketch is given after this list of steps.)
iii. The speech pattern is then compared against a store of phoneme patterns or models through a dynamic programming process in order to generate a hypothesis (or a number of hypotheses) of the phonemic unit sequence. (A phoneme is a basic unit of speech, and a phoneme model is a succinct representation of the signal that corresponds to a phoneme, usually embedded in an utterance.) A speech signal inherently has substantial variations along many dimensions.
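As an illustration of the framing described in step (ii), the short C sketch below splits a sampled signal into overlapping 20 ms windows with a 10 ms hop and computes a log-energy value per frame. The 8 kHz sample rate and the use of plain log energy instead of a full short-time spectrum are assumptions made for the example; this is not the HM2007's internal processing.

/* Minimal sketch: split a sampled speech signal into overlapping short-time
 * frames and compute a log-energy feature per frame. Window and hop lengths
 * follow the 20-30 ms / 10-20 ms figures quoted above; the 8 kHz sample rate
 * is an assumption. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE 8000              /* assumed sampling rate in Hz     */
#define FRAME_LEN   (SAMPLE_RATE/50)  /* 20 ms window = 160 samples      */
#define FRAME_HOP   (SAMPLE_RATE/100) /* 10 ms hop    = 80 samples       */

/* Compute one log-energy value per frame; returns the number of frames. */
static int frame_log_energy(const short *speech, int n_samples, double *feat)
{
    int n_frames = 0;
    for (int start = 0; start + FRAME_LEN <= n_samples; start += FRAME_HOP) {
        double energy = 0.0;
        for (int i = 0; i < FRAME_LEN; i++) {
            double s = (double)speech[start + i];
            energy += s * s;
        }
        feat[n_frames++] = log(energy + 1e-9); /* avoid log(0) on silence */
    }
    return n_frames;
}

int main(void)
{
    short speech[SAMPLE_RATE];                 /* 1 s of dummy audio      */
    double feat[SAMPLE_RATE / FRAME_HOP + 1];
    for (int i = 0; i < SAMPLE_RATE; i++)      /* fake a 1 kHz test tone  */
        speech[i] = (short)(1000.0 * sin(2.0 * M_PI * 1000.0 * i / SAMPLE_RATE));
    int n = frame_log_energy(speech, SAMPLE_RATE, feat);
    printf("%d frames, first feature = %.2f\n", n, feat[0]);
    return 0;
}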
Before we understand the design of the project, let us first understand speech recognition types and styles. Speech recognition is classified into two categories: speaker dependent and speaker independent.
Speaker dependent systems are trained by the individual who will be using the system. These systems are capable of achieving a high command count and better than 95% accuracy for word recognition. The drawback of this approach is that the system responds accurately only to the individual who trained it. This is the most common approach employed in software for personal computers. A speaker independent system is trained to respond to a word regardless of who speaks. Therefore the system must respond to a large variety of speech patterns, inflections and enunciations of the target word. The command word count is usually lower than that of a speaker dependent system; however, high accuracy can still be maintained within processing limits. Industrial applications more often need speaker independent voice systems, such as the AT&T system used in telephone networks. A more general form of voice recognition is available through feature analysis, and this technique usually leads to "speaker-independent" voice recognition.
Recognition Style
Speech recognition systems have another constraint concerning the style of speech they can recognize. There are three styles of speech: isolated, connected and continuous. Isolated speech recognition systems can only handle words that are spoken separately; these are the most common speech recognition systems available today. The user must pause between each word or command spoken. The speech recognition circuit used here is set up to identify isolated words of up to 0.96 seconds in length. Connected speech recognition is a halfway point between isolated-word and continuous speech recognition and allows users to speak multiple words; the HM2007 can be set up to identify words or phrases up to 1.92 seconds in length, which reduces the recognition vocabulary to 20 words.
• Approaches to Statistical Speech Recognition
a. Hidden Markov model (HMM)-based speech recognition
Modern general-purpose speech recognition systems are generally based on hidden Markov models (HMMs). An HMM is a statistical model which outputs a sequence of symbols or quantities. One reason why HMMs are used in speech recognition is that a speech signal can be viewed as a piece-wise stationary or short-time stationary signal: over a short time in the range of 10 milliseconds, speech can be approximated as a stationary process. Speech can thus be thought of as a Markov model over many stochastic processes (known as states). Another reason why HMMs are popular is that they can be trained automatically and are simple and computationally feasible to use. A minimal sketch of the forward algorithm used to score an observation sequence against an HMM is given below.
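For illustration, the core scoring step of HMM-based recognition is the forward algorithm, which computes the probability that a given model produced an observation sequence. The sketch below uses a tiny discrete HMM whose states, symbols and probabilities are made-up values chosen only for the example; it is not an acoustic model from this project.

/* Minimal sketch of the HMM forward algorithm for a 3-state, 2-symbol
 * discrete HMM with illustrative (made-up) parameters. */
#include <stdio.h>

#define N 3  /* number of hidden states        */
#define M 2  /* number of observation symbols  */
#define T 4  /* length of observation sequence */

int main(void)
{
    /* Transition, emission and initial probabilities (assumed values). */
    double A[N][N] = { {0.6, 0.3, 0.1},
                       {0.0, 0.7, 0.3},
                       {0.0, 0.0, 1.0} };
    double B[N][M] = { {0.8, 0.2},
                       {0.4, 0.6},
                       {0.1, 0.9} };
    double pi[N]   = { 1.0, 0.0, 0.0 };
    int obs[T]     = { 0, 0, 1, 1 };   /* example observation sequence */

    double alpha[T][N];

    /* Initialisation: alpha[0][i] = pi_i * b_i(o_0) */
    for (int i = 0; i < N; i++)
        alpha[0][i] = pi[i] * B[i][obs[0]];

    /* Induction: alpha[t][j] = (sum_i alpha[t-1][i] * a_ij) * b_j(o_t) */
    for (int t = 1; t < T; t++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int i = 0; i < N; i++)
                sum += alpha[t - 1][i] * A[i][j];
            alpha[t][j] = sum * B[j][obs[t]];
        }

    /* Termination: P(O | model) = sum_i alpha[T-1][i] */
    double p = 0.0;
    for (int i = 0; i < N; i++)
        p += alpha[T - 1][i];
    printf("P(observations | model) = %g\n", p);
    return 0;
}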
b. Neural network-based speech recognition
Another approach in acoustic modeling is the use of neural networks. They
are capable of solving much more complicated recognition tasks, but do not
scale as well as HMMs when it comes to large vocabularies. Rather than being used in general-purpose speech recognition applications, they are typically used where low quality, noisy data and speaker independence must be handled. Such systems can
achieve greater accuracy than HMM based systems, as long as there is
training data and the vocabulary is limited. A more general approach using
neural networks is phoneme recognition.
c. Dynamic time warping (DTW)-based speech recognition
Dynamic time warping is an algorithm for measuring similarity between two
sequences which may vary in time or speed. For instance, similarities in
walking patterns would be detected, even if in one video the person was
walking slowly and if in another they were walking more quickly, or even if
there were accelerations and decelerations during the course of one
observation. DTW has been applied to video, audio, and graphics; indeed, any data which can be turned into a linear representation can be analyzed with DTW. A minimal DTW distance sketch in C is given below.
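To make the idea concrete, the following C sketch computes the classical DTW cost between two short one-dimensional sequences using a dynamic-programming table. The sequences and their lengths are made-up example values, not data from this project.

/* Minimal sketch of dynamic time warping (DTW) between two 1-D sequences. */
#include <math.h>
#include <stdio.h>

#define LEN_A 5
#define LEN_B 6
#define INF   1e30

static double min3(double a, double b, double c)
{
    double m = a < b ? a : b;
    return m < c ? m : c;
}

/* Returns the accumulated DTW cost between sequences a and b. */
static double dtw(const double *a, int n, const double *b, int m)
{
    static double D[LEN_A + 1][LEN_B + 1];

    for (int i = 0; i <= n; i++)
        for (int j = 0; j <= m; j++)
            D[i][j] = INF;
    D[0][0] = 0.0;

    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++) {
            double cost = fabs(a[i - 1] - b[j - 1]);   /* local distance */
            D[i][j] = cost + min3(D[i - 1][j],          /* insertion      */
                                  D[i][j - 1],          /* deletion       */
                                  D[i - 1][j - 1]);     /* match          */
        }
    return D[n][m];
}

int main(void)
{
    double a[LEN_A] = { 1, 2, 3, 4, 3 };        /* "slow" pattern         */
    double b[LEN_B] = { 1, 1, 2, 3, 4, 3 };     /* same shape, stretched  */
    printf("DTW distance = %.2f\n", dtw(a, LEN_A, b, LEN_B));
    return 0;
}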
1.4 NATURE OF PROBLEM:
Speech recognition is the process of finding an interpretation of a spoken utterance; typically, this means finding the sequence of words that were spoken. This involves preprocessing the acoustic signals to parameterize them in a more usable and useful form. The input signal must be matched against a stored pattern, and a decision is then made to accept or reject the match.
The different types of problems we face in this project are enumerated below:
DIFFERENCES IN THE VOICES OF DIFFERENT PEOPLE:-
The voice of a man differs from the voice of a woman, which again differs from the voice of a baby. Different speakers have different vocal tracts and source physiology. Electrically speaking, the difference is in frequency: women and babies tend to speak at higher frequencies than men.
DIFFERENCES IN THE LOUDNESS OF SPOKEN WORDS:-
No two persons speak with the same loudness. One person may constantly speak in a loud manner while another speaks in a soft tone. Even if the same person speaks the same word at two different instants, there is no guarantee that he will speak it with the same loudness both times. The loudness also depends on the distance the microphone is held from the user's mouth. Electrically speaking, this difference is reflected in the amplitude of the generated digital signal.
DIFFERENCE IN THE TIME:-
Even if the same person speaks the same word at two different instants of time, there is no guarantee that he will speak it exactly the same way on both occasions. Electrically speaking, there is a problem of difference in time, i.e., indirectly, in frequency.
DIFFERENCES IN THE PROPERTIES OF MICROPHONES:-
There may be problems due to differences in the electrical properties of different
mikes and transmission channels.
DIFFERENCES IN THE PITCH:-
Pitch and other source features such as breathiness and amplitude can be varied
independently.
OTHER PROBLEMS:-
We have to make sure that the wheelchair does not go out of range of the user's voice. The output of the microphone is very small, and the output of the voice recognition chip is not directly compatible with the input required by the motors.
1.5 PROJECT OBJECTIVES:
• To equip the present motorized wheelchair control system with a voice command system. With this feature, disabled people, especially those with severe disabilities who are unable to move their hands or other parts of the body, are able to move their wheelchair around independently.
• To simplify the operation of the motorized wheelchair so as to make it easier and simpler for the disabled person to operate. With this simplified operation, many disabled people have a chance to use the system with little training on how to use it.
• To build a wheelchair control module and interface it with the speech recognition board as well as a wireless microphone unit.
• To build a motor control circuit and add a motor driving mechanism to an ordinary wheelchair.
• To integrate all the modules together to produce a wireless-controlled motorized wheelchair.
Chapter 2:
SYSTEM DESIGN FOR VOW
Fig 2.1 Voice operated wheelchair
2.1: BLOCK DIAGRAM OF V.O.W.
Fig 2.2
2.2 DESCRIPTION OF BLOCK DIAGRAM:
HARDWARE:
The block diagram of the voice operated wheelchair consists of the following blocks:
1) PIC microcontroller
2) Voice recognition block
3) Driver IC block
4) DC motors block
5) Battery
6) Battery charger
The description of these blocks is as follows.
1) MICROCONTROLLER PIC16F877
This is a 40-pin programmable microcontroller with a high-performance RISC CPU. It is used for controlling the movement and direction of the wheelchair
by controlling the two DC motors. The details of the microcontroller are given in the following section. The microcontroller unit is the core of the intelligent wheelchair: it interfaces the voice recognition unit with the motor driver circuit. The main function of this unit is to receive the data from the HM2007 IC through its data bus (D0-D7) and determine the right command to be given to the driver circuit. A PIC16F877A microcontroller with 33 I/O lines covers all the requisites for this wheelchair.
2) VOICE RECOGNITION IC HM2007
The voice recognition unit is built around the HM2007 IC. It is a Large Scale Integration (LSI) CMOS (Complementary Metal Oxide Semiconductor) circuit with an analog front end, voice analyzer, voice recognition processor and function control system embedded in a single chip. The unit also uses an HM6264B IC, a 64K external static RAM used by the HM2007 to store the trained words for the recognition phase, a 4*3 keypad, an external microphone and some other components assembled together to build a 40-word isolated-word recognition system. The voice recognition IC HM2007 is operated in speaker dependent recognition mode. In this mode, the unit responds only to the current user; if another person needs to use the same system, a new training phase must be performed. This mode reaches a high accuracy of more than 95% for voice command recognition.
3) MOTOR DRIVER CIRCUIT
The L293 and L293D are quadruple high-current half-H drivers. The L293 is
designed to provide bidirectional drive currents of up to 1 A at voltages from 4.5 V
to 36 V. The L293D is designed to provide bidirectional drive currents of up to
600-mA at voltages from 4.5 V to 36 V. Both devices are designed to drive
inductive loads such as relays, solenoids, dc and bipolar stepping motors, as well
as other high-current/high-voltage loads in positive-supply applications.
4) MOTORS (DC):
Two 12V dc motors are used in this experiment.
5) POWER SUPPLY SECTION
This section consists of a rechargeable battery and deals with the power requirements of the wheelchair: the DC motors, the microcontroller and the other sections. The battery provides the supply to the L293D driver IC, which drives the DC motors. The microcontroller and IR section operate on a 5V supply, which is derived from the 12V battery by an LM7805 5V regulator IC.
SOFTWARE REQUIRED:
• The MPLAB IDE is used for programming the microcontroller.
• Embedded C is the programming language used.
• Proteus 7 is used for simulation of the circuit.
2.3 SPECIFICATIONS:
Components:
Parts list for speech-recognition circuit
1. IC1: HM2007 IC
2. IC3: 74LS373
3. IC4 and IC5: 7448
4. XTAL: 3.57 MHz
5. Speech-recognition PCB
6. 12-contact keypad
7. 7-segment displays
8. Microphone
9. 12V battery clip
Parts list for interface circuit
1. Microcontroller PIC16F877A
2. L293D
3. 40 MHz crystal
4. DC motors
5. 7-pin connectors
COMPONENT SPECIFICATIONS:
1) HM2007 IC:
• It is a 48-pin DIP IC.
• Speaker dependent mode was used.
• A maximum of 40 words can be recognized.
• Each word can be at most 1.92 s long.
• A microphone can be connected directly to the analog input.
• 64K SRAM, two 7-segment displays and their drivers are connected.
2) L293D driver IC:
• Output current capability per driver: 600 mA
• Peak (pulse) current: 1.2 A per driver
• Package: 16-pin DIP
3) PIC Microcontroller 16F877A:
• Instruction set: 35 instructions
• Operating speed: DC to 20 MHz
• Flash program memory: up to 8K x 14 words
• Data memory: up to 368 x 8 bytes
• EEPROM data memory: up to 256 x 8 bytes
• Timers/Counters: 3 (two 8-bit, one 16-bit)
• Operating voltage: 2.0 V to 5.5 V
• A/D converter: 10-bit, 8-channel
4) DC Motors:
• Operating voltage: 12 V
• Speed: 100 rpm
• Current rating: up to 2 A
SOFTWARE:
a) Flow chart for voice training and recognition
Fig 2.3 Flow chart for voice training and recognition (start → press a memory number on the keypad, LED turns off and the number is shown on the 7-segment display → press the train (#) key → speak the word → LED blinks when the word is accepted → train the next word → end)
2.4 VOICE TRAINING AND RECOGNITION ALGORITHM:
• Clear the memory by pressing 99 *.
• Enter the location number to be trained.
• After entering the number, the LED will turn off.
• The number will be shown on the display.
• Next, press the train (#) key.
• The chip will now listen for voice input and the LED will turn on.
• Speak the word you want to train into the microphone.
• The LED should blink momentarily; this is the sign that the word has been accepted.
• Continue doing this for the other words.
• To test recognition, repeat a trained word into the microphone.
• If the word is correctly recognized, its location number is displayed.
• The error codes are:
55 - word too long
66 - word too short
77 - no match
(A short sketch of how the microcontroller can interpret these codes follows this list.)
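The sketch below shows one way the controller firmware could interpret the result byte produced after a recognition attempt. It assumes, for illustration only, that the error codes 55, 66 and 77 appear as the value read from the HM2007 data bus; the helper name handle_result is hypothetical and not taken from the report.

/* Illustrative sketch (assumed interface): interpret the HM2007 result byte.
 * Values 55, 66 and 77 are the error codes listed above; any other value is
 * treated as the memory location of the recognized word. */
#include <stdio.h>

static void handle_result(unsigned char code)
{
    switch (code) {
    case 55:
        printf("Error 55: word too long\n");
        break;
    case 66:
        printf("Error 66: word too short\n");
        break;
    case 77:
        printf("Error 77: no match\n");
        break;
    default:
        printf("Recognized word at memory location %u\n", code);
        break;
    }
}

int main(void)
{
    unsigned char samples[] = { 1, 5, 55, 66, 77 };  /* example result bytes */
    for (unsigned i = 0; i < sizeof samples; i++)
        handle_result(samples[i]);
    return 0;
}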
Fig 2.4
DESCRIPTION OF FLOW CHART FOR V.O.W.
• Start the process.
• Select the mode of operation.
• For voice mode, give the voice input command.
• If the voice input is ‘FORWARD’, execute the ‘FORWARD’ loop and the wheelchair moves forward; otherwise go to the next check.
• If the voice input is ‘BACKWARD’, execute the ‘BACKWARD’ loop and the wheelchair moves backward; otherwise go to the next check.
• If the voice input is ‘RIGHT’, execute the ‘RIGHT’ loop and the wheelchair turns right; otherwise go to the next check.
• If the voice input is ‘LEFT’, execute the ‘LEFT’ loop and the wheelchair turns left; otherwise go to the next check.
• Otherwise, execute the stop loop and the wheelchair stops.
• For manual mode, use the keypad: press 01 for FORWARD, 02 for BACKWARD, 03 for RIGHT, 04 for LEFT and 05 for STOP.
Chapter 3:
SYSTEM IMPLEMENTATION
3.1 Pin diagram of HM2007:
Fig 3.1
Description of pin diagram:
The pin diagram of the speech processor HM2007 is shown in the figure. The heart of this module is the voice processor HM2007 IC, manufactured by Hualon Microelectronics Corporation, which controls the overall voice recognition process. The data sheet is given in Appendix A. This processor is a 48-pin single-chip CMOS voice recognition LSI circuit with on-chip analog front end, voice analysis, recognition processing and system control functions. It uses a 3.57 MHz crystal as a clock to synchronize its operation. A 40-word isolated-word voice recognition system can be built from the HM2007, an external microphone, a keypad, a 64K SRAM external memory and some other components. The chip offers two selections of command word length:
1) A 40-word vocabulary with a maximum length of 0.96 seconds per word.
2) A 20-word vocabulary with a maximum length of 1.92 seconds per word.
Other features include 'speaker dependent' and 'speaker independent' recognition modes. A speaker-dependent system is trained by the individual who will be using the system [5]. It is capable of achieving a high command count and better than 95% accuracy for word recognition. The disadvantage of this approach is that the system only responds accurately to the individual who trained it. A speaker-independent system is trained to respond to a word regardless of who speaks. Therefore the system must respond to a large variety of speech patterns, inflections and enunciations of the target words. The command word count is usually lower than that of the speaker dependent system; however, high accuracy can still be maintained within processing limits. Combined with a microprocessor, an intelligent recognition system can be built.
3.2 Voice Recognition Module:
Fig 3.2 Voice recognition module
Description:
A general definition of voice or speech recognition is: the process of converting a speech or voice signal into a sequence of words by means of an algorithm implemented as a computer program. It is the ability of a machine or program to recognize spoken words by comparing the spoken commands with stored sound samples. In this technology the analog voice signal is converted into a digital signal by an analog-to-digital converter. This digital signal is then compared to the digital database of the system which has
been stored with digital speech patterns. The voice recognition board used in this thesis is the SR-06 from Images SI Inc., USA. It converts the analog voice signal to a digital output. The circuit is made up of four main blocks:
1. Speech recognition processor IC HM2007
2. Input device, a keypad used for word training
3. Digital display board, used to display the word number
4. External SRAM memory IC
3.3 SYSTEM CIRCUIT DIAGRAM:
Fig 3.3
3.4 DESCRIPTION OF SYSTEM CIRCUIT DIAGRAM:
CONNECTIONS:
For voice recognition, the IC HM2007 is used. The D-bus of the HM2007 is connected to Port B of the PIC microcontroller 16F877, and Port B is configured as an input port. Port D of the PIC microcontroller 16F877 is configured as an output port. The pins RD0/PSP0 (19), RD1/PSP1 (20), RD2/PSP2 (21) and RD3/PSP3 (22) are connected to pins 3A (10), 1A (2), 4A (15) and 2A (7) of the L293D respectively. Pins 1Y (3), 2Y (6), 3Y (11) and 4Y (14) are the output pins of the L293D; the two DC motors are connected to these pins.
WORKING:
There are two modes provided by the HM2007.
1) MANUAL MODE:
In this operating mode a keypad, SRAM and other components are connected to the HM2007 to build a simple recognition circuit. The SRAM has a capacity of 8K bytes.
(a) Power on: When power is applied, the HM2007 starts its initialization process. If the WAIT pin is low, the IC performs a memory check to verify that the SRAM is working; if the pin is high, the memory check is skipped. After initialization is done, the IC moves to recognition mode.
(b) Recognition mode (WAIT pin high): In this mode, RDY is set low and the HM2007 is ready to accept voice input. When voice input is detected, RDY goes high and the IC begins its recognition process. After recognition, the result appears on the D-bus of the HM2007 with the DEN pin active. The result is the memory location of the recognized word, in binary form.
This binary output is given to Port B of the PIC microcontroller 16F877. The microcontroller compares the output from the HM2007 with the values specified in the program. If the two values match, the microcontroller executes the corresponding subroutine: the four Port D pins are connected to the input pins of the driver IC L293D, and the motors rotate accordingly.
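To make this flow concrete, a minimal firmware sketch is given below. It is a hedged illustration written for an XC8-style PIC C toolchain, not the report's actual program: the command codes follow Table 1, PORTB is simply polled (the DEN handshake and configuration bits are omitted), and treating logic 1 on the 1A/3A inputs as the forward direction of each motor is an assumption.

/* Illustrative firmware sketch only (not the report's actual program).
 * Assumptions: XC8-style toolchain, HM2007 result polled from PORTB,
 * "1A/3A high, 2A/4A low" taken as forward for each motor.
 * Wiring per section 3.4: RD0->3A, RD1->1A, RD2->4A, RD3->2A. */
#include <xc.h>

#define CMD_FORWARD  0x01   /* command codes from Table 1 */
#define CMD_BACKWARD 0x02
#define CMD_RIGHT    0x03
#define CMD_LEFT     0x04
#define CMD_STOP     0x05

void main(void)
{
    TRISB = 0xFF;                /* Port B as input: HM2007 D-bus  */
    TRISD = 0x00;                /* Port D as output: L293D inputs */
    PORTD = 0x00;                /* start with both motors stopped */

    while (1) {
        unsigned char code = PORTB;          /* read recognition result */

        switch (code) {
        case CMD_FORWARD:  PORTD = 0x03; break; /* RD0,RD1 high: both motors forward */
        case CMD_BACKWARD: PORTD = 0x0C; break; /* RD2,RD3 high: both motors reverse */
        case CMD_RIGHT:    PORTD = 0x02; break; /* left motor forward, right stopped */
        case CMD_LEFT:     PORTD = 0x01; break; /* right motor forward, left stopped */
        case CMD_STOP:                           /* fall through                     */
        default:           PORTD = 0x00; break; /* stop on STOP or unknown code      */
        }
    }
}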
COMBINATIONS FOR MOTOR DRIVER IC L293D:
Pin 2 = logic 1 and pin 7 = logic 0: CLOCKWISE DIRECTION
Pin 2 = logic 0 and pin 7 = logic 1: ANTICLOCKWISE DIRECTION
Pin 2 = logic 0 and pin 7 = logic 0: NO ROTATION
Pin 2 = logic 1 and pin 7 = logic 1: NO ROTATION
In a very similar way, the motor on the right-hand side can be operated through pins 15 and 10. A small helper-function sketch for these combinations is given below.
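The truth table above can be wrapped in small per-motor helper functions. This is a hedged sketch for an XC8-style toolchain; it follows the wiring given in section 3.4 (RD0→3A, RD1→1A, RD2→4A, RD3→2A) and assumes that the 1A/2A pair drives the left motor and the 3A/4A pair drives the right motor, with the clockwise combination taken as each motor's forward direction.

/* Hedged sketch: per-motor direction helpers built on the L293D truth table.
 * Bit-to-pin mapping follows section 3.4; which rotation is "forward" is an
 * assumption made for illustration. */
#include <xc.h>

#define LEFT_1A  1   /* RD1 -> L293D pin 2  (1A) */
#define LEFT_2A  3   /* RD3 -> L293D pin 7  (2A) */
#define RIGHT_3A 0   /* RD0 -> L293D pin 10 (3A) */
#define RIGHT_4A 2   /* RD2 -> L293D pin 15 (4A) */

static void set_motor(unsigned char bit_a, unsigned char bit_b,
                      unsigned char a, unsigned char b)
{
    /* Write one input pair: 1/0 = clockwise, 0/1 = anticlockwise,
     * 0/0 or 1/1 = no rotation, exactly as in the table above. */
    if (a) PORTD |=  (unsigned char)(1u << bit_a); else PORTD &= (unsigned char)~(1u << bit_a);
    if (b) PORTD |=  (unsigned char)(1u << bit_b); else PORTD &= (unsigned char)~(1u << bit_b);
}

void left_motor_forward(void)  { set_motor(LEFT_1A,  LEFT_2A,  1, 0); }
void left_motor_reverse(void)  { set_motor(LEFT_1A,  LEFT_2A,  0, 1); }
void left_motor_stop(void)     { set_motor(LEFT_1A,  LEFT_2A,  0, 0); }
void right_motor_forward(void) { set_motor(RIGHT_3A, RIGHT_4A, 1, 0); }
void right_motor_reverse(void) { set_motor(RIGHT_3A, RIGHT_4A, 0, 1); }
void right_motor_stop(void)    { set_motor(RIGHT_3A, RIGHT_4A, 0, 0); }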
3.5 PCB LAYOUT
Fig 3.4 PCB Layout of PIC 16F877A
Fig 3.5 PCB Layout of PIC 16F877A
Fig 3.6 PCB Layout L293D Driver circuit
3.6 MOTION OF WHEELCHAIR:
• The main part of the design is controlling the motion of the wheelchair. Four motion conditions are considered: moving forward, moving in the reverse direction, moving to the left and moving to the right. For the speed, the user may use a slow or fast speed command.
• The system starts by applying the supply voltage to the speech recognition circuit. For the fast condition the system supplies a higher current to the motors.
• If the user does not want the wheelchair to move at high speed, the slow speed command can be used, which applies a lower supply current to the motors. The possible wheelchair directions and movements are given below (a short sketch mapping them onto the motor helper functions follows this list).
• Forward: both motors run in the forward direction.
• Reverse: both motors run in the reverse direction.
• Left: the left motor is stopped and the right motor runs forward.
• Right: the right motor is stopped and the left motor runs forward.
• Stop: both motors are stopped.
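Building on the per-motor helpers sketched in section 3.4, the five motions listed above can be expressed as follows. The helper names come from that earlier sketch and are assumptions, not the report's code.

/* Hedged sketch: map each wheelchair motion from the list above onto the
 * per-motor helper functions defined in the earlier L293D sketch. */

/* Prototypes from the earlier helper sketch (assumed). */
void left_motor_forward(void);  void left_motor_reverse(void);  void left_motor_stop(void);
void right_motor_forward(void); void right_motor_reverse(void); void right_motor_stop(void);

void wheelchair_forward(void) { left_motor_forward(); right_motor_forward(); }
void wheelchair_reverse(void) { left_motor_reverse(); right_motor_reverse(); }
void wheelchair_left(void)    { left_motor_stop();    right_motor_forward(); }
void wheelchair_right(void)   { right_motor_stop();   left_motor_forward();  }
void wheelchair_stop(void)    { left_motor_stop();    right_motor_stop();    }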
3.7 SNAPSHOT OF VARIOUS CIRCUIT BOARDS
Fig 3.7 HM 2007 Board
Fig 3.8 Initial stage
Fig 3.9 Interfacing of HM2007 with microcontroller
3.8 RESULTS
OUTPUT TABLE:
Table 1
Command | B.7 B.6 B.5 B.4 B.3 B.2 B.1 B.0 | Code | Required motion
FORWARD | 0 0 0 0 0 0 0 1 | 01 | FORWARD
BACKWARD | 0 0 0 0 0 0 1 0 | 02 | BACKWARD
RIGHT | 0 0 0 0 0 0 1 1 | 03 | RIGHT
LEFT | 0 0 0 0 0 1 0 0 | 04 | LEFT
STOP | 0 0 0 0 0 1 0 1 | 05 | STOP
EXPERIMENTAL OBSERVATION SNAPSHOTS
Fig 3.10 FORWARD MOTION
Memory location=01
Fig 3.11 BACKWARD MOTION
Memory location=02
Fig 3.12 RIGHT MOTION
Memory location=03
Fig 3.13 LEFT MOTION
Memory location=04
Fig 3.14 STOP
Memory location=05
EXPERIMENTAL OBSERVATIONS FOR TESTING:
Table 2
COMMAND GIVEN | OBSERVED MOTION (SPEAKER 1 / SPEAKER 2 / SPEAKER 3) | ACCURACY OF RESPONSE
FORWARD | FORWARD / FORWARD / FORWARD | 100%
BACKWARD | BACKWARD / BACKWARD / BACKWARD | 100%
RIGHT | RIGHT / NO MOTION / RIGHT | 66.66%
LEFT | FORWARD / LEFT / LEFT | 66.66%
STOP | STOP / STOP / STOP | 100%
DESCRIPTION OF RESULTS:
1) Table 1 shows the output present on Port B of the PIC microcontroller 16F877. When we give the voice input ‘FORWARD’, trained at memory location 01H, the IC HM2007 assigns this voice input to that memory location and puts its binary form on Port B of the microcontroller. The outputs for all the other voice inputs are obtained in the same way.
2) Table 2 shows the actual testing results of the voice operated wheelchair, observed with three different speakers. For the ‘FORWARD’ command, all three speakers obtained ‘FORWARD’ motion of the wheelchair, and for the ‘BACKWARD’ command all speakers obtained ‘BACKWARD’ motion, so the accuracy is 100% for FORWARD and BACKWARD. For the command ‘RIGHT’, speaker 2 obtained ‘NO MOTION’, and for the ‘LEFT’ command, speaker 1 obtained ‘FORWARD’ motion, so the accuracy for ‘LEFT’ and ‘RIGHT’ is 66.66% (2 out of 3 speakers). The ‘STOP’ command achieved 100% accuracy.
3.9 ADVANTAGES
1) A handicapped person without legs can use this wheelchair and become independent.
2) It reduces the need for manpower.
3) It is user friendly.
3.10 CONCLUSION:
With the HM2007, the efficiency of detecting voice commands and controlling the wheelchair is significantly increased. This voice operated wheelchair will assist handicapped persons and make them self-dependent for movement, for which they are otherwise dependent on others most of the time. A person with disabled legs and arms can use this wheelchair efficiently as long as he is able to speak.
3.11 FUTURE SCOPE:
The wheelchair speed control system is targeted to be operated both indoors and outdoors. This means it has to be noise-proof and weather-proof. It must have the ability to recognize the command word even in the presence of background noise.
Chapter 4:
APPENDIX
4.1 REFERENCES:
[1] S. D. Suryawanshi, J. S. Chitode and S. S. Pethakar, "Voice Operated Intelligent Wheelchair", International Journal of Advanced Research in Computer Science and Software Engineering.
[2] M. Prathyusha, K. S. Roy and Mahaboob Ali Shaik, "Voice Based Direction and Speed Control of Wheel Chair for Physically Challenged", International Journal of Engineering Trends and Technology (IJETT).
[3] Gabriel Pires and Urbano Nunes, "A Wheelchair Steered through Voice Commands", Journal of Intelligent and Robotic Systems.
[4] Richard Simpson, "Smart Wheelchairs: A Literature Survey", Journal of Rehabilitation Research & Development.
4.2 DATASHEETS:

Weitere ähnliche Inhalte

Was ist angesagt?

HDL Implementation of Vending Machine Report with Verilog Code
HDL Implementation of Vending Machine Report with Verilog CodeHDL Implementation of Vending Machine Report with Verilog Code
HDL Implementation of Vending Machine Report with Verilog CodePratik Patil
 
Project report on home automation using Arduino
Project report on home automation using Arduino Project report on home automation using Arduino
Project report on home automation using Arduino AMIT SANPUI
 
Automatic voice control wheelchair
Automatic voice control wheelchairAutomatic voice control wheelchair
Automatic voice control wheelchairMohit Nagar
 
Home automation ppt-kamal lamichhane
Home automation ppt-kamal lamichhaneHome automation ppt-kamal lamichhane
Home automation ppt-kamal lamichhaneKamal Lamichhane
 
Smart Voice Controlled Wheelchair
Smart Voice Controlled WheelchairSmart Voice Controlled Wheelchair
Smart Voice Controlled WheelchairIJLT EMAS
 
Gesture Control Robot
Gesture Control RobotGesture Control Robot
Gesture Control Robotnikhilsaini25
 
HAND GESTURE CONTROLLED WHEEL CHAIR
HAND GESTURE CONTROLLED WHEEL CHAIRHAND GESTURE CONTROLLED WHEEL CHAIR
HAND GESTURE CONTROLLED WHEEL CHAIRNoufal Nechiyan
 
Home automation using arduino
Home automation using arduinoHome automation using arduino
Home automation using arduinoIkram Arshad
 
Advanced Topics In Digital Signal Processing
Advanced Topics In Digital Signal ProcessingAdvanced Topics In Digital Signal Processing
Advanced Topics In Digital Signal ProcessingJim Jenkins
 
ANDROID BASED AUTOMATED SMART WHEELCHAIR
ANDROID BASED AUTOMATED SMART WHEELCHAIRANDROID BASED AUTOMATED SMART WHEELCHAIR
ANDROID BASED AUTOMATED SMART WHEELCHAIRshashank tiwari
 
Women Safety Night Patrolling Robot Using IOT
Women Safety Night Patrolling Robot Using IOTWomen Safety Night Patrolling Robot Using IOT
Women Safety Night Patrolling Robot Using IOTDr. Amarjeet Singh
 
IoT Based Garbage Monitoring System ppt
IoT Based Garbage Monitoring System pptIoT Based Garbage Monitoring System ppt
IoT Based Garbage Monitoring System pptRanjan Gupta
 
GSM based patient monitoring system
GSM based patient monitoring systemGSM based patient monitoring system
GSM based patient monitoring systemssvarma k
 
DIGITAL SIGNAL PROCESSING
DIGITAL SIGNAL PROCESSINGDIGITAL SIGNAL PROCESSING
DIGITAL SIGNAL PROCESSINGSnehal Hedau
 
LED and LASER source in optical communication
LED and LASER source in optical communicationLED and LASER source in optical communication
LED and LASER source in optical communicationbhupender rawat
 
Electronic hand glove for deaf and blindppt
Electronic hand glove for deaf and blindpptElectronic hand glove for deaf and blindppt
Electronic hand glove for deaf and blindpptgtsooka
 
PULSE WIDTH MODULATION &DEMODULATION
PULSE WIDTH MODULATION &DEMODULATIONPULSE WIDTH MODULATION &DEMODULATION
PULSE WIDTH MODULATION &DEMODULATIONbharath405
 

Was ist angesagt? (20)

Smart glove
Smart gloveSmart glove
Smart glove
 
HDL Implementation of Vending Machine Report with Verilog Code
HDL Implementation of Vending Machine Report with Verilog CodeHDL Implementation of Vending Machine Report with Verilog Code
HDL Implementation of Vending Machine Report with Verilog Code
 
Project report on home automation using Arduino
Project report on home automation using Arduino Project report on home automation using Arduino
Project report on home automation using Arduino
 
Automatic voice control wheelchair
Automatic voice control wheelchairAutomatic voice control wheelchair
Automatic voice control wheelchair
 
Home automation ppt-kamal lamichhane
Home automation ppt-kamal lamichhaneHome automation ppt-kamal lamichhane
Home automation ppt-kamal lamichhane
 
Smart Voice Controlled Wheelchair
Smart Voice Controlled WheelchairSmart Voice Controlled Wheelchair
Smart Voice Controlled Wheelchair
 
Gesture Control Robot
Gesture Control RobotGesture Control Robot
Gesture Control Robot
 
HAND GESTURE CONTROLLED WHEEL CHAIR
HAND GESTURE CONTROLLED WHEEL CHAIRHAND GESTURE CONTROLLED WHEEL CHAIR
HAND GESTURE CONTROLLED WHEEL CHAIR
 
Properties of dft
Properties of dftProperties of dft
Properties of dft
 
Home automation using arduino
Home automation using arduinoHome automation using arduino
Home automation using arduino
 
Advanced Topics In Digital Signal Processing
Advanced Topics In Digital Signal ProcessingAdvanced Topics In Digital Signal Processing
Advanced Topics In Digital Signal Processing
 
ANDROID BASED AUTOMATED SMART WHEELCHAIR
ANDROID BASED AUTOMATED SMART WHEELCHAIRANDROID BASED AUTOMATED SMART WHEELCHAIR
ANDROID BASED AUTOMATED SMART WHEELCHAIR
 
Women Safety Night Patrolling Robot Using IOT
Women Safety Night Patrolling Robot Using IOTWomen Safety Night Patrolling Robot Using IOT
Women Safety Night Patrolling Robot Using IOT
 
Flexible electronic skin
Flexible electronic skinFlexible electronic skin
Flexible electronic skin
 
IoT Based Garbage Monitoring System ppt
IoT Based Garbage Monitoring System pptIoT Based Garbage Monitoring System ppt
IoT Based Garbage Monitoring System ppt
 
GSM based patient monitoring system
GSM based patient monitoring systemGSM based patient monitoring system
GSM based patient monitoring system
 
DIGITAL SIGNAL PROCESSING
DIGITAL SIGNAL PROCESSINGDIGITAL SIGNAL PROCESSING
DIGITAL SIGNAL PROCESSING
 
LED and LASER source in optical communication
LED and LASER source in optical communicationLED and LASER source in optical communication
LED and LASER source in optical communication
 
Electronic hand glove for deaf and blindppt
Electronic hand glove for deaf and blindpptElectronic hand glove for deaf and blindppt
Electronic hand glove for deaf and blindppt
 
PULSE WIDTH MODULATION &DEMODULATION
PULSE WIDTH MODULATION &DEMODULATIONPULSE WIDTH MODULATION &DEMODULATION
PULSE WIDTH MODULATION &DEMODULATION
 

Andere mochten auch

Wheelchair is guided by voice commands full documentation
Wheelchair is guided by voice commands full documentationWheelchair is guided by voice commands full documentation
Wheelchair is guided by voice commands full documentationMajd Khaleel
 
Performance analysis of voice operated wheel chair
Performance analysis of voice operated wheel chairPerformance analysis of voice operated wheel chair
Performance analysis of voice operated wheel chaireSAT Publishing House
 
ACCELEROMETER BASED GESTURE ROBO CAR
ACCELEROMETER BASED GESTURE ROBO CARACCELEROMETER BASED GESTURE ROBO CAR
ACCELEROMETER BASED GESTURE ROBO CARHarshit Jain
 
Speech and Language Processing
Speech and Language ProcessingSpeech and Language Processing
Speech and Language ProcessingVikalp Mahendra
 
Voice and touchscreen operated wheelchair report.
Voice and touchscreen operated wheelchair report.Voice and touchscreen operated wheelchair report.
Voice and touchscreen operated wheelchair report.Syed Saleem Ahmed
 
Abstract-Voice Recognition Whel Chair
Abstract-Voice Recognition Whel ChairAbstract-Voice Recognition Whel Chair
Abstract-Voice Recognition Whel ChairDhammika Vidanalage
 
Report on touch screen
Report on touch screenReport on touch screen
Report on touch screenAlisha Korpal
 
Report on Touch Screens
Report on Touch ScreensReport on Touch Screens
Report on Touch ScreensPavan Kumar MT
 
Powered wheel chair ppt
Powered wheel chair pptPowered wheel chair ppt
Powered wheel chair pptbaggaraghav0
 
zForce Touch Screen Technology
zForce Touch Screen TechnologyzForce Touch Screen Technology
zForce Touch Screen TechnologySuryakanta Rout
 
Project Report On GREEN HUMAN RESOURCE MANAGEMENT (GHRM)
Project Report On GREEN HUMAN RESOURCE MANAGEMENT (GHRM) Project Report On GREEN HUMAN RESOURCE MANAGEMENT (GHRM)
Project Report On GREEN HUMAN RESOURCE MANAGEMENT (GHRM) gunvender sharma
 
Green technology 06 42_50
Green technology 06 42_50Green technology 06 42_50
Green technology 06 42_50domsr
 
Voice morphing ppt
Voice morphing pptVoice morphing ppt
Voice morphing ppthimadrigupta
 
Ppt on wheel chair edited2
Ppt on wheel chair edited2Ppt on wheel chair edited2
Ppt on wheel chair edited2Rajkumar Thakur
 

Andere mochten auch (20)

Wheelchair is guided by voice commands full documentation
Wheelchair is guided by voice commands full documentationWheelchair is guided by voice commands full documentation
Wheelchair is guided by voice commands full documentation
 
Performance analysis of voice operated wheel chair
Performance analysis of voice operated wheel chairPerformance analysis of voice operated wheel chair
Performance analysis of voice operated wheel chair
 
Idioms
IdiomsIdioms
Idioms
 
Wonderstruck
WonderstruckWonderstruck
Wonderstruck
 
ACCELEROMETER BASED GESTURE ROBO CAR
ACCELEROMETER BASED GESTURE ROBO CARACCELEROMETER BASED GESTURE ROBO CAR
ACCELEROMETER BASED GESTURE ROBO CAR
 
Speech and Language Processing
Speech and Language ProcessingSpeech and Language Processing
Speech and Language Processing
 
iot contest file
iot contest fileiot contest file
iot contest file
 
Voice and touchscreen operated wheelchair report.
Voice and touchscreen operated wheelchair report.Voice and touchscreen operated wheelchair report.
Voice and touchscreen operated wheelchair report.
 
Abstract-Voice Recognition Whel Chair
Abstract-Voice Recognition Whel ChairAbstract-Voice Recognition Whel Chair
Abstract-Voice Recognition Whel Chair
 
Report on touch screen
Report on touch screenReport on touch screen
Report on touch screen
 
Report on Touch Screens
Report on Touch ScreensReport on Touch Screens
Report on Touch Screens
 
Powered wheel chair ppt
Powered wheel chair pptPowered wheel chair ppt
Powered wheel chair ppt
 
zForce Touch Screen Technology
zForce Touch Screen TechnologyzForce Touch Screen Technology
zForce Touch Screen Technology
 
Touch screen report
Touch screen reportTouch screen report
Touch screen report
 
Project Report On GREEN HUMAN RESOURCE MANAGEMENT (GHRM)
Project Report On GREEN HUMAN RESOURCE MANAGEMENT (GHRM) Project Report On GREEN HUMAN RESOURCE MANAGEMENT (GHRM)
Project Report On GREEN HUMAN RESOURCE MANAGEMENT (GHRM)
 
Green technology 06 42_50
Green technology 06 42_50Green technology 06 42_50
Green technology 06 42_50
 
Voice morphing ppt
Voice morphing pptVoice morphing ppt
Voice morphing ppt
 
White led
White ledWhite led
White led
 
Ppt on wheel chair edited2
Ppt on wheel chair edited2Ppt on wheel chair edited2
Ppt on wheel chair edited2
 
Wheelchairs
WheelchairsWheelchairs
Wheelchairs
 

Ähnlich wie FINAL report

Artificial Intelligence for Speech Recognition
Artificial Intelligence for Speech RecognitionArtificial Intelligence for Speech Recognition
Artificial Intelligence for Speech RecognitionRHIMRJ Journal
 
AI for voice recognition.pptx
AI for voice recognition.pptxAI for voice recognition.pptx
AI for voice recognition.pptxJhalakDashora
 
A survey on Enhancements in Speech Recognition
A survey on Enhancements in Speech RecognitionA survey on Enhancements in Speech Recognition
A survey on Enhancements in Speech RecognitionIRJET Journal
 
A Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign LanguageA Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign Languageijsrd.com
 
Artificial Intelligence- An Introduction
Artificial Intelligence- An IntroductionArtificial Intelligence- An Introduction
Artificial Intelligence- An Introductionacemindia
 
Artificial Intelligence - An Introduction
Artificial Intelligence - An Introduction Artificial Intelligence - An Introduction
Artificial Intelligence - An Introduction acemindia
 
Speech to text conversion
Speech to text conversionSpeech to text conversion
Speech to text conversionankit_saluja
 
Speech to text conversion
Speech to text conversionSpeech to text conversion
Speech to text conversionankit_saluja
 
Speech Recognition in Artificail Inteligence
Speech Recognition in Artificail InteligenceSpeech Recognition in Artificail Inteligence
Speech Recognition in Artificail InteligenceIlhaan Marwat
 
Voice Recognition Based Automation System for Medical Applications and for Ph...
Voice Recognition Based Automation System for Medical Applications and for Ph...Voice Recognition Based Automation System for Medical Applications and for Ph...
Voice Recognition Based Automation System for Medical Applications and for Ph...IRJET Journal
 
Voice Recognition Based Automation System for Medical Applications and for Ph...
Voice Recognition Based Automation System for Medical Applications and for Ph...Voice Recognition Based Automation System for Medical Applications and for Ph...
Voice Recognition Based Automation System for Medical Applications and for Ph...IRJET Journal
 
Speech recognition
Speech recognitionSpeech recognition
Speech recognitionCharu Joshi
 
Developing a hands-free interface to operate a Computer using voice command
Developing a hands-free interface to operate a Computer using voice commandDeveloping a hands-free interface to operate a Computer using voice command
Developing a hands-free interface to operate a Computer using voice commandMohammad Liton Hossain
 
Utterance based speaker identification
Utterance based speaker identificationUtterance based speaker identification
Utterance based speaker identificationIJCSEA Journal
 
Utterance Based Speaker Identification Using ANN
Utterance Based Speaker Identification Using ANNUtterance Based Speaker Identification Using ANN
Utterance Based Speaker Identification Using ANNIJCSEA Journal
 
Utterance Based Speaker Identification Using ANN
Utterance Based Speaker Identification Using ANNUtterance Based Speaker Identification Using ANN
Utterance Based Speaker Identification Using ANNIJCSEA Journal
 

Ähnlich wie FINAL report (20)

Artificial Intelligence for Speech Recognition
Artificial Intelligence for Speech RecognitionArtificial Intelligence for Speech Recognition
Artificial Intelligence for Speech Recognition
 
AI for voice recognition.pptx
AI for voice recognition.pptxAI for voice recognition.pptx
AI for voice recognition.pptx
 
A survey on Enhancements in Speech Recognition
A survey on Enhancements in Speech RecognitionA survey on Enhancements in Speech Recognition
A survey on Enhancements in Speech Recognition
 
A Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign LanguageA Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign Language
 
Artificial Intelligence- An Introduction
Artificial Intelligence- An IntroductionArtificial Intelligence- An Introduction
Artificial Intelligence- An Introduction
 
Artificial Intelligence - An Introduction
Artificial Intelligence - An Introduction Artificial Intelligence - An Introduction
Artificial Intelligence - An Introduction
 
Speech to text conversion
Speech to text conversionSpeech to text conversion
Speech to text conversion
 
Speech to text conversion
Speech to text conversionSpeech to text conversion
Speech to text conversion
 
Speech Recognition in Artificail Inteligence
Speech Recognition in Artificail InteligenceSpeech Recognition in Artificail Inteligence
Speech Recognition in Artificail Inteligence
 
Voice Recognition Based Automation System for Medical Applications and for Ph...
Voice Recognition Based Automation System for Medical Applications and for Ph...Voice Recognition Based Automation System for Medical Applications and for Ph...
Voice Recognition Based Automation System for Medical Applications and for Ph...
 
Voice Recognition Based Automation System for Medical Applications and for Ph...
Voice Recognition Based Automation System for Medical Applications and for Ph...Voice Recognition Based Automation System for Medical Applications and for Ph...
Voice Recognition Based Automation System for Medical Applications and for Ph...
 
Bt35408413
Bt35408413Bt35408413
Bt35408413
 
30
3030
30
 
[IJET-V1I6P21] Authors : Easwari.N , Ponmuthuramalingam.P
[IJET-V1I6P21] Authors : Easwari.N , Ponmuthuramalingam.P[IJET-V1I6P21] Authors : Easwari.N , Ponmuthuramalingam.P
[IJET-V1I6P21] Authors : Easwari.N , Ponmuthuramalingam.P
 
Speech recognition
Speech recognitionSpeech recognition
Speech recognition
 
Developing a hands-free interface to operate a Computer using voice command
Developing a hands-free interface to operate a Computer using voice commandDeveloping a hands-free interface to operate a Computer using voice command
Developing a hands-free interface to operate a Computer using voice command
 
Utterance based speaker identification
Utterance based speaker identificationUtterance based speaker identification
Utterance based speaker identification
 
Utterance Based Speaker Identification Using ANN
Utterance Based Speaker Identification Using ANNUtterance Based Speaker Identification Using ANN
Utterance Based Speaker Identification Using ANN
 
Utterance Based Speaker Identification Using ANN
Utterance Based Speaker Identification Using ANNUtterance Based Speaker Identification Using ANN
Utterance Based Speaker Identification Using ANN
 
Seminar
SeminarSeminar
Seminar
 

FINAL report

  • 1. VOICE OPERATED WHEELCHAIR 1 ABSTRACT Many disabled people usually depend on others in their daily life especially in getting from one place to another. For the wheelchair users, they need continuously someone to help them in going the wheelchair moving. By having a wheelchair control system will help handicapped persons become independent. The system is a wireless wheelchair control system which employs a voice recognition system for triggering and controlling all its movements. The wheelchair responds to the voice command from its user to perform any movements functions. It integrates a microcontroller, wireless microphone, voice recognition processor, motor control interface board to move the wheelchair. By using the system, the users are able to operate the wheelchair by simply speak to the wheelchair microphone. The basic movement functions includes forward and reverse direction, left and right turns and stop It utilizes a PIC controller microchip 16f877a to control the system operations. It communicates with the voice recognition processor to detect word spoken and then determines the corresponding output command to drive the left and right motors. To accomplish this task, an assembly language program is written and stored in the controller's memory .In order to recognize the spoken words, the voice recognition processor HM 2007 must be trained with the word spoken out by the user who is going to operate the wheelchair.
  • 3. VOICE OPERATED WHEELCHAIR 3 1.1 GENERALOVERVIEW: A wheelchair is a wheeled mobility device in which the user sits. The device is propelled either manually by pushing the wheels with the hands or via various automated systems. Wheelchairs are used by people for whom walking is difficult or impossible due to illness, injury, or disability. People with walking disability often need to use a wheelchair “World report on disability" jointly presented by World Health Organization (WHO) and World Bank says that there are 70 million people are handicapped in the world. Unfortunately day by day the number of handicapped people is going on increasing due to road accidents as well as disease like paralysis. If a person is handicapped he is dependent on other person for his day to day work like transport, food, orientation etc. So a voice operated wheel chair is developed which will operate automatically on the commands from the handicapped user for movement purpose.
  • 4. VOICE OPERATED WHEELCHAIR 4 1.2 LITERATURE SURVEY:  There are many scientists and researchers who develop computer software that can recognize human voice commands in so many languages such as English, Japanese and Thai. There are many techniques that are used to recognize voice commands .[1]  Researchers transform sound wave into digital wave by a computer. After that they use digital signal to manage different electronic equipments, for example 1)controlling robot arm movement 2)helping the handicapped to move a wheel chair etc.[2]  According to “ IJRET” In the paper on “Voice Operated Intelligent Wheelchair” , Mat lab software is used for input signal processing and that signal given to the ARM Processor LPC2138.[3]  In recent paper of “ IJRET”, input is given to IC HM2007.HM 2007 IC is used for the voice recognition purpose. HM 2007 generates the output signal depending on the input from the user.[4]
  • 5. VOICE OPERATED WHEELCHAIR 5 1.3 THEROTICALBACKGROUND: Voice enabled devices basically use the principal of speech recognition. It is the process of electronically converting a speech waveform (as the realization of a linguistic expression) into words (as a best-decoded sequence of linguistic units). Converting a speechwaveform into a sequence of words involves several essential steps: i. A microphone picks up the signal of the speech to be recognized and converts it into an electrical signal. A modern speech recognition system also requires that the electrical signal be represented digitally by means of an analog-to-digital(A/D) conversion process, so that it can be processed with a digital computer or microprocessor. ii. This speech signal is then analyzed (in the analysis block) to produce a representation consisting of salient features of the speech. The most prevalent feature of speech is derived from its short-time spectrum, measured successively over short-time windows of length 20–30 milliseconds overlapping at intervals of10–20 ms.Each short-time spectrum is transformed into a feature vector, and the temporal sequence of such feature vectors thus forms a speech pattern. iii. The speech pattern is then compared to a store of phoneme patterns or models through a dynamic programming process in order to generate a hypothesis (or a number of hypotheses) of the phonemic unit sequence. (A phoneme is a basic unit of speech and a phoneme model is a succinct representation of the signal that corresponds to a phoneme, usually embedded in an utterance.) A speech signal inherently has substantial variations along many dimensions.Before we understand the design of the project let us first understand speech recognition types and styles. Speech recognition is classified into two categories, speaker dependent and speaker independent.
  • 6. VOICE OPERATED WHEELCHAIR 6 Speaker dependent systems are trained by the individual who will be using the system. These systems are capable of achieving a high command count and better than 95% accuracy for word recognition. The drawback to this approach is that the system only responds accurately only to the individual who trained the system. This is the most common approach employed in software for personal computers. Speaker independent is a system trained to respond to a word regardless of who speaks. Therefore the system must respond to a large variety of speech patterns, inflections and enunciation's of the target word. The command word count is usually lower than the speaker dependent however high accuracy can still be maintain within processing limits. Industrial requirements more often need speaker independent voice systems, such as the AT&T system used in the telephone systems. A more general form of voice recognition is available through feature analysis and this technique usually leads to "speaker-independent" voice recognition. RecognitionStyle Speech recognition systems have another constraint concerning the style of speech they can recognize. They are three styles of speech: isolated, connected and continuous. Isolated speech recognition systems can just handle words that are spoken separately. This is the most common speech recognition systems available today. The user must pause between each word or command spoken. The speech recognition circuit is set up to identify isolated words of .96 second lengths. Connected is a half way point between isolated word and continuous speech recognition. Allows users to speak multiple words. The HM2007 can be set up to identify words or phrases 1.92 seconds in length. This reduces the word recognition vocabulary number to20.
  • 7. VOICE OPERATED WHEELCHAIR 7  Approaches of Statistical Speech Recognition a. Hidden Markovmodel (HMM)-based speechrecognition Modern general-purpose speech recognition systems are generally based on hidden Markov models (HMMs). This is a statistical model which outputs a sequence of symbols or quantities. One possible reason why HMMs are used in speech recognition is that a speech signal could be viewed as a piece-wise stationary signal or a short-time stationary signal. That is, one could assume in a short-time in the range of 10 milliseconds, speech could be approximated as a stationary process. Speech could thus be thought as a Markov model for many stochastic processes (known as states). Another reason why HMMs are popular is because they can be trained automatically and are simple and computationally feasible to use. b. Neuralnetwork-basedspeechrecognition Another approach in acoustic modeling is the use of neural networks. They are capable of solving much more complicated recognition tasks, but do not scale as well as HMMs when it comes to large vocabularies. Rather than being used in general-purpose speech recognition applications they can handle low quality, noisy data and speaker independence. Such systems can achieve greater accuracy than HMM based systems, as long as there is training data and the vocabulary is limited. A more general approach using neural networks is phoneme recognition. c. Dynamic time warping (DTW)-basedspeechrecognition Dynamic time warping is an algorithm for measuring similarity between two sequences which may vary in time or speed. For instance, similarities in walking patterns would be detected, even if in one video the person was walking slowly and if in another they were walking more quickly, or even if
  • 8. VOICE OPERATED WHEELCHAIR 8 there were accelerations and decelerations during the course of one observation. DTW has been applied to video, audio, and graphics -- indeed, any data which can be turned into a linear representation can be analyzed with DTW. 1.4 NATURE OF PROBLEM: Speech recognition is the process of finding a interpretation of a spoken utterance; typically, this means finding the sequence of words that were spoken. This involves preprocessing the acoustic signals to parameterize it in a more usable and useful form. The input signal must be matched against a stored pattern and then makes a decision of accepting or rejecting a match The different types of problems we are going to face in our project have been enumerated below: - DIFFERENCES IN THE VOICES OF DIFFERENT PEOPLE:- The voice of a man differs from the voice of a woman that again differs from the voice of a baby. Different speakers have different vocal tracts and source physiology. Electrically speaking, the difference is in frequency. Women and babies tend to speak at higher frequencies from that of men. DIFFERENCES IN THE LOUDNESS OF SPOKEN WORDS:- No two persons speak with the same loudness. One person will constantly go on speaking in a loud manner while another person will speak in a light tone. Even if the same person speaks the same word on two different instants, there is no guarantee that he will speak the word with the same loudness at the different instants. The problem of loudness also depends on the distance the microphone is held from the user's mouth. Electrically speaking, the problem of difference is reflected in the amplitude of the generated digital signal.
  • 9. VOICE OPERATED WHEELCHAIR 9 DIFFERENCEIN THE TIME:- Even if the same person speaks the same word at two different instants of time, there is no guarantee that he will speak exactly similarly on boththe occasions. Electrically speaking there is a problem of difference in time i.e. indirectly frequency. DIFFERENCES IN THE PROPERTIESOF MICROPHONES:- There may be problems due to differences in the electrical properties of different mikes and transmission channels. DIFFERENCES IN THE PITCH:- Pitch and other source features such as breathiness and amplitude can be varied independently. OTHER PROBLEMS:- We have to make sure that robot does not go out of reach of our voice. Output of microphone is very small. Output of Voice recognition chip is not compatible with input required at motors.
1.5 PROJECT OBJECTIVES:
 To equip the present motorized wheelchair control system with a voice command system. With this feature, disabled people, especially those with severe disabilities who are unable to move their hands or other parts of the body, are able to move their wheelchair around independently.
 To simplify the operation of the motorized wheelchair so that it is easier for a disabled person to operate. With this simplified operation, many disabled people can use the system with little training.
 To build a wheelchair control module and interface it with the speech recognition board as well as a wireless microphone unit.
 To build a motor control circuit and add a motor driving mechanism to an ordinary wheelchair.
 To integrate all the modules together to produce a wirelessly controlled motorized wheelchair.
Chapter 2: SYSTEM DESIGN FOR V.O.W.
Fig 2.1 Voice operated wheelchair
2.1 BLOCK DIAGRAM OF V.O.W.:
Fig 2.2

2.2 DESCRIPTION OF BLOCK DIAGRAM:
HARDWARE:
The block diagram of the voice operated wheelchair consists of the following blocks:
1) PIC microcontroller
2) Voice recognition block
3) Driver IC block
4) DC motors block
5) Battery
6) Battery charger
The description of these blocks is as follows.

1) MICROCONTROLLER PIC16F877A
This is a 40-pin programmable microcontroller built around a high-performance RISC CPU. It is used for controlling the movement and direction of the wheelchair
by controlling the two DC motors. The details of the microcontroller are given in the following section. The microcontroller unit is the core of the intelligent wheelchair: it interfaces the voice recognition unit with the motor driver circuit. The main function of this unit is to receive data from the HM2007 IC through its data bus (D0-D7) and determine the right command to be given to the driver circuit. The PIC16F877A microcontroller, with 33 I/O lines, covers all the requisites for this wheelchair.

2) VOICE RECOGNITION IC HM2007
The voice recognition unit is built around the HM2007 IC. It is a Large Scale Integration (LSI) CMOS circuit with an analog front end, voice analyzer, voice recognition processor and functional control system embedded in a single chip. The unit also contains an HM6264B IC, a 64 Kbit (8K x 8) static RAM used by the HM2007 to store the trained words that are used in the recognition phase, a 4x3 keypad, an external microphone and some other components, assembled together to build a 40-word isolated-word recognition system. The HM2007 is operated in speaker-dependent recognition mode. In this mode the unit responds only to the user who trained it; if another person needs to use the same system, a new training phase must be carried out. This mode reaches an accuracy of more than 95% for voice command recognition.

3) MOTOR DRIVER CIRCUIT
The L293 and L293D are quadruple high-current half-H drivers. The L293 is designed to provide bidirectional drive currents of up to 1 A at voltages from 4.5 V to 36 V. The L293D is designed to provide bidirectional drive currents of up to 600 mA at voltages from 4.5 V to 36 V. Both devices are designed to drive
inductive loads such as relays, solenoids, DC and bipolar stepping motors, as well as other high-current/high-voltage loads in positive-supply applications.

4) MOTORS (DC):
Two 12 V DC motors are used in this project.

5) POWER SUPPLY SECTION
This section consists of a rechargeable battery and covers the power requirements of the wheelchair: the DC motors, the microcontroller and the other sections. The 12 V battery supplies the L293D driver IC, which drives the DC motors. The microcontroller and the remaining logic operate on a 5 V supply, which is derived from the 12 V battery using an LM7805 5 V regulator IC.

SOFTWARE REQUIRED:
 The MPLAB IDE is used for programming the microcontroller.
 Embedded C is the programming language used.
 Proteus 7 is used for simulation of the circuit.
A minimal firmware skeleton built with this toolchain is sketched below.
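As a concrete illustration of this toolchain, the sketch below shows a minimal Embedded C skeleton for the PIC16F877A with Port B configured as the input port for the HM2007 data bus and Port D as the output port for the L293D, as described later in section 3.4. It assumes the MPLAB XC8 compiler and a 20 MHz crystal; the configuration bits are illustrative and would have to match the actual board, and the loop body is only a placeholder for the real control logic.

/* Minimal firmware skeleton (illustrative only) for the PIC16F877A,
   assuming the MPLAB XC8 toolchain and a 20 MHz crystal.             */
#include <xc.h>

/* Illustrative configuration bits - adjust to the actual hardware.   */
#pragma config FOSC = HS, WDTE = OFF, PWRTE = ON, LVP = OFF

#define _XTAL_FREQ 20000000UL

void main(void)
{
    TRISB = 0xFF;   /* Port B: input  - HM2007 data bus (D0-D7)       */
    TRISD = 0x00;   /* Port D: output - L293D driver inputs           */
    PORTD = 0x00;   /* motors stopped at power-up                     */

    while (1) {
        unsigned char code = PORTB;   /* latched result from HM2007   */
        (void)code;  /* in the real firmware, 'code' is mapped to a
                        motor pattern as described in section 3.4     */
        __delay_ms(10);
    }
}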
2.3 SPECIFICATIONS:
Components:

Parts list for speech-recognition circuit
1. IC1: HM2007 IC
2. IC3: 74LS373
3. IC4 and IC5: 7448
4. XTAL: 3.57 MHz
5. Speech-recognition PCB
6. 12-contact keypad
7. 7-segment displays
8. Microphone
9. 12 V battery clip

Parts list for interface circuit
1. Microcontroller PIC16F877A
2. L293D
3. 40 MHz crystal
4. DC motors
5. 7-pin connectors

COMPONENT SPECIFICATIONS:
1) HM2007 IC:
 48-pin DIP IC.
 Speaker-independent mode was used.
 A maximum of 40 words can be recognized.
 Each word can be a maximum of 1.92 s long.
 The microphone can be connected directly to the analog input.
 64K SRAM, two 7-segment displays and their drivers are connected.

2) L293D driver IC:
 Output current capability per driver: 600 mA
 Peak pulse current: 1.2 A per driver
 Package: 16-pin DIP
3) PIC microcontroller 16F877A:
 Instruction set: 35 instructions
 Operating speed: DC to 20 MHz
 Flash program memory: up to 8K x 14 words
 Data memory: up to 368 x 8 bytes
 EEPROM data memory: up to 256 x 8 bytes
 Timers/Counters: 3 (two 8-bit, one 16-bit)
 Operating voltage: 2.0 V to 5.5 V
 A/D converter: 10-bit, 8-channel

4) DC motors:
 Operating voltage: 12 V
 Speed: 100 rpm
 Current rating: up to 2 A
SOFTWARE:
a) Flow chart for voice training and recognition
Fig 2.3 Flow chart for voice training and recognition (START -> press any number on the keypad -> memory number displayed on the 7-segment display -> press the train (#) key -> speak the word -> LED blinks -> word accepted -> LED turns off -> next word, or end when training is complete).
2.4 VOICE TRAINING AND RECOGNITION ALGORITHM:
 Clear the memory by pressing 99 *.
 Enter the location number to be trained.
 After entering the number, the LED will turn off.
 The number will be shown on the display.
 Next, press # to train.
 The chip will now listen for voice input and the LED will turn on.
 Speak the word to be trained into the microphone.
 The LED should blink momentarily; this is the sign that the voice has been accepted.
 Continue in the same way for the remaining words.
 To test recognition, repeat a trained word into the microphone.
 If the word is correctly recognized, the corresponding location is displayed.
 The error codes are: 55 - word too long; 66 - word too short; 77 - no match.
Fig 2.4

DESCRIPTION OF FLOW CHART FOR V.O.W.:
 Start the process.
 Select the mode of operation.
 For voice mode, give the voice input command.
 If the voice input is 'FORWARD', execute the 'FORWARD' loop and the wheelchair moves forward; otherwise go to the next check.
 If the voice input is 'BACKWARD', execute the 'BACKWARD' loop and the wheelchair moves backward; otherwise go to the next check.
 If the voice input is 'RIGHT', execute the 'RIGHT' loop and the wheelchair turns right; otherwise go to the next check.
 If the voice input is 'LEFT', execute the 'LEFT' loop and the wheelchair turns left; otherwise go to the next check.
 Otherwise execute the stop loop and the wheelchair stops.
 For manual mode, use the keypad: press 01 for FORWARD, 02 for BACKWARD, 03 for RIGHT, 04 for LEFT and 05 for STOP.
A sketch of this dispatch logic is given after this list.
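The decision chain above reduces to a single switch on the recognized command code; the codes 01-05 below match Table 1 in section 3.8. This is an illustrative sketch in plain C rather than the project's actual firmware listing, and the motion enumeration is a hypothetical stand-in for the motor-control subroutines described in Chapter 3.

#include <stdio.h>

enum command { CMD_FORWARD = 1, CMD_BACKWARD, CMD_RIGHT, CMD_LEFT, CMD_STOP };
enum motion  { MOVE_FORWARD, MOVE_BACKWARD, TURN_RIGHT, TURN_LEFT, STOPPED };

/* Map a recognized code (voice mode) or a keypad entry (manual mode)
   to the required motion; anything unrecognized stops the chair.      */
enum motion dispatch(unsigned char code)
{
    switch (code) {
    case CMD_FORWARD:  return MOVE_FORWARD;
    case CMD_BACKWARD: return MOVE_BACKWARD;
    case CMD_RIGHT:    return TURN_RIGHT;
    case CMD_LEFT:     return TURN_LEFT;
    case CMD_STOP:
    default:           return STOPPED;
    }
}

int main(void)
{
    /* Print the mapping for codes 01-05 plus one unknown code (06). */
    for (int code = 1; code <= 6; code++)
        printf("code %02d -> motion %d\n", code, (int)dispatch(code));
    return 0;
}

Defaulting every unknown code to STOPPED is a deliberate fail-safe choice: a misrecognized word should never keep the wheelchair moving.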
Chapter 3: SYSTEM IMPLEMENTATION
3.1 Pin diagram of HM2007:
Fig 3.1

Description of pin diagram:
The pin diagram of the speech processor HM2007 is shown in the figure. The heart of this module is the HM2007 voice processor IC, manufactured by Hualon Microelectronics Corporation, which controls the overall voice recognition process. Its data sheet is given in Appendix A. The processor is a 48-pin single-chip CMOS voice recognition LSI circuit with an on-chip analog front end, voice analysis, recognition processing and system control functions. It uses a 3.57 MHz crystal as the clock to synchronize its operation. A voice recognition system of up to 40 isolated words can be composed of an external microphone, a keypad, a 64K SRAM external memory and a few other components. The chip offers two selections of command word length:
1) A 40-word vocabulary, with a maximum length of 0.96 second for each word.
2) A 20-word vocabulary, with a maximum length of 1.92 seconds for each word.

Other features include 'dependent' and 'independent' voice recognition modes. A speaker-dependent system is trained by the individual who will be using the system [5]. It is capable of achieving a high command count and better than 95% accuracy for word recognition. The disadvantage of this approach is that the system responds accurately only to the individual who trained it. A speaker-independent system is trained to respond to a word regardless of who speaks; the system must therefore respond to a large variety of speech patterns, inflections and enunciations of the target words. The command word count is usually lower than that of a speaker-dependent system, but high accuracy can still be maintained within processing limits. Combined with a microprocessor, an intelligent recognition system can be built.
3.2 Voice Recognition Module:
Fig 3.2 Voice recognition module

Description:
A general definition of voice recognition or speech recognition is the process of converting a speech or voice signal into a sequence of words by means of an algorithm implemented as a computer program. It is the ability of a machine or program to recognize spoken words by comparing the spoken commands with a sound sample. In this technology the analog (voice) signal is converted into a digital signal using an analog-to-digital converter. This digital signal is then compared with the digital database of the system, which has
been stored with digital speech patterns. The voice recognition board used in this thesis is the SR-06 from Images SI Inc., USA. It converts the analog voice signal to a digital output. The circuit is made up of four main blocks:
1. Speech recognition processor IC HM2007
2. Input device, a keypad used for word training
3. Digital display board, used to display the word number
4. External SRAM memory IC
3.3 SYSTEM CIRCUIT DIAGRAM:
Fig 3.3
3.4 DESCRIPTION OF SYSTEM CIRCUIT DIAGRAM:
CONNECTIONS:
For voice recognition, IC HM2007 is used. The D-bus of the HM2007 is connected to Port B of the PIC16F877 microcontroller, which is configured as an input port. Port D of the microcontroller is configured as an output port. The pins RD0/PSP0 (19), RD1/PSP1 (20), RD2/PSP2 (21) and RD3/PSP3 (22) are connected to pins 3A (10), 1A (2), 4A (15) and 2A (7) of the L293D respectively. Pins 1Y (3), 2Y (6), 3Y (11) and 4Y (14) are the output pins of the L293D; the two DC motors are connected to these pins.

WORKING:
There are two modes provided by the HM2007.
1) MANUAL MODE: In this operation mode the keypad, SRAM and other components are connected to the HM2007 to build a simple recognition circuit. The SRAM is organized as 8K x 8.
(a) Power on: When power is applied, the HM2007 starts its initialization process. If the WAIT pin is low, the IC performs a memory check to verify that the SRAM is working; if the pin is high, the memory check is skipped. After initialization is done, the IC moves to recognition mode.
(b) Recognition mode (WAIT pin high): In this mode RDY is set low and the HM2007 is ready to accept voice input. When voice input is detected, RDY goes high and the IC begins its recognition process. After the recognition process, the result appears on the D-bus of the HM2007 with the DEN pin active. The result is the memory location of the matching voice input, in binary form. This binary output is given to Port B of the PIC16F877 microcontroller, which compares it with the values specified in the program. If the two values match, the microcontroller executes the corresponding subroutine. Four Port D pins are connected to the input pins of the L293D driver IC, and the motors rotate accordingly.
COMBINATIONS FOR MOTOR DRIVER IC L293D:
Pin 2 = logic 1 and pin 7 = logic 0: CLOCKWISE DIRECTION
Pin 2 = logic 0 and pin 7 = logic 1: ANTICLOCKWISE DIRECTION
Pin 2 = logic 0 and pin 7 = logic 0: NO ROTATION
Pin 2 = logic 1 and pin 7 = logic 1: NO ROTATION
In a very similar way, the motor on the right-hand side can be operated through pins 15 and 10. The sketch below illustrates how these combinations map onto the microcontroller's Port D.
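Combining the logic combinations above with the Port D wiring given in the Connections paragraph (RD1 to 1A and RD3 to 2A for one motor, RD0 to 3A and RD2 to 4A for the other), the firmware only needs to write one of a handful of Port D patterns. The fragment below is a sketch under assumptions: which physical motor is the left one, and which rotation sense counts as forward, depend on the actual wiring, and the function names are illustrative rather than taken from the project code.

/* Illustrative Port D patterns for the L293D wiring described above
   (RD1 -> 1A, RD3 -> 2A for motor 1; RD0 -> 3A, RD2 -> 4A for motor 2).
   Which motor is left/right and which sense is "forward" are assumed. */
#include <xc.h>

#define M1_FWD 0x02u   /* RD1 = 1, RD3 = 0 : motor on 1A/2A forward    */
#define M1_REV 0x08u   /* RD1 = 0, RD3 = 1 : motor on 1A/2A reverse    */
#define M2_FWD 0x01u   /* RD0 = 1, RD2 = 0 : motor on 3A/4A forward    */
#define M2_REV 0x04u   /* RD0 = 0, RD2 = 1 : motor on 3A/4A reverse    */

void wheelchair_forward(void) { PORTD = M1_FWD | M2_FWD; }
void wheelchair_reverse(void) { PORTD = M1_REV | M2_REV; }
void wheelchair_left(void)    { PORTD = M2_FWD; }  /* left motor stopped  */
void wheelchair_right(void)   { PORTD = M1_FWD; }  /* right motor stopped */
void wheelchair_stop(void)    { PORTD = 0x00u;  }

Setting both inputs of a motor to the same level (both 0 here) corresponds to the "no rotation" rows of the combination list above.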
3.5 PCB LAYOUT
Fig 3.4 PCB layout of PIC16F877A board
Fig 3.5 PCB layout of PIC16F877A board
Fig 3.6 PCB layout of L293D driver circuit
3.6 MOTION OF WHEELCHAIR:
 The main part of the design is controlling the motion of the wheelchair. Four motion conditions are considered: moving forward, moving in the reverse direction, turning left and turning right. For speed, the user may use a slow or a fast speed command.
 The system starts by applying the supply voltage to the speech recognition circuit. For the fast condition the system supplies a higher current to the motors.
 If the user does not want the wheelchair to move at high speed, the slow speed command can be used, which applies a lower supply current to the motors (one way this could be realized in firmware is sketched after this list). The possible wheelchair directions and movements are given below.
 Forward: both motors run in the forward direction.
 Reverse: both motors run in the reverse direction.
 Left: the left motor is stopped and the right motor runs in the forward direction.
 Right: the right motor is stopped and the left motor runs in the forward direction.
 Stop: both motors are stopped.
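The report describes speed control in terms of supplying more or less current to the motors. One common way to approximate a slow/fast speed setting on the PIC16F877A, shown here only as a hedged alternative and not as the method used in this project, is to pulse-width-modulate an L293D enable input from the CCP1 module (pin RC2). The register values assume a 20 MHz clock and the MPLAB XC8 toolchain, and the wiring of the enable pin is an assumption.

/* Hedged illustration only: PWM on CCP1 (RC2) driving an L293D enable
   pin to give "slow" and "fast" speeds.  Assumes a 20 MHz clock and
   the MPLAB XC8 toolchain; this is not the method stated in the report,
   which describes speed control as supplying more or less current.    */
#include <xc.h>

static void pwm_init(void)
{
    TRISCbits.TRISC2 = 0;   /* RC2/CCP1 as output                       */
    PR2     = 249;          /* ~5 kHz PWM period with Timer2 prescaler 1:4 */
    T2CON   = 0x05;         /* Timer2 on, prescaler 1:4                 */
    CCP1CON = 0x0C;         /* CCP1 in PWM mode                         */
}

/* duty: 0 (stopped) .. 249 (full speed); only the 8 MSBs are written.  */
static void pwm_set_speed(unsigned char duty)
{
    CCPR1L = duty;
}

/* Example use: pwm_init(); pwm_set_speed(125);  for the "slow" command,
   and pwm_set_speed(249);  for the "fast" command.                      */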
3.7 SNAPSHOTS OF VARIOUS CIRCUIT BOARDS
Fig 3.7 HM2007 board
Fig 3.8 Initial stage
Fig 3.9 Interfacing of HM2007 with the microcontroller
3.8 RESULTS
OUTPUT TABLE:
Table 1
Command     B.7  B.6  B.5  B.4  B.3  B.2  B.1  B.0   Code   Required motion
FORWARD      0    0    0    0    0    0    0    1     01    FORWARD
BACKWARD     0    0    0    0    0    0    1    0     02    BACKWARD
RIGHT        0    0    0    0    0    0    1    1     03    RIGHT
LEFT         0    0    0    0    0    1    0    0     04    LEFT
STOP         0    0    0    0    0    1    0    1     05    STOP

EXPERIMENTAL OBSERVATION SNAPSHOTS
Fig 3.10 FORWARD MOTION (memory location = 01)
Fig 3.11 BACKWARD MOTION (memory location = 02)
Fig 3.12 RIGHT MOTION (memory location = 03)
Fig 3.13 LEFT MOTION (memory location = 04)
Fig 3.14 STOP (memory location = 05)
EXPERIMENTAL OBSERVATIONS FOR TESTING:
Table 2
Command given   Observed motion                                Accuracy of response
                Speaker 1     Speaker 2     Speaker 3
FORWARD         FORWARD       FORWARD       FORWARD            100%
BACKWARD        BACKWARD      BACKWARD      BACKWARD           100%
RIGHT           RIGHT         NO MOTION     RIGHT              66.66%
LEFT            FORWARD       LEFT          LEFT               66.66%
STOP            STOP          STOP          STOP               100%

DESCRIPTION OF RESULTS:
1) Table 1 shows the output present on Port B of the PIC16F877 microcontroller. When the voice input 'FORWARD' is given for memory location 01H, the HM2007 assigns this voice input to that memory location and provides its binary code, which appears on Port B of the microcontroller. The outputs for all other voice inputs are obtained in the same way.
2) Table 2 shows the actual testing results of the voice operated wheelchair; the response was found to be largely speaker independent. For the 'FORWARD' command, all three speakers obtained 'FORWARD' motion of the wheelchair, and for the 'BACKWARD' command all speakers obtained 'BACKWARD' motion, so the accuracy for FORWARD and BACKWARD is 100%. For the 'RIGHT' command, speaker 2 obtained 'NO MOTION', and for the 'LEFT' command, speaker 1 obtained 'FORWARD' motion, so the accuracy for 'LEFT' and 'RIGHT' is 66.66% (two of three speakers correct). For the 'STOP' command the accuracy is 100%.
3.9 ADVANTAGES:
1) A handicapped person without the use of their legs can use this wheelchair and become independent.
2) It reduces the manpower required.
3) It is user friendly.

3.10 CONCLUSION:
With the HM2007, the efficiency of detecting voice commands and controlling the wheelchair is significantly increased. This voice operated wheelchair will help handicapped persons become self-dependent for movement, for which they are otherwise dependent on others most of the time. A person with disabled legs and arms can use this wheelchair efficiently as long as he or she is able to speak.

3.11 FUTURE SCOPE:
The wheelchair speed control system is targeted to operate both indoors and outdoors. This means it has to be noise-proof and weather-proof, and it must have the ability to recognize the command words even in the presence of background noise.
4.1 REFERENCES:
[1] S. D. Suryawanshi, J. S. Chitode and S. S. Pethakar, "Voice Operated Intelligent Wheelchair", International Journal of Advanced Research in Computer Science and Software Engineering.
[2] M. Prathyusha, K. S. Roy and Mahaboob Ali Shaik, "Voice Based Direction and Speed Control of Wheel Chair for Physically Challenged", International Journal of Engineering Trends and Technology (IJETT).
[3] Gabriel Pires and Urbano Nunes, "A Wheelchair Steered through Voice Commands", Journal of Intelligent and Robotic Systems.
[4] Richard Simpson, "Smart Wheelchairs: A Literature Survey", Journal of Rehabilitation Research & Development.