An Android Communication Platform Between Hearing Impaired and General
People
This thesis is submitted in partial fulfillment of the requirement for the degree of
Bachelor of Science in Computer Science and Engineering.
Afif Bin Kamrul
ID: 1404065
Supervised by
Shayla Sharmin
Assistant Professor
Department of Computer Science and Engineering (CSE)
Chittagong University of Engineering and Technology (CUET)
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING (CSE)
CHITTAGONG UNIVERSITY OF ENGINEERING AND TECHNOLOGY
(CUET)
CHITTAGONG – 4349, BANGLADESH
JULY, 2019
The thesis titled “An Android Communication Platform Between Hearing
Impaired and General People” submitted by ID 1404065, session 2017-2018 has
been accepted as satisfactory in fulfillment of the requirement for the degree of
Bachelor of Science in Computer Science and Engineering (CSE) as B.Sc.
Engineering to be awarded by Chittagong University of Engineering and Technology
(CUET).
Board of Examiners
1. ____________________________ Chairman
(Supervisor)
Shayla Sharmin
Assistant Professor
Department of Computer Science and Engineering (CSE)
Chittagong University of Engineering and Technology (CUET)
2. ____________________________ Member
(Ex-officio)
Dr. Asaduzzaman
Professor and Head
Department of Computer Science and Engineering (CSE)
Chittagong University of Engineering and Technology (CUET)
3. _____________________________ Member
(External)
Dr. Md. Ibrahim Khan
Professor
Department of Computer Science and Engineering (CSE)
Chittagong University of Engineering and Technology (CUET)
Statement of Originality
It is hereby declared that the contents of this project are original and that no part
of it has been submitted elsewhere for the award of any degree or diploma.
_________________________ _______________________
Signature of the Supervisor Signature of the Candidate
Date: Date:
Acknowledgement
I am grateful to Almighty Allah, who has given me the ability to complete this
project and to pursue the completion of my B.Sc. Engineering degree. I am
indebted to my supervisor, Shayla Sharmin, Assistant Professor, Department of
Computer Science and Engineering, Chittagong University of Engineering and
Technology, for her encouragement, proper guidance, constructive criticism, and
endless patience throughout the progress of the project. She supported me by
providing books, conference and journal papers, and effective advice. From the
very beginning she encouraged me with proper guidance, so the project never
seemed a burden to me, and she motivated me to complete the final thesis on time.
I also want to express my gratitude to all other teachers of our department for
their sincere and active cooperation in completing the project work. Finally, I would
like to thank my parents for their steady love and support during my study period.
Abstract
Although an enormous number of people in our society are deaf and mute, there is a
large gap between them and the rest of society in terms of communication. People
without hearing impairment can listen and speak, but deaf people cannot; instead,
they use signs to communicate. It is very hard for deaf people to communicate with
hearing people, so a mechanism through which these two communities can
communicate effectively is desirable. With recent technologies, a portable device is
preferable for this purpose, and one such platform is the smartphone. We have built
an Android-based application which helps establish a connection between general
and hearing impaired people. The system includes a Bangla voice recognition
feature through which general users can input their voice into the application. More
than 200 Bangla words are available in the application. Whenever voice is detected,
the words are separated, translated into sign language animations, and played
sequentially. Conversely, a sign language keyboard lets deaf/mute users compose
text readable by general users. The project has been tested in real life by students
of the deaf and mute school, Muradpur, Chittagong, to evaluate it for real-life use.
The test results showed that the developed system functions well and attains
satisfactory results in establishing communication. Both subjective evaluation and
black box testing showed satisfactory results. Our proposed system, using the
smartphone's built-in microphone, and its test results can make it a strong candidate
for an adaptive, user-oriented communication system for hearing impaired people.
Table of Contents
Chapter 1
Introduction…………………………………………………………….1
1.1 Present State of the Problem .................................................................................. 2
1.2 Motivation.............................................................................................................. 3
1.3 Contributions.......................................................................................................... 4
1.4 Organization of the Paper....................................................................................... 4
Chapter 2
Literature Review………………………………………………………5
2.1 Related Works…………………………………………………………………….5
2.2 Android.................................................................................................................. 7
2.2.1 Android OS: A Walk from Past to Present ..................................................... 8
2.3 Java....................................................................................................................... 10
2.4 Android Platform Architecture............................................................................. 11
2.4.1 Linux Kernel ................................................................................................. 11
2.4.2 Android Runtime........................................................................................... 11
2.4.3 Java API Framework..................................................................................... 12
2.5 Android Services.................................................................................................. 12
2.6 Broadcast Receivers ............................................................................................. 13
2.7 MediaPlayer ......................................................................................................... 14
2.8 Adobe Photoshop ................................................................................................. 15
2.9 Adobe Character Animator .................................................................................. 16
2.10 Adobe Media Encoder........................................................................................ 17
2.11 Android Studio…………………………………………………………………18
Chapter 3
Methodology of the Proposed System………………………..19
3.1 Overview .............................................................................................................. 19
3.2 Drawing puppet in Adobe Photoshop .................................................................. 20
3.3 Generate Animated files in Adobe Character Animator ...................................... 22
3.4 Render using Adobe Media Encoder ................................................................... 23
3.5 Convert Speech to Text........................................................................................ 24
3.6 Show Signs........................................................................................................... 25
3.7 Keyboard .............................................................................................................. 26
Chapter 4
Implementation………………………………………………………...28
4.1 Software Development......................................................................................... 28
4.1.1 Development Tools ....................................................................................... 28
4.2 Home Screen........................................................................................................ 28
4.3 Speech to Sign Conversion .................................................................................. 28
4.4 Keyboard .............................................................................................................. 30
Chapter 5
Experimental Results…………………………………………………………..31
5.1 Experiment Data................................................................................................... 31
5.2 Experimental Design and Procedure.................................................................... 32
5.3 Experimental Result ............................................................................................. 32
5.4 Subjective Evaluation…………………………………………………………...34
5.5 Testing.................................................................................................................. 36
5.5.1 Purpose of Testing......................................................................................... 36
5.5.2 Black Box Testing......................................................................................... 37
5.5.3 Black Box Testing of the Project…………………………………………...37
5.6 Conclusion............................................................................................................ 38
Chapter 6
Conclusion and Future Recommendations…………………………...39
6.1 Conclusion............................................................................................................ 39
6.2 Future Recommendations..................................................................................... 40
References……………………………………………………………...41
List of Figures
Figure 1.1 Communication problem ............................................................................ 4
Figure 2.1 Android logo……………........................................................................... 7
Figure 2.2 Android smartphone ................................................................................... 7
Figure 2.3 Android market share................................................................................ 10
Figure 2.4 Linux kernel.............................................................................................. 11
Figure 2.5 Android runtime........................................................................................ 12
Figure 2.6 Java API Framework ................................................................................ 12
Figure 2.7 Adobe Photoshop….................................................................................. 16
Figure 2.8 Adobe Character Animator....................................................................... 16
Figure 2.9 Adobe Media Encoder .............................................................................. 18
Figure 2.10 Android Studio........................................................................................ 18
Figure 3.1 System overview....................................................................................... 20
Figure 3.2 Adobe photoshop puppet drawing procedure ........................................... 21
Figure 3.3 Photoshop preview………….................................................................... 22
Figure 3.4 Puppet in rig mode……………………………………………………….22
Figure 3.5 Animation procedure ................................................................................ 23
Figure 3.6 Render process.......................................................................................... 23
Figure 3.7 Showing signs........................................................................................... 25
Figure 3.8 Sign language keyboard............................................................................ 27
Figure 4.1 Home screen…………. ............................................................................ 29
Figure 4.2 Speech to sign conversion......................................................................... 29
Figure 4.3 Speech recognition…................................................................................ 29
Figure 4.4 Recognized speech…................................................................................ 29
Figure 4.5 Converted text…....................................................................................... 29
Figure 4.6 Showing signs…....................................................................................... 29
Figure 4.7 (a) Keyboard setup screen (b) Typing on keyboard (c) Typed text.......... 30
Figure 5.1 Teacher explaining sign to student………………………………………32
Figure 5.2 Student explaining sign after using application........................................ 32
Figure 5.3 Column chart showing subjective evaluation of answering Q1, Q2, Q3.. 35
Figure 5.4 Column chart showing subjective evaluation of answering Q4, Q5, Q6.. 35
List of Tables
Table 5.1 Number of times the application was able to convert from voice to signs .. 33
Table 5.2 Number of times the students actually recognized the signs....................... 33
Table 5.3 Number of times the students typed the words using the keyboard............ 34
Table 5.4 Q1, Q2, Q3 answered by deaf/mute people ............................................... 35
Table 5.5 Q4, Q5, Q6 answered by teachers and general people............................... 35
Table 5.6 Test case of black box testing .................................................................... 37
Chapter 1
Introduction
Communication is one of the major needs of humans. People without hearing
impairment can listen and speak, but deaf people cannot; instead, they use signs to
communicate, and these signs are known as sign language. Sign language is a
visual language which uses various body movements as a method of communication.
Like natural languages, different forms of sign language are used in different
countries around the world. There is a large gap between general people and the
deaf community in communication, and it is very hard for deaf people to
communicate with hearing people. In most cases, sign languages are only used for
direct visual communication, such as video broadcasts or interpersonal
communication. Sign languages are usually grammatically different from their
spoken counterparts. Many deaf people may not know the natural language at all,
while many general people may not know how to communicate in sign language.
These differences make it very difficult for the two groups to communicate
effectively with each other without a translator. Human interpreters are scarce,
and there are not enough of them to support every personal communication
between the two communities.
According to the “National Survey on Prevalence of Hearing Impairment in
Bangladesh 2013”, published by the WHO/SEARO Country Office for Bangladesh
and the Ministry of Health and Family Welfare (BSMMU), hearing impairment is
the second most common form of disability. One small-scale study done in 2002
reported to WHO a prevalence of 7.9% hearing impairment (in the better ear)
among Bangladeshi people. In addition, Bangladesh has a population of over 130
million according to the Population Census 2001 National Report (Provisional),
Bangladesh Bureau of Statistics, Dhaka, Bangladesh, July 2003. Moreover, about
13 million people suffer from variable degrees of hearing loss, of which 3 million
suffer from severe to profound hearing loss leading to disability, according to
Amin MN, Prevention of Deafness and Primary Ear Care (Bengali), Society for
Assistance to Hearing Impaired Children (SAHIC), Mohakhali, Dhaka-1212,
Bangladesh. It is therefore a serious issue that deserves priority. This problem
causes economic, social, educational, and vocational problems both for the victims
and for the country. In addition, according to the World Federation of the Deaf
(2009), around 90% of the world's deaf children and adults have never been to
school and are thus more or less illiterate. Such problems could be avoided if a
digital translator existed between hearing society and deaf people. Sources:
http://www.searo.who.int/bangladesh/publications/national_survey/en/
http://wfdeaf.org/ [2]
In our day-to-day life, we general people frequently use signs to communicate with
others alongside verbal language. In a nutshell, to express feelings or to explain
what we want to say to others, we use sign, facial expression, and verbal language
together. But deaf and mute people have only one medium to express themselves,
namely sign language, and not everyone knows the signs they use. Moreover, there
is no Bangla-medium system that provides an easy solution ensuring two-way
communication between deaf and mute people and general people. Considering
these facts, we have developed a system which converts Bangla speech to Bangla
sign language, together with a sign language keyboard which helps deaf and mute
people construct words and sentences.
We hope that the proposed system will be useful for deaf pupils, for parents of
deaf children, and for any person who is in contact with deaf people and needs to
learn sign language in a real-life environment. Moreover, it can satisfy the need for
tools and appliances which would not only help deaf people communicate with the
hearing mainstream of society, but would also enable them to embrace basic
education easily.
1.1 Present State of the Problem
Some notable works have been done in recent years. One approach maintained
about 1,000 words in its database and concatenated human-recorded videos to
display sign output.
In another work, Unicode is used for syntactic and morphological analysis to
detect inflections, suffixes, and similar forms.
Research has also been conducted on establishing a mobile platform between a
deaf/mute person and a hearing person, where a speech recognition engine was
trained to recognize Bangla speech and corresponding still images of signs are
shown.
Some advanced Bangla keyboard layouts have been introduced for deaf people,
including swipe-and-press keyboards.
1.2 Motivation
The main motivation for implementing this application is to reduce the
communication barrier between the deaf community and general people. The latest
achievements in different fields of technology allow us to integrate these
technologies and minimize that barrier. By removing the barrier, awkward
situations can be avoided. The main motivation comes from improving the
condition of the deaf community, many of whom cannot participate in the
mainstream of society. Many of them want to get an education and participate with
local people, but the language barrier does not allow them. Most often, the problem
arises from the inability of general people to understand their signs.
So, the goal of this project is to develop and implement a reliable system for these
two communities. The need for such a mechanism increases even more in this era
of technology, as platforms exist to support them. One such platform, and a very
common one, is the smartphone. Almost everyone today carries a smartphone, as
they have become more and more affordable and easily available. According to one
report, 78.1% of the smartphones sold in 2013 ran the Android operating system
[9]. Hence, we have decided to develop an Android application that lets its users
communicate with each other easily. This mechanism is very useful and can be
used in a variety of ways.
Figure 1.1 Communication problem
1.3 Contributions
My objective in this project was to recognize Bangla speech and convert it to sign
language for deaf/mute people. Another objective was to develop a sign language
keyboard that converts Bangla signs into Bangla text. I have tried my best to
fulfill these objectives.
I have managed to convert speech to Bangla text using the Google Speech API
engine, which does the job very well. As for converting the Bangla words into
animated signs, I have managed to do that for more than 200 words. The
animations work well according to the tests performed.
The keyboard feature is also equipped with signs for all available Bangla letters.
Upon pressing a button, the value of the key is shown on the screen, and hence
Bangla text, and whole sentences, can be formed.
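The keyboard behaviour described above can be sketched in a few lines of Java: each sign key appends its letter value, so words and sentences accumulate on screen. This is a minimal illustration only; the key-to-letter mapping here is a hypothetical stand-in for the real keyboard layout, not the application's actual code.

```java
import java.util.Map;

public class SignKeyboard {
    // Hypothetical mapping from sign-key ids to Bangla letters;
    // a simplified stand-in for the real keyboard layout.
    static final Map<Integer, String> KEYS = Map.of(
            1, "\u0985",  // অ
            2, "\u0986",  // আ
            3, "\u0995"); // ক

    private final StringBuilder text = new StringBuilder();

    // Append the value of the pressed key to the on-screen text.
    void press(int keyId) {
        text.append(KEYS.getOrDefault(keyId, ""));
    }

    String getText() {
        return text.toString();
    }

    public static void main(String[] args) {
        SignKeyboard kb = new SignKeyboard();
        kb.press(3); // ক
        kb.press(2); // আ
        System.out.println(kb.getText()); // prints "কআ"
    }
}
```

In the actual application the same accumulation would be driven by Android button callbacks rather than direct method calls.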
1.4 Organization of the Paper
The following chapters go through the different aspects of this project. Chapter 1
introduces the project. Chapter 2 gives an overview of terminology related to the
project and briefly discusses previous works with their limitations. Chapter 3
describes the working procedure of the project. In Chapter 4, we illustrate the
implementation of the project in detail. Chapter 5 centers on the experimental
results and evaluation of the proposed system. The paper concludes with a summary
of our work and recommendations for further improvements in Chapter 6. This
paper contains one appendix, intended for readers who wish to explore certain
topics in greater depth.
Chapter 2
Literature Review
In this chapter, we present the terminology related to the project that is important
to understand. The chapter also contains a brief discussion of related previous
works.
2.1 Related Works
There has been some notable work on identifying sign language images or body
movements to develop a medium for deaf people, but not much work in the
opposite direction. One such work, by researchers from India, developed a
translator from Bangla text to sign language. In the process, they maintained a
dictionary containing about 1,000 words, each with a unique id, grammar id, path,
and filename. With sentence mode enabled, each Bangla word is fetched according
to its matched id, and the individual video clips are concatenated one after another
in the correct sequence to generate a video clip that represents the sign language
output corresponding to the input text [1]. The same type of work is presented in
another research [2], where the input text is broken down and analyzed at the
syntactic and morphological levels. Unicode, which provides a unique code for
every character regardless of platform, program, or language, is used in this
research. First, unusual or special characters that are not normally used are
removed through syntactic analysis. Then, the text is ordered according to the
rules of Bangla grammar through morphological analysis. A database is maintained
for root words, which also contains the part of speech of each word. Words are
stored in a tree-like structure where the sign image is only provided for the parent
word, to avoid duplication.
An important study has been conducted in [3], presenting an empirical framework
for tokenizing and parsing three types of Bangla sentences: assertive,
interrogative, and imperative. In that work, the input sequence is taken first, and
the program breaks the string into individual words called tokens. A lexicon is
maintained containing the part of speech of each word. Context-Sensitive
Grammar (CSG) rules are then used to process the tokens of the input sequence
and generate a parse tree, a structural representation according to the CSG rules.
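The tokenize-then-tag step just described can be sketched as follows: split the sentence into tokens, then tag each token with its part of speech from a lexicon. This is a minimal illustration, not the code from [3]; the romanized lexicon entries are invented for demonstration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class Tokenizer {
    // Hypothetical lexicon mapping each (romanized) word to its part of
    // speech, standing in for the lexicon maintained in the cited work.
    static final Map<String, String> LEXICON = Map.of(
            "ami", "pronoun",
            "bhat", "noun",
            "khai", "verb");

    // Break the input string into individual words (tokens).
    static List<String> tokenize(String sentence) {
        return Arrays.asList(sentence.trim().split("\\s+"));
    }

    // Tag each token with its part of speech from the lexicon.
    static List<String> tag(List<String> tokens) {
        List<String> tagged = new ArrayList<>();
        for (String t : tokens) {
            tagged.add(t + "/" + LEXICON.getOrDefault(t, "unknown"));
        }
        return tagged;
    }

    public static void main(String[] args) {
        List<String> tokens = tokenize("ami bhat khai");
        System.out.println(tag(tokens)); // [ami/pronoun, bhat/noun, khai/verb]
    }
}
```

A real parser would feed these tagged tokens into the CSG rules to build the parse tree; this sketch stops at the tagging step.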
A two-way communication system has been established on a mobile platform
between a deaf/mute person and a hearing person. In the process, open-source
speech recognition software called CMU Sphinx is used to convert Bangla speech
to Bangla sign language. Sphinx defines Bangla phonetics, words, and grammars,
recognizes speech input, and converts it to phonetic text. The words in scope are
stored in a dictionary; sentences are built and stored in a Bangla language model
file, and audio files are stored in a Bangla acoustic model file. Using these files, a
training process is run that breaks recorded speech into phonetic text. After
converting the text to Bangla words, these are matched against the database, and
if a match is found, the corresponding image fetched from the database is shown
on screen. On the other end, impaired persons have to write Bangla text, which is
converted to speech using the Google Translation Server. This is a limitation,
since impaired persons may not know written Bangla at all, which motivates
designing a new keyboard for them [4].
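The match-and-display step above (recognized words checked against a database, with the corresponding sign shown on a match) can be sketched as follows. This is a simplified illustration, not the cited system's code; the word-to-clip mapping and file names are invented stand-ins for the actual database.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class SignPlaylist {
    // Hypothetical word-to-video mapping standing in for the sign database.
    static final Map<String, String> SIGN_DB = Map.of(
            "ami", "signs/ami.mp4",
            "bhat", "signs/bhat.mp4");

    // Split the recognized text into words and build the ordered list of
    // sign clips to play; words with no database match are skipped.
    static List<String> buildPlaylist(String recognizedText) {
        List<String> clips = new ArrayList<>();
        for (String word : recognizedText.trim().split("\\s+")) {
            String clip = SIGN_DB.get(word);
            if (clip != null) {
                clips.add(clip);
            }
        }
        return clips;
    }

    public static void main(String[] args) {
        System.out.println(buildPlaylist("ami bhat khai"));
        // [signs/ami.mp4, signs/bhat.mp4] ("khai" has no entry and is skipped)
    }
}
```

In an Android app, the resulting list would be handed to a media player to play the clips sequentially.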
A keyboard design for deaf people is introduced in [6], and a Bangla layout is
described in [5]. The swipe-and-press layout and the sign keyboard can be
combined for the sake of deaf people communicating with hearing people; the
swipe-and-press layout requires less screen space, and both can be applied on
Android phones. The sign language keyboard also supports speech-to-text and
text-to-speech, where text-to-speech uses an algorithm in which a morphological
analyzer module determines the stem and part of speech of each individual word.
After preprocessing, a search is made for an exact match of the word using its
hash value. If it is not found, the Levenshtein distance between the stored stems
and the stem of the word under search is calculated, and the entry with the
minimum distance, provided it is less than two, is selected from the database. If
the word is still not found, it is broken into letters and signs are retrieved for the
individual letters.
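The fallback search just described can be sketched as follows: an exact (hash-based) match first, then the nearest stem with Levenshtein distance below two, and a null result signalling the letter-by-letter fallback. The stem dictionary contents are invented for illustration; this is not the cited keyboard's code.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class StemSearch {
    // Hypothetical stem dictionary standing in for the database.
    static final Set<String> STEMS = new HashSet<>(Arrays.asList("kha", "jao", "ami"));

    // Classic dynamic-programming edit distance between two strings.
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    // Exact match first (hash-based set lookup); otherwise the nearest stem
    // with distance below two; null signals "fall back to letter signs".
    static String find(String stem) {
        if (STEMS.contains(stem)) return stem;
        String best = null;
        int bestDist = 2; // only accept distance strictly less than two
        for (String s : STEMS) {
            int dist = levenshtein(stem, s);
            if (dist < bestDist) {
                bestDist = dist;
                best = s;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(find("kha"));  // exact match: "kha"
        System.out.println(find("khaa")); // distance 1, corrected to "kha"
        System.out.println(find("xyz"));  // null: letter-by-letter fallback
    }
}
```

The distance-below-two threshold matches the rule described above; anything further away is treated as unknown and spelled out sign by sign.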
Android apps developed on different platforms can communicate with each other
through specific methods [7]. Inter-app communication can be established using
the predefined Intent mechanism built into the Android framework.
An animation-based teaching assistant has been presented with the help of
‘WebSign’, which is based on avatar technology (animation in a virtual world). It
translates transcriptions of natural spoken languages into real-time sign language
animation [8]. The actual sign is played for each word from a dictionary.
2.2 Android
Android is a mobile operating system (OS) [10] based on the Linux kernel and
currently developed by Google. With a user interface based on direct manipulation,
Android is designed primarily for touchscreen mobile devices such as smartphones
and tablet computers, with specialized user interfaces for televisions (Android TV),
cars (Android Auto), and wrist watches (Android Wear). The OS uses touch inputs
that loosely correspond to real-world actions, like swiping, tapping, pinching and
reverse pinching to manipulate on-screen objects and a virtual keyboard. Despite
being primarily designed for touchscreen input, it also has been used in game
consoles, digital cameras, regular PCs and other electronics. Android is the most
widely used mobile OS and, as of 2013, the most widely used OS overall. Android
devices sell more than Windows, iOS, and Mac OS X devices combined, with sales
in 2012, 2013 and 2014 close to the installed base of all PCs. As of July 2013, the
Google Play store has had over 1 million Android apps published, and over 50 billion
apps downloaded. A developer survey conducted in April/May 2013 found that 71%
of mobile developers develop for Android. At Google I/O 2014, the company
revealed that there were over 1 billion active monthly Android users, up from 537
million in June 2013. Android’s source code is released by Google under open
source licenses, although most Android devices ultimately ship with a combination
of open source and proprietary software. [9]
Figure 2.1 Android logo Figure 2.2 Android smartphone
Initially developed by Android Inc., which Google backed financially and later
bought in 2005, Android was unveiled in 2007 along with the founding of the Open
Handset Alliance, a consortium of hardware, software and telecommunication
companies devoted to advancing open standards for mobile devices. Android is
popular with technology companies which require a ready-made, low-cost and
customizable operating system for high-tech devices. Android’s open nature has
encouraged a large community of developers and enthusiasts to use the open-source
code as a foundation for community-driven projects, which add new features for
advanced users or bring Android to devices which were officially released running
other operating systems. The operating system’s success has made it a target for
patent litigation as part of the so called “smartphone wars” between technology
companies.
2.2.1 Android OS: A Walk from Past to Present
Android Inc. was founded in Palo Alto, California in October 2003 by Andy Rubin
(co-founder of Danger), Rich Miner (co-founder of Wildfire Communications, Inc.),
Nick Sears (once VP at T-Mobile) and Chris White (headed design and interface
development at WebTV) to develop, in Rubin’s words, “smarter mobile devices that
are more aware of its owner’s location and preferences”. The early intentions of the
company were to develop an advanced operating system for digital cameras, but
when it realized that the market for such devices was not large enough, the
company diverted its efforts to producing a smartphone operating system to rival
Symbian and Windows Mobile. Despite the past accomplishments of the founders and early
employees, Android Inc. operated secretly, revealing only that it was working on
software for mobile phones. That same year, Rubin ran out of money. Steve Perlman,
a close friend of Rubin, brought him $10,000 in cash in an envelope and refused a
stake in the company. Google acquired Android Inc. on August 17, 2005; key
employees of Android Inc. including Rubin, Miner and White, stayed at the company
after the acquisition. Not much was known about Android Inc. at the time but many
assumed that Google was planning to enter the mobile phone market with this move.
At Google, the team led by Rubin developed a mobile device platform powered by
the Linux kernel. Google marketed the platform to handset makers and carriers on
the promise of providing a flexible, upgradable system. Google had lined up a series
of hardware component and software partners and signaled to carriers that it was
open to various degrees of cooperation on their part. Speculation about Google’s
intention to enter the mobile communication market continued to build through
December 2006. An earlier prototype codenamed “Sooner” had a closer
resemblance to a BlackBerry phone, with no touchscreen, and a physical QWERTY
keyboard but was later reengineered to support a touchscreen, to compete with other
announced devices such as the 2006 LG Prada and 2007 Apple iPhone. In September
2007, InformationWeek covered an Evalueserve study reporting that Google had
filed several patent applications in the area of mobile telephony. On November 5,
2007, the Open Handset Alliance, a consortium of technology companies including
Google, device manufacturers such as HTC, Sony and Samsung, wireless carriers
such as Sprint Nextel and T-Mobile and chipset makers such as Qualcomm and
Texas Instruments, unveiled itself, with a goal to develop open standards for mobile
devices. That day, Android was unveiled as its first product, a mobile device
platform built on the Linux kernel version 2.6.25. The first commercially available
smartphone running Android was the HTC Dream, released on October 22, 2008. In
2010, Google launched its Nexus series of devices, a line of smartphones and tablets
running the Android operating system, and built by manufacturing partners. HTC
collaborated with Google to release the first Nexus smartphone, the Nexus One.
Google has since updated the series with newer devices, such as the Nexus 5 phone
(made by LG) and the Nexus 7 tablet (made by Asus). Google releases the Nexus
phones and tablets to act as their flagship Android devices, demonstrating Android’s
latest software and hardware features. On March 13, 2013 Larry Page announced in a
blog post that Andy Rubin had moved from the Android division to take on new
projects at Google. He was replaced by Sundar Pichai, who also continues his role as
the head of Google’s Chrome division, which develops Chrome OS. Since 2008,
Android has seen numerous updates which have incrementally improved the
operating system, adding new features and fixing bugs in previous releases.
Each major release is named in alphabetical order after a dessert or sugary treat; for
example, version 1.5 Cupcake was followed by 1.6 Donut. The latest released
version, 9.0 Pie, was released on August 6, 2018, following 8.0 Oreo, released on
August 21, 2017. From 2010 to 2013, Hugo
Barra served as product spokesperson for the Android team, representing Android at
both press conferences and Google I/O, Google’s annual developer-focused
conference. Barra’s product involvement included the entire Android ecosystem of
software and hardware, including Honeycomb, Ice Cream Sandwich, Jelly Bean and
KitKat operating system launches, the Nexus 4 and Nexus 5 smartphones, the Nexus
7 and Nexus 10 tablets, and other related products such as Google Now and Google
Voice Search, Google’s speech recognition product comparable to Apple’s Siri. In
2013 Barra left the Android team for Chinese smartphone maker Xiaomi.
Figure 2.3 Android market share
2.3 Java
Java is a general-purpose computer programming language that is concurrent, class-
based, object-oriented, [11] and specifically designed to have as few implementation
dependencies as possible. It is intended to let application developers “write once, run
anywhere” (WORA), [12] meaning that compiled Java code can run on all platforms
that support Java without the need for recompilation. [13] Java applications are
typically compiled to bytecode that can run on any Java virtual machine (JVM)
regardless of computer architecture. The Java Development Kit (JDK) is a development
environment for building applications, applets and components using the Java
programming language.
The JDK includes tools useful for developing and testing programs written in the
Java programming language and running on the Java platform.
2.4 Android Platform Architecture
Android is an open source, Linux-based software stack that is created for a wide
range of devices and form factors. The major components of the Android platform are
discussed below:
2.4.1 Linux Kernel
The Linux kernel is the foundation of the Android platform. The Android Runtime
(ART) relies on the Linux kernel for underlying functionalities such as threading and
low-level memory management. The Linux kernel also allows Android to take
advantage of key security features, and it lets Original Equipment Manufacturers use
Linux on their systems and have the drivers running before loading the other
components of the stack. This is pictured in figure 2.4 below.
Figure 2.4 Linux kernel
2.4.2 Android Runtime
Android Runtime (ART) runs multiple virtual machines on low-memory devices by
executing DEX files, a bytecode format optimized for a minimal memory footprint.
ART also provides ahead-of-time (AOT) and just-in-time (JIT) compilation, optimized
garbage collection (GC), and better debugging support. This is pictured in
figure 2.5 below.
Figure 2.5 Android runtime
2.4.3 Java API Framework
APIs form a bridge between the Android OS and the developer. The APIs act as the
building blocks for creating an Android app by simplifying the reuse of core, modular
system components and services.
The framework includes a View system to build an app’s UI, a resource manager to
allow access to non-code resources, a notification manager to display custom alerts, an
activity manager to manage the lifecycle of apps, and content providers to enable
apps to access data from other apps. This is pictured in figure 2.6 below.
Figure 2.6 Java API Framework
2.5 Android Services
A Service is an application component that can perform long-running operations in
the background, and it doesn't provide a user interface. Another application
component can start a service, and it continues to run in the background even if the
user switches to another application. Additionally, a component can bind to a service
to interact with it and even perform interprocess communication (IPC). For example,
a service can handle network transactions, play music, perform file I/O, or interact
with a content provider, all from the background.
There are three different types of services:
 Foreground: A foreground service performs some operation that is
noticeable to the user.
 Background: A background service performs an operation that isn't directly
noticed by the user.
 Bound: A service is bound when an application component binds to it by
calling bindService(). A bound service offers a client-server interface that
allows components to interact with the service, send requests, receive results,
and even do so across processes with interprocess communication (IPC). A
bound service runs only as long as another application component is bound to
it. Multiple components can bind to the service at once, but when all of them
unbind, the service is destroyed.
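The bound-service lifecycle described in the last bullet can be modeled in a few lines of plain Java (an illustrative sketch, not the Android API):

```java
import java.util.HashSet;
import java.util.Set;

// Model of the bound-service rule: the service stays alive while at least
// one client is bound and is destroyed when the last client unbinds.
public class BoundServiceModel {
    private final Set<String> boundClients = new HashSet<>();
    private boolean destroyed = false;

    public void bindService(String client) {
        if (!destroyed) boundClients.add(client);   // a component binds
    }

    public void unbindService(String client) {
        boundClients.remove(client);
        if (boundClients.isEmpty()) destroyed = true; // last client left
    }

    public boolean isRunning() {
        return !destroyed;
    }
}
```

With two clients bound, unbinding one keeps the service running; unbinding the second destroys it, mirroring the behavior of `bindService()` in Android.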
2.6 Broadcast Receivers
Android apps can send or receive broadcast messages from the Android system and
other Android apps, similar to the publish-subscribe design pattern. These broadcasts
are sent when an event of interest occurs. For example, the Android system sends
broadcasts when various system events occur, such as when the system boots up or
the device starts charging. Apps can also send custom broadcasts, for example, to
notify other apps of something that they might be interested in (for example, some
new data has been downloaded).
Apps can register to receive specific broadcasts. When a broadcast is sent, the system
automatically routes broadcasts to apps that have subscribed to receive that particular
type of broadcast. Generally speaking, broadcasts can be used as a messaging system
across apps and outside of the normal user flow. However, you must be careful not to
abuse the opportunity to respond to broadcasts and run background jobs that
can contribute to slow system performance. There are two major classes of
broadcasts that can be received:
 Normal broadcasts (sent with Context.sendBroadcast) are completely
asynchronous. All receivers of the broadcast are run in an undefined order,
often at the same time. This is more efficient, but means that receivers cannot
use the result or abort APIs included here.
 Ordered broadcasts (sent with Context.sendOrderedBroadcast) are delivered
to one receiver at a time. As each receiver executes in turn, it can propagate a
result to the next receiver or it can completely abort the broadcast so that it
won’t be passed to other receivers.
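The contrast between the two broadcast classes can be sketched with a minimal plain-Java model (illustrative only; Android's real dispatcher differs):

```java
import java.util.ArrayList;
import java.util.List;

// Tiny publish-subscribe model of Android's two broadcast styles: a normal
// broadcast notifies every receiver independently, while an ordered broadcast
// visits receivers one at a time, letting each pass a result forward or abort.
public class BroadcastModel {
    interface Receiver {
        // Returns the (possibly updated) result; returning null means "abort".
        String onReceive(String intent, String resultSoFar);
    }

    private final List<Receiver> receivers = new ArrayList<>();

    public void register(Receiver r) {
        receivers.add(r);
    }

    // Normal broadcast: every receiver sees the intent; results are ignored.
    public void sendBroadcast(String intent) {
        for (Receiver r : receivers) r.onReceive(intent, null);
    }

    // Ordered broadcast: receivers run in turn and can chain or abort a result.
    public String sendOrderedBroadcast(String intent, String initialResult) {
        String result = initialResult;
        for (Receiver r : receivers) {
            result = r.onReceive(intent, result);
            if (result == null) break; // a receiver aborted the broadcast
        }
        return result;
    }
}
```

In the ordered case each receiver sees the result left by the previous one, which is exactly the propagation-or-abort behavior described above.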
The following two steps are required to make a BroadcastReceiver work for system
broadcast intents:
 Creating the Broadcast Receiver
 Registering Broadcast Receiver
2.7 Media Player
The Android multimedia framework includes support for playing a variety of common
media types, so that you can easily integrate audio, video and images into your
applications. You can play audio or video from media files stored in your
application's resources (raw resources), from standalone files in the filesystem, or
from a data stream arriving over a network connection, all using MediaPlayer APIs.
The following classes are used to play sound and video in the Android framework:
MediaPlayer
This class is the primary API for playing sound and video.
AudioManager
This class manages audio sources and audio output on a device. Here is an example
of how to play audio that’s available as a local raw resource:
MediaPlayer mediaPlayer = MediaPlayer.create(context, R.raw.video_file_1);
mediaPlayer.start(); // no need to call prepare(); create() does that for you
2.8 Adobe Photoshop
Adobe Photoshop is a raster graphics editor developed and published by Adobe Inc.
for Windows and macOS. It was originally created in 1988 by Thomas and John
Knoll. Since then, this software has become the industry standard not only in raster
graphics editing, but in digital art as a whole. Photoshop can edit and compose raster
images in multiple layers and supports masks, alpha compositing, and several color
models including RGB, CMYK, CIELAB, spot color and duotone. Photoshop uses
its own .psd and .psb file formats to support these features. In addition to raster
graphics, this software has limited abilities to edit or render text and vector graphics
as well as 3D graphics and video. Its feature set can be expanded by plug-ins.
Photoshop was developed in 1987 by the brothers Thomas and John Knoll, who sold
the distribution license to Adobe Systems Incorporated in 1988. Upon loading
Photoshop, a sidebar with a variety of tools with multiple image-editing functions
appears to the left of the screen. These tools typically fall under the categories of
drawing; painting; measuring and navigation; selection; typing; and retouching.
 Moving: The move tool can be used to drag the entirety of a single layer or
more if they are selected. Alternatively, once an area of an image is
highlighted, the move tool can be used to manually relocate the selected piece
to anywhere on the canvas.
 Pen tool: Photoshop includes a few versions of the pen tool. The pen tool
creates precise paths that can be manipulated using anchor points.
 Magic wand: The magic wand tool selects areas based on pixels of similar
values. One click will select all neighboring pixels of similar value within a
tolerance level set by the user.
 Eraser: The Eraser tool erases content based on the active layer. If the user is
on the text layer, then any text across which the tool is dragged will be
erased. The eraser will convert the pixels to transparent, unless the
background layer is selected. The size and style of the eraser can be selected
in the options bar.
 Camera raw: With the Camera Raw plug-in, raw images can be processed
without the use of Adobe Photoshop Lightroom, along with other image file
formats such as JPEG, TIFF, or PNG.
 Shape tools: Photoshop provides an array of shape tools including rectangles,
rounded rectangles, ellipses, polygons and lines. These shapes can be
manipulated by the pen tool, direct selection tool etc. to make vector
graphics.
 Selection tools: Selection tools are used to select all or any part of a picture
to perform cut, copy, edit, or retouching operations.
 Lasso: The user can make a custom selection by drawing it freehand. There
are three options for the "lasso" tool – regular, polygonal, and magnetic. The
regular "lasso" tool allows the user to have drawing capabilities. Photoshop
will complete the selection once the mouse button is released. The "polygonal
lasso" tool will draw only straight lines, which makes it an ideal choice for
images with many straight lines. "Magnetic lasso" tool is considered the
smart tool. It can do the same as the other two, but it can also detect the edges
of an image once the user selects a starting point.
Figure 2.7 Adobe Photoshop Figure 2.8 Adobe Character Animator
2.9 Adobe Character Animator
Adobe Character Animator is a desktop application software product that combines
live motion-capture with a multi-track recording system to control layered 2D
puppets drawn in Photoshop or Illustrator. It is automatically installed with Adobe
After Effects CC 2015 to 2017 and is also available as a standalone application
which one can download separately. It is used to produce both live and non-live
animation.
Character Animator imports layered Adobe Photoshop and Adobe
Illustrator documents into puppets which have behaviors applied to them.
The puppets are then placed into a scene, which can be viewed in the Scene panel
and Timeline panel. Rigging is set up in the Puppet panel, though basic rigging is
fully automatic based on specific layer names like Right Eyebrow and Smile.
Properties of selected elements can be examined and changed in the Properties panel,
including behavior parameters. Live inputs include a webcam (for face-tracking),
microphone (for live lip sync), keyboard (for triggering layers to hide/show), and
mouse (for warping specific handles).
Final output of a scene can be exported to a sequence of PNG files and a WAV file,
or any video format supported by Adobe Media Encoder. Live output can be sent to
other applications running on the same machine via the Syphon protocol (Mac only)
or Adobe Mercury Transmit on both Mac and Windows. Scenes can also be dropped
directly into After Effects and Premiere Pro, using Dynamic Link to avoid rendering.
2.10 Adobe Media Encoder
Adobe Media Encoder is used to compress audio and/or video files. Typically, when
a project is rendered, it is rather large in file size. In order to make it play back
smoothly on devices without fast processors or large amounts of RAM, and to play
across cellular or Wi-Fi networks, the files must be compressed.
Compression comprises many different types of approaches/algorithms based upon
the content involved, how it will be delivered, and also, what level of compression is
acceptable to the creator and/or audience. Once all those variables are considered, it
can be processed through software like Adobe Media Encoder. There are many other
programs like it that do the same thing, with varying levels of speed and quality.
The output of this procedure is a file that looks and sounds much like the original,
but it is typically many orders of magnitude smaller in file size. People cannot
perceive the full range of detail in color and sound the way computers can, which is
why this procedure works so well.
Figure 2.9 Adobe Media Encoder
2.11 Android Studio
Android Studio is the official Integrated Development Environment (IDE) for
Android app development. It supports Java and Kotlin as development languages and
has many built-in features useful for Android development.
Figure 2.10 Android Studio
Chapter 3
Methodology of the Proposed System
In this work, we focused on designing a smartphone-based communication system
between deaf and general people and on implementing it as a smartphone
application. This chapter focuses on the overall system architecture of the
proposed system and the procedure to achieve it in detail.
3.1 Overview
In this work, our main concern is to set up a communication medium between
deaf-mute and general people. The Android app gives users a choice of modes. People
who can speak are provided with a simple list of Bangla words. A speech recognition
feature built into the application recognizes what the user says; if the speech
matches a word in the list, the corresponding signs are played sequentially through
the media player.
For deaf people, there is the option of a keyboard whose keys preview the signs of
the available Bangla characters. The typed text is displayed in an output box so that
general people can read it easily. The keystrokes are combined to form readable
Bangla text.
The animated characters are made with the help of Adobe software. Whenever a
general user speaks into the input service, the speech is sent to Google’s server,
which returns the recognized text in the specified language. The speech recognition
feature comes built into the Android operating system.
The overall process is shown in a block diagram presenting the features provided for
the two communities. If a general user speaks one of the provided words with a
suffix, the suffix is automatically discarded to find the root word. The signs for the
root word are then played sequentially, with no visible delay for the deaf/mute
community.
Figure 3.1 System overview
When the application starts, control is passed either to speech recognition or to the
keyboard, based on the user’s decision. After the flow completes, the launcher
ends.
3.2 Drawing puppet in Adobe Photoshop
In order to create animations for sign language, a puppet needs to be designed, and
Photoshop offers a great deal of useful features for this. The steps are described
below:
 First, a new Photoshop file is opened and saved as .psd at a suitable
resolution.
 A black background is selected.
 Using the ellipse tool, the head, eyes and pupils are drawn.
 By using rounded rectangle tool, ellipse tool, polygon tool, line tool and
custom shape tool, remaining shapes of body parts are drawn.
 Optimal strokes are selected for these tools.
 By using brush tool, particular body parts are colored accordingly.
 The drawn parts are arranged in the proper layer order (layers on top of
other layers are visible).
 The movable parts are kept under the same group so they can move together.
 The movable layers are given a ‘+’ sign in front of their names to mark them
as independent entities.
 The Photoshop file is then analyzed further in Adobe Character Animator.
The procedure is shown in figures 3.2 and 3.3:
Figure 3.2 Adobe photoshop puppet drawing procedure
Figure 3.3 Photoshop preview Figure 3.4 Puppet in rig mode
3.3 Generate Animated files in Adobe Character Animator
In order to generate the animated video files, the designed Photoshop puppet is
imported into Adobe Character Animator. The layers marked with a crown on the left
side of the panel are the independent layers imported from Photoshop. The puppet is
available in two modes: Rig mode and Scene mode. In Rig mode, the puppet can be
edited to set up the movement of particular parts. In Scene mode, the corresponding
video can be taken frame by frame. The video can then be analyzed further and
exported for rendering. The following procedure was used for the puppet:
 The Photoshop puppet is imported into Character Animator.
 The layers to be made independent are marked with a crown, and the layers
that should move with them are organized under them.
 The handle tool is added to parts that should stay fixed during a take.
 The stick tool is added to the arms so that they do not dangle.
 The dragger tool is added to the palm group of the hand so that it moves
with the mouse.
 Triggers are added so that multiple parts can be switched via keyboard input.
 The frame duration is set to 3 to 4 seconds.
 A frame rate of 3 fps works well.
 Pressing the record button records the scene.
 During recording, the movements are either triggered or dragged via the
mouse.
 For some animations, behaviors need to be set.
 Multiple frames can be taken by pressing the record button repeatedly.
 The recording can be previewed with the play button.
 If necessary, eye gaze can be captured with the laptop webcam by tracking
the expression on the operator’s face.
 Finally, the recorded animation can be exported for rendering to particular
software via the render option.
The mentioned procedure is shown in figures 3.4 and 3.5.
Figure 3.5 Animation procedure
3.4 Render using Adobe Media Encoder
The recorded animated files are opened in Adobe Media Encoder as a source. The
following procedure is followed: the Character Animator file is opened, the system
preset is set to an mp4 output, and the start button is pressed.
Figure 3.6 Render process
3.5 Convert Speech to Text
Android is supplied with a built-in speech recognition engine from Google, which
the following method uses.
The speech recognizer is initiated by passing language data to a RecognizerIntent;
the recognizer sends the captured voice input to Google’s server. A
sendOrderedBroadcast() call with ACTION_GET_LANGUAGE_DETAILS queries the supported
language details. The result of the recognizer intent is then obtained by overriding
onActivityResult(), where it is stored in an ArrayList.
Intent ob = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
ob.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
ob.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "bn");
ob.putExtra(RecognizerIntent.EXTRA_PROMPT, "নির্ধানিত শব্দগুচ্ছ থেকে েো বলুি");
try {
    startActivityForResult(ob, 100);
} catch (ActivityNotFoundException e) {
    Toast.makeText(getApplicationContext(),
            "Sorry, your device doesn't support speech input",
            Toast.LENGTH_LONG).show();
}
Intent detailsIntent = new Intent(RecognizerIntent.ACTION_GET_LANGUAGE_DETAILS);
sendOrderedBroadcast(detailsIntent, null, new LanguageDetailsChecker(), null,
        Activity.RESULT_OK, null, null);
@Override
public void onActivityResult(int request_code, int result_code, Intent ob) {
    super.onActivityResult(request_code, result_code, ob);
    if (request_code == 100) {
        if (result_code == RESULT_OK && ob != null) {
            ArrayList<String> result =
                    ob.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            resultText.setText(result.get(0));
            str = result.get(0);
        }
    }
}
Certain permissions, such as internet and audio recording, are required and are
declared in the manifest file:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
3.6 Show Signs
Animated sign videos generated by Media Encoder are stored in the Android Studio
raw folder. Each Bangla word is reduced to its root and checked against the available
signs; if a match is found, a MediaPlayer plays the video file.
Figure 3.7 Showing signs
Bangla word signs available in raw folder are:
উপি, উচ্চ, উপনিভাগ, পকি, নবোল, আকেনিো, এবং, থ াষণা, এনিল, এলাো, আগস্ট, অগাস্ট, শিৎোল, শিৎ,
বাংলাকেশ, আকগ, োল, োকলা, িাজর্ািী, চট্টগ্রাে, শহি, েকলজ, েহাকেশ, গণিা, থেশ, গরু, থিাজ, নেি, নিকেম্বি,
ঢাো, থিাগ, থজলা, নবভাগ, পৃনেবী, পূবধ, নিে, আট, আঠাকিা, আনশ, এগাকিা, ইউকিাপ, চানিনেে, দ্রুত, থেব্রুয়ানি, নেছু,
পকিি, পঞ্চাশ, আগুি, পাাঁচ, পতাো, খােয, খাওয়া, খাও, খাব, খাকব, থখকয়ছ, জিয, পক্ষোল, চনিশ, চাি, থচৌদ্দ,
শুক্রবাি, োও, থেওয়া, থেেি, নেভাকব, তাড়াতানড়, জািুয়ানি, আিন্দ, জুলাই, জুি, থহেন্তোল, থহেন্ত, অকিেনেি,
োচধ , থে, নেনিট, থোেবাি, োে, েোল, জানত, েখকিািা, িাত, িয়, উনিশ, িব্বই, দুপুি, উত্তি, িকভম্বি, এখি, এখাকি,
অকটাবি, এে, হাজাি, এে হাজাি, োত্র, অেবা, েেতল, বষধাোল, বষধা, চাউল, ভাত, োলাে, শনিবাি, থেকেম্বি, োত,
েকতি, েত্তি, অেুস্থ, ছয়, থষাল, ষাট, আোশ, েনক্ষণ, বেন্তোল, বেন্ত, গ্রীষ্ম, গ্রীষ্মোল, িনববাি, েশ, যখি, তখি,
থতি, নতনিশ, নতি, বৃহস্পনতবাি, আজ, আগােীোল, োলকে, েঙ্গলবাি, বাকিা, নবশ, দুই, নবশ্বনবেযালয়, পানি, িাস্তা,
বুর্বাি, েপ্তাহ, পনিে, নে, থোোয়, েখি, থেি, শীতোল, শীত, বছি , গতোল, শূিয, লাল, আোশী, িং, হইকত,
হকত, থেকে, স্বাগত, স্বাগতে, েহজ, থোজা, আনে, আোি, িিোল, োর্ািাণ, োেুনল, স্থায়ী, স্বয়ং, নিকজ, নিজ, আেিা,
আোকেি, তু নে, থতাোকেি, ঠিে, েঠিে, নবপেজিে, নবোয়, যনে, বহু, অকিে, েেেযা, নিজাভধ , লজ্জা, োকপাটধ , েেেধি,
লম্বা, শব্দ, অক্ষি, ড্রাইভাি, চালে, স্টু কিন্ট, ছাত্র, েকর্য, থবোি, েোি, শনি, শনিশালী, আকো, আেকব, একেকছা,
একেনছকল, আিম্ভ, শুরু, োিনেউ, েল, আলাো, আলাপ, েো, জয়, থখলা, থখলকছ, থখকলায়াড়, বল, গাছ, গাকছি োাঁটা,
িাব, তিেুজ, োাঁঠাল, থপাঁকপ, শাপলা, েু ল, চািাচু ি, নজলানপ, িশ্ন, েবেেয় , েেয়
Words are split using a simple for loop: whenever a space is detected, the preceding
word is stored in an array. Suffixes or inflections found after the root word are
discarded using a recursive procedure. The output is displayed via MediaPlayer after
fetching the sign from the raw folder.
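The splitting and suffix-discarding steps described above can be sketched in plain Java (an illustrative sketch; the root and suffix lists below are short transliterated stand-ins, not the application's actual Bangla data):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the word-splitting and suffix-stripping steps: a sentence is
// split on spaces, then each word is recursively reduced to a known root
// by peeling off trailing suffixes/inflections.
public class RootWordFinder {
    // Hypothetical transliterated sample data for illustration only.
    static final Set<String> ROOTS = new HashSet<>(Arrays.asList("kha", "jol"));
    static final String[] SUFFIXES = {"o", "b", "be", "chhi"};

    // Split a sentence with a simple loop over characters: whenever a
    // space is detected, the word collected so far is stored.
    static List<String> splitWords(String sentence) {
        List<String> words = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (char c : sentence.toCharArray()) {
            if (c == ' ') {
                if (current.length() > 0) words.add(current.toString());
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        if (current.length() > 0) words.add(current.toString());
        return words;
    }

    // Recursively discard trailing suffixes until a known root is found;
    // returns null when no root matches (no sign is played in that case).
    static String findRoot(String word) {
        if (ROOTS.contains(word)) return word;
        for (String suffix : SUFFIXES) {
            if (word.endsWith(suffix) && word.length() > suffix.length()) {
                String root =
                        findRoot(word.substring(0, word.length() - suffix.length()));
                if (root != null) return root;
            }
        }
        return null;
    }
}
```

For instance, with the sample data above, the inflected form "khabe" is reduced to the root "kha" before its sign is looked up.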
3.7 Keyboard
The Android keyboard service is implemented by inheriting from the class
InputMethodService and overriding its methods. The layout is designed in an xml
folder. The keyboard service works in the following way:
Whenever a key is pressed, it is checked against a key value in a Java class that
overrides the onKey() method of KeyboardView. The value of the Bangla character is
then committed into the text view of the output screen. For special keys such as
spacebar or enter, the keycode is checked and the corresponding character code is
committed into the text view.
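The key-commit logic can be modeled outside Android as a small plain-Java sketch (the key codes and character mapping here are hypothetical, for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

// Model of the keyboard's onKey() handling: each key code maps to a Bangla
// character that is appended ("committed") to the output text; special keys
// such as space and enter commit their own characters.
public class SignKeyboardModel {
    static final int KEY_SPACE = 32;
    static final int KEY_ENTER = 10;

    private final Map<Integer, String> keyMap = new HashMap<>();
    private final StringBuilder output = new StringBuilder();

    public SignKeyboardModel() {
        // Hypothetical key-code assignments for illustration.
        keyMap.put(1, "ক");
        keyMap.put(2, "খ");
        keyMap.put(3, "আ");
    }

    // Mirrors the role of KeyboardView.OnKeyboardActionListener#onKey.
    public void onKey(int primaryCode) {
        if (primaryCode == KEY_SPACE) {
            output.append(' ');
        } else if (primaryCode == KEY_ENTER) {
            output.append('\n');
        } else if (keyMap.containsKey(primaryCode)) {
            output.append(keyMap.get(primaryCode));
        }
    }

    public String getText() {
        return output.toString();
    }
}
```

Each keystroke appends one character, so a sequence of key presses builds up the readable Bangla sentence shown in the output box.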
The xml folder contains a keyboard layout file for each layout change that becomes
visible after pressing certain keys of the keyboard. Each row is designed separately.
The keyboard background is designed in one separate xml file, while the
keyPreviewLayout is designed in another.
First, the keyboard is launched by pressing the launch keyboard button on the home
screen. Upon launching, two buttons pop up: enable keyboard and select keyboard.
Since Android requires permission to select a keyboard, the enable keyboard button
takes the user to the Android system screen where the installed keyboards are
displayed and the user can choose to enable the keyboard.
A chooser of the installed keyboards pops up after pressing the select keyboard
button.
When the user presses a keyboard button, a preview pops up showing the sign for the
value of that button, and the corresponding value is committed into the output field.
Hence the keyboard works in a predefined way.
Figure 3.8 Sign language keyboard
Chapter 4
Implementation
The implementation of this system requires the design and development of
smartphone-based software with a Graphical User Interface (GUI) that serves as a
communication medium between people. In this chapter, the background tasks are
described with the necessary diagrams, which clearly illustrate the outcome of this
project.
4.1 Software Development
The proposed system is implemented as a mobile application. The application is
developed for the android platform.
4.1.1 Development Tools
The tools that have been used to implement the system are listed below:
 Software Development Kit
o Android
 Integrated Development Environment (IDE)
o Android Studio
4.2 Home Screen
At first, the application is launched by clicking its launcher icon. When the
application is launched, the home screen appears. The home screen contains all the
options for operating the application. Its user interface and activity layers are kept
simple and light for ease of use by users of all ages.
The home screen contains two buttons, one to convert speech to sign language and
the other to launch the keyboard. The two types of users select the button according
to their needs: deaf/mute people use the launch keyboard button, while general
people use the convert speech to sign language button.
4.3 Speech to Sign Conversion
From the home screen (figure 4.1), pressing the first button opens the conversion
screen. This screen shows a list of words on the left side as a ListView, shown in
figure 4.2. The list can be scrolled to see the available words; there are more than
200 Bangla words in the list. These words can be selected to view their
corresponding animated signs. If inflections or suffixes are added while speaking,
they are handled automatically.
Figure 4.1 Home Screen Figure 4.2 Speech to sign conversion
Figure 4.3 Speech recognition Figure 4.4 Recognized speech
Figure 4.5 Converted text Figure 4.6 Showing signs
A microphone button titled “tap on mic to speak” is provided (figure 4.3). Upon
pressing the button, Google speech recognition is initiated (figure 4.4). The user
provides his or her voice as soon as the microphone button is hit, and the voice is
sent to Google for Bangla speech recognition (figure 4.5). Another button shows the
sign output using Android’s MediaPlayer, as shown in figure 4.6.
4.4 Keyboard
People with hearing impairment choose the launch keyboard option, and a window is
opened:
(a) (b) (c)
Figure 4.7 (a) Keyboard setup screen (b) Typing on keyboard (c) Typed text
From this window the keyboard must be enabled and selected. The keyboard looks
like figure 4.7(b), and the typed text is shown in the output section in figure 4.7(c).
Hence the text can be understood by general people.
Chapter 5
Experimental Results
We tested the system with extensive experiments. In this section, we first describe
how the data were collected. Then we present the performance of the system and
compare it with existing systems.
5.1 Collecting requirements for the system
We asked various people whose ages range from 20 to 60 years. The people
included are:
 Principal of the deaf and mute school
 Guardians of the students of the school
 CUET students
We asked them about the problems of communication between deaf/mute and
general people. They opted for an application capable of swift communication.
For this purpose, before building the app, we visited a school named “Mute and
Deaf School, Muradpur, Chittagong” to observe real-life scenarios, where we
communicated with hearing impaired and mute children. We asked people about
their expectations, which could help us eradicate the communication gap.
Based on these conversations, we summarized the basic requirements:
 Voice to text conversion.
 Animated characters showing sign language.
 Signs should be played synchronously.
 Capable of handling Bangla words.
 Capable of handling Bangla words suffixes, inflections etc.
 Deaf/mute people should have some form of typing or displaying sign
method.
 Should have signs instead of Bangla letters in the keyboard.
 Share option would be preferable.
 For the animation, a puppet is comfortable to watch while the signs are
displayed in the application.
 Attractive theme would be appreciated.
5.2 Experimental Design and Procedure:
We visited “Mute and Deaf School, Muradpur, Chittagong” to have the application
tested by the deaf/mute students of the school as well as the teachers and some
general people. The honorable principal, Md. Habibur Rahman, helped us gather the
necessary data. We had 6 deaf and mute students (ages 8 to 14 years) and selected
2 teachers, 2 guardians and one commoner to evaluate our system.
Figure 5.1 Teacher explaining sign to student Figure 5.2 Student explaining sign
after using application
5.3 Experimental Result
The experiment was done in different phases during the school visit. At first, the
teachers spoke into the smartphone. The number of times words were converted to
the corresponding signs is given in table 5.1. The 5 Bangla words checked
sequentially in all the cases of tables 5.1 and 5.2 are বিকাল, পৃবিিী, আবি,
পক্ষকাল, সকাল ।
Table 5.1 demonstrates the success rate of the voice-to-animation module. The
accuracy of Google speech-to-text is around 95%, and our system shows an accuracy
of 90.67%. We realized that this accuracy could be higher if the internet connection
were better. Moreover, pronunciation varies from person to person, and some people
have a regional accent. For example, the word College (েকলজ) was pronounced as
Kholej by some users, and this was not detected correctly by the Google speech
recognizer.
Table 5.1 Number of times the application was able to convert from voice to signs
Deaf/mute
people
Teachers and general people (each having 5 trials for each student)
1 2 3 4 5
1 5 5 2 4 5
2 5 3 4 5 5
3 4 3 5 5 4
4 5 4 5 4 4
5 4 5 4 4 5
6 5 4 5 5 5
Mean 4.666667 4.166667 4.166667 4.5 4.666667
SD 0.516398 0.894427 1.169045 0.547723 0.516398
Success rate: 136/150 = 90.67%
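The Mean and SD rows of table 5.1 are sample statistics; a small plain-Java check (illustrative) reproduces the first column:

```java
// Computes the sample mean and sample standard deviation used in the
// result tables; checked here against the first column of table 5.1.
public class ColumnStats {
    static double mean(int[] xs) {
        double sum = 0;
        for (int x : xs) sum += x;
        return sum / xs.length;
    }

    // Sample standard deviation (divide by n - 1, as in table 5.1).
    static double sd(int[] xs) {
        double m = mean(xs);
        double ss = 0;
        for (int x : xs) ss += (x - m) * (x - m);
        return Math.sqrt(ss / (xs.length - 1));
    }
}
```

For the first column of table 5.1 (5, 5, 4, 5, 4, 5) this gives a mean of 4.666667 and a standard deviation of 0.516398, matching the table.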
However, not all of the signs were understood by the students of the school. The
converted signs that were actually understood by the students are given in table 5.2.
Table 5.2 Number of times the students actually recognized the signs
Deaf/mute
people
Teachers and general people (each having 5 trials
for each student) Mean SD
1 2 3 4 5
1 5 4 3 4 5 4.2 0.748331
2 5 4 3 5 4 4.2 0.748331
3 4 5 4 3 4 4 0.632456
4 5 4 4 4 4 4.2 0.4
5 4 5 3 4 4 4 0.632456
6 4 4 4 4 4 4 0.516398
Success Rate (102/136%) 75%
From Table 5.2 we can say that the recognition accuracy is 75%. We observed that the deaf students had difficulty recognizing some words whose signs involve 3D motion, for example আনশ. The overall accuracy is therefore 102/150 = 68%.
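The summary statistics reported in the tables can be reproduced in a few lines. The SD values in Table 5.2 are consistent with the population formula (dividing by n rather than n−1), which this sketch uses:

```python
def mean(xs):
    """Arithmetic mean of a list of scores."""
    return sum(xs) / len(xs)

def population_sd(xs):
    """Population standard deviation (divide by n), matching Table 5.2."""
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Row 1 of Table 5.2: scores given by the five evaluators for student 1.
row1 = [5, 4, 3, 4, 5]
print(mean(row1))           # 4.2, as in Table 5.2
print(population_sd(row1))  # ~0.7483, as in Table 5.2

# Success rate: signs recognized over signs converted (Table 5.2).
print(102 / 136)            # 0.75 -> 75%
```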
The keyboard had to be tested in a different way, and the principal of the school helped us in this case. During testing he showed specific signs to the chosen students, and the students had to type what they understood. The results are given in Table 5.3. The words checked in this procedure were বর্ষাকাল, অসুস্থ, গ্রীষ্মকাল, আগামীকাল, পাবি।
Table 5.3 Number of times the students typed the words using the keyboard

Participant    Perfectly typed words (out of 5)
1              5
2              3
3              5
4              4
5              4
6              3
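The keyboard behaviour exercised in this test, where each keystroke both enters a Bangla letter and displays its sign, can be modelled as a simple lookup on key press. The letters and image names in this Python sketch are hypothetical, not the app's actual assets:

```python
# Hypothetical mapping from Bangla letters to the sign image shown
# on the keyboard when that key is pressed.
LETTER_SIGNS = {
    "ক": "sign_ka.png",
    "খ": "sign_kha.png",
    "গ": "sign_ga.png",
}

def on_key_press(letter, typed_so_far=""):
    """Append the letter to the text field and return the updated text
    together with the sign image to display (None if no sign exists)."""
    return typed_so_far + letter, LETTER_SIGNS.get(letter)

text, sign = on_key_press("ক")
print(text, sign)  # ক sign_ka.png
```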
5.4 Subjective Evaluation
After finishing this session, we requested the principal to ask the deaf/mute students to score the following questions:
1. Are the animated signs understandable?
2. Is the application boring?
3. Is the keyboard flexible to type on?
We also asked the general people and teachers, along with the principal, to score the following questions:
4. Are you facing any problems while using the application?
5. Do you find this application easy to use?
6. How interactive is the application?
Participants answered each question on a 1-to-5 Likert scale, where 1 is the lowest evaluation and 5 is the highest.
[Chart data for Figures 5.3 and 5.4: mean scores Q1 = 4.33, Q2 = 1.67, Q3 = 4.33 (deaf/mute users); Q4 = 2.8, Q5 = 4.2, Q6 = 3.4 (teachers and general people).]
Table 5.4 Q1, Q2, Q3 answered by deaf/mute people

Question    Participants (deaf and mute users)
              1     2     3     4     5     6
Q1            4     5     5     4     3     5
Q2            2     1     2     2     1     2
Q3            5     4     4     4     5     4
Table 5.5 Q4, Q5, Q6 answered by teachers and general people

Question    Participants (general users and teachers)
              1     2     3     4     5
Q4            3     4     2     2     3
Q5            4     4     3     5     5
Q6            4     5     1     4     3
Figure 5.3 Column chart showing subjective evaluation of answering Q1, Q2, Q3
Figure 5.4 Column chart showing subjective evaluation of answering Q4, Q5, Q6
The evaluation indicates that the results are quite satisfactory for real-life use.
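The per-question means plotted in Figures 5.3 and 5.4 follow directly from the Likert scores in Tables 5.4 and 5.5, as a quick sketch shows:

```python
# Likert scores (1-5) taken from Table 5.4 (deaf/mute users)
# and Table 5.5 (teachers and general people).
q1 = [4, 5, 5, 4, 3, 5]   # "Are the animated signs understandable?"
q2 = [2, 1, 2, 2, 1, 2]   # "Is the application boring?"
q6 = [4, 5, 1, 4, 3]      # "How interactive is the application?"

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(q1), 2))  # 4.33
print(round(mean(q2), 2))  # 1.67 (low score = not boring)
print(round(mean(q6), 2))  # 3.4
```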
5.5 Testing
Testing is the process of evaluating a system or its components to determine whether it satisfies the specified requirements. This activity compares actual results against expected results and records the differences. In simple words, testing means executing a system in order to identify any gaps, errors, or missing requirements relative to the actual requirements.
Testing is the practice of making objective judgements regarding the extent to which the system meets, exceeds, or fails to meet its stated objectives.
A good testing program is a tool for both the agency and the integrator/supplier: it typically marks the end of the development phase of the project, establishes the criteria for project acceptance, and establishes the start of the warranty period.
5.5.1 Purpose of Testing
There are two fundamental purposes of testing:
 Verifying the procurement specification
 Managing risk
First, testing verifies that what was specified is what was delivered: the product (system) meets the functional, performance, design, and implementation requirements identified in the procurement specifications.
Second, testing manages risk for both the acquiring agency and the system's vendor/developer/integrator. The testing program is used to identify when the work has been completed, so that the contracts can be closed, the vendor paid, and the system shifted by the agency into the warranty and maintenance phase of the project.
Following are some of the important factors for which testing of an application is required:
 To reduce the number of bugs in the code.
 To provide a quality product.
 To verify whether all the requirements are met.
 To satisfy the customer's needs.
 To provide bug-free software.
 To build confidence in the reliability of the software.
 To prevent users from encountering problems.
 To verify that the system behaves as specified.
 To validate that what has been specified is what the user actually wanted.
5.5.2 Black Box Testing
There are different methods that can be used for software testing. Testing without any knowledge of the interior workings of the application is called black box testing. The tester is oblivious to the system architecture and does not have access to the source code. Typically, when performing a black box test, a tester interacts with the system's user interface by providing inputs and examining outputs, without knowing how and where the inputs are processed.
5.5.3 Black Box Testing of the Project
Table 5.6 Test cases of black box testing

No    Description           Expected Result                           Status
1     Launch application    Application runs on the system            Yes
2     Layouts visible       All layouts are visible                   Yes
3     Button press          All buttons are working                   Yes
4     Speech to text        Bangla speech converted to text           Yes
5     Signs preview         Signs shown for available Bangla words    Yes
6     Keyboard working      Each keystroke is detected                Yes
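The cases in Table 5.6 can be organized as a table-driven check that drives the system only through its public interface, in the spirit of black-box testing. The actions in this Python sketch are stand-ins for the real application actions, not the app's actual code:

```python
# Table-driven black-box style check: each case pairs a description with
# an action exercised only through its public interface and the expected
# result. The lambdas below are hypothetical stand-ins for real actions.
def run_cases(cases):
    results = []
    for description, action, expected in cases:
        try:
            passed = action() == expected
        except Exception:
            passed = False  # a crash counts as a failed case
        results.append((description, "Pass" if passed else "Fail"))
    return results

cases = [
    ("Launch application", lambda: "running", "running"),
    ("Button press",       lambda: True, True),
]
for desc, status in run_cases(cases):
    print(desc, status)
```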
5.6 Conclusion
The experiments, and the experience gained in designing and analyzing this application, led us to some interesting findings. First of all, our application can positively impact the current mechanisms used for notifying exceptional circumstances, and it can become a breakthrough in communication between deaf/mute and hearing people.
Secondly, the application was shown to be accepted by potential users. The large majority of the interviewed users declared after the experiments that they would use this application when needed. Moreover, the deaf/mute participants were pleasantly surprised to see the application in action, as it would allow them to communicate with the rest of society. This is reflected in the good experience they had during the trials.
The last findings are related to the application interface. Users are generally annoyed by long tasks, so designers should provide direct, short navigation through the application; otherwise users will prefer traditional calls. Finally, feedback is a critical factor for this type of application, and our implemented application succeeded in gaining satisfactory reviews from all groups of people.
Chapter 6
Conclusion and Future Recommendation
In this chapter, we conclude our developed system in section 6.1 and describe recommendations for its further improvement in section 6.2.
6.1 Conclusion
This work is an Android-based platform between hearing and deaf/mute people: a useful application for people of all ages, but mainly for deaf or mute people who lack a common communication medium. The application can be used to educate deaf or mute people and allow them to participate with us in society and become self-dependent. Although there is room for further improvement, it can serve as a framework for such development. Our application can remove the hidden barrier between the hearing impaired and hearing people by providing a communication medium on both ends. The most important points pursued during the research are:
 A smartphone application was designed that enables communication between two people when an internet connection is available.
 Google voice recognition is very accurate.
 The keyboard is helpful because it shows the sign for each Bangla letter.
 Communication can be established and carried out quickly, which makes users confident about participating in every aspect of society.
 The user interface is user friendly, so people of all ages can use the application without hassle.
 Although the animations are not 100% accurate, users understood and appreciated them.
6.2 Future Recommendations
Many recommendations for future work emerged from our survey, as people from all walks of life gave suggestions to make this project a complete package. Since fulfilling the users' needs is our utmost goal, the following areas can be considered for future improvement:
 A log-in system can be introduced for users.
 The word list can be expanded to support a broader range of Bangla words.
 A messaging option can be introduced.
 A Facebook auto-share option can be introduced.
 The keyboard layout might be refined.
Bhosari ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For ...
 
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
 
Unit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdfUnit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdf
 
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
 
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
 
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
notes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.pptnotes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.ppt
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . ppt
 

An Android Communication Platform between Hearing Impaired and General People

  • 4. iv Acknowledgement I am grateful to Almighty Allah, who has given me the ability to complete this project and to pursue the B.Sc. Engineering degree. I am indebted to my supervisor, Shayla Sharmin, Assistant Professor, Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, for her encouragement, proper guidance, constructive criticism and endless patience throughout the progress of the project. She supported me by providing books, conference and journal papers, and effective advice. From the very beginning she encouraged me with proper guidance, so the project never seemed a burden to me, and she motivated me to complete the final thesis on time. I want to express my gratitude to all the other teachers of our department for their sincere and active cooperation in completing the project work. Finally, I would like to thank my parents for their steady love and support during my study period.
  • 5. v Abstract Although an enormous number of people in our society are deaf and mute, there is a large communication gap between them and the rest of society. People without hearing impairment can listen and speak, but deaf people cannot; instead, they use signs to communicate, and it is very hard for them to communicate with hearing people. A mechanism through which these two communities can communicate effectively would therefore be valuable, and with recent technology a portable device is preferable for this purpose; one such platform is the smartphone. We have developed an Android-based application that helps build a connection between general and hearing-impaired people. The system employs a Bangla voice recognition feature through which general users can speak into the application; more than 200 Bangla words are available in the application. Whenever speech is detected, the words are separated, translated into sign language animations and played sequentially. Conversely, there is a keyboard for deaf/mute users, with which they can express themselves in text readable by general users. The project was tested in real life by students of the deaf and mute school in Muradpur, Chittagong, to evaluate it for real-life use. The test results showed that the developed system functions well and attains satisfactory results in establishing communication; both subjective evaluation and black-box testing produced satisfactory outcomes. Our proposed system, which uses the smartphone's built-in microphone, together with these test results, makes it a strong candidate for an adaptive, user-oriented communication system for hearing-impaired people.
  • 6. vi Table of Contents
Chapter 1 Introduction … 1
1.1 Present State of the Problem … 2
1.2 Motivation … 3
1.3 Contributions … 4
1.4 Organization of the Paper … 4
Chapter 2 Literature Review … 5
2.1 Related Works … 5
2.2 Android … 7
2.2.1 Android OS: A Walk from Past to Present … 8
2.3 Java … 10
2.4 Android Platform Architecture … 11
2.4.1 Linux Kernel … 11
2.4.2 Android Runtime … 11
2.4.3 Java API Framework … 12
2.5 Android Services … 12
2.6 Broadcast Receivers … 13
2.7 MediaPlayer … 14
2.8 Adobe Photoshop … 15
2.9 Adobe Character Animator … 16
2.10 Adobe Media Encoder … 17
2.11 Android Studio … 18
  • 7. vii
Chapter 3 Methodology of the Proposed System … 19
3.1 Overview … 19
3.2 Drawing Puppet in Adobe Photoshop … 20
3.3 Generate Animated Files in Adobe Character Animator … 22
3.4 Render Using Adobe Media Encoder … 23
3.5 Convert Speech to Text … 24
3.6 Show Signs … 25
3.7 Keyboard … 26
Chapter 4 Implementation … 28
4.1 Software Development … 28
4.1.1 Development Tools … 28
4.2 Home Screen … 28
4.3 Speech to Sign Conversion … 28
4.4 Keyboard … 30
Chapter 5 Experimental Results … 31
5.1 Experiment Data … 31
5.2 Experimental Design and Procedure … 32
5.3 Experimental Result … 32
5.4 Subjective Evaluation … 34
5.5 Testing … 36
5.5.1 Purpose of Testing … 36
5.5.2 Black Box Testing … 37
  • 8. viii
5.5.3 Black Box Testing of the Project … 37
5.6 Conclusion … 38
Chapter 6 Conclusion and Future Recommendation … 39
6.1 Conclusion … 39
6.2 Future Recommendations … 40
References … 41
  • 9. ix List of Figures
Figure 1.1 Communication problem … 4
Figure 2.1 Android logo … 7
Figure 2.2 Android smartphone … 7
Figure 2.3 Android market share … 10
Figure 2.4 Linux kernel … 11
Figure 2.5 Android runtime … 12
Figure 2.6 Java API Framework … 12
Figure 2.7 Adobe Photoshop … 16
Figure 2.8 Adobe Character Animator … 16
Figure 2.9 Adobe Media Encoder … 18
Figure 2.10 Android Studio … 18
Figure 3.1 System overview … 20
Figure 3.2 Adobe Photoshop puppet drawing procedure … 21
Figure 3.3 Photoshop preview … 22
Figure 3.4 Puppet in rig mode … 22
Figure 3.5 Animation procedure … 23
Figure 3.6 Render process … 23
Figure 3.7 Showing signs … 25
Figure 3.8 Sign language keyboard … 27
Figure 4.1 Home screen … 29
Figure 4.2 Speech to sign conversion … 29
Figure 4.3 Speech recognition … 29
Figure 4.4 Recognized speech … 29
Figure 4.5 Converted text … 29
Figure 4.6 Showing signs … 29
Figure 4.7 (a) Keyboard setup screen (b) Typing on keyboard (c) Typed text … 30
Figure 5.1 Teacher explaining sign to student … 32
Figure 5.2 Student explaining sign after using application … 32
Figure 5.3 Column chart showing subjective evaluation of answering Q1, Q2, Q3 … 35
Figure 5.4 Column chart showing subjective evaluation of answering Q4, Q5, Q6 … 35
  • 10. x List of Tables
Table 5.1 Number of times the application was able to convert from voice to signs … 33
Table 5.2 Number of times the students actually recognized the signs … 33
Table 5.3 Number of times the students typed the words using the keyboard … 34
Table 5.4 Q1, Q2, Q3 answered by deaf/mute people … 35
Table 5.5 Q4, Q5, Q6 answered by teachers and general people … 35
Table 5.6 Test case of black box testing … 37
  • 11. 1 Chapter 1 Introduction Communication is one of the basic human needs. People without hearing impairment can listen and speak, but deaf people cannot; instead, they use signs to communicate, and these signs are known as sign language. Sign language is a visual language that uses various body movements as a method of communication. Like natural languages, different forms of sign language are used in different countries around the world. There is a large communication gap between general people and the deaf community, and it is very hard for deaf people to communicate with hearing people. In most cases, sign languages are used only for direct visual communication, such as video broadcasts or interpersonal communication. Sign languages are usually grammatically different from their natural language counterparts. Many deaf people may not know the natural language at all, while many general people do not know how to communicate in sign language. These differences make it very difficult for the two groups to communicate effectively without a translator, and there are not enough human interpreters to ensure that every personal communication between the two communities can take place. According to the "National Survey on Prevalence of Hearing Impairment in Bangladesh 2013", published by the WHO/SEARO Country Office for Bangladesh and the Ministry of Health and Family Welfare (BSMMU), hearing impairment is the second most common form of disability. One small-scale study done in 2002 reported to WHO a prevalence of 7.9% hearing impairment (in the better ear) among Bangladeshi people. In addition, Bangladesh has a population of over 130 million according to the Population Census 2001 National Report (Provisional), Bangladesh Bureau of Statistics, Dhaka, Bangladesh, July 2003.
Moreover, about 13 million people suffer from varying degrees of hearing loss, of whom 3 million suffer from severe to profound hearing loss leading to disability, according to Amin MN, Prevention of Deafness and Primary Ear Care (Bengali), Society for Assistance to Hearing Impaired Children (SAHIC), Mohakhali, Dhaka-1212, Bangladesh. This is therefore a serious issue that deserves priority.
  • 12. 2 This problem causes economic, social, educational and vocational difficulties both for the affected people and for the country. In addition, according to the World Federation of the Deaf (2009), around 90% of the world's deaf children and adults have never been to school and are thus more or less illiterate. Such problems could be avoided if a digital translator existed between hearing society and deaf people. Sources: http://www.searo.who.int/bangladesh/publications/national_survey/en/ and http://wfdeaf.org/ [2]. In our day-to-day life, we, the general people, frequently use signs along with verbal language to communicate with others. In a nutshell, to express feelings or explain what we want to say, we use sign language, facial expressions and verbal language together. Deaf and mute people, however, have only one medium to express themselves: sign language; and not everyone knows the signs they use. Moreover, there is no Bangla-medium system that provides an easy solution ensuring two-way communication between deaf and mute people and the general public. Considering these facts, we have developed a system that converts Bangla speech to Bangla sign language, together with a sign language keyboard that helps deaf and mute people construct words and sentences. We hope that the proposed system will be useful for deaf pupils, for parents of deaf children, and for anyone who is in contact with deaf people and needs to learn sign language in a real-life environment. Moreover, it can address the need for tools and appliances that would not only help members of the deaf community communicate with the hearing mainstream of society but also enable them to access basic education more easily. 1.1 Present State of the Problem Some notable works have been done in recent years.
One approach maintained a database of about 1,000 words, and human-recorded videos were concatenated with each other to display the sign output.
  • 13. 3 In one work, Unicode is used for syntactic and morphological analysis to detect inflections, suffixes, etc. Research has also been conducted on establishing a mobile platform between a deaf/mute person and a hearing person, where a speech recognition engine was trained to recognize Bangla speech as text, and corresponding still images of signs are shown. Some advanced Bangla keyboard layouts for deaf people, including swipe-and-press keyboards, have also been introduced. 1.2 Motivation The main motivation for implementing this application is reducing the communication barrier between the deaf community and general people. The latest achievements in different fields of technology allow us to integrate these technologies successfully, and by reducing this barrier, awkward situations can be avoided. The main motivation comes from improving the condition of the deaf community, many of whom cannot participate in the mainstream of society. Many of them want an education and want to interact with local people, but the language barrier does not allow it. Most often the problem arises from general people's inability to understand their signs. So, the goal of this project is to develop and implement a reliable system for these two communities. The need for such a mechanism increases even more because, in this era of technology, platforms exist to support it. One such platform, and a very common one, is the smartphone. Almost everyone today carries a smartphone, as they become more and more affordable and easily available. According to one report, 78.1% of all smartphones sold in 2013 ran the Android operating system [9]. Hence, we have decided to develop an Android application that lets its users communicate with each other easily. This mechanism is very useful and can be used in a variety of ways.
  • 14. 4 Figure 1.1 Communication problem 1.3 Contributions The objective of this project was to recognize Bangla speech and convert it to sign language for deaf/mute people. Another objective was to develop a sign language keyboard that converts Bangla signs into Bangla text. I have tried my best to fulfill these objectives. I have managed to convert speech to Bangla text using the Google speech API engine, which does the job very well. As for converting Bangla words into animated signs, I have managed to do so for more than 200 words, and the animations work well according to the tests performed. The keyboard feature is also equipped with signs for all available Bangla letters; upon pressing a button, the value of the key is shown on the screen, so Bangla text and whole sentences can be formed. 1.4 Organization of the Paper The following chapters go through the different aspects of this project. Chapter 1 gives an introductory overview of our project. Chapter 2 gives an overview of the terminology related to the project and a brief discussion of previous works with their limitations. Chapter 3 describes the working procedure of our project. In Chapter 4, we illustrate our implementation of the project in detail. Chapter 5 centers on the experimental results and evaluation of the proposed system. The paper concludes with a summary of our work and future recommendations for further improvements in Chapter 6. This paper contains one appendix, intended for readers who wish to explore certain topics in greater depth.
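The speech-to-sign pipeline summarized in the Contributions section — recognized speech split into words, each word mapped to a prerecorded sign animation, and the clips played in sequence — can be sketched as a minimal plain-Java model. This is a hedged illustration only: the class name, dictionary entries and clip paths are hypothetical, and the application's actual Android speech recognition and video playback are omitted.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the word-to-sign lookup; entries are placeholders,
// not the application's real 200+ word dictionary.
public class SignPlaylist {
    private final Map<String, String> dictionary = new HashMap<>();

    public SignPlaylist() {
        dictionary.put("ami", "signs/ami.mp4");
        dictionary.put("bhat", "signs/bhat.mp4");
        dictionary.put("khai", "signs/khai.mp4");
    }

    // Split recognized speech into words and collect the clips to play, in order.
    public List<String> clipsFor(String recognizedText) {
        List<String> playlist = new ArrayList<>();
        for (String word : recognizedText.trim().split("\\s+")) {
            String clip = dictionary.get(word.toLowerCase());
            if (clip != null) {
                playlist.add(clip); // known word: queue its animation
            }                       // unknown word: skipped in this sketch
        }
        return playlist;
    }

    public static void main(String[] args) {
        SignPlaylist sp = new SignPlaylist();
        System.out.println(sp.clipsFor("ami bhat khai"));
    }
}
```

In the real application, each queued clip would be handed to a video player (e.g. Android's MediaPlayer) one after another rather than printed.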
  • 15. 5 Chapter 2 Literature Review In this chapter, we present studies on the terminology related to the project that is important to understand. This chapter also contains a brief discussion of related previous works. 2.1 Related Works Some notable work has been done on identifying sign language images or body movements to develop a medium for deaf people, but much less has been done in the opposite direction. One such work, by researchers from India, developed a translator from Bangla text to sign language. In the process, they maintained a dictionary of about 1000 words, each stored against a unique id, grammar id, path and filename. With sentence mode enabled, the corresponding Bangla word is fetched according to the matched id, and individual video clips are concatenated one after another in the correct sequence to generate a video clip representing the sign language output for the input text. [1] The same type of work is presented in another study [2], where the input text is broken down and analyzed at the syntactic and morphological levels. Unicode, which provides a unique code point for every character regardless of platform, program or language, is used in this research. First, unusual or rarely used special characters are removed through syntactic analysis. Then the text is ordered according to the rules of Bangla grammar through morphological analysis. A database of root words is maintained, which also contains the part of speech of each word. Words are stored in a tree-like structure in which a sign image is provided only for the parent word, to avoid duplication. Important research has been conducted in [3], which presents an empirical framework for tokenizing and parsing three types of Bangla sentences: assertive, interrogative and imperative. In this work, the input sequence is taken first.
Then the program breaks the string into individual words called tokens.
  • 16. 6 A lexicon containing the part of speech of each word is maintained, and Context-Sensitive Grammar (CSG) rules are used to process the tokens of the input sequence and generate a parse tree, or structural representation, according to those rules. A two-way communication channel on mobile platforms between a deaf/mute person and a hearing person is established in [4]. In that work, an open-source speech recognition engine called CMU Sphinx is used to convert Bangla speech to Bangla sign language. Sphinx defines Bangla phonetics, words and grammars, recognizes speech input and converts it to phonetic text. The set of words is stored in a dictionary; sentences built from them are stored as a Bangla language model file, and audio recordings are stored as a Bangla acoustic model file. Using these files, a training process is run that breaks recorded speech into phonetic text. The text is converted into Bangla words, which are matched against the database; if a match is found, the corresponding image is fetched from the database and shown on screen. On the other end, impaired persons have to write Bangla text, which is converted to speech using the Google Translation Server. This is a shortcoming, because impaired persons may not know written Bangla at all, which motivates designing a new keyboard for them. [4] A keyboard design for deaf people is introduced in [6], and a Bangla layout is described in [5]. The swipe-and-press layout and the sign keyboard can be combined so that deaf people can communicate with hearing people; the swipe-and-press layout requires less screen space, and both can be implemented on Android phones. The sign language keyboard also supports speech-to-text and text-to-speech, where the text-to-speech path uses an algorithm in which a morphological analyzer module determines the stem and part of speech of each individual word. After preprocessing, a search is performed to find an exact match of the word using a hash value. If none is found, the Levenshtein distance between the stored stems and the stem of the word under search is calculated, and the entry with the minimum distance, provided it is less than two, is selected from the database. If such a word is still not found, the word is broken into letters and signs are retrieved letter by letter.
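The three-stage lookup described above — exact match by hash, then the nearest stem within Levenshtein distance less than two, then letter-by-letter fallback — can be sketched as follows. The dictionary contents here are placeholders, not the cited system's actual data.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Sketch of the three-stage lookup: exact match, then closest stem within
// Levenshtein distance < 2, then fall back to individual letters.
public class SignLookup {
    // Standard dynamic-programming Levenshtein edit distance.
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        return d[a.length()][b.length()];
    }

    static List<String> lookup(String word, Set<String> dict) {
        if (dict.contains(word)) return List.of(word);       // stage 1: exact match
        for (String entry : dict)                            // stage 2: near match
            if (levenshtein(entry, word) < 2) return List.of(entry);
        List<String> letters = new ArrayList<>();            // stage 3: spell it out
        for (char c : word.toCharArray()) letters.add(String.valueOf(c));
        return letters;
    }

    public static void main(String[] args) {
        Set<String> dict = Set.of("water", "eat", "school");
        System.out.println(lookup("water", dict)); // exact match
        System.out.println(lookup("watar", dict)); // near match, distance 1
        System.out.println(lookup("xyz", dict));   // no match: letter signs
    }
}
```

A real implementation would return sign video paths rather than words, and would search a hashed stem index instead of iterating over the whole dictionary.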
  • 17. 7 Android apps developed on different platforms can communicate with one another through specific mechanisms. [7] Inter-app communication can be established using the predefined 'Intent' mechanism built into the Java package. An animation-based teaching assistant is presented with the help of 'WebSign', which is based on avatar technology (animation in a virtual world); it translates transcriptions of natural spoken languages into real-time sign language animation. [8] The actual sign for each word is played from a dictionary. 2.2 Android Android is a mobile operating system (OS) [10] based on the Linux kernel and currently developed by Google. With a user interface based on direct manipulation, Android is designed primarily for touchscreen mobile devices such as smartphones and tablet computers, with specialized user interfaces for televisions (Android TV), cars (Android Auto) and wristwatches (Android Wear). The OS uses touch inputs that loosely correspond to real-world actions, like swiping, tapping, pinching and reverse pinching, to manipulate on-screen objects, together with a virtual keyboard. Despite being primarily designed for touchscreen input, it has also been used in game consoles, digital cameras, regular PCs and other electronics. Android is the most widely used mobile OS and, as of 2013, the most widely used OS overall. Android devices sell more than Windows, iOS and Mac OS X devices combined, with sales in 2012, 2013 and 2014 close to the installed base of all PCs. As of July 2013, the Google Play store had over 1 million Android apps published and over 50 billion apps downloaded. A developer survey conducted in April/May 2013 found that 71% of mobile developers develop for Android. At Google I/O 2014, the company revealed that there were over 1 billion active monthly Android users, up from 537 million in June 2013.
Android’s source code is released by Google under open source licenses, although most Android devices ultimately ship with a combination of open source and proprietary software. [9]
  • 18. 8 Figure 2.1 Android logo Figure 2.2 Android smartphone Initially developed by Android Inc., which Google backed financially and later bought in 2005, Android was unveiled in 2007 along with the founding of the Open Handset Alliance, a consortium of hardware, software and telecommunication companies devoted to advancing open standards for mobile devices. Android is popular with technology companies which require a ready-made, low-cost and customizable operating system for high-tech devices. Android’s open nature has encouraged a large community of developers and enthusiasts to use the open-source code as a foundation for community-driven projects, which add new features for advanced users or bring Android to devices which were officially released running other operating systems. The operating system’s success has made it a target for patent litigation as part of the so-called “smartphone wars” between technology companies. 2.2.1 Android OS: A Walk from Past to Present Android Inc. was founded in Palo Alto, California in October 2003 by Andy Rubin (co-founder of Danger), Rich Miner (co-founder of Wildfire Communications, Inc.), Nick Sears (once VP at T-Mobile) and Chris White (who headed design and interface development at WebTV) to develop, in Rubin’s words, “smarter mobile devices that are more aware of their owner’s location and preferences”. The early intention of the company was to develop an advanced operating system for digital cameras, but when it was realized that the market for such devices was not large enough, it diverted its efforts to producing a smartphone operating system to rival those of Symbian and Windows Mobile. Despite the past accomplishments of the founders and early employees, Android Inc. operated secretly, revealing only that it was working on software for mobile phones. That same year, Rubin ran out of money. Steve Perlman, a close friend of Rubin, brought him $10,000 in cash in an envelope and refused a
  • 19. 9 stake in the company. Google acquired Android Inc. on August 17, 2005; key employees of Android Inc., including Rubin, Miner and White, stayed at the company after the acquisition. Not much was known about Android Inc. at the time, but many assumed that Google was planning to enter the mobile phone market with this move. At Google, the team led by Rubin developed a mobile device platform powered by the Linux kernel. Google marketed the platform to handset makers and carriers on the promise of providing a flexible, upgradable system. Google had lined up a series of hardware component and software partners and signaled to carriers that it was open to various degrees of cooperation on their part. Speculation about Google’s intention to enter the mobile communication market continued to build through December 2006. An earlier prototype codenamed “Sooner” had a closer resemblance to a BlackBerry phone, with no touchscreen and a physical QWERTY keyboard, but was later reengineered to support a touchscreen, to compete with other announced devices such as the 2006 LG Prada and 2007 Apple iPhone. In September 2007, InformationWeek covered an Evalueserve study reporting that Google had filed several patent applications in the area of mobile telephony. On November 5, 2007, the Open Handset Alliance, a consortium of technology companies including Google, device manufacturers such as HTC, Sony and Samsung, wireless carriers such as Sprint Nextel and T-Mobile and chipset makers such as Qualcomm and Texas Instruments, unveiled itself, with a goal to develop open standards for mobile devices. That day, Android was unveiled as its first product, a mobile device platform built on the Linux kernel version 2.6.25. The first commercially available smartphone running Android was the HTC Dream, released on October 22, 2008.
In 2010, Google launched its Nexus series of devices, a line of smartphones and tablets running the Android operating system and built by manufacturing partners. HTC collaborated with Google to release the first Nexus smartphone, the Nexus One. Google has since updated the series with newer devices, such as the Nexus 5 phone (made by LG) and the Nexus 7 tablet (made by Asus). Google releases the Nexus phones and tablets to act as its flagship Android devices, demonstrating Android’s latest software and hardware features. On March 13, 2013, Larry Page announced in a blog post that Andy Rubin had moved from the Android division to take on new projects at Google. He was replaced by Sundar Pichai, who also continues his role as
  • 20. 10 the head of Google’s Chrome division, which develops Chrome OS. Since 2008, Android has seen numerous updates which have incrementally improved the operating system, adding new features and fixing bugs in previous releases. Each major release is named in alphabetical order after a dessert or sugary treat; for example, version 1.5 Cupcake was followed by 1.6 Donut. The latest released version, 9.0 Pie, was released on August 6, 2018, following the release of 8.0 Oreo on August 21, 2017. From 2010 to 2013, Hugo Barra served as product spokesperson for the Android team, representing Android at both press conferences and Google I/O, Google’s annual developer-focused conference. Barra’s product involvement included the entire Android ecosystem of software and hardware, including the Honeycomb, Ice Cream Sandwich, Jelly Bean and KitKat operating system launches, the Nexus 4 and Nexus 5 smartphones, the Nexus 7 and Nexus 10 tablets, and other related products such as Google Now and Google Voice Search, Google’s speech recognition product comparable to Apple’s Siri. In 2013 Barra left the Android team for Chinese smartphone maker Xiaomi. Figure 2.3 Android market share 2.3 Java Java is a general-purpose computer programming language that is concurrent, class-based, object-oriented, [11] and specifically designed to have as few implementation dependencies as possible. It is intended to let application developers “write once, run anywhere” (WORA), [12] meaning that compiled Java code can run on all platforms
  • 21. 11 that support Java without the need for recompilation. [13] Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. The JDK is a development environment for building applications, applets and components using the Java programming language. The JDK includes tools useful for developing and testing programs written in the Java programming language and running on the Java platform. 2.4 Android Platform Architecture Android is an open source, Linux-based software stack created for a wide range of devices and form factors. The major components of the Android platform are discussed below: 2.4.1 Linux Kernel The Linux kernel is the foundation of the Android platform. For functionalities like threading and low-level memory management, the Android Runtime (ART) relies on the Linux kernel. The Linux kernel also allows Android to take advantage of key security features. Thus, original equipment manufacturers can use Linux on their systems and have the drivers running before loading other components of the stack. This is pictured in figure 2.4 below. Figure 2.4 Linux kernel 2.4.2 Android Runtime
  • 22. 12 The Android Runtime (ART) is used to run multiple virtual machines on low-memory devices by executing DEX files, a bytecode format optimized for a minimal memory footprint. ART can also perform ahead-of-time (AOT) and just-in-time (JIT) compilation and optimized garbage collection (GC). It also gives better debugging support. This is pictured in figure 2.5 below. Figure 2.5 Android runtime 2.4.3 Java API Framework APIs form a bridge between the Android OS and the developer. The APIs act as the building blocks to create an Android app by simplifying the reuse of core, modular system components and services. The APIs include a View system to build an app’s UI, a resource manager to allow access to non-code resources, a notification manager to display custom alerts, an activity manager to manage the lifecycle of apps and content providers to enable apps to access data from other apps. This is pictured in figure 2.6 below. Figure 2.6 Java API Framework 2.5 Android Services
  • 23. 13 A Service is an application component that can perform long-running operations in the background, and it doesn't provide a user interface. Another application component can start a service, and it continues to run in the background even if the user switches to another application. Additionally, a component can bind to a service to interact with it and even perform interprocess communication (IPC). For example, a service can handle network transactions, play music, perform file I/O, or interact with a content provider, all from the background. These are the three different types of services: • Foreground: A foreground service performs some operation that is noticeable to the user. • Background: A background service performs an operation that isn't directly noticed by the user. • Bound: A service is bound when an application component binds to it by calling bindService(). A bound service offers a client-server interface that allows components to interact with the service, send requests, receive results, and even do so across processes with interprocess communication (IPC). A bound service runs only as long as another application component is bound to it. Multiple components can bind to the service at once, but when all of them unbind, the service is destroyed. 2.6 Broadcast Receivers Android apps can send or receive broadcast messages from the Android system and other Android apps, similar to the publish-subscribe design pattern. These broadcasts are sent when an event of interest occurs. For example, the Android system sends broadcasts when various system events occur, such as when the system boots up or the device starts charging. Apps can also send custom broadcasts, for example, to notify other apps of something that they might be interested in (for example, some new data has been downloaded). Apps can register to receive specific broadcasts.
When a broadcast is sent, the system automatically routes broadcasts to apps that have subscribed to receive that particular
  • 24. 14 type of broadcast. Generally speaking, broadcasts can be used as a messaging system across apps and outside of the normal user flow. However, you must be careful not to abuse the opportunity to respond to broadcasts and run jobs in the background that can contribute to slow system performance. There are two major classes of broadcasts that can be received: • Normal broadcasts (sent with Context.sendBroadcast) are completely asynchronous. All receivers of the broadcast are run in an undefined order, often at the same time. This is more efficient, but means that receivers cannot use the result or abort APIs. • Ordered broadcasts (sent with Context.sendOrderedBroadcast) are delivered to one receiver at a time. As each receiver executes in turn, it can propagate a result to the next receiver or it can completely abort the broadcast so that it won’t be passed to other receivers. There are two important steps to make a BroadcastReceiver work for the system broadcast intents: • Creating the broadcast receiver • Registering the broadcast receiver 2.7 Media Player The Android multimedia framework includes support for playing a variety of common media types, so that you can easily integrate audio, video and images into your applications. You can play audio or video from media files stored in your application's resources (raw resources), from standalone files in the filesystem, or from a data stream arriving over a network connection, all using the MediaPlayer APIs. The following classes are used to play sound and video in the Android framework: MediaPlayer This class is the primary API for playing sound and video. AudioManager
  • 25. 15 This class manages audio sources and audio output on a device. Here is an example of how to play audio that’s available as a local raw resource:

MediaPlayer mediaPlayer = MediaPlayer.create(context, R.raw.video_file_1);
mediaPlayer.start(); // no need to call prepare(); create() does that for you

2.8 Adobe Photoshop Adobe Photoshop is a raster graphics editor developed and published by Adobe Inc. for Windows and macOS. It was originally created in 1988 by Thomas and John Knoll. Since then, this software has become the industry standard not only in raster graphics editing, but in digital art as a whole. Photoshop can edit and compose raster images in multiple layers and supports masks, alpha compositing, and several color models including RGB, CMYK, CIELAB, spot color and duotone. Photoshop uses its own .psd and .psb file formats to support these features. In addition to raster graphics, this software has limited abilities to edit or render text and vector graphics as well as 3D graphics and video. Its feature set can be expanded by plug-ins. Photoshop was developed in 1987 by two brothers, Thomas and John Knoll, who sold the distribution license to Adobe Systems Incorporated in 1988. Upon loading Photoshop, a sidebar with a variety of tools with multiple image-editing functions appears to the left of the screen. These tools typically fall under the categories of drawing; painting; measuring and navigation; selection; typing; and retouching. • Moving: The move tool can be used to drag the entirety of a single layer, or more if they are selected. Alternatively, once an area of an image is highlighted, the move tool can be used to manually relocate the selected piece to anywhere on the canvas. • Pen tool: Photoshop includes a few versions of the pen tool. The pen tool creates precise paths that can be manipulated using anchor points. • Magic wand: The magic wand tool selects areas based on pixels of similar values. One click will select all neighboring pixels of similar value within a tolerance level set by the user. • Eraser: The Eraser tool erases content based on the active layer. If the user is on the text layer, then any text across which the tool is dragged will be
  • 26. 16 erased. The eraser will convert the pixels to transparent, unless the background layer is selected. The size and style of the eraser can be selected in the options bar.  Camera raw: With the Camera Raw plug-in, raw images can be processed without the use of Adobe Photoshop Lightroom, along with other image file formats such as JPEG, TIFF, or PNG.  Shape tools: Photoshop provides an array of shape tools including rectangles, rounded rectangles, ellipses, polygons and lines. These shapes can be manipulated by the pen tool, direct selection tool etc. to make vector graphics.  Selection tools: Selection tools are used to select all or any part of a picture to perform cut, copy, edit, or retouching operations.  Lasso: The user can make a custom selection by drawing it freehand. There are three options for the "lasso" tool – regular, polygonal, and magnetic. The regular "lasso" tool allows the user to have drawing capabilities. Photoshop will complete the selection once the mouse button is released. The "polygonal lasso" tool will draw only straight lines, which makes it an ideal choice for images with many straight lines. "Magnetic lasso" tool is considered the smart tool. It can do the same as the other two, but it can also detect the edges of an image once the user selects a starting point. Figure 2.7 Adobe Photoshop Figure 2.8 Adobe Character Animator 2.9 Adobe Character Animator
  • 27. 17 Adobe Character Animator is a desktop application software product that combines live motion-capture with a multi-track recording system to control layered 2D puppets drawn in Photoshop or Illustrator. It is automatically installed with Adobe After Effects CC 2015 to 2017 and is also available as a standalone application which one can download separately. It is used to produce both live and non-live animation. Character Animator imports layered Adobe Photoshop and Adobe Illustrator documents into puppets which have behaviors applied to them. The puppets are then placed into a scene, which can be viewed in the Scene panel and Timeline panel. Rigging is set up in the Puppet panel, though basic rigging is fully automatic based on specific layer names like Right Eyebrow and Smile. Properties of selected elements can be examined and changed in the Properties panel, including behavior parameters. Live inputs include a webcam (for face-tracking), microphone (for live lip sync), keyboard (for triggering layers to hide/show), and mouse (for warping specific handles). Final output of a scene can be exported to a sequence of PNG files and a WAV file, or any video format supported by Adobe Media Encoder. Live output can be sent to other applications running on the same machine via the Syphon protocol (Mac only) or Adobe Mercury Transmit on both Mac and Windows. Scenes can also be dropped directly into After Effects and Premiere Pro, using Dynamic Link to avoid rendering. 2.10 Adobe Media Encoder Adobe Media Encoder is used to compress audio and/or video files. Typically, when a project is rendered (Rendering (computer graphics), it is rather large in file size. In order to make it play back smoothly on devices without fast processors, tons of RAM, and/or to play across cellular and/or WIFI networks, they must be compressed. 
Compression comprises many different types of approaches/algorithms based upon the content involved, how it will be delivered, and also, what level of compression is acceptable to the creator and/or audience. Once all those variables are considered, it
  • 28. 18 can be processed through software like Adobe Media Encoder. There are many other programs like it that do the same thing, with varying levels of speed and quality. The output of this process is a file that looks and sounds very much like the original, but is typically many orders of magnitude smaller in file size. People cannot perceive the full range of detail in color and sound the way computers can, which is why this process works so well. Figure 2.9 Adobe Media Encoder 2.11 Android Studio Android Studio is the official Integrated Development Environment (IDE) for Android app development. It supports Java and Kotlin as development languages. It has built-in features useful for Android development. Figure 2.10 Android Studio
  • 29. 19 Chapter 3 Methodology of the Proposed System In this work, we focused on designing a smartphone-based communication system between normal and deaf people and implementing it by developing a smartphone application. This chapter mainly focuses on the overall system architecture of the proposed system and the procedure to achieve it in detail. 3.1 Overview In this work, our main concern is to set up a communication medium between deaf-mute and general people. The Android app provides users with a choice of options. People who can speak are provided with a simple list of Bangla words. In accordance with the list, users have a speech recognition system built into the application. If the speech matches a word in the list, the corresponding sign is played through the media player. For deaf people, there is another option to select a keyboard, whose keys they can understand by previewing the signs of the available Bangla characters. The typed text is displayed in the output box so that general people can understand it easily. Each keystroke is combined to form readable Bangla text. The animated characters are made with the help of Adobe software. Whenever normal people give their speech to the input service, the speech is sent to the Google cloud repository using sendOrderedBroadcast to fetch the generated text for the specified language from the Google server. The speech recognition feature comes built into the Android operating system. The overall process is shown via a block diagram presenting the features provided for the people of the two communities. If the general people speak the provided words with suffixes, those are automatically discarded to find the root word. According to the root word, the actual signs are played sequentially, with no visible delay for the deaf/mute community.
  • 30. 20 Figure 3.1 System overview Upon starting the application, control is passed either to speech recognition or to the keyboard based on the decision of the user. After the flow, the launcher comes to an end. 3.2 Drawing the Puppet in Adobe Photoshop In order to design animation for sign language, a puppet needs to be designed. For this, Photoshop offers a great deal of useful features. The steps of the flow are described below: • First, a new Photoshop file is opened and saved as .psd, selecting a suitable resolution. • A black background is selected. • Using the ellipse tool, the head, eyes and pupils are drawn.
  • 31. 21 • Using the rounded rectangle tool, ellipse tool, polygon tool, line tool and custom shape tool, the remaining shapes of the body parts are drawn. • Optimal strokes are selected for these tools. • Using the brush tool, particular body parts are colored accordingly. • The drawn parts are arranged in the proper layer order (layers on top of other layers are visible). • The movable parts are placed under the same group so they can move together. • The movable layers are given a ‘+’ sign in front of their names to mark them as independent entities. • The Photoshop file is then further analyzed in Adobe Character Animator. The procedure is shown in figures 3.2 and 3.3: Figure 3.2 Adobe Photoshop puppet drawing procedure
  • 32. 22 Figure 3.3 Photoshop preview Figure 3.4 Puppet in rig mode 3.3 Generate Animated Files in Adobe Character Animator In order to generate the animated video files, the designed Photoshop puppet needs to be imported into Adobe Character Animator. The layers marked with a crown on the left side of the panel are the independent layers imported from Photoshop. The puppet is available in two modes: Rig mode and Scene mode. In Rig mode, the puppet can be edited for movement of particular parts. In Scene mode, the corresponding video can be taken frame by frame. The video taken can be further analyzed and exported for rendering. The following is maintained for my puppet: • The Photoshop puppet is imported into Character Animator. • The layers I want to make independent are marked with a crown, and the layers that should move with them are organized under them. • A handle tool is added for the parts to be kept fixed during the take of the animation. • A stick tool is added for the arms, which I do not want to dangle. • A dragger tool is added for the palm of the hand group, which is movable with the movement of the mouse. • Triggers are added for multiple parts to switch between by triggering keyboard input. • The frame duration is set to 3 to 4 seconds. • A frame rate of 3 fps does just fine. • Pressing the record button records the scene. • During recording, the movements are either triggered or dragged via the mouse. • For some animations, behaviors need to be set. • Multiple frames can be taken by pressing the record button again and again. • The recording can be previewed through the play button.
  • 33. 23 • If necessary, eye gaze can be captured using the laptop webcam by tracking the expression of the operator’s face. • Finally, the recorded animation can be exported for rendering to particular software via the render option. The mentioned procedure is shown in figures 3.4 and 3.5. Figure 3.5 Animation procedure 3.4 Render using Adobe Media Encoder The recorded animated files are opened in Adobe Media Encoder as the source. The following procedure is followed: the Character Animator file is opened, the system preset is set to mp4, and the start button is pressed. Figure 3.6 Render process
  • 34. 24 3.5 Convert Speech to Text Android is supplied with a built-in speech recognition engine from Google. The following method implements this procedure. The speech recognizer is initiated by passing language data to a special RecognizerIntent, and sendOrderedBroadcast() is used to query the supported language details. Upon RESULT_OK, the intent passes the voice input to the Google server. The result is obtained in the superclass method override onActivityResult() and stored in an ArrayList.

Intent ob = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
ob.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
ob.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "bn");
ob.putExtra(RecognizerIntent.EXTRA_PROMPT, "নির্ধানিত শব্দগুচ্ছ থেকে েো বলুি");
try {
    startActivityForResult(ob, 100);
} catch (ActivityNotFoundException e) {
    Toast.makeText(getApplicationContext(), "sorry your device doesn't support speech language", Toast.LENGTH_LONG).show();
}

Intent detailsIntent = new Intent(RecognizerIntent.ACTION_GET_LANGUAGE_DETAILS);
sendOrderedBroadcast(detailsIntent, null, new LanguageDetailsChecker(), null, Activity.RESULT_OK, null, null);

public void onActivityResult(int request_code, int result_code, Intent ob) {
    super.onActivityResult(request_code, result_code, ob);
    if (request_code == 100) {
        if (result_code == RESULT_OK && ob != null) {
            ArrayList<String> result = ob.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            resultText.setText(result.get(0));
            str = result.get(0);
        }
    }
}
  • 35. 25 There are certain permissions required, such as internet and audio recording permissions, declared in the manifest file:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />

3.6 Show Signs Animated sign videos generated from Media Encoder are stored in the Android Studio raw folder. The Bangla text is checked against the root word. If a match is found, MediaPlayer plays the corresponding video file. Figure 3.7 Showing signs Bangla word signs available in the raw folder are: উপি, উচ্চ, উপনিভাগ, পকি, নবোল, আকেনিো, এবং, থ াষণা, এনিল, এলাো, আগস্ট, অগাস্ট, শিৎোল, শিৎ, বাংলাকেশ, আকগ, োল, োকলা, িাজর্ািী, চট্টগ্রাে, শহি, েকলজ, েহাকেশ, গণিা, থেশ, গরু, থিাজ, নেি, নিকেম্বি, ঢাো, থিাগ, থজলা, নবভাগ, পৃনেবী, পূবধ, নিে, আট, আঠাকিা, আনশ, এগাকিা, ইউকিাপ, চানিনেে, দ্রুত, থেব্রুয়ানি, নেছু, পকিি, পঞ্চাশ, আগুি, পাাঁচ, পতাো, খােয, খাওয়া, খাও, খাব, খাকব, থখকয়ছ, জিয, পক্ষোল, চনিশ, চাি, থচৌদ্দ, শুক্রবাি, োও, থেওয়া, থেেি, নেভাকব, তাড়াতানড়, জািুয়ানি, আিন্দ, জুলাই, জুি, থহেন্তোল, থহেন্ত, অকিেনেি,
  • 36. 26 োচধ , থে, নেনিট, থোেবাি, োে, েোল, জানত, েখকিািা, িাত, িয়, উনিশ, িব্বই, দুপুি, উত্তি, িকভম্বি, এখি, এখাকি, অকটাবি, এে, হাজাি, এে হাজাি, োত্র, অেবা, েেতল, বষধাোল, বষধা, চাউল, ভাত, োলাে, শনিবাি, থেকেম্বি, োত, েকতি, েত্তি, অেুস্থ, ছয়, থষাল, ষাট, আোশ, েনক্ষণ, বেন্তোল, বেন্ত, গ্রীষ্ম, গ্রীষ্মোল, িনববাি, েশ, যখি, তখি, থতি, নতনিশ, নতি, বৃহস্পনতবাি, আজ, আগােীোল, োলকে, েঙ্গলবাি, বাকিা, নবশ, দুই, নবশ্বনবেযালয়, পানি, িাস্তা, বুর্বাি, েপ্তাহ, পনিে, নে, থোোয়, েখি, থেি, শীতোল, শীত, বছি , গতোল, শূিয, লাল, আোশী, িং, হইকত, হকত, থেকে, স্বাগত, স্বাগতে, েহজ, থোজা, আনে, আোি, িিোল, োর্ািাণ, োেুনল, স্থায়ী, স্বয়ং, নিকজ, নিজ, আেিা, আোকেি, তু নে, থতাোকেি, ঠিে, েঠিে, নবপেজিে, নবোয়, যনে, বহু, অকিে, েেেযা, নিজাভধ , লজ্জা, োকপাটধ , েেেধি, লম্বা, শব্দ, অক্ষি, ড্রাইভাি, চালে, স্টু কিন্ট, ছাত্র, েকর্য, থবোি, েোি, শনি, শনিশালী, আকো, আেকব, একেকছা, একেনছকল, আিম্ভ, শুরু, োিনেউ, েল, আলাো, আলাপ, েো, জয়, থখলা, থখলকছ, থখকলায়াড়, বল, গাছ, গাকছি োাঁটা, িাব, তিেুজ, োাঁঠাল, থপাঁকপ, শাপলা, েু ল, চািাচু ি, নজলানপ, িশ্ন, েবেেয় , েেয় Words are split simply by using a for loop: whenever a space is detected, the word so far is stored in an array. Suffixes or inflections found after the root word are discarded using a recursive procedure. The output is displayed via MediaPlayer after fetching the sign from the raw folder. 3.7 Keyboard The Android keyboard service is implemented by inheriting the InputMethodService class and overriding its methods. The layout is designed in the xml folder. The keyboard service works in the following way: whenever a key is pressed, it is checked against a key value in a Java class overriding the onKey() method of KeyboardView.java. The corresponding Bangla character is then committed into the text view of the output screen. For certain keys like spacebar or enter, their keycodes are checked and their ASCII values committed into the text view. The xml folder contains keyboard layout files for each layout change visible after pressing certain keys of the keyboard.
Each row is designed separately. The keyboard background is designed in a separate xml file, while the keyPreviewLayout is designed in another. First of all, the keyboard is launched by pressing the launch keyboard button on the home screen. Upon launching the keyboard, two buttons pop up: enable and
  • 37. 27 select keyboard. As Android requires permission for selecting a keyboard, the enable keyboard button takes the user to the Android system screen where the installed keyboards are displayed, and a dialog pops up asking whether the user wants to enable the keyboard. The selection among installed keyboards pops up after pressing the select keyboard button. When the user presses a keyboard button, a preview pops up showing the sign for the value of that button. When the button is pressed, the corresponding value is committed into the output field. Hence the keyboard works in a predefined way. Figure 3.8 Sign language keyboard
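The word-splitting and suffix-stripping steps described in section 3.6 can be sketched in plain Java. The root-word set and suffix list below are illustrative English placeholders, not the app's actual Bangla data:

```java
import java.util.*;

public class RootWordFinder {
    // Illustrative stand-ins for the app's Bangla root words and suffixes/inflections.
    private static final Set<String> ROOTS = new HashSet<>(Arrays.asList("walk", "talk", "play"));
    private static final String[] SUFFIXES = {"ing", "ed", "s"};

    // Split a sentence into words: whenever a space is detected,
    // the word accumulated so far is stored.
    static List<String> splitWords(String sentence) {
        List<String> words = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (char c : sentence.toCharArray()) {
            if (c == ' ') {
                if (current.length() > 0) { words.add(current.toString()); current.setLength(0); }
            } else {
                current.append(c);
            }
        }
        if (current.length() > 0) words.add(current.toString());
        return words;
    }

    // Recursively discard suffixes until a known root word is found.
    static String stripToRoot(String word) {
        if (ROOTS.contains(word)) return word;          // already a root
        for (String suffix : SUFFIXES) {
            if (word.endsWith(suffix)) {
                String stripped = word.substring(0, word.length() - suffix.length());
                String root = stripToRoot(stripped);    // recursion step
                if (root != null) return root;
            }
        }
        return null;  // no root found; fall back to letter-by-letter signs
    }
}
```

With these placeholders, "walking" reduces to the root "walk" and "played" to "play", after which the corresponding sign videos would be fetched from the raw folder.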
  • 38. 28 Chapter 4 Implementation The implementation of this system requires the design and development of smartphone-based software that has a Graphical User Interface (GUI) and provides a communication medium between people. In this chapter, the background tasks are described with the necessary diagrams, which will clearly describe the outcome of this project. 4.1 Software Development The proposed system is implemented as a mobile application. The application is developed for the Android platform. 4.1.1 Development Tools The tools that have been used to implement the system are given below: • Software Development Kit o Android • Integrated Development Environment (IDE) o Android Studio 4.2 Home Screen First, the application has to be launched by clicking its launcher icon. When the application is launched, the home screen appears. The home screen contains all the options for operating the application. Its user interface and activity layers are kept simple and light for the ease of users of all ages. The home screen contains two buttons, one to convert speech to sign language and the other to launch the keyboard. The two types of users select the buttons according to their need. Deaf/mute people use the launch keyboard button, while normal people use the convert speech to sign language button. 4.3 Speech to Sign Conversion After pressing the first button on the home screen (figure 4.1), the conversion screen will pop up. This screen comes with a list of words shown on the left side as a ListView, as in figure 4.2. The list can be scrolled down to see the available words. There are more than 200
  • 39. 29 Bangla words available in the list. These words can be selected to view their corresponding animated signs. If inflections or suffixes are added while speaking, they are handled automatically. Figure 4.1 Home Screen Figure 4.2 Speech to sign conversion Figure 4.3 Speech recognition Figure 4.4 Recognized speech Figure 4.5 Converted text Figure 4.6 Showing signs
A microphone button titled "tap on mic to speak" is provided (Figure 4.3). Pressing it initiates Google speech recognition (Figure 4.4): the user speaks as soon as the microphone button is hit, the voice is sent to Google for Bangla audio recognition, and the recognized text is displayed (Figure 4.5). Another button shows the sign output, played using Android's MediaPlayer (Figure 4.6).

4.4 Keyboard
People with hearing impairment choose the Launch Keyboard option, which opens the window shown in Figure 4.7(a).

Figure 4.7 (a) Keyboard setup screen (b) Typing on keyboard (c) Typed text

From this screen the keyboard must be enabled and selected. The keyboard itself looks like Figure 4.7(b), and the typed text is shown in the output section (Figure 4.7(c)). In this way the text becomes understandable to hearing people.
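The speech-to-sign flow of Section 4.3 — recognized text split into words, each mapped to its sign animation, which MediaPlayer then plays in sequence — can be sketched in plain Java. The mapping and file names below are hypothetical illustrations, not the app's actual resources.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class SignPlayerSketch {
    // Hypothetical mapping from recognized Bangla words to sign-animation clips;
    // in the real app these clips are queued and played with MediaPlayer.
    private static final Map<String, String> SIGN_VIDEOS = Map.of(
            "সকাল", "sokal.mp4",
            "আমি", "ami.mp4");

    // Build the playback queue for a recognized sentence.
    // Words without a known sign are skipped.
    public static List<String> playlistFor(String recognizedText) {
        List<String> queue = new ArrayList<>();
        for (String word : recognizedText.trim().split("\\s+")) {
            String clip = SIGN_VIDEOS.get(word);
            if (clip != null) queue.add(clip);
        }
        return queue;
    }

    public static void main(String[] args) {
        System.out.println(playlistFor("আমি সকাল"));
    }
}
```

Playing the queued clips one after another (e.g. via MediaPlayer's completion callback) gives the synchronous sign playback the requirements call for.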
Chapter 5
Experimental Results

We tested the system with extensive experiments. This chapter first introduces how the data were collected, then presents the performance of the system and compares it with existing systems.

5.1 Collecting Requirements for the System
We interviewed various people whose ages ranged from 20 to 60 years, including:
• the principal of the deaf and mute school,
• guardians of the students of the school, and
• CUET students.
We asked them about their problems in communication between deaf/mute and hearing people, and they asked for an application capable of swift communication. For this purpose, before building the app, we visited the "Mute and Deaf School, Muradpur, Chittagong" to observe real-life scenarios and communicated with hearing-impaired and mute children there. We asked people what they expected from a tool that could help remove the communication gap. Based on these conversations, we summarized the basic requirements:
• Voice-to-text conversion.
• Animated characters showing sign language.
• Signs should be played synchronously.
• Capable of handling Bangla words.
• Capable of handling Bangla suffixes, inflections, etc.
• Deaf/mute people should have some way of typing or displaying signs.
• The keyboard should show signs instead of Bangla letters.
• A share option would be preferable.
• For the animation, a puppet is comfortable on the eyes while the signs are displayed in the application.
• An attractive theme would be appreciated.

5.2 Experimental Design and Procedure
We visited the "Mute and Deaf School, Muradpur, Chittagong" so that the application could be tried by the deaf/mute students of the school as well as by teachers and some hearing people. The honorable principal, Md. Habibur Rahman, helped us gather the necessary data. Six deaf and mute students (aged 8 to 14 years) participated, and we selected two teachers, two guardians, and one other person to evaluate the system.

Figure 5.1 Teacher explaining a sign to a student
Figure 5.2 Student explaining a sign after using the application

5.3 Experimental Results
The experiment was carried out in different phases during the school visit. First, the teachers spoke into the smartphone. The number of times words were converted to the corresponding signs is given in Table 5.1. The five Bangla words checked sequentially in all cases in Tables 5.1 and 5.2 are বিকাল, পৃথিবী, আবি, পক্ষকাল, and সকাল. Table 5.1 shows the success rate of the voice-to-animation module: the accuracy of Google speech-to-text is around 95%, while our system achieved 90.67%. We believe this accuracy could be higher with a better internet connection. In addition, pronunciation varies from person to person, and
some people have regional accents. For example, the word College (কলেজ) was pronounced as Kholej by some users, and this was not detected correctly by the Google speech recognizer.

Table 5.1 Number of times the application converted voice to signs

Deaf/mute student | Teachers and general people (5 trials per student)
                  | 1        | 2        | 3        | 4        | 5
1                 | 5        | 5        | 2        | 4        | 5
2                 | 5        | 3        | 4        | 5        | 5
3                 | 4        | 3        | 5        | 5        | 4
4                 | 5        | 4        | 5        | 4        | 4
5                 | 4        | 5        | 4        | 4        | 5
6                 | 5        | 4        | 5        | 5        | 5
Mean              | 4.666667 | 4.166667 | 4.166667 | 4.5      | 4.666667
SD                | 0.516398 | 0.894427 | 1.169045 | 0.547723 | 0.516398

Success rate: 136/150 = 90.67%

Again, not all of the signs were understood by the students of the school. The converted signs that the students actually understood are given in Table 5.2.

Table 5.2 Number of times the students actually recognized the signs

Deaf/mute student | Teachers and general people (5 trials per student) | Mean | SD
                  | 1 | 2 | 3 | 4 | 5 |     |
1                 | 5 | 4 | 3 | 4 | 5 | 4.2 | 0.748331
2                 | 5 | 4 | 3 | 5 | 4 | 4.2 | 0.748331
3                 | 4 | 5 | 4 | 3 | 4 | 4   | 0.632456
4                 | 5 | 4 | 4 | 4 | 4 | 4.2 | 0.4
5                 | 4 | 5 | 3 | 4 | 4 | 4   | 0.632456
6                 | 4 | 4 | 4 | 4 | 4 | 4   | 0.516398

Success rate: 102/136 = 75%
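The summary rows of Tables 5.1 and 5.2 can be reproduced with a short computation: the mean and sample standard deviation are taken per column, and the success rate is the total number of successful conversions divided by the total number of trials (136/150 = 90.67%). A sketch in plain Java, using the fourth evaluator column of Table 5.1:

```java
public class Table51Stats {
    static double mean(int[] xs) {
        double sum = 0;
        for (int x : xs) sum += x;
        return sum / xs.length;
    }

    // Sample standard deviation (n - 1 denominator), matching the SD row of Table 5.1.
    static double sampleSd(int[] xs) {
        double m = mean(xs), ss = 0;
        for (int x : xs) ss += (x - m) * (x - m);
        return Math.sqrt(ss / (xs.length - 1));
    }

    public static void main(String[] args) {
        int[] col4 = {4, 5, 5, 4, 4, 5}; // evaluator 4's scores for the six students
        System.out.printf("mean=%.1f sd=%.6f%n", mean(col4), sampleSd(col4));
        System.out.printf("success rate=%.2f%%%n", 100.0 * 136 / 150);
    }
}
```

For this column the computation gives a mean of 4.5 and a sample SD of 0.547723, agreeing with the corresponding entries of Table 5.1.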
From Table 5.2 we can say that the accuracy is 75%; we observed that the deaf students found it difficult to recognize some words whose signs involve 3D motion, for example আনশ. So the overall accuracy is 102/150 = 68%.

The keyboard had to be tested in a different way, and the honorable principal of the school helped us here as well. During the test he showed specific signs to the chosen students, and the students typed what they understood. The results are given in Table 5.3. The words checked in this procedure were বর্ষাকাল, অসুস্থ, গ্রীষ্মকাল, আগামীকাল, and পাবি.

Table 5.3 Number of times the students typed the words correctly using the keyboard

Participant | Perfectly typed words (out of 5)
1           | 5
2           | 3
3           | 5
4           | 4
5           | 4
6           | 3

5.4 Subjective Evaluation
After finishing this session, we requested the principal to ask the deaf/mute students to score the following questions:
1. Are the animated signs understandable?
2. Is the application boring?
3. Is the keyboard flexible to type on?
We also asked the general people and teachers, along with the principal, to score the following questions:
4. Are you facing any problems while using the application?
5. Do you find this application easy to use?
6. How interactive is the application?
Each question was answered on a 1-to-5 Likert scale, where 1 is the lowest evaluation and 5 is the highest.
Table 5.4 Q1, Q2, Q3 answered by deaf/mute people

Question | Participants (deaf and mute users)
         | 1 | 2 | 3 | 4 | 5 | 6
Q1       | 4 | 5 | 5 | 4 | 3 | 5
Q2       | 2 | 1 | 2 | 2 | 1 | 2
Q3       | 5 | 4 | 4 | 4 | 5 | 4

Table 5.5 Q4, Q5, Q6 answered by teachers and general people

Question | Participants (general users and teachers)
         | 1 | 2 | 3 | 4 | 5
Q4       | 3 | 4 | 2 | 2 | 3
Q5       | 4 | 4 | 3 | 5 | 5
Q6       | 4 | 5 | 1 | 4 | 3

Figure 5.3 Column chart of the mean scores for Q1 (4.33), Q2 (1.67), and Q3 (4.33)
Figure 5.4 Column chart of the mean scores for Q4 (2.8), Q5 (4.2), and Q6 (3.4)

The evaluation indicates that the results are quite satisfactory for real-life use.
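The bar heights in Figures 5.3 and 5.4 are simply the per-question means of the scores in Tables 5.4 and 5.5, which can be checked directly:

```java
import java.util.Map;

public class LikertMeans {
    static double mean(int[] scores) {
        double sum = 0;
        for (int x : scores) sum += x;
        return sum / scores.length;
    }

    public static void main(String[] args) {
        // Likert scores copied from Tables 5.4 and 5.5.
        Map<String, int[]> answers = Map.of(
                "Q1", new int[]{4, 5, 5, 4, 3, 5},  // deaf/mute students
                "Q2", new int[]{2, 1, 2, 2, 1, 2},
                "Q3", new int[]{5, 4, 4, 4, 5, 4},
                "Q4", new int[]{3, 4, 2, 2, 3},      // teachers and general people
                "Q5", new int[]{4, 4, 3, 5, 5},
                "Q6", new int[]{4, 5, 1, 4, 3});
        for (int i = 1; i <= 6; i++) {
            String q = "Q" + i;
            System.out.printf("%s mean = %.2f%n", q, mean(answers.get(q)));
        }
    }
}
```

The computed means (Q1 4.33, Q2 1.67, Q3 4.33, Q4 2.80, Q5 4.20, Q6 3.40) match the values plotted in the two charts.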
5.5 Testing
Testing is the process of evaluating a system or its components to determine whether it satisfies the specified requirements. It yields the actual results, the expected results, and the difference between them. In simple terms, testing means executing a system in order to identify any gaps, errors, or missing requirements relative to the actual requirements. Testing is the practice of making objective judgements about the extent to which the system meets, exceeds, or fails to meet its stated objectives. A good testing program is a tool for both the agency and the integrator/supplier: it typically marks the end of the development phase of the project, establishes the criteria for project acceptance, and establishes the start of the warranty period.

5.5.1 Purpose of Testing
Testing has two fundamental purposes:
• verifying the procurement specification, and
• managing risk.
First, testing verifies that what was specified is what was delivered: the product (system) meets the functional, performance, design, and implementation requirements identified in the procurement specification. Second, testing manages risk for both the acquiring agency and the system's vendor/developer/integrator. The testing program identifies when the work has been completed, so that the contracts can be closed, the vendor paid, and the system shifted by the agency into the warranty and maintenance phase of the project. Some of the important reasons for testing an application are:
• To reduce the number of bugs in the code.
• To provide a quality product.
• To verify that all requirements are met.
• To satisfy the customer's needs.
• To provide bug-free software.
• To earn trust in the reliability of the software.
• To keep users from encountering problems.
• To verify that the system behaves as specified.
• To validate that what has been specified is what the user actually wanted.

5.5.2 Black Box Testing
Different methods can be used for software testing. Black box testing is the technique of testing without any knowledge of the interior workings of the application: the tester is oblivious to the system architecture and does not have access to the source code. Typically, when performing a black box test, the tester interacts with the system's user interface, providing inputs and examining outputs without knowing how and where the inputs are processed.

5.5.3 Black Box Testing of the Project

Table 5.6 Test cases of black box testing

No | Description      | Expected result                          | Status
1  | Launch application | Application runs on the system         | Yes
2  | Layouts visible    | All layouts are visible                | Yes
3  | Button press       | All buttons are working                | Yes
4  | Speech to text     | Bangla speech converted to text        | Yes
5  | Signs preview      | Signs shown for available Bangla words | Yes
6  | Keyboard working   | Each keystroke is detected             | Yes
5.6 Conclusion
The experiments, and the experience gained in designing and analyzing this application, led us to some interesting findings. First of all, the application can positively impact the current mechanisms used for communicating in exceptional circumstances; it can clearly become a breakthrough in communication between deaf/mute and hearing people. Secondly, the application has been shown to be accepted by potential users: the large majority of the users interviewed after the experiments declared that they would use the application when needed, and the deaf/mute participants were greatly surprised by its implementation, as it would allow them to communicate with the rest of society. This is due to the good experience they had during the trials. The final findings relate to the application interface. Users are generally annoyed by long tasks, so designers should provide direct, short navigation through the application; otherwise users will prefer traditional calls. Finally, feedback is a critical factor for this type of application, and our application succeeded in gaining satisfactory reviews from people of all backgrounds.
Chapter 6
Conclusion and Future Recommendations

Section 6.1 concludes the developed system, and Section 6.2 describes recommendations for its further improvement.

6.1 Conclusion
This work is an Android-based communication platform between hearing and deaf/mute people. It is a useful application for people of all ages, mainly for deaf or mute people who do not otherwise have a medium of communication. The application can be used to educate deaf or mute people and allow them to participate in society so that they can become self-dependent. Although there is plenty of room for further improvement, it can serve as a framework for future development. Our application can remove the hidden barrier between hearing-impaired and hearing people, providing a communication medium at both ends. The most important points pursued during this work are:
• A special smartphone application was designed that enables communication between two people when an internet connection is available.
• Google voice recognition is very accurate.
• The keyboard is genuinely helpful because it shows the sign for each Bangla letter.
• Communication can be established and carried out very quickly, which makes users confident about participating in every aspect of society.
• The user interface is very user-friendly, so people of all ages can use the application without any hassle.
• Although the animations are not 100% accurate, users understood and genuinely liked them.
6.2 Future Recommendations
Many recommendations for future work on this project emerged from our survey; people from all walks of life gave suggestions for making it a complete package. As fulfilling users' needs is our utmost motive and goal, the following areas can be considered for future improvement:
• A log-in system can be introduced for users.
• The word list can be extended to support a broader range of Bangla words.
• A messaging option can be introduced.
• A Facebook auto-share option can be introduced.
• The keyboard layer might be modified.