R V COLLEGE OF ENGINEERING,
BANGALORE
A report on
Real-Time Signal Processing:
Implementation & Application
Submitted by,
Laxman Jaygonde (1RV11EE029)
Mahantesh Padashetty (1RV11EE030)
Rajesh Kumar Rajpurohit (1RV11EE044)
Vijeth V S (1RV11EE061)
Introduction
There are two types of DSP applications: non-real-time and real-time. Non-real-time
signal processing involves manipulating signals that have already been collected and
digitized. This may or may not represent a current action, and the need for the result
is not a function of real time. Real-time signal processing places stringent demands
on DSP hardware and software design to complete predefined tasks within a certain
time frame. This section reviews the fundamental functional blocks of real-time DSP
systems.
The basic functional blocks of DSP systems are illustrated in Figure 1.1, where a
real-world analog signal is converted to a digital signal, processed by DSP hardware
in digital form, and converted back into an analog signal. Each of the functional
blocks in Figure 1.1 will be introduced in the subsequent sections. For some
real-time applications, the input data may already be in digital form and/or the
output data may not need to be converted to an analog signal. For example, the
processed digital information may be stored in computer memory for later use, or it
may be displayed graphically. In other applications, the DSP system may be required
to generate signals digitally, such as speech synthesis used for cellular phones or
pseudo-random number generators for CDMA (code division multiple access) systems.
An ideal sampler can be considered as a switch that is periodically opened and closed
every T seconds, where

fs = 1/T

is the sampling frequency (or sampling rate) in hertz (Hz, or cycles per second). The
intermediate signal, x(nT), is a discrete-time signal with a continuous value (a
number has infinite precision) at discrete time nT, n = 0, 1, ..., as illustrated in
Figure 1.3. The signal x(nT) is an impulse train with values equal to the amplitude
of x(t) at time nT. The analog input signal x(t) is continuous in both time and
amplitude. The sampled signal x(nT) is continuous in amplitude, but it is defined
only at discrete points in time. Thus the signal is zero except at the sampling
instants t = nT.
Quantizing and Encoding
An obvious constraint of physically realizable digital systems is that sample values
can only be represented by a finite number of bits. The fundamental distinction
between discrete-time signal processing and DSP is the wordlength. The former
assumes that discrete-time signal values x(nT) have infinite wordlength, while the
latter assumes that digital signal values x(n) have only a limited wordlength of B
bits.
We now discuss a method of representing the sampled discrete-time signal x(nT) as a
binary number that can be processed with DSP hardware. This is the quantizing and
encoding process. As shown in Figure 1.3, the discrete-time signal x(nT) has an
analog amplitude (infinite precision) at time t = nT. To process or store this
signal with DSP hardware, the discrete-time signal must be quantized to a digital
signal x(n) with a finite number of bits. If the wordlength of an ADC is B bits,
there are 2^B different values (levels) that can be used to represent a sample. The
entire continuous amplitude range is divided into 2^B subranges. Amplitudes of the
waveform that fall in the same subrange are assigned the same amplitude value.
Therefore quantization is a process that represents an analog-valued sample x(nT)
with the nearest level that corresponds to the digital signal x(n). The
discrete-time signal x(nT) is a sequence of real numbers using infinite bits, while
the digital signal x(n) represents each sample value by a finite number of bits
which can be stored and processed using DSP hardware.
The quantization process introduces errors that cannot be removed. For example, we
can use two bits to define four equally spaced levels (00, 01, 10, and 11) to
classify the signal into four subranges, as illustrated in Figure 1.4. In this
figure, the open symbol 'o' represents the discrete-time signal x(nT), and the other
symbol represents the digital signal x(n).
This is a theoretical maximum. When real input signals and converters are used, the
achievable SNR will be less than this value due to imperfections in the fabrication
of A/D converters. As a result, the effective number of bits may be less than the
number of bits in the ADC. However, Equation (1.2.5) provides a simple guideline for
determining the required number of bits for a given application. For each additional
bit, a digital signal gains about 6 dB in SNR. For example, a 16-bit ADC provides
about 96 dB of SNR. The more bits used to represent a waveform sample, the smaller
the quantization noise will be. If we had an input signal that varied between 0 and
5 V, a 12-bit ADC, which has 4096 (2^12) levels, would give a least significant bit
(LSB) resolution of 1.22 mV. An 8-bit ADC with 256 levels can only provide up to
19.5 mV resolution.
Obviously, with more quantization levels, one can represent the analog signal more
accurately.
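These resolution figures can be reproduced with a small C sketch. The helper names are illustrative, and the SNR function uses the commonly quoted 6.02B + 1.76 dB form of the guideline (the rougher 6 dB-per-bit rule gives the 96 dB value quoted above for 16 bits):

```c
/* LSB resolution in volts of a B-bit converter spanning `range` volts. */
double lsb_volts(double range, int bits)
{
    return range / (double)(1L << bits);
}

/* Quantization-SNR guideline: about 6 dB per additional bit. */
double snr_db(int bits)
{
    return 6.02 * bits + 1.76;
}
```

For a 5 V span, lsb_volts(5.0, 12) is about 0.00122 V (1.22 mV) and lsb_volts(5.0, 8) is about 0.0195 V (19.5 mV), matching the values above.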
If the uniform quantization scheme shown in Figure 1.4 can adequately represent loud
sounds, most of the softer sounds may be pushed into the same small value. This
means soft sounds may not be distinguishable. To solve this problem, a quantizer
whose quantization step size varies according to the signal amplitude can be used.
In practice, the non-uniform quantizer uses a uniform step size, but the input
signal is compressed first. The overall effect is identical to non-uniform
quantization. For example, the logarithm-scaled input signal, rather than the input
signal itself, is quantized. After processing, the signal is reconstructed at the
output by expanding it. The process of compression and expansion is called
companding (compressing and expanding). For example, the μ-law (used in North
America and parts of Northeast Asia) and A-law (used in Europe and most of the
rest of the world) companding schemes are used in most digital communication systems.
As shown in Figure 1.1, the input signal to DSP hardware may be a digital signal
from other DSP systems. In this case, the sampling rate of digital signals from other
digital systems must be known. The signal processing techniques called
interpolation or decimation can be used to increase or decrease the existing digital
signals' sampling rates. Sampling rate changes are useful in many applications such
as interconnecting DSP systems operating at different rates. A multirate DSP system
uses more than one sampling frequency to perform its tasks.
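As a minimal illustration of decimation (interpolation is the dual operation of inserting samples), the following C sketch keeps every M-th sample; a practical decimator would lowpass-filter first to prevent aliasing:

```c
/* Decimation by an integer factor M: keep every M-th sample.
 * (A real decimator lowpass-filters the input before discarding
 * samples, so that aliasing is avoided.) */
int decimate(const short *in, int n_in, short *out, int M)
{
    int n_out = 0;
    for (int i = 0; i < n_in; i += M)
        out[n_out++] = in[i];
    return n_out;            /* number of output samples */
}
```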
Implementation Procedure for Real-Time Applications
Digital filters and algorithms can be implemented on a DSP chip such as the
TMS320C55x following a four-stage procedure to minimize the amount of time spent on
finite-wordlength analysis and real-time debugging. Figure 3.17 shows a flowchart of
this procedure.
In the first stage, algorithm design and study is performed on a general-purpose
computer in a non-real-time environment using a high-level MATLAB or C program with
floating-point coefficients and arithmetic. This stage produces an `ideal' system.
In the second stage, we develop the C (or MATLAB) program in a way that emulates the
same sequence of operations that will be implemented on the DSP chip, using the same
parameters and state variables. For example, we can define the data samples and
filter coefficients as 16-bit integers to mimic the wordlength of 16-bit DSP chips.
The program is carefully redesigned and restructured, tailoring it to the
architecture, the I/O timing structure, and the memory constraints of the DSP device.
The quantization errors due to fixed-point representation and arithmetic can be
evaluated using the simulation technique illustrated in Figure 3.18. The testing data
x(n) is applied to both the ideal system designed in stage 1 and the practical system
developed in stage 2. The output difference, e(n), between these two systems is due
to finite-precision effects. We can re-optimize the structure and algorithm of the
practical system in order to minimize finite-precision errors.
The third stage develops the DSP assembly programs (or mixes C programs with
assembly routines) and tests the programs on a general-purpose computer using a DSP
software simulator (CCS with simulator or EVM) with test data from a disk file. This
test data is either a shortened version of the data used in stage 2, generated
internally by the program, or digitized data read in to emulate a real application.
Output from the simulator is saved as another disk file and is compared to the
corresponding output of the C program in the second stage. Once a one-to-one
agreement is obtained between these two outputs, we are assured that the DSP
assembly program is essentially correct.
The final stage downloads the compiled (or assembled) and linked program into the
target hardware (such as the EVM) and brings it to real-time operation. Thus the
real-time debugging process is primarily constrained to debugging the I/O timing
structure and testing the long-term stability of the algorithm. Once the algorithm
is running, we can again `tune' the parameters of the system in a real-time
environment.
Experiments of Fixed-Point Implementations
The purposes of experiments in this section are to learn input quantization effects
and to determine the proper fixed-point representation for a DSP system.
To experiment with input quantization effects, we shift off (right-shift) bits of
the input signal and then evaluate the shifted samples. By altering the number of
bits shifted right, we can obtain an output stream that corresponds to a wordlength
of 14 bits, 12 bits, and so on. The example given in Table 3.5 simulates A/D
converters of different wordlengths. Instead of shifting the samples, we mask out
the least significant 4 (or 8, or 10) bits of each sample, resulting in 12-bit (8-
or 6-bit) data having amplitude comparable to the 16-bit data.
1. Copy the C function exp3a.c and the linker command file exp3.cmd from the
software package to the A:\Experiment3 directory, and create the project exp3a to
simulate 16-, 12-, 8-, and 6-bit A/D converters. Use the run-time support library
rts55.lib and build the project.
2. Use the CCS graphic display function to plot all four output buffers: out16,
out12, out8, and out6. Examples of the plots and graphic settings are shown in
Figures 3.19 and 3.20, respectively.
3. Compare the graphic results of each output stream, and describe the differences
between waveforms represented by the different wordlengths.
Program listing of quantizing a sinusoid, exp3a.c
#define BUF_SIZE 40

const int sineTable[BUF_SIZE] =
{0x0000, 0x01E0, 0x03C0, 0x05A0, 0x0740, 0x08C0, 0x0A00, 0x0B20,
 0x0BE0, 0x0C40, 0x0C60, 0x0C40, 0x0BE0, 0x0B20, 0x0A00, 0x08C0,
 0x0740, 0x05A0, 0x03C0, 0x01E0, 0x0000, 0xFE20, 0xFC40, 0xFA60,
 0xF8C0, 0xF740, 0xF600, 0xF4E0, 0xF420, 0xF3C0, 0xF3A0, 0xF3C0,
 0xF420, 0xF4E0, 0xF600, 0xF740, 0xF8C0, 0xFA60, 0xFC40, 0x0000};

int out16[BUF_SIZE];   /* 16-bit output sample buffer */
int out12[BUF_SIZE];   /* 12-bit output sample buffer */
int out8[BUF_SIZE];    /* 8-bit output sample buffer  */
int out6[BUF_SIZE];    /* 6-bit output sample buffer  */

void main()
{
    int i;

    for (i = 0; i < BUF_SIZE - 1; i++)
    {
        out16[i] = sineTable[i];           /* 16-bit data       */
        out12[i] = sineTable[i] & 0xfff0;  /* mask off 4 bits   */
        out8[i]  = sineTable[i] & 0xff00;  /* mask off 8 bits   */
        out6[i]  = sineTable[i] & 0xfc00;  /* mask off 10 bits  */
    }
}
APPLICATIONS
Overlap-Save Algorithm
First, you will construct a block diagram for an Overlap-Save algorithm using
elementary Simulink blocks. For this model, we will walk you through it step by
step. The model will implement an FIR filter kernel of length M = 113. The
algorithm will use an FFT and inverse FFT of length N = L + M - 1 = 512. Thus, the
input blocks will be of length N = 512 and the throughput will be L = 400 output
samples per processed block.
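The Overlap-Save bookkeeping (save the last M-1 input samples of each block, prepend them to the next block, filter, and discard the first M-1 outputs) can be sketched in C. This is an illustrative sketch only, with small assumed sizes (M = 3, L = 4) and ordinary time-domain convolution standing in for the FFT/product/IFFT path that the Simulink model uses:

```c
#include <string.h>

#define M 3                  /* filter kernel length              */
#define L 4                  /* new input samples per block       */
#define N (L + M - 1)        /* working block length              */

/* Filter one N-sample working block and keep the L alias-free
 * outputs, i.e. output positions M-1 .. N-1 of the convolution
 * (the same positions the circular FFT convolution gets right). */
static void block_filter(const double *blk, const double *h, double *y)
{
    for (int n = M - 1; n < N; n++) {
        double acc = 0.0;
        for (int k = 0; k < M; k++)
            acc += h[k] * blk[n - k];
        y[n - (M - 1)] = acc;
    }
}

/* Overlap-save over n_blocks blocks of L input samples each:
 * the saved tail of the previous block overlaps the next one. */
void overlap_save(const double *x, int n_blocks, const double *h, double *y)
{
    double blk[N];
    memset(blk, 0, sizeof blk);                 /* zero initial history  */
    for (int b = 0; b < n_blocks; b++) {
        memmove(blk, blk + L, (M - 1) * sizeof(double));    /* saved tail */
        memcpy(blk + M - 1, x + b * L, L * sizeof(double)); /* new block  */
        block_filter(blk, h, y + b * L);        /* L valid outputs/block */
    }
}
```

Because the aliased leading outputs are discarded each time, the concatenated block outputs equal ordinary linear convolution of the whole input with the kernel.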
Designing the Filter Kernel
1. Before the Simulink model can be built, the FIR filter needs to be designed. Go
to the command window and type fdatool to bring up MATLAB’s Filter Design &
Analysis Tool.
2. Select Lowpass under Response Type.
3. Choose an Equiripple FIR under Design Method.
4. Specify the filter order as 112 (this will result in a kernel of length M = 113).
5. Under Frequency Specifications, set Units to Hz, Fs to 8000, Fpass to 400, and
Fstop to 800.
6. Click Design Filter. The Magnitude response of the filter is displayed.
7. Go to File → Export. Choose Export To Workspace and Export As Coefficients.
Under Variable names, name the Numerator h. Press Export. This exports the filter
coefficients to the MATLAB Workspace as a 1×113 vector named h. You can verify the
presence of this variable by going to the command window and typing "whos".
Building the Simulink Model
1. In a new Simulink model, set the Amplitude of a Sine Wave Source to 1 and the
Frequency to 100 Hz. Also, set the Sample time to 1/8000 (this will imitate the
sampling rate used on the C6437 board).
2. Connect your input to a Buffer block that you can find in Signal Processing
Blockset → Signal Management → Buffers. Set the Output buffer size to 400. The
buffer divides the input signal into data block segments of length L. The output of
the buffer is a frame-based signal (as opposed to a sample-based signal) such that
each segment (or “frame” ) of 400 samples is processed as one chunk, as required
by the Overlap-Save process.
3. Add a Delay Line block to the diagram from Signal Processing Blockset → Signal
Management → Buffers. Set the Delay line size to 400 and connect the output of the
Buffer to the Delay Line input. Effectively, the Delay Line delays its input by one
data block ("frame") of length L.
4. The Overlap-Save algorithm calls for the last M-1 points from the previous data
block to be saved and appended to the beginning of the next data block. The Delay
Line inserted in step 3 above allows us to access the previous data block. In order
to extract the necessary M-1 points, insert a Submatrix block from Signal Processing
Blockset → Signal Management → Indexing and connect it to the output of the Delay
Line. Set the Row span to Range of rows, the Starting row to Index, the Starting row
index to 289, the Ending row to Last, and the Column span to All columns. Here's
what this block does: the data blocks output from the Delay Line are 400×1 column
vectors, and we want the last M-1 points. The Submatrix block selects elements 289
through 400 of these input vectors and outputs 112×1 column vectors.
5. The next step is to take the M-1 saved points from step 4 and append them to the
beginning of the current data block. To do this, we can use a Matrix Concatenate
block from Simulink → Math Operations. Insert this block into the model and set
Number of inputs to 2, Mode to Multidimensional Array, and Concatenate Dimension to
1. Connect the output of the Submatrix block from step 4 to the first (top) input of
the Matrix Concatenate block, and connect the output of the Buffer block from step 2
to the second (bottom) input of the Matrix Concatenate block. These connections
cause the 112×1 vectors from the Submatrix block (the M-1 saved data points) and the
400×1 vectors from the Buffer (the current data block) to be combined into 512×1
vectors that are suitable for FFT calculation.
6. Add an FFT block to the model. Connect the output of the Matrix Concatenate
block to the input of the FFT block. This will compute the 512-point FFT of the
overlapped data blocks. Notice that N = L+M-1 = 512 is chosen to be a power of two;
this is necessary because Simulink's FFT block uses a radix-2 FFT algorithm.
7. The next step in the Overlap-Save algorithm is to multiply the FFT computed in
step 6 by the FFT of the filter kernel. Before we can do this, we need to import the
filter coefficients into the Simulink model. Add a From Workspace block to the model
from Simulink → Sources. Set Data to the name of the filter kernel you exported to
the MATLAB Workspace and Sample Time to 400/8000. Note that this sample time causes
the filter coefficients to be read at the same rate that data blocks are output from
the Buffer block of step 2.
8. In order to compute the FFT of the filter kernel, we need to extend it so it has
a length of 512. To do this, add a Pad block from Signal Processing Blockset →
Signal Operations. Set Pad over to Columns, Pad value to 0, and Column size to 512.
Connect the output of the From Workspace block in step 7 to the input of the Pad
block. The Pad block simply appends enough zeros to the end of the filter kernel to
make it a 512×1 vector.
9. Add another FFT block to the diagram and connect the output of the Pad block
from the previous step to the input of this FFT block. Clearly, this just computes the
512-point FFT of the filter kernel.
10. Now we are ready to perform the frequency multiplication necessary for FFT
convolution.
Connect the Output of the two FFT blocks to the two inputs of a Product block.
11. Once frequency multiplication has occurred, the inverse FFT needs to be
computed. Insert an IFFT block from Signal Processing Blockset → Transforms.
Under the IFFT block parameters, select the check box labeled “Input is conjugate
symmetric.” This tells Simulink that the output should be real-valued; that is, any
small imaginary parts in the output due to rounding errors will be ignored. Connect
the output of the Product block to the input of the IFFT block.
12. The last major step in the Overlap-Save algorithm is to discard all of the
points that are affected by aliasing. Namely, the first M-1 points of the data
blocks resulting from the inverse FFT operation need to be thrown out. To do this,
insert another Submatrix block into the model. Set Row span to Range of rows,
Starting row to Index, Starting row index to 113, Ending row to Last, and Column
span to All columns. Connect the output of the IFFT block to the input of the
Submatrix block.
Note that by discarding M-1 points, the data blocks are reduced in size back to 400
points, the size of the original data blocks from the input signal.
13. Add an Unbuffer block from Signal Processing Blockset → Signal Management →
Buffers and connect its input to the output of the Submatrix block from step 12. As
its name suggests, the Unbuffer block takes the frame-based signal of 400×1 vectors
from the Submatrix block and converts it into a sample-based signal (the original
format of the input).
14. To view the filtered signal, add a Scope block to the model from Simulink →
Sinks and connect its input to the output of the Unbuffer block.
15. The Overlap-Save filter is now complete. Save your model.
C6437 Real-Time Simulation
1. For the real-time simulation on the C6437 board, you may want to re-save your
model under a different name.
2. Insert a DM6437 target block.
3. Replace the Sine Wave source block with the DM6437 ADC block. Set ADC
input source to line in, sampling rate to 8000 Hz, and set the samples per frame to
400.
4. Connect the output of the ADC to the input of a Data Type Conversion block. Set
the output data type to single.
5. Because the output of the ADC is an N×2 matrix (a stereo signal), an additional
identical filter kernel is needed. Copy the From Workspace, Pad, and FFT blocks of
the filter kernel. Take the outputs of the original and the copy of the filter
kernel and feed them into a Matrix Concatenate block. Connect the output of the
Matrix Concatenate block to the input of the Product block (the block right before
the IFFT).
6. For the simulation on the board, we're going to add high-frequency noise to the
input signal to see if the filter can attenuate it properly. To do this, add a noisy
input to the input created in step 3 by adding a Uniform Random Number block
followed by a Highpass Filter block. Set the Minimum and Maximum parameters of the
Uniform Random Number block to -5 and 5, respectively. Also, set the Sample time to
1/8000. Set the parameters of your highpass filter such that there will be
practically no noise below 2000 Hz.
7. Delete any scopes you may have in your model.
8. In order to view the original input, the input corrupted by noise, and the filter
output during simulation, we're going to use the board's manual switches to select
which signal goes to the board's output. Just as we did in Lab 2, use a Multiport
Switch and a DIP Switch block to configure your model to be able to output the
original input signal, the noisy input, and the filtered output.
9. Connect a Data Type Conversion block to the output of the Multiport Switch and
set the output data type to int16.
10. Connect the output of the conversion block to the input of the DM6437 DAC
block and set the sampling frequency to 8000 Hz.
11. Set the simulation to Normal mode and configure the simulation parameters as
in Lab 2.
12. Build your Simulink model by pressing CTRL-B. The project should load and
run automatically if you have selected the Build and Execute option in the Link for
CCS tab of the Configuration Parameters.
13. Connect the input of the board to the function generator and the output of the
board to the oscilloscope. Feed the board with a sine wave of 1 Vpp and 100 Hz. Try
different configurations of the board's switches, and identify and explain the
corresponding outputs obtained.
14. Disconnect the output of the board from the oscilloscope and connect it instead
to speakers. Use the switches on the board to listen to the 100 Hz input tone, the
noise-corrupted signal, and the filtered signal.
Overlap-Add Algorithm
Now that you have walked through the Overlap-Save algorithm step-by-step,
it is time to try designing an algorithm for yourself. Your task in this section is to
design from scratcha filter that uses the Overlap-Add algorithm. You should beable
to do this using the same type of blocks (Buffer, Delay Line, Submatrix, etc.) that
were used in the Overlap-Save filter. The guidelines are as follows:
1. Use the same lowpass filter as was used for the Overlap-Save algorithm. However,
change the order of the filter to 500 such that the filter kernel has a length of
M = 501. Also, change the Fstop parameter from 800 Hz to 500 Hz. Essentially, we are
using a longer, more expensive filter kernel to produce a lowpass frequency response
that drops off much more sharply than the previous filter. Even with a filter order
of 500, FFT convolution can filter the output in a reasonable amount of time.
2. Divide the input to the filter into data blocks of length L = 1548. Note that this
makes the size of your FFT and inverse FFT calculations N = L+M–1 = 2048.
Here are some hints on how to proceed with your design:
1. It is probably best to use Pad blocks from Signal Processing Blockset → Signal
Operations to append zeros to the end of data blocks.
2. When using the Delay Line block, make sure that the block parameter named Delay
line size is set to the same size as the input vector to the block. For example, if
you input 50×1 data blocks into the Delay Line and want a delay of one data block,
set the Delay line size to 50.
3. If you use a From Workspace block to import the filter coefficients to the
Simulink model in a similar way to the Overlap-Save algorithm, make sure to set the
sample time to 1548/8000. This will match the rate at which the length-L data blocks
are received by the filter at its input.
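For reference, the Overlap-Add bookkeeping (filter each zero-padded block independently, then add each block's M-1 sample output tail into the head of the next block's output) can be sketched in C. Small assumed sizes (M = 3, L = 4) and direct convolution stand in for the FFT path here, purely for illustration:

```c
#include <string.h>

#define M 3                  /* filter kernel length                 */
#define L 4                  /* input block length                   */
#define NFFT (L + M - 1)     /* length of each block's linear conv.  */

/* Overlap-add filtering of n_blocks blocks of L samples each. */
void overlap_add(const double *x, int n_blocks, const double *h, double *y)
{
    double tail[M - 1];
    memset(tail, 0, sizeof tail);              /* no tail before block 0 */
    for (int b = 0; b < n_blocks; b++) {
        double yb[NFFT];
        memset(yb, 0, sizeof yb);
        for (int n = 0; n < L; n++)            /* conv of one L-block    */
            for (int k = 0; k < M; k++)
                yb[n + k] += x[b * L + n] * h[k];
        for (int i = 0; i < M - 1; i++)        /* add previous tail      */
            yb[i] += tail[i];
        memcpy(y + b * L, yb, L * sizeof(double));       /* L outputs    */
        memcpy(tail, yb + L, (M - 1) * sizeof(double));  /* save new tail */
    }
}
```

The overlapping output tails sum to the same result as the Overlap-Save scheme; only the bookkeeping differs.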
The DCT Algorithm
Although there are several transformations utilized in digital video processing, the
DCT continues to be one of the most common linear transformations within digital
signal processing [3]. By performing lossy compression, the DCT separates the image
into parts of differing importance. The 2D-DCT and 2D-IDCT, displayed below in
Figures 1 and 2 respectively, concentrate the main information of the original image
into a small set of low-frequency coefficients, which enables a reduction in overall
computational complexity. That is, by concentrating the majority of the signal
energy in the initial coefficients, we can reduce computation by performing
processing solely on those elements [2]. Moreover, this process performs robustly
when applied to compression and decompression storage and retrieval techniques.
Digital Video Hardware Implementation
1. Go to Simulation → Configuration Parameters, change the solver options type back
to "Fixed-step", and the solver to "discrete (no continuous states)."
2. Start with your Simulink Model for digital video edge detection and delete all the
input and output blocks but leave the “Image from Workspace” block.
3. Under Target Support Package → Supported Processors → TI C6000 → Board Support →
DM6437 EVM, find the Video Capture block. Set Sample Time to -1.
4. Add two Video Display blocks. For the first one, set Video Window to "Video 0",
Video Window Position to [180, 0, 360, 480], Horizontal Zoom to 2x, and Vertical
Zoom to 2x; for the second one, set Video Window to "Video 1", Video Window Position
to [0, 240, 360, 240], Horizontal Zoom to 2x, and Vertical Zoom to 2x.
5. Add a Deinterleave block and two Interleave blocks from DM6437 EVM Board
Support. Link the Video Capture block to the Deinterleave block, and link each of
the two Interleave blocks to a Video Display.
6. For computational efficiency, the output video stream components Y, Cb, and Cr
can be resized to a lower sample size for processing. This can be accomplished by
inserting a Resize block for each video component after the initial Deinterleave
block. The lower resize value should always be a multiple of 32 if the default
Interleave mask value of 32 is utilized. This can be calculated by first dividing
the initial video dimension by two, rounding to the nearest multiple of 32, and then
dividing by two once more. For example, our Y video component for the grayscale
image was originally 720×480. Halving gives 360×240, and the nearest multiples of 32
are 352×256. Then, by dividing by 2, we obtain the entered value of 176×128. This
process should be repeated for Cb and Cr to get 88×128.
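The resize rule in step 6 can be sketched as a pair of small helpers (illustrative names; ties such as 240 round upward here, matching the 240 → 256 choice above):

```c
/* Snap a dimension to the nearest multiple of 32 (ties round up). */
int nearest_mult32(int v)
{
    return ((v + 16) / 32) * 32;
}

/* Step-6 resize rule: halve, snap to a multiple of 32, halve again. */
int resize_dim(int v)
{
    return nearest_mult32(v / 2) / 2;
}
```

For the Y component, resize_dim(720) gives 176 and resize_dim(480) gives 128, reproducing the 176×128 value entered above.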
7. From the Simulink library, add a DIP block for the DM6437 EVM and connect it to
a Multiport Switch. The DIP switch setting should be set to SW4(0) with a sample
time of 1. Connect the output pin of the DIP block to the top pin of the Multiport
Switch. Double-click on the Multiport Switch and set the number of input ports to 2.
Connect one of the inputs to the output of the Image from Workspace block. This will
allow us to simulate the insertion of the watermark image when unauthorized access
is detected.
8. Press Ctrl + B to begin upload and execute the program.
Barcode Recognition
The demonstration is a prototype of 1D barcode scanning using the DSP DM6437 EVM
board. A barcode encodes data on parallel lines of different widths. The most
universally used barcode is the UPC, or Universal Product Code. The most common form
of the UPC is the UPC-A, which has 12 numerical digits encoded through varying
widths of black and white parallel lines. The UPC-A barcode is an optical pattern of
bars and spaces that formats and encodes the UPC digit string. Each digit is
represented by a unique pattern of two bars and two spaces. The bars and spaces are
of variable width; they may be 1, 2, 3, or 4 units wide. The total width for a digit
is always 7 units. In addition to the 12 digits, the barcode has starting lines, a
middle separator, and ending lines. A complete UPC-A includes 95 units: 84 for the
left and right digits combined and 11 for the start, middle, and end patterns. The
start and end patterns are 3 units wide and use the pattern bar-space-bar, where
each bar and space is one unit wide. The middle pattern is 5 units wide and uses the
pattern space-bar-space-bar-space, also with each element one unit wide. In
addition, a UPC symbol requires a quiet zone (additional space) before the start and
after the end. The second set of 6 numbers after the middle separator uses the same
encoding format as the first 6, except the black and white widths are reversed.
The algorithm implemented in this prototype reads the UPC barcode through modules
of video input, color conversion, feature calculations, barcode recognition,
barcode validation, and output video display.
Color Conversion
Using video capture from the board, the image is taken from the camera into Simulink
and is converted from YCrCb to RGB for better processing in Simulink. The conversion
requires taking the YCrCb signal and splitting it into the three color signals Y,
Cr, and Cb. After the split, since Cr and Cb are smaller in dimension than Y, they
are upsampled using chroma resampling and transposed to match the dimensions of RGB,
going from 4:2:2 to 4:4:4. The three color signals are transposed again before being
sent to the color space conversion from YCrCb to RGB, still as three separate
signals. The separate RGB signals are concatenated with a Matrix Concatenate block;
one copy is used for display, and another line is sent to be converted from RGB to
intensity. The grayscale version of the image is then passed to the feature
calculations. This process of color conversion is also reversed before sending to
the output of the board, except in this case it is from RGB to YCrCb.
Feature Calculations
The feature calculations module of the algorithm creates 3 scanlines for scanning
barcodes and converts the pixel values of the barcode intensity image in a given row
into a vector. First, a Gaussian filter is applied to smooth the image gradient
identified as the barcode region. The gradient of the scanlines is set and validated
so that the scanlines lie inside the appropriate range. Then, the mean and standard
deviation of the pixel intensities are calculated for the barcode area. From these,
the range of pixel parameters, f_low and f_high, for setting the color is
determined. Pixels on the scanlines are compared to the f_low and f_high intensity
values. A pixel is considered black if its value is less than f_low, and it is
considered white if its value is f_high or larger. The remaining pixels are
proportionally set between white and black. Black pixels are set to 1 and white
pixels are set to -1. From these calculations, the vector of pixels from the
scanlines is input to the barcode recognition. The scanlines are also sent to the
display to be added to the real-time video.
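The f_low / f_high thresholding described above can be sketched as a small C helper. The function name and the linear interpolation between the two thresholds are illustrative assumptions, not taken from the prototype's source:

```c
/* Classify a scanline pixel against the f_low / f_high thresholds:
 * below f_low -> black (+1), at or above f_high -> white (-1),
 * in between -> proportional value in (-1, +1). */
double classify_pixel(double p, double f_low, double f_high)
{
    if (p < f_low)   return  1.0;            /* black bar   */
    if (p >= f_high) return -1.0;            /* white space */
    return 1.0 - 2.0 * (p - f_low) / (f_high - f_low);
}
```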
The barcode recognition module consists of three parts: bar detection, barcode
detection, and barcode comparison. The bar detection block detects bars from
the barcode feature signal. It first tries to identify a black bar; if none is
present, the first bar has zero width. If a black bar is present, the block
counts the pixels of the black bar, and it does the same for the white bars.
After bar detection, barcode detection begins with the starting bars and
calculates all the possible barcode values that may form a valid string with
all the possible separators. This function returns a sequence of indices to the
barcode guard bars. The barcode comparison block takes in the codebook for all
the encoded GTIN-13 barcode values; it also reverses the codebook for
determining the last six digits of the GTIN-13 barcode. The barcode recognition
block then tries to match each candidate barcode against the pixel counts
generated by bar detection. To improve accuracy, the values are calculated both
from left to right and from right to left, and a normalized confidence is
computed. The barcode recognition block returns the barcode together with its
normalized confidence.
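The bar detection step amounts to run-length encoding the thresholded ±1 feature signal. A minimal sketch, assuming the scanline has already been hard-thresholded to ±1 (the function name is ours):

```python
def detect_bars(features):
    """Run-length encode a thresholded scanline into bar widths.

    `features` holds +1 for black pixels and -1 for white pixels. Returns a
    list of bar widths in pixels, starting with the width of the first black
    bar (zero if the scanline begins on white, as described in the report).
    """
    widths = []
    expected = 1          # a barcode is expected to start with a black bar
    run = 0
    for value in features:
        if value == expected:
            run += 1
        else:
            widths.append(run)   # close the current bar (may be zero-width)
            expected = -expected
            run = 1
    widths.append(run)
    return widths
```

These widths are what the detection and comparison blocks match against the codebook.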
Barcode Validation
In the barcode validation stage of the algorithm, a simple checksum calculation
determines whether the barcode is valid. The digits in even positions are
multiplied by three, and their sum is added to the sum of the digits in odd
positions. The total is taken modulo 10 and the result is subtracted from 10
(a result of 10 is treated as 0). If the answer matches the check digit, which
is the last digit, the barcode is valid. A valid checksum together with a
confidence level above the threshold allows the barcode to be displayed on the
screen.
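The check-digit rule above is the standard GS1 weighting for GTIN-13; a minimal sketch (the function name is ours):

```python
def gtin13_is_valid(code):
    """Validate a 13-digit GTIN-13 string via its check digit.

    The digits in even positions (2nd, 4th, ... of the first 12, 1-indexed)
    are weighted by 3; the check digit is 10 minus the weighted sum modulo 10,
    itself taken modulo 10 so that a remainder of 0 maps back to 0.
    """
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # enumerate is 0-indexed, so odd indices are the even 1-indexed positions
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - total % 10) % 10 == digits[12]
```

Changing any single digit of a valid code makes the check fail, which is what lets the algorithm reject misreads.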
Display
The display module adds the scanlines to the real-time video and shows the
barcode only if it has been validated and has a high enough confidence level to
enable the display switch. All the information is sent to the module that
converts the 3-dimensional matrices back to 2-D matrices. Then RGB is converted
to YCrCb format for display through the board.
Edge Detection in Image and Video
Advances in digital image processing and digital video processing
have opened incredible possibilities for computer vision applications. Research and
development into common image and signal processing algorithms often employ a
combination of multidimensional signal processing, discrete mathematics, topology,
color theory, human perception and application-based physics modeling, among
others. Algorithms range from “low-level” techniques such as image and video
restoration and enhancement to abstract methodologies based on artificial
intelligence for pattern recognition. Common challenges in designing image and
video processing applications include managing computational complexity and
scalability, achieving real-time performance given the volume of data that must
be processed, mathematically modeling the broad class of image and video
signals that can vary greatly between applications, and ensuring reliability
and robustness across a broad range of applications.
Edge Detection: 1-D Example
Edge detection algorithms can be grouped into one of two categories:
gradient-based edge detection and zero-crossing-based edge detection. To
elucidate the concept of edge detection, we present the steps for a
one-dimensional example that uses a gradient-based approach. Figure 1(a)
presents a one-dimensional signal f(x) for which we would like to identify
"edges" (i.e., sharp changes in amplitude). Taking the gradient (in this case a
one-dimensional continuous-time derivative with respect to the independent
variable x) gives f'(x), shown in Figure 1(b). Taking the absolute value of
f'(x) and then thresholding detects the edges, as seen in Figures 1(c) and (d).
If the threshold is high, fewer edges are identified; if the threshold is too
low, spurious edges may be detected.
Figure 1: (a) One-dimensional signal f(x); (b) gradient signal f'(x);
(c) absolute value of the gradient, |f'(x)|, and edge detection using
Threshold 1; (d) absolute value of the gradient, |f'(x)|, and edge detection
using Threshold 2.
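The 1-D procedure above (differentiate, rectify, threshold) can be sketched as follows; the step signal and threshold are illustrative, not those of Figure 1:

```python
import numpy as np

def detect_edges_1d(f, threshold):
    """Gradient-based 1-D edge detection: differentiate, rectify, threshold.

    Approximates the derivative f'(x) with a first difference and returns the
    indices where |f'(x)| exceeds the threshold.
    """
    gradient = np.diff(f)    # discrete approximation of f'(x)
    return np.flatnonzero(np.abs(gradient) > threshold)

# A step signal has one sharp change in amplitude:
signal = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
edges = detect_edges_1d(signal, threshold=0.5)   # edge between samples 2 and 3
```

As the text notes, raising the threshold suppresses weak edges, while lowering it too far admits spurious ones from noise in the gradient.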
• 3. An ideal sampler can be considered as a switch that opens and closes periodically every T seconds, where fs = 1/T is the sampling frequency (or sampling rate) in hertz (Hz, or cycles per second). The intermediate signal, x(nT), is a discrete-time signal with a continuous value (a number of infinite precision) at the discrete times nT, n = 0, 1, ..., ∞, as illustrated in Figure 1.3. The signal x(nT) is an impulse train with values equal to the amplitude of x(t) at times nT. The analog input signal x(t) is continuous in both time and amplitude. The sampled signal x(nT) is continuous in amplitude, but it is defined only at discrete points in time; thus the signal is zero except at the sampling instants t = nT.

Quantizing and Encoding

An obvious constraint of physically realizable digital systems is that sample values can only be represented by a finite number of bits. The fundamental distinction between discrete-time signal processing and DSP is the wordlength: the former assumes that discrete-time signal values x(nT) have infinite wordlength, while the latter assumes that digital signal values x(n) have only a limited wordlength of B bits. We now discuss a method of representing the sampled discrete-time signal x(nT) as a binary number that can be processed with DSP hardware: the quantizing and encoding process. As shown in Figure 1.3, the discrete-time signal x(nT) has an analog amplitude (infinite precision) at time t = nT. To process or store this signal with DSP hardware, the discrete-time signal must be quantized to a digital signal x(n) with a finite number of bits. If the wordlength of an ADC is B bits, there are 2^B different values (levels) that can be used to represent a sample. The entire continuous amplitude range is divided into 2^B subranges, and amplitudes that fall in the same subrange are assigned the same amplitude value.
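The effect of the wordlength B can be checked numerically. A minimal Python sketch (not the book's code) that quantizes a full-scale sine wave to 2^B uniform levels and measures the signal-to-quantization-noise ratio:

```python
import numpy as np

def quantization_snr_db(bits, n=100000):
    """Uniformly quantize a full-scale sine to `bits` bits; return SNR in dB.

    The step size divides the [-1, 1] range into 2**bits levels; the measured
    SNR grows by roughly 6 dB for every additional bit.
    """
    x = np.sin(2 * np.pi * 0.0123456 * np.arange(n))   # full-scale test tone
    step = 2.0 / 2**bits
    xq = np.round(x / step) * step                     # nearest level
    noise = x - xq                                     # quantization error
    return 10 * np.log10(np.mean(x**2) / np.mean(noise**2))
```

For a sine wave the measured value tracks the well-known 6.02B + 1.76 dB rule, e.g. about 74 dB at 12 bits.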
Quantization is thus a process that represents an analog-valued sample x(nT) with the nearest of these levels, yielding the digital signal x(n). The discrete-time signal x(nT) is a sequence of real numbers requiring infinitely many bits, while the digital signal x(n) represents each sample value by a finite number of bits that can be stored and processed using DSP hardware. The quantization process introduces errors that cannot be removed. For example,
• 4. we can use two bits to define four equally spaced levels (00, 01, 10, and 11) to classify the signal into the four subranges illustrated in Figure 1.4. In this figure, the symbol 'o' represents the discrete-time signal x(nT), while a second symbol represents the digital signal x(n). This is a theoretical maximum: when real input signals and converters are used, the achievable SNR will be less than this value due to imperfections in the fabrication of A/D converters. As a result, the effective number of bits may be less than the number of bits in the ADC. However, Equation (1.2.5) provides a simple guideline for determining the required bits for a given application: each additional bit gives a digital signal about a 6-dB gain in SNR. For example, a 16-bit ADC provides about 96 dB of SNR. The more bits used to represent a waveform sample, the smaller the quantization noise will be. For an input signal that varies between 0 and 5 V, a 12-bit ADC with 4096 (2^12) levels gives a least significant bit (LSB) resolution of 1.22 mV, while an 8-bit ADC with 256 levels can only provide 19.5 mV resolution. Obviously, with more quantization levels one can represent the analog signal more accurately. If the uniform quantization scheme shown in Figure 1.4 is scaled to represent loud sounds adequately, most of the softer sounds may be pushed into the same small values, so soft sounds may not be distinguishable. To solve this problem, a quantizer whose quantization step size varies according to the signal amplitude can be used. In practice, the non-uniform quantizer uses a uniform step size, but the input signal is compressed first; the overall effect is identical to non-uniform quantization. For example, the logarithm-scaled input signal, rather than the input
• 5. signal itself, will be quantized. After processing, the signal is reconstructed at the output by expanding it. The combined process of compression and expansion is called companding (compressing and expanding). For example, the μ-law (used in North America and parts of Northeast Asia) and A-law (used in Europe and most of the rest of the world) companding schemes are used in most digital communications.

As shown in Figure 1.1, the input signal to DSP hardware may be a digital signal from other DSP systems. In this case, the sampling rate of the digital signals from the other systems must be known. The signal processing techniques called interpolation and decimation can be used to increase or decrease the sampling rates of existing digital signals. Sampling rate changes are useful in many applications, such as interconnecting DSP systems operating at different rates. A multirate DSP system uses more than one sampling frequency to perform its tasks.

Implementation Procedure for Real-Time Applications

Digital filters and algorithms can be implemented on a DSP chip such as the TMS320C55x following a four-stage procedure that minimizes the amount of time spent on finite-wordlength analysis and real-time debugging. Figure 3.17 shows a flowchart of this procedure. In the first stage, algorithm design and study are performed on a general-purpose computer in a non-real-time environment using a high-level MATLAB or C program with floating-point coefficients and arithmetic. This stage produces an 'ideal' system. In the second stage, we develop the C (or MATLAB) program in a way that emulates the same sequence of operations that will be implemented on the DSP chip, using the same parameters and state variables. For example, we can define the data samples and filter coefficients as 16-bit integers to mimic the wordlength of 16-bit DSP chips.
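Returning to the companding mentioned above: the μ-law compress/expand round trip can be sketched directly from the standard characteristic (μ = 255 is the value used in North American telephony; the function names are ours):

```python
import numpy as np

MU = 255.0  # standard mu-law parameter

def mu_compress(x):
    """mu-law compression of a signal normalized to [-1, 1]."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_expand(y):
    """Inverse of mu_compress: expand back to the original amplitude."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

# Compressing before a uniform quantizer gives small amplitudes finer steps:
x = np.array([-0.5, -0.01, 0.0, 0.01, 0.5])
roundtrip = mu_expand(mu_compress(x))
```

Small amplitudes are boosted before quantization (0.01 maps to about 0.23), which is exactly why soft sounds survive the uniform quantizer that follows.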
The program is then carefully redesigned and restructured, tailoring it to the architecture, the I/O timing structure, and the memory constraints of the DSP device. The quantization errors due to fixed-point representation and arithmetic can be evaluated using the simulation technique illustrated in Figure 3.18. The test data x(n) is applied to both the ideal system designed in stage 1 and the practical system developed in stage 2. The output difference, e(n), between these two systems is due to finite-precision effects. We can re-optimize the structure and algorithm of the practical system in order to minimize finite-precision errors. The third stage develops the DSP assembly programs (or mixes C programs with assembly routines) and tests the programs on a general-purpose computer using a
  • 6. DSP software simulator (CCS with simulator or EVM) with test data from a disk file. This test data is either a shortened version of the data used in stage 2, generated internally by the program, or digitized data read in to emulate a real application. Output from the simulator is saved as another disk file and is compared to the corresponding output of the C program in the second stage. Once a one-to-one agreement is obtained between these two outputs, we are assured that the DSP assembly program is essentially correct.
  • 7. The final stage downloads the compiled (or assembled) and linked program into the target hardware (such as the EVM) and brings it to real-time operation. Thus the real-time debugging process is primarily constrained to debugging the I/O timing structure and testing the long-term stability of the algorithm. Once the algorithm is running, we can again `tune' the parameters of the system in a real-time environment.
Experiments of Fixed-Point Implementations
The purposes of the experiments in this section are to learn about input quantization effects and to determine the proper fixed-point representation for a DSP system. To experiment with input quantization effects, we shift off (to the right) bits of the input signal and then evaluate the shifted samples. By altering the number of bits shifted right, we can obtain output streams that correspond to wordlengths of 14 bits, 12 bits, and so on. The example given in Table 3.5 simulates A/D converters of different wordlengths. Instead of shifting the samples, we mask out the least significant 4 (or 8, or 10) bits of each sample, resulting in 12-bit (8-bit or 6-bit) data with amplitude comparable to the 16-bit data.
1. Copy the C function exp3a.c and the linker command file exp3.cmd from the software package to the A:Experiment3 directory, and create project exp3a to simulate 16-, 12-, 8-, and 6-bit A/D converters. Use the run-time support library rts55.lib and build the project.
2. Use the CCS graphic display function to plot all four output buffers: out16, out12, out8, and out6. Examples of the plots and graphic settings are shown in Figures 3.19 and 3.20, respectively.
3. Compare the graphic results of each output stream, and describe the differences between waveforms represented by different wordlengths.
Program listing of quantizing a sinusoid, exp3a.c
  • 8.
  • 9.
#define BUF_SIZE 40

const int sineTable[BUF_SIZE] = {
    0x0000, 0x01E0, 0x03C0, 0x05A0, 0x0740, 0x08C0, 0x0A00, 0x0B20,
    0x0BE0, 0x0C40, 0x0C60, 0x0C40, 0x0BE0, 0x0B20, 0x0A00, 0x08C0,
    0x0740, 0x05A0, 0x03C0, 0x01E0, 0x0000, 0xFE20, 0xFC40, 0xFA60,
    0xF8C0, 0xF740, 0xF600, 0xF4E0, 0xF420, 0xF3C0, 0xF3A0, 0xF3C0,
    0xF420, 0xF4E0, 0xF600, 0xF740, 0xF8C0, 0xFA60, 0xFC40, 0x0000};

int out16[BUF_SIZE];  /* 16-bit output sample buffer */
int out12[BUF_SIZE];  /* 12-bit output sample buffer */
int out8[BUF_SIZE];   /*  8-bit output sample buffer */
int out6[BUF_SIZE];   /*  6-bit output sample buffer */

void main()
{
    int i;
    for (i = 0; i < BUF_SIZE - 1; i++)
    {
        out16[i] = sineTable[i];           /* 16-bit data */
        out12[i] = sineTable[i] & 0xfff0;  /* mask off 4 bits */
        out8[i]  = sineTable[i] & 0xff00;  /* mask off 8 bits */
        out6[i]  = sineTable[i] & 0xfc00;  /* mask off 10 bits */
    }
}
  • 10.
  • 11.
APPLICATIONS
Overlap-Save Algorithm
First, you will construct a block diagram for an Overlap-Save algorithm using elementary Simulink blocks. For this model, we will walk you through it step by step. The model will implement an FIR filter kernel of length M = 113. The algorithm will use an FFT and inverse FFT of length N = L + M - 1 = 512. Thus, the input blocks will be of length N = 512 and the throughput will be L = 400 output samples per processed block.
Designing the Filter Kernel
1. Before the Simulink model can be built, the FIR filter needs to be designed. Go to the command window and type fdatool to bring up MATLAB's Filter Design & Analysis Tool.
2. Select Lowpass under Response Type.
3. Choose an Equiripple FIR under Design Method.
4. Specify the filter order as 112 (this will result in a kernel of length M = 113).
5. Under Frequency Specifications, set Units to Hz, Fs to 8000, Fpass to 400, and Fstop to 800.
  • 12.
6. Click Design Filter. The magnitude response of the filter is displayed.
7. Go to File → Export. Choose Export To Workspace and Export As Coefficients. Under Variable names, name the Numerator h. Press Export. This exports the filter coefficients to the MATLAB Workspace as a 1×113 vector named h. You can verify the presence of this variable by going to the command window and typing "whos".
Building the Simulink Model
1. In a new Simulink model, set the Amplitude of a Sine Wave Source to 1 and the Frequency to 100 Hz. Also, set the Sample time to 1/8000 (this will imitate the sampling rate on the C6437 board).
2. Connect your input to a Buffer block, which you can find in Signal Processing Blockset → Signal Management → Buffers. Set the Output buffer size to 400. The buffer divides the input signal into data block segments of length L. The output of the buffer is a frame-based signal (as opposed to a sample-based signal) such that each segment (or "frame") of 400 samples is processed as one chunk, as required by the Overlap-Save process.
3. Add a Delay Line block to the diagram from Signal Processing Blockset → Signal Management → Buffers. Set the Delay line size to 400 and connect the output of the Buffer to the Delay Line input. Effectively, the Delay Line delays its input by one data block ("frame") of length L.
4. The Overlap-Save algorithm calls for the last M-1 points from the previous data block to be saved and appended to the beginning of the next data block. The Delay Line inserted in step 3 above allows us to access the previous data block. In order to extract the necessary M-1 points, insert a Submatrix block from Signal Processing Blockset → Signal Management → Indexing and connect it to the output of the Delay Line. Set the Row span to Range of rows, the Starting row to Index, the Starting row index to 289, the Ending row to Last, and the Column span to All columns.
Here's what this block does: the data blocks output from the Delay Line are 400×1 column vectors, and we want the last M-1 points. The Submatrix block selects elements 289 through 400 of these input vectors and outputs 112×1 column vectors.
5. The next step is to take the M-1 saved points from step 4 and append them to the beginning of the current data block. To do this, we can use a Matrix Concatenate block from Simulink → Math Operations. Insert this block into the model and set Number of inputs to 2, Mode to Multidimensional Array, and Concatenate Dimension to 1. Connect the output of the Submatrix block from step 4 to the first (top) input of the Matrix Concatenate block, and connect the output of the Buffer block from step 2 to the second (bottom) input of the Matrix Concatenate block. These connections cause the 112×1 vectors from the Submatrix block (the M-1 saved data points) and the 400×1 vectors from the Buffer (the current data block) to be combined into 512×1 vectors that are suitable for FFT calculation.
  • 13.
6. Add an FFT block to the model. Connect the output of the Matrix Concatenate block to the input of the FFT block. This will compute the 512-point FFT of the overlapped data blocks. Notice that N = L + M - 1 = 512 is chosen to be a power of two; this is necessary because Simulink's FFT block uses a radix-2 FFT algorithm.
7. The next step in the Overlap-Save algorithm is to multiply the FFT computed in step 6 by the FFT of the filter kernel. Before we can do this, we need to import the filter coefficients into the Simulink model. Add a From Workspace block to the model from Simulink → Sources. Set Data to the name of the filter kernel you exported to the MATLAB Workspace and Sample Time to 400/8000. Note that this sample time causes the filter coefficients to be read at the same rate that data blocks are output from the Buffer block of step 2.
8. In order to compute the FFT of the filter kernel, we need to extend it so it has a length of 512. To do this, add a Pad block from Signal Processing Blockset → Signal Operations. Set Pad over to Columns, Pad value to 0, and Column size to 512. Connect the output of the From Workspace block in step 7 to the input of the Pad block. The Pad block simply appends enough zeros to the end of the filter kernel to make it a 512×1 vector.
9. Add another FFT block to the diagram and connect the output of the Pad block from the previous step to the input of this FFT block. Clearly, this just computes the 512-point FFT of the filter kernel.
10. Now we are ready to perform the frequency multiplication necessary for FFT convolution. Connect the outputs of the two FFT blocks to the two inputs of a Product block.
11. Once frequency multiplication has occurred, the inverse FFT needs to be computed. Insert an IFFT block from Signal Processing Blockset → Transforms.
Under the IFFT block parameters, select the check box labeled "Input is conjugate symmetric." This tells Simulink that the output should be real-valued; that is, any small imaginary parts in the output due to rounding errors will be ignored. Connect the output of the Product block to the input of the IFFT block.
12. The last major step in the Overlap-Save algorithm is to discard all of the points that are subject to aliasing; namely, the first M-1 points of the data blocks resulting from the inverse FFT operation need to be thrown out. To do this, insert another Submatrix block into the model. Set Row span to Range of rows, Starting row to Index, Starting row index to 113, Ending row to Last, and Column span to All columns. Connect the output of the IFFT block to the input of the Submatrix block. Note that by discarding M-1 points, the data blocks are reduced in size back to 400 points, the size of the original data blocks from the input signal.
13. Add an Unbuffer block from Signal Processing Blockset → Signal Management → Buffers and connect its input to the output of the Submatrix block from step 12. As its name suggests, the Unbuffer block takes the frame-based signal of 400×1
  • 14. vectors from the Submatrix block and converts it into a sample-based signal (the original format of the input).
14. To view the filtered signal, add a Scope block to the model from Simulink → Sinks and connect its input to the output of the Unbuffer block.
15. The Overlap-Save filter is now complete. Save your model.
C6437 Real-Time Simulation
1. For the real-time simulation on the C6437 board, you may want to re-save your model under a different name.
2. Insert a DM6437 target block.
3. Replace the Sine Wave source block with the DM6437 ADC block. Set the ADC input source to line in, the sampling rate to 8000 Hz, and the samples per frame to 400.
4. Connect the output of the ADC to the input of a Data Type Conversion block. Set the output data type to single.
5. Because the output of the ADC is an N×2 matrix (a stereo signal), an additional identical filter kernel is needed. Copy the From Workspace, Pad, and FFT blocks of the filter kernel. Take the outputs of the original and the copy of the filter kernel and feed them into a Matrix Concatenate block. Connect the output of the Matrix Concatenate block to the input of the Product block (the block right before the IFFT).
6. For the simulation on the board, we're going to add high-frequency noise to the input signal to see if the filter can attenuate it properly. To do this, add a noisy input to the input created in step 3 by adding a Uniform Random Number block followed by a Highpass Filter block. Set the Minimum and Maximum parameters of the Uniform Random Number block to -5 and 5, respectively. Also, set the Sample time to 1/8000. Set the parameters of your highpass filter such that there will be practically no noise below 2000 Hz.
7. Delete any scopes you may have in your model.
8. In order to view the original input, the input corrupted by noise, and the filter output during simulation, we're going to use the board's manual switches to select which signal goes to the board's output.
Just like we did in Lab 2, use a Multiport Switch and a DIP Switch block to configure your model to be able to output the original input signal, the noisy input, and the filtered output.
9. Connect a Data Type Conversion block to the output of the Multiport Switch and set the output data type to int16.
  • 15.
10. Connect the output of the conversion block to the input of the DM6437 DAC block and set the sampling frequency to 8000 Hz.
11. Set the simulation to Normal mode and configure the simulation parameters as in Lab 2.
12. Build your Simulink model by pressing CTRL-B. The project should load and run automatically if you have selected the Build and Execute option in the Link for CCS tab of the Configuration Parameters.
13. Connect the input of the board to the function generator and the output of the board to the oscilloscope. Feed the board with a sine wave of 1 Vpp and 100 Hz. Try different configurations of the board's switch, and identify and explain the corresponding outputs obtained.
14. Disconnect the output of the board from the oscilloscope and connect it instead to speakers. Use the switches on the board to listen to the 100 Hz input tone, the noise-corrupted signal, and the filtered signal.
Overlap-Add Algorithm
Now that you have walked through the Overlap-Save algorithm step by step, it is time to try designing an algorithm for yourself. Your task in this section is to design from scratch a filter that uses the Overlap-Add algorithm. You should be able to do this using the same type of blocks (Buffer, Delay Line, Submatrix, etc.) that were used in the Overlap-Save filter. The guidelines are as follows:
1. Use the same lowpass filter as was used for the Overlap-Save algorithm. However, change the order of the filter to 500 such that the filter kernel has a length of M = 501. Also, change the Fstop parameter from 800 Hz to 500 Hz. Essentially, we are using a longer, more expensive filter kernel to produce a lowpass frequency response that drops off much more sharply than that of the previous filter. Even with a filter order of 500, FFT convolution can filter the output in a reasonable amount of time.
2. Divide the input to the filter into data blocks of length L = 1548. Note that this makes the size of your FFT and inverse FFT calculations N = L + M - 1 = 2048.
Here are some hints on how to proceed with your design:
  • 16.
1. It is probably best to use Pad blocks from Signal Processing Blockset → Signal Operations to append zeros to the end of data blocks.
2. When using the Delay Line block, make sure that the block parameter named Delay line size is set to be the same size as the input vector to the block. For example, if you input 50×1 data blocks into the Delay Line and want a delay of one data block, set the Delay line size to 50.
3. If you use a From Workspace block to import the filter coefficients to the Simulink model in a similar way to the Overlap-Save algorithm, make sure to set the sample time to 1548/8000. This will match the rate at which the length-L data blocks are received by the filter at its input.
  • 17.
The DCT Algorithm
Although there are several transformations utilized in digital video processing, the DCT continues to be one of the most common linear transformations within digital signal processing [3]. By performing lossy compression, the DCT separates the image into parts of differing importance. The 2D-DCT and 2D-IDCT, displayed below in Figures 1 and 2 respectively, concentrate the main information of the original image into a small set of low-frequency coefficients, which enables a reduction in overall computational complexity. That is, by concentrating the majority of the signal energy in the initial coefficients, we can reduce computation by performing processing solely on those elements [2]. Moreover, this process performs robustly when applied to compression and decompression storage and retrieval techniques.
Digital Video Hardware Implementation
1. Go to Simulation → Configuration Parameters, change the solver options type back to "Fixed-step", and the solver to "discrete (no continuous states)."
2. Start with your Simulink model for digital video edge detection and delete all the input and output blocks, but leave the "Image from Workspace" block.
3. Under Target Support Package → Supported Processors → TI C6000 → Board Support → DM6437EVM, find the Video Capture block. Set Sample Time to -1.
4. Add two Video Display blocks. For the first one, set Video Window to "Video 0", Video Window Position to [180, 0, 360, 480], Horizontal Zoom to 2x, and Vertical Zoom to 2x; for the second one, set Video Window to "Video 1", Video Window Position to [0, 240, 360, 240], Horizontal Zoom to 2x, and Vertical Zoom to 2x.
  • 18.
5. Add a Deinterleave block and two Interleave blocks from DM6437 EVM Board Support. Link the Video Capture block to the Deinterleave block, and link each of the two Interleave blocks to a Video Display.
6. For computational efficiency, the output video stream components Y, Cb, and Cr can be resized to a lower sample size for processing. This can be accomplished by inserting a Resize block for each video component after the initial Deinterleave block. The lower resize value should always be a multiple of 32 if the default Interleave mask value of 32 is utilized. It can be calculated by first dividing the initial video size by two, rounding to the nearest multiple of 32, and then dividing by two once more. For example, our Y video component for the gray scale was originally 720: (720×480)/2 = 360×240, and 352×256 is the nearest multiple of 32; dividing by 2 then gives the entered value of 176×128. This process should be repeated for Cb and Cr to get 88×128.
7. From the Simulink library, add a DIP block for the DM6437EVM and connect it to a Multiport Switch. The DIP switch setting should be set to SW4(0) with a sample time of 1. Connect the output pin of the DIP block to the top pin of the Multiport Switch. Double-click on the Multiport Switch and set the number of input ports to 2. Connect one of the inputs to the output of the Image from Workspace block. This will allow us to simulate the insertion of the watermark image when unauthorized access is detected.
8. Press Ctrl + B to begin upload and execute the program.
Barcode recognition
This demonstration is a prototype of 1D barcode scanning using the DSP DM6437EVM board. A barcode encodes data in parallel lines of different widths. The most universally used barcode is the UPC (Universal Product Code). The most common form of the UPC is the UPC-A, which has 12 numerical digits encoded through varying widths of black and white parallel lines.
The UPC-A barcode is an optical pattern of bars and spaces that formats and encodes the UPC digit string. Each digit is represented by a unique pattern of two bars and two spaces. The bars and spaces are of variable width; they may be 1, 2, 3, or 4 units wide, and the total width for a digit is always 7 units. Since there are 12 numbers, the barcode has starting lines, a middle separator, and ending lines. A complete UPC-A includes 95 units: 84 for the left and right digits combined and 11 for the start, middle, and end patterns. The start and end patterns are 3 units wide and use the pattern bar-space-bar, where each bar and space is one unit wide. The middle pattern is 5 units wide
  • 19. and uses the pattern space-bar-space-bar-space, with each element also one unit wide. In addition, a UPC symbol requires a quiet zone (additional space) before the start and after the end. The second set of 6 numbers after the middle separator uses the same encoding format as the numerical values of the first 6, except that the black and white widths are reversed. The algorithm implemented in this prototype reads the UPC barcode through modules for video input, color conversion, feature calculations, barcode recognition, barcode validation, and output video display.
Color Conversion
Using video capture from the board, the image is taken from the camera into Simulink and is converted from YCrCb to RGB for better processing in Simulink. The conversion requires taking the YCrCb signal and splitting it into the three color signals Y, Cr, and Cb. After the split, since Cr and Cb are smaller in dimension than Y, they are upsampled using chroma resampling and transposed to match the dimensions of RGB, going from 4:2:2 to 4:4:4. The three color signals are transposed again before being sent to the color space conversion from YCrCb to RGB, still as three separate signals. The separate RGB signals are concatenated with a Matrix Concatenate block: one line is used for display, and another line is sent to be converted from RGB to intensity. The grayscale version of the image is the input to the feature calculations. This process of color conversion is also reversed before sending the result to the output of the board, except in that case it converts from RGB to YCrCb.
  • 20.
Feature Calculations
The feature calculations module of the algorithm creates 3 scanlines for scanning barcodes and converts the pixel values of a given row of the barcode intensity image into a vector. First, a Gaussian filter is applied to smooth the image gradient identified as the barcode region. The gradients of the scanlines are set and validated so that the scanlines lie inside the appropriate range. Then, the mean and standard deviation of the pixel intensities are calculated for the barcode area. The range of pixel parameters, f_low and f_high, for setting the color is determined. Pixels on the scanlines are compared to the f_low and f_high intensity values. A pixel is considered black if its value is less than f_low, and it is considered white if its value is f_high or larger. The remaining pixels are proportionally set between white and black. Black pixels are set to 1 and white pixels are set to -1. From these calculations, the vector of pixels from the scanlines is input to the barcode recognition. The scanlines are also sent to the display to be added to the real-time video. The barcode recognition module consists of three parts: bar detection, barcode detection, and a barcode comparison block. The bar detection block detects bars from the barcode feature signal. First, it tries to identify a black bar; if there is none, then the first bar has zero width. If there is a black bar, then it calculates the pixels of the black bar. For the white bars, it does the same. After bar detection, barcode detection begins with the beginning bars and calculates all the possible barcode values that may form a valid string with all the possible separators. This function returns a sequence of indices to the barcode guard bars. The barcode comparison block takes in the codebook for all the encoded GTIN-13 barcode values. It also reverses the codebook for determining the last 6 digits of the GTIN-13 barcode.
The barcode recognition block takes in the barcodes and tries to match each barcode with the numbers of pixels generated by the bar detection. In order to ensure better accuracy, the values are calculated from left to right and from right to left, and a normalized confidence is calculated. The barcode recognition block then returns the barcode and the normalized confidence.
Barcode Validation
In the barcode validation stage of the algorithm, a simple checksum is used to determine whether the barcode is valid. It is calculated by taking the even elements and multiplying them by three. Then, add the sum of the odd elements with
  • 21. the sum of the even elements. Take this sum modulo 10 and subtract the result from 10. If the answer is the same as the check digit, which is the last digit, then the barcode is valid. This validation, along with a confidence level higher than the threshold, allows the barcode to be displayed on the screen.
Display
The display adds the scanlines to the real-time video and displays the barcode only if it is validated and has a confidence level high enough to enable the switch for display. All the information is sent to the module that converts the 3-dimensional matrices back to 2D matrices. Then, RGB is converted to YCrCb format for display through the board.
Edge Detection in Image and Video
Advances in digital image processing and digital video processing have opened incredible possibilities for computer vision applications. Research and development into common image and signal processing algorithms often employs a combination of multidimensional signal processing, discrete mathematics, topology, color theory, human perception, and application-based physics modeling, among others. Algorithms range from "low-level" techniques such as image and video restoration and enhancement to abstract methodologies based on artificial intelligence for pattern recognition. Common challenges in designing image and video processing applications include managing computational complexity and scalability, achieving real-time performance given the volume of data that must be processed, mathematically modeling a broad class of image and video signals that can vary greatly across applications, and ensuring reliability and robustness over a broad range of applications.
Edge Detection: 1-D Example
Edge detection algorithms can be grouped into one of two categories: gradient-based edge detection and zero-crossing-based edge detection. To elucidate the concept of edge detection, we present the steps for a one-dimensional example that makes use of a gradient-based approach.
Figure 1(a) presents a one-dimensional signal f(x) for which we would like to identify “edges” (i.e., sharp changes in amplitude). Taking the gradient (in this case corresponding to a one-dimensional
  • 22. continuous-time derivative with respect to the independent variable x) to give f'(x) results in the signal of Figure 1(b). Taking the absolute value of f'(x) and then thresholding results in the detection of edges, as seen in Figures 1(c) and (d). If the threshold is too high, then fewer edges are identified; if the threshold is too low, then spurious edges may be detected.
Figure 1: (a) One-dimensional signal denoted f(x); (b) gradient signal f'(x); (c) absolute value of gradient, |f'(x)|, and edge detection using Threshold 1; (d) absolute value of gradient, |f'(x)|, and edge detection using Threshold 2.