DISCRETE COSINE TRANSFORM WITH
ADAPTIVE HUFFMAN CODING BASED
IMAGE COMPRESSION
ABSTRACT
This work presents a method of compression based on Huffman coding driven by histogram
information and image segmentation. It can be used for both lossless and lossy compression.
How much of the image is compressed in a lossy manner, and how much in a lossless manner,
depends on the information obtained from the histogram of the image. The results show that
the difference between the original and compressed images is visually negligible. The
compression ratio (CR) and peak signal-to-noise ratio (PSNR) are obtained for different
images. The relation between compression ratio and peak signal-to-noise ratio shows that as
the compression ratio increases, the PSNR also increases. A minimum mean square error can
also be obtained. A higher PSNR indicates better image quality.
A PROJECT REPORT
On
DISCRETE COSINE TRANSFORM WITH ADAPTIVE HUFFMAN
CODING BASED IMAGE COMPRESSION
Submitted by: SATYENDRA KUMAR
(ROLL NO. 0028; REGISTRATION NO. 0068/15)
Academic Year 2015-2019.
As partial fulfilment for the award of the degree
of
BACHELOR OF TECHNOLOGY
in
Electronics and Communication Engineering
Under the Supervision of:
Dr. Anukul Pandey
Assistant Professor
Department of Electronics and Communication Engineering
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
DUMKA ENGINEERING COLLEGE
(Established by Government of Jharkhand & Run by TECHNO INDIA under PPP)
DUMKA-814101 (AFFILIATED TO S.K.M.U)
DECLARATION
I, SATYENDRA KUMAR, University Roll No. 0028 / Reg. Number 0068/15, student of 7th
semester B.Tech in Electronics & Communication Engineering, Dumka Engineering College,
Dumka, hereby declare that the project work entitled “Discrete Cosine Transform With
Adaptive Huffman Coding Based Image Compression”, submitted to the Sido-Kanho
Murmu University during the academic year 2018-2019, is the original work done by me
under the supervision of Dr. Anukul Pandey, Assistant Professor, Department of Electronics &
Communication Engineering. This project work is submitted in partial fulfilment of the
requirements for the award of the degree of Bachelor of Technology in Electronics &
Communication Engineering. The results embodied in this project have not been submitted
to any other institute or university for the award of any degree.
Date: 14/03/2019 SATYENDRA KUMAR
Place: Dumka
CERTIFICATE OF APPROVAL
The dissertation is hereby approved as a bonafide and creditable project work, “DISCRETE
COSINE TRANSFORM WITH ADAPTIVE HUFFMAN CODING BASED IMAGE
COMPRESSION”, carried out and presented by SATYENDRA KUMAR (Roll No. 0028 and Reg.
No. 0068/15 of 2018-2019) in a manner to warrant its acceptance as a prerequisite for the
award of the degree of Bachelor of Technology (B.Tech) in Electronics and Communication
Engineering. The undersigned do not necessarily endorse or take responsibility for any
statement or opinion expressed or conclusion drawn therein, but only approve the
dissertation for the purpose for which it is submitted.
Dr. Anukul Pandey
Supervisor
Assistant Professor
Department of Electronics & Communication Engineering
Dumka Engineering College
(Established by Government of Jharkhand & Run by
TECHNO INDIA under PPP)
Mr. Sujit Khamaru
Head of Department
Assistant Professor
Department of Electronics & Communication
Engineering
Dumka Engineering College
(Established by Government of Jharkhand &
Run by TECHNO INDIA under PPP)
Acknowledgments
I take this opportunity to express a deep sense of gratitude towards my project supervisor Dr.
Anukul Pandey, for providing excellent guidance, encouragement and inspiration throughout
the project work. Without his invaluable guidance, this work would never have been a
successful one. I would also like to thank all my batch mates for their valuable suggestions and
helpful discussions.
I have put considerable effort into this work. However, it would not have been possible without the valuable
suggestions, kind support and help of our HOD Mr. Sujit Khamaru and many other individuals
of ECE department of our institution. I would like to extend my sincere thanks to all of them.
Finally, I will forever be grateful to my parents and sister for their unconditional endless love
and for giving me the best of everything in the world. I would like to express my appreciation,
for all their sacrifices and efforts. Without their love and encouragement, all I could have
achieved would be a complete failure.
SATYENDRA KUMAR
Roll No- 0028
Registration No- 0068/15
Dumka Engineering College
Date-
ABSTRACT
Image compression is one of the most important steps in image transmission and storage. “A picture is worth
more than a thousand words” is a common saying. Images play an indispensable role in representing vital
information and need to be saved for further use or transmitted over a medium. In order to make efficient
use of disk space and transmission rate, images need to be compressed. Image compression is the
technique of reducing the file size of an image without compromising the image quality beyond an acceptable
level. Image compression has been in use for a long time and many algorithms have been devised.
Due to the increasing requirements for transmission of images in computer and mobile environments,
research in the field of image compression has increased significantly. Image compression plays a crucial
role in digital image processing, and it is also very important for efficient transmission and storage of images.
When we compute the number of bits per image resulting from typical sampling rates and quantization
methods, we find that image compression is needed. Therefore, the development of efficient techniques for
image compression has become necessary. This report surveys lossy image compression using the
Discrete Cosine Transform; it covers the JPEG compression algorithm, which is used for full-colour still image
applications, and describes all of its components.
Definition: -
Data compression is the process of encoding information using fewer units of storage than
an un-encoded representation of the data would use, through the use of specific encoding
schemes.
Data compression, sometimes called source coding, is the process of converting input data
into another data stream that has a smaller size but retains the essential information
contained within the original data stream.
1. Make optimal use of limited storage space.
2. Save time and help to optimize resources:
   - If compression and decompression are done in the I/O processor, less time is
     required to move data to or from the storage subsystem, freeing the I/O bus for
     other work.
   - When sending data over a communication line: less time to transmit and less
     storage at the host.
3. Compression is useful because it helps reduce the consumption of resources, such as
   hard disk space or transmission bandwidth.
4. With the interest and surge in environmental test data for the Surveillance Program,
   significant strains on computer storage resources will occur.
5. Archiving of environmental test data from legacy systems, including data for the
   Environment Test lab.
6. Familiar examples of compressed data files include the .zip, .rar and .tar file
   extensions.
- Lossy Compression System
Lossy compression techniques are used for images where we can sacrifice some of the finer
details in the image to save a little more bandwidth or storage space. They are guided by
research on how people perceive the data in question and are used when some loss of fidelity
is acceptable. As an example, the human eye is more sensitive to subtle variations in
luminance than to variations in colour; therefore, colour detail can be reduced while
maintaining the perceived integrity of the image. JPEG image compression works in part by
“rounding off” some of this less important information. Lossy data compression provides a
method of obtaining the best fidelity for a given amount of compression.
- Lossless Compression System
A lossless compression system aims at reducing the bit rate of the compressed output without
any distortion of the image; the bit-stream after decompression is identical to the original
bit-stream. These algorithms usually exploit statistical redundancy to represent the data
more concisely without error. Most real-world data has statistical redundancy. For example,
in English text the letter ‘e’ is much more common than the letter ‘z’, and the probability
that the letter ‘q’ will be followed by the letter ‘z’ is very small.
- Predictive coding
Predictive coding is a lossless coding method: the value of every element in the decoded
image is identical to the corresponding element in the original image. Differential Pulse
Code Modulation (DPCM) is a typical example.
- Transform coding
Transform coding forms an integral part of compression techniques. The reversible linear
transform in transform coding aims at mapping the image into a set of coefficients; the
resulting coefficients are then quantized and coded. One of the first transforms used for
this purpose was the discrete cosine transform (DCT).
Three closely connected components form a typical lossy image compression system: (a) Source
Encoder, (b) Quantizer and (c) Entropy Encoder.
(a) Source Encoder (or Linear Transformer)
It is aimed at decorrelating the input signal by transforming its representation into one in
which the set of data values is sparse, thereby compacting the information content of the
signal into a smaller number of coefficients. A variety of linear transforms have been
developed, such as the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT) and
Discrete Fourier Transform (DFT).
(b) Quantizer
A quantizer aims at reducing the number of bits needed to store the transformed coefficients
by reducing the precision of those values. Quantization is performed either on each
individual coefficient, i.e. Scalar Quantization (SQ), or on a group of coefficients
together, i.e. Vector Quantization (VQ).
(c) Entropy Coding
Entropy encoding removes redundancy by removing repeated bit patterns in the output of the
quantizer. The most common entropy coders are Huffman coding, arithmetic coding, Run Length
Encoding (RLE) and the Lempel-Ziv (LZ) algorithm.
Figure 1. Encoder of an image compression system: input source image → reduce correlation
between pixels → quantization → entropy coding → output compressed image.
Performance Criteria in Image Compression
We can estimate the performance by applying the following two essential criteria: the
compression ratio (CR) and the quality measurement of the reconstructed image (PSNR).
(a) Compression ratio
The compression ratio (CR) is the ratio between the original image size and the compressed
image size:

    CR = n1 / n2

(b) Distortion measure
The Mean Square Error (MSE) is a measure of the distortion in the reconstructed image:

    MSE = (1 / (H·W)) Σ_{i=1}^{H} Σ_{j=1}^{W} [X(i,j) − Y(i,j)]²

where H and W are the image height and width, X is the original image and Y is the
reconstructed image. The PSNR has been accepted as a widely used quality measurement in the
field of image compression:

    PSNR = 10 log10( 255² / MSE )  (dB)
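As a simple illustration of these criteria, the following MATLAB sketch computes CR, MSE and
PSNR for an original and a reconstructed grayscale image; the file names are placeholders and
the compressed size is taken from an assumed bit-stream file.

% Minimal sketch (assumed file names) for computing CR, MSE and PSNR.
X = double(imread('original.png'));        % original image
Y = double(imread('reconstructed.png'));   % reconstructed image
[H, W] = size(X);

origInfo = dir('original.png');            % file sizes in bytes
compInfo = dir('compressed.bin');
CR  = origInfo.bytes / compInfo.bytes;     % compression ratio n1/n2

MSE  = sum((X(:) - Y(:)).^2) / (H * W);    % mean square error
PSNR = 10 * log10(255^2 / MSE);            % peak signal-to-noise ratio in dB

fprintf('CR = %.2f, MSE = %.2f, PSNR = %.2f dB\n', CR, MSE, PSNR);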
DCT TRANSFORMATION
The most popular technique for image compression over the past several years has been the
discrete cosine transform (DCT). Its selection as the standard for JPEG is one of the major
reasons for its popularity. The DCT is used by many non-analytical applications such as image
processing, and by signal-processing DSP applications such as video conferencing; it is the
transformation used here for data compression. The DCT is an orthogonal transform with a
fixed set of basis functions, and it maps an image from the spatial domain into the frequency
domain.
The DCT has several advantages: it has the ability to pack the energy of image data into the
lower frequencies, and it reduces the blocking artefact effect, which results when the
boundaries between sub-images become visible.
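This energy-compaction property can be seen with a few lines of MATLAB. The sketch below (the
test image name is an assumption) takes the 2-D DCT of one 8×8 block and measures how much of
its energy falls in the low-frequency corner.

% Sketch (assumed file name): energy compaction of the 8x8 2-D DCT.
I = im2double(rgb2gray(imread('DSC_0157.jpg')));  % any test image
block = I(1:8, 1:8);                % one 8x8 block in the spatial domain
B = dct2(block);                    % the same block in the frequency domain

totalEnergy = sum(B(:).^2);
lowFreq     = B(1:4, 1:4);          % top-left quarter (lowest frequencies)
fprintf('Energy in low-frequency quarter: %.1f%%\n', ...
        100 * sum(lowFreq(:).^2) / totalEnergy);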
JPEG COMPRESSION
The JPEG standard is the very well-known ISO/ITU-T standard created in the late 1980s. The
JPEG standard is targeted at full-colour still-frame applications and is one of the most
common compression standards. Several modes are defined for JPEG, including baseline,
lossless, progressive and hierarchical.
The most common mode, the JPEG baseline coding system, uses the discrete cosine transform and
is suitable for most compression applications. Although it was developed for low compression
ratios, it is very helpful for DCT quantization and compression.
JPEG compression reduces file size with minimum image degradation by eliminating the least
important information. It is considered a lossy image compression technique because the final
image and the original image are not completely the same, and in lossy compression the
information that is lost is affordable. JPEG compression is performed in sequential steps.
JPEG Process Steps for colour images
This section presents the JPEG compression steps; a small MATLAB sketch of the block DCT and
quantization stage is given after the figure below.
- An RGB to YCbCr colour space conversion (colour specification).
- The original image is divided into blocks of 8 x 8.
- The pixel values within each block range from [-128, 127], but pixel values of a black
  and white image range from [0, 255], so each block is shifted from [0, 255] to
  [-128, 127].
- The DCT is applied to each block, working from left to right, top to bottom.
- Each block is compressed through quantization.
- The quantized matrix is entropy encoded.
- The compressed image is reconstructed through the reverse process, which uses the
  inverse Discrete Cosine Transform (IDCT).
Figure 2 represents the encoder and decoder block diagrams for colour images.
Compression algorithm scheme: (a) compression step and (b) decompression step
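The sketch below illustrates the level shift, blockwise 2-D DCT, quantization with the
luminance table and the reverse path for one grayscale channel. It is a simplified
illustration under an assumed file name, not the full JPEG codec.

% Simplified sketch of the JPEG block pipeline for one 8-bit channel.
I = double(rgb2gray(imread('DSC_0157.jpg')));     % assumed test image
I = I(1:8*floor(end/8), 1:8*floor(end/8)) - 128;  % crop to whole 8x8 blocks, level shift

QY = [16 11 10 16 24 40 51 61;   12 12 14 19 26 58 60 55;
      14 13 16 24 40 57 69 56;   14 17 22 29 51 87 80 62;
      18 22 37 56 68 109 103 77; 24 35 55 64 81 104 113 92;
      49 64 78 87 103 121 120 101; 72 92 95 98 112 100 103 99];

fwd = @(b) round(dct2(b.data) ./ QY);             % DCT + quantization per block
inv = @(b) idct2(b.data .* QY);                   % dequantization + inverse DCT

coeff = blockproc(I, [8 8], fwd);                 % quantized coefficients
recon = blockproc(coeff, [8 8], inv) + 128;       % reconstructed channel
imshow(uint8(recon));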
Colour Specification
The YUV colour coordinate defines the Y, Cb and Cr components of a colour image, where Y is
commonly called the luminance and Cb, Cr are commonly called the chrominance. The RGB
primaries used by a colour display mix the luminance and chrominance attributes of the light.
Describing a colour in terms of its luminance and chrominance content separately enables more
efficient processing and transmission of colour signals in many applications. To achieve this
goal, various three-component colour coordinates have been developed, in which one component
(Y) reflects the luminance and the other two (Cb, Cr) collectively characterize hue and
saturation. The [Y Cb Cr]^T values in the YUV coordinate are related to the [R G B]^T values
in the RGB coordinate by

    [ Y  ]   [  0.299   0.587   0.114 ] [ R ]   [   0 ]
    [ Cb ] = [ -0.169  -0.334   0.500 ] [ G ] + [ 128 ]
    [ Cr ]   [  0.500  -0.419  -0.081 ] [ B ]   [ 128 ]
Similarly, if we want to transform the YUV coordinate back to the RGB coordinate, the inverse
matrix can be calculated from (4), and the inverse transform is applied to obtain the
corresponding RGB components.
After colour coordinate conversion, the next step is to divide the three colour components of
the image into many 8×8 blocks. For an 8-bit image, each element in the original block falls
in the range [0, 255]. A data range centred around zero is produced by subtracting the
mid-point of the range (the value 128) from each element in the original block, so that the
modified range is shifted from [0, 255] to [-128, 127]. The DCT separates the image into
parts of different frequencies; the quantization step discards the less important frequencies
and the decompression step uses the important frequencies to retrieve the image.
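Before moving to the DCT, here is a small MATLAB sketch of the forward colour conversion
above, applied to every pixel of an RGB image (the test image name is an assumption).

% Sketch of the colour conversion above, applied pixel-wise to an RGB image.
rgb = double(imread('DSC_0157.jpg'));              % assumed test image, values 0..255
M = [ 0.299   0.587   0.114;
     -0.169  -0.334   0.500;
      0.500  -0.419  -0.081];
offset = [0; 128; 128];

[h, w, ~] = size(rgb);
pix = reshape(rgb, [], 3)';                        % 3 x (h*w) matrix of [R G B]' columns
ycc = M * pix + offset;                            % [Y Cb Cr]' for every pixel
Y  = reshape(ycc(1,:), h, w);
Cb = reshape(ycc(2,:), h, w);
Cr = reshape(ycc(3,:), h, w);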
This equation gives the forward 2-D DCT transformation:

    F(u,v) = (2/N) C(u) C(v) Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x,y) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

for u = 0,...,N-1 and v = 0,...,N-1, where N = 8 and C(k) = 1/√2 for k = 0, and 1 otherwise.
This equation gives the inverse 2-D DCT transformation:

    f(x,y) = (2/N) Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} C(u) C(v) F(u,v) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

for x = 0,...,N-1 and y = 0,...,N-1.
After the DCT transformation, the “DC coefficient” is the upper-left element corresponding to
(0,0), and the remaining coefficients are called “AC coefficients”.
The JPEG encoding system
We actually throw away data in the quantization step. Quantization is performed by dividing
the transformed DCT matrix of the image by the quantization matrix; values of the resulting
matrix are then rounded off. The quantized coefficient is defined in (6), and the reverse
process is given by (7):

    F(u,v)_quantized = round( F(u,v) / Q(u,v) )                              (6)

    F(u,v)_deQ = F(u,v)_quantized × Q(u,v)                                   (7)

Quantization aims at reducing most of the less important high-frequency DCT coefficients to
zero; the more zeros, the better the image will compress. Lower frequencies are used to
reconstruct the image because the human eye is more sensitive to them, and higher frequencies
are discarded. Matrices (8) and (9) define the Q matrix for the luminance and chrominance
components:
    QY = [ 16  11  10  16  24  40  51  61
           12  12  14  19  26  58  60  55
           14  13  16  24  40  57  69  56
           14  17  22  29  51  87  80  62
           18  22  37  56  68 109 103  77
           24  35  55  64  81 104 113  92
           49  64  78  87 103 121 120 101
           72  92  95  98 112 100 103  99 ]                                  (8)

    QC = [ 17  18  24  47  99  99  99  99
           18  21  26  66  99  99  99  99
           24  26  56  99  99  99  99  99
           47  66  99  99  99  99  99  99
           99  99  99  99  99  99  99  99
           99  99  99  99  99  99  99  99
           99  99  99  99  99  99  99  99
           99  99  99  99  99  99  99  99 ]                                  (9)
After quantization, the "zig-zag" sequence orders all of the quantized coefficients as shown
in Figure 3. The "zig-zag" sequence first encodes the coefficients with lower frequencies
(typically with higher values) and then the higher frequencies (typically zero or almost
zero). The result is an extended sequence of similar data bytes, permitting efficient entropy
encoding.
Figure 3. Zigzag Sequencing
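The zig-zag scan itself is easy to express in MATLAB. The helper below is not part of the
original report code (the function name zigzag8 is hypothetical); it reads an 8×8 block of
quantized coefficients in the order of Figure 3 by grouping elements along anti-diagonals.

% Hypothetical helper (e.g. saved as zigzag8.m): scan an 8x8 block in zig-zag order.
function seq = zigzag8(block)
    [c, r] = meshgrid(1:8, 1:8);
    d = r + c;                         % anti-diagonal index (2..16) of each element
    seq = zeros(1, 64);
    k = 1;
    for s = 2:16
        idx = find(d == s);            % column-major order: bottom-left to top-right
        if mod(s, 2) == 1              % odd diagonals run top-right to bottom-left
            idx = flipud(idx);
        end
        seq(k:k+numel(idx)-1) = block(idx);
        k = k + numel(idx);
    end
end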
Huffman Encoding
Entropy coding achieves additional lossless compression by encoding the quantized DCT
coefficients more compactly. Both Huffman coding and arithmetic coding are specified by the
JPEG proposal; Huffman coding is used in the baseline sequential codec, while the other modes
of operation may use either Huffman or arithmetic coding. Huffman coding is efficient for
source symbols that are not equally probable. In 1952, Huffman suggested a variable-length
encoding algorithm based on the source symbol probabilities P(xi), i = 1, 2, ..., L. The
algorithm is optimal in the sense that the average number of bits required to represent the
source symbols is a minimum, provided the prefix condition is met. The Huffman algorithm
begins with a set of symbols, each with its frequency of occurrence (probability),
constructing what we can call a frequency table. The algorithm then builds the Huffman tree
using this frequency table. The tree structure contains nodes, each of which contains a
symbol, its frequency, a pointer to a parent node, and pointers to the left and right child
nodes. Successive passes through the existing nodes allow the tree to grow. Each pass
searches for the two nodes that have the two lowest frequency counts, provided that they have
not yet been given a parent node. When the algorithm finds those two nodes, a new node is
generated, assigned as the parent of the two nodes, and given a frequency count that equals
the sum of the two child nodes. Those two child nodes are then ignored in the following
iterations, which include the new parent node instead. The passes stop when only one node
with no parent remains; this node becomes the root of the tree. Compression involves
traversing the tree, beginning at the leaf node for the symbol to be compressed and
navigating to the root. At each step the parent of the current node is selected and it is
determined whether the current node is the "right" or "left" child of the parent, which
determines whether the next bit is a (1) or a (0). The final bit string is then reversed,
because we proceed from leaf to root.
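A tiny worked example of this construction, using MATLAB's huffmandict on a made-up
four-symbol source (the probabilities below are purely illustrative), shows how the least
probable symbols receive the longest codewords.

% Worked example: Huffman codes for a hypothetical 4-symbol source.
symbols = [1 2 3 4];
prob    = [0.5 0.25 0.15 0.1];          % assumed probabilities (must sum to 1)
[dict, avglen] = huffmandict(symbols, prob);

% Display each symbol and its codeword.
for k = 1:numel(symbols)
    fprintf('symbol %d -> %s\n', dict{k,1}, num2str(dict{k,2}));
end
fprintf('average code length = %.2f bits/symbol\n', avglen);

encoded = huffmanenco([1 1 2 4 3 1], dict);    % encode a short test sequence
decoded = huffmandeco(encoded, dict);          % and recover it exactly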
Decompression
The decompression process reverses the compression phase, in the opposite order. The first
step is restoring the Huffman tables from the image and decompressing the Huffman tokens in
the image. Next, the DC value of each block is the first thing needed to decompress that
block; JPEG then decompresses the other 63 values in each block, filling in the appropriate
number of zeros. The next step is decoding the zig-zag order and recreating the 8 x 8 blocks.
Finally, the inverse DCT (IDCT) reconstructs each value in the spatial domain from the
contributions that each of the 64 frequency values makes to that pixel.
1. Method
% Read the image, convert to grayscale and Huffman-encode its pixel values.
img = imread('C:\Users\ADMIN\Desktop\DSC_0157.jpg');
Image = rgb2gray(img);
Image = Image(:);                       % flatten to a column vector of pixels
[N, M] = size(Image);
Count = zeros(256,1);                   % histogram of grey levels 0..255
for i = 1:N
    for j = 1:M
        Count(Image(i,j)+1) = Count(Image(i,j)+1)+1;
    end
end
prob = Count/(M*N);                     % probability of each grey level
symbols = 0:255;
[dict,avglen] = huffmandict(symbols,prob);   % Huffman dictionary and average code length
comp = huffmanenco(double(Image),dict);      % encoded bit stream
imshow(img);
2. Method
%Target: To huffman encode and decode user entered string
%---------------------------------------------------------------
-----------
string=input('enter the string in inverted commas');
%input string
symbol=[]; %initialise variables
count=[];
j=1;
%------------------------------------------ loop to separate symbols and count how many times each occurs
for i=1:length(string)
flag=ismember(symbol,string(i)); %check whether this symbol has already been recorded
if sum(flag)==0
symbol(j) = string(i);
k=ismember(string,string(i));
c=sum(k);
count(j) = c;
j=j+1;
end
end
ent=0;
total=sum(count);            %total no of symbols
prob=[];
%----------------------------------------- for loop to find probability and entropy
for i=1:length(count)        %count(i) = no of times symbol i occurs
prob(i)=count(i)/total;
ent=ent-prob(i)*log2(prob(i));
end
var=0;
%----------------------------------------- create the Huffman code dictionary
[dict avglen]=huffmandict(symbol,prob);
% print the dictionary.
temp = dict;
for i = 1:length(temp)
temp{i,2} = num2str(temp{i,2});
var=var+(length(dict{i,2})-avglen)^2; %variance calculation
end
temp
%----------------------------------------- encoder and decoder functions
sig_encoded=huffmanenco(string,dict)
deco=huffmandeco(sig_encoded,dict);
equal = isequal(string,deco)
%----------------------------------------- decoded string and output variables
str ='';
for i=1:length(deco)
str= strcat(str,deco(i));
end
str
ent
avglen
var
3. Method
%clearing all variables and screen
clear all;
close all;
clc;
%Reading image
a=imread('DSC_0157.jpg');
imshow(a);
%converting an image to grayscale
I=rgb2gray(a);
%size of the image
[m,n]=size(I);
Totalcount=m*n;
%variables using to find the probability
cnt=1;
sigma=0;
%computing the cumulative probability.
for i=0:255
k=I==i;
count(cnt)=sum(k(:));
%pro array is having the probabilities
pro(cnt)=count(cnt)/Totalcount;
sigma=sigma+pro(cnt);
cumpro(cnt)=sigma;
cnt=cnt+1;
end;
%Symbols for an image
symbols = [0:255];
%Huffman code Dictionary
dict = huffmandict(symbols,pro);
%function which converts array to vector
vec_size = 1;
for p = 1:m
for q = 1:n
newvec(vec_size) = I(p,q);
vec_size = vec_size+1;
end
end
%Huffman Encoding
hcode = huffmanenco(newvec,dict);
%Huffman Decoding
dhsig1 = huffmandeco(hcode,dict);
%converting dhsig1 from double to dhsig uint8
dhsig = uint8(dhsig1);
%vector to array conversion
dec_row=sqrt(length(dhsig));
dec_col=dec_row;
%variables using to convert vector 2 array
arr_row = 1;
arr_col = 1;
vec_si = 1;
for x = 1:m
for y = 1:n
back(x,y)=dhsig(vec_si);
arr_col = arr_col+1;
vec_si = vec_si + 1;
end
arr_row = arr_row+1;
end
%converting image from grayscale to rgb
[deco, map] = gray2ind(back,256);
RGB = ind2rgb(deco,map);
imwrite(RGB,'decoded.JPG');
%end of the huffman coding
4. Method
%clearing all variables and screen
clear all;
close all;
clc;
%Reading image
a=imread('jpeg-image-compression-1-638.JPG');
figure,imshow(a)
%converting an image to grayscale
I=rgb2gray(a);
%size of the image
[m,n]=size(I);
Totalcount=m*n;
%variables using to find the probability
cnt=1;
sigma=0;
%computing the cumulative probability.
for i=0:255
k=I==i;
count(cnt)=sum(k(:));
%pro array is having the probabilities
pro(cnt)=count(cnt)/Totalcount;
sigma=sigma+pro(cnt);
cumpro(cnt)=sigma;
cnt=cnt+1;
end;
%Symbols for an image
symbols = [0:255];
%Huffman code Dictionary
dict = huffmandict(symbols,pro);
%function which converts array to vector
vec_size = 1;
for p = 1:m
for q = 1:n
newvec(vec_size) = I(p,q);
vec_size = vec_size+1;
end
end
%Huffman Encoding
hcode = huffmanenco(newvec,dict);
%Huffman Decoding
dhsig1 = huffmandeco(hcode,dict);
%converting dhsig1 from double to dhsig uint8
dhsig = uint8(dhsig1);
%vector to array conversion
dec_row=sqrt(length(dhsig));
dec_col=dec_row;
%variables using to convert vector 2 array
arr_row = 1;
arr_col = 1;
vec_si = 1;
for x = 1:m
for y = 1:n
back(x,y)=dhsig(vec_si);
arr_col = arr_col+1;
vec_si = vec_si + 1;
end
arr_row = arr_row+1;
end
%converting image from grayscale to rgb
[deco, map] = gray2ind(back,256);
RGB = ind2rgb(deco,map);
imwrite(RGB,'decoded.JPG');
%end of the huffman coding
5. Method
clc;
clear all;
global CODE
A1 = imread('fig1.jpg');
A1=rgb2gray(A1);
A1=imresize(A1,[128 128]);
figure(1)
imshow(A1)
[M, N]=size(A1);
A = A1(:);
count = imhist(A);
A=num2cell(A);
p = count / numel(A);
%sum(p);
CODE = cell(length(p), 1);
s = cell(length(p), 1);
for i = 1:length(p) % Generate a starting tree with symbol nodes 1, 2, 3, ... to
% reference the symbol probabilities.
s{i} = i ;
end
while numel(s) > 2
[p, i] = sort(p); % Sort the symbol probabilities
p(2) = p(1) + p(2); % Merge the 2 lowest probabilities
p(1) = []; % and prune the lowest one
s = s(i) ; % Reorder tree for new probabilities
s{2} = {s{1}, s{2}}; % and merge & prune its nodes
s(1) = []; % to match the probabilities
end
makecode(s, [])
for i = 1:numel(CODE)
c = CODE{i};
t = c; c(c=='1') = '0'; c(t=='0') = '1';
CODE{i} = c;
end
CODE;
for n=1:256
index=find(cell2mat(A)==n-1);
ANN(index)=CODE(n);
end
codedim=bin2dec(ANN');
codedim=reshape(codedim,M,N);
codedim=uint8(codedim);
imwrite(codedim, 'fig1-Copy.jpg')
figure(2)
imshow(codedim)
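Method 5 calls a helper function makecode that is not included in the listing. A minimal
sketch of such a helper, following the common recursive pattern for walking the nested-cell
Huffman tree built above (the original implementation is not shown in this report), is:

% Hypothetical helper makecode.m: assign binary codewords by walking the Huffman tree.
function makecode(sc, codeword)
global CODE
if iscell(sc)                          % internal node: recurse into both children
    makecode(sc{1}, [codeword 0]);     % left branch appends a 0
    makecode(sc{2}, [codeword 1]);     % right branch appends a 1
else                                   % leaf node: sc is the symbol index
    CODE{sc} = char('0' + codeword);   % store the accumulated bit string
end
end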
6. Method
clc;
% input image
image = imread('img1.png');
[m,n] = size(image);
J = imresize(image,[256 256]);
[x,y] = size(J);
figure();
imshow(J);
title('original RGB image')
drawnow();
Image = rgb2gray(J);
figure();
imshow(Image);
title('original image as grayscale');
drawnow();
[N, M] = size(Image);
Count = zeros(256,1);
for i = 1:N
for j = 1:M
Count(Image(i,j)+1) = Count(Image(i,j)+1)+1;
end
end
prob = Count/(M*N);
prob_1=prob(:)';
symbols = 0:255;
[dict,avglen] = huffmandict(symbols,prob);
comp = huffmanenco(Image(:),dict);
l_encoded=length(comp);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%
head1 = [l_encoded [N, M] prob_1 ];
l_head = length( head1 );
header = [l_head head1];
%%%%%%%%%%% File writing
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
image=input('Enter output file name (e.g. comp.bin): ', 's');
fid=fopen(image,'w');
fwrite(fid, header, 'double'); %%%% writing the header in double format
fwrite(fid,comp,'ubit1'); %%%% writing the encoded bits in binary format after the header
fclose(fid);
decomp = cast( reshape(huffmandeco(comp,dict), N, M), class(Image));
figure();
imshow(decomp);
title('reconstructed image');
drawnow();
figure();
dimg = imsubtract(Image, decomp);
imshow(dimg, []);
title('difference between original and reconstructed (expect no difference)');
drawnow();
RESULTS AND DISCUSSION
In the JPEG image compression algorithm, the input image is divided into 4-by-4 or 8-by-8 blocks, and
the two-dimensional DCT is computed for each block. The DCT coefficients are then quantized, coded,
and transmitted. The JPEG receiver (or JPEG file reader) decodes the quantized DCT coefficients,
computes the inverse two-dimensional DCT of each block, and then puts the blocks back together into
a single image. For typical images, many of the DCT coefficients have values close to zero; these
coefficients can be discarded without seriously
affecting the quality of the reconstructed image. The example code below computes the two-
dimensional DCT of 8-by-8 blocks in the input image, discards (sets to zero) all but 10 of the 64 DCT
coefficients in each block, and then reconstructs the image using the two dimensional inverse DCT of
each block. The transform matrix computation method is used.
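A sketch corresponding to that description, closely following the standard MATLAB
dctmtx/blockproc documentation example (the input file name is an assumption):

% Blockwise DCT compression: keep only 10 of the 64 coefficients per 8x8 block.
I = im2double(rgb2gray(imread('DSC_0157.jpg')));   % assumed test image
T = dctmtx(8);                                     % 8x8 DCT transform matrix
dct  = @(block_struct) T * block_struct.data * T';
B = blockproc(I, [8 8], dct);                      % forward DCT of each block

mask = [1 1 1 1 0 0 0 0                            % keep 10 low-frequency coefficients
        1 1 1 0 0 0 0 0
        1 1 0 0 0 0 0 0
        1 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0];
B2 = blockproc(B, [8 8], @(bs) mask .* bs.data);   % discard the rest

invdct = @(block_struct) T' * block_struct.data * T;
I2 = blockproc(B2, [8 8], invdct);                 % reconstruct from kept coefficients
imshow(I), figure, imshow(I2)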
Figure 2: Although there is some loss of quality in the reconstructed image, it is clearly
recognizable, even though almost 85% of the DCT coefficients were discarded.
(Original Image)
(Compressed Image)
(Decompressed Image)
Metric      JPEG (Existing)    JPEG (Proposed)
MSE         17502.21           63.63
PSNR        5.70               30.09
NK          0.00               1.00
AD          123.58             -0.02
SC          64979.04           1.00
MD          239.00             58.00
NAE         1.00               0.04
Figure 3. (Left-Bottom) Lena, 8-by-8 DCT, 4-by-4 DCT; (Right-Bottom) Apple, 8-by-8 DCT, 4-by-4 DCT

          Compression by   Compression by   Compression by   Compression by
          8-by-8 DCT       4-by-4 DCT       8-by-8 DCT       4-by-4 DCT
JPEG-1    6.70%            6.31%            2.97%            2.76%
JPEG-2    6.24%            4.86%            2.47%            1.73%
JPEG-3    6.24%            4.43%            2.29%            1.56%
JPEG-4    6.04%            4.17%            2.14%            1.35%
JPEG-5    5.19%            3.76%            1.51%            1.26%
JPEG-6    4.47%            3.20%            1.26%            0.96%
JPEG-7    3.79%            2.44%            1.11%            0.68%
JPEG-8    3.02%            1.63%            0.81%            0.23%
JPEG-9    2.25%            0.00%            0.26%            0.00%
CONCLUSION
Image compression is used for managing images in digital format. This report has surveyed the
fast and efficient lossy JPEG coding algorithm for image compression/decompression using the
Discrete Cosine Transform. We also briefly introduced the principles behind digital image
compression and various image compression methodologies, and the JPEG process steps including
DCT, quantization and entropy encoding.
In future work we will make a comparison between two image compression techniques (the
Discrete Cosine Transform and the Discrete Wavelet Transform).
In this work, we improve the performance of the image compression. To do so, we improve the
parameters MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio), NK (Normalized Cross
Correlation), AD (Average Difference), SC (Structural Content), MD (Maximum Difference) and
NAE (Normalized Absolute Error). As can be seen in the results section, the PSNR and NK
values increase while all the remaining values decrease. To improve the performance of the
image compression, the values of NAE, MD, SC, AD and MSE have to decrease, while the PSNR and
NK values have to increase.
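For reference, the sketch below shows how these seven metrics can be computed in MATLAB
between an original image X and a reconstructed image Y, using their usual textbook
definitions (the file names are placeholders).

% Sketch: common full-reference quality metrics between original X and reconstruction Y.
X = double(imread('original.png'));        % assumed file names, grayscale images
Y = double(imread('reconstructed.png'));
[H, W] = size(X);

MSE  = sum((X(:)-Y(:)).^2) / (H*W);                  % Mean Square Error
PSNR = 10*log10(255^2 / MSE);                        % Peak Signal to Noise Ratio (dB)
NK   = sum(X(:).*Y(:)) / sum(X(:).^2);               % Normalized Cross Correlation
AD   = sum(X(:)-Y(:)) / (H*W);                       % Average Difference
SC   = sum(X(:).^2) / sum(Y(:).^2);                  % Structural Content
MD   = max(abs(X(:)-Y(:)));                          % Maximum Difference
NAE  = sum(abs(X(:)-Y(:))) / sum(abs(X(:)));         % Normalized Absolute Error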
Future Scope
The performance of image compression and decompression can be further enhanced with other
lossless methods of image compression because, as concluded above, the decompressed image is
almost the same as the input image, which indicates that there is little or no loss of
information during transmission. Other methods of image compression, either lossless or
lossy, can therefore be explored, such as the JPEG method, LZW coding, etc. Different metrics
can also be used to evaluate the performance of compression algorithms.
Data compression using Huffmann coding 30 | P a g e

Weitere ähnliche Inhalte

Was ist angesagt?

Point processing
Point processingPoint processing
Point processingpanupriyaa7
 
Lecture 16 KL Transform in Image Processing
Lecture 16 KL Transform in Image ProcessingLecture 16 KL Transform in Image Processing
Lecture 16 KL Transform in Image ProcessingVARUN KUMAR
 
Fundamentals and image compression models
Fundamentals and image compression modelsFundamentals and image compression models
Fundamentals and image compression modelslavanya marichamy
 
Image compression
Image compressionImage compression
Image compressionAle Johnsan
 
Image compression using discrete cosine transform
Image compression using discrete cosine transformImage compression using discrete cosine transform
Image compression using discrete cosine transformmanoj kumar
 
Color fundamentals and color models - Digital Image Processing
Color fundamentals and color models - Digital Image ProcessingColor fundamentals and color models - Digital Image Processing
Color fundamentals and color models - Digital Image ProcessingAmna
 
Digital Image Processing: Image Enhancement in the Frequency Domain
Digital Image Processing: Image Enhancement in the Frequency DomainDigital Image Processing: Image Enhancement in the Frequency Domain
Digital Image Processing: Image Enhancement in the Frequency DomainMostafa G. M. Mostafa
 
Image compression standards
Image compression standardsImage compression standards
Image compression standardskirupasuchi1996
 
Chapter 2 Image Processing: Pixel Relation
Chapter 2 Image Processing: Pixel RelationChapter 2 Image Processing: Pixel Relation
Chapter 2 Image Processing: Pixel RelationVarun Ojha
 
discrete wavelet transform
discrete wavelet transformdiscrete wavelet transform
discrete wavelet transformpiyush_11
 
Intensity Transformation and Spatial filtering
Intensity Transformation and Spatial filteringIntensity Transformation and Spatial filtering
Intensity Transformation and Spatial filteringShajun Nisha
 
COM2304: Intensity Transformation and Spatial Filtering – I (Intensity Transf...
COM2304: Intensity Transformation and Spatial Filtering – I (Intensity Transf...COM2304: Intensity Transformation and Spatial Filtering – I (Intensity Transf...
COM2304: Intensity Transformation and Spatial Filtering – I (Intensity Transf...Hemantha Kulathilake
 
Chapter 6 color image processing
Chapter 6 color image processingChapter 6 color image processing
Chapter 6 color image processingasodariyabhavesh
 

Was ist angesagt? (20)

Point processing
Point processingPoint processing
Point processing
 
Image compression .
Image compression .Image compression .
Image compression .
 
Lecture 16 KL Transform in Image Processing
Lecture 16 KL Transform in Image ProcessingLecture 16 KL Transform in Image Processing
Lecture 16 KL Transform in Image Processing
 
Fundamentals and image compression models
Fundamentals and image compression modelsFundamentals and image compression models
Fundamentals and image compression models
 
Image compression
Image compressionImage compression
Image compression
 
Hit and-miss transform
Hit and-miss transformHit and-miss transform
Hit and-miss transform
 
Data compression
Data compressionData compression
Data compression
 
Image compression using discrete cosine transform
Image compression using discrete cosine transformImage compression using discrete cosine transform
Image compression using discrete cosine transform
 
Color fundamentals and color models - Digital Image Processing
Color fundamentals and color models - Digital Image ProcessingColor fundamentals and color models - Digital Image Processing
Color fundamentals and color models - Digital Image Processing
 
Digital Image Processing: Image Enhancement in the Frequency Domain
Digital Image Processing: Image Enhancement in the Frequency DomainDigital Image Processing: Image Enhancement in the Frequency Domain
Digital Image Processing: Image Enhancement in the Frequency Domain
 
Image compression standards
Image compression standardsImage compression standards
Image compression standards
 
Chapter 2 Image Processing: Pixel Relation
Chapter 2 Image Processing: Pixel RelationChapter 2 Image Processing: Pixel Relation
Chapter 2 Image Processing: Pixel Relation
 
discrete wavelet transform
discrete wavelet transformdiscrete wavelet transform
discrete wavelet transform
 
Data Redundacy
Data RedundacyData Redundacy
Data Redundacy
 
Intensity Transformation and Spatial filtering
Intensity Transformation and Spatial filteringIntensity Transformation and Spatial filtering
Intensity Transformation and Spatial filtering
 
COM2304: Intensity Transformation and Spatial Filtering – I (Intensity Transf...
COM2304: Intensity Transformation and Spatial Filtering – I (Intensity Transf...COM2304: Intensity Transformation and Spatial Filtering – I (Intensity Transf...
COM2304: Intensity Transformation and Spatial Filtering – I (Intensity Transf...
 
image compression ppt
image compression pptimage compression ppt
image compression ppt
 
Digital Image Processing
Digital Image ProcessingDigital Image Processing
Digital Image Processing
 
Chapter 6 color image processing
Chapter 6 color image processingChapter 6 color image processing
Chapter 6 color image processing
 
Digtial Image Processing Q@A
Digtial Image Processing Q@ADigtial Image Processing Q@A
Digtial Image Processing Q@A
 

Ähnlich wie Data compression using huffman coding

DISCRETE COSINE TRANSFORM WITH ADAPTIVE HUFFMAN CODING BASED IMAGE COMPRESSION
DISCRETE COSINE TRANSFORM WITH ADAPTIVE HUFFMAN CODING BASED IMAGE COMPRESSIONDISCRETE COSINE TRANSFORM WITH ADAPTIVE HUFFMAN CODING BASED IMAGE COMPRESSION
DISCRETE COSINE TRANSFORM WITH ADAPTIVE HUFFMAN CODING BASED IMAGE COMPRESSIONSATYENDRAKUMAR279
 
A hybrid predictive technique for lossless image compression
A hybrid predictive technique for lossless image compressionA hybrid predictive technique for lossless image compression
A hybrid predictive technique for lossless image compressionjournalBEEI
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
Enhanced Image Compression Using Wavelets
Enhanced Image Compression Using WaveletsEnhanced Image Compression Using Wavelets
Enhanced Image Compression Using WaveletsIJRES Journal
 
Reduction of Blocking Artifacts In JPEG Compressed Image
Reduction of Blocking Artifacts In JPEG Compressed ImageReduction of Blocking Artifacts In JPEG Compressed Image
Reduction of Blocking Artifacts In JPEG Compressed ImageDr Sukhpal Singh Gill
 
Design of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLABDesign of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLABIJEEE
 
IRJET- RGB Image Compression using Multi-Level Block Trunction Code Algor...
IRJET-  	  RGB Image Compression using Multi-Level Block Trunction Code Algor...IRJET-  	  RGB Image Compression using Multi-Level Block Trunction Code Algor...
IRJET- RGB Image Compression using Multi-Level Block Trunction Code Algor...IRJET Journal
 
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR...
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR...AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR...
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR...IAEME Publication
 
Image compression and reconstruction using improved Stockwell transform for q...
Image compression and reconstruction using improved Stockwell transform for q...Image compression and reconstruction using improved Stockwell transform for q...
Image compression and reconstruction using improved Stockwell transform for q...IJECEIAES
 
Compression technologies
Compression technologiesCompression technologies
Compression technologiesKetan Hulaji
 
Comparison and improvement of image compression
Comparison and improvement of image compressionComparison and improvement of image compression
Comparison and improvement of image compressionIAEME Publication
 
Comparison and improvement of image compression
Comparison and improvement of image compressionComparison and improvement of image compression
Comparison and improvement of image compressionIAEME Publication
 
Comparison and improvement of image compression
Comparison and improvement of image compressionComparison and improvement of image compression
Comparison and improvement of image compressionIAEME Publication
 
Symbols Frequency based Image Coding for Compression
Symbols Frequency based Image Coding for CompressionSymbols Frequency based Image Coding for Compression
Symbols Frequency based Image Coding for CompressionIJCSIS Research Publications
 
A Study of Image Compression Methods
A Study of Image Compression MethodsA Study of Image Compression Methods
A Study of Image Compression MethodsIOSR Journals
 
Seminar Report on image compression
Seminar Report on image compressionSeminar Report on image compression
Seminar Report on image compressionPradip Kumar
 
A spatial image compression algorithm based on run length encoding
A spatial image compression algorithm based on run length encodingA spatial image compression algorithm based on run length encoding
A spatial image compression algorithm based on run length encodingjournalBEEI
 

Ähnlich wie Data compression using huffman coding (20)

DISCRETE COSINE TRANSFORM WITH ADAPTIVE HUFFMAN CODING BASED IMAGE COMPRESSION
DISCRETE COSINE TRANSFORM WITH ADAPTIVE HUFFMAN CODING BASED IMAGE COMPRESSIONDISCRETE COSINE TRANSFORM WITH ADAPTIVE HUFFMAN CODING BASED IMAGE COMPRESSION
DISCRETE COSINE TRANSFORM WITH ADAPTIVE HUFFMAN CODING BASED IMAGE COMPRESSION
 
M.sc.iii sem digital image processing unit v
M.sc.iii sem digital image processing unit vM.sc.iii sem digital image processing unit v
M.sc.iii sem digital image processing unit v
 
A hybrid predictive technique for lossless image compression
A hybrid predictive technique for lossless image compressionA hybrid predictive technique for lossless image compression
A hybrid predictive technique for lossless image compression
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
Enhanced Image Compression Using Wavelets
Enhanced Image Compression Using WaveletsEnhanced Image Compression Using Wavelets
Enhanced Image Compression Using Wavelets
 
Reduction of Blocking Artifacts In JPEG Compressed Image
Reduction of Blocking Artifacts In JPEG Compressed ImageReduction of Blocking Artifacts In JPEG Compressed Image
Reduction of Blocking Artifacts In JPEG Compressed Image
 
Spiht 3d
Spiht 3dSpiht 3d
Spiht 3d
 
Design of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLABDesign of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLAB
 
Ec36783787
Ec36783787Ec36783787
Ec36783787
 
IRJET- RGB Image Compression using Multi-Level Block Trunction Code Algor...
IRJET-  	  RGB Image Compression using Multi-Level Block Trunction Code Algor...IRJET-  	  RGB Image Compression using Multi-Level Block Trunction Code Algor...
IRJET- RGB Image Compression using Multi-Level Block Trunction Code Algor...
 
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR...
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR...AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR...
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR...
 
Image compression and reconstruction using improved Stockwell transform for q...
Image compression and reconstruction using improved Stockwell transform for q...Image compression and reconstruction using improved Stockwell transform for q...
Image compression and reconstruction using improved Stockwell transform for q...
 
Compression technologies
Compression technologiesCompression technologies
Compression technologies
 
Comparison and improvement of image compression
Comparison and improvement of image compressionComparison and improvement of image compression
Comparison and improvement of image compression
 
Comparison and improvement of image compression
Comparison and improvement of image compressionComparison and improvement of image compression
Comparison and improvement of image compression
 
Comparison and improvement of image compression
Comparison and improvement of image compressionComparison and improvement of image compression
Comparison and improvement of image compression
 
Symbols Frequency based Image Coding for Compression
Symbols Frequency based Image Coding for CompressionSymbols Frequency based Image Coding for Compression
Symbols Frequency based Image Coding for Compression
 
A Study of Image Compression Methods
A Study of Image Compression MethodsA Study of Image Compression Methods
A Study of Image Compression Methods
 
Seminar Report on image compression
Seminar Report on image compressionSeminar Report on image compression
Seminar Report on image compression
 
A spatial image compression algorithm based on run length encoding
A spatial image compression algorithm based on run length encodingA spatial image compression algorithm based on run length encoding
A spatial image compression algorithm based on run length encoding
 

Kürzlich hochgeladen

Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...roncy bisnoi
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlysanyuktamishra911
 
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756dollysharma2066
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performancesivaprakash250
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Christo Ananth
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Dr.Costas Sachpazis
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Christo Ananth
 
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...Call Girls in Nagpur High Profile
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdfKamal Acharya
 
Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...Christo Ananth
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . pptDineshKumar4165
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Call Girls in Nagpur High Profile
 
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLPVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLManishPatel169454
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptDineshKumar4165
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingrknatarajan
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
 

Kürzlich hochgeladen (20)

Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort ServiceCall Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
 
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
 
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
 
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
 
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 
Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . ppt
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
 
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLPVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
 
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.ppt
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 

Data compression using huffman coding

Acknowledgments

I take this opportunity to express a deep sense of gratitude towards my project supervisor, Dr. Anukul Pandey, for his excellent guidance, encouragement and inspiration throughout the project work. Without his invaluable guidance, this work would never have been successful. I would also like to thank all my batch mates for their valuable suggestions and helpful discussions.

I have put considerable effort into this work; however, it would not have been possible without the valuable suggestions, kind support and help of our HOD, Mr. Sujit Khamaru, and many other individuals of the ECE department of our institution. I extend my sincere thanks to all of them.

Finally, I will forever be grateful to my parents and sister for their unconditional, endless love and for giving me the best of everything. I would like to express my appreciation for all their sacrifices and efforts. Without their love and encouragement, nothing I have achieved would have been possible.

SATYENDRA KUMAR
Roll No- 0028
Registration No- 0068/15
Dumka Engineering College
Date-
ABSTRACT

Image compression is one of the most important steps in image transmission and storage. "A picture is worth more than a thousand words" is a common saying. Images play an indispensable role in representing vital information and need to be saved for further use or transmitted over a medium. In order to make efficient use of disk space and transmission rate, images need to be compressed. Image compression is the technique of reducing the file size of an image without letting the image quality fall below an acceptable level. Image compression has been used for a long time and many algorithms have been devised. Due to the increasing requirements for transmission of images in computer and mobile environments, research in the field of image compression has increased significantly. Image compression plays a crucial role in digital image processing; it is also very important for efficient transmission and storage of images. When we compute the number of bits per image resulting from typical sampling rates and quantization methods, we find that image compression is needed. Therefore, the development of efficient techniques for image compression has become necessary. This report surveys lossy image compression using the Discrete Cosine Transform; it covers the JPEG compression algorithm, which is used for full-colour still image applications, and describes all of its components.
Definition: Data compression is the process of encoding information using fewer units of storage than an un-encoded representation of the data, through the use of specific encoding schemes. Data compression, sometimes called source coding, converts the input data into another data stream that has a smaller size but retains the essential information contained within the original data stream.
Data compression offers several benefits:
1. It makes optimal use of limited storage space.
2. It saves time and helps to optimize resources: if compression and decompression are done in the I/O processor, less time is required to move data to or from the storage subsystem, freeing the I/O bus for other work; when sending data over a communication line, less time is needed to transmit and less storage is needed at the host.
3. Compression is useful because it helps reduce the consumption of resources, such as hard disk space or transmission bandwidth.
4. With the interest and surge in environmental test data for the Surveillance Program, significant strains on computer storage resources will occur.
5. It supports archiving of environmental test data from legacy systems, including data for the Environment Test lab.
6. Familiar examples of compressed data files include the .zip, .rar and .tar file extensions.
- Lossy compression system
Lossy compression techniques are used for images where we can sacrifice some of the finer details to save bandwidth or storage space. They are guided by research on how people perceive the data in question and are used when some loss of fidelity is acceptable. For example, the human eye is more sensitive to subtle variations in luminance than to variations in colour, so colour detail can be reduced while maintaining the perceived integrity of the image. JPEG image compression works in part by "rounding off" some of this less important information. Lossy data compression provides a method of obtaining the best fidelity for a given amount of compression.

- Lossless compression system
A lossless compression system aims at reducing the bit rate of the compressed output without any distortion of the image; the bit-stream after decompression is identical to the original bit-stream. These algorithms usually exploit statistical redundancy to represent the data more concisely without error. Most real-world data has statistical redundancy: in English text, for example, the letter 'e' is much more common than the letter 'z', and the probability that the letter 'q' will be followed by the letter 'z' is very small.

- Predictive coding
Predictive coding is a lossless coding method, which means that the value of every element in the decoded image is identical to the corresponding value in the original image; Differential Pulse Code Modulation (DPCM) is a typical example (a minimal sketch is given below).

- Transform coding
Transform coding forms an integral part of compression techniques. The reversible linear transform in transform coding maps the image into a set of coefficients, and the resulting coefficients are then quantized and coded. One of the first transforms applied for this purpose was the discrete cosine transform (DCT).
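To make the predictive-coding idea concrete, the following is a minimal lossless DPCM sketch in MATLAB; the previous-pixel predictor and the test image name are illustrative assumptions, not part of the original report.

% Minimal lossless DPCM sketch with a previous-pixel predictor (illustrative).
I  = imread('cameraman.tif');        % placeholder grayscale test image
x  = double(I(:)).';                 % flatten the image to a 1-D signal
e  = [x(1), diff(x)];                % residual: x(k) - x(k-1), with x(0) = 0
xr = cumsum(e);                      % decoder: accumulate the residuals
isequal(xr, x)                       % returns true, so the coding is lossless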
Three closely connected components form a typical lossy image compression system: (a) the source encoder, (b) the quantizer and (c) the entropy encoder.

(a) Source encoder (or linear transformer): It aims at decorrelating the input signal by transforming it into a representation in which the set of data values is sparse, thereby compacting the information content of the signal into a smaller number of coefficients. A variety of linear transforms have been developed, such as the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT) and the Discrete Fourier Transform (DFT).

(b) Quantizer: A quantizer aims at reducing the number of bits needed to store the transformed coefficients by reducing the precision of those values. Quantization is performed either on each individual coefficient, i.e. Scalar Quantization (SQ), or on a group of coefficients together, i.e. Vector Quantization (VQ).

(c) Entropy coding: Entropy encoding removes redundancy by removing repeated bit patterns in the output of the quantizer. The most common entropy coders are Huffman coding, arithmetic coding, Run Length Encoding (RLE) and the Lempel-Ziv (LZ) algorithm.

Figure 1 represents the encoding stages of an image compression system: input source image, reduce correlation between pixels, quantization, entropy coding, output compressed image.
Performance Criteria in Image Compression

We can estimate the performance by applying two essential criteria: the compression ratio (CR) and a quality measurement of the reconstructed image (PSNR).

(a) Compression ratio: The compression ratio (CR) is the ratio between the original image size and the compressed image size:

    CR = n1 / n2

(b) Distortion measure: The Mean Square Error (MSE) is a measure of the distortion in the reconstructed image. For an H x W original image X and reconstructed image Y:

    MSE = (1 / (H * W)) * sum_{i=1}^{H} sum_{j=1}^{W} [X(i, j) - Y(i, j)]^2

The PSNR is the most widely used quality measurement in the field of image compression:

    PSNR = 10 * log10(255^2 / MSE)   (dB)

A minimal MATLAB sketch evaluating these metrics is given at the end of this section.

DCT TRANSFORMATION

The most popular technique for image compression over the past several years has been the Discrete Cosine Transform (DCT). Its selection as the standard for JPEG is one of the major reasons for its popularity. The DCT is used by many non-analytical applications such as image processing and signal-processing DSP applications such as video conferencing. The DCT is an orthogonal transform with a fixed set of basis functions, and it maps an image from the spatial domain into the frequency domain. The DCT has two main advantages: it packs the energy of image data into the lower frequencies, and it reduces the blocking artefact that arises when the boundaries between sub-images become visible.
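For illustration, the following minimal MATLAB sketch evaluates these three quantities for an 8-bit grayscale image and its reconstruction; the file names are placeholders and are not taken from the report.

% Minimal sketch: CR, MSE and PSNR for an 8-bit grayscale image pair.
X = double(imread('original.png'));          % placeholder original image
Y = double(imread('reconstructed.jpg'));     % placeholder reconstructed image
[H, W] = size(X);
infoX = dir('original.png');                 % stored file sizes give n1 and n2
infoY = dir('reconstructed.jpg');
CR   = infoX.bytes / infoY.bytes;            % CR = n1 / n2
MSE  = sum(sum((X - Y).^2)) / (H*W);         % mean square error
PSNR = 10 * log10(255^2 / MSE);              % peak signal-to-noise ratio in dB
fprintf('CR = %.2f  MSE = %.2f  PSNR = %.2f dB\n', CR, MSE, PSNR);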
JPEG COMPRESSION

The JPEG standard is the well-known ISO/ITU-T standard created in the late 1980s, and it is one of the most common compression standards. It is targeted at full-colour still-frame applications. Several modes are defined for JPEG, including baseline, lossless, progressive and hierarchical. The most common mode, the JPEG baseline coding system, uses the discrete cosine transform and is suitable for most compression applications. Although it was developed for modest compression ratios, JPEG is very helpful for DCT quantization and compression. JPEG compression reduces the file size with minimum image degradation by eliminating the least important information. It is nevertheless considered a lossy compression technique, because the final image and the original image are not completely identical; in lossy compression, the information that is lost is considered affordable. JPEG compression is performed in sequential steps.

JPEG process steps for colour images. This section presents the JPEG compression steps:
- An RGB to YCbCr colour space conversion (colour specification).
- The original image is divided into blocks of 8 x 8.
- The pixel values of an 8-bit image range from [0, 255]; each block is level-shifted from [0, 255] to [-128, 127] before the transform.
- The DCT is applied to each block, working from left to right, top to bottom.
- Each block is compressed through quantization.
- The quantized matrix is entropy encoded.
- The compressed image is reconstructed through the reverse process, using the inverse Discrete Cosine Transform (IDCT).

A block-level sketch of these steps is given after the figure description below.

Figure 2 represents the encoder and decoder block diagrams for colour images. Compression algorithm scheme: (a) compression step and (b) decompression step.
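The sketch below runs a single 8 x 8 block through these steps in MATLAB, assuming the Image Processing Toolbox functions dct2 and idct2; a uniform quantization step is used here as a simplification (the actual JPEG quantization tables appear in the quantization section), and the test image name is a placeholder.

% One 8x8 block through the JPEG-style steps (simplified, illustrative).
I    = double(imread('cameraman.tif'));   % placeholder 8-bit grayscale image
orig = I(1:8, 1:8);                       % take one 8x8 block
blk  = orig - 128;                        % level shift from [0,255] to [-128,127]
D    = dct2(blk);                         % forward 2-D DCT of the block
q    = 16;                                % uniform step instead of the JPEG table
Dq   = round(D / q);                      % quantization (the lossy step)
Ddq  = Dq * q;                            % dequantization
rec  = idct2(Ddq) + 128;                  % inverse DCT and level shift back
max(abs(rec(:) - orig(:)))                % small error introduced by quantization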
Color Specification

The YUV colour coordinate defines the Y, Cb and Cr components of a colour image, where Y is commonly called the luminance and Cb, Cr are commonly called the chrominance. The RGB primaries used by a colour display, by contrast, mix the luminance and chrominance attributes of the light. Describing a colour in terms of its luminance and chrominance content separately enables more efficient processing and transmission of colour signals in many applications. To achieve this, various three-component colour coordinates have been developed, in which one component (Y) reflects the luminance and the other two (Cb, Cr) collectively characterize hue and saturation. The [Y Cb Cr]^T values in the YUV coordinate are related to the [R G B]^T values in the RGB coordinate by

    [ Y  ]   [  0.299   0.587   0.114 ] [ R ]   [   0 ]
    [ Cb ] = [ -0.169  -0.334   0.500 ] [ G ] + [ 128 ]
    [ Cr ]   [  0.500  -0.419  -0.081 ] [ B ]   [ 128 ]

Similarly, to transform the YUV coordinate back to the RGB coordinate, the inverse of the matrix above is calculated and the inverse transform is applied to obtain the corresponding RGB components. A minimal MATLAB sketch of this conversion is given below.

After colour coordinate conversion, the next step is to divide the three colour components of the image into many 8 x 8 blocks. For an 8-bit image, each element of an original block falls in the range [0, 255]. A data range centred around zero is produced by subtracting the mid-point of the range (the value 128) from each element in the original block, so that the modified range is shifted from [0, 255] to [-128, 127].

The DCT separates the image into parts of different frequencies. The quantization step discards the less important frequencies, and the decompression step uses the important frequencies to retrieve the image. The forward 2-D DCT transformation is given by

    F(u, v) = (2/N) C(u) C(v) sum_{x=0}^{N-1} sum_{y=0}^{N-1} f(x, y) cos[(2x + 1) u pi / (2N)] cos[(2y + 1) v pi / (2N)]

for u = 0, ..., N-1 and v = 0, ..., N-1, where N = 8 and C(k) = 1/sqrt(2) for k = 0 and C(k) = 1 otherwise. The inverse 2-D DCT transformation is given by

    f(x, y) = (2/N) sum_{u=0}^{N-1} sum_{v=0}^{N-1} C(u) C(v) F(u, v) cos[(2x + 1) u pi / (2N)] cos[(2y + 1) v pi / (2N)]

for x = 0, ..., N-1 and y = 0, ..., N-1. After the DCT transformation, the element in the upper-left corner, corresponding to (0, 0), is called the "DC coefficient" and the remaining coefficients are called "AC coefficients".
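The following minimal MATLAB sketch applies the conversion matrix above and the level shift; the test image name is a placeholder.

% RGB -> YCbCr conversion and level shift (illustrative sketch).
RGB = double(imread('peppers.png'));          % placeholder RGB test image
R = RGB(:,:,1);  G = RGB(:,:,2);  B = RGB(:,:,3);
Y  =  0.299*R + 0.587*G + 0.114*B;            % luminance
Cb = -0.169*R - 0.334*G + 0.500*B + 128;      % blue-difference chrominance
Cr =  0.500*R - 0.419*G - 0.081*B + 128;      % red-difference chrominance
Yshifted = Y - 128;                           % level shift before the 8x8 DCT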
Figure: The JPEG encoding system.
It is in the quantization step that data is actually thrown away. Quantization is obtained by dividing the DCT-transformed matrix of the image by the quantization matrix used; the values of the resultant matrix are then rounded off. The quantized coefficient and the reverse (dequantization) process are defined by

    F(u, v)_quantized = round( F(u, v) / Q(u, v) )
    F(u, v)_deQ = F(u, v)_quantized x Q(u, v)

Quantization aims at reducing most of the less important high-frequency DCT coefficients to zero; the more zeros, the better the image will compress. The lower frequencies are used to reconstruct the image, because the human eye is more sensitive to them, while the higher frequencies are discarded. The Q matrices for the luminance and chrominance components are

    QY = [ 16  11  10  16  24  40  51  61
           12  12  14  19  26  58  60  55
           14  13  16  24  40  57  69  56
           14  17  22  29  51  87  80  62
           18  22  37  56  68 109 103  77
           24  35  55  64  81 104 113  92
           49  64  78  87 103 121 120 101
           72  92  95  98 112 100 103  99 ]

    QC = [ 17  18  24  47  99  99  99  99
           18  21  26  66  99  99  99  99
           24  26  56  99  99  99  99  99
           47  66  99  99  99  99  99  99
           99  99  99  99  99  99  99  99
           99  99  99  99  99  99  99  99
           99  99  99  99  99  99  99  99
           99  99  99  99  99  99  99  99 ]

After quantization, the "zig-zag" sequence orders all of the quantized coefficients as shown in Figure 3. The "zig-zag" sequence first encodes the coefficients with lower frequencies (typically with higher values) and then the higher frequencies (typically zero or almost zero). The result is an extended sequence of similar data bytes, permitting efficient entropy encoding. A minimal sketch of the quantization and zig-zag ordering is given below.
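The sketch below quantizes one DCT block with the luminance table QY and reads the result out in zig-zag order; the block is taken from a placeholder test image, and the scan order is generated by sorting along the anti-diagonals (an illustrative construction, not the report's code).

% Quantization with the luminance table and zig-zag readout (illustrative).
QY = [16 11 10 16 24 40 51 61; 12 12 14 19 26 58 60 55;
      14 13 16 24 40 57 69 56; 14 17 22 29 51 87 80 62;
      18 22 37 56 68 109 103 77; 24 35 55 64 81 104 113 92;
      49 64 78 87 103 121 120 101; 72 92 95 98 112 100 103 99];
I   = double(imread('cameraman.tif'));     % placeholder grayscale image
D   = dct2(I(1:8, 1:8) - 128);             % DCT of one level-shifted block
Fq  = round(D ./ QY);                      % quantize with the luminance table
% Build the zig-zag scan order: odd anti-diagonals run top-to-bottom,
% even anti-diagonals run bottom-to-top.
[c, r] = meshgrid(1:8, 1:8);
s   = r + c;                               % anti-diagonal index of each entry
key = s*10 + r.*(mod(s,2)==1) + c.*(mod(s,2)==0);
[~, order] = sort(key(:));
zz  = Fq(order);                           % coefficients in zig-zag order
Fdq = Fq .* QY;                            % dequantized block for reconstruction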
Figure 3. Zig-zag sequencing

Huffman Encoding

Entropy coding achieves additional lossless compression by encoding the quantized DCT coefficients more compactly. The JPEG proposal specifies both Huffman coding and arithmetic coding. Huffman coding is used in the baseline sequential codec, while the other modes of operation may use either Huffman or arithmetic coding. Huffman coding is efficient when the source symbols are not equally probable. In 1952, Huffman suggested a variable-length encoding algorithm based on the source symbol probabilities P(xi), i = 1, 2, ..., L. The algorithm is optimal in the sense that the average number of bits required to represent the source symbols is a minimum, provided the prefix condition is met.

The Huffman algorithm begins with a set of symbols, each with its frequency of occurrence (probability), forming what we can call a frequency table. The algorithm then builds the Huffman tree using the frequency table. The tree structure contains nodes, each of which holds a symbol, its frequency, a pointer to a parent node, and pointers to the left and right child nodes. Successive passes through the existing nodes allow the tree to grow. Each pass searches for the two nodes with the lowest frequency counts that have not yet been given a parent node. When the algorithm finds those two nodes, a new node is generated, assigned as the parent of the two nodes, and given a frequency count equal to the sum of the two child nodes. The next iterations ignore those two child nodes and include the new parent node. The passes stop when only one node with no parent remains; that node becomes the root of the tree.

Compression involves traversing the tree, beginning at the leaf node for the symbol to be compressed and navigating to the root. This navigation iteratively selects the parent of the current node and determines whether the current node is the "right" or the "left" child of that parent, thereby determining whether the next bit is a (1) or a (0). The final bit string must then be reversed, because we proceed from leaf to root. A small worked example using MATLAB's Huffman routines is given below.
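As a small worked example of these ideas, the sketch below builds a Huffman dictionary for five made-up symbols with skewed probabilities and verifies that encoding followed by decoding is lossless; the symbol set and probabilities are illustrative, not taken from the report.

% Tiny Huffman coding example with made-up probabilities (illustrative).
symbols = [1 2 3 4 5];
prob    = [0.45 0.25 0.15 0.10 0.05];          % skewed source probabilities
[dict, avglen] = huffmandict(symbols, prob);   % frequent symbols get short codes
seq  = [1 1 2 1 3 4 1 2 5 1];                  % a short test sequence
code = huffmanenco(seq, dict);                 % variable-length bit stream
back = huffmandeco(code, dict);                % decode the bit stream
isequal(back, seq)                             % returns true: coding is lossless
avglen                                         % average code length in bits/symbol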
Decompression

The compression phase is reversed in the decompression process, in the opposite order. The first step is restoring the Huffman tables from the image and decompressing the Huffman tokens in the image. Next, the DC values for each block are the first things needed to decompress a block. The other 63 values in each block are decompressed by JPEG, filling in the appropriate number of zeros. The last step combines decoding the zig-zag order and recreating the 8 x 8 blocks. The inverse DCT (IDCT) reconstructs each pixel in the spatial domain by combining the contributions that each of the 64 frequency values makes to that pixel.

The following MATLAB methods were used to implement Huffman-based compression and decompression.

1. Method

% Read an image, build its gray-level histogram and Huffman-encode the pixels.
img = imread('C:\Users\ADMIN\Desktop\DSC_0157.jpg');
Image = rgb2gray(img);
Image = Image(:);                      % flatten to a column vector of pixels
[N, M] = size(Image);                  % M is 1 after flattening
Count = zeros(256,1);
for i = 1:N
    for j = 1:M
        % use double() so gray level 255 is not saturated by uint8 arithmetic
        Count(double(Image(i,j))+1) = Count(double(Image(i,j))+1) + 1;
    end
end
prob = Count/(M*N);                    % relative frequency of each gray level
symbols = 0:255;
[dict, avglen] = huffmandict(symbols, prob);
comp = huffmanenco(Image, dict);       % Huffman-encode the pixel stream
imshow(img);

2. Method

% Target: Huffman encode and decode a user-entered string
%--------------------------------------------------------------------------
string = input('enter the string in inverted commas');   % input string
symbol = [];                                              % initialise variables
count = [];
j = 1;
%------------------ loop to separate symbols and count how many times they occur
for i = 1:length(string)
    flag = ismember(symbol, string(i));   % has this character been recorded already?
    if sum(flag) == 0
        symbol(j) = string(i);
        k = ismember(string, string(i));
        c = sum(k);                       % number of times the character occurs
        count(j) = c;
        j = j + 1;
    end
end
ent = 0;
total = sum(count);                       % total number of characters
prob = [];
%------------------ loop to find the probability and the entropy
for i = 1:length(count)
    prob(i) = count(i)/total;
    ent = ent - prob(i)*log2(prob(i));
end
var = 0;
%------------------ create the Huffman dictionary
[dict, avglen] = huffmandict(symbol, prob);
% print the dictionary
temp = dict;
for i = 1:length(temp)
    temp{i,2} = num2str(temp{i,2});
    var = var + (length(dict{i,2}) - avglen)^2;   % variance calculation
end
temp
%------------------ encoder and decoder functions
sig_encoded = huffmanenco(double(string), dict)
deco = huffmandeco(sig_encoded, dict);
equal = isequal(double(string), deco)
%------------------ rebuild the decoded string and display the outputs
str = '';
for i = 1:length(deco)
    str = [str, char(deco(i))];            % append the next decoded character
end
str
ent
avglen
var

3. Method

% Clear all variables and the screen
clear all; close all; clc;
% Read the image
a = imread('DSC_0157.jpg');
imshow(a);
% Convert the image to grayscale
I = rgb2gray(a);
% Size of the image
[m, n] = size(I);
Totalcount = m*n;
% Variables used to find the probability
cnt = 1;
sigma = 0;
% Compute the probability and cumulative probability of each gray level
for i = 0:255
    k = (I == i);
    count(cnt) = sum(k(:));                % occurrences of gray level i
    % pro array holds the probabilities
    pro(cnt) = count(cnt)/Totalcount;
    sigma = sigma + pro(cnt);
    cumpro(cnt) = sigma;
    cnt = cnt + 1;
end
% Symbols for an image
symbols = 0:255;
% Huffman code dictionary
dict = huffmandict(symbols, pro);
% Convert the image array to a vector
vec_size = 1;
for p = 1:m
    for q = 1:n
        newvec(vec_size) = I(p,q);
        vec_size = vec_size + 1;
    end
end
% Huffman encoding
hcode = huffmanenco(newvec, dict);
% Huffman decoding
dhsig1 = huffmandeco(hcode, dict);
% Convert dhsig1 from double to uint8
dhsig = uint8(dhsig1);
% Vector to array conversion
dec_row = sqrt(length(dhsig));
dec_col = dec_row;
% Variables used to convert the vector back to an array
arr_row = 1;
arr_col = 1;
vec_si = 1;
for x = 1:m
    for y = 1:n
        back(x,y) = dhsig(vec_si);
        arr_col = arr_col + 1;
        vec_si = vec_si + 1;
    end
    arr_row = arr_row + 1;
end
% Convert the image from grayscale to RGB and save it
[deco, map] = gray2ind(back, 256);
RGB = ind2rgb(deco, map);
imwrite(RGB, 'decoded.JPG');
% End of the Huffman coding

4. Method

% Clear all variables and the screen
clear all; close all; clc;
% Read the image
a = imread('jpeg-image-compression-1-638.JPG');
figure, imshow(a)
% Convert the image to grayscale
I = rgb2gray(a);
% Size of the image
[m, n] = size(I);
Totalcount = m*n;
% Variables used to find the probability
cnt = 1;
sigma = 0;
% Compute the probability and cumulative probability of each gray level
for i = 0:255
    k = (I == i);
    count(cnt) = sum(k(:));
    % pro array holds the probabilities
    pro(cnt) = count(cnt)/Totalcount;
    sigma = sigma + pro(cnt);
    cumpro(cnt) = sigma;
    cnt = cnt + 1;
end
% Symbols for an image
symbols = 0:255;
% Huffman code dictionary
dict = huffmandict(symbols, pro);
% Convert the image array to a vector
vec_size = 1;
for p = 1:m
    for q = 1:n
        newvec(vec_size) = I(p,q);
        vec_size = vec_size + 1;
    end
end
% Huffman encoding
hcode = huffmanenco(newvec, dict);
% Huffman decoding
dhsig1 = huffmandeco(hcode, dict);
% Convert dhsig1 from double to uint8
dhsig = uint8(dhsig1);
% Vector to array conversion
dec_row = sqrt(length(dhsig));
dec_col = dec_row;
% Variables used to convert the vector back to an array
arr_row = 1;
arr_col = 1;
vec_si = 1;
for x = 1:m
    for y = 1:n
        back(x,y) = dhsig(vec_si);
        arr_col = arr_col + 1;
        vec_si = vec_si + 1;
    end
    arr_row = arr_row + 1;
end
% Convert the image from grayscale to RGB and save it
[deco, map] = gray2ind(back, 256);
RGB = ind2rgb(deco, map);
imwrite(RGB, 'decoded.JPG');
% End of the Huffman coding

5. Method

clc;
clear all;
global CODE
A1 = imread('fig1.jpg');
A1 = rgb2gray(A1);
A1 = imresize(A1, [128 128]);
figure(1)
imshow(A1)
[M, N] = size(A1);
A = A1(:);
count = imhist(A);               % 256-bin gray-level histogram
A = num2cell(A);
p = count / numel(A);            % symbol probabilities
%sum(p);
CODE = cell(length(p), 1);
s = cell(length(p), 1);
for i = 1:length(p)
    % Generate a starting tree with symbol nodes 1, 2, 3, ... to
    % reference the symbol probabilities.
    s{i} = i;
end
while numel(s) > 2
    [p, i] = sort(p);            % Sort the symbol probabilities
    p(2) = p(1) + p(2);          % Merge the 2 lowest probabilities
    p(1) = [];                   % and prune the lowest one
    s = s(i);                    % Reorder tree for new probabilities
    s{2} = {s{1}, s{2}};         % and merge & prune its nodes
    s(1) = [];                   % to match the probabilities
end
% makecode is a recursive helper (not listed here) that walks the binary
% tree in s and stores each symbol's bit string in the global cell array CODE.
makecode(s, [])
for i = 1:numel(CODE)
    c = CODE{i};
    t = c;
    c(c == '1') = '0';           % invert the bits of each code word
    c(t == '0') = '1';
    CODE{i} = c;
end
CODE;
for n = 1:256
    index = find(cell2mat(A) == n-1);
    ANN(index) = CODE(n);        % assign each pixel its Huffman code string
end
codedim = bin2dec(ANN');         % interpret each code word as a decimal value
codedim = reshape(codedim, M, N);
codedim = uint8(codedim);
imwrite(codedim, 'fig1-Copy.jpg')
figure(2)
imshow(codedim)

6. Method

clc;
% Input image
image = imread('img1.png');
[m, n] = size(image);
J = imresize(image, [256 256]);
[x, y] = size(J);
figure(); imshow(J); title('original RGB image'); drawnow();
Image = rgb2gray(J);
figure(); imshow(Image); title('original image as grayscale'); drawnow();
[N, M] = size(Image);
Count = zeros(256,1);
for i = 1:N
    for j = 1:M
        % use double() so gray level 255 is counted correctly
        Count(double(Image(i,j))+1) = Count(double(Image(i,j))+1) + 1;
    end
end
prob = Count/(M*N);
prob_1 = prob(:)';
symbols = 0:255;
[dict, avglen] = huffmandict(symbols, prob);
comp = huffmanenco(Image(:), dict);
l_encoded = length(comp);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
head1 = [l_encoded [N, M] prob_1];       % encoded length, image size, probabilities
l_head = length(head1);
header = [l_head head1];
%%%%%%%%%%% File writing %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
outfile = input('Output file name (e.g. comp.bin): ', 's');  % stored separately so the image variable is not overwritten
fid = fopen(outfile, 'w');
fwrite(fid, header, 'double');           % write the header in double format
fwrite(fid, comp, 'ubit1');              % write the encoded bits in binary format after the header
fclose(fid);
% Decode and reshape back to the original image size
decomp = cast(reshape(huffmandeco(comp, dict), N, M), class(Image));
figure(); imshow(decomp); title('reconstructed image'); drawnow();
% Difference image: Huffman coding is lossless, so expect all zeros
figure();
dimg = imsubtract(Image, decomp);
imshow(dimg, []);
title('difference between original and reconstructed (expect no difference)');
drawnow();
RESULTS AND DISCUSSION

In the JPEG image compression algorithm, the input image is divided into 4-by-4 or 8-by-8 blocks, and the two-dimensional DCT is computed for each block. The DCT coefficients are then quantized, coded, and transmitted. The JPEG receiver (or JPEG file reader) decodes the quantized DCT coefficients, computes the inverse two-dimensional DCT of each block, and then puts the blocks back together into a single image. For typical images, many of the DCT coefficients have values close to zero; these coefficients can be discarded without seriously affecting the quality of the reconstructed image. The example code below computes the two-dimensional DCT of 8-by-8 blocks in the input image, discards (sets to zero) all but 10 of the 64 DCT coefficients in each block, and then reconstructs the image using the two-dimensional inverse DCT of each block. The transform-matrix computation method is used.

Figure 2: Although there is some loss of quality in the reconstructed image, it is clearly recognizable, even though almost 85% of the DCT coefficients are discarded.
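The experiment described above can be reproduced with the following MATLAB sketch, which uses the transform-matrix approach with dctmtx and blockproc from the Image Processing Toolbox; the test image name is a placeholder, and this is a sketch of the procedure rather than the report's exact script.

% Blockwise 8x8 DCT, keep 10 of 64 coefficients per block, then reconstruct.
I = im2double(imread('cameraman.tif'));        % placeholder grayscale image
T = dctmtx(8);                                 % 8x8 DCT transform matrix
dctfun    = @(blk) T  * blk.data * T';         % forward 2-D DCT of one block
invdctfun = @(blk) T' * blk.data * T;          % inverse 2-D DCT of one block
B = blockproc(I, [8 8], dctfun);
mask = [1 1 1 1 0 0 0 0                        % keep only 10 low-frequency terms
        1 1 1 0 0 0 0 0
        1 1 0 0 0 0 0 0
        1 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0];
B2 = blockproc(B, [8 8], @(blk) mask .* blk.data);
I2 = blockproc(B2, [8 8], invdctfun);
figure, imshow(I),  title('original image');
figure, imshow(I2), title('reconstructed from 10 of 64 DCT coefficients');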
(Original image)  (Compressed image)
(Decompressed image)

Metric        JPEG (existing)     JPEG (proposed)
MSE           17502.21            63.63
PSNR (dB)     5.70                30.09
NK            0.00                1.00
AD            123.58              -0.02
SC            64979.04            1.00
MD            239.00              58.00
NAE           1.00                0.04
Figure 3. (Left-bottom) Lena, 8-by-8 DCT, 4-by-4 DCT; (right-bottom) Apple, 8-by-8 DCT, 4-by-4 DCT.

            Compression by   Compression by   Compression by   Compression by
            8-by-8 DCT       4-by-4 DCT       8-by-8 DCT       4-by-4 DCT
JPEG-1      6.70%            6.31%            2.97%            2.76%
JPEG-2      6.24%            4.86%            2.47%            1.73%
JPEG-3      6.24%            4.43%            2.29%            1.56%
JPEG-4      6.04%            4.17%            2.14%            1.35%
JPEG-5      5.19%            3.76%            1.51%            1.26%
JPEG-6      4.47%            3.20%            1.26%            0.96%
JPEG-7      3.79%            2.44%            1.11%            0.68%
JPEG-8      3.02%            1.63%            0.81%            0.23%
JPEG-9      2.25%            0.00%            0.26%            0.00%
CONCLUSION

Image compression is used for managing images in digital format. This report has focused on fast and efficient lossy coding with the JPEG algorithm for image compression and decompression using the Discrete Cosine Transform. We also briefly introduced the principles behind digital image compression and various image compression methodologies, as well as the JPEG process steps including the DCT, quantization and entropy encoding. In future work we will compare two image compression techniques, the Discrete Cosine Transform and the Discrete Wavelet Transform.

In this work we are improving the performance of image compression. To do so, we evaluate the parameters MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio), NK (Normalized Cross-Correlation), AD (Average Difference), SC (Structural Content), MD (Maximum Difference) and NAE (Normalized Absolute Error). As can be seen from the results section, the PSNR and NK values increase while all the remaining values decrease. To improve the performance of image compression, the values of NAE, MD, SC, AD and MSE have to decrease, while the PSNR and NK values have to increase.

Future Scope

The performance of image compression and decompression can be further enhanced by other lossless methods of image compression because, as concluded above, the decompressed image is almost the same as the input image, which indicates that there is no loss of information during transmission. Other methods of image compression, either lossless or lossy, can therefore be applied, such as the JPEG method, LZW coding, etc. Different metrics can also be used to evaluate the performance of compression algorithms.