3. Time Distribution
• Theory (3/4)
– Theoretical aspects of the course
– Mid Term Exam
• Practical (1/4)
– Practical approach to the course
– Labs
4. Performance Assessment Criteria
• Marks Distribution – Total 100 points
– Lectures: 80
• Course Work: 10 (Assign. 1: 5, Assign. 2: 5)
• Exams: 70 (Mid Term: 10, Final: 60)
– Labs: 20
• Weekly Labs: 10 (Lab 1, Lab 2, … Lab n)
• Final: 10
8. Course Contents
• MPI
• Distributed Systems
• Clusters
• Grid Computing
• Fault tolerance
• Prototypes of Parallel Computers
• Parallel Algorithms and Software
9. Reading Resources
• Text Book
• Reference Books
– Specific to the course
– General to the topic
• Internet Sources
10. Text Book
• Advanced Computer Architecture: Parallelism,
Scalability, Programmability
By Kai Hwang – McGraw-Hill Inc.
11. Reference Books
– Scalable Parallel Computing
• By Kai Hwang & Z. Xu
– Grid Computing: Making The Global Infrastructure a Reality
• By F. Berman, G. Fox, and T. Hey – John Wiley and Sons, 2003
13. How to Get What We Discuss?
• Online Access
– http://www.ssuet.edu.pk/~mfarooqui/ParComp
• Soft Copy
– http://www.ssuet.edu.pk/~mfarooqui/ParComp
• Hard Copy
– Will not be provided
14. Codes of Conduct
• Attendance in class and labs is strictly required.
• No relaxation, compensation or adjustment
in your attendance.
• Be in uniform (at least in the class).
• Preserve the sanctity of the class, teachers,
department and the University.
• Help us in serving you for a better future.
15. Introduction
• Parallel Computing – Real Life Scenario
• Cost Effectiveness of Parallel Processors
• What is parallel computing?
• Why do parallel computing?
• Types of parallel computing
• What are some limits of parallel
computing?
16. Parallel Computing – Real Life Scenario
• Stacking or reshelving a set of library books. Assume
books are organized into shelves and shelves are grouped
into bays.
A single worker can only do it at a certain rate.
We can speed it up by employing multiple workers.
What is the best strategy? (Both strategies below are
sketched in code after this slide.)
1. A simple way is to divide the total books equally among the
workers. Each worker stacks the books one at a time, but
must walk all over the library.
2. An alternate way is to assign a fixed, disjoint set of bays to each
worker. Each worker is assigned an equal number of books
arbitrarily. Workers stack books in their own bays or pass each
book to the worker responsible for the bay it belongs to.
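
A minimal Python sketch of the two partitioning strategies, assuming each
book is represented only by the number of its destination bay (all names
here are hypothetical, chosen for illustration):

from collections import defaultdict

def partition_by_count(books, n_workers):
    """Strategy 1: split books evenly; every worker may visit every bay."""
    return [books[i::n_workers] for i in range(n_workers)]

def partition_by_bay(books, n_workers, n_bays):
    """Strategy 2: each worker owns a disjoint range of bays; a book for
    a foreign bay is 'passed' (communicated) to the owning worker."""
    owner = lambda bay: bay * n_workers // n_bays   # fixed bay-to-worker map
    work = defaultdict(list)
    for bay in books:               # a book here is just its destination bay
        work[owner(bay)].append(bay)
    return [work[w] for w in range(n_workers)]

books = [b % 8 for b in range(20)]            # 20 books spread over 8 bays
print(partition_by_count(books, 4))           # even split, lots of walking
print(partition_by_bay(books, 4, 8))          # disjoint bays, some passing

Strategy 1 needs no communication but wastes time on movement; strategy 2
localizes the work at the cost of passing some books between workers.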
17. Parallel Computing – Real Life Scenario
• Parallel processing allows us to accomplish a task faster by
dividing the work into a set of subtasks assigned to
multiple workers.
• Assigning a set of books to workers is task partitioning.
Passing of books to each other is an example of
communication between subtasks.
• For some problems, assigning work to multiple workers
might be more time consuming than doing it locally.
• Some problems may be completely serial, e.g. digging a
post hole; these are poorly suited to parallel processing.
• Not all problems are equally amenable to parallel
processing.
18. Weather Modelling and Forecasting
Consider a region of 3000 x 3000 miles, with a height of 11 miles.
For modelling, partition it into segments of 0.1 x 0.1 x 0.1
cubic miles = ~10^11 segments.
Let's take a 2-day period where the parameters need to be computed
every 30 min, and assume the computations take 100 instrs. per
segment. A single update then takes 10^13 instrs., and the two days
(about 100 updates) take a total of 10^15 instrs. For a serial
computer doing 10^9 instrs./sec,
this takes 280 hrs to predict the next 48 hrs!!
Now take 1000 processors, each capable of 10^8 instrs./sec. Each
processor will handle 10^8 segments. For the 2 days each executes
10^12 instrs., so the calculation is done in about 3 hrs!!
Currently all major weather forecast centers (US, Europe,
Asia) have supercomputers with 1000s of processors.
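
The slide's estimate can be redone in a few lines of Python (a sketch; the
100 instructions per segment and the machine speeds are the slide's
assumptions):

segments = (3000 / 0.1) * (3000 / 0.1) * (11 / 0.1)   # ~1e11 segments
updates = 2 * 24 * 2                                  # every 30 min for 2 days = 96
total_instrs = segments * 100 * updates               # ~1e15 instructions

serial_speed = 1e9                  # instructions/sec, one CPU
parallel_speed = 1000 * 1e8         # 1000 CPUs at 1e8 instructions/sec each

print(f"serial:   {total_instrs / serial_speed / 3600:.0f} hours")    # ~264 h, i.e. ~280 h
print(f"parallel: {total_instrs / parallel_speed / 3600:.1f} hours")  # ~2.6 h, i.e. ~3 h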
19. What is Parallel Computing?
• Traditionally, software has been written for serial computation:
– To be executed by a single computer having a single Central Processing
Unit (CPU);
– Problems are solved by a series of instructions, executed one after the
other by the CPU. Only one instruction may be executed at any moment
in time.
• In the simplest sense, parallel computing is the simultaneous use of
multiple computer resources to solve a computational problem (a
minimal code sketch follows the list below).
• The computer resources can include:
– A single computer with multiple processors;
– An arbitrary number of computers connected by a network;
– A combination of both.
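
A minimal sketch of "simultaneous use of multiple computer resources",
using Python's multiprocessing module on a single machine with multiple
processors (the problem, summing squares, is made up for illustration):

from multiprocessing import Pool

def partial_sum(chunk):
    """One discrete piece of work, solved independently of the others."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    chunks = [data[i::n_workers] for i in range(n_workers)]  # partition the problem
    with Pool(n_workers) as pool:
        results = pool.map(partial_sum, chunks)  # pieces execute simultaneously
    print(sum(results))                          # combine the partial results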
20. What is Parallel Computing?
• The computational problem usually demonstrates
characteristics such as the ability to be:
– Broken apart into discrete pieces of work that can be solved
simultaneously;
– Executed as multiple program instructions at any moment in time;
– Solved in less time with multiple compute resources than with a
single compute resource.
• Parallel computing is an evolution of serial computing that
attempts to emulate what has always been the state of
affairs in the natural world:
21. What is Parallel Computing?
• many complex, interrelated events happening at
the same time, yet within a sequence. Some
examples:
– Planetary and galactic orbits
– Weather and ocean patterns
– Tectonic plate drift
– Rush hour traffic in LA
– Automobile assembly line
– Daily operations within a business
– Building a shopping mall
– Ordering a hamburger at the drive-through
22. What is Parallel Computing?
• Traditionally, parallel computing has been
considered to be "the high end of computing" and
has been motivated by numerical simulations of
complex systems and "Grand Challenge
Problems" such as:
– weather and climate
– chemical and nuclear reactions
– biological, human genome
– geological, seismic activity
– mechanical devices - from prosthetics to spacecraft
– electronic circuits
– manufacturing processes
23. What is Parallel Computing?
• Today, commercial applications are providing an equal or
greater driving force in the development of faster
computers. These applications require the processing of
large amounts of data in sophisticated ways. Example
applications include:
– parallel databases, data mining
– web search engines, web based business services
– computer-aided diagnosis in medicine
– management of national and multi-national corporations
– advanced graphics and virtual reality, particularly in the
entertainment industry
– networked video and multi-media technologies
• Ultimately, parallel computing is an attempt to maximize
the infinite but seemingly scarce commodity called time.
24. Why Use Parallel Computing?
• Save time
• Solve larger problems
• Taking advantage of non-local resources - using available
compute resources on a wide area network, or even the
Internet when local compute resources are scarce.
• Cost savings - using multiple "cheap" computing resources
instead of paying for time on a supercomputer.
• Overcoming memory constraints - single computers have
very finite memory resources. For large problems, using
the memories of multiple computers may overcome this
obstacle.
25. Why Use Parallel Computing?
• Both physical and practical reasons pose significant constraints to
simply building ever faster serial computers:
• Transmission speeds - the speed of a serial computer is directly
dependent upon how fast data can move through hardware. Absolute
limits are the speed of light (30 cm/nanosecond) and the transmission
limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate
increasing proximity of processing elements (see the calculation
after this list).
• Limits to miniaturization - processor technology is allowing an
increasing number of transistors to be placed on a chip. However,
even with molecular or atomic-level components, a limit will be
reached on how small components can be.
• Economic limitations - it is increasingly expensive to make a single
processor faster. Using a larger number of moderately fast commodity
processors to achieve the same (or better) performance is less
expensive.
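
To see why proximity matters, a back-of-the-envelope calculation in
Python (the 3 GHz clock rate is my assumption, not from the slides; the
30 and 9 cm/nanosecond figures are from the slide above):

clock_hz = 3e9                      # assumed clock: one cycle every ~0.33 ns
cycle_ns = 1e9 / clock_hz

light_cm_per_ns = 30                # speed of light
copper_cm_per_ns = 9                # transmission limit of copper wire

print(f"light:  {light_cm_per_ns * cycle_ns:.1f} cm per clock cycle")   # 10.0 cm
print(f"copper: {copper_cm_per_ns * cycle_ns:.1f} cm per clock cycle")  # 3.0 cm

At such clock rates a signal in copper covers only a few centimetres per
cycle, which is why processing elements must be packed ever closer together.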