A Distributed Shared Memory (DSM) system provides a logical abstraction of shared memory built from interconnected nodes with physically distributed memories. There are hardware, software, and hybrid DSM approaches. DSM offers a simple abstraction, improved portability, a large unified memory space, and better performance than message passing in some applications. Consistency protocols keep shared data coherent across the distributed memories according to the chosen memory consistency model.
6. Distributed Shared Memory
2. Basic Concepts of DSM:
• A DSM system provides a logical abstraction of shared memory, built using a set of interconnected nodes having physically distributed memories.
4. Advantages of DSM
• Simple abstraction
• Improved portability of distributed application programs
• Better performance in some applications
• Large memory space at no extra cost
• Can outperform message-passing systems
5. Comparison of IPC paradigms
• Shared memory (DSM)
  • Single shared address space
  • Communicate and synchronize using load/store
  • Can support message passing
• Message passing
  • Send/receive
  • Combines communication and synchronization
  • Can support shared memory
6. Hardware architectures
• On-chip memory
• Bus-based multiprocessor
• Ring-based multiprocessor
• Switched multiprocessor
7. On-chip memory
In this design, the CPU portion of the chip has address and data lines that connect directly to the memory portion. Such chips are used in cars, appliances, and even toys.
8. Bus-based multiprocessor
All CPUs connect to one bus (backplane). Memory and peripherals are accessed via the shared bus, so the system looks the same from any processor.
13. NUMA Multiprocessor
• Non-uniform memory access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor.
• Under NUMA, a processor can access its own local memory faster than non-local memory.
• The benefits of NUMA are limited to particular workloads, notably on servers where the data is often strongly associated with certain tasks or users.
14. UMA Multiprocessor
• Uniform memory access (UMA) is a shared-memory architecture used in parallel computers. All the processors in the UMA model share the physical memory uniformly.
• In a UMA architecture, access time to a memory location is independent of which processor makes the request or which memory chip contains the transferred data.
• UMA architectures are often contrasted with non-uniform memory access (NUMA) architectures.
• In the UMA architecture, each processor may use a private cache. Peripherals are also shared in some fashion.
• The UMA model is suitable for general-purpose and time-sharing applications by multiple users. It can also be used to speed up the execution of a single large program in time-critical applications.
15. Difference between the two multiprocessors:
Bus-based multiprocessors:
• Tightly coupled, with the CPUs normally in a single rack.
• Have separate global memory.
Ring-based multiprocessors:
• Machines can be much more loosely coupled, and this loose coupling can affect their performance.
• Have no separate global memory.
17. DSM design issues
• Granularity of sharing
• Structure of data
• Consistency models
• Coherence protocols
19. Thrashing:
• False sharing
• Techniques to reduce thrashing:
  • Application-controlled locks
  • Pin the block to a node for a specific time
  • Customize the algorithm to the shared-data usage pattern
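The ping-pong effect behind thrashing can be sketched with a small simulation of a page-based DSM (all names are illustrative). Two nodes update different variables, but because the sharing granularity is a whole page, every write must first migrate the page to the writer, so the falsely shared page bounces between nodes:

```python
# Sketch of thrashing caused by false sharing in a page-based DSM.
# Names (count_page_transfers, layout, etc.) are hypothetical.

PAGE_SIZE = 4096

def page_of(addr):
    return addr // PAGE_SIZE

def count_page_transfers(trace, layout):
    """trace: list of (node, var) writes; layout: var -> address."""
    owner = {}          # page -> node currently holding it
    transfers = 0
    for node, var in trace:
        page = page_of(layout[var])
        if owner.get(page, node) != node:
            transfers += 1      # page must migrate before the write
        owner[page] = node
    return transfers

# Two nodes alternate writes to their own private counters.
trace = [("A", "ctr_a"), ("B", "ctr_b")] * 50

same_page = {"ctr_a": 0, "ctr_b": 8}            # falsely shared page
padded    = {"ctr_a": 0, "ctr_b": PAGE_SIZE}    # one page apart

print(count_page_transfers(trace, same_page))   # 99: the page thrashes
print(count_page_transfers(trace, padded))      # 0: no transfers at all
```

Padding the data so unrelated variables land on different pages is one concrete instance of customizing the layout to the usage pattern.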
23. Sequential consistency
• All processors in the system observe the same ordering of reads and writes, which are issued in sequence by the individual processors.
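One way to see what sequential consistency rules out is the classic store-buffering litmus test: with x = y = 0, P1 runs x=1 then reads y, and P2 runs y=1 then reads x. Under SC every execution is some interleaving that preserves each processor's program order, so both reads returning 0 is impossible. A minimal sketch that enumerates the interleavings:

```python
# Enumerate all sequentially consistent executions of the
# store-buffering litmus test and collect the possible outcomes.

P1 = [("w", "x", 1), ("r", "y", "r1")]
P2 = [("w", "y", 1), ("r", "x", "r2")]

def interleavings(a, b):
    """Yield all merges of a and b that keep each list's internal order."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

outcomes = set()
for schedule in interleavings(P1, P2):
    mem = {"x": 0, "y": 0}
    regs = {}
    for op in schedule:
        if op[0] == "w":
            mem[op[1]] = op[2]
        else:
            regs[op[2]] = mem[op[1]]
    outcomes.add((regs["r1"], regs["r2"]))

print(sorted(outcomes))   # (0, 0) never appears under SC
```

On real weakly ordered hardware or under weaker DSM models, the (0, 0) outcome can occur, which is exactly the behavior SC forbids.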
24. Causal consistency
• A weakening of sequential consistency for better concurrency.
• A causally related operation is one that has influenced the other operation; causally related writes must be seen by all processes in the same order, while concurrent writes may be seen in different orders.
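The causal rule can be sketched with explicit dependency tracking (all names are illustrative): a write issued after a process has read another write is causally ordered after it and must be applied in that order everywhere, while writes with no such chain are concurrent and may be applied in either order:

```python
# Causal-consistency sketch: each write carries the set of writes that
# must already be visible before it may be applied at a node.

# (write_id, (var, value), deps)
w1 = ("w1", ("x", 1), [])
w2 = ("w2", ("y", 2), ["w1"])   # issued after its process read x == 1
w3 = ("w3", ("z", 3), [])       # concurrent with both

def apply_order(order):
    """Return True if this apply order respects every write's deps."""
    seen = set()
    for wid, _, deps in order:
        if any(d not in seen for d in deps):
            return False
        seen.add(wid)
    return True

print(apply_order([w1, w2, w3]))  # True
print(apply_order([w3, w1, w2]))  # True  (w3 is concurrent, may go first)
print(apply_order([w2, w1, w3]))  # False (w2 is causally after w1)
```

Real implementations track these dependencies implicitly, e.g. with vector clocks, rather than with explicit lists.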
25. PRAM consistency
• Pipelined Random Access Memory consistency.
• Write operations performed by different processes may be seen by different processes in different orders.
• All write operations of a single process are pipelined and seen in issue order.
• Simple, easy to implement, and has good performance.
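PRAM consistency can be pictured as each observer receiving every writer's updates through a per-writer FIFO channel: writes from one process always arrive in issue order, but the channels drain at different relative speeds at different observers. A sketch with illustrative names:

```python
from collections import deque

# Each writer's updates sit in their own FIFO channel.
writes = {"P1": ["x=1", "x=2"], "P2": ["y=1"]}

def observe(delivery_order):
    """Replay writes to one observer; delivery_order picks which
    writer's FIFO channel delivers next."""
    channels = {p: deque(ws) for p, ws in writes.items()}
    seen = []
    for p in delivery_order:
        seen.append(channels[p].popleft())
    return seen

# Two observers with different channel schedules:
a = observe(["P1", "P2", "P1"])   # ['x=1', 'y=1', 'x=2']
b = observe(["P2", "P1", "P1"])   # ['y=1', 'x=1', 'x=2']
print(a, b)
```

The two observers disagree on the global order of P1's and P2's writes, yet both see P1's `x=1` before its `x=2`, which is exactly the PRAM guarantee.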
26. Processor consistency
• Adheres to PRAM consistency, with an additional constraint of memory coherence.
• The order in which memory operations are seen by two processors need not be identical, but the order of writes issued by each processor must be preserved.
27. Weak consistency
• Uses a special variable called the synchronization variable.
• Drawback: it is very difficult to track the changes to shared data from one synchronization point to the next.
28. Properties of the weak consistency model:
• Accesses to synchronization variables are sequentially consistent.
• Access to a synchronization variable is allowed only when all previous writes have completed everywhere.
• No read or write data access is allowed until all previous accesses to synchronization variables have been performed.
29. Release consistency
• Uses two synchronization operations: acquire and release.
• The release consistency model uses a synchronization mechanism based on barriers.
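The acquire/release discipline can be sketched as buffering: writes made between an acquire and a release are held locally and only pushed to the shared copy on release, while acquire pulls the latest completed writes. Class and method names below are illustrative, not a real DSM API:

```python
# Release-consistency sketch: ordinary writes are buffered locally and
# become visible to other nodes only at the release/acquire boundary.

class ReleaseConsistentNode:
    def __init__(self, shared):
        self.shared = shared      # the "home" copy, a plain dict
        self.local = {}           # this node's cached view
        self.pending = {}         # writes buffered until release

    def acquire(self):
        self.local = dict(self.shared)   # pull all completed writes

    def write(self, var, value):
        self.local[var] = value
        self.pending[var] = value        # not yet visible to others

    def read(self, var):
        return self.local.get(var)

    def release(self):
        self.shared.update(self.pending) # push buffered writes
        self.pending.clear()

shared = {}
n1, n2 = ReleaseConsistentNode(shared), ReleaseConsistentNode(shared)

n1.acquire()
n1.write("x", 42)
n2.acquire()
print(n2.read("x"))   # None: n1 has not released yet
n1.release()
n2.acquire()
print(n2.read("x"))   # 42: visible after the release/acquire pair
```

Because propagation happens only at synchronization points, many ordinary writes can be batched into one update, which is the main performance win over stricter models.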
32. Entry consistency
• Uses acquire and release at the start and end of each critical section, respectively.
• Each ordinary shared variable is associated with some synchronization variable such as a lock or barrier.
• Entry consistency (EC) is similar to lazy release consistency (LRC) but more relaxed; shared data is explicitly associated with synchronization primitives and is made consistent when such an operation is performed.
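The distinctive feature of entry consistency, that acquiring a lock synchronizes only the variables guarded by that lock rather than all of shared memory, can be sketched as follows (class, method, and lock names are illustrative):

```python
# Entry-consistency sketch: each shared variable is bound to a lock,
# and acquire/release move only that lock's variables.

class EntryConsistentNode:
    def __init__(self, shared, guards):
        self.shared = shared     # authoritative copies, var -> value
        self.guards = guards     # lock name -> list of guarded vars
        self.local = {}          # this node's (possibly stale) view

    def acquire(self, lock):
        for var in self.guards[lock]:        # sync only this lock's vars
            if var in self.shared:
                self.local[var] = self.shared[var]

    def release(self, lock):
        for var in self.guards[lock]:        # publish only this lock's vars
            if var in self.local:
                self.shared[var] = self.local[var]

    def write(self, var, value):
        self.local[var] = value

    def read(self, var):
        return self.local.get(var)

guards = {"Lx": ["x"], "Ly": ["y"]}
shared = {}
n1 = EntryConsistentNode(shared, guards)
n2 = EntryConsistentNode(shared, guards)

n1.acquire("Lx"); n1.write("x", 1); n1.release("Lx")
n1.acquire("Ly"); n1.write("y", 2); n1.release("Ly")

n2.acquire("Lx")                   # syncs x only
print(n2.read("x"), n2.read("y"))  # 1 None: y needs its own lock
```

Binding data to locks cuts synchronization traffic to exactly the variables a critical section touches, at the cost of the programmer declaring those associations.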
33. Scope consistency
• A scope is a limited view of memory with respect to which memory references are performed.