Advanced Computer Architecture
Danish Shehzad
MSCS
 Multi-Core Computers
 Multithreading
 GPUs
Introduction To Multi-Core Systems
 Integration of multiple processor cores on a single chip.
 A multi-core processor is a special kind of multiprocessor:
    All cores are on the same chip, also called a Chip Multiprocessor (CMP).
    Different cores execute different code (threads) operating on different data.
    A shared-memory multiprocessor: all cores share the same memory via some cache organization (see the sketch after this list).
 Provides a cheap parallel computer solution.
 Increases the computation power of the PC platform.
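A minimal sketch of the shared-memory model described above, written in host-side C++ (the same language, compiled by CUDA's nvcc, is used for the GPU examples later). Several threads, which the OS normally schedules onto different cores, sum disjoint slices of one shared array; NUM_THREADS, partial and the array size are illustrative choices, not values from the slides.

    #include <thread>
    #include <vector>
    #include <numeric>
    #include <iostream>

    int main() {
        const int NUM_THREADS = 4;                 // e.g., one thread per core
        std::vector<int> data(1 << 20, 1);         // shared memory: every thread sees this array
        std::vector<long long> partial(NUM_THREADS, 0);

        std::vector<std::thread> workers;
        for (int t = 0; t < NUM_THREADS; ++t) {
            workers.emplace_back([&, t] {
                // each thread (ideally on its own core) works on a different slice of the shared data
                size_t begin = t * data.size() / NUM_THREADS;
                size_t end   = (t + 1) * data.size() / NUM_THREADS;
                partial[t] = std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
            });
        }
        for (auto& w : workers) w.join();

        long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
        std::cout << "sum = " << total << "\n";    // prints 1048576
        return 0;
    }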
Why Multi-Core?
 Limitations of single-core architectures:
    High power consumption due to high clock rates (2-3% power increase per 1% performance increase).
    Heat generation (cooling is expensive).
    Limited parallelism (instruction-level parallelism only).
    Design time and complexity increase due to the complex methods needed to increase ILP.
 Many new applications are multithreaded and thus well suited to multi-core.
    e.g., multimedia applications.
 General trend in computer architecture (shift towards more parallelism).
 Much faster cache-coherency circuits, since all cores sit on a single chip.
 Smaller in physical size than an SMP.
Single-Core vs. Multi-Core
A Dual-Core Intel Processor
Intel Polaris
 80 cores with Teraflop (10^12 FLOPS) performance on a single chip (the first chip to do so).
 Mesh network-on-a-chip.
 Frequency target of 5 GHz.
 Workload-aware power management:
    Instructions to make any core sleep or wake as applications demand.
    Chip voltage & frequency control.
    Peak of 1.01 Teraflops at 62 watts.
    Peak power efficiency of 19.4 GFLOPS/Watt.
 Short design time.
Innovations on Intel's Polaris
 Rapid design – The tiled-design approach allows designers to use smaller cores that can easily be repeated across the chip.
 Network-on-a-chip – The cores are connected in a 2D mesh network that implements message passing. This scheme is much more scalable.
 Fine-grain power management – The individual compute engines and data routers in each core can be activated or put to sleep based on the performance required by the applications.
What Applications Benefit From Multi-Core Processors?
 • Database servers
 • Web servers (web commerce)
 • Compilers
 • Multimedia applications
 • Scientific applications
 • In general, applications with thread-level parallelism (as opposed to instruction-level parallelism)
 Multi-Core Computers
 Multithreading
 GPUs
2. Multi-Threading
Thread-Level Parallelism (TLP)
 This is parallelism on a coarser scale than instruction-level parallelism (ILP).
 The instruction stream is divided into smaller streams (threads) to be executed in parallel.
 A thread has its own instructions and data.
    It may be part of a parallel program or an independent program.
    Each thread has all the state (instructions, data, PC, etc.) needed to execute.
 Single-core superscalar processors cannot fully exploit TLP.
 Multi-core architectures can exploit TLP efficiently.
    They use multiple instruction streams to improve the throughput of computers that run several programs.
 TLP is more cost-effective to exploit than ILP.
Threads vs. Processes
 Process: an instance of a program running on a computer.
    Resource ownership: a virtual address space holds the process image, including program, data, stack, and attributes.
    Execution of a process follows a path through the program.
    Process switch – an expensive operation due to the need to save the control data and register contents.
 Thread: a dispatchable unit of work within a process.
    Interruptible: the processor can turn to another thread.
    All threads within a process share code and data segments (illustrated in the sketch after this list).
    A thread switch is usually much less costly than a process switch.
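To make the contrast concrete, a small hedged sketch in C++: the two threads below update the same counter directly because they share one process address space; two separate processes would each get their own copy and would need explicit inter-process communication to share it. counter and bump are illustrative names.

    #include <atomic>
    #include <thread>
    #include <iostream>

    std::atomic<int> counter{0};   // lives in the process's data segment, visible to all its threads

    void bump(int times) {
        for (int i = 0; i < times; ++i)
            counter.fetch_add(1, std::memory_order_relaxed);
    }

    int main() {
        std::thread a(bump, 100000);
        std::thread b(bump, 100000);          // both threads touch the very same memory
        a.join();
        b.join();
        std::cout << counter.load() << "\n";  // 200000 - no copying, no IPC needed
        return 0;
    }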
Multithreading Approaches
 Interleaved (fine-grained)
    The processor deals with several thread contexts at a time.
    Threads are switched at each clock cycle (hardware support needed).
    If a thread is blocked, it is skipped.
    Hides the latency of both short and long pipeline stalls.
 Blocked (coarse-grained)
    A thread executes until an event causes a delay (e.g., a cache miss).
    Relieves the need for very fast thread switching.
    No slowdown for ready-to-go threads.
 Simultaneous multithreading (SMT)
    Instructions are issued simultaneously from multiple threads to the execution units of a superscalar processor.
 Chip multiprocessing
    Each processor handles separate threads.
Multithreading Paradigms
Programming for Multi-Core (in the Context of Multithreading)
 There must be many threads or processes:
    Multiple applications running on the same machine.
       Multi-tasking is becoming very common.
       OS software tends to run many threads as part of its normal operation.
 An application may also have multiple threads.
    In most cases, it must be specifically written that way.
 The OS scheduler should map the threads to different cores, in order to balance the workload or to avoid hot spots due to heat generation (a sizing sketch follows below).
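A rough sketch of that last point: the program below asks the standard library how many hardware threads the machine offers and launches that many workers; the actual placement of those workers onto cores is left to the OS scheduler, as the slide says. do_work is a hypothetical placeholder for the application's real work.

    #include <thread>
    #include <vector>
    #include <iostream>

    void do_work(unsigned id) {
        // placeholder: in a real program this would be one shard of the workload
        std::cout << "worker " << id << " running\n";
    }

    int main() {
        unsigned cores = std::thread::hardware_concurrency();  // may return 0 if unknown
        if (cores == 0) cores = 4;                              // fall back to a guess

        std::vector<std::thread> pool;
        for (unsigned i = 0; i < cores; ++i)
            pool.emplace_back(do_work, i);                      // the OS maps these onto the cores

        for (auto& t : pool) t.join();
        return 0;
    }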
 Multi-Core Computers
 Multithreading
 GPUs
Introduction To GPUs
What is a GPU?
 • A processor optimized for 2D/3D graphics, video, visual computing, and display.
 • A highly parallel, highly multithreaded multiprocessor optimized for visual computing.
 • It provides real-time visual interaction with computed objects via graphics, images, and video.
 • It serves as both a programmable graphics processor and a scalable parallel computing platform.
 • Heterogeneous systems combine a GPU with a CPU.
Graphics Processing Unit – Why?
 Graphics applications are:
    Rendering of 2D or 3D images with complex optical effects;
    Highly computation-intensive;
    Massively parallel;
    Data-stream based.
 General-purpose processors (CPUs) are:
    Designed to handle huge data volumes;
    Serial, one operation at a time;
    Control-flow based;
    Highly flexible, but not very well adapted to graphics.
 GPUs are the solution!
GPU Design
1) Process pixels in parallel:
    2.3M pixels per frame – lots of work.
    All pixels are independent – no synchronization needed.
    Lots of spatial locality – regular memory access.
 Great speedups, limited only by the amount of hardware.
GPU Design
2) Focus on throughput, not latency:
 Each pixel can take a long time… as long as we process many at the same time (see the kernel sketch below).
 Great scalability.
 Lots of simple parallel processors.
 Low clock speed.
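The throughput-over-latency idea maps directly onto how GPU kernels are usually written: one lightweight thread per pixel, with no synchronization between pixels. The CUDA sketch below is illustrative, not taken from the slides; brighten, WIDTH and HEIGHT are assumed names.

    // Each GPU thread handles exactly one pixel; any single pixel may be slow,
    // but millions of pixels are processed concurrently.
    __global__ void brighten(unsigned char* img, int width, int height, int delta) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height) {
            int i = y * width + x;              // pixels are independent: no synchronization
            int v = img[i] + delta;
            img[i] = v > 255 ? 255 : v;
        }
    }

    // Host-side launch (error checking omitted):
    //   dim3 block(16, 16);
    //   dim3 grid((WIDTH + 15) / 16, (HEIGHT + 15) / 16);
    //   brighten<<<grid, block>>>(d_img, WIDTH, HEIGHT, 10);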
CPU vs. GPU Architecture
 GPUs are throughput-optimized:
    Each thread may take a long time, but there are thousands of threads.
 CPUs are latency-optimized:
    Each thread runs as fast as possible, but there are only a few threads.
 GPUs have hundreds of simple cores; CPUs have a few massive cores.
 GPUs excel at regular, math-intensive work:
    Lots of ALUs for math, little hardware for control.
 CPUs excel at irregular, control-intensive work:
    Lots of hardware for control, few ALUs.
GPU Architecture Features
 Massively parallel: 1000s of processors (today).
 Power-efficient:
    Fixed-function hardware = area- and power-efficient.
    No speculation: more processing, less leaky cache.
 Memory bandwidth:
    Memory bandwidth is limited in CPUs.
    GPUs do not depend on large caches for performance.
 Computing power = Frequency * Transistors (compounded growth worked out below):
    GPUs: 1.7X (pixels) to 2.3X (vertices) annual growth.
    CPUs: 1.4X annual growth.
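Compounding those annual growth rates over, say, five years shows how quickly the gap widens (a back-of-the-envelope calculation using only the figures above):

    GPU (pixels):   1.7^5 ≈ 14.2x
    GPU (vertices): 2.3^5 ≈ 64.4x
    CPU:            1.4^5 ≈  5.4x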
Computational Power
CUDA
 CUDA = Compute Unified Device Architecture.
 A scalable parallel programming model and a software environment for parallel computing.
 Threads:
    GPU threads are extremely lightweight, with little creation overhead.
    A GPU needs 1000s of threads for full efficiency (see the vector-add sketch below).
    A multi-core CPU needs only a few.
 GPU/CUDA addresses all three levels of parallelism:
    Thread parallelism;
    Data parallelism;
    Task parallelism.
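As an illustration of how CUDA exposes thousands of lightweight threads (a minimal assumed sketch, not code from the slides): the vector-add kernel below gives every one of roughly one million data elements its own thread, and the runtime distributes the blocks of threads across the GPU's cores.

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n) c[i] = a[i] + b[i];                   // data parallelism: one element per thread
    }

    int main() {
        const int N = 1 << 20;                           // ~1M elements -> ~1M GPU threads
        size_t bytes = N * sizeof(float);

        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);                    // unified memory keeps the sketch short
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int threadsPerBlock = 256;
        int blocks = (N + threadsPerBlock - 1) / threadsPerBlock;
        vecAdd<<<blocks, threadsPerBlock>>>(a, b, c, N); // launches ~1M lightweight threads
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);                     // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }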
Summary
 All computers are now parallel computers!
 Multi-core processors represent an important new trend in computer architecture:
    Decreased power consumption and heat generation.
    Minimized wire lengths and interconnect latencies.
 They enable true thread-level parallelism with great energy efficiency and scalability.
 Graphics requires a lot of computation and a huge amount of bandwidth – hence GPUs.
 GPUs are coming to general-purpose computing, since they deliver huge performance at low power.
THANK YOU

