
Introduction to Intel Optane Data Center Persistent Memory


In this deck from the 2019 Stanford HPC Conference, Usha Upadhyayula & Tom Krueger from Intel present: Introduction to Intel Optane Data Center Persistent Memory.

"For decades, developers had to balance data in memory for performance with data in storage for persistence. The emergence of data-intensive applications in various market segments is stretching the existing I/O models beyond limits.

Intel® Optane™ DC Persistent Memory introduces a new, flexible tier within the memory/storage hierarchy, applicable to workloads across cloud, HPC, in-memory computing, and storage. This disruptive technology delivers persistence at memory bus speeds, reducing the need for persistence in storage, which means fewer I/O trips and lower latency for accelerated performance. In addition, the new media offers a lower-cost alternative to DRAM.

In this session you will learn about the different modes supported by Intel® Optane™ DC Persistent Memory and about PMDK (the Persistent Memory Development Kit), a suite of open source libraries designed to solve the challenges of persistent memory and simplify the adoption of persistent memory programming. Usha Upadhyayula, a Developer Evangelist, presents this material in an interactive format moderated by Tom Krueger, Global Sales Enablement Director; both are with Intel."

Watch the video: https://wp.me/p3RLHQ-jQy

Learn more: https://www.intel.com/content/www/us/en/architecture-and-technology/intel-optane-technology.html
and
http://hpcadvisorycouncil.com/events/2019/stanford-workshop/

Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter


Introduction to Intel Optane Data Center Persistent Memory

  1. Usha Upadhyayula and Tom Krueger, February 2019
  2. Disrupting the Storage and Memory Hierarchy
  3. Intel® Optane™ Data Center Persistent Memory: Value Pillars
     • Memory Mode (access to large volatile memory capacity)
       - Ease of use; no software changes required
       - Extract more value from larger data sets than previously possible: TBs of data set fully in memory
       - Delivers new capabilities for memory-focused workloads, such as large model simulation
       - Improved time to solution; reduced I/O to storage
     • App Direct (persistent memory)
       - Data access granularity: cache line vs. block
       - Application-controlled data placement: load/store access, no paging or context switching
       - Faster restarts with persistence: higher availability for large analytics systems (fraud detection, cyber security)
       - Reduced infrastructure cost
  4. What does this mean to Software Developers?
  5. Ease of Adoption
     • Memory Mode
       - Two levels of memory: DRAM acts as a cache (near memory); Intel DCPMM is far memory
       - No operating system or application changes required
       - Data placement is controlled by the memory controller
       - Latency: same as DRAM for cache-friendly workloads
     • Storage over App Direct
       - Persistent memory acting as an SSD: operates in blocks with traditional read/write
       - Works with existing file systems; atomicity at the block level; block size configurable
       - No application changes required (see the sketch below); an NVDIMM driver is required, supported starting with Linux kernel 4.2 and Windows Server 2016
       - Latency: lower compared to NVMe SSDs
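To make "no application changes required" concrete, here is a minimal sketch of ordinary buffered file I/O. Under Storage over App Direct this same code runs unmodified; the file system simply sits on a pmem block device instead of an SSD. The path is an illustrative assumption.

```c
#include <stdio.h>

/* Unchanged standard file I/O: under Storage over App Direct this
 * runs as-is, with the file system backed by a pmem block device
 * instead of an SSD. The path is an illustrative assumption. */
int main(void)
{
    FILE *f = fopen("/mnt/pmemfs/records.bin", "wb");
    if (f == NULL)
        return 1;

    int record = 12345;
    fwrite(&record, sizeof(record), 1, f);
    fflush(f);   /* flush user-space buffers */
    fclose(f);   /* the NVDIMM driver handles block persistence below */
    return 0;
}
```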
  6. Enabling App Direct Requires Re-Architecting the Application
     • Enabling applications for load/store access
     • Data persistence
       - Stores are not guaranteed persistent until flushed
       - The CPU caches must be flushed to the persistence domain: CLWB + fence, CLFLUSHOPT + fence, CLFLUSH, or non-temporal stores + fence
     • Data consistency
       - Prevent torn updates by using transactions
     • Persistent memory allocation/free
       - Use a persistent-memory-aware allocator; prevent persistent memory leaks
     • Persistent memory error handling
     • Power-fail protected domains (diagram: a MOV travels from the core through the L1/L2/L3 caches to the WPQ and the DIMM)
       - Minimum required power-fail protected domain: the memory subsystem (WPQ + ADR)
       - Custom power-fail protected domain, indicated by an ACPI property: the CPU cache hierarchy (WPQ flush, kernel only)
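As an illustration of this flow, here is a minimal sketch using PMDK's libpmem (introduced on a later slide), which wraps the flush instructions listed above. It assumes a file on a DAX-mounted, pmem-aware file system at /mnt/pmem; compile with -lpmem.

```c
#include <stdio.h>
#include <string.h>
#include <libpmem.h>

/* Minimal libpmem sketch: map a file on a pmem-aware (DAX) file
 * system, store data, and explicitly flush it to the persistence
 * domain. The path below is an assumption for illustration. */
int main(void)
{
    size_t mapped_len;
    int is_pmem;
    char *addr = pmem_map_file("/mnt/pmem/example", 4096,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    strcpy(addr, "hello, persistent memory");

    /* Stores are not persistent until flushed: pmem_persist() issues
     * the appropriate cache-flush instructions plus a fence when the
     * mapping is real pmem; otherwise fall back to msync() semantics. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);
    else
        pmem_msync(addr, mapped_len);

    pmem_unmap(addr, mapped_len);
    return 0;
}
```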
  7. Exposing Persistent Memory to Applications: The SNIA NVM Programming Model
     • Management path: a management UI and management library drive the NVDIMM driver in kernel space
     • Storage path: applications use the standard file API through a file system and the NVDIMM driver, or standard raw device access
     • Persistent memory path: applications use the standard file API on a pmem-aware file system, then access the NVDIMMs directly with loads and stores through MMU mappings
     • SNIA: Storage Networking Industry Association
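A minimal sketch of the load/store path under the SNIA model, using nothing but the standard file API and mmap. It assumes a DAX-mounted file system at /mnt/pmem; MAP_SYNC requires a Linux kernel and glibc new enough to support DAX mappings, and the mmap fails if the file system cannot provide one.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch of the SNIA load/store path: open a file on a pmem-aware
 * (DAX) file system and map it directly into the address space.
 * /mnt/pmem/data is an assumed path for illustration. */
int main(void)
{
    int fd = open("/mnt/pmem/data", O_CREAT | O_RDWR, 0666);
    if (fd < 0)
        return 1;
    if (ftruncate(fd, 4096) != 0)
        return 1;

    /* MAP_SHARED_VALIDATE | MAP_SYNC asks the kernel for a direct
     * mapping to the media (fails if the file system cannot DAX-map). */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    strcpy(p, "stored with ordinary CPU instructions");

    /* CPU caches still need flushing before data is persistent;
     * msync() is the portable way to reach the persistence domain. */
    msync(p, 4096, MS_SYNC);

    munmap(p, 4096);
    close(fd);
    return 0;
}
```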
  8. Persistent Memory Development Kit (PMDK): A Suite of Open Source Libraries
     • libmemkind: support for volatile memory usage
     • libpmem: low-level support for local persistent memory
     • librpmem: low-level support for remote access to persistent memory
     • libpmemblk: interface to create arrays of same-sized pmem-resident blocks for atomic updates
     • libpmemlog: interface to create a persistent-memory-resident log file
     • libpmemobj: interface for persistent memory allocation, transactions, and general facilities; transaction support is available from C, C++, Python, and Java (via PCJ, Persistent Collections for Java, and LLPL, the Low Level Persistence Library), with persistent containers for C++
     • All of these build on the standard file API over a pmem-aware file system with MMU mappings and load/store access from the application
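For instance, here is a minimal libpmemobj sketch showing a transactional update, i.e. the torn-update protection called out on slide 6. The pool path and layout name are illustrative assumptions; pmemobj_create makes a new pool, so an existing one would be opened with pmemobj_open instead. Compile with -lpmemobj.

```c
#include <stdio.h>
#include <stdint.h>
#include <libpmemobj.h>

/* Minimal libpmemobj sketch: a root object updated inside a
 * transaction, so a power failure mid-update cannot tear the data.
 * Pool path and layout name are illustrative assumptions. */
struct my_root {
    uint64_t counter;
};

int main(void)
{
    PMEMobjpool *pop = pmemobj_create("/mnt/pmem/pool", "example_layout",
                                      PMEMOBJ_MIN_POOL, 0666);
    if (pop == NULL) {
        perror("pmemobj_create");
        return 1;
    }

    PMEMoid root = pmemobj_root(pop, sizeof(struct my_root));
    struct my_root *rootp = pmemobj_direct(root);

    /* All stores inside the transaction either complete or roll back. */
    TX_BEGIN(pop) {
        pmemobj_tx_add_range(root, 0, sizeof(struct my_root));
        rootp->counter += 1;
    } TX_END

    pmemobj_close(pop);
    return 0;
}
```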
  9. Using Persistent Memory as Volatile Memory
     • Persistent memory support has been added to libmemkind
     • The application creates a temporary file via a pmem-aware file system and maps it; the file disappears on reboot
     • Benefits:
       - The application sees separate pools of memory for DRAM and pmem
       - For optimal QoS, latency-sensitive data goes into DRAM
       - Application-managed data placement
     • API (a usage sketch follows below):
       - memkind_create_pmem(const char *dir, size_t max_size, memkind_t *kind)
       - memkind_malloc(memkind_t kind, size_t size)
       - memkind_calloc(memkind_t kind, size_t num, size_t size)
       - memkind_realloc(memkind_t kind, void *ptr, size_t size)
       - memkind_free(memkind_t kind, void *ptr)
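A short usage sketch of the libmemkind API listed above. The DAX-mounted directory /mnt/pmem is an assumption; compile with -lmemkind.

```c
#include <stdio.h>
#include <memkind.h>

/* Sketch of the libmemkind API from the slide: create a pmem kind
 * backed by a temporary file in a DAX-mounted directory (the path
 * is an assumption), then allocate from pmem instead of DRAM. */
int main(void)
{
    memkind_t pmem_kind;
    /* max_size = 0 lets the kind grow to the file system's limit. */
    int err = memkind_create_pmem("/mnt/pmem", 0, &pmem_kind);
    if (err) {
        fprintf(stderr, "memkind_create_pmem failed: %d\n", err);
        return 1;
    }

    double *buf = memkind_malloc(pmem_kind, 1024 * sizeof(double));
    if (buf == NULL)
        return 1;
    buf[0] = 42.0;  /* ordinary loads/stores, but backed by pmem */

    memkind_free(pmem_kind, buf);
    memkind_destroy_kind(pmem_kind);
    return 0;
}
```

This gives the application the DRAM/pmem split described above: latency-sensitive structures stay in ordinary malloc'd DRAM, while large, capacity-hungry buffers come from the pmem kind.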
  10. Ecosystem Partners
     • Standards organizations: Storage Networking Industry Association (SNIA), ACPI, UEFI, and DMTF
     • Operating system vendors: Microsoft, Red Hat, SUSE, and Canonical
     • Virtualization vendors: VMware, KVM, Xen
     • Java vendors: Oracle
     • Application vendors: data analytics and ML vendors, database and enterprise applications
  11. Developer Resources
     • PMDK resources:
       - Home: https://pmem.io
       - PMDK: https://pmem.io/pmdk
       - PMDK source code: https://github.com/pmem/PMDK
       - Google Group: https://groups.google.com/forum/#!forum/pmem
       - Intel Developer Zone: https://software.intel.com/persistent-memory
     • NDCTL: https://pmem.io/ndctl
     • IPMCTL: https://github.com/intel/ipmctl
     • MemKind: https://memkind.github.io/memkind/
     • LLPL: https://github.com/pmem/llpl
     • PCJ: https://github.com/pmem/pcj
     • SNIA NVM Programming Model: https://www.snia.org/tech_activities/standards/curr_standards/npm
     • Getting started guides: https://docs.pmem.io
     • Save the date for the SPDK & PMDK Developer Summit, April 16/17; watch for updates on the Google Group: https://groups.google.com/forum/#!forum/pmem
  12. For HPC: Where Can You Take Intel® Optane™ DC Persistent Memory?
     HPC workloads with large data sets will benefit from keeping the data resident on the cluster:
     • Artificial intelligence
     • Simulation and modeling
     • Visualization
     • Health and life sciences
  13. Backup
  14. Future Intel® Xeon® Scalable Processor (Cascade Lake) with Intel® Optane™ DC Persistent Memory
     • Improved per-core performance, an optimized cache hierarchy, and higher CPU frequencies
     • Support for Intel® Deep Learning Boost (VNNI), with optimized frameworks and libraries
     • Hardware-enhanced security and Intel® infrastructure management technologies
     • A catalyst for data-driven transformation: pervasive performance plus hardware-enhanced security and agility/efficiency for improved TCO
