Transcript of a discussion on how energy use and resources management have emerged as key ingredients of artificial intelligence adoption success -- or failure.
Make AI Adoption a Strategic, ROI-Focused, Fit-for-Purpose and Sustainable Transformation, Says HPE
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard
Enterprise.
Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect podcast series. I’m
Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this
ongoing discussion on best practices for deploying artificial intelligence (AI) with a focus on
sustainability and strategic business benefits.
As AI rises as an imperative that impacts companies at nearly all levels, proper concern for
efficiency around energy use and resources management has emerged as a key ingredient of
success -- or failure. It’s becoming increasingly evident that AI deployments will demand vast
resources, energy, water, skills, and upgraded or wholly new data center and electrical grid
infrastructures.
Stay with us now as we examine why factoring the full and long-term benefits — accurately
weighed against the actual costs — is essential to assuring the desired business outcomes from
AI implementations. Only by calculating the true and total expected costs in the fullest sense
can businesses predict the proper fit-for-purpose use for large deployments of AI systems.
Here to share the latest findings and best planning practices for
sustainable AI is John Frey, Director and Chief Technologist of
Sustainable Transformation at Hewlett Packard Enterprise
(HPE). Welcome, John.
John Frey: Thank you. It’s great to be here.
Gardner: It’s good to have you back.
John, AI capabilities have moved outside the traditional
boundaries of technology, data science, and analytics. AI has
become a rapidly growing imperative for business leaders -- and
it’s impacting the daily life of more and more workers.
Generative AI, and more generally large language models, for
example, are now widely sought for broad and varied uses.
While energy efficiency has long been sought for general IT and high-performance computing
(HPC), AI appears to dramatically up the game on the need to factor and manage the required
resources.
John, how much of a sea change is the impact of AI having on all that's needed to support these
complex systems?
AI impact adds up, everywhere
Frey: Well, AI certainly is an additional load on resources. AI training, for example, is power-
intensive. AI inferencing acts similarly, and obviously is used again and again and again if users
use the tools as designed for a long period of time.
It remains to be seen how much and how quickly, but there’s a lot of research out there that
suggests that AI use is going to rapidly grow in terms of overall technology demand.
Gardner: And, you know, we need to home in on how powerful and useful AI is. Nearly
everyone seems confident that there are going to be really important new use cases and very
powerful benefits. But we also need to focus on what it takes to get those results, and I think
some people may have skipped over that part.
Frey: Yes, absolutely. A lot of businesses are still trying to figure out the best uses of AI, and
the types of solutions within their infrastructure that either add business value, or speed up their
processes, or that save some money.
Gardner: And this explosive growth isn't replacing traditional IT. We still need the data centers
we're running now to keep doing what they're already doing. This is not a rip and
replace by any stretch. This is an add-on and perhaps even a different type of infrastructure
requirement given the high energy density, total power, and resulting heat requirements.
Frey: Absolutely. In fact, we constantly have customers coming to us asking both how does this
supplement the existing technology workloads that they are already running, and what do they
need to change in terms of the infrastructure to run these new workloads in the future?
Gardner: John, we’re seeing different countries approach these questions in different ways. We
do not have a clean room approach to deploying AI. We have it going into the existing public
infrastructure that serves cities, countries, and rural localities.
And so, how important is it to consider the impact -- not just from an AI capabilities requirement
-- but from a societal and country-by-country specific set of infrastructure requirements?
Frey: That’s a great question. We’re already
seeing evidence of jurisdictions looking at the
increasing power demand, and also water
demand. Some are either slowing down the
implementation or even pausing the
implementation for a period of time so that they
can truly understand the implications on the
utilities and infrastructure that these new AI
workloads are going to have.
Gardner: And, of course, some countries and regulatory agencies are examining how
sustainable our overall economy is, given the record-breaking levels of carbon still
being delivered into the atmosphere.
Frey: Absolutely, and that has been a constant focus. Certainly, technology like AI brings that
front and center from both a power and water perspective, but also from a social good
perspective.
If you think in the broadest use of the term sustainability, that’s what those jurisdictions are
looking at. And so, we’re going to see new permitting processes, I predict. We’re also going to
see more regulatory action.
Gardner: John, there’s been creep over the years as to what AI entails and includes -- from
traditional analytics and data crunching, to machine learning (ML), and now the newer large
language models and their ongoing inference demands. We’re also looking at tremendous
amounts of data, and the requirement for more data, as another important element of the
burgeoning demand for more resources.
How important is it for organizations to examine the massive data gathering, processing, and
storing requirements -- in addition to the AI modeling aspects -- as they seek resources for
sustainability?
Data efficiency delivers maximum effectiveness
Frey: It’s vital. In fact, when we think about how HPE looks at sustainable IT broadly, data
efficiency is the first place we suggest users think about improvement. From an AI perspective,
it’s increasingly about minimizing the data sets used to train the models.
For example, if you’re using off-the-shelf data sets, like from crawls of the entire internet around
the globe, and if your solutions are only going to operate in English, you can instantly discard
the data that have been collected that aren’t in the English language. If, for example, you’re
building a large language model, you don’t need the HTML and other programming code that a
crawler probably grabbed as well.
Getting the data pull right in the first place, before you do the training, is a key part of
sustainable AI, and then you can use only your customer’s specific data as you tune that model
as well.
By starting first with data efficiency -- and getting that data population as concise as it can be
from the early stages of the process -- you're driving efficiency all the way through.
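To make that data-efficiency step concrete, here is a minimal sketch of the kind of pre-training filter described above: keep only English-language records and strip the markup a web crawler would have captured, so the corpus is as small as it can be before any training starts. The library choices (BeautifulSoup and langdetect) and the record format are illustrative assumptions, not HPE tooling.

```python
# Illustrative pre-training corpus filter: discard non-English records and
# strip HTML so only useful text reaches the training pipeline.
# Library choices and thresholds are assumptions for this sketch.
from bs4 import BeautifulSoup
from langdetect import detect


def clean_record(raw_html):
    """Return plain English text for one crawled record, or None to discard it."""
    text = BeautifulSoup(raw_html, "html.parser").get_text(" ", strip=True)
    if len(text) < 200:           # too short to add value as training text
        return None
    try:
        if detect(text) != "en":  # drop non-English documents up front
            return None
    except Exception:             # undetectable language: discard as well
        return None
    return text


def build_training_corpus(crawl_records):
    """Yield only cleaned records, shrinking the data set before training begins."""
    for raw in crawl_records:
        cleaned = clean_record(raw)
        if cleaned is not None:
            yield cleaned
```

Filtering in this way shrinks both the storage footprint and the compute spent on data that would never improve the model.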
Gardner: So being wise with your data choices is an important first step for any AI activity. Do
you have any data points on how big of an impact data and associated infrastructure demands
for AI can have?
Frey: Yes, for the latest large language models that many people are using or familiar with,
such as GPT-4, there’s been research looking at the reported infrastructure stack that was
needed to train that. They’ve estimated more than 50 gigawatt hours of energy were consumed
during the training process. And that training process, by the way, was believed to be
somewhere on the order of about 95 days.
Now to put that level of power in perspective, that is about the same power that 2,700 U.S.
homes consume for a year using the US Environmental Protection Agency’s (EPA’s)
equivalency model. So, there’s a tremendous amount of energy that goes into the training
process. And remember, that’s only in the first 95 days of training for that model. Then the
model can be used for multiple years with people running inference problems against it.
In the same way, we can look at the water consumption involved. There are often millions of
gallons of water used in the cooling during such a training run. Researchers have also estimated that running a single 5- to 20-variable problem -- that is, doing inference with 5 to 20 variables -- consumes about 500 milliliters of water per inference run, roughly a 16-ounce bottle,
as part of the needed cooling. If you have millions of users running millions of problems that
each require an inference run, that’s a significant amount of water in a short period of time.
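As a rough illustration of that scale, the figures above can be turned into a back-of-envelope calculation. The per-home energy value below is simply backed out of the 50 gigawatt-hours and 2,700-homes figures already cited, and the daily inference count is a hypothetical example.

```python
# Back-of-envelope arithmetic for the figures cited above. The per-home
# annual energy value is backed out of the "50 GWh ~ 2,700 homes" statement,
# so treat it as illustrative rather than an official EPA input.
training_energy_kwh = 50_000_000        # ~50 GWh reported for training
per_home_kwh_per_year = 18_500          # assumed so the ratio lands near 2,700
print(training_energy_kwh / per_home_kwh_per_year)   # ~2,700 home-years of energy

water_per_inference_liters = 0.5        # ~500 mL per inference run, per the research cited
daily_inference_runs = 5_000_000        # hypothetical: millions of users, millions of runs
daily_water_liters = water_per_inference_liters * daily_inference_runs
print(f"{daily_water_liters:,.0f} liters of cooling water per day")  # 2,500,000 liters
```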
Gardner: These impacts then are on some of our most precious and valuable commodities:
carbon, water, and electricity. How is the infrastructure needed to support these massive AI
undertakings different from past data centers? Do such things as energy density per server rack
or going to water instead of air cooling need to be re-examined in the new era of AI?
Get a handle on AI workloads
Frey: It’s a great question. Part of the challenge, and why it comes up so much, is as we think
about these new AI workloads, the question becomes, “Can our existing infrastructure and
existing data centers handle that?”
Several things are pushing us to consider either new facilities or new co-location sites. One is rack density going up. Global surveys of rack densities show that the most commonly reported density today is about four to six kilowatts per rack. Yet we know that with AI training systems, and even inference systems, rack densities may run from 20 or 30 all the way up to 50 kilowatts per rack.
Many existing IT facilities aren’t made to
handle that power density at all. The other
thing we know is many of the existing facilities
continue to be air-cooled. They’re taking in
outside air, cooling it down and then providing
that to the IT equipment to remove the heat.
We know that when you start getting above 20
kilowatts per rack or so, air cooling is less
effective against some of those high-heat-producing workloads. You really may need to make a
shift to direct liquid cooling.
And again, what we find is so many data centers that exist today, whether they’re privately
owned or in a co-location space, don’t have the capability for the liquid cooling that’s required.
So that’s going to be another needed change.
And then the third thing here is that these workloads -- running both the training and the inference -- so often have accelerators in them. We’re seeing that the critical temperature at which those accelerators -- along with the central processing units (CPUs) -- have to be kept to run most effectively is actually dropping.
So at the same time that we have higher densities, higher heat generation, and therefore a need for more effective cooling, the required critical temperature of the most critical devices is dropping. These three elements put together are really what’s driving so much of the demand for our infrastructure to change in the future.
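As a rough sketch of that facility-level question, the snippet below checks a planned rack against the density figures mentioned above. The thresholds mirror the discussion (4 to 6 kW per rack as typical, roughly 20 kW per rack as the point where direct liquid cooling becomes attractive); the server wattage is a hypothetical example, not HPE guidance.

```python
# Does a planned AI rack exceed what a typical air-cooled hall is provisioned
# for? Thresholds follow the figures in the discussion; server wattage is a
# hypothetical example.
TYPICAL_AIR_COOLED_KW = 6.0
LIQUID_COOLING_THRESHOLD_KW = 20.0

def assess_rack(servers_per_rack: int, watts_per_server: float) -> str:
    rack_kw = servers_per_rack * watts_per_server / 1000.0
    if rack_kw <= TYPICAL_AIR_COOLED_KW:
        return f"{rack_kw:.1f} kW: fits a conventional air-cooled row"
    if rack_kw <= LIQUID_COOLING_THRESHOLD_KW:
        return f"{rack_kw:.1f} kW: needs upgraded power and air handling"
    return f"{rack_kw:.1f} kW: plan for direct liquid cooling"

# Example: a rack of accelerator servers drawing ~4 kW each
print(assess_rack(servers_per_rack=8, watts_per_server=4000))  # 32.0 kW
```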
Gardner: And this is not going to just impact your typical global 2000 enterprise’s on-premises
data centers. This is going to impact co-location providers, various IT service providers, and the
entire ecosystem of IT infrastructure-as-a-service (IaaS) providers.
Frey: Yes, absolutely. I will say that many of these providers have already started the transition
for their normal, non-AI workloads as server efficiency has dramatically improved, particularly in
terms of performance per watt and as rack densities have grown.
One of the ways that co-location providers
charge their customers is by space used, and
another way is by power consumption. So, if
you’re trying to do as much work as possible
for the same watt of power -- and you’re trying
to do it in the smallest footprint possible -- you
naturally will raise rack densities.
So, this trend has already started, but AI accelerates the trend dramatically.
Gardner: It occurs to me, John, that for 30 or more years, there was a vast amount of wind in the sails of IT and its evolution in the form of Moore’s law. The benefit of processor designs improving in efficiency and capability, and scaling rapidly over time, was often taken for granted in the economics of IT in general. And then, for the last 5 to 10 years, we’ve had advances in virtualization and soaring improvements in server utilization, followed by massive improvements in data storage capacities and management efficiencies.
But it now seems that even with all of that efficiency and those improved IT capabilities, we’re going in reverse. We face such high demands and higher costs because of AI workloads that the cost against the value is rapidly rising and demands more of our most expensive and precious resources.
Do we kiss goodbye any notion of Moore’s law? How long can these escalating costs continue for these newer compute environments?
Is it time to move on from Moore’s Law?
Frey: Those of us who are technologists, of course, love to find technology solutions to challenges. And as we’ve pushed on energy efficiency and performance per watt, we have in many cases seen an end to Moore’s law predicted.
But then we find new ways to develop higher-functioning processors with even better performance. We haven’t yet hit thresholds there that have stopped us. And I think that’s going to continue; we will keep growing performance per watt. That’s what all of the processor vendors are pushing for -- improving that performance-per-watt equation.
That trajectory is going to continue into the near future, at least. At the same time, though, when
we think more broadly, we have to focus on energy efficiency, so we literally consume less
power per device.
But as you look at human behavior over the past two decades, every time we’ve been able to
save energy in one place, it doesn’t mean that overall demand drops. It means that people get
another device that they can’t live without.
For example, we all now have cell phones in our pockets, which two decades ago we didn’t
even know we needed. And now, we have tablets and laptop computers and the internet and all
of the things that we have come to not be able to live without.
It’s gotten to the point that every time we drive these power efficiencies, there are new uses for
technology -- many of which, by the way, decarbonize other processes. So, there’s a definite
benefit there. But we always have to weigh that.
Is a technology solution always the right way to solve a challenge? And what are the societal and environmental impacts of that new technology solution, so that we can factor those in and make the best decisions?
Gardner: In addition to this evolution of AI technology toward productivity and per-watt efficiency, there are also market factors involved. If the total costs are too high, then the marketplace won’t sustain the AI solution on a cost-benefit basis. And so, as a business, if reducing cost is the only way to make solutions viable, that’s going to be what the market demands, and what your competitors are going to force on you, too.
The second market forces pillar is the compliance and regulatory factor. In fact, in May of 2024,
the European Union Energy Efficiency Directive kicks in. And so, there are powerful forces
around total costs of AI supply and consumption that we don’t have much choice over, that are
compelling facts of life.
Frey: Absolutely. In fact, one of the things we’re seeing in the market is a tremendous amount
of money being spent to develop some AI technologies. That comes with really hard questions
about what’s a proper return on investment (ROI) for that initial money spent to build and train
the models. And then, can we further prove out the ROI over the long-term?
Our customers are now wisely asking those very questions. We’re also, from an HPE
perspective, making sure that customers think about the ethical and societal consequences of
these AI solutions. We don’t want customers bringing AI solutions to market and having an
unintended consequence from a bias that’s discovered, or some other aspect around privacy
and cybersecurity that they had not considered when they built the solution.
And, to your point, there is also increasing interest in how to contend with regulatory constraints
for AI solutions as well.
Gardner: So, one way or another, you’re going to be seeking a fit-for-purpose approach to AI
implementations -- whether you want to or not. And so, you might as well start on that sooner rather than later.
Let’s move now toward ways that we can accomplish what we’ve been describing in terms of
keeping the AI services costs down, the energy demand down, and making sure that the
business benefits outweigh the total and real costs.
What are some ways that HPE -- through your research, product development, and customer
experiences -- is driving toward general business sustainability and transformation? How can
HPE be leveraged to improve and reduce risk specifically around the AI transformation journey?
Five levers for moving forward sustainably
Frey: One of the things that we’ve learned in 22 years or so of working with customers on sustainable technology is that there are five levers. And we
intentionally call them “levers” because we believe that all of them apply to every customer,
whether they have their IT workloads in the public cloud, a hybrid or private cloud, a bare-metal
environment, or whether they are on-premises, co-location, or even out on the edge.
We know that customers can drive efficiencies if they consider these levers. The first of the five is data efficiency, which we’ve talked about a little bit already. In the AI context, it’s first about making sure that the data sets you’re using are optimized before running the training.
When we process a bit of data in a training environment, for example, do we avoid processing it
again if we can? And how do we make sure that any data that we’re going to train for, or derive
from an inference, actually has a use? Does that data provide a business value?
Next, if we’re going to collect data, how do we
make sure that we make an intentional decision
on the front end about how long we’re going to
store that data, and how we’re going to store it?
What types of storage? Is it something we will need instantaneously? And we can choose from
high availability storage or go all the way down to
tape storage if it’s more of an archival or regulatory requirement to keep that data for a long
period of time. So, data efficiency is where we suggest we start, because making the right
decisions there flows down through all of the other aspects.
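A minimal sketch of that up-front storage decision might look like the following; the tier names and rules are assumptions for illustration, not an HPE product taxonomy.

```python
# Illustrative "decide up front how and how long to store it" logic.
# Tier names and rules are assumptions, not an HPE product taxonomy.
from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    needed_instantly: bool      # will inference/training read it on demand?
    retention_years: int        # how long must it be kept?
    regulatory_only: bool       # kept purely for compliance/archival reasons?

def choose_tier(ds: DataSet) -> str:
    if ds.needed_instantly:
        return "high-availability flash"
    if ds.regulatory_only or ds.retention_years >= 7:
        return "tape / archival"
    return "lower-cost object storage"

for ds in [
    DataSet("tuning corpus", needed_instantly=True, retention_years=1, regulatory_only=False),
    DataSet("audit logs", needed_instantly=False, retention_years=10, regulatory_only=True),
]:
    print(ds.name, "->", choose_tier(ds))
```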
The second lever is software efficiency and this, from a broader technology perspective, is
focused on writing more efficient software applications. How do we reduce the carbon intensity
of software applications? And how do we use software to drive efficiency?
From an AI perspective, this gets into model development. How do we develop more efficient
models? How do we design these models, or leverage existing models, to be as efficient as
possible and to use the least amount of compute capability, storage capability, and networking
capability to operate most efficiently?
Software efficiency even includes things such as the efficiency of the coding itself. Can it be written in a compiled language rather than a non-compiled language, so that it takes less power and CPU capability to run that software? And HPE brings many tools to the market in that environment.
Next, how do we use software to drive efficiency? Some of the things we’re seeing lots of
interest in with AI are things like predictive maintenance and digital twins, where we can actually
use software tools to predict things like maintenance cycles or failures, even things like inferring
operating and buying behaviors. We see these used in terms of the design of data centers. How
do we shift workloads for most efficient and lowest carbon operation? All of those aspects are in
software efficiency.
And then we move to the hardware stack and that means equipment efficiency. When you
have a piece of technology equipment, can you have it do the most amount of work? We know
from global industry surveys that technology equipment is often very underutilized. For a variety
of reasons, there’s redundancy and resiliency built into the solutions.
But as we begin moving more into AI, we tend
to look at hardware and software solutions that
deliver high levels of availability across the
equipment infrastructure. On one hand, by its
very nature, AI is designed to run this
equipment at higher levels of utilization. And
there is huge demand, particularly in terms of
training, on single large workloads that run across a variety of devices as well. But equipment
efficiency is all about attaining the highest levels of utilization.
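To illustrate the utilization point with simple, hypothetical numbers, consider how many servers the same sustained load requires at low versus high utilization.

```python
# Equipment-efficiency lever: the same work on fewer, better-utilized servers.
# All numbers are hypothetical.
work_units_needed = 800          # arbitrary units of sustained load
capacity_per_server = 100        # units one server can deliver at full load

servers_at_15_pct = work_units_needed / (capacity_per_server * 0.15)
servers_at_70_pct = work_units_needed / (capacity_per_server * 0.70)
print(round(servers_at_15_pct), "servers at 15% utilization")   # ~53
print(round(servers_at_70_pct), "servers at 70% utilization")   # ~11
```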
Then, we move to energy efficiency. And this is about how to do the most amount of work per
input watt of power so that the devices are as high performing as possible. We tend to call that
being energy effective. Can you do the most amount of work with the same input of energy?
And, from an AI perspective, it’s so critical because these systems consume so much power
that often we’re able to easily demonstrate the benefits for an input watt of power or volume of
water that we’re using from a cooling perspective.
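A tiny, hypothetical comparison shows what being energy effective means in practice -- work delivered per input watt rather than raw throughput alone. The two system profiles below are example values, not benchmark results.

```python
# "Energy effective": most work per input watt. System profiles are
# hypothetical examples, not measured benchmarks.
systems = {
    "current generation": {"throughput": 1_200, "watts": 800},   # inferences/sec, input power
    "previous generation": {"throughput": 700, "watts": 650},
}
for name, s in systems.items():
    print(name, round(s["throughput"] / s["watts"], 2), "inferences per watt-second")
```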
And finally, resource efficiency, and that’s about how we run technology solutions so that they need the fewest resources. Those include auxiliary cooling or power conversions, or even the human resources that it takes to run these solutions.
So, from an AI context, again, we’ve talked about raising power densities and how we can shift from air directly to water. Cooling is going to be so critical. And it turns out that direct liquid cooling uses a much lower percentage of power for cooling compared to some of our air-cooled infrastructure. You can drop your power consumption dramatically by moving to direct liquid cooling.
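As a hedged illustration of that cooling-overhead point, the following compares total facility draw under assumed overhead fractions for air versus direct liquid cooling; the percentages are example values, not measured results.

```python
# Fraction of facility power spent on cooling drops when moving from air to
# direct liquid cooling. Overhead fractions are assumed example values.
it_load_kw = 1_000
cooling_overhead = {"air-cooled": 0.40, "direct liquid cooled": 0.10}
for method, overhead in cooling_overhead.items():
    total = it_load_kw * (1 + overhead)
    print(f"{method}: {total:,.0f} kW total for {it_load_kw} kW of IT load")
```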
It's the same way from a staffing perspective. As you begin having analytics that allow you to
monitor all these variables across your technology solutions -- which is so common in an AI
solution – you need fewer staff to run those solutions. You also gain higher levels of employee
satisfaction because they can see how the infrastructure is doing and a lot of the mundane
tasks, such as constant tuning, are being made more efficient.
Gardner: Well, this drive for sustainability is clearly a non-trivial undertaking. Obviously, when
planning out efficiencies across entire data centers, it continues over many years, even
decades.
It occurs to me, John, that smaller companies that may want to do AI deployments themselves -- to customize their models and their data sets for particular uses, and so to develop proprietary, advantage-based operations -- are going to be challenged when it comes to achieving AI efficiently.
At the same time, the large hyperscalers, which are very good at building out efficient data
center complexes around the globe, may not have the capability to build AI models at the
granular level needed for the vertical-industry customization required of smaller companies.
So, it seems to me that an ecosystem approach is going to shake out where these efficiencies
are going to need to manifest. But it’s not all going to happen at the company-by-company level.
And it can’t necessarily happen at the cloud provider-by-cloud provider level either.
Do you have any sense of how we should expect an AI services ecosystem – one that reaches
a balance between needed customization and needed efficiency at scale – will emerge that can
take advantage of these essential efficiency levers you described?
An AI ecosystem evolves
Frey: Yes, exactly what you describe is what we see happening. We have some customers that want to make the investments in high-performance computing and in the development and training of their own AI solutions. But the customers that want to make that type of investment are very few.
Other customers want to access an AI
capability and either have some of that
expertise themselves or they want to
leverage a vendor such as HPE’s
expertise from a data science
perspective, from a model development
perspective, and from a data efficiency
perspective. We certainly see a lot more
customers that are interested in that.
And then there’s a level above that. Customers that want to take a pre-trained model and just
tune it using their own specific data sets. And we think that segment of the population is even
broader because so many highly valuable uses of AI still require training on task-specific or
organization-specific data.
And finally, we see a large range of customers that want to take advantage of pre-trained, pre-
tuned AI solutions that are applicable across an entire industry or segment of some kind. One of
the things that HPE has found over the years as we’ve built all portions of that stack and then
partnered with companies is that having that whole portfolio, and having the expertise across
them, allows us to look both downstream and upstream to what the customer is looking at. It
allows us to help them make the most efficient decisions because we look across the hardware,
software, and that entire ecosystem of partners as well.
It does, in our mind, allow us to leverage decades worth of experience to help customers attain
the most efficient and most effective solutions when they’re implementing AI.
Gardner: John, are there any leading use cases or examples that we can look to that illustrate
how such efficiency makes an impactful difference?
Examples of AI increasing efficiency, productivity
Frey: Yes. I’ll give you just a couple of examples. Obviously, early adopters of some types of AI systems have been in healthcare. A great example is looking at x-rays. It turns out that, with ML, you can actually do a pretty good job of having an ML system scan an x-ray and make a decision: “Is that a bone fracture or not?” for example. And if it’s unsure, it passes that x-ray to a radiologist who can take a deeper look. You can tune the system very, very well.
There’s a large population of x-ray imagery that creates some very clear examples of something
that is a fracture or something that is not, for example. There have been lots of studies looking
at how these systems perform against the single radiologist looking at these x-rays as well.
Particularly, when a radiologist spends their day
going from x-ray to x-ray to x-ray, there can be some
fatigue associated with that, so their diagnostic
capabilities get better when the system does a first-
level screen and then passes the more specific
cases to the radiologist for a deeper analysis. If
there is something that’s not really clear one way or
the other, it lets the radiologist spend more time on
it. So, that’s a great one.
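A minimal sketch of that triage pattern, with a placeholder scoring function standing in for a trained imaging model, might look like this; the threshold and study identifiers are hypothetical, and this is an illustration of the routing idea rather than a clinical system.

```python
# Triage pattern described above: the model handles clear cases and routes
# low-confidence studies to a radiologist. Model, threshold, and scorer are
# placeholders, not a clinical system.
from typing import Callable

def triage(studies: list[str],
           score_fracture: Callable[[str], float],
           confident: float = 0.90) -> dict[str, list[str]]:
    routed = {"auto: fracture": [], "auto: no fracture": [], "radiologist review": []}
    for study in studies:
        p = score_fracture(study)             # model's probability of a fracture
        if p >= confident:
            routed["auto: fracture"].append(study)
        elif p <= 1 - confident:
            routed["auto: no fracture"].append(study)
        else:
            routed["radiologist review"].append(study)  # ambiguous: human reads it
    return routed

# Toy scorer standing in for a trained image model
fake_scores = {"xray-001": 0.97, "xray-002": 0.05, "xray-003": 0.55}
print(triage(list(fake_scores), fake_scores.get))
```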
We’re seeing a lot of interest in manufacturing processes as well. How do we look at something
using video and video analytics to examine parts or final assemblies coming off of an assembly
line and say, “Does this appear the way it’s supposed to from a quality perspective?” “Are there
any additional components or are there any components missing,” for example.
It turns out those use cases actually do a really good job from a power-performance perspective and from an ROI perspective. If you dive deeper into natural language processing (NLP), we want to train tools that can answer basic customer questions or allow the customer to interact
from a voice perspective with a service tool that can provide low-level diagnostics for a customer or route them appropriately, for example. In some cases, it can even give them the right answer in both voice and typed form.
In fact, you’re now seeing some of those come out in very popular software applications that a
variety of people around the world use. We’re seeing AI systems that predict the next couple
words in a sentence, for example, or allow for a higher level of productivity. I think those again
are still proving their case.
In some cases, users see them as a barrier, not as an assistant, but I think the time will come,
as those start getting more and more accurate, when they’re going to be really useful tools.
Gardner: Well, it certainly seems that given the costs and the impacts on carbon load, on infrastructure, and on the demand for skills to support it, it’s incumbent on companies big and small to be quite choosy about which AI use cases and problems they seek to solve first. This just can’t be a solution in search of a problem. You need to have a very good problem that will deliver very good business results.
It seems to me that businesses should carefully evaluate where they devote resources and use
these intelligence capabilities to the fullest effect and pick those highly productive use cases
and tasks earlier rather than later.
Define and refine your sustainable IT strategy
Frey: Yes, absolutely. Let’s not have a solution in search of a problem. Let’s find the best business challenges and opportunities to solve, and then look at what the right strategic approaches to solving them are. What’s the ROI for each of those solutions? What are some of the unintended consequences, like a privacy issue or a bias issue, that you want to prevent? And then, how do we learn from others that have implemented those tools, and partner with vendors that have a lot of historical competency in those topics and have had many customers bring those solutions to market?
So, it’s really finding the best solution for the business challenge and being able to quantify that
benefit. One of the things that we did really early on, as we were developing our sustainable IT approach, was to recognize that so many customers didn’t know how to get started.
We offered a free workbook for customers called Six Steps for Developing a Sustainable IT Strategy. One of the things it addresses -- and this comes up in the majority of AI conversations as well -- is that customers couldn’t measure the impact of what they had today because they didn’t have a baseline. They implemented a technology solution and then said, “That must be much better because we’re using technology.” But without measuring the baseline, they weren’t able to quantify the financial, environmental, and carbon implications of the new solution.
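A simple, hypothetical baseline comparison shows why that "before" measurement matters -- without it there is nothing to subtract the new solution's numbers from. All figures below are made-up example values.

```python
# Without a "before" measurement, the effect of the new solution cannot be
# quantified. All numbers are hypothetical examples.
baseline = {"kwh_per_month": 42_000, "cost_usd": 5_900, "tco2e": 16.0}
with_new_solution = {"kwh_per_month": 31_000, "cost_usd": 4_400, "tco2e": 11.8}

for metric in baseline:
    delta = with_new_solution[metric] - baseline[metric]
    pct = 100 * delta / baseline[metric]
    print(f"{metric}: {delta:+,.1f} ({pct:+.1f}%)")
```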
We help customers along this journey by helping them think about this strategically, to get all
the appropriate organizations within their company that need to be part of making a decision
about these solutions together. For example, if you’re worried about cybersecurity implications,
make sure the cybersecurity team is part of this project team. If you’re worried about bias
implications, make sure that your legal teams are involved, along with anyone else who’s looking at
employee or customer privacy. If you’re thinking about solutions that are going to decarbonize
or save power, for example, make sure you have your global workplace teams involved and
help quantify that, and your sustainability teams if you’re going to talk about carbon mitigation as
part of all of this.
It’s about having the right organizations involved, looking at all the issues that can help inform the decisions, and examining whether the solution really is sustainable. Does it have both a financial and an environmental ROI that makes sense?
Gardner: It sure seems that emphasizing AI sustainability should be coming from the very top
of the organization, any organization, and in very loud and impressive terms. Because as AI
becomes a core competency -- whether you source it or do it in-house -- it is going to be an
essential business differentiator. If you’re going to do AI successfully, you’re going to need to do
it sustainably. And so, AI sustainability seems to be a pillar of getting to an AI outcome that
works long-term for the organization.
As we move to the end of our very interesting discussion, John, what are some resources that
people can go to? How can they start to consider what they’ve been doing around sustainability
and extending that into AI, or examine what they’re doing with AI and make sure that it conforms
to the concepts around sustainability and the all-important objectives of efficiency?
Frey: The first one, which we’ve already talked about, is to make sure you have a sustainable IT strategy. It’s part of your overarching technology strategy, and now that it includes AI, it really gets accelerated by AI workloads.
Part of that strategy is getting stakeholders together so that folks can help look for the blind
spots and help quantify the implications and the opportunities. And then, look across the entire
environment -- from public cloud to edge, hybrid cloud, and private cloud in the middle -- and
look to those five levers of efficiency that we talked about. In particular, emphasize data
efficiency and software efficiency from an AI perspective.
And then, look at it all across the lifecycle, from the design of those products to the return and end-of-life processes, because when we think about IT lifecycles, we need to consider all of the aspects in the middle.
That drives such things as how you procure the most efficient hardware in the first place and provide the most efficient solutions. How do you think about tech refresh cycles, and why are tech refresh cycles different for compute, storage, networking, and AI? How do all those pieces interconnect to impact tech refresh cycles?
And from an HPE perspective, one of the things that we’ve done is published a whole series of
resources for customers. We mentioned the Six Steps for Developing a Sustainable IT Strategy
workbook. But we also have specific white papers as well on software efficiency, data efficiency,
energy efficiency, equipment efficiency, and resource efficiency.
We make those freely available on HPE’s website. So, use the resources that exist, partner with
vendors that have core capability and core expertise across all of these areas of efficiency, and
spend a fair amount of time in the development process trying to ensure that the ROI, both financially and from a sustainability perspective, is as positive as possible when implementing these solutions.
Gardner: I’m afraid we’ll have to leave it there. We’ve been exploring how AI deployments will
demand vast resources -- energy, water, skills, and upgraded or wholly new data center and
electrical grid infrastructures.
And we’ve learned that businesses can predict the proper, fit-for-purpose use and deployment of AI only by calculating the true and total expected costs in the fullest sense -- and that doing it in the most efficient manner might be the only way to go about successful AI deployments.
And so, please join me in thanking our guest, John Frey, Director and Chief Technologist for
Sustainable Transformation at HPE. Thank you so much, John.
Frey: Thank you for letting me come on and share this expertise.
Gardner: You bet. And thanks as well to our audience for joining this sponsored BriefingsDirect
discussion on the best path to sustainable AI.
I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of
HPE-sponsored discussions. Thanks again for listening. Please pass this along to your IT
community and do come back next time.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard
Enterprise.
Transcript of a discussion on how energy use and resources management have emerged as key
ingredients of artificial intelligence adoption success -- or failure. Copyright Interarbor Solutions, LLC,
2005-2024. All rights reserved.
You may also be interested in:
• HPE Accelerates its Sustainability Goals While Improving the Impact of IT on the Environment
and Society
• How HPE Pointnext Tech Care changes the game for delivering enhanced IT solutions and
support
• How HPE Pointnext ‘Moments' provide a proven critical approach to digital business
transformation
• How to industrialize data science to attain mastery of repeatable intelligence delivery
• The journey to modern data management is paved with an inclusive edge-to-cloud Data Fabric
• The IT intelligence foundation for digital business transformation rests on HPE InfoSight AIOps
• How Digital Transformation Navigates Disruption to Chart a Better Course to the New Normal
• How the right data and AI deliver insights and reassurance on the path to a new normal
• How IT modern operational services enables self-managing, self-healing, and self-optimizing