Since when has IBM been in the business of making technology predictions? (Show quote by Watson.)
But we've also gotten help from other very successful IT business experts on high-risk predictions... (show the other three quotes).
While in retrospect these quotes are pretty funny, it's useful to reflect on these technology predictions. All four of these experts have been very successful in technology, but clearly some of their predictions were proven wrong! My take-home message is that it's hard to make these predictions, to have the insight that will prevail over many years. That's why you need to evaluate risky predictions often: revise them frequently based on new data, new facts, and new markets.
Quote information:
"Prediction is very difficult, especially about the future," said the distinguished scientist who lobbied for peaceful atomic policies and won the first U.S. Atoms for Peace award in 1957.
"Prediction is difficult, especially about the future" attributed to:
•Niels Bohr (Barlow 1997) [2290]
•Mark Twain (Flake 2000) [1580]
•Yogi Berra (Caswell 2000) [411]
[n] = number of web pages found by Google; attributions compiled from http://www.eeb.cornell.edu/Ellner/ESA01.pdf
also said to be a Chinese proverb
Other Quotes:
"Computers in the future may weigh no more than 1.5 tons." --Popular Mechanics, forecasting the relentless march of science, 1949
"I think there is a world market for maybe five computers." --Thomas Watson, chairman of IBM, 1943
"I have traveled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won't last out the year."--The editor in charge of business books for Prentice Hall, 1957
"But what ... is it good for?" --Engineer at the Advanced Computing Systems Division of IBM, 1968, commenting on the microchip.
"There is no reason anyone would want a computer in their home." --Ken Olsen, president, chairman and founder of Digital Equipment Corp.,1977
"640K ought to be enough for anybody." --Bill Gates, 1981
"The commercial market for computers will never exceed a half-dozen in the US." --Howard Aiken, 1945
"This 'telephone' has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us." --Western Union internal memo, 1876.
"The wireless music box has no imaginable commercial value. Who would pay for a message sent to nobody in particular?" --David Sarnoff's associates in response to his urgings for investment in the radio in the 1920s.
"Who the hell wants to hear actors talk?"--H.M. Warner, Warner Brothers, 1927.
This chart should typically be hidden.
There are three pairwise examples of convergence, as illustrated by the red ellipses. As people and processes emerge as first-class abstractions (of the same importance as information, i.e., documents), there will be increasing integration between the abstractions, leading to convergence.
This chart should typically be hidden.
Use this chart if you want to provide a definition of the terms model, tool, method and architecture
Methods are the recipes that are used in the transformation process.
Models are defined using a modeling language that directly supports the system being modeled. The term “system” is general, and can refer to IT systems as well as businesses.
Tools support the methods and modeling language that are used in the transformation process.
Architecture – only certain components of the service-oriented architecture are expected to be model-driven
1) Components and modeling go hand in hand; the more “componentized” something is, the easier and more natural it is to model
2) Modeling has been done in the business domain and in the IT domain, with overlap in the BPM area. Taking the union of the models leads to the 4-level model we have here.
Example:
Strategy: Business objective: Order fulfillment within 3 days
Process: Place, Plan, Fulfill
Executed by (late binding): Siebel (Place), i2 (Plan), SAP (Fulfill)
All done (orchestrated) through XML and message brokering in a SOA
Monitor – not only IT, but business constructs also.
Example: Batelle (a Yantra customer)
Batelle’s RFQ through PO process
Batelle was shown a “generic” RFQ process
asked for it to be modified: 1) Any $ amount > X requires approval from person Y and 2) any order from department ABC must get double approval (funding validation check)
Yantra ran a simulation that was based on their production code base and modified the process on the spot by manipulating the business process model
Business intent (the business owner specified how they wanted the process to work) was captured, translated, simulated, and made deployable in minutes
Biz strategy – improve the efficiency of the RFQ process; biz objective – reduce the cycle time
Best Buy – runs all their orders through Yantra, proving the performance and scalability of the solution
Shared services – once a service is created, it can be used in multiple business processes
Open Standards – provide a means by which there is a well-defined way to create and deploy new services
Loosely coupled - SOAs will provide a flexible environment in which services can be composed into business processes. An important new notion in SOAs is late binding – the ability to connect the services at run time (vs. “build” time).
SOAs will probably be implemented with web services, and they are clearly being accelerated by the interest in web services.
Foreshadowing: given the right tools to control the web-service components of an SOA, and assuming business processes are implemented using web services, it becomes possible for businesses to change their processes very rapidly (see the sketch below).
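To make the late-binding idea concrete, here is a minimal sketch (illustrative only; the registry, service names, and endpoints are assumptions, not part of any IBM product), using the order-fulfillment example above: the process steps stay fixed, while the concrete service behind each step is resolved at run time rather than wired in at build time.

```python
# Minimal late-binding sketch: the process is defined in terms of abstract
# service names; concrete providers are resolved from a registry at run time.
from typing import Callable, Dict

# Hypothetical registry mapping abstract step names to concrete service calls.
# In a real SOA these would be web-service endpoints discovered via open
# standards; here they are plain functions for illustration.
registry: Dict[str, Callable[[dict], dict]] = {
    "Place":   lambda order: {**order, "placed": True},     # e.g., Siebel
    "Plan":    lambda order: {**order, "planned": True},    # e.g., i2
    "Fulfill": lambda order: {**order, "fulfilled": True},  # e.g., SAP
}

def run_process(step_names, order):
    """Orchestrate the business process by binding each step at run time."""
    for name in step_names:
        service = registry[name]   # late binding: resolved now, not at build time
        order = service(order)
    return order

# The process definition is just data, so it can be changed quickly:
print(run_process(["Place", "Plan", "Fulfill"], {"id": 42}))
```

Swapping a provider (say, a different fulfillment service) only changes a registry entry, not the process definition, which is the flexibility the foreshadowing point refers to.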
Key enabler of on demand enterprises
New software & consulting opportunities will emerge around modeling, monitoring and management of business
There will be decreased time-to-value and increased ROI for new IT deployments
Businesses will become more adaptable and flexible
It will increase the value of service-oriented architectures and accelerate their adoption
Possible long term scenarios (thought provokers)
Extension of the reference model to include aspects of strategy
Industry standardization of the reference model
Emergence of a “BDA” industry
CBMs (Component Business Models) have strong commonality among similar businesses (i.e., CBMs look almost exactly alike) prior to “attribution”. This suggests there are excellent opportunities for reuse at many levels of the stack
Supercomputing Roadmap
In the next 20 years, the growth of compute power will correspond to hundreds of millions of years of evolution. Deep Blue had the compute power (8 TF in 1997) of a lizard brain.
ASCI Red Option at 1TF, ASCI Red at 3 TF, ASCI Blue Pacific at 4TF, ASCI White at 12TF, ASCI Q at 30TF, ASCI Purple at 100TF.
BG/L at 360 TF, BG/P at 1 petaflop.
By 2014 or 2015, a supercomputer (in 2020 for "PCs") will have the compute power of a human brain. While it is difficult to compare brain operations to computer operations (various types of estimates are used, such as the processing of retinal cells extrapolated up to the volume of the brain), it is clear that the computers of the future will have enormous capabilities, the uses for which we have only just begun to explore. Special-purpose machines for chess, molecular chemistry computations, and protein folding only begin to touch the possibilities.
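One commonly cited way such brain-scale estimates are made is a rough, Moravec-style retina extrapolation (the numbers below are assumptions for illustration only, not figures from this deck): if retinal processing corresponds to roughly $10^9$ operations per second, and the brain's neural volume is on the order of $10^4$ to $10^5$ times that of the retina, then

$$10^{9}\ \text{ops/s (retina)} \times 10^{4\text{–}5} \approx 10^{13}\text{–}10^{14}\ \text{ops/s, i.e., roughly 10–100 TF.}$$

Other estimation methods differ by orders of magnitude, which is exactly why the text stresses how difficult the comparison is.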
A good chart for summarizing the power problem. Even today, in high-end processors with their several hundred million transistors, not all the transistors can simultaneously be operated at full speed due to power limitations (Green Statement above).
This trend will continue, with the capability by 2010 of integrating several billion transistors on a chip – only a fraction will be usable at any particular instant of time.
The diagram shows the distribution of “L effective”, the channel length of a transistor. Smaller Leff (moving to the left) means higher speeds, and higher selling prices. However, reducing Leff also increases subthreshold leakage power – causing the total power to exceed the power limit (yellow horizontal line). As a result, the fastest chips may not be usable because they consume too much power! This leakage power effect has come into play over the past few years only – as a result of the extremely small dimensions we are now dealing with in semiconductor manufacturing.
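As a first-order model (a standard textbook relation, not taken from the chart itself), total chip power is the sum of switching power and leakage power, and it is the leakage term that grows so quickly at small Leff and low threshold voltage:

$$P_{\text{total}} \approx \underbrace{\alpha\, C\, V_{dd}^{2}\, f}_{\text{switching}} \;+\; \underbrace{V_{dd}\, I_{\text{leak}}}_{\text{leakage}},\qquad I_{\text{leak}} \propto e^{-V_{th}/(n\,kT/q)}$$

Shrinking Leff (and with it the threshold voltage) raises the achievable frequency f but increases the leakage current exponentially, which is why the fastest parts on the left of the distribution can exceed the power limit.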
Moore’s Law scaling will continue to enable more and more transistors
Different workloads (compute tasks) make very different demands on the hardware – thus no single, conventional design is “optimum” from a power and performance perspective
We need to control power by selectively using the transistors and functions on the chip
We can better accommodate different workloads (in performance / power terms) by selectively using functions on the chip
As a result, new chip designs, geared towards both power management and workload management, will emerge
Historically, microprocessor operating frequencies have increased at a compound annual growth rate of ~35% (from 1998 to 2003). However, with the challenges of power/heat dissipation, the rate of frequency growth has slowed to ~15% CAGR (less than half the historic rate of improvement). New cooling technologies and designs are required for future microprocessors.
The reduction in the rate of frequency growth is changing the way products are differentiated in the marketplace. In the past PC companies marketed frequency growth as the main performance driver; today, functions such as integrated WiFi and extended battery life are highlighted.
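A quick compounding check (simple arithmetic, for illustration) shows how large the gap between those two growth rates becomes over a typical product cycle:

$$1.35^{5} \approx 4.5\times \quad\text{vs.}\quad 1.15^{5} \approx 2.0\times\ \text{frequency growth over five years.}$$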
There are other ways to gain performance improvements without solely shrinking the device with lithography. Some examples are shown here. These are all leadership technologies developed by IBM.
The speed at which signals can propagate through the on-chip wiring is determined by the resistance and capacitance of the wiring. One major lever for "speeding up the wires", lowering their resistance, was the motivation for moving from aluminum to copper a few years ago. The other, low-k dielectrics (the insulator that surrounds the on-chip wires), lowers the wiring capacitance, which also improves propagation delay. Transistors can be made faster by increasing electron mobility and by reducing parasitic capacitance. SOI reduces parasitic source and drain capacitance. Strained silicon and HOT both increase electron mobility. SiGe results in higher-speed bipolar transistors (all the other transistors mentioned, and the predominant transistors in use, are FETs, or field-effect transistors). More details are below.
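The underlying first-order relation (standard wiring-delay scaling, not specific to any IBM process): wire delay scales with the product of resistance and capacitance, so both levers attack the same term,

$$t_{\text{wire}} \propto R\,C,\qquad R \propto \frac{\rho\,L}{A},\qquad C \propto k\,\varepsilon_0 \times (\text{geometry}).$$

Copper lowers the resistivity ρ relative to aluminum (roughly 1.7 vs. 2.7 µΩ·cm), and low-k dielectrics lower k below the ~3.9 of conventional SiO2; both reduce the RC product and hence the propagation delay.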
Copper - Fishkill, N.Y., September 22, 1997 - IBM has announced that it has developed a new semiconductor manufacturing process that enables the company to shrink electronic circuitry to smaller dimensions and fit more computer logic, or "intelligence", into a single chip. The copper process was developed through a close collaboration between IBM's Research and Microelectronics divisions. This technology, called CMOS 7S, is the first to use copper instead of aluminum to create the circuitry on silicon wafers. This represents a major milestone in semiconductor technology. While copper has long been recognized as a superior electrical conductor, it has been difficult to adapt to semiconductor manufacturing, leaving aluminum as the material of choice for over thirty years.
SOI - August 3, 1998 - IBM announced it has perfected a process for building high-speed transistors that can be used to deliver higher performance microchips for servers and mainframes, as well as more power-efficient chips for battery-operated handheld devices. This technology, called "silicon-on-insulator" (SOI), represents a fundamental advance in the way chips are built. SOI and other advanced chip technologies will enable more powerful voice-recognition software to be broadly used in home computers, development of smaller cell phones with batteries lasting many hours longer than they do today and creation of entire new classes of portable devices for accessing the Internet.
Silicon Germanium - Significantly better performance is achieved by using Silicon Germanium material instead of silicon on bulk FETs. IBM leads the industry by a wide margin in the use of this technology.
Strained silicon - On June 8, 2001, IBM announced it has pioneered a new form of silicon -- called strained silicon -- to boost chip speeds up to 35 percent. Scientists at IBM have discovered a breakthrough method to stretch silicon, the fundamental material at the heart of microchips, that can speed the flow of electrons through transistors, increasing semiconductor performance and decreasing power consumption in semiconductors.
Low-k Dielectric (April 3, 2000 - E. Fishkill) -IBM perfects new technique for making high-performance microchips
IBM on this day announced that it has developed a new method for building microchips that can deliver up to a 30 percent boost in computing speed and performance.
IBM's new manufacturing technique uses a material known as a "low-k dielectric" to meticulously shield millions of individual copper circuits on a chip, reducing electrical "crosstalk" between wires that can hinder chip performance and waste power.
The company is putting the technology to work immediately, designing custom chips that meet the high performance and low power consumption demands of next-generation networking equipment and Internet servers. The first chips built with this new process are expected to be available next year (2001).
Hybrid-Orientation Technology (HOT) – combines regions of silicon with different crystal orientations on the same wafer, so that n-type and p-type transistors can each be built on the orientation that gives them the best carrier mobility.
FinFET Double Gate – 12/3/01 - IBM today announced that it has made advancements in an alternate type of transistor that doubles the gates on a chip and could lead to major performance, function and power consumption improvements in semiconductors within several years. So what exactly is this double-gate transistor?
First, a quick lesson in semiconductors, also called microchips. Microchips are made up of millions of transistors, or electrical on/off switches. A chip's performance depends largely on each transistor's ability to switch on and off quickly and completely using the least amount of power. Within a transistor, an element called a "gate" controls the electrical flow through the transistor. As transistors continue to get smaller and smaller, it becomes more difficult for the "gate" to effectively control switching, which can affect the chip's performance. Enter IBM's double-gate transistor. It surrounds the channel with two gates, doubling control of the current and enabling significantly smaller and faster lower-power circuits. Many in the industry have long studied it, but problems like electrical leakage, high energy demands and poor electrical flow have hampered previous experimental designs. IBM researchers have found ways to overcome the problems associated with the double-gate transistor, moving it from a purely theoretical realm to a structure that shows potential for actual use in chips in the future.
A key implication is that power consumption management will be as important as designing for performance in systems going forward.
Future semiconductor performance improvements will increasingly depend on innovations in materials and device structures, as opposed to ongoing, “conventional” scaling.
System-level performance will increasingly rely on integration over the entire stack, from semiconductors at the bottom to applications at the top. Innovative technologies, combined with novel architectures and circuit design (e.g., multiple cores/threads on a single chip), well-integrated with the software stack, will differentiate system-level performance.
Power dissipation is limiting the performance of CMOS technologies; aggressive power-management techniques and new thermal solutions are required to enable future generation microprocessors and systems.
Autonomic computing systems are self-configuring, self-healing, self-optimizing and self-protecting.
Self-configuring systems increase IT responsiveness/agility
Self-healing systems improve business resiliency
Self-optimizing systems improve operational efficiency
Self-protecting systems help secure information and resources
Major Points to make on this chart:
To address the questions just raised, the capabilities customers require are:
Automation of transaction service levels
Automation of identities and the service levels around how quickly identities are provisioned
Automation of service levels based on business practices and policies
Speaking points for this chart:
To address the challenges of building an on demand IT infrastructure, IBM delivers automation capabilities for on demand operating environments. We have taken the key challenges I mentioned and have evolved our solutions to directly address those requirements.
Let’s look at automation of service levels for transactions… in our question from the last chart, the ATM network may have service level measurements of how quickly a user’s transaction completes – since the bank may start losing customers if the transaction takes too long. This transaction is made up of several pieces. The ATM transaction consists of steps going from the ATM, through the network to a communications server, to a database, an application that processes transactions, back through the communications server through the network to your ATM. Any step along that chain may have a performance or availability issue which will impact the customer’s view of his entire ATM transaction. Being able to understand the path of the entire transaction and identify the source of the problem is critical to meeting your service level targets that help ensure customer satisfaction. Being able to manage this relationship between the user – the ATM customer – and the business process – processing the desired transaction – is a critical element of being an on demand business.
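As a toy illustration of what "understanding the path of the entire transaction" means in practice (this is a hypothetical sketch with made-up hop names and timings, not an IBM product API), imagine per-hop response times being collected and compared against the end-to-end target:

```python
# Hypothetical per-hop timings (ms) for one ATM transaction; the names
# mirror the chain described above and are illustrative only.
hops = {
    "ATM -> network": 40,
    "network -> communications server": 25,
    "communications server -> database": 180,
    "database -> transaction application": 60,
    "return path to ATM": 70,
}

sla_target_ms = 300  # assumed end-to-end service-level target

total = sum(hops.values())
worst_hop = max(hops, key=hops.get)

print(f"end-to-end: {total} ms (target {sla_target_ms} ms)")
if total > sla_target_ms:
    # Knowing the decomposition tells you *where* to act,
    # not just that the overall service level was missed.
    print(f"SLA missed; largest contributor: {worst_hop} ({hops[worst_hop]} ms)")
```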
Equally critical are the relationships between users and resources. In our retail example, being able to measure the service levels provided to users when they request access to applications or data, or when they just need to have a password reset, is a good example of the importance of the relationships between the IT resources and your users.
Finally, there are service levels to meet in the relationships between processes and resources. IBM provides your business with a single, integrated view of all business processes and enables IT resources to be managed holistically across multiple business processes. By optimizing business processes, you can maximize the business value of your IT infrastructure. Like the coffee shop example from the previous chart, having resources deployed with the right priorities is key to helping ensure that business processes are getting the resources they need to execute when they need to while meeting service level targets. Therefore, understanding the relationship between your IT resources and the business processes they support is a crucial capability.
Transition to next chart: Now let’s see what IBM can provide to help you on this journey to automation….
*****
Building an autonomic IT environment will not happen overnight, and not only because much autonomic technology has yet to be invented. Companies simply can't afford to rip out IT investments made over many years and start from scratch.
That being said, they can evolve their systems to become more autonomic by making incremental changes that deliver improvements in infrastructure and business performance every step of the way.
As the evolution proceeds, the balance between manual management and autonomic management shifts toward autonomic. This means that companies will be increasingly able to focus on running their businesses instead of running IT.
Our customers told us this is the way they would like to proceed so we developed an evolutionary deployment model that plots a path for implementing autonomic capabilities.
Our market research shows varying degrees of autonomic evolution:
42% of IBM's customers have IT environments that are at the basic level. They rely on reports, products and manual actions to manage IT components.
27% have achieved the managed level, relying on management software to provide facilitation and automation of certain IT tasks, delivering greater system awareness and improved productivity.
19% have attained the predictive level, using individual components and systems management tools that can analyze and recommend changes in the computing environment. These customers are using autonomic technologies to reduce dependency on key IT skills, and they are making better IT decisions and doing so faster.
Our research also shows that the number of our customers achieving managed and predictive levels of autonomic computing will be growing at a 15% annual rate by 2004. The fastest growth will occur in the predictive and adaptive stages -- both projected to grow by more than 50% by 2006.
The media, worldwide, is also very favorably disposed toward IBM's autonomic computing initiative. For instance, InfoWorld states "there is arguably no company better positioned than IBM to deliver that value in a robust commercial form."
Simplify & Rationalize is part of enablement as well
Enable – business designs & processes are integrated. Open standards
Virtualize – increase flexibility & responsiveness. Less cost
OD OE (on demand operating environment) – adds automation.
Why Virtualization -- What is Virtualization?
Major Points to Make (first section of build slide)
First of all, virtualization can be defined by what it does – it improves the utilization of IT, information, and people assets because it allows businesses to treat resources as a single pool by accessing and managing resources across the organization more efficiently – that is, by effect and need rather than by physical location.
Major Points to Make (second section of build slide)
IBM has been delivering virtualization for many years, breaking the boundaries between logical and physical entities. Logical partitions in our servers allow many configuration options for customers, and clustering options bring the power of many servers into a single image. We use SAN (Storage Area Network) volumes to pool storage, and offer many networking features in our products to virtualize network functions.
Grid technologies allow customers to harness the power of many computers to solve a problem, optimizing the resource utilization, and in some cases, dramatically improving the response times for highly computational workloads. We're delivering grid technologies through the IBM Grid Toolbox, WebSphere capabilities, and alliances with Grid ISVs.
Optional: Speaker Note: In the speaker reference guide, there are more detailed definitions of how we virtualize and its benefits. These may be added into your discussion if you feel that more detail is appropriate.
Transition line:
So, where are we going next? Some of the technologies, such as the TotalStorage Virtualization Family, are already here, allowing both block and file virtualization. Others, such as workload management across a heterogeneous infrastructure, are on the horizon. And in the area of Grid Computing we are seeing tremendous advancement, so let’s look at one example of Grid Computing now.
JEFF: A more fundamental view of... well, not fundamental, a more next-generation view, if you will, of virtualization, is the fact that a service rendering of something provides virtualization. The service gives you a canonical expression of the function which does not portray the implementation.
It says, this is the contract, this is what I do. Then at the bottom of that is an implementation, but the implementation doesn't bleed through, or shouldn't bleed through, the interface.
So that's an encapsulation, that's a virtualization of the physical thing. It also gives you remotability, a projected view, because you can have multiple bindings to the resource or the implementation underneath.
So you can communicate with it locally through, I might have two processes on the same server and I'm invoking the service from one process, and it's actually flowing over an IPC that is an optimized instruction flow to the next process on the same machine.
It could be that it's an invocation of the service that has a binding that is [flowing] SOAP over HTTP and over to a distributed node that's someplace else in the world. To the consumer of the service interface that doesn't matter. It shouldn't matter how he's talking to the thing underneath or what the actual implementation of the thing underneath is, certainly not at a resource management level of the client, if you're going to provision it, if you're going to monitor it.
At some level you always need the ability to drill down, which is why I don't use the word transparent, it's more like translucent. You can look through down to the lower level details if you want to or need to, but you shouldn't have to.
So it's a translucent view. And you can also compose things as a service, a representation of functionality that is actually an aggregation of different sets of componentry underneath, that presents a unified view through the service interface.
So I can have a whole workflow that's expressed as a service. I can have a single node as a service. I can have a cluster as a service. I don't, to the point that was brought up before, I don't necessarily need to drill down on and express every individual resource.
If there's a cluster manager there I'd expose the cluster manager as the resource. So virtualization is not just what you've thought of it as, virtualization the way we mean it to create a virtual computer environment, is really about the services model itself.
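A small sketch of the point being made here (hypothetical class and method names, not an IBM interface): the consumer codes against the service contract, and whether the binding underneath is an in-process call or something like SOAP over HTTP to a remote node is a deployment decision that should not bleed through the interface.

```python
# The contract: what the service does, with no implementation detail exposed.
class InventoryService:
    def check_stock(self, sku: str) -> int:
        raise NotImplementedError

# Binding 1: local, in-process implementation (analogous to an optimized IPC path).
class LocalInventoryService(InventoryService):
    def __init__(self, table):
        self._table = table
    def check_stock(self, sku):
        return self._table.get(sku, 0)

# Binding 2: remote binding, e.g., SOAP/HTTP to a node somewhere else.
class RemoteInventoryService(InventoryService):
    def __init__(self, endpoint_url):
        self._endpoint = endpoint_url
    def check_stock(self, sku):
        # Stubbed: a real binding would marshal the request to self._endpoint.
        return 0

def reorder_if_low(service: InventoryService, sku: str, threshold: int) -> bool:
    # The consumer neither knows nor cares which binding it was handed.
    return service.check_stock(sku) < threshold

print(reorder_if_low(LocalInventoryService({"widget": 3}), "widget", 5))
```

The same consumer code works against either binding, which is the "translucent" encapsulation of the physical resource described above.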
Integration is about connecting people, processes and information in a way that allows companies to become more flexible to the dynamics of the markets, customers and competitors around them. To achieve this integration within and beyond the enterprise, companies need to implement five unique capabilities:
The ability to perform business modeling
Process transformation
Application and information integration
The ability to allow for access and collaboration
And finally, business process management.
Implementation of each of these five capabilities allows you to further and more deeply integrate your people, processes and/or information.
Infrastructure management is about enabling access to and creating a consolidated, logical view of resources across a network. To achieve this management of your infrastructure, companies need to implement seven unique capabilities:
The ability to ensure availability of resources
Security
Optimization
Provisioning
Orchestration
Business service management
And finally, virtualization of resources across servers, storage, distributed systems/grid and the network
Implementation of each of these seven capabilities allows you to create greater optimization and simplification of your infrastructure.
Let’s learn more about what these capabilities actually mean and how you can ensure you’re able to implement them within your own enterprise IT environment.
As I said earlier, the on demand computing model really transcends the dominant computing models that preceded it.
Take the traditional IT model, which has historically focused on calculations, data processing, transactions, and other highly structured tasks. This has served us incredibly well for those applications that are indeed highly structured, and will continue to do so over time. But it breaks down when you try to extend it into applications or processes that aren't so highly structured; they either feel too complex and static (long-term ERP projects, for example) or are simply impossible (like large AI problems).
The Internet and the Web introduced a totally different computing model. It gave us simple mechanisms, based on open standards, for linking together lots and lots of components, which you can then use for relatively simple activities like communications, browsing, searching, and sending e-mail. It works incredibly well. But it soon became clear that the Internet standards and mechanisms needed to be extended to handle more sophisticated applications.
The on demand computing model builds on the IT and Internet models. It is based on what we call a service-oriented architecture – we'll talk more about that in a minute – which essentially provides a set of modular components that can be defined and manipulated (Web services), and a set of XML-based standards for doing so. Since the characteristics of the components can now be expressed in XML, we can define applications that work with and manipulate these modular components. It all enables a much more flexible and real-time way of implementing business policies than was possible with more structured computing models.