This document provides an overview of key concepts related to bits, bytes, computer architecture, and networking. It begins with an explanation of bits and bytes as the basic units of digital information. It then covers common computer components such as the motherboard, CPU, RAM, and hard drive. The document introduces different computer platforms and discusses networking fundamentals like topologies and the OSI model. It provides a high-level tour of fundamental digital concepts.
IT Book of Knowledge
Compiled from public domain knowledge. Compiler claims no ownership of images.
Keep the Internet a place of free sharing of information.
3. Topics to be Covered:
1. Bits and Bytes
2. Computer Platforms
3. PC Architecture
4. Networking
5. Storage Media
6. Databases
7. Client Server Applications
8. Web Applications
9. Advanced Topics
10. Multi-Tier Support
4. Bits
• A bit is the most basic unit of
measurement in Digital Computing.
• A bit is the measurement of On or Off,
True or False. All information stored
digitally is represented by strings of 1’s
and 0’s.
• What makes a collection of 1’s and 0’s
have greater meaning is the system by
which they are organized.
1. Bits and Bytes
5. Bytes
• A byte is an ordered collection of bits,
with each bit denoting a single binary
value of 1 or 0.
• The byte most often consists of 8 bits in
modern systems; however, the size of a
byte can vary and is generally
determined by the underlying computer
operating system or hardware.
• Historically, byte size was determined
by the number of bits required to
represent a single character from a
Western character set.
1. Bits and Bytes
6. Calculating a Byte
• Powers of two are used to determine
all values in modern computing.
• The value of a Byte is determined by
mathematical exponentiation.
• Bit cells are numbered from the right
starting with 0, and incrementing by
one for each bit to the left.
• Each cell’s value is two raised to the
power of its cell number.
• The sum of the values of all cells
represents the value of the byte.
00110101 = 53

Cell 0: 1 × 2^0 = 1 × 1 = 1
Cell 1: 0 × 2^1 = 0 × 2 = 0
Cell 2: 1 × 2^2 = 1 × 4 = 4
Cell 3: 0 × 2^3 = 0 × 8 = 0
Cell 4: 1 × 2^4 = 1 × 16 = 16
Cell 5: 1 × 2^5 = 1 × 32 = 32
Cell 6: 0 × 2^6 = 0 × 64 = 0
Cell 7: 0 × 2^7 = 0 × 128 = 0

1 + 4 + 16 + 32 = 53
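The cell-by-cell summation above can be sketched in Python; the variable names are illustrative, not from the deck:

```python
# Value of the byte 00110101, computed cell by cell as powers of two.
bits = "00110101"  # cell 7 on the left, cell 0 on the right

total = 0
for cell, bit in enumerate(reversed(bits)):  # cell 0 is the rightmost bit
    total += int(bit) * 2 ** cell            # each cell contributes bit x 2^cell

print(total)         # 53
print(int(bits, 2))  # 53 - Python's built-in base-2 parser agrees
```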
7. Kilobytes to Exabytes
• A Byte can be conceived of as the amount of digital information
required to represent a single keyboard character.
• When Bytes are grouped in larger arrangements they can
represent immense numeric values, graphics, audio and video.
• Managing Bytes is all about scale:
• One Thousand Bytes = Kilobyte
• One Million Bytes = Megabyte
• One Billion Bytes = Gigabyte
• One Trillion Bytes = Terabyte
• One Quadrillion Bytes = Petabyte
• One Quintillion Bytes = Exabyte
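A minimal sketch of these unit steps in Python, using the decimal (power-of-1,000) definitions listed above; binary usage would divide by 1,024 instead, and the function name is illustrative:

```python
# Decimal byte units, each one thousand times the previous one.
UNITS = ["Bytes", "Kilobytes", "Megabytes", "Gigabytes",
         "Terabytes", "Petabytes", "Exabytes"]

def human_size(n_bytes):
    """Express a byte count in the largest convenient decimal unit."""
    size = float(n_bytes)
    unit = 0
    while size >= 1000 and unit < len(UNITS) - 1:
        size /= 1000      # step up one unit per factor of one thousand
        unit += 1
    return f"{size:g} {UNITS[unit]}"

print(human_size(64_000))          # 64 Kilobytes
print(human_size(2_500_000_000))   # 2.5 Gigabytes
```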
8. Checkpoint 1
Bits and Bytes
2. Computer Platforms
3. PC Architecture
4. Networking
5. Storage Media
6. Databases
7. Client Server Applications
8. Web Applications
9. Advanced Topics
10. Multi-Tier Support
9. Computing Platforms
• A platform typically refers to the hardware
on which a system relies or operates, but
can also refer to the operating system by
which the hardware is operated.
• Midrange/Mainframes, for example, employ
both a unique architecture as well as unique
operating environments. Examples include
the IBM AS/400 and IBM z/OS.
• PCs have similar and even interchangeable
hardware platforms (Intel, AMD), but can
operate with a range of operating platforms
such as Windows, Unix/Linux, MacOS,
AmigaOS and Solaris.
• Other examples of unique computing
platforms can include handheld devices like
Palm, Blackberry or Tablet PCs, or even
gaming consoles like PS3 or XBOX 360.
2. Computer Platforms
10. Platform Interoperation
• While unique hardware architecture
and operating system software can
create challenges to interoperation
between different computing
platforms, there are various methods
for interoperation depending on the
technology.
• Development of new platforms over
the years has led to platform-
independent standardization
formats. Examples of such formats
include ASCII (American Standard
Code for Information Interchange),
or more recently HTML (HyperText
Markup Language) and XML
(eXtensible Markup Language).
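As a minimal illustration of such a platform-independent format, the snippet below encodes text to ASCII bytes that any conforming platform decodes identically (the byte values shown are the standard ASCII codes):

```python
# ASCII assigns each character a fixed 7-bit code, so the same bytes mean
# the same text on any platform that follows the standard.
message = "HELLO"

encoded = message.encode("ascii")   # platform-independent bytes
print(list(encoded))                # [72, 69, 76, 76, 79]

decoded = encoded.decode("ascii")   # the receiving platform recovers the text
print(decoded)                      # HELLO
```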
11. Workstations and Servers
• We commonly distinguish
workstations from servers,
although they are not necessarily
different platforms.
• Servers tend to have slightly
enhanced operating systems even
within a common platform (i.e.
Windows XP for a desktop vs.
Windows 2003 Server for a server)
• Another distinction is that a server
can be more powerful and/or housed
in a more compact, “rack-mounted”
case, although this is not a
hard-and-fast rule.
12. Checkpoint 2
Bits and Bytes
Computer Platforms
3. PC Architecture
4. Networking
5. Storage Media
6. Databases
7. Client Server Applications
8. Web Applications
9. Advanced Topics
10. Multi-Tier Support
13. PC Architecture
• While the exact date of the introduction of
the Personal Computer is a topic for debate,
today’s PCs are certainly leaps and bounds
ahead of their 1970’s and 1980’s
predecessors.
• Despite many advancements in PC
architecture, the basic components of a PC
remain the same today.
• The basic architecture is relatively consistent
between workstations and servers.
3. Personal Computer Architecture
14. PC Components
Common Components Include:
1. Monitor
2. Motherboard
3. CPU
4. RAM
5. GPU / Expansion Cards
6. Power Supply
7. CD / DVD / Floppy
8. HDD
9. Keyboard
10. Mouse
15. Motherboard
16. Motherboard continued…
Motherboard:
• It includes the BIOS (‘Basic Input /
Output System’), which is part of what
is referred to as the ROM (Read Only
Memory) ‘chipset.’
• ROM is information permanently
stored on a chip.
• Certain kinds of ROM can be
updated in a procedure known as
“flashing”.
• Motherboards are designed to provide
basic connectivity to popular devices,
and allow for expandability with a variety
of expansion slots.
17. Motherboard continued…
• Modern motherboards provide
connector slots for:
• CPU (Central Processing Unit),
• PCI and PCI-e (Peripheral
Component Interconnect
[Express]),
• RAM (Random Access Memory),
• PSU (Power Supply Unit), and
for
• IDE (Integrated Drive
Electronics) and/or
• SATA (Serial Advanced
Technology Attachment) hard
drives.
18. Motherboard Continued…
• Most modern motherboards have
integrated (built-in):
• graphics
• sound
• network capability
• and include ports for:
• USB (Universal Serial Bus)
Keyboards, Mice, Printers and
other peripherals.
• and sometimes have ports for:
• PS/2 (IBM Personal System 2)
connectors for Keyboard and
Mouse.
19. Central Processing Unit
20. Central Processing Unit continued…
CPU (Central Processing Unit):
• Functions as the brain of a computer.
• Performance is measured in MIPS (Millions of
Instructions Per Second) or, more accurately, in
FLOPS (FLoating-point Operations Per Second).
• The processor is required for program execution, and
works in concert with the motherboard to access
resources needed for program execution.
• CPUs come in many different designs and pin
configurations (or Socket Types). A CPU must be
matched to a motherboard that can host its socket
type and processing capabilities.
21. Random Access Memory
22. Random Access Memory continued…
RAM (Random Access Memory)
• Information stored in RAM is only there so long as
the computer is powered on.
• Used for caching information.
• ‘Cache’ is defined as a temporary storage area
where frequently accessed data can be stored for
rapid access.
• RAM installed on a motherboard is general purpose
temporary storage for completing operations and
execution of programs.
• Other components such as CPUs and Hard Disk
Drives have their own small amounts of dedicated
RAM for their own caching needs.
• RAM must be appropriately matched to the
motherboard and processor it will be used by.
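The caching idea described above can be sketched with a plain dictionary standing in for fast temporary storage; the names and the simulated slow device are illustrative only:

```python
# Cache sketch: keep results of a slow lookup in fast temporary storage
# so repeated requests are served from memory instead of the slow source.
lookups_to_slow_source = 0

def slow_lookup(key):
    """Stand-in for reading from a slow device such as a hard disk."""
    global lookups_to_slow_source
    lookups_to_slow_source += 1
    return key.upper()

cache = {}

def cached_lookup(key):
    if key not in cache:                 # cache miss: go to the slow source
        cache[key] = slow_lookup(key)
    return cache[key]                    # cache hit: served from the cache

cached_lookup("config")          # first access: miss
cached_lookup("config")          # second access: hit, no slow lookup
print(lookups_to_slow_source)    # 1
```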
23. Graphic Processing Unit / Expansion Cards
24. Expansion Slots
• Computers are equipped with expansion slots
to enable users to add new hardware
functionality to a computer.
• Historically, computers were required to be
equipped with a graphics card, a network card
and a sound card in order to provide those
services – features that are largely integrated
in today’s motherboards.
• Common Slot types:
• ISA (Industry Standard Architecture) Obsolete
• AGP (Advanced Graphics Port) Phasing Out
• PCI (Peripheral Component Interconnect)
• PCI-e (PCI Express)
25. Graphic Processing Unit
GPUs (Graphic Processing Units) are the
next great advance in Computer
Architecture.
• These cards take advantage of the PCI-e
x16 slot.
• Based on gaming technology – but not for
gaming.
• CPUs complete operations in a linear
fashion, whereas GPUs complete operations
in parallel using ‘multi-threading’.
• A single GPU can provide hundreds of
processing cores, compared with the best
multi-core CPU to date, the quad-core
CPU.
• nVidia unveiled the first Teraflop capable
GPU with Tesla C1060 – and the first 4
Teraflop rack mounted server with the Tesla
S1070.
26. Expansion Cards
Examples of other cards that can be used
in expansion slots:
• GPU (Graphics Processing Unit)
• Fax Modem Cards
• Fax Server Cards
• USB Expansion Cards
• Network Cards
• Security Camera Control Cards
• Etc.
27. Power Supply
28. Power Supply Unit
PSU (Power Supply Units)
• Power Supply Units come in various
sizes and Wattage output capacities.
• Off-the-shelf PCs used to come with
300W PSUs standard. Increasing
technology demands are driving higher
output requirements.
• A Power Supply plugs directly into the
Motherboard itself, as well as into
peripheral devices that cannot be
powered from the motherboard.
• Hard Drives, CD/DVD Drives, Cooling
Fans, and even some GPUs require
direct connections to a PSU.
29. Removable Storage
30. Removable Storage continued…
CD / DVD / Floppy Drives
• Removable media with varying storage
capacities.
• 3 ½ “ Floppy Drives/Disks came in Low
Density (720KB / Disk) or in High Density
(1.44MB / Disk) – Nearing Obsolescence.
• 5 ¼ “ CDs (Compact Disks) are capable of
storing 682MB of digital information.
• 5 ¼ “ Single Layer DVDs (Digital Video
Disks) are capable of storing 4.7GB of digital
information, and Dual Layer are capable of
storing 8.5GB of digital information
• 5 ¼ “ Single Layer Blu-ray Disks are
capable of storing 25GB of digital
information, and Dual Layer are capable of
storing 50GB of digital information.
31. Hard Disk Drive
32. Hard Disk Drive continued…
HDD (Hard Disk Drive)
• A hard disk drive is an enclosed set of
disk platters that store digital information
magnetically.
• In operation, the platters are spun at very
high speeds. Information is written to a
platter as it rotates past devices called
read-and-write heads that operate very
close over the magnetic surface.
• Hard Disk Drives have varying capacities.
• In the mid 1980s a 40MB hard drive at a
price of $400 was considered a large
storage device.
• Today it is not uncommon to find 500GB -
1TB hard drives for $100.
33. Computer Operating System
• An operating system (commonly abbreviated to O/S) is
an interface between computer hardware and the user; it
is responsible for the management and coordination of
activities and the sharing of the limited resources of the
computer.
• The operating system acts as a host for applications that
are run on the machine. As a host, one of the purposes of
an operating system is to handle the details of the
operation of the hardware. This relieves application
programs from having to manage these details and makes
it easier to write applications.
• Almost all computers, including handheld computers,
desktop computers, supercomputers, and even video
game consoles, use an operating system of some type.
34. Checkpoint 3
Bits and Bytes
Computer Platforms
PC Architecture
4. Networking
5. Storage Media
6. Databases
7. Client Server Applications
8. Web Applications
9. Advanced Topics
10. Multi-Tier Support
35. Networking
• A network is a collection of computers
and devices connected to each other. The
network allows computers to
communicate with each other and share
resources and information.
• Computer networks may be classified
according to the network topology upon
which the network is based.
• Network Topology signifies the way in
which devices in the network see their
logical relations to one another.
4. Networking
36. Network Topology
• Network topology is not synonymous
with the "physical" layout of the
network.
• Even if networked computers are
physically placed in a linear
arrangement, if they are connected
via a hub, the network has a star
topology rather than a bus topology.
• In this regard the visual and
operational characteristics of a
network are distinct; the logical
network topology is not necessarily
the same as the physical layout.
37. Modern Networking
• Star networks are one of the most common
modern computer network topologies. In its
simplest form, a star network consists of one
central switch, hub or computer, which acts
as a conduit to transmit messages.
• Although star networks are most prevalent,
large networks tend to contain a hybrid
mixture of different topologies.
• Ethernet is the standard for network
communication (IEEE 802.3), which defines
the architecture and fundamental layers of
network communication.
38. The OSI Networking Model
• The Open Systems
Interconnection reference model
is an abstract description for
layered communications and
computer network protocol
design.
• In its most basic form, it divides
network communication into seven
layers which, from top to bottom,
are the Application, Presentation,
Session, Transport, Network,
Data-Link, and Physical Layers.
39. Network Protocols
• Protocols in human communication are rules about
appearance, speaking, listening and
understanding. All these rules, also called
protocols of conversation, represent different
layers of communication. They work together to
help people successfully communicate; the need
for protocols also applies to network devices.
• A protocol is a convention or standard that controls
or enables the connection, communication, and
data transfer between computing endpoints or
nodes. In its simplest form, a protocol can be
defined as the rules governing the syntax,
semantics, and synchronization of communication.
• Protocols may be implemented by hardware,
software, or a combination of the two. At the
lowest level, a protocol defines the behaviour of a
hardware connection.
40. OSI Model Networking Layers
• The Application layer functions typically include identifying
communication partners, determining resource availability, and
synchronizing communication.
• The Presentation layer works to transform data into the form that
the application layer can accept. This layer formats and encrypts
data to be sent across a network, providing freedom from
compatibility problems. It is sometimes called the syntax layer.
• The Session layer controls the connections between computers. It
establishes, manages and terminates the connections between the
local and remote application.
• The Transport Layer provides transfer of data between end users,
providing reliable data transfer services to the upper layers through
flow control and error control.
• The Network Layer performs network routing functions, and might
also perform fragmentation and reassembly, and report delivery
errors.
• The Data Link Layer provides the functional and procedural
means to transfer data between network entities and to detect and
possibly correct errors that may occur in the Physical Layer.
• The Physical Layer defines the electrical and physical
specifications for devices. In particular, it defines the relationship
between a device and a physical transportation medium.
41. Network Protocols and the OSI Model
• The widespread use and expansion of communications
protocols is both a prerequisite for the Internet, and a major
contributor to its power and success.
• The pair of Internet Protocol (or IP) and Transmission Control
Protocol (or TCP) are the most important of these, and the
term TCP/IP refers to a collection (or protocol suite) of its
most used protocols.
• Examples of common protocols include:
• DHCP (Dynamic Host Configuration Protocol)
• IP (Internet Protocol)
• UDP (User Datagram Protocol)
• TCP (Transmission Control Protocol)
• HTTP (Hypertext Transfer Protocol)
• FTP (File Transfer Protocol)
• Telnet (remote terminal protocol)
• SSH (Secure Shell Remote Protocol)
• POP3 (Post Office Protocol 3)
• SMTP (Simple Mail Transfer Protocol)
• IMAP (Internet Message Access Protocol)
• SOAP (Simple Object Access Protocol)
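As a small illustration of protocol layering, the sketch below runs a toy request/response protocol on top of TCP over the loopback interface; the names and the "OK:" reply convention are invented for the example, not any real protocol:

```python
# A toy application protocol riding on TCP: the client sends a request,
# the server prefixes it with "OK: " and sends it back.
import socket
import threading

def serve_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024)        # TCP delivers the bytes reliably
        conn.sendall(b"OK: " + request)  # reply per our toy protocol

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

threading.Thread(target=serve_once, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"PING")
reply = client.recv(1024)
client.close()
print(reply)   # b'OK: PING'
```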
42. Network Packets
• A network packet is a formatted unit of
data carried on a network (Layer 3 of
the OSI Model).
• A network packet consists of two kinds
of data: control information and user
data (also known as payload).
• The control information provides
information the network needs to deliver
the payload, for example: source and
destination addresses, error detection
codes like checksums, and packet
sequencing information.
• A good analogy is to consider a packet
to be like a letter; the header acts like
the envelope, and the payload is
whatever the person puts inside the
envelope.
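The letter/envelope analogy can be made concrete with a toy packet format; the header fields and the trivial checksum below are invented for illustration and do not correspond to any real protocol:

```python
# Toy packet: a fixed header (source, destination, payload length, checksum)
# - the "envelope" - followed by the payload the sender put inside it.
import struct

HEADER = struct.Struct("!HHHH")  # network byte order: src, dst, length, checksum

def make_packet(src, dst, payload):
    checksum = sum(payload) & 0xFFFF            # trivial error-detection code
    return HEADER.pack(src, dst, len(payload), checksum) + payload

def parse_packet(packet):
    src, dst, length, checksum = HEADER.unpack(packet[:HEADER.size])
    payload = packet[HEADER.size:HEADER.size + length]
    if sum(payload) & 0xFFFF != checksum:
        raise ValueError("corrupted payload")
    return src, dst, payload

pkt = make_packet(1, 2, b"hello")
print(parse_packet(pkt))   # (1, 2, b'hello')
```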
43. Network Bandwidth
• Consider a network connection as a pipe, and
bandwidth as a measure of the size of the pipe in
comparison to the amount of information you can
pass through it. Alternately, bandwidth can
measure only a section of the pipe reserved for
specific data communications.
• Bandwidth is measured in bits per second, or
multiples of it (Kilobits/sec, Megabits/sec, etc.) of
available or consumed data resources on the
‘pipe’.
• Bandwidth may refer to bandwidth capacity of a
network transportation medium, or can mean the
‘channel capacity’ or the maximum throughput of
a logical or physical communication path in a
digital communication system.
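Since bandwidth is quoted in bits per second while file sizes are usually given in bytes, estimating a transfer time needs a factor of eight; a minimal sketch:

```python
# Estimated transfer time = size in bits / bandwidth in bits per second.
def transfer_seconds(size_bytes, bandwidth_bits_per_sec):
    return size_bytes * 8 / bandwidth_bits_per_sec   # 8 bits per byte

# A 100 MB (decimal) file over a fully utilised 100 Mbit/s link:
print(transfer_seconds(100_000_000, 100_000_000))   # 8.0 seconds
```

Real transfers take longer because of protocol overhead and sharing of the link.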
44. Network Communication Mediums
• Mediums for network
communications include:
• Twisted Pair
• Coaxial Cable
• Fibre Optic
• Infrared
• Radio Frequency
45. Networking - Twisted Pair
• Twisted pair cabling is a form of wiring in which two
conductors (the forward and return conductors of a
single circuit) are twisted together for the purposes of
cancelling out electromagnetic interference from
external sources.
• Twisted pair cabling employs many forms of connector,
some of which include: (depicted in the diagram from
left to right):
• RJ45 or 8P8C (Network)
• RJ25 or 6P6C (3-Line Phone)
• RJ14 or 6P4C (2-Line Phone)
46. Networking - Twisted Pair continued…
• Twisted Pair also comes in different cable
categories. These categories represent
communication reliability at various
electromagnetic frequencies.
• The higher the maximum reliable frequency
supported by the cable, the greater the bandwidth
available for data transport.
• Examples of cable categories include:
• Cat 1 – 1 Mbit / Sec (Phone)
• Cat 3 – 10 Mbits / Sec
• Cat 5 – 100 Mbits / Sec
• Cat 5e – 1 Gbit / Sec
• Cat 6a – 10 Gbit / Sec
• Cat 7a – 100 Gbit / Sec
• Cat 5 is the most commonly used twisted pair
cabling in use today.
47. Networking - Coaxial
• Like a telephone cord or other forms of twisted pair,
coaxial cable conducts AC electric current from one
place to another.
• Like twisted pair, it has two conductors, the central
wire and the mesh shield. At any given moment the
current is traveling outward in one of the
conductors, and returning in the other.
• Coaxial has cable categories, like twisted pair.
• Coaxial RG-8/u and RG-9/u were formerly used for
Ethernet networks at 10 Mbit / sec, although it has
largely been superseded.
• Coaxial RG-6/u is commonly used for cable
television and cable internet connections.
48. Networking – Fibre Optic
• An optical fibre is a glass or plastic fibre that
carries light along its length.
• Optical fibres are widely used in fibre-optic
communications, which permits transmission
over longer distances and at higher bandwidths
(Gbits/sec) than other forms of communications.
• Fibre Optic cables are used instead of metal
wires because signals travel along them with
less signal loss, and they are also immune to
electromagnetic interference.
• Fibre Optic is most commonly used as a network
backbone, with other mediums used to connect
workgroup nodes.
49. Networking – Infrared (IR)
• Infrared radiation is electromagnetic
radiation whose wavelength is longer
than that of visible light.
• Devices use infrared light-emitting
diodes (LEDs) to emit infrared
radiation. The beam is modulated
(switched on and off), to transmit data.
• IR technology is most commonly used
for remote control devices for home
entertainment.
• Free space optical communication
using infrared lasers can be a relatively
inexpensive way to install a
communications link in an urban area
compared to the cost of burying fibre
optic cable.
50. Networking – Radio Frequency (RF)
• The RF spectrum is used to organize
and map the physical phenomena of
electromagnetic waves. These waves
propagate through space at different
frequencies, and the set of all possible
frequencies is called the
electromagnetic spectrum.
• The term radio spectrum typically
refers to the full frequency range from
3 kHz to 300 GHz that may be used for
wireless communication.
• The UHF (Ultra-High Frequency) Band
is used for a range of technology such
as Radio Frequency Identification
(RFID), Keyless Remote Entry, Cell
Phones and Wireless Networking.
53. Network Interface Card continued…
NIC (Network Interface Card)
• A Network Interface Card is a hardware component
which provides physical access to a networking
medium and provides a low-level addressing system
through the use of MAC (Media Access Control)
addresses.
• Every Ethernet network card has a unique 48-bit MAC
address, which is stored in ROM carried on the card.
• Every computer on an Ethernet network must have a
card with a unique MAC address. Normally it is safe to
assume that no two network cards will share the same
address, because card vendors purchase blocks of
addresses from the Institute of Electrical and
Electronics Engineers (IEEE) and assign a unique
address to each card at the time of manufacture.
• NICs allow users to connect to each other either by
cables or wirelessly.
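The 48-bit MAC address structure described above can be shown with a short sketch; the address value itself is made up for illustration:

```python
# A MAC address is 6 octets (48 bits): the first 3 octets are the vendor's
# IEEE-assigned OUI, the last 3 are the per-card identifier.
mac = "00:1A:2B:3C:4D:5E"   # illustrative address, not a real card

octets = mac.split(":")
oui, device_id = octets[:3], octets[3:]

print("Octets:", len(octets), "->", len(octets) * 8, "bits")  # Octets: 6 -> 48 bits
print("Vendor OUI:", "-".join(oui))        # Vendor OUI: 00-1A-2B
print("Device ID:", "-".join(device_id))   # Device ID: 3C-4D-5E
```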
55. Network Hub continued…
• A network hub is a fairly un-sophisticated broadcast
device. Hubs do not manage any of the traffic that
comes through them, and any packet entering any
port is broadcast out on every other port. Since
every packet is being sent out through every other
port, packet collisions result, which greatly impedes
the smooth flow of traffic.
• The need for hosts to be able to detect collisions
limits the number of hubs and the total size of the
network.
• For 10 Mbit/s networks, up to 5 segments (4 hubs)
are allowed between any two end stations. For 100
Mbit/s networks, the limit is reduced to 3 segments
(2 hubs) between any two end stations, and even
that is only allowed if the hubs are of the low delay
variety.
• A large Ethernet network is likely to require
switches to avoid the chaining limits of hubs.
57. Network Switch continued…
• The network switch plays an integral part in most
Ethernet local area networks (LANs). Mid-to-large
sized LANs contain a number of linked managed
switches. Switches differ from hubs in that they can
have ports of different speed.
• If you have 4 computers A/B/C/D on 4 switch ports,
then A and B can transfer data between them as
well as C and D at the same time, and they will
never interfere with each other’s conversations.
• In the case of a "hub" then they would all have to
share the bandwidth, and there would be collisions
and retransmissions.
• Using a switch is called micro-segmentation. It
allows dedicated bandwidth on point-to-point
connections with every computer, and therefore
full-duplex operation with no collisions.
59. Network Router continued…
• A router is a networking device whose software
and hardware are usually tailored to the tasks of
routing and forwarding information.
• Routers generally contain a specialized
operating system, RAM, flash memory, and one
or more processors, as well as two or more
network interfaces.
• Routing is the process of selecting paths in a
network along which to send network traffic.
• Routing directs packet forwarding, the transit of
logically addressed packets from their source
toward their ultimate destination through
intermediate nodes.
• The routing process usually directs forwarding on
the basis of routing tables which maintain a
record of the routes to various network
destinations. Thus, constructing routing tables,
which are held in the routers' memory, is very
important for efficient routing.
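A routing-table lookup can be sketched as longest-prefix matching with the standard ipaddress module; the prefixes and next-hop addresses below are invented for illustration:

```python
# Routing table lookup in miniature: pick the most specific (longest-prefix)
# route that contains the destination address, else use the default route.
import ipaddress

routing_table = {                   # prefix -> next hop (illustrative values)
    "10.0.0.0/8":  "192.168.1.1",
    "10.1.0.0/16": "192.168.1.2",
    "0.0.0.0/0":   "192.168.1.254", # default route matches everything
}

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    matches = [ipaddress.ip_network(p) for p in routing_table
               if dest in ipaddress.ip_network(p)]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[str(best)]

print(next_hop("10.1.2.3"))   # 192.168.1.2 (the /16 beats the /8)
print(next_hop("8.8.8.8"))    # 192.168.1.254 (only the default matches)
```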
61. Network Bridge continued…
• Bridges are similar to repeaters or network
hubs, devices that connect network
segments at the physical layer; however,
with bridging, traffic from one network is
managed rather than simply rebroadcast to
adjacent network segments.
• Bridges tend to be more complex than hubs
or repeaters. Bridges can analyze incoming
data packets to determine if the bridge is
able to send the given packet to another
segment of the network.
• Bridging and routing are both ways of
performing data control, but work through
different methods. Bridges cannot
distinguish between networks, while
routers can.
63. Network Repeater continued…
• A repeater is an electronic device that
receives a signal and retransmits it at a
higher level and/or higher power so that
the signal can cover longer distances
without degradation.
• Repeaters work with the actual physical
signal, and do not attempt to interpret the
data being transmitted.
• The network medium used to carry a signal
determines requirements for usage of
repeaters to maintain signal integrity.
• Fibre Optic cable has the least amount of
signal degradation and superior bandwidth,
making it a good candidate for long
distance connections with fewer repeaters.
64. Network Hardware Summary
• Hubs are ‘dumb’ devices that
interconnect workstations on a network
by broadcasting packets to all ports.
• Bridges are much more specialized than
hubs and are used to interconnect
subnets or LANs.
• Switches are more intelligent than a
bridge and can do all of the tasks of a
hub or a bridge as well as segment the
traffic on the network to avoid network
congestion.
• Routers are the most sophisticated of
the devices mentioned. Routers perform
all of the tasks performed by hubs,
switches and bridges as well as provide
address resolution on wide area
networks, and act as a firewall.
65. Network Types
• A local area network (LAN) is a computer network covering a small physical
area, like a home, office, or small group of buildings, such as a school, or an
airport.
• The defining characteristics of LANs, in contrast to WANs (wide area networks),
include their higher data transfer rates, smaller geographic range, and lack of a
need for leased telecommunication lines.
• A wide area network (WAN) is a computer network that covers a relatively
broad geographic area (e.g. spanning cities or even countries) and that often
uses transmission facilities provided by common carriers, such as telephone
companies.
• The largest and most well-known example of a WAN is the Internet.
• A virtual private network (VPN) is a computer network in which some of
the links between nodes are carried by open connections or virtual circuits in
some larger network (e.g., the Internet)
• The virtual network is said to be tunnelled through the larger network when this is
the case. One common application is secure private network communications
through the public Internet (e.g. working from home).
66. Network Addressing
• Network Addressing occurs in two steps:
1. Every Network Card on a particular network
should have a unique hardware address
called a MAC address (previously
discussed)
2. When a computer boots its O/S it is either
assigned a static IP address (through its
network drivers and its TCP/IP protocol
settings), or it is configured to use DHCP
(Dynamic Host Configuration Protocol) to
obtain the next available IP address.
• IP addresses must also be unique within a
network. There are both internal IP addresses
and external IP addresses.
• External IP addresses are visible to the Internet
and must be unique to the Internet.
• These addresses are managed by a Firewall,
Router and/or Proxy Server so that individual
computer addresses on an internal network
cannot be accessed directly from the Internet.
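The internal/external distinction can be checked with Python's standard ipaddress module; the sample addresses are illustrative:

```python
# Private (internal) addresses are reserved for internal networks and are
# not routable on the public Internet; public (external) addresses are.
import ipaddress

for addr in ["192.168.0.10", "10.1.2.3", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    kind = "internal (private)" if ip.is_private else "external (public)"
    print(addr, "->", kind)
# 192.168.0.10 -> internal (private)
# 10.1.2.3 -> internal (private)
# 8.8.8.8 -> external (public)
```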
67. Checkpoint 4
Bits and Bytes
Computer Platforms
PC Architecture
Networking
5. Storage Media
6. Databases
7. Client Server Applications
8. Web Applications
9. Advanced Topics
10. Multi-Tier Support
68. Storage Media
• We touched on some storage
media types when looking at PC
architecture.
• There are, however, many types of
storage, which can be categorized as
portable or semi-portable.
• Alternately we can categorize the
types of media by the method by
which the media is written to and
read from.
• Electrical
• Magnetic
• Optical
5. Storage Media
69. Electrical Storage Media
• Electrical Media is described by two other
terms: solid-state and volatile / non-volatile.
• Solid state electronics are electronic
components that are entirely based on
semiconductors, transistors and microprocessors.
• Solid state electronics have no mechanical
action or moving parts.
• RAM, which loses all information when the
computer is powered off, is described as
volatile memory.
• Non-volatile memory is memory that can retain
stored information in the absence of a power
source.
• Non-volatile electrical media functions by
writing and erasing information to chips with an
electrical current.
70. Flash Drives
• Flash Drives are a spin-off of a type of ROM known as
EEPROM or Electrically Erasable Programmable Read-Only
Memory.
• Before EEPROM chips, ROM information was stored by
manipulating the physical structure of the chip; either by
design, or after the fact in a non-reversible process with
electrical current.
• Flash Drives come in varying capacities, from 64MB to as
high as 64GB of storage space.
• The inside of a flash drive includes:
1. USB Interface
2. Device Microprocessor
3. Test Points
4. Flash Memory Chip
5. Crystal Oscillator (like in a quartz watch)
6. Activity LED
7. Write Protect Switch
8. In this case, space for a secondary memory chip.
5. Storage Media
71. Flash Memory Cards
• Another type of Electrical Media is Flash
Memory Cards.
• Flash Memory Cards come in dozens of sizes
and shapes, and a wide selection of storage
capacities.
• Due to the inexpensive nature of this form of
electrical storage, its prevalence in the market
has grown exponentially.
• Flash memory cards are used in everything
from cell phones to PDAs, digital cameras and
portable media players.
• Many newer PCs come equipped with a multi-
port media reader to accommodate the most
common forms of this type of media storage.
5. Storage Media
72. Solid State Drives
• SSDs (Solid State Drives) are a newer
alternative to the conventional HDD (Hard
Disk Drive).
• SSDs outperform flash drives by using the
same interface as a conventional HDD,
which provides greater bandwidth between
the stored information and the computer
itself.
• SSDs are more durable, and in some cases
perform faster and with less noise than
conventional HDDs.
• Currently SSDs are considerably more
expensive than conventional HDDs.
5. Storage Media
73. Magnetic Storage Media
• Magnetic Storage Media is exclusively
non-volatile: it does not require power to
retain the information stored on it.
• Magnetic storage media uses a read/write
head to magnetically alter the surface of the
media to represent the information being
stored on it.
• Modern magnetic media come in floppy or
hard disk types, or as cassette tapes.
5. Storage Media
74. Floppy Disks
• A floppy disk is a data storage medium that is
composed of a disk of thin, flexible ("floppy") magnetic
storage medium encased in a square or rectangular
plastic shell.
• Before hard disks became affordable, floppy disks
were often also used to store a computer's operating
system (OS), in addition to application software and
data.
• Floppy disks have come in a variety of standard sizes,
ranging from 8 inch to 5 ¼ inch down to 3 ½ inch
(right).
• While floppy disk drives still have some limited uses,
especially with legacy industrial computer equipment,
they have now been largely superseded by USB flash
drives, External Hard Drives, CD-ROMs and DVD-
ROMs.
5. Storage Media
75. Hard Disks
• While we examined HDDs in the PC
Architecture section, we did not discuss
arrangements of HDDs in context of
organizational storage.
• While the capacities mentioned in the
diagram to the right are dated, the HDD
arrangements are worthy of discussion.
• A single computer could have one or more
HDDs. The logical relation of those devices
to each other determines how those
devices will function in support of our
storage needs.
• JBOD, or Just a Bunch Of Disks, refers to
a basic configuration of multiple HDDs to
make up multiple separate disk volumes, or
a single logical volume comprised of
multiple physical disks.
5. Storage Media
76. Hard Disks - RAID
• RAID refers to Redundant Array of Inexpensive
(or Independent) Disks.
• RAID combines two or more physical HDDs into a single
logical unit by using either special hardware or software.
• There are three key concepts in RAID:
• mirroring, the copying of data to more than one disk;
• striping, the splitting of data across more than one disk;
• error correction, where redundant data is stored to allow
problems to be detected and possibly fixed (known as fault
tolerance).
• Different RAID levels use one or more of these
techniques, depending on the system requirements.
• Redundancy in RAID is a way in which data is written
across the disk collection, which is organized so that the
failure of one disk in the array (or more depending on the
RAID configuration) will not result in loss of data.
• A failed disk may be replaced by a new one, and the data
on it reconstructed from the remaining data and the extra
data.
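The parity form of error correction can be sketched in a few lines of Python: as in RAID 5, a parity block is the bytewise XOR of the data blocks, so any single lost block can be rebuilt from the survivors. The block contents here are illustrative:

```python
from functools import reduce

def parity(blocks):
    """Bytewise XOR of equal-length data blocks."""
    return bytes(reduce(lambda acc, blk: [a ^ b for a, b in zip(acc, blk)],
                        blocks, [0] * len(blocks[0])))

disk1 = b"HELLO_WORLD!"
disk2 = b"RAID_DEMO_OK"
p = parity([disk1, disk2])   # stored on the parity disk

# Simulate losing disk1: XOR the surviving disk with the parity block.
rebuilt = parity([disk2, p])
assert rebuilt == disk1
print(rebuilt)  # b'HELLO_WORLD!'
```

The same XOR identity generalizes to any number of data disks, which is why a RAID 5 array survives the failure of exactly one member.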
5. Storage Media
77. Hard Disks - Local Storage vs. Storage Subsystem
• JBOD and RAID are disk
configurations that can be
implemented within an individual
workstation or server.
• Disk configurations that connect
directly to the server and do not
interact directly with the network are
called DAS (Direct-Attached Storage).
• When such configurations are
implemented as physically and
logically independent entities with
direct network accessibility, they
become storage subsystems.
• Two terms for referring to storage
subsystems of this nature are NAS
(Network Attached Storage) and
SAN (Storage Area Network).
5. Storage Media
78. Hard Disks - Network Attached Storage (NAS)
• A NAS unit is essentially a self-contained storage device
connected to a network, with the sole purpose of
supplying file-based data storage services to other
devices on the network (pictured upper).
• The operating system and other software on the NAS
unit provide the functionality of data storage, file
systems, and access to files, and the management of
these functionalities.
• A NAS unit is not designed to carry out general-purpose
computing tasks, although it may technically be possible
to run other software on it.
• NAS units usually do not have a keyboard or display,
and are controlled and configured over the network,
often by connecting a browser to their network address.
• The alternative to NAS storage on a network is to use a
computer as a file server. (pictured lower)
5. Storage Media
79. Hard Disks - Storage Area Network (SAN)
• A SAN is a type of network architecture designed to
attach HDDs to servers (externally) in such a way
that the devices appear as locally attached to the
operating system.
• SANs help to increase storage capacity utilization,
since multiple servers share the storage space on
the disk arrays.
• The common application of a SAN is for the use of
frequently accessed data that requires high-speed
access to the hard drives such as email servers,
databases, and high usage file servers.
• A common element of SANs is the use of Fibre
Channel control cards in the servers connected to
the SAN, and a Fibre Channel Switch to manage
SAN traffic over Fibre Optic cabling (pictured right).
5. Storage Media
80. Magnetic Tape
• A tape drive is a data storage device that reads
and writes data stored on a magnetic tape.
• Magnetic tape is typically used for archival storage
of data stored on hard drives. Tape media
generally has a favourable unit cost and long
archival stability.
• Instead of allowing random-access to data as hard
disk drives do, tape drives only allow for
sequential-access of data.
• A hard disk drive can move its read/write heads to
any random part of the disk platters in a very short
amount of time, but a tape drive must spend a
considerable amount of time winding tape between
reels to read any one particular piece of data.
• As a result of sequential access, tape drives have
very slow average seek times. Despite the slow
seek time, tape drives can stream data to tape very
quickly. Modern tape drives can reach continuous
data transfer rates of up to 80 MB/s
5. Storage Media
81. Magnetic Tape continued…
• Tape drives can range in capacity from a few
megabytes to hundreds of gigabytes, uncompressed.
• In marketing materials, tape storage is usually described
assuming a 2:1 compression ratio, so a tape
drive might be labelled 80/160 (GB), meaning that the native
storage capacity is 80 GB while the compressed
capacity can reach approximately 160 GB in many situations.
• A tape library or tape jukebox, is a storage device
which contains one or more tape drives, a number of
slots to hold tape cartridges, a barcode reader to
identify tape cartridges and an automated method for
loading tapes (a robot).
• These devices can store immense amounts of data,
currently ranging from 20 terabytes up to more than 50
petabytes of data, or about one hundred thousand
times the capacity of a typical hard drive and well in
excess of capacities achievable with network attached
storage.
5. Storage Media
82. Optical Storage Media
• Optical storage is any storage method in which data is
written and read with a laser for archival or backup
purposes. For several years, proponents have spoken
of optical storage as a near-future replacement for
both hard drives in personal computers and tape
backup in mass storage.
• Optical media is more durable than tape and less
vulnerable to environmental conditions. On the other
hand, it tends to be slower and offers lower storage
capacities than modern hard disk drives .
• A number of new optical formats, such as Blu-ray, use
a blue laser to dramatically increase capacities.
• Common optical media types include:
• Magneto-Optical Disk (MO)
• Compact Disk (CD)
• Digital Video Disk (DVD)
• Blu-ray Disk (BD)
5. Storage Media
83. Optical Disks - Magneto-Optical (MO)
• A Magneto-Optical disk consists of a ferromagnetic
material sealed beneath a plastic coating.
• During recording, the laser power is increased so it can
heat the material up to the Curie point in a single spot.
This allows an electromagnet positioned on the
opposite side of the disc to change the local magnetic
polarization, and the polarization is retained when
temperature drops.
• During reading, a laser projects a beam on the disk and
according to the magnetic state of the surface, the
reflected light varies due to the Magneto-optic Kerr
effect.
• MO Disks are high capacity storage (650MB – 9.2 GB),
and are typically used for data archival.
• As with tape media, they can be used in standalone MO
drives or in MO jukeboxes.
5. Storage Media
84. Optical Disks – Compact Disk (CD)
• A Compact Disk (or CD) is an optical disk used to store digital
data, originally developed for storing digital audio.
• A compact disk is made from 1.2 mm thick, almost-pure
polycarbonate plastic (A) and weighs approximately 16 grams.
• A thin layer of aluminium (B) or, more rarely, gold is applied to the
surface to make it reflective, and is protected by a film of lacquer
(C) that is normally spin coated directly on top of the reflective layer,
upon which the label print is applied (D).
• CD data is stored as a series of tiny indentations known as “pits”,
encoded in a spiral track moulded into the top of the polycarbonate
layer. The areas between pits are known as “lands”.
• A CD is read by focusing a 780 nm wavelength (near infrared)
semiconductor laser (E) through the bottom of the polycarbonate
layer. The change in height between pits and lands results in a
difference in intensity in the light reflected. By measuring the
intensity change with a photodiode, the data can be read from the
disc.
• The digital data on a CD begin at the center of the disc and
proceeds toward the edge, which allows adaptation to the different
size formats available. Standard CDs are available in two sizes. By
far the most common is 120 mm in diameter, with a 74- or 80-
minute audio capacity and a 650 or 700 MB data capacity.
5. Storage Media
85. Optical Disks – Digital Video Disk (DVD)
• Most DVDs are of the same dimensions as compact
discs (CDs) but store more than six times as much data.
• DVD uses 650 nm wavelength laser diode light as
opposed to 780 nm for CD. This permits a smaller pit to
be etched on the media surface compared to CDs,
allowing for a DVD's increased storage capacity.
• A Dual Layer DVD differs from its usual DVD counterpart
by employing a second physical layer within the disc
itself. DVD drives with Dual Layer capability access the
second layer by shining the laser through the first
semitransparent layer.
• DVD recordable discs supporting this technology are
backward compatible with some existing DVD players
and DVD-ROM drives.
• Standard Single Layer (Single Sided) DVDs have a
capacity of 4.7 GB, and Dual Layer (Single sided) DVDs
support 8.5 GB.
5. Storage Media
86. Optical Disks – Blu-ray (BD)
• The name Blu-ray Disc (BD) is derived from
the blue laser used to read and write to this
type of disc. Because of the wavelength (405
nanometres), substantially more data can be
stored on a Blu-ray Disc than on the DVD
format, which uses a red (650 nm) laser.
• BD supports 25GB in Single Layer and 50GB
in Dual Layer.
• A dual-layer Blu-ray Disc can store six times
the capacity of a dual-layer DVD, or ten and
a half times that of a single-layer DVD.
• Because the Blu-ray Disc data layer is closer
to the surface of the disc, compared to the
DVD standard, it is more vulnerable to
scratches. The first discs were housed in
cartridges for protection.
• BD manufacturers now use proprietary hard-
coating technologies to protect the disks.
5. Storage Media
87. Checkpoint 5
Bits and Bytes
Computer Platforms
PC Architecture
Networking
Storage Media
6. Databases
7. Client Server Applications
8. Web Applications
9. Advanced Topics
10. Multi-Tier Support
88. Databases
• A database (DB) is a structured collection of
records or data that is stored in a computer
system.
• The structure of a database is achieved by
organizing the data according to a database
model.
• A database management system (DBMS) is
computer software that manages one or more
databases.
• A DBMS controls the organization, storage,
management, and retrieval of data in a database.
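As a minimal illustration of these definitions, Python's built-in sqlite3 module bundles a small DBMS (SQLite) that organizes, stores, and retrieves records; the table and data here are illustrative:

```python
import sqlite3

# A database: a structured collection of records (the employee table),
# managed by a DBMS (the SQLite engine behind the sqlite3 module).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO employee (name) VALUES ('Ada'), ('Grace')")
for row in conn.execute("SELECT id, name FROM employee ORDER BY id"):
    print(row)
# (1, 'Ada')
# (2, 'Grace')
conn.close()
```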
6. Databases
89. Database Concepts
• A database model is the structure or
format of a database, described in a
formal language supported by the
database management system.
• The database model in most common
use today is the relational data model.
• Entity relationship modeling is a
relational database modeling method,
used to produce a conceptual schema of
a database called an Entity
Relationship Diagram (ERD).
• A database schema is a term often used
to refer to a graphical depiction of the
detailed database structure.
6. Databases
90. Database Modelling
• Database design occurs in two phases:
logical design and physical design.
• Logical design involves understanding the
information that needs to be managed, and
sorting that information into logical groups
called entities.
• An entity may be abstractly defined as a
thing which is recognized as being capable
of an independent existence and which can
be uniquely identified.
• Entities themselves are typically described
with nouns, whereas, the relationships
between entities are described with verbs.
• An entity relationship diagram is
intended to sort information to be managed
into unique entities and demonstrate the
relationships between them.
6. Databases
91. Logical Database Design
• An entity relationship diagram can be
depicted in many ways, but ultimately
the point is to ensure a uniform
understanding of the information to be
managed, and to be able to sort and
assign attributes we need to record to
the correct entities.
• This process of identifying entities and
assigning attributes is called data
normalization.
• When logical design is translated to
physical design, entities become tables,
which consist of rows of records, and
columns of attributes or fields. (Think of
Excel)
• In order to understand normalization, we
need to understand the purpose of a
relational database, which is, to store
information efficiently and without
anomalies.
• Anomalies are problems that could
result in data loss or data corruption
when inserting, updating or deleting
records in tables.
6. Databases
92. Database Anomalies
• If the same information is expressed on multiple rows; updates to
the information may result in logical inconsistencies. For example,
each record in an un-normalized "Employees' Skills" table might
contain Employee Addresses; thus a change of address for a
particular employee will potentially need to be applied to multiple
records (one for each of his skills). If the employee's address is
updated on some records but not others—then the table is left in
an inconsistent state. This phenomenon is known as an update
anomaly.
• There are circumstances in which certain facts cannot be
recorded at all. For example, each record in an un-normalized
"Faculty and Their Courses" table might contain a Faculty ID,
Faculty Name, Faculty Hire Date, and Course Code—thus we can
record the details of any faculty member who teaches at least one
course, but we cannot record the details of a newly-hired faculty
member who has not yet been assigned to teach any courses.
This phenomenon is known as an insertion anomaly.
• There are circumstances in which the deletion of data
representing certain facts necessitates the deletion of data
representing completely different facts. The "Faculty and Their
Courses" table described in the previous example suffers from
this type of anomaly, for if a faculty member temporarily ceases to
be assigned to any courses, we must delete the last of the
records on which that faculty member appears. This phenomenon
is known as a deletion anomaly.
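The update anomaly from the first example can be reproduced directly with a denormalized table, here sketched with Python's sqlite3 (table and column names are illustrative):

```python
import sqlite3

# An un-normalized "Employees' Skills" table repeats the address on one
# row per skill, so a partial update leaves the table inconsistent.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp_skills (employee TEXT, skill TEXT, address TEXT)")
conn.executemany("INSERT INTO emp_skills VALUES (?, ?, ?)", [
    ("Jones", "Typing", "10 Main St"),
    ("Jones", "Filing", "10 Main St"),
])
# The address change is applied to only one of Jones's rows.
conn.execute("UPDATE emp_skills SET address = '99 Oak Ave' "
             "WHERE employee = 'Jones' AND skill = 'Typing'")
addresses = {row[0] for row in conn.execute(
    "SELECT DISTINCT address FROM emp_skills WHERE employee = 'Jones'")}
print(addresses)  # two different addresses for one employee
conn.close()
```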
6. Databases
93. Database Normalization
• Normalization is a systematic way of ensuring that a database structure is
suitable for general-purpose querying and free of certain undesirable
characteristics—insertion, update, and deletion anomalies—that could lead
to a loss of data integrity.
• The normalization process requires that information in its table state meet,
at a minimum, three measures of normalization, or normal forms.
• 1st Normal Form – Remove Repeating Groups
• 2nd Normal Form – Remove Partial Dependencies
• 3rd Normal Form – Remove Transitive Dependencies
6. Databases
94. First Normal Form (1NF)
Rule: There should be no repeating
groups.
• As an example, it might be tempting
to make an invoice table with columns
for the first, second, and third line
item.
• This, however, violates first
normal form, and would result in large
rows and wasted space whenever an invoice
had fewer than the maximum number
of line items.
• In this example, first normal form
requires that we make a separate line
item table (entity), with its own unique
identifier. In this case the combination
of invoice number and line number
uniquely identify the remaining
contents of each record.
• This unique identifier is called a
Primary Key.
6. Databases
95. Second Normal Form (2NF)
Rule: Each column must depend on the
*entire* primary key.
• In the previous example, the customer
information was put in the line item
table.
• The trouble with that is that the
customer belongs with the invoice, not
directly with each line on the invoice.
• i.e. One customer can have many
invoices, One invoice can have many
line items.
• Putting customer information in the line
item table would cause redundant
customer data, with its inherent
overhead and modification anomalies.
• Second normal form requires that we
place the customer information in the
invoice table.
6. Databases
96. Third Normal Form (3NF)
Rule: Each column must depend
*directly* on the primary key.
• As an example, the customer address
could go in the invoice table, but this
would cause data redundancy if several
invoices were for the same customer.
• It would also cause an update anomaly
when the customer changes address.
• Third normal form requires the customer
address go in a separate customer table
with its own Primary Key, with only the
customer number in the invoice table.
• The customer number in the invoice
table can now be referred to as a Foreign
Key.
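The invoice example built across the three normal-form slides can be sketched as a normalized schema, here using Python's sqlite3; table and column names are illustrative, not from the original:

```python
import sqlite3

# 1NF: line items in their own table with a composite primary key.
# 2NF: customer data on the invoice, not repeated per line item.
# 3NF: the customer address in its own table, referenced by a foreign key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        customer_no INTEGER PRIMARY KEY,
        name        TEXT,
        address     TEXT);
    CREATE TABLE invoice (
        invoice_no  INTEGER PRIMARY KEY,
        customer_no INTEGER REFERENCES customer(customer_no));
    CREATE TABLE line_item (
        invoice_no  INTEGER REFERENCES invoice(invoice_no),
        line_no     INTEGER,
        product     TEXT,
        quantity    INTEGER,
        PRIMARY KEY (invoice_no, line_no));
""")
conn.execute("INSERT INTO customer VALUES (1, 'Acme', '10 Main St')")
conn.execute("INSERT INTO invoice VALUES (100, 1)")
conn.execute("INSERT INTO line_item VALUES (100, 1, 'Widget', 3)")
row = conn.execute("""
    SELECT c.name, l.product, l.quantity
    FROM invoice i
    JOIN customer c ON c.customer_no = i.customer_no
    JOIN line_item l ON l.invoice_no = i.invoice_no
""").fetchone()
print(row)  # ('Acme', 'Widget', 3)
conn.close()
```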
6. Databases
97. Physical Database Design
• Physical database design applies the
logical view of entities and attributes to a
physical set of tables and fields.
• This process includes defining the type
of data to be stored in each field,
whether fields are mandatory or
optional, and defining the physical
nature of the relationships between
tables.
• Relationships govern referential integrity
between records.
• In this case a customer can have many
orders, but an order must have one
customer.
• A customer cannot be deleted without
orphaning the orders associated to that
customer.
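The orphaned-orders rule can be sketched with a foreign key constraint, here in Python's sqlite3 (which requires explicitly enabling foreign key enforcement); names are illustrative:

```python
import sqlite3

# Referential integrity: a customer that still has orders
# cannot be deleted, so no orders are left orphaned.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in
conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id));
""")
conn.execute("INSERT INTO customer VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders VALUES (10, 1)")
try:
    conn.execute("DELETE FROM customer WHERE id = 1")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)  # the DBMS refuses to orphan the order
conn.close()
```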
6. Databases
98. Database Management Systems
• A Database Management System (DBMS) is a set
of software programs that controls the organization,
storage, management, and retrieval of data in a
database.
• A DBMS accepts requests for data from application
software and instructs the operating system to
transfer the appropriate data.
• When a DBMS is used, information systems can be
changed much more easily as the organization's
information requirements change. New categories
of data can be added to the database without
disruption to the existing system.
• Database servers are computers that only run a
DBMS and related software, which holds the actual
databases.
• Database servers are usually multiprocessor
computers, with generous memory and RAID disk
arrays used for stable storage. Connected to one
or more servers via a high-speed channel,
hardware database accelerators are also used in
large volume transaction processing environments.
6. Databases
99. Structured Query Language (SQL)
• SQL (Structured Query Language) is a database
language designed for the creation and modification of
tables, retrieval and management of data, and
management of access controls in a database
management system.
• SQL was standardized first by ANSI and later by the ISO.
Most database management systems implement a
majority of one of these standards and add their
proprietary extensions.
• The most common operation in SQL databases is the
query.
• SQL queries allow the user to specify a description of the
desired result set, but it is left to the DBMS
to plan, optimize, and perform the physical operations
necessary to produce that result set in the most efficient
manner possible.
• An SQL query includes a list of columns to be included in
the final result.
• Commercial software typically includes pre-built queries
behind reports and application interfaces, designed to
operate within the parameters of the application itself,
although many applications also include facilities for
users to write their own queries, if necessary.
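A minimal sketch of such a query, using Python's sqlite3 with illustrative data: the statement only describes the result set (columns, grouping, ordering), and the engine plans the physical work:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sale (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sale VALUES (?, ?)",
                 [("East", 100), ("East", 250), ("West", 75)])
# Declarative: we list the columns we want; the DBMS decides
# how to scan, group, and aggregate the underlying rows.
rows = conn.execute("""
    SELECT region, SUM(amount) AS total
    FROM sale
    GROUP BY region
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('East', 350), ('West', 75)]
conn.close()
```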
6. Databases
100. Open Database Connectivity
• Open Database Connectivity (ODBC) makes it
possible to access any data from any application,
regardless of which database management system
is handling the data.
• The ODBC specification offers a procedural
Application Programming Interface (API) for using
SQL queries to access data.
• An implementation of ODBC will contain one or
more applications, a core ODBC "Driver Manager"
library, and one or more "database drivers".
• The Driver Manager, independent of the applications
and DBMS, acts as an interpreter between the
applications and the database drivers, whereas the
database drivers contain the DBMS-specific details.
Thus a programmer can write applications that use
standard types and features without concern for the
specifics of each DBMS that the applications may
encounter.
6. Databases
101. Operational Databases
• Operational systems are optimized for
preservation of data integrity and speed of
recording of business transactions through
use of database normalization and an entity-
relationship model.
• Operational system designers generally
follow the rules of data normalization in order
to ensure data integrity. Fully normalized
database designs often result in information
from a business transaction being stored in
dozens to hundreds of tables.
• Relational databases are efficient at
managing the relationships between tables.
These databases have very fast insert/update
performance because only a small amount of
data in those tables is affected each time a
transaction is processed.
• In order to improve performance, older data
are usually periodically purged from
operational systems into what is known as
an off-line operational database.
6. Databases
102. Data Warehouses
• Data warehouses are different from
operational databases in that they are
optimized for speed of data retrieval.
• Frequently the data contained in a data
warehouse is de-normalized via a
dimension-based model.
• To speed data retrieval, data
warehouse data is often stored multiple
times - in its most granular form and in
summarized forms called aggregates.
• Data warehouse data is gathered from
the operational systems and held in the
data warehouse, typically even after the
data has been purged from the
operational systems.
6. Databases
103. Data Marts
• A data mart is a subset of an
organizational data warehouse, usually
oriented to a specific purpose or major
data subject, that may be distributed to
support business needs.
• Data marts are analytical databases
designed to focus on specific business
functions for a specific community
within an organization.
• In practice, the terms data mart and
data warehouse each tend to imply the
presence of the other in some form.
• Most writers using the term seem to
agree that the design of a data mart
tends to start from an analysis of user
needs and that a data warehouse
tends to start from an analysis of what
data already exists and how it can be
collected in such a way that the data
can later be used.
6. Databases
104. Dimensional Database Modelling
• In dimensional modelling, information is
partitioned into "facts", which is generally
transactional data, and "dimensions", which are
the reference information that gives context to the
facts.
• The facts that the data warehouse / data marts
helps analyze are classified along different
dimensions: the fact tables hold the main data,
while the usually smaller dimension tables
describe each value of a dimension and can be
joined to fact tables as needed.
• For example, a sales transaction can be broken
up into facts such as the number of products
ordered and the price paid for the products, and
into dimensions such as order date, customer
name, product number, order ship-to and bill-to
locations, and salesperson responsible for
receiving the order.
• It is common for dimension tables to consolidate
redundant data and be in second normal form,
while fact tables are usually in third normal form
because all data depend on either one dimension
or all of them, not on combinations of a few
dimensions.
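The sales example above can be sketched as a tiny star schema, here in Python's sqlite3 with illustrative names: a fact table of transactions joined to small dimension tables that give each fact its context:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product  (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (          -- one row per sales transaction
        product_id  INTEGER REFERENCES dim_product(product_id),
        customer_id INTEGER REFERENCES dim_customer(customer_id),
        order_date  TEXT,
        quantity    INTEGER,
        price_paid  INTEGER);
""")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget')")
conn.execute("INSERT INTO dim_customer VALUES (1, 'Acme')")
conn.execute("INSERT INTO fact_sales VALUES (1, 1, '2024-01-15', 3, 30)")
# Dimensions are joined to the fact table as needed for analysis.
row = conn.execute("""
    SELECT p.name, c.name, f.quantity, f.price_paid
    FROM fact_sales f
    JOIN dim_product  p ON p.product_id  = f.product_id
    JOIN dim_customer c ON c.customer_id = f.customer_id
""").fetchone()
print(row)  # ('Widget', 'Acme', 3, 30)
conn.close()
```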
6. Databases
105. Business Intelligence and Data Warehousing
• Business Intelligence (BI)
tools aim to support better
business decision-making,
and can also be referred to as
DSS (or Decision Support
Systems).
• BI tools are commonly
associated with data
warehousing, and reporting on
information in context of the
“big picture”.
• BI tools are applications
specially designed to interact
with data warehouses, and
allow creation of various
reports and views of
aggregated enterprise
transactional data.
6. Databases
106. Checkpoint 6
Bits and Bytes
Computer Platforms
PC Architecture
Networking
Storage Media
Databases
7. Client Server
8. Web Applications
9. Advanced Topics
10. Multi-Tier Support
107. Client-Server Architecture
• Client-server describes the relationship between two
computer programs in which one program, the client program,
makes a service request to another, the server program.
• Standard networked functions such as email exchange, web
access and database access, are based on the client-server
model.
• The client-server model has become one of the central ideas
of network computing. Most business applications being
written today use the client-server model.
• In marketing, the term has been used to distinguish distributed
computing by dispersed computers on a network from the
"monolithic" centralized computing of mainframe computers.
This distinction has largely disappeared as mainframes and
their applications have also turned to the client-server model
and become part of network computing.
• Each instance of the client software can send data requests to
one or more connected servers. In turn, the servers can
accept these requests, process them, and return the
requested information to the client. Although this concept can
be applied for a variety of reasons to many different kinds of
applications, the architecture remains fundamentally the
same.
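The request/response cycle described above can be sketched with Python's standard socket module: a toy server accepts a request, processes it (here, by upper-casing it), and returns the result to the client. Ports and payloads are illustrative:

```python
import socket
import threading

def serve_once(sock):
    """Accept one client request, process it, and reply."""
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(request.upper())  # "process" the request

# Server: bind to loopback on an OS-chosen free port and listen.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client: send a service request and receive the response.
client = socket.create_connection(server.getsockname())
client.sendall(b"hello server")
print(client.recv(1024))  # b'HELLO SERVER'
client.close()
server.close()
```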
7. Client Server
108. 2 and 3-Tier Architecture
• The most basic type of client-server architecture
employs only two types of hosts: clients and
servers. This type of architecture is sometimes
referred to as two-tier. It allows devices to share
files and resources. In two-tier architecture,
the client acts as one tier, and the
application, in combination with the server,
acts as the other tier.
• In 3-tier architecture, there is an intermediary level,
meaning the architecture is generally split up
between:
• A client which requests the resources,
equipped with a user interface for
presentation purposes.
• The application server (also called
middleware), whose task it is to provide the
requested resources, but by calling on
another server.
• The database server, which provides the
application server with the data it requires.
7. Client Server
109. Multi-Tiered Architecture
• 2-tier architecture is client-server architecture where the server is
versatile; it is capable of directly responding to all of the client's
resource requests.
• In 3-tier architecture; however, the server-level applications are
remote from one another. Each server is specialized with hosting
a certain service (for example: web server/database server).
• This can be extended to n-tier or multi-tier architecture, which
provides:
• A greater degree of flexibility
• Increased security, as security can be defined for each service, and at
each level
• Increased performance, as tasks are shared between servers
• In most cases, a client-server architecture enables the roles and
responsibilities of a computing system to be distributed among
several independent computers that are known to each other
only through a network.
• This makes it possible to replace, repair, upgrade, or even
relocate a server while its clients remain both unaware and
unaffected by that change. This independence from change is
also referred to as encapsulation.
7. Client Server
110. Thin Client
• A thin client is a client computer or client software in client-
server networks which depends primarily on a central server
for processing activities, and mainly focuses on conveying
input and output between the user and the remote server.
• Many thin client hardware devices run only web browsers or
remote desktop software, meaning that all significant
processing occurs on the server. However, recent devices
marketed as thin clients can run complete operating systems,
qualifying them as diskless nodes or hybrid clients. Some
thin clients are also called "access terminals."
• As an application program, a thin client communicates with
an application server and relies on it for most significant
elements of its business logic. The application server
typically runs on a host computer
located nearby on a LAN, or at a distance on a WAN or MAN.
• A thin client does most of its processing on a central server
with as little hardware and software as possible at the user's
location, and as much as necessary at some centralized
managed site.
• Web applications are largely considered to be thin client
applications, where little or no client side software installation
is required for usage.
7. Client Server
111. Thick Client
• A thick client is a computer in client-server
architecture networks which typically provides rich
functionality independently of the central server. The
name is contrasted to thin client, which describes a
computer heavily dependent on a server's
applications.
• A thick client still requires at least periodic
connection to a network or central server, but is
often characterised by the ability to perform many
functions without that connection.
• In contrast, a thin client generally does as little
processing as possible and relies on accessing the
server each time input data needs to be processed
or validated.
• In designing a client-server application, a decision is
to be made as to which parts of the task should be
executed on the client, and which on the server.
This decision can crucially affect the cost of clients
and servers, the robustness and security of the
application as a whole, and the flexibility of the
design to later modification or portability.
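The thin/thick split described above can be sketched in a few lines. This is a minimal, illustrative example (not any particular product): a thin client forwards raw input to the server and lets the server do the processing, while a thick client computes locally.

```python
# Sketch: where does the processing happen? A loopback socket server
# stands in for the central server; uppercasing stands in for business logic.
import socket
import threading

def server(sock):
    """Server-side processing: receive raw input, do the work, reply."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())  # the "business logic" lives here

# Set up a loopback listener on an OS-assigned port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=server, args=(listener,), daemon=True).start()

# Thin-client style: send input, let the server compute the result.
client = socket.create_connection(listener.getsockname())
client.sendall(b"hello")
thin_result = client.recv(1024)
client.close()

# Thick-client style: compute locally; the server is only for sync/storage.
thick_result = b"hello".upper()

print(thin_result, thick_result)  # b'HELLO' b'HELLO'
```

Either way the answer is the same; the design decision is about cost, robustness, and where the code must be maintained.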
7. Client Server
112. Client-Server Architecture: Common Server Types
• Domain Controller
• File Server
• Print Server
• Database Server
• Application Server
• Web Server
• Mail Server
• FTP Server
• Fax Server
[Diagram: example city network — Notes server, internal and external mail
relays, map server, public and intranet web servers, WebSphere application
server, LDAP server, database and proxy servers, and development
web/database servers, linking the City Internal Network to the Internet.]
7. Client Server
113. Domain Controller
• On Windows Server Systems, a domain
controller (DC) is a server that responds to
security authentication requests (logging
in, checking permissions, etc.) within the
Windows Server domain. On modern
Windows servers, this is achieved with the
help of Active Directory.
• Active Directory is a directory service used
to store information about the network
resources across a domain and also
centralize the network.
• An Active Directory structure is a hierarchical
framework of objects. The objects fall into
three broad categories: resources (e.g.,
printers), services (e.g., email), and users
(user accounts and groups).
• Active Directory networks can vary from a
small installation with a few computers, users
and printers to tens of thousands of users,
many different domains and large server
farms spanning many geographical
locations.
7. Client Server
114. File Server
• A file server is a computer attached to a network that has
the primary purpose of providing a location for the shared
storage of computer files that can be accessed by the
workstations attached to the computer network.
• The term server highlights the role of the machine in the
client-server scheme, where the clients are the
workstations using the storage.
• A file server is usually not performing any calculations, and
does not run any programs on behalf of the clients. It is
designed primarily to enable the rapid storage and retrieval
of data where the heavy computation is provided by the
workstations.
• A file server may be dedicated or non-dedicated. A
dedicated server is generally designed specifically for use
as a file server, with workstations attached for reading and
writing files and databases.
• File servers generally offer some form of system security to
limit access to files to specific users or groups. In large
organizations, this is a task usually delegated to what is
known as directory services such as Novell's eDirectory or
Microsoft's Active Directory.
• File Server storage can be directly within the Server (DAS
– see storage media) or externally in the form of a SAN.
7. Client Server
115. Print Server
• A print server is a computer or device that is
connected to one or more printers and to client
computers over a network, and can accept print
jobs from the computers and send the jobs to the
appropriate printers.
• The term can refer to:
• A host computer running Windows OS with
one or more shared printers. Client
computers connect using the Microsoft
Network Printing protocol.
• A computer running some other operating
system, but still implementing the Microsoft
Network Printing protocol.
• A dedicated device that connects one or
more printers to a local area network (LAN).
It typically has a single LAN connector, such
as an RJ-45 socket, and one or more
physical ports (e.g. serial, parallel or USB
(Universal Serial Bus)) to provide
connections to printers. In essence this
dedicated device provides printing protocol
conversion from what was sent by client
computers to what will be accepted by the
printer.
7. Client Server
116. Database Server
• A database server is a computer
program that provides database services
to other computer programs or
computers, as defined by the client-
server model.
• The term may also refer to a computer
dedicated to running such a program.
• Database management systems
frequently provide database server
functionality, and some DBMSs rely
exclusively on the client-server model for
database access.
• Database servers can operate in a
master-slave configuration, where
database master servers are central and
primary locations of data while database
slave servers are synchronized backups
of the master.
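The query pattern a database client uses is the same whether the DBMS is embedded or a networked server; Python's DB-API makes this uniform. As a sketch, sqlite3 (in-memory, embedded) stands in here for a real database server — a server driver would take a host and port instead of a file path, but the cursor/execute/fetch flow is identical.

```python
# DB-API sketch of the client-server query pattern, using sqlite3
# in place of a networked database server.
import sqlite3

conn = sqlite3.connect(":memory:")   # a server driver would take host/port
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()

# Parameterized query: the client sends SQL, the server returns rows.
cur.execute("SELECT name FROM users WHERE id = ?", (1,))
row = cur.fetchone()
print(row[0])  # alice
conn.close()
```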
7. Client Server
117. Application Server
• An application server is a server that hosts the business logic and business processes of an
application separately from the application’s interfaces or presentation.
• This type of architecture is most common in Internet/Intranet and Extranet applications.
• By centralizing business logic on an individual or small number of server machines, updates
and upgrades to the application for all users can be guaranteed. There is no risk of old
versions of the application accessing or manipulating data in an older, incompatible manner.
• An application server acts as a central point through which access to data and portions of the
application itself can be managed.
• This architecture is considered a security benefit, devolving responsibility for authentication away
from the potentially insecure client layer without exposing the database layer.
118. Web Server
• A Web Server is a computer that is responsible for
accepting HTTP requests from clients (e.g., web
browsers) and serving them HTTP responses along
with optional data contents, which usually are web
pages such as HTML documents and linked objects
(images, files etc.).
• Web Servers typically authenticate HTTP requests,
and can be configured for secure encrypted
transactions using Secure Sockets Layer (SSL) via HTTPS.
• The origin of the content sent by the server is called:
• static if it comes from an existing file lying on a
filesystem;
• dynamic if it is dynamically generated by some
other program or script or application
programming interface (API) called by the web
server. (Example, a web application called from a
separate application server)
• Serving static content is usually much faster (from 2 to
100 times) than serving dynamic content, especially if
the latter involves data pulled from a database.
• Web servers can be referred to as the presentation
layer in n-tier client-server environments.
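The HTTP request/response cycle above can be demonstrated with Python's standard library: a tiny server generates a "dynamic" response per request (a real web server would also map URLs to static files on disk), and a client fetches it. This is a toy sketch, not a production server.

```python
# Minimal HTTP request/response round trip on the loopback interface.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"you asked for {self.path}".encode()  # generated, not a file
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/index.html")
resp = conn.getresponse()
text = resp.read().decode()
print(resp.status, text)  # 200 you asked for /index.html
server.shutdown()
```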
7. Client Server
119. Mail Server
• A mail server is a computer acting as a Mail
Transfer Agent (MTA).
• An MTA is a computer program that transfers
electronic mail messages from one computer to
another.
• A Mail Delivery Agent (MDA) is software that
delivers e-mail messages right after they've been
accepted on a server, distributing them to recipients'
individual mailboxes.
• A mail delivery agent is not necessarily combined
with an MTA, although on many systems the two
functions are implemented by the same program or
package.
• While electronic mail server software uses Simple
Mail Transfer Protocol (SMTP) to send and receive
mail messages, user-level client mail applications
typically only use SMTP for sending messages to a
mail server for relaying.
• For receiving messages, client applications usually
use either the Post Office Protocol (POP) or the
Internet Message Access Protocol (IMAP) to
access their mail box accounts on a mail server.
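What a client hands to an MTA over SMTP can be sketched with the standard library. The message construction below runs as-is; the smtplib submission is shown commented out because it needs a live mail server, and the hostname in it is hypothetical.

```python
# Build an RFC 5322 message the way a mail client would before
# submitting it to a relaying MTA over SMTP.
from email.message import EmailMessage
# import smtplib

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Status report"
msg.set_content("All systems nominal.")

# Submission to a relaying MTA would look like (hostname is hypothetical):
# with smtplib.SMTP("mail.example.com", 587) as smtp:
#     smtp.starttls()          # encrypt the session
#     smtp.send_message(msg)   # SMTP is used only for sending/relaying

print(msg["Subject"])  # Status report
```

Retrieval from the mailbox would then go over POP or IMAP (`poplib`/`imaplib`), as the slide notes.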
7. Client Server
120. FTP Server
• An FTP Server is a piece of software that is
running on a computer that uses the File
Transfer Protocol to store and share files.
Remote computers can connect anonymously,
if allowed, or with a user name and password
in order to download files from this server
using a piece of software called a FTP Client.
• FTP runs over TCP/IP. An FTP server listens
on port 21 (default) for incoming connections
from FTP clients. A connection to this port from
the FTP Client forms the control stream on
which commands are passed from the FTP
client to the FTP server and on occasion from
the FTP server to the FTP client.
• Most recent web browsers and file managers
can connect to FTP servers, although they
may lack the support for protocol extensions
such as FTPS (FTP over SSL/TLS).
This allows manipulation of remote files over
FTP through an interface similar to that used
for local files. This is done via an FTP URL
(i.e. ftp:// rather than http://)
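Python's `ftplib` reflects the default control-connection port described above. The connection steps are commented out since they need a live server (the hostname is hypothetical), but the port default can be checked directly.

```python
# FTP client sketch: the control connection defaults to port 21.
from ftplib import FTP

print(FTP.port)  # 21, the default control-connection port

# Against a live server this would look like (hostname is hypothetical):
# ftp = FTP("ftp.example.com")   # opens the control stream on port 21
# ftp.login()                    # anonymous login, if the server allows it
# ftp.retrlines("LIST")          # commands travel over the control stream
# ftp.quit()
```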
7. Client Server
121. Fax Server
• A fax server is software running on a
dedicated computer which is equipped with
one or more fax-capable modems (or
dedicated fax boards) attached to telephone
lines.
• A fax server’s function is to accept
documents from users, convert them into
faxes, and transmit them, as well as to
receive fax calls and either store the
incoming documents or pass them on to
users.
• Users may communicate with the server in
several ways, through either a local network
or the Internet. In a big organization with
heavy fax traffic, the computer hosting the fax
server may be dedicated to that function, in
which case the computer itself may also be
known as a fax server.
• Most fax servers employ their own fax client
software, and integrate directly with Mail
servers for sending and receiving faxes
through an organization's e-mail client.
7. Client Server
122. Checkpoint 7
Bits and Bytes
Computer Platforms
PC Architecture
Networking
Storage Media
Databases
Client Server Applications
8. Web Applications
9. Advanced Topics
10. Multi-Tier Support
123. Internetworking
• Internetworking involves connecting two or more distinct computer
networks or network segments via a common routing technology.
The result is called an internetwork (often shortened to internet).
• Any interconnection among or between public, private, commercial,
industrial, or governmental networks may also be defined as an
internetwork.
• In modern practice, the interconnected networks use the Internet
Protocol. There are at least three variants of internetwork,
depending on who administers and who participates in them:
• Intranet
• Extranet
• Internet
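One practical distinction between the intranet and Internet variants is the address space: intranets typically use private (RFC 1918) ranges, which are not routable on the public Internet. Python's `ipaddress` module can classify an address either way (the addresses below are illustrative).

```python
# Private (intranet) vs. globally routable (Internet) addresses.
import ipaddress

intranet_host = ipaddress.ip_address("10.1.2.3")       # RFC 1918 range
internet_host = ipaddress.ip_address("93.184.216.34")  # publicly routable

print(intranet_host.is_private, internet_host.is_global)  # True True
```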
8. Web Applications
124. Intranet
• An intranet is a set of networks, using
the Internet Protocol and IP-based
tools such as web browsers and file
transfer applications, that is under the
control of a single administrative entity.
• That administrative entity closes the
intranet to all but specific, authorized
users.
• Most commonly, an intranet is the
internal network of an organization. A
large intranet will typically have at least
one web server to provide users with
organizational information.
• Example: Inside Toronto Portal
8. Web Applications
125. Extranet
• An extranet is a network or internetwork
that is limited in scope to a single
organization or entity, but which also has
limited connections to the networks of one
or more other trusted organizations or
entities.
• For instance, a company's customers may
be given access to some part of its intranet,
creating in this way an extranet, while at
the same time the customers may not be
considered 'trusted' from a security
standpoint.
• Example: Alberta Education’s Extranet
between the Alberta Government and
regulated education facilities and
providers.
8. Web Applications
126. Internet
• The Internet is the global network of interconnected
computers, enabling users to share information along
multiple channels.
• Typically, a computer that connects to the Internet can
access information from a vast array of Internet-available
servers and other computers by moving information from
them to the computer's local memory.
• A majority of widely accessible information on the Internet
consists of inter-linked hypertext documents and other
resources of the World Wide Web (WWW).
• The terms Internet and World Wide Web are often used in
every-day speech without much distinction; however, the
Internet and the World Wide Web are not one and the
same.
• The Internet is a global data communications system. It is
the hardware and software infrastructure that provides
connectivity between computers.
• In contrast, the Web is one of the services communicated
via the Internet. It is a collection of interconnected
documents and other resources, linked by hyperlinks and
URLs.
• "The Internet is not a thing, a place, a single technology, or
a mode of governance. It is an agreement." – John Gage,
Director of Science, Sun Microsystems, Inc.
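The Web's documents are tied together by hyperlinks and URLs, which split into well-defined parts. A quick sketch with the standard library (the URL is illustrative):

```python
# Decompose a URL into scheme, host, path, and query.
from urllib.parse import urlparse

url = urlparse("http://www.example.com/docs/page.html?lang=en")
print(url.scheme)  # http
print(url.netloc)  # www.example.com
print(url.path)    # /docs/page.html
print(url.query)   # lang=en
```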
8. Web Applications
127. Web Applications
• In software engineering, a web application is an
application that is accessed via web browser over an
internetwork such as the Internet, an intranet, or an
extranet, wholly depending upon the required scope of
accessibility.
• Web applications are software programs that are coded in
a browser-supported language (such as HTML,
JavaScript, Java, etc.) and reliant on a web browser.
• Web applications are popular due to the ubiquity of web
browsers, and the convenience of using a web browser as
a client, sometimes called a thin client.
• The technology employed for web applications is platform
independent; in other words the HTML, Java and
JavaScript components of web applications are based on
standards that apply to all flavours of desktop hardware
and operating systems.
• Web applications rely on n-tier architecture in order to
separate what is exposed to the web at large from the
assets that need to be protected.
8. Web Applications
128. Web Security - Firewalls
• A firewall is a collection of security measures
designed to prevent unauthorized electronic
access to a networked computer system. It is
typically a device or dedicated computer
configured to permit, deny, encrypt, decrypt, or
proxy all computer traffic between different
security domains based upon a set of rules and
other criteria.
• A firewall's function within a network is similar to
that of physical firewalls and fire doors in building
construction. In the former case, it is used to
prevent network intrusion to the private network.
In the latter case, it is intended to contain and
delay structural fire from spreading to adjacent
structures.
• A firewall's basic task is to regulate some of the
flow of traffic between computer networks of
different trust levels. Typical examples are the
Internet which is a zone with no trust and an
internal network which is a zone of higher trust.
• A zone with an intermediate trust level, situated
between the Internet and a trusted internal
network, is often referred to as a "perimeter
network" or Demilitarized zone (DMZ).
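A firewall's basic task — checking traffic against an ordered rule set and permitting or denying it — can be sketched as a first-match-wins lookup. This is a toy model; real firewalls also match on interfaces, protocols, and connection state, and the rules and addresses below are invented for illustration.

```python
# Toy firewall: first matching rule decides; default is deny.
import ipaddress

# Each rule: (source network, destination port or None for any, action).
RULES = [
    (ipaddress.ip_network("10.0.0.0/8"), 3306, "permit"),  # intranet -> DB
    (ipaddress.ip_network("0.0.0.0/0"), 443, "permit"),    # anyone -> HTTPS
    (ipaddress.ip_network("0.0.0.0/0"), None, "deny"),     # default deny
]

def check(src_ip, dst_port):
    """Return the action for a packet from src_ip to dst_port."""
    src = ipaddress.ip_address(src_ip)
    for net, port, action in RULES:
        if src in net and (port is None or port == dst_port):
            return action
    return "deny"

print(check("10.1.2.3", 3306))     # permit: trusted zone to the database
print(check("203.0.113.9", 3306))  # deny:   untrusted zone to the database
print(check("203.0.113.9", 443))   # permit: untrusted zone to HTTPS
```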
8. Web Applications
129. Web Security - DMZ
• A Demilitarized Zone in computing is named
after the military usage of the term, but is also
known as a Data Management Zone or a
perimeter network.
• A DMZ is a physical or logical sub-network that
contains and exposes an organization's external
services to a larger, un-trusted network, usually
the Internet.
• The purpose of a DMZ is to add an additional
layer of security to an organization's Local Area
Network (LAN); an external attacker only has
access to equipment in the DMZ, rather than the
whole of the network.
• Generally, any service that is being provided to
users in an external network could be placed in
the DMZ. The most common of these services
are web servers, mail servers, ftp servers, VoIP
servers and DNS servers. In some situations,
additional steps need to be taken to be able to
provide secure services.
8. Web Applications
130. Web Applications and DMZ
• Web servers may need to communicate
with an internal database to provide
some specialized services.
• Since the database server should not
be publicly accessible as it may contain
sensitive information, it should not be in
the DMZ.
• Generally, it is not a good idea to allow
the web server to communicate directly
with the internal database server.
• Instead, an application server can be
used to act as a medium for
communication between the web server
and the database server. This may be
more complicated, but provides another
layer of security.
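The mediation described above can be sketched as a call chain: the web tier never touches the data store directly; it delegates to the application tier, which enforces business rules first. All names and data here are illustrative stand-ins for real tiers.

```python
# Three-tier sketch: web tier -> application tier -> data tier.
DATABASE = {"alice": {"balance": 120}, "bob": {"balance": 45}}  # stand-in DB

def app_server_get_balance(user):
    """Application tier: validates the request before touching data."""
    if user not in DATABASE:
        raise KeyError("unknown user")
    return DATABASE[user]["balance"]

def web_server_handle(request_path):
    """Web tier (in the DMZ): parses the request, delegates inward."""
    user = request_path.rstrip("/").split("/")[-1]
    try:
        return 200, str(app_server_get_balance(user))
    except KeyError:
        return 404, "not found"

print(web_server_handle("/balance/alice"))    # (200, '120')
print(web_server_handle("/balance/mallory"))  # (404, 'not found')
```

The indirection costs a function call here; in a real deployment it costs a network hop, bought back as an extra security boundary.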
8. Web Applications
131. Checkpoint 8
Bits and Bytes
Computer Platforms
PC Architecture
Networking
Storage Media
Databases
Client Server Applications
Web Applications
9. Advanced Topics
10. Multi-Tier Support
132. Server Cluster
• A server cluster or a server farm is a group of linked
computers, working together closely so that in many
respects they form a single computer.
• Clusters are usually deployed to improve performance
and/or availability over that provided by a single
computer.
• High-availability clusters (also known as Failover
Clusters) are implemented primarily for the purpose of
improving the availability of services which the cluster
provides. They operate by having redundant nodes,
which are then used to provide service when system
components fail. HA cluster implementations attempt to
use redundancy of cluster components to eliminate
single points of failure.
• HA clusters are often used for critical databases, file
sharing on a network, business applications, and
customer services such as electronic commerce
websites.
• Load-balancing clusters operate by distributing a
workload evenly over multiple back end nodes. Typically
the cluster will be configured with multiple redundant
load-balancing front ends. Since each element in a
load-balancing cluster has to offer full service, it can be
thought of as an active/active HA cluster, where all
available servers process requests.
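The simplest policy a load-balancing front end can apply across active back-end nodes is round-robin, which the following sketch illustrates (the node names are made up):

```python
# Round-robin distribution across back-end nodes.
from itertools import cycle

backends = ["node-a", "node-b", "node-c"]  # illustrative node names
next_backend = cycle(backends).__next__

# Five incoming requests are spread evenly across the pool.
assigned = [next_backend() for _ in range(5)]
print(assigned)  # ['node-a', 'node-b', 'node-c', 'node-a', 'node-b']
```

Production balancers layer health checks and weighting on top, so a failed node is skipped rather than assigned work.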
9. Advanced
133. Application Requirements for Clusters
• Not every application can run in a high-availability
cluster environment, and the necessary design
decisions need to be made early in the software design
phase.
• In order to run in a high-availability cluster environment,
an application must satisfy at least the following
technical requirements:
• There must be a relatively easy way to start,
stop, force-stop, and check the status of the
application. In practical terms, this means the
application must have a command line interface
or scripts to control the application, including
support for multiple instances of the application.
• The application must be able to use shared
storage (NAS/SAN).
• Most importantly, the application must store as
much of its state on non-volatile shared storage
as possible. Equally important is the ability to
restart on another node at the last state before
failure using the saved state from the shared
storage.
• Application must not corrupt data if it crashes or
restarts from the saved state.
• The last two criteria are critical to reliable functionality
in a cluster, and are the most difficult to satisfy fully.
Finally, licensing compliance must be observed.
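The checkpoint/restart requirement above can be sketched in a few lines: the application periodically writes its state to shared storage, so a replacement node can resume from the last saved state after a failure. A temporary directory stands in here for NAS/SAN storage.

```python
# Checkpoint state to "shared storage" and restart from the last checkpoint.
import json
import os
import tempfile

shared_path = os.path.join(tempfile.mkdtemp(), "state.json")  # stand-in SAN

def checkpoint(state):
    # Write-then-rename so a crash mid-write never corrupts the saved state.
    tmp = shared_path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, shared_path)

def restart():
    with open(shared_path) as f:
        return json.load(f)

checkpoint({"processed": 41})
checkpoint({"processed": 42})  # the node fails after this checkpoint...
recovered = restart()          # ...and another node resumes from here
print(recovered)  # {'processed': 42}
```

The atomic rename is what satisfies the "must not corrupt data if it crashes" criterion in this sketch.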
9. Advanced
134. Hardware Requirements for Clusters
• HA clusters usually utilize all available techniques to
make the individual systems and shared infrastructure
as reliable as possible. These include:
• Disk mirroring so that failure of internal disks
does not result in system crashes
• Redundant network connections so that single
cable, switch, or network interface failures do not
result in network outages
• Redundant Storage area network or SAN data
connections so that single cable, switch, or
interface failures do not lead to loss of
connectivity to the storage (this would violate the
share-nothing architecture)
• Redundant electrical power inputs on different
circuits, usually both or all protected by
Uninterruptible power supply units, and
redundant power supply units, so that single
power feed, cable, UPS, or power supply failures
do not lead to loss of power to the system.
• These features help minimize the chances that the
clustering failover between systems will be required. In
such a failover, the service provided is unavailable for
at least a little while, so measures to avoid failover are
preferred.
9. Advanced
135. Virtualization
• Virtualization is the hosting of multiple virtual
computers or operating systems on one physical set of
computing hardware. It hides the physical
characteristics of computing platform and provides
users with a logical “emulated” computing platform.
• Many physical servers can be replaced by one larger
physical server, to increase the utilization of costly
hardware resources such as CPU.
• Although hardware is consolidated, typically the OSs
are not. Instead, each OS running on a physical server
becomes converted to a distinct OS running inside a
virtual machine. The large server can "host" many such
"guest" virtual machines. This is known as Physical-to-
Virtual (P2V) transformation.
• A virtual machine can be more easily controlled and
inspected from outside than a physical one, and its
configuration is more flexible.
• A new virtual machine can be provisioned as needed
without the need for an up-front hardware purchase.
Also, a virtual machine can easily be relocated from one
physical machine to another as needed.
9. Advanced
136. Why Organizations Virtualize
• Today’s powerful computer hardware was
designed to run a single operating system and a
single application. This leaves most machines
vastly underutilized.
• Virtualization lets you run multiple virtual
machines on a single physical machine, sharing
the resources of that single computer across
multiple environments. Different virtual machines
can run different operating systems and multiple
applications on the same physical computer.
• Maximizing the utilization of hardware results in
higher energy efficiency for power intensive data
centers operated by large organizations.
9. Advanced
137. Checkpoint 9
Bits and Bytes
Computer Platforms
PC Architecture
Networking
Storage Media
Databases
Client Server Applications
Web Applications
Advanced Topics
10. Multi-Tier Support
138. Technical Support
• The key challenge to organizations delivering
support, is doing so in an efficient manner.
• Given the many complex layers of technology
in place to enable business activities, there are
many places where things can go wrong.
• In order to efficiently support the growing
complexity in organizational technology
infrastructure a support model is required to
ensure that the right problems are addressed
by the right people.
• Examining this model from a business
standpoint allows us to understand how
computing problems are examined and
escalated to ensure a solution can be found in
a reasonable time frame.
10. Multi-Tier Support
139. Multi-Tiered Support
• Technical support is often subdivided into
tiers, or levels, in order to better serve a
business or customer base.
• The number of levels a business uses to
organize its technical support group
depends on its needs and on its ability to
sufficiently serve its customers or
users.
• The reason for providing a multi-tiered
support system instead of one general
support group is to provide the best
possible service in the most efficient
possible manner.
• Success of the organizational structure is
dependent on the technicians’
understanding of their level of
responsibility and commitments, their
customer response time commitments,
and when to appropriately escalate an
issue and to which level.
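The escalation flow described above can be sketched as a routing function: each tier handles what it can and passes the rest upward, with tier 4 being the vendor. The complexity scale and thresholds below are invented for illustration.

```python
# Toy ticket router for a four-tier support model.
def route_ticket(complexity):
    """Return the tier expected to resolve a ticket (complexity 1-10)."""
    if complexity <= 3:
        return 1   # password resets, basic setup, known fixes
    if complexity <= 6:
        return 2   # diagnostics, component replacement, remote control
    if complexity <= 9:
        return 3   # expert research and development of new solutions
    return 4       # escalate to the hardware or software vendor

print([route_ticket(c) for c in (2, 5, 8, 10)])  # [1, 2, 3, 4]
```

In practice the routing criterion is not a single number but the lower tier's documented findings on the work order, which is why reviewing the ticket history matters.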
10. Multi-Tier Support
140. Tier 1 Support
• This is the initial support level responsible for basic
customer issues.
• The first job of a Tier I support specialist is to gather
the customer’s information and to determine the
customer’s issue by analyzing the symptoms and
figuring out the underlying problem.
• Once identification of the underlying problem is
established, the specialist can begin sorting through
the possible solutions available.
• Tier 1 technical support activities include
troubleshooting, such as verifying physical layer
issues, resolving username and password problems,
uninstalling/reinstalling basic software applications,
verification of proper hardware and software set up,
and assistance with navigating around application
menus.
• Personnel at this level have a basic to general
understanding of the product or service and do not
require competency for solving complex issues.
• The goal for this group is to handle 70%-80% of the
user problems before finding it necessary to escalate
the issue to a higher level.
10. Multi-Tier Support
141. Tier 2 Support
• This is a more in-depth technical support level than Tier I
containing experienced and more knowledgeable
personnel on a particular product or service.
• Technicians in this realm of knowledge are responsible for
assisting Tier I personnel solve basic technical problems
and for investigating elevated issues by confirming the
validity of the problem and seeking for known solutions
related to these more complex issues.
• However, prior to the troubleshooting process, it is
important that the technician review the work order to see
what has already been accomplished by the Tier I
technician and how long the technician has been working
with the particular customer. This is a key element in
meeting both the customer and business needs as it
allows the technician to prioritize the troubleshooting
process and properly manage his or her time.
• If a problem is new and/or personnel from this group
cannot determine a solution, they are responsible for
raising this issue to the Tier III technical support group.
• Tier 2 activities may include, but are not limited to, onsite
installations or replacements of various hardware
components, software repair, diagnostic testing, and the
utilization of remote control tools used to take over the
user’s machine for the sole purpose of troubleshooting
and finding a solution to the problem.
10. Multi-Tier Support
142. Tier 3 Support
• This is the highest level of support in a three-tiered
technical support model responsible for handling the most
difficult or advanced problems.
• These individuals are experts in their fields and are
responsible for not only assisting both Tier I and Tier II
personnel, but with the research and development of
solutions to new or unknown issues.
• Upon encountering new problems, Tier III personnel must
first determine whether or not they can solve the problem
and may require the customer’s contact information so
that the technician can have adequate time to
troubleshoot the issue and find a solution.
• In some instances, an issue may be so problematic to the
point where the product cannot be salvaged and must be
replaced. Such extreme problems are also sent to the
original developers for in-depth analysis (Tier 4).
• If it is determined that a problem can be solved, this group
is responsible for designing and developing one or more
courses of action, evaluating each of these courses in a
test case environment, and implementing the best solution
to the problem. Once the solution is verified, it is delivered
to the customer and made available for future
troubleshooting and analysis.
• If a problem cannot be solved by Tier 3 there is one
further path of escalation.
10. Multi-Tier Support
143. Tier 4 Support
• While not universally used, a fourth level often
represents an escalation point beyond the
organization. This is generally a hardware or
software vendor.
• Within a corporate incident management system it
is important to continue to track incidents even
when they are being actioned by a vendor and the
Service Level Agreement (or SLA) may have
specific provision for this.
• Vendors tend to follow the same support tiers,
and ultimately any serious problems are
escalated to their own engineering team for
further investigation.
• Issues escalated to Tier 4 generally result in
software patches and updates.
10. Multi-Tier Support