A
Acetylene
Simplest alkyne, C2H2. A
colourless, flammable, explosive
gas, it is used as a fuel in
welding and cutting metals and as
a raw material for many organic
compounds and plastics. It is
produced by reaction of water
with calcium carbide, passage of
a hydrocarbon through an electric
arc, or partial combustion of
methane. Its decomposition liberates heat and, depending on its
purity, it can be explosive. An acetylene torch reaches
about 6,000 °F (3,300 °C), hotter than combustion of any other
known gas mixture.
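The calcium-carbide route mentioned above follows the standard
balanced equation (a textbook reaction, stated here for reference):

```
CaC2 + 2 H2O → C2H2 + Ca(OH)2
(calcium carbide + water → acetylene + calcium hydroxide)
```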
Alcohols
Alcohol, any of a class of organic compounds characterized by
one or more hydroxyl (−OH) groups attached to a carbon atom of
an alkyl group (hydrocarbon chain). Alcohols may be considered
as organic derivatives of water (H2O) in which one of
the hydrogen atoms has been
replaced by an alkyl group,
typically represented by R in
organic structures. For example,
in ethanol (or ethyl alcohol) the
alkyl group is the ethyl group,
−CH2CH3.
Alcohols are among the most common organic compounds. They are
used as sweeteners and in making perfumes, are valuable
intermediates in the synthesis of other compounds, and are among
the most abundantly produced organic chemicals in industry.
Perhaps the two best-known alcohols are ethanol and methanol (or
methyl alcohol). Ethanol is used in toiletries, pharmaceuticals,
and fuels, and it is used to sterilize hospital instruments. It
is, moreover, the alcohol in alcoholic beverages. The
anesthetic ether is also made from ethanol. Methanol is used as
a solvent, as a raw material for the manufacture
of formaldehyde and special resins, in special fuels,
in antifreeze, and for cleaning metals.
Amino Acid
Amino acids are biologically
important organic compounds
composed of amine (-NH2) and
carboxylic acid (-COOH)
functional groups, along
with a side-chain specific
to each amino acid. The key
elements of an amino acid
are carbon, hydrogen, oxygen,
and nitrogen, though other elements are found in the side-chains
of certain amino acids. About 500 amino acids are known and can
be classified in many ways. They can be classified according to
the core structural functional groups' locations as alpha- (α-),
beta- (β-), gamma- (γ-) or delta- (δ-) amino acids; other
categories relate to polarity, pH level, and side-chain group
type (aliphatic, acyclic, aromatic, containing hydroxyl
or sulfur, etc.). In the form of proteins, amino acids comprise
the second-largest component (water is the largest) of
human muscles, cells and other tissues. Outside proteins, amino
acids perform critical roles in processes such
as neurotransmitter transport and biosynthesis.
Amino acids having both the amine and the carboxylic acid groups
attached to the first (alpha-) carbon atom have particular
importance in biochemistry. They are known as 2-, alpha-, or α-
amino acids (generic formula H2NCHRCOOH in most cases, where R is
an organic substituent known as a "side-chain"); often the term
"amino acid" is used to refer specifically to these. They
include the 22 proteinogenic ("protein-building") amino acids,
which combine into peptide chains ("polypeptides") to form the
building-blocks of a vast array of proteins. These are all L-
stereoisomers ("left-handed" isomers), although a few D-amino
acids ("right-handed") occur in bacterial envelopes and
some antibiotics. Twenty of the proteinogenic amino acids are
encoded directly by triplet codons in the genetic code and are
known as "standard" amino acids. The other two ("non-standard"
or "non-canonical") are pyrrolysine (found
in methanogenic organisms and some bacteria)
and selenocysteine (present in many noneukaryotes as well as most
eukaryotes). For example, 25 human proteins include
selenocysteine (Sec) in their primary structure, and the
structurally characterized enzymes (selenoenzymes) employ Sec as
the catalytic moiety in their active sites. Pyrrolysine and
selenocysteine are encoded via variant codons; for example,
selenocysteine is encoded by the UGA stop codon together with a
SECIS element. Codon–tRNA combinations not found in nature can
also be used to "expand" the genetic code and create novel proteins known
as alloproteins incorporating non-proteinogenic amino acids.
Many important proteinogenic and non-proteinogenic amino acids
also play critical non-protein roles within the body. For
example, in the human brain, glutamate (standard glutamic acid)
and gamma-amino-butyric acid ("GABA", non-standard gamma-amino
acid) are, respectively, the main excitatory and inhibitory
neurotransmitters; hydroxyproline (a major component of
the connective tissue collagen) is synthesised from proline; the
standard amino acid glycine is used to synthesise porphyrins
used in red blood cells; and the non-standard carnitine is used
in lipid transport.
Nine of the 20 standard amino acids are called "essential" for
humans because they cannot be created from other compounds by
the human body and, so, must be taken in as food. Others may
be conditionally essential for certain ages or medical
conditions. Essential amino acids may also differ
between species.
Because of their biological significance, amino acids are
important in nutrition and are commonly used in nutritional
supplements, fertilizers, and food technology. Industrial uses
include the production of drugs, biodegradable plastics,
and chiral catalysts.
Aromatic Hydrocarbon
An aromatic
hydrocarbon or arene (or
sometimes aryl hydrocarbon) is
a hydrocarbon characterized by
alternating double and single
bonds between carbon atoms. The
term 'aromatic' was assigned
before the physical mechanism
determining aromaticity was
discovered, and was derived from the fact that many of the
compounds have a sweet scent. The configuration of six carbon
atoms in aromatic compounds is known as a benzene ring, after
the simplest possible such hydrocarbon, benzene. Aromatic
hydrocarbons can be monocyclic (MAH) or polycyclic (PAH).
Some non-benzene-based compounds called heteroarenes, which
follow Hückel's rule, are also aromatic compounds. In these
compounds, at least one carbon atom is replaced by one of
the heteroatoms oxygen, nitrogen, or sulfur. Examples of non-
benzene compounds with aromatic properties are furan, a
heterocyclic compound with a five-membered ring that includes an
oxygen atom, and pyridine, a heterocyclic compound with a six-
membered ring containing one nitrogen atom.
Atoms
The atom is a basic unit of matter that consists of a dense
central nucleus surrounded by a cloud of negatively
charged electrons. The atomic nucleus contains a mix of
positively charged protons and
electrically
neutral neutrons (except in the
case of hydrogen-1, which is the
only stable nuclide with no
neutrons). The electrons of an
atom are bound to the nucleus by
the electromagnetic force.
Likewise, a group of atoms can
remain bound to each other
by chemical bonds based on the
same force, forming a molecule. An
atom containing an equal number of protons and electrons is
electrically neutral, otherwise it is positively or negatively
charged and is known as an ion. An atom is classified according
to the number of protons and neutrons in its nucleus: the number
of protons determines the chemical element, and the number of
neutrons determines the isotope of the element.
Chemical atoms, which in science now carry the simple name of
"atom," are minuscule objects with diameters of a few tenths of
a nanometer and tiny masses proportional to the volume implied
by these dimensions. Atoms can only be observed individually
using special instruments such as the scanning tunneling
microscope. Over 99.94% of an atom's mass is concentrated in the
nucleus, with protons and neutrons having roughly equal mass.
Each element has at least one isotope with an unstable nucleus
that can undergo radioactive decay. This can result in
a transmutation that changes the number of protons or neutrons
in a nucleus. Electrons that are bound to atoms possess a set of
stable energy levels, or orbitals, and can undergo transitions
between them by absorbing or emitting photons that match the
energy differences between the levels. The electrons determine
the chemical properties of an element, and strongly influence an
atom's magnetic properties. The principles of quantum
mechanics have been successfully used to model the observed
properties of the atom.
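The level-transition picture in the last paragraph can be made
concrete with hydrogen, whose bound-state energies follow the
well-known formula E_n = −13.6 eV / n². A minimal sketch; the
13.6 eV and 1240 eV·nm constants are standard rounded values, not
taken from this text:

```python
# Photon energy and wavelength for an electronic transition in hydrogen.
# Bound electrons occupy discrete levels E_n = -13.6 eV / n**2; an emitted
# photon carries exactly the energy difference between two levels.

RYDBERG_EV = 13.6   # approximate ground-state binding energy of hydrogen, eV
HC_EV_NM = 1240.0   # approximate value of h*c in eV*nm

def level_energy(n: int) -> float:
    """Energy of the n-th hydrogen level in eV (negative = bound)."""
    return -RYDBERG_EV / n**2

def photon_wavelength_nm(n_upper: int, n_lower: int) -> float:
    """Wavelength of the photon emitted when the electron drops n_upper -> n_lower."""
    delta_e = level_energy(n_upper) - level_energy(n_lower)  # positive for emission
    return HC_EV_NM / delta_e

balmer_alpha = photon_wavelength_nm(3, 2)  # ~656 nm, the red Balmer-alpha line
```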
B
Bacteria
Bacteria (singular: bacterium)
constitute a large domain of
prokaryotic microorganisms. Typically
a few micrometers in length, bacteria
have a number of shapes, ranging from
spheres to rods and spirals. Bacteria
were among the first life forms to
appear on Earth, and are present in
most of its habitats. Bacteria inhabit
soil, water, acidic hot springs,
radioactive waste, and the deep
portions of Earth's crust. Bacteria
also live in symbiotic and parasitic
relationships with plants and animals.
They are also known to have flourished
in manned spacecraft.
There are typically 40 million bacterial cells in a gram of soil
and a million bacterial cells in a milliliter of fresh water.
There are approximately 5×10^30 bacteria on Earth, forming a
biomass which exceeds that of all plants and animals. Bacteria
are vital in recycling nutrients, with many of the stages in
nutrient cycles dependent on these organisms, such as the
fixation of nitrogen from the atmosphere and putrefaction. In
the biological communities surrounding hydrothermal vents and
cold seeps, bacteria provide the nutrients needed to sustain
life by converting dissolved compounds such as hydrogen sulphide
and methane to energy. On 17 March 2013, researchers reported
data that suggested bacterial life forms thrive in the Mariana
Trench, the deepest spot on the Earth. Other researchers
reported related studies that microbes thrive inside rocks up to
1900 feet below the sea floor under 8500 feet of ocean off the
coast of the northwestern United States. According to one of the
researchers, "You can find microbes everywhere — they're
extremely adaptable to conditions, and survive wherever they
are."
Most bacteria have not been characterized, and only about half
of the phyla of bacteria have species that can be grown in the
laboratory. The study of bacteria is known as bacteriology, a
branch of microbiology.
There are approximately ten times as many bacterial cells in the
human flora as there are human cells in the body, with the
largest number of the human flora being in the gut flora, and a
large number on the skin. The vast majority of the bacteria in
the body are rendered harmless by the protective effects of the
immune system, and some are beneficial. However, several species
of bacteria are pathogenic and cause infectious diseases,
including cholera, syphilis, anthrax, leprosy, and bubonic
plague. The most common fatal bacterial diseases are respiratory
infections, with tuberculosis alone killing about 2 million
people a year, mostly in sub-Saharan Africa. In developed
countries, antibiotics are used to treat bacterial infections
and are also used in farming, making antibiotic resistance a
growing problem. In industry, bacteria are important in sewage
treatment and the breakdown of oil spills, the production of
cheese and yogurt through fermentation, and the recovery of
gold, palladium, copper and other metals in the mining sector,
as well as in biotechnology, and the manufacture of antibiotics
and other chemicals.
Once regarded as plants constituting the class Schizomycetes,
bacteria are now classified as prokaryotes. Unlike cells of
animals and other eukaryotes, bacterial cells do not contain a
nucleus and rarely harbour membrane-bound organelles. Although
the term bacteria traditionally included all prokaryotes, the
scientific classification changed after the discovery in the
1990s that prokaryotes consist of two very different groups of
organisms that evolved from an ancient common ancestor. These
evolutionary domains are called Bacteria and Archaea.
Biodiversity
Biodiversity is the degree of variation of life. This can refer to genetic
variation, species variation, or ecosystem variation within an
area, biome, or planet. Terrestrial biodiversity tends to be
highest at low latitudes near the equator, which seems to be the
result of the warm climate and high primary productivity. Marine
biodiversity tends to be highest
along coasts in the Western Pacific,
where sea surface temperature is
highest, and in a mid-latitudinal
band in all oceans. Biodiversity
generally tends to cluster in hot
spots, and has been increasing
through time but is likely to
slow in the future.
Rapid environmental changes
typically cause mass
extinctions. One estimate is that
only 1–3% of the species that have
ever existed on Earth are extant.
The earliest evidence for life on Earth includes graphite found to
be biogenic in 3.7 billion-year-old metasedimentary
rocks discovered in Western Greenland and microbial
mat fossils found in 3.48 billion-year-old sandstone discovered
in Western Australia. Since life began on Earth, five major mass
extinctions and several minor events have led to large and
sudden drops in biodiversity. The Phanerozoic eon (the last
540 million years) marked a rapid growth in biodiversity via
the Cambrian explosion—a period during which the majority
of multicellular phyla first appeared. The next 400 million
years included repeated, massive biodiversity losses classified
as mass extinction events. In the Carboniferous, rainforest
collapse led to a great loss of plant and animal life.
The Permian–Triassic extinction event, 251 million years ago,
was the worst; vertebrate recovery took 30 million years. The
most recent, the Cretaceous–Paleogene extinction event, occurred
65 million years ago and has often attracted more attention than
others because it resulted in the extinction of the dinosaurs.
The period since the emergence of humans has displayed an
ongoing biodiversity reduction and an accompanying loss
of genetic diversity. Named the Holocene extinction, the
reduction is caused primarily by human impacts,
particularly habitat destruction. Conversely, biodiversity
impacts human health in a number of ways, both positively and
negatively.
Biomass
Biomass is biological material derived from living, or recently
living organisms. It most often refers to plants or plant-
derived materials which are
specifically called lignocellulosic
biomass. As an energy source,
biomass can either be used directly
via combustion to produce heat, or
indirectly after converting it to
various forms of biofuel.
Conversion of biomass to biofuel
can be achieved by different
methods which are broadly
classified into: thermal, chemical,
and biochemical methods.
Wood remains the largest biomass
energy source today; examples include forest residues (such as
dead trees, branches and tree stumps), yard clippings, wood
chips and even municipal solid waste. In the second sense,
biomass includes plant or animal matter that can be converted
into fibers or other industrial chemicals, including biofuels.
Industrial biomass can be grown from numerous types of plants,
including miscanthus, switchgrass, hemp, corn, poplar, willow,
sorghum, sugarcane, bamboo and a variety of tree species,
ranging from eucalyptus to oil palm (palm oil).
Plant energy is produced by crops specifically grown for use as
fuel that offer high biomass output per hectare with low input
energy. Some examples of these plants are wheat, which typically
yields 7.5–8 tonnes of grain per hectare, and straw,
which typically yields 3.5–5 tonnes per hectare in the
UK. The grain can be used for liquid transportation fuels while
the straw can be burned to produce heat or electricity. Plant
biomass can also be degraded from cellulose to glucose through a
series of chemical treatments, and the resulting sugar can then
be used as a first generation biofuel.
Biomass can be converted to other usable forms of energy like
methane gas or transportation fuels like ethanol and biodiesel.
Rotting garbage, and agricultural and human waste, all release
methane gas—also called "landfill gas" or "biogas." Crops, such
as corn and sugar cane, can be fermented to produce the
transportation fuel, ethanol. Biodiesel, another transportation
fuel, can be produced from left-over food products like
vegetable oils and animal fats. Also, biomass-to-liquids (BTL)
fuels and cellulosic ethanol are still under research.
Biosynthesis
Biosynthesis (also called biogenesis or
anabolism) is a multi-step, enzyme-
catalyzed process where substrates are
converted into more complex products.
In biosynthesis, simple compounds are
modified, converted into other
compounds, or joined together to
form macromolecules. This process
often consists of metabolic pathways.
Some of these biosynthetic pathways
are located within a single
cellular organelle, while others
involve enzymes that are located
within multiple cellular organelles.
Examples of these biosynthetic
pathways include the production of lipid membrane components
and nucleotides.
The prerequisite elements for biosynthesis
include: precursor compounds, chemical energy (e.g. ATP), and
catalytic enzymes which may require coenzymes (e.g. NADH, NADPH).
These elements create monomers, the building blocks for
macromolecules. Some important biological macromolecules
include: proteins, which are composed of amino acid monomers
joined via peptide bonds, and DNA molecules, which are composed
of nucleotides joined via phosphodiester bonds.
Buoyancy
Buoyancy is an upward force exerted by a fluid that opposes the
weight of an immersed object. In a
column of fluid, pressure increases
with depth as a result of the weight
of the overlying fluid. Thus a
column of fluid, or an object
submerged in the fluid, experiences
greater pressure at the bottom of
the column than at the top. This
difference in pressure results in a
net force that tends to accelerate
an object upwards. The magnitude of
that force is proportional to the
difference in the pressure between
the top and the bottom of the
column, and (as explained by
Archimedes' principle) is also equivalent to the weight of the
fluid that would otherwise occupy the column, i.e. the displaced
fluid. For this reason, an object whose density is greater than
that of the fluid in which it is submerged tends to sink. If the
object is either less dense than the liquid or is shaped
appropriately (as in a boat), the force can keep the object
afloat. This can occur only in a reference frame which either
has a gravitational field or is accelerating due to a force
other than gravity defining a "downward" direction (that is, a
non-inertial reference frame). In a situation of fluid statics,
the net upward buoyancy force is equal to the magnitude of the
weight of fluid displaced by the body.
The center of buoyancy of an object is the centroid of the
displaced volume of fluid.
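Archimedes' principle as described above reduces to a one-line
calculation. A minimal sketch; the densities, volume, and value of
g are illustrative standard figures, not taken from the text:

```python
# Net force on a fully submerged object: buoyant force (weight of the
# displaced fluid) minus the object's own weight. Positive => object rises.

G = 9.81  # gravitational acceleration, m/s^2

def buoyant_force(fluid_density: float, displaced_volume: float) -> float:
    """Weight of the displaced fluid in newtons, per Archimedes' principle."""
    return fluid_density * displaced_volume * G

def net_upward_force(fluid_density: float, object_density: float,
                     volume: float) -> float:
    """Buoyancy minus weight for a fully submerged object of given volume."""
    return (fluid_density - object_density) * volume * G

# A 0.001 m^3 (1 litre) block of oak (~700 kg/m^3) held under water
# (~1000 kg/m^3) feels a net upward force of about 2.9 N, so it floats up.
force = net_upward_force(1000.0, 700.0, 0.001)
```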
C
CELL
Cell, in biology, the unit of structure and function of which
all plants and animals are
composed. The cell is the
smallest unit in the living
organism that is capable of
integrating the essential life
processes. There are many
unicellular organisms, e.g.,
bacteria and protozoans, in
which the single cell performs
all life functions. In higher
organisms, a division of labor
has evolved in which groups of
cells have differentiated into
specialized tissues, which in
turn are grouped into organs
and organ systems.
Cells can be separated into two major groups— prokaryotes,
cells whose DNA is not segregated within a well-defined nucleus
surrounded by a membranous nuclear envelope, and eukaryotes,
those with a membrane-enveloped nucleus. The bacteria (kingdom
Monera) are prokaryotes. They are smaller in size and simpler in
internal structure than eukaryotes and are believed to have
evolved much earlier. All organisms other than bacteria consist
of one or more eukaryotic cells.
All cells share a number of common properties; they store
information in genes made of DNA; they use proteins as their
main structural material; they synthesize proteins in the cell's
ribosomes using the information encoded in the DNA and mobilized
by means of RNA; they use adenosine triphosphate as the means of
transferring energy for the cell's internal processes; and they
are enclosed by a cell membrane, composed of proteins and a
double layer of lipid molecules, that controls the flow of
materials into and out of the cell.
CELLULOSE
Cellulose is an organic compound with the formula (C6H10O5)n,
a polysaccharide consisting
of a linear chain of several
hundred to over ten thousand
β(1→4) linked D-glucose units.
Cellulose is an important
structural component of the
primary cell wall of green
plants, many forms of algae
and the oomycetes. Some
species of bacteria secrete it
to form biofilms. Cellulose is
the most abundant organic
polymer on Earth. The
cellulose content of cotton
fiber is 90%, that of wood is
40–50% and that of dried hemp is approximately 45%.
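The (C6H10O5)n formula above allows a quick back-of-the-envelope
calculation. A minimal sketch using standard atomic masses; the
example chain mass is illustrative, not from the text:

```python
# Molar mass of the cellulose repeat unit (C6H10O5) and the degree of
# polymerization n implied by a given chain molar mass.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol, standard values

def repeat_unit_mass() -> float:
    """Molar mass of one C6H10O5 glucose residue, g/mol (~162)."""
    return (6 * ATOMIC_MASS["C"]
            + 10 * ATOMIC_MASS["H"]
            + 5 * ATOMIC_MASS["O"])

def degree_of_polymerization(chain_molar_mass: float) -> float:
    """Number n of residues in a chain of the given molar mass (g/mol)."""
    return chain_molar_mass / repeat_unit_mass()

unit = repeat_unit_mass()             # ~162.1 g/mol per residue
n = degree_of_polymerization(1.62e6)  # a ~1.6 MDa chain has ~10,000 residues
```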
Cellulose is mainly used to produce paperboard and paper.
Smaller quantities are converted into a wide variety of
derivative products such as cellophane and rayon. Conversion of
cellulose from energy crops into biofuels such as cellulosic
ethanol is under investigation as an alternative fuel source.
Cellulose for industrial use is mainly obtained from wood pulp
and cotton.
Some animals, particularly ruminants and termites, can digest
cellulose with the help of symbiotic micro-organisms that live
in their guts, such as Trichonympha. Humans can digest cellulose
to some extent, however it mainly acts as a hydrophilic bulking
agent for feces and is often referred to as a "dietary
fiber". Cellulose was discovered in 1838 by the French chemist
Anselme Payen, who isolated it from plant matter and determined
its chemical formula. Cellulose was used to produce the first
successful thermoplastic polymer, celluloid, by Hyatt
Manufacturing Company in 1870. Production of rayon ("artificial
silk") from cellulose began in the 1890s and cellophane was
invented in 1912. Hermann Staudinger determined the polymer
structure of cellulose in 1920. The compound was first
chemically synthesized (without the use of any biologically
derived enzymes) in 1992, by Kobayashi and Shoda.
CLEOPATRA
Cleopatra VII Philopator
(Greek: Κλεοπάτρα Φιλοπάτωρ; late
69 BC – August 12, 30 BC), known
to history as Cleopatra, was the
last active pharaoh of Ancient
Egypt, only briefly survived by
her son Caesarion as pharaoh.
She was a member of the Ptolemaic
dynasty, a family of Greek origin
that ruled Ptolemaic Egypt after
Alexander the Great's death
during the Hellenistic period.
The Ptolemies, throughout their
dynasty, spoke Greek and refused
to speak Egyptian, which is the
reason that Greek as well as
Egyptian languages were used on
official court documents such as
the Rosetta Stone. By contrast, Cleopatra did learn to speak
Egyptian and represented herself as the reincarnation of an
Egyptian goddess, Isis.
Cleopatra originally ruled jointly with her father, Ptolemy XII
Auletes, and later with her brothers, Ptolemy XIII and Ptolemy
XIV, whom she married as per Egyptian custom, but eventually she
became sole ruler. As pharaoh, she consummated a liaison with
Julius Caesar that solidified her grip on the throne. She later
elevated her son with Caesar, Caesarion, to co-ruler in name.
After Caesar's assassination in 44 BC, she aligned with Mark
Antony in opposition to Caesar's legal heir, Gaius Julius Caesar
Octavianus (later known as Augustus). With Antony, she bore the
twins Cleopatra Selene II and Alexander Helios, and another son,
Ptolemy Philadelphus (her unions with her brothers had produced
no children). After losing the Battle of Actium to Octavian's
forces, Antony committed suicide. Cleopatra followed suit,
according to tradition killing herself by means of an asp bite
on August 12, 30 BC. She was briefly outlived by Caesarion, who
was declared pharaoh by his supporters but soon killed on
Octavian's orders. Egypt became the Roman province of Aegyptus.
To this day, Cleopatra remains a popular figure in Western
culture. Her legacy survives in numerous works of art and the
many dramatizations of her story in literature and other media,
including William Shakespeare's tragedy Antony and Cleopatra,
Jules Massenet's opera Cléopâtre and the 1963 film Cleopatra. In
most depictions, Cleopatra is portrayed as a great beauty, and
her successive conquests of the world's most powerful men are
taken as proof of her aesthetic and sexual appeal.
CLIMATE
Climate is a measure of
the average pattern of
variation in temperature,
humidity, atmospheric
pressure, wind,
precipitation, atmospheric
particle count and other
meteorological variables in
a given region over long
periods of time. Climate is
different from weather, in that weather only describes the
short-term conditions of these variables in a given region.
A region's climate is generated by the climate system, which has
five components: atmosphere, hydrosphere, cryosphere, land
surface, and biosphere.
The climate of a location is affected by its latitude, terrain,
and altitude, as well as nearby water bodies and their currents.
Climates can be classified according to the average and the
typical ranges of different variables, most commonly temperature
and precipitation. The most commonly used classification scheme
was originally developed by Wladimir Köppen. The Thornthwaite
system, in use since 1948, incorporates evapotranspiration along
with temperature and precipitation information and is used in
studying animal species diversity and potential effects of
climate changes. The Bergeron and Spatial Synoptic
Classification systems focus on the origin of air masses that
define the climate of a region.
Paleoclimatology is the study of ancient climates. Since direct
observations of climate are not available before the 19th
century, paleoclimates are inferred from proxy variables that
include non-biotic evidence such as sediments found in lake beds
and ice cores, and biotic evidence such as tree rings and coral.
Climate models are mathematical models of past, present and
future climates. Climate change may occur over long and short
timescales from a variety of factors; recent warming is
discussed in global warming. Climate is commonly defined as the
weather averaged over a long period. The standard averaging
period is 30 years, but other periods may be used depending on
the purpose. Climate also includes statistics other than the
average, such as the magnitudes of day-to-day or year-to-year
variations.
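The 30-year averaging convention described above is easy to
express in code. A minimal sketch using synthetic annual
temperatures (the numbers are invented for illustration, not real
station data):

```python
# Climate vs. weather: climate is the long-term statistics of weather.
# Given a series of annual mean temperatures, compute the 30-year
# "climate normal" and the year-to-year variability around it.
from statistics import mean, stdev
import random

random.seed(1)
# Illustrative data: 30 annual mean temperatures (deg C) scattered
# around 15 C with modest year-to-year variation.
annual_temps = [15.0 + random.gauss(0, 0.5) for _ in range(30)]

climate_normal = mean(annual_temps)  # the long-term average
variability = stdev(annual_temps)    # magnitude of year-to-year variation
```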
CRO-MAGNON
Cro-Magnon man (krō-măgˈnən,
–mănˈyən), an early Homo sapiens
(the species to which modern humans belong) that lived about
40,000 years ago. Skeletal remains and associated artifacts
of the Aurignacian culture were first found in 1868 in Les
Eyzies, Dordogne, France. Later discoveries were made in a
number of caverns in
the Dordogne valley, Solutré, and in Spain, Germany, and central
Europe. Cro-Magnon man was anatomically identical to modern
humans, but differed significantly from Neanderthals, who
disappear from the fossil record about 10,000 years after the appearance
of Aurignacian and other upper Paleolithic populations (e.g. the
Perigordian culture). The abrupt disappearance of Neanderthal
populations and the associated Mousterian technologies, the
sudden appearance of modern Homo sapiens (who had arisen earlier
in Africa and migrated to Europe) and the associated upper
Paleolithic technologies, and the absence of transitional
anatomical or technological forms have led most researchers to
conclude that Neanderthals were driven to extinction through
competition with Cro-Magnon or related populations. Greater
linguistic competence and cultural sophistication are often
suggested as characteristics tilting the competitive balance in
favour of upper Paleolithic groups. Finely crafted stone and
bone tools, shell and ivory jewelry, and polychrome paintings
found on cave walls all testify to the cultural advancement of
Cro-Magnon man.
D
Density
A graduated cylinder containing various coloured liquids
with different densities.
The density, or more precisely,
the volumetric mass density, of a substance
is its mass per unit volume. The symbol most
often used for density is ρ (the lower case
Greek letter rho). Mathematically, density is
defined as mass divided by volume:

    ρ = m / V

where ρ is the density, m is the mass,
and V is the volume. In some cases (for
instance, in the United States oil and gas
industry), density is loosely defined as
its weight per unit volume, although this is
scientifically inaccurate – this quantity is
more specifically called specific weight.
For a pure substance the density has the same
numerical value as its mass concentration. Different materials
usually have different densities, and density may be relevant
to buoyancy, purity and packaging. Osmium and iridium are the
densest known elements at standard conditions for temperature
and pressure but certain chemical compounds may be denser.
To simplify comparisons of density across different systems of
units, it is sometimes replaced by the dimensionless quantity
"specific gravity" or "relative density", i.e. the ratio of
the density of the material to that of a standard material,
usually water. Thus a specific
gravity less than one means that the substance floats in
water.
The density of a material varies with temperature and
pressure. This variation is typically small for solids and
liquids but much greater for gases. Increasing the pressure on
an object decreases the volume of the object and thus
increases its density. Increasing the temperature of a
substance (with a few exceptions) decreases its density by
increasing its volume. In most materials, heating the bottom
of a fluid results in convection of the heat from the bottom
to the top, due to the decrease in the density of the heated
fluid. This causes it to rise relative to more dense unheated
material.
The reciprocal of the density of a substance is occasionally
called its specific volume, a term sometimes used
in thermodynamics. Density is an intensive property in that
increasing the amount of a substance does not increase its
density; rather it increases its mass.
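The quantities defined in this entry (density, specific gravity,
and specific volume) can be sketched in a few lines. The 1000
kg/m^3 water reference and the ice density are standard
approximate values, not taken from the text:

```python
# rho = m / V; specific gravity = rho / rho_water; specific volume = 1 / rho.

WATER_DENSITY = 1000.0  # kg/m^3, approximate reference density of water

def density(mass_kg: float, volume_m3: float) -> float:
    """Mass per unit volume, kg/m^3."""
    return mass_kg / volume_m3

def specific_gravity(rho: float) -> float:
    """Dimensionless ratio to water; < 1 means the substance floats in water."""
    return rho / WATER_DENSITY

def specific_volume(rho: float) -> float:
    """Reciprocal of density, m^3/kg."""
    return 1.0 / rho

rho_ice = density(917.0, 1.0)           # ice is ~917 kg/m^3
floats = specific_gravity(rho_ice) < 1  # True: ice floats in water
```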
Destructive Interference
Once we have the condition for constructive interference,
destructive interference is a straightforward extension. The
basic requirement for destructive interference is that the two
waves are shifted by half a wavelength. This means that the path
difference for the two waves must be: R1 – R2 = λ/2. But, since
we can always shift a wave by one full wavelength, the full
condition for destructive interference becomes:
R1 – R2 = λ/2 + nλ.
(Figure: waves with the same frequency traveling in opposite directions.)
Now that we have mathematical statements for
the requirements for constructive and destructive interference,
we can apply them to a new situation and see what happens.To
create two waves traveling in opposite directions, we can take
our two speakers and point them at each other, as shown in the
figure above. We again want to find the conditions for
constructive and destructive interference. As we have seen, the
simplest way to get constructive interference is for the
distance from the observer to each source to be equal. Using our
mathematical terminology, we want R1 – R2 = 0, or R1 = R2. Looking
at the figure above, we see that the point where the two paths
are equal is exactly midway between the two speakers (the point
M in the figure). At this point, there will be constructive
interference, and the sound will be strong.
It makes sense to use the midpoint as a reference, as we know
that we have constructive interference. How far must we move our
observer to get to destructive interference? If we move to the
left by an amount x, the distance R1 increases by x and the
distance R2 decreases by x. If R1 increases and R2 decreases, the
difference between the two, R1 – R2, increases by an amount 2x. So,
at the point x, the path difference is R1 – R2 = 2x. Now comes
the tricky part. If 2x happens to be equal to λ/2, we have met
the conditions for destructive interference. Therefore, if 2x
= λ/2, or x = λ/4, we have destructive interference. To put it
another way, in the situation above, if you move one quarter of
a wavelength away from the midpoint, you will find destructive
interference and the sound will sound very weak, or you might
not hear anything at all.
What happens if we keep moving our observation point? If the
path difference, 2x, equals one whole wavelength, we will have
constructive interference: 2x = λ. Solving for x, we have x
= λ/2. In other words, if we move by half a wavelength, we will
again have constructive interference and the sound will be loud.
As we keep moving the observation point, we will find that we
keep going through points of constructive and destructive
interference. This is a bit more complicated than the first
example, where we had either constructive or destructive
interference regardless of where we listened. In this case,
whether there is constructive or destructive interference
depends on where we are listening. However, the fundamental
conditions on the path difference are still the same.
What does this pattern of constructive and destructive
interference look like? We can map it out by marking where we
have constructive and destructive interference along the line
between the speakers.
What we see is a repeating pattern of constructive and
destructive interference, and it takes a distance of λ/4 to get
from one to the other. Where have we seen this pattern before?
At a point of constructive interference, the amplitude of the
wave is large and this is just like an antinode. At a point of
destructive interference, the amplitude is zero and this is like
a node. So, if we think of the points above as antinodes and
nodes, we see that we have exactly the same pattern of nodes and
antinodes as in a standing wave. From this, we must conclude
that two waves traveling in opposite directions create a
standing wave with the same frequency! You can get a more
intuitive understanding of this by looking at the Physlet
entitled Superposition.
Translating the interference conditions into mathematical
statements is an essential part of physics and can be quite
difficult at first. Moreover, a rather subtle distinction was
made that you might not have noticed. On the one hand, we have
some physical situation or geometry. This refers to the
placement of the speakers and the position of the observer. This
really has nothing to do with waves and it simply depends on how
the problem was set up. Given a particular setup, you can always
figure out the path length from the observer to the two sources
of the waves that are going to interfere, and hence you can
also find the path difference R1 – R2.
On the other hand, completely independent of the geometry, there
is a property of waves called superposition that can lead to
constructive or destructive interference. We can express these
conditions mathematically as:
R1 – R2 = 0 + nλ, for constructive interference, and
R1 – R2 = λ/2 + nλ, for destructive interference.
Again, R1 – R2 was determined from the geometry of the problem.
These two aspects must be understood separately: how to
calculate the path difference and the conditions determining the
type of interference.
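As a sketch, these two conditions can be turned into a small classifier for the path difference (Python; the function name and the tolerance are my own choices, not from the text):

```python
def interference_type(r1, r2, wavelength, tol=1e-9):
    """Classify the interference at a point given the two path lengths."""
    diff = (r1 - r2) / wavelength  # path difference in wavelengths
    if abs(diff - round(diff)) < tol:
        return "constructive"      # R1 - R2 = n * lambda
    if abs((diff - 0.5) - round(diff - 0.5)) < tol:
        return "destructive"       # R1 - R2 = lambda/2 + n * lambda
    return "intermediate"

# Speakers facing each other, wavelength 1 m. At the midpoint the two
# path lengths are equal; a quarter wavelength away they differ by lambda/2.
print(interference_type(1.0, 1.0, 1.0))    # constructive (midpoint)
print(interference_type(1.25, 0.75, 1.0))  # destructive (x = lambda/4)
```

Note how the geometry (the two path lengths) and the wave condition (the test on the difference) enter the function separately, mirroring the distinction drawn above.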
Diffraction
Diffraction pattern of red laser beam made on a
plate after passing a small circular hole in
another plate
Diffraction refers to various
phenomena which occur when a wave
encounters an obstacle. In classical
physics, the diffraction phenomenon
is described as the apparent bending
of waves around small obstacles and
the spreading out of waves past small
openings. Similar effects occur when
a light wave travels through a medium
with a varying refractive index, or a sound wave travels through
one with varying acoustic impedance. Diffraction occurs with all
waves, including sound waves, water waves, and electromagnetic
waves such as visible light, X-rays and radio waves. As physical
objects have wave-like properties (at the atomic level),
diffraction also occurs with matter and can be studied according
to the principles of quantum mechanics. Italian
scientist Francesco Maria Grimaldi coined the word "diffraction"
and was the first to record accurate observations of the
phenomenon in 1660.
Richard Feynman wrote:
No-one has ever been able to define the difference
between interference and diffraction satisfactorily. It is just
a question of usage, and there is no specific, important
physical difference between them.
He suggested that when there are only a few sources, say two, we
call it interference, as in Young's slits, but with a large
number of sources, the process is labelled diffraction.
While diffraction occurs whenever propagating waves encounter
such changes, its effects are generally most pronounced for
waves whose wavelength is roughly similar to the dimensions of
the diffracting objects. If the obstructing object provides
multiple, closely spaced openings, a complex pattern of varying
intensity can result. This is due to the superposition,
or interference, of different parts of a wave that travels to
the observer by different paths. The formalism of diffraction
can also describe the way in which waves of finite extent
propagate in free space. For example, the expanding profile of a
laser beam, the beam shape of a radar antenna and the field of
view of an ultrasonic transducer can all be analyzed using
diffraction equations.
The effects of diffraction of light were first carefully
observed and characterized by Francesco Maria Grimaldi, who also
coined the term diffraction, from the Latin diffringere, 'to
break into pieces', referring to light breaking up into
different directions. The results of Grimaldi's observations
were published posthumously in 1665. Isaac Newton studied these
effects and attributed them to inflexion of light rays. James
Gregory (1638–1675) observed the diffraction patterns caused by
a bird feather, which was effectively the first diffraction
grating to be discovered. Thomas Young performed a
celebrated experiment in 1803 demonstrating interference from
two closely spaced slits. Explaining his results by interference
of the waves emanating from the two different slits, he deduced
that light must propagate as waves. Augustin-Jean Fresnel did
more definitive studies and calculations of diffraction, made
public in 1815 and 1818, and thereby gave great support to the
wave theory of light that had been advanced by Christiaan
Huygens and reinvigorated by Young, against Newton's particle
theory.
Distance
Also known as farness, distance is a
numerical description of how far
apart objects are. In physics or
everyday usage, distance may refer
to a physical length, or an
estimation based on other criteria
(e.g. "two counties over").
In mathematics, a distance function
or metric is a generalization of the
concept of physical distance. A
metric is a function that behaves
according to a specific set of rules, and is a concrete way of
describing what it means for elements of some space to be "close
to" or "far away from" each other. In most cases, "distance from
A to B" is interchangeable with "distance between B and A".
In analytic geometry, the distance between two points of
the xy-plane can be found using the distance formula. The
distance between (x1, y1) and (x2, y2) is given by:
d = √((x2 − x1)² + (y2 − y1)²)
Similarly, given points (x1, y1, z1) and (x2, y2, z2) in three-
space, the distance between them is:
d = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²)
Illustration of distance
These formulas are easily derived by constructing a right
triangle with a leg on the hypotenuse of another (with the
other leg orthogonal to the plane that contains the first
triangle) and applying the Pythagorean theorem. In the
study of complicated geometries, we call this (most common)
type of distance Euclidean distance, as it is derived from
the Pythagorean theorem, which does not hold in non-
Euclidean geometries. This distance formula can also be
expanded into the arc-length formula.
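The two formulas above can be sketched in a few lines of Python (function names are illustrative):

```python
import math

def distance_2d(p, q):
    """Distance between points (x1, y1) and (x2, y2) of the xy-plane."""
    return math.sqrt((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

def distance_3d(p, q):
    """Distance between two points in three-space."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(p, q)))

print(distance_2d((0, 0), (3, 4)))        # 5.0
print(distance_3d((0, 0, 0), (1, 2, 2)))  # 3.0
```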
Distance in Euclidean space
In the Euclidean space Rⁿ, the distance between two points
is usually given by the Euclidean distance (2-norm
distance). Other distances, based on other norms, are
sometimes used instead.
For a point (x1, x2, ..., xn) and a point (y1, y2, ..., yn),
the Minkowski distance of order p (p-norm distance) is
defined as:
d(x, y) = (|x1 − y1|^p + |x2 − y2|^p + ... + |xn − yn|^p)^(1/p)
1-norm distance:
d(x, y) = |x1 − y1| + |x2 − y2| + ... + |xn − yn|
2-norm distance:
d(x, y) = √((x1 − y1)² + (x2 − y2)² + ... + (xn − yn)²)
infinity norm distance:
d(x, y) = max(|x1 − y1|, |x2 − y2|, ..., |xn − yn|)
p need not be an integer, but it cannot be less than 1,
because otherwise the triangle inequality does not hold.
The 2-norm distance is the Euclidean distance, a
generalization of the Pythagorean theorem to more than
two coordinates. It is what would be obtained if the
distance between two points were measured with a ruler: the
"intuitive" idea of distance.
The 1-norm distance is more colourfully called the taxicab
norm or Manhattan distance, because it is the distance a
car would drive in a city laid out in square blocks (if
there are no one-way streets).
The infinity norm distance is also called Chebyshev
distance. In 2D, it is the minimum number of
moves kings require to travel between two squares on
a chessboard.
The p-norm is rarely used for values of p other than 1, 2,
and infinity, but see superellipse.
In physical space the Euclidean distance is in a way the
most natural one, because in this case the length of
a rigid body does not change with rotation.
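The norms above can be computed with one short Python function (the name is illustrative; p = ∞ is handled as the Chebyshev case):

```python
def minkowski(x, y, p):
    """Minkowski distance of order p between points x and y."""
    if p == float("inf"):  # infinity norm: Chebyshev distance
        return max(abs(a - b) for a, b in zip(x, y))
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

a, b = (0, 0), (3, 4)
print(minkowski(a, b, 1))             # 7.0 (taxicab / Manhattan)
print(minkowski(a, b, 2))             # 5.0 (Euclidean)
print(minkowski(a, b, float("inf")))  # 4   (Chebyshev: king moves in 2D)
```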
Variational formulation of distance
The Euclidean distance between two points in space (x1
and x2) may be written in a variational form,
where the distance is the minimum value of an integral:
D = ∫ √((dr/dt)²) dt
Here r(t) is the trajectory (path) between the two
points. The value of the integral (D) represents the
length of this trajectory. The distance is the minimal
value of this integral and is obtained when r = r*,
where r* is the optimal trajectory. In the familiar
Euclidean case (the above integral) this optimal
trajectory is simply a straight line.
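The claim that the straight line minimizes the arc-length integral can be checked numerically. The sketch below discretizes the integral as a sum of segment lengths and compares the straight path with an arbitrarily chosen bowed path between the same endpoints (the detour shape is my own choice):

```python
import math

def path_length(points):
    """Discretize the arc-length integral as a sum of segment lengths."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

n = 1000
ts = [i / n for i in range(n + 1)]
straight = [(t, 0.0) for t in ts]             # straight line from (0,0) to (1,0)
bowed = [(t, 0.2 * t * (1 - t)) for t in ts]  # bowed detour, same endpoints

print(path_length(straight))  # ≈ 1.0, the Euclidean distance
print(path_length(bowed))     # slightly larger than 1.0
```

Any path other than the straight line comes out longer, which is exactly the variational statement above.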
Direct current
Direct current (DC) is the unidirectional flow of electric
charge. Direct current is produced by sources such
as batteries, thermocouples, solar cells, and commutator-type
electric machines of the dynamo type. Direct current may flow in
a conductor such as a wire, but can also flow
through semiconductors, insulators, or
even through a vacuum as in electron
or ion beams. The electric current
flows in a constant direction,
distinguishing it from alternating
current (AC). A term formerly
used for direct current was galvanic
current.
Direct current may be obtained from an alternating current
supply by use of a current-switching arrangement called
a rectifier, which contains electronic elements (usually) or
electromechanical elements (historically) that allow current to
flow only in one direction. Direct current may be made into
alternating current with an inverter or a motor-generator set.
The first commercial electric power transmission (developed
by Thomas Edison in the late nineteenth century) used direct
current. Because of the significant advantages of alternating
current over direct current in transforming and transmission,
electric power distribution is nearly all alternating current
today. In the mid-1950s, HVDC transmission was developed, and is
now an option instead of long-distance high voltage alternating
current systems. For long-distance undersea cables (e.g.
between countries, such as NorNed), this is the only technically
feasible option. For applications requiring direct current, such
as third rail power systems, alternating current is distributed
to a substation, which utilizes a rectifier to convert the power
to direct current. See War of Currents.
Direct current is used to charge batteries, and in nearly all
electronic systems, as the power supply. Very large quantities
of direct-current power are used in production of aluminum and
other electrochemical processes. Direct current is used for
some railway propulsion, especially in urban areas. High-voltage
direct current is used to transmit large amounts of power from
remote generation sites or to interconnect alternating current
power grids.
E
Earthquake
An earthquake (also known as a quake, tremor or temblor) is the
result of a sudden release of energy in the Earth's crust that
creates seismic waves. The seismicity, seismism or seismic
activity of an area refers to the frequency, type and size of
earthquakes experienced over a period of time.
Earthquakes are measured using
observations from seismometers.
The moment magnitude is the most common
scale on which earthquakes larger than
approximately 5 are reported for the
entire globe. The more numerous
earthquakes smaller than magnitude 5
reported by national seismological
observatories are measured mostly on
the local magnitude scale, also
referred to as the Richter scale. These
two scales are numerically similar over
their range of validity. Magnitude 3 or
lower earthquakes are mostly almost
imperceptible or weak and magnitude 7
and over potentially cause serious
damage over larger areas, depending on
their depth. The largest earthquakes in
historic times have been of magnitude
slightly over 9, although there is no
limit to the possible magnitude. The
most recent large earthquake of magnitude 9.0 or larger was a
9.0-magnitude earthquake in Japan in 2011 (as of October 2012),
and it was the largest Japanese earthquake since records began.
Fault types
Intensity of shaking is measured on the modified Mercalli scale.
The shallower an earthquake, the more damage to structures it
causes, all else being equal.
At the Earth's surface, earthquakes manifest themselves by
shaking and sometimes displacement of the ground. When
the epicenter of a large earthquake is located offshore, the
seabed may be displaced sufficiently to cause a tsunami.
Earthquakes can also trigger landslides, and occasionally
volcanic activity.
In its most general sense, the word earthquake is used to
describe any seismic event — whether natural or caused by humans
— that generates seismic waves. Earthquakes are caused mostly by
rupture of geological faults, but also by other events such as
volcanic activity, landslides, mine blasts, and nuclear tests.
An earthquake's point of initial rupture is called
its focus or hypocenter. The epicenter is the point at ground
level directly above the hypocenter.
Electromagnetic radiation
Electromagnetic radiation (EM radiation or EMR) is a form
of radiant energy, propagating through space via photon wave
particles. In a vacuum, it propagates at a characteristic speed,
the speed of light, normally in straight lines. EMR is emitted
and absorbed by charged particles. As an electromagnetic wave,
it has both electric and magnetic field components,
which oscillate in a fixed relationship to one another,
perpendicular to each other and perpendicular to the direction
of energy and wave propagation.
EMR is characterized by the frequency or wavelength of its wave.
The electromagnetic spectrum, in order of increasing frequency
and decreasing wavelength, consists of radio
waves, microwaves, infrared radiation, visible
light, ultraviolet radiation, X-rays and gamma rays. The eyes of
various organisms sense a somewhat variable but relatively small
range of frequencies of EMR called the visible
spectrum or light. Higher frequencies correspond to
proportionately more energy carried by each photon; for
instance, a single gamma ray photon carries far more energy than
a single photon of visible light.
Electromagnetic radiation is associated with EM fields that are
free to propagate themselves without the continuing influence of
the moving charges that produced them, because they have
achieved sufficient distance from those charges. Thus, EMR is
sometimes referred to as the far field. In this language,
the near field refers to EM fields near the charges and current
that directly produced them, as for example with simple magnets
and static electricity phenomena. In EMR, the magnetic and
electric fields are each induced by changes in the other type of
field, thus propagating itself as a wave. This close
relationship assures that both types of fields in EMR stand in
phase and in a fixed ratio of intensity to each other, with
maxima and nodes in each found at the same places in space.
EMR carries energy—sometimes called radiant energy—through space
continuously away from the source (this is not true of the near-
field part of the EM field). EMR also carries
both momentum and angular momentum. These properties may all be
imparted to matter with which it interacts. EMR is produced from
other types of energy when created, and it is converted to other
types of energy when it is destroyed. The photon is
the quantum of the electromagnetic interaction, and is the basic
"unit" or constituent of all forms of EMR. The quantum nature of
light becomes more apparent at high frequencies (thus high
photon energy). Such photons behave more like particles than
lower-frequency photons do.
In classical physics, EMR is considered to be produced
when charged particles are accelerated by forces acting on
them. Electrons are responsible for emission of most EMR because
they have low mass, and therefore are easily accelerated by a
variety of mechanisms. Rapidly moving electrons are most sharply
accelerated when they encounter a region of force, so they are
responsible for producing much of the highest frequency
electromagnetic radiation observed in nature. Quantum processes
can also produce EMR, such as when atomic nuclei undergo gamma
decay, and processes such as neutral pion decay.
This diagram shows a plane linearly polarized EMR wave propagating from left to right.
The electric field is in a vertical plane and the magnetic field in a horizontal
plane. The two types of fields in EMR waves are always in phase with each other, with a
fixed ratio of electric to magnetic field intensity.
The effects of EMR upon biological systems (and also to many
other chemical systems, under standard conditions) depend both
upon the radiation's power and frequency. For lower frequencies
of EMR up to those of visible light (i.e., radio, microwave,
infrared), the damage done to cells and also to many ordinary
materials under such conditions is determined mainly by heating
effects, and thus by the radiation power. By contrast, for
higher frequency radiations at ultraviolet frequencies and above
(i.e., X-rays and gamma rays) the damage to chemical materials
and living cells by EMR is far larger than that done by simple
heating, due to the ability of single photons in such high
frequency EMR to damage individual molecules chemically.
Electron transport chain
The electron transport chain consists of a spatially separated
series of redox reactions in which electrons are transferred
from a donor molecule to an acceptor molecule. The underlying
force driving these reactions is the Gibbs free energy of the
reactants and products. The Gibbs free energy is the energy
available ("free") to do work. Any reaction that decreases the
overall Gibbs free energy of a system is thermodynamically
spontaneous.
The function of the electron transport chain is to produce a
transmembrane proton electrochemical gradient as a result of the
redox reactions.[1]
If protons flow back through the membrane,
they enable mechanical work, such as rotating
bacterial flagella. ATP synthase, an enzyme
highly conserved among all domains of life, converts this
mechanical work into chemical energy by producing ATP, which
powers most cellular reactions. A small amount of ATP is
available from substrate-level phosphorylation,
for example, in glycolysis. In most organisms the majority of ATP is generated in
electron transport chains, while only some obtain ATP by fermentation.
The electron transport chain in the mitochondrion is the site
of oxidative phosphorylation in eukaryotes. The NADH and
succinate generated in the citric acid cycle are oxidized,
providing energy to power ATP synthase.
Equation
In mathematics, an equation is a formula of the form A = B,
where A and B are expressions that may contain one or
several variables called unknowns, and "=" denotes
the equality binary relation. Although written in the form
of a proposition, an equation is not a statement that is either
true or false, but a problem consisting of finding the values,
called solutions, that, when substituted for the unknowns, yield
equal values of the expressions A and B. For example, 2 is the
unique solution of the equation x + 2 = 4, in which
the unknown is x.[1]
Historically,
Historically,
equations arose from the
mathematical discipline
of algebra, but later become
ubiquitous. "Equations" should
not be confused
with "identities", which are
presented with the same notation but have a different meaning:
for example 2 + 2 = 4 and x + y = y + x are identities (which
implies they are necessarily true) in arithmetic, and do not
constitute a values-finding problem, even when variables are
present as in the latter example.
Illustration of a simple
equation; x, y, z are real
numbers, analogous to weights.
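The distinction between equations and identities can be made concrete with a small Python sketch (the sample ranges are arbitrary): an identity holds for every sampled value, while an equation singles out its solutions.

```python
# An identity holds for every value of its variables:
assert all(x + y == y + x for x in range(-5, 6) for y in range(-5, 6))

# An equation is a problem: find the values that make both sides equal.
solutions = [x for x in range(-5, 6) if x + 2 == 4]
print(solutions)  # [2]
```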
The term "equation" may also refer to a relation between some
variables that is presented as the equality of some expressions
written in terms of those variables' values. For example
the equation of the unit circle is x² + y² = 1, which means that
a point belongs to the circle if and only if its coordinates are
related by this equation. Most physical laws are expressed by
equations. One of the most famous ones is Einstein's
equation E = mc².
The = symbol was invented by Robert Recorde (1510–1558), who
considered that nothing could be more equal than parallel
straight lines with the same length.
Extinction
A species is extinct when the last existing member dies.
Extinction therefore becomes a certainty when there are no
surviving individuals that can reproduce and create a new
generation. A species may become functionally extinct when only
a handful of individuals survive, which cannot reproduce due to
poor health, age, sparse distribution over a large range, a lack
of individuals of both sexes (in sexually reproducing species),
or other reasons.
Pinpointing the extinction (or pseudoextinction) of a species
requires a clear definition of that species. If it is to be
declared extinct, the species in question must be uniquely
distinguishable from any ancestor or daughter species, and from
any other closely related species. Extinction of a species (or
replacement by a daughter species) plays a key role in
the punctuated equilibrium hypothesis of Stephen Jay
Gould and Niles Eldredge.
In ecology, extinction is often used informally to refer
to local extinction, in which a species ceases to exist in the
chosen area of study, but may still exist elsewhere. This
phenomenon is also known as extirpation. Local extinctions may
be followed by a replacement of
the species taken from other
locations; wolf reintroduction is
an example of this. Species which
are not extinct are
termed extant. Those that are
extant but threatened by
extinction are referred to
as threatened or endangered species.
Currently an important aspect of extinction is human attempts to
preserve critically endangered species. These are reflected by
the creation of the conservation status "Extinct in the Wild"
(EW). Species listed under this status by the International
Union for Conservation of Nature (IUCN) are not known to have
any living specimens in the wild, and are
maintained only in zoos or other artificial environments. Some
of these species are functionally extinct, as they are no longer
part of their natural habitat and it is unlikely the species
will ever be restored to the wild. When possible,
modern zoological institutions try to maintain a viable
population for species preservation and possible
future reintroduction to the wild, through use of carefully
planned breeding programs.
Extinct Species
The extinction of one species' wild population can have knock-on
effects, causing further extinctions. These are also called
"chains of extinction". This is especially common with
extinction of keystone species.
F
Facula
A facula (plural: faculae), Latin for "little torch", is
literally a "bright spot." The term
has several common technical uses.
It is used in planetary
nomenclature for naming certain
surface features of planets and
moons, and is also a type of
surface phenomenon on the Sun. In
addition, a bright region in the
projected field of a light source
is sometimes referred to as a
facula, and photographers often use
the term to describe bright,
typically circular features in
photographs that correspond to
light sources or bright reflections
in a defocused image.
Solar faculae are bright spots that form in the canyons
between solar granules, short-lived convection cells several
thousand kilometers across that constantly form and dissipate
over timescales of several minutes. Faculae are produced by
concentrations of magnetic field lines. Strong concentrations of
faculae appear in solar activity, with or without sunspots. The
faculae and the sunspots contribute noticeably to variations in
the "solar constant". The chromospheric counterpart of a facular
region is called a plage.
Fecundity
Fecundity, derived from the word fecund, generally refers to the
ability to reproduce. In demography, fecundity is
the potential reproductive capacity of an individual
or population. In biology, the definition is more equivalent
to fertility, or the actual reproductive rate of an organism
or population, measured by the number
of gametes (eggs), seed set,
or asexual propagules. This
difference is because
demography
considers human fecundity
which is often intentionally
limited, while biology
assumes that organisms do not
limit fertility. Fecundity is
under both genetic and
environmental control, and is
the major measure of fitness. Fecundation is another term
for fertilization. Superfecundity refers to an organism's
ability to store another organism's sperm (after copulation) and
fertilize its own eggs from that store after a period of time,
essentially making it appear as though fertilization occurred
without sperm (i.e. parthenogenesis).
Fecundity is important and well studied in the field
of population ecology. Fecundity can increase or decrease in
a population according to current conditions and
certain regulating factors. For instance, in times of hardship
for a population, such as a lack of food, juvenile and
eventually adult fecundity has been shown to decrease.
Fecundity has also been shown to increase in ungulates with
relation to warmer weather.
In sexual evolutionary biology, especially in sexual selection,
fecundity is contrasted to reproductivity: it is the ability
of an organism to breed.
In obstetrics and gynecology, fecundability is the probability
of being pregnant in a single menstrual cycle, and fecundity is
the probability of achieving a live birth within a single cycle.
Fahrenheit
On the Fahrenheit scale, the freezing point of water is
32 degrees Fahrenheit (°F) and the boiling point 212 °F
(at standard atmospheric pressure). This puts the boiling and
freezing points of water exactly
180 degrees apart.[9]
Therefore, a
degree on the Fahrenheit scale
is 1⁄180 of the interval between
boiling point. On the Celsius
scale, the freezing and boiling
points of water are 100 degrees
apart.
A temperature interval of 1 °F is
equal to an interval
of 5⁄9 degrees Celsius. The
Fahrenheit and Celsius scales intersect at −40° (−40 °F and
−40 °C represent the same temperature).
Absolute zero is −273.15 °C or −459.67 °F.
The Rankine temperature scale uses degree intervals of the same
size as those of the Fahrenheit scale, except that absolute zero
is 0 R – the same way that the Kelvin temperature scale matches
the Celsius scale, except that absolute zero is 0 K.[9]
The Fahrenheit scale uses the symbol ° to denote a point on the
temperature scale (as does Celsius) and the letter F to indicate
the use of the Fahrenheit scale (e.g. "Gallium melts at
85.5763 °F"),[10]
as well as to denote a difference between
temperatures or an uncertainty in temperature (e.g. "The output
of the heat exchanger experiences an increase of 72 °F" and "Our
standard uncertainty is ±5 °F").
A rule of thumb for conversion between degrees Celsius and
degrees Fahrenheit is as follows:
Fahrenheit to Celsius: Subtract 32 and halve the resulting
number.
Celsius to Fahrenheit: Double the number and add 32.
This formula gives an answer correct to within 1 °C for 50 °F
(10 °C). At 0 °F (−17.8 °C) and 100 °F (37.8 °C), it gives
answers of −16 °C and 34 °C, respectively. Outside this range,
the error is bigger. For an accurate conversion, use the fact
that a difference of 1 °C equals a difference of 1.8 °F:
32 °F = 0 °C, 50 °F = 10 °C, 68 °F = 20 °C, 86 °F = 30 °C,
104 °F = 40 °C, and so on.
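The exact conversion and the rule of thumb above can be compared in a short Python sketch (function names are illustrative):

```python
def f_to_c(f):
    """Exact conversion: a difference of 1.8 degF equals 1 degC."""
    return (f - 32) * 5 / 9

def f_to_c_rule(f):
    """Rule of thumb from the text: subtract 32 and halve."""
    return (f - 32) / 2

print(f_to_c(50))       # 10.0
print(f_to_c_rule(50))  # 9.0, within 1 degC of the exact value
print(f_to_c(-40))      # -40.0: the two scales intersect at -40 degrees
```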
Free-electron laser
A free-electron laser (FEL), is a type of laser that shares the
same optical properties as conventional lasers such as emitting
a beam consisting of coherent electromagnetic radiation that can
reach high power, but that uses some very different operating
principles to form the beam. Unlike gas-, liquid-, or solid-
state lasers such as diode lasers, in which electrons are
excited in bound atomic or molecular states, free-electron
lasers use a relativistic electron beam that moves freely
through a magnetic structure, hence the term free electron as
the lasing medium. The free-electron laser has the
widest frequency range of any laser type, and can be widely
tunable, currently ranging in wavelength from microwaves,
through terahertz radiation and infrared, to the visible
spectrum, ultraviolet, and X-ray.
Free-electron lasers were invented by John Madey in 1976
at Stanford University. The work emanates from research done
by Hans Motz and his coworkers, who built
an undulator at Stanford in 1953, using the wiggler magnetic
configuration which is at the heart of a free electron laser.
Madey used a 24 MeV electron beam and 5 m long wiggler to
amplify a signal. Soon afterward, other laboratories with
accelerators started developing such lasers. To create an FEL, a
beam of electrons is accelerated to almost the speed of light.
The beam passes through an undulator, a side to side magnetic
field produced by a periodic arrangement of magnets with
alternating poles across the beam path. The general direction of
the beam is called the longitudinal direction, and the direction
across the beam path is called transverse. This array of magnets
is commonly known as an undulator in the light source community,
or a wiggler in the FEL community, because it forces
the electrons in the beam to wiggle transversely along a
sinusoidal path about the axis of the undulator.
The transverse acceleration of the electrons across this path
results in the release of photons (synchrotron radiation), which
are monochromatic but still incoherent, because the
electromagnetic waves from randomly distributed electrons
interfere constructively and destructively in time, and the
resulting radiation power scales linearly with the number of
electrons. If an external laser is provided or if the
synchrotron radiation becomes sufficiently strong, the
transverse electric field of the radiation beam interacts with
the transverse electron current created by the sinusoidal
wiggling motion, causing some electrons to gain and others to
lose energy to the optical field.
This energy modulation evolves into electron density (current)
modulations with a period of one optical wavelength. The
electrons are thus bunched into little clumps,
called microbunches, separated by one optical wavelength along
the axis. Whereas conventional undulators would cause the
electrons to radiate independently, the radiation emitted by the
microbunched electrons is in phase, and the fields add
together coherently.
The FEL radiation intensity grows, causing additional
microbunching of the electrons, which continue to radiate in
phase with each other. This process continues until the
electrons are completely microbunched and the radiation reaches
a saturated power several orders of magnitude higher than that
of the undulator radiation.
The wavelength of the radiation emitted can be readily tuned by
adjusting the energy of the electron beam or the magnetic field
strength of the undulators.
FELs are relativistic machines. The wavelength of the emitted
radiation, λ_r, is given by

λ_r = (λ_u / 2γ²) (1 + K²/2),

or, when the wiggler strength parameter K, discussed below, is
small,

λ_r ∝ λ_u / 2γ²,
(Figure: The free-electron laser FELIX at the FOM Institute for
Plasma Physics.)
where λ_u is the undulator wavelength (the spatial period of the
magnetic field), γ is the relativistic Lorentz factor, and the
proportionality constant depends on the undulator geometry and
is of the order of 1.
This formula can be understood as a combination of two
relativistic effects. Imagine you are sitting on an electron
passing through the undulator. Due to Lorentz contraction the
undulator is shortened by a factor of γ, and the electron
experiences a much shorter undulator wavelength, λ_u/γ. However,
the radiation emitted at this wavelength is observed in the
laboratory frame of reference, and the relativistic Doppler
effect brings the second factor of γ to the above formula. A
rigorous derivation from Maxwell's equations gives the divisor
of 2 and the proportionality constant. In an x-ray FEL the
typical undulator wavelength of 1 cm is transformed to x-ray
wavelengths on the order of 1 nm by γ ≈ 2000, i.e. the electrons
have to travel at a speed of 0.9999998c.
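The numbers in this passage can be checked with a few lines of plain Python. The 0.511 MeV electron rest energy is the standard value; the other parameters are the illustrative ones from the text (1 cm undulator period, γ ≈ 2000, weak wiggler K ≈ 0):

```python
import math

ELECTRON_REST_ENERGY_MEV = 0.511  # m_e c^2 for an electron

def lorentz_gamma(beam_energy_mev):
    """gamma = E / (m_e c^2) for an ultra-relativistic electron beam."""
    return beam_energy_mev / ELECTRON_REST_ENERGY_MEV

def fel_wavelength(undulator_wavelength_m, gamma, K=0.0):
    """Resonant FEL wavelength: lambda_u / (2 gamma^2) * (1 + K^2 / 2)."""
    return undulator_wavelength_m / (2.0 * gamma ** 2) * (1.0 + K ** 2 / 2.0)

def beta(gamma):
    """Beam speed as a fraction of c for a given Lorentz factor."""
    return math.sqrt(1.0 - 1.0 / gamma ** 2)

gamma = 2000  # reached at a beam energy of roughly 1 GeV
print(f"beam energy ~ {gamma * ELECTRON_REST_ENERGY_MEV / 1000:.2f} GeV")
print(f"wavelength  = {fel_wavelength(1e-2, gamma) * 1e9:.2f} nm")  # 1.25 nm
print(f"v/c         = {beta(gamma):.9f}")  # ~0.9999998c, as quoted
```

Doubling the beam energy doubles γ and quadruples γ², shortening the output wavelength by a factor of four; this is the tuning knob mentioned above.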
Friction
The classic rules of sliding friction were discovered
by Leonardo da Vinci (1452–1519), but remained unpublished in
his notebooks. They were rediscovered by Guillaume
Amontons (1699). Amontons presented the
nature of friction in terms of surface
irregularities and the force required to
raise the weight pressing the surfaces
together. This view was further elaborated
by Belidor (representation of rough
surfaces with spherical asperities,
1737) and Leonhard Euler (1750), who
derived the angle of repose of a weight on
an inclined plane and first distinguished
between static and kinetic friction. A
different explanation was provided by
Desaguliers (1725), who demonstrated the
strong cohesive forces between lead
spheres from which a small cap had been
cut off and which were then brought into
contact with each other.
Block on a ramp (top) and corresponding free body
diagram of just the block (bottom). For equilibrium,
the line of action of the three force arrows must
intersect at a common point.
The understanding of friction was further developed by Charles-
Augustin de Coulomb (1785). Coulomb investigated the influence
of four main factors on friction: the nature of the materials in
contact and their surface coatings; the extent of the surface
area; the normal pressure (or load); and the length of time that
the surfaces remained in contact (time of repose). Coulomb
further considered the
influence of sliding velocity, temperature and humidity, in
order to decide between the different explanations on the nature
of friction that had been proposed. The
distinction between static and dynamic friction is made in
Coulomb's friction law (see below), although this distinction
was already drawn by Johann Andreas von Segner in 1758. The
effect of the time of repose was explained by Musschenbroek
(1762) by considering the surfaces of fibrous materials, with
fibers meshing together, a process that takes a finite time
during which the friction increases.
John Leslie (1766–1832) noted a weakness in the views of
Amontons and Coulomb: if friction arises from a weight being
drawn up the inclined plane of successive asperities, why is it
not then balanced by descending the opposite slope? Leslie
was equally skeptical about the role of adhesion proposed by
Desaguliers, which should on the whole have the same tendency to
accelerate as to retard the motion.
In his view, friction should be seen as a time-dependent
process of flattening and pressing down asperities, which
creates new obstacles in what were cavities before.
Arthur Morin (1833) developed the concept of sliding versus
rolling friction. Osborne Reynolds (1866) derived the equation
of viscous flow. This completed the classic empirical model of
friction (static, kinetic, and fluid) commonly used today in
engineering.
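The classic empirical model referred to here can be summarized in a few lines of code. This is a textbook sketch of Coulomb friction and Euler's angle of repose; the coefficients of friction used below are illustrative values, not material data:

```python
import math

def friction_force(normal_force, mu_s, mu_k, applied_force):
    """Classic Coulomb model: static friction cancels the applied
    force up to mu_s * N; beyond that, kinetic friction mu_k * N
    opposes the motion."""
    max_static = mu_s * normal_force
    if abs(applied_force) <= max_static:
        return -applied_force  # block stays put
    return -math.copysign(mu_k * normal_force, applied_force)

def angle_of_repose(mu_s):
    """Euler's result: a block starts to slide when tan(theta) = mu_s."""
    return math.degrees(math.atan(mu_s))

# Illustrative block with normal force N = 10 (arbitrary units).
print(friction_force(10.0, mu_s=0.5, mu_k=0.4, applied_force=3.0))  # -3.0 (static)
print(friction_force(10.0, mu_s=0.5, mu_k=0.4, applied_force=8.0))  # -4.0 (kinetic)
print(f"{angle_of_repose(0.5):.1f} degrees")  # 26.6 degrees
```

Note the drop from the static maximum (5.0) to the kinetic value (4.0) once sliding begins; this static/kinetic distinction is exactly the one Coulomb's law formalizes.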
The focus of research during the last century has been to
understand the physical mechanisms behind friction. Frank Philip
Bowden and David Tabor (1950) showed that, at a microscopic
level, the actual area of contact between surfaces is a very
small fraction of the apparent area. This actual area of
contact, caused by "asperities" (roughness), increases with
pressure, explaining the proportionality between normal force
and frictional force. The development of the atomic force
microscope (1986) has more recently enabled scientists to study
friction at the atomic scale.
INDEX
A
Acetylene, p.1
Alcohols, pp.1-2
Amino acid, pp.2-3
Aromatic hydrocarbon, p.4
Atoms, p.5
B
Bacteria, pp.6-7
Biochemistry, pp.7-8
Biomass, pp.9-10
Biosynthesis, p.10
Buoyancy, p.11
C
Cell, pp.12-13
Cellulose, pp.13-14
Cleopatra, pp.14-15
Climate, pp.15-16
Cro-Magnon, pp.16-17
D
Density, pp.18-19
Destructive interference, pp.19-21
Diffraction, pp.22-23
Direct current, p.26
Distance, pp.23-25
E
Earthquake, pp.27-28
Electromagnetic radiation, pp.28-30
Electron transport chain, pp.30-31
Equation, pp.31-32
Extinction, pp.32-33
F
Facula, p.34
Fecundity, pp.34-35
Fahrenheit, pp.35-36
Free electron laser, pp.37-39
Friction, pp.39-40
Weitere ähnliche Inhalte

Was ist angesagt?

Ch2 Ppt Lect 1
Ch2 Ppt Lect 1Ch2 Ppt Lect 1
Ch2 Ppt Lect 1
bholmes
 
P p carbohydrates wnotes #4
P p carbohydrates wnotes #4P p carbohydrates wnotes #4
P p carbohydrates wnotes #4
ksprattler
 
8.3 photosynthesis
8.3 photosynthesis8.3 photosynthesis
8.3 photosynthesis
Bob Smullen
 

Was ist angesagt? (18)

Ch2 Ppt Lect 1
Ch2 Ppt Lect 1Ch2 Ppt Lect 1
Ch2 Ppt Lect 1
 
Ch2 Ppt Lect 1
Ch2 Ppt Lect 1Ch2 Ppt Lect 1
Ch2 Ppt Lect 1
 
Photosynthesis
PhotosynthesisPhotosynthesis
Photosynthesis
 
holozoic nutrition
holozoic nutrition holozoic nutrition
holozoic nutrition
 
Methanogens by kk sahu
Methanogens by kk sahu Methanogens by kk sahu
Methanogens by kk sahu
 
P p carbohydrates wnotes #4
P p carbohydrates wnotes #4P p carbohydrates wnotes #4
P p carbohydrates wnotes #4
 
Free radicles
Free radiclesFree radicles
Free radicles
 
Photosynthesis
PhotosynthesisPhotosynthesis
Photosynthesis
 
Photosynthesis in plants
Photosynthesis in plantsPhotosynthesis in plants
Photosynthesis in plants
 
Photosynthesis
PhotosynthesisPhotosynthesis
Photosynthesis
 
Free radicles
Free radiclesFree radicles
Free radicles
 
Photosynthesis (Light and Dark reaction of photosynthesis)
Photosynthesis (Light and Dark reaction of photosynthesis)Photosynthesis (Light and Dark reaction of photosynthesis)
Photosynthesis (Light and Dark reaction of photosynthesis)
 
8.3 photosynthesis
8.3 photosynthesis8.3 photosynthesis
8.3 photosynthesis
 
Photosynthesis Lecture for Lesson 1
Photosynthesis Lecture for Lesson 1Photosynthesis Lecture for Lesson 1
Photosynthesis Lecture for Lesson 1
 
Bioenergetics
BioenergeticsBioenergetics
Bioenergetics
 
Classification
ClassificationClassification
Classification
 
nitrate and sulfate reduction ; methanogenesis and acetogenesis
nitrate and sulfate reduction ; methanogenesis and acetogenesisnitrate and sulfate reduction ; methanogenesis and acetogenesis
nitrate and sulfate reduction ; methanogenesis and acetogenesis
 
Bioenergetics
BioenergeticsBioenergetics
Bioenergetics
 

Andere mochten auch

Ciclo del agua
Ciclo del aguaCiclo del agua
Ciclo del agua
ccecig
 
ENCYCLOPEDIA- Volume 3..final
ENCYCLOPEDIA- Volume 3..finalENCYCLOPEDIA- Volume 3..final
ENCYCLOPEDIA- Volume 3..final
Leizel Despi
 
Treatment sheet 4
Treatment sheet 4Treatment sheet 4
Treatment sheet 4
nickdixon1
 
Dal prodotto di eccellenza all'offerta turistica enogastronomica
Dal prodotto di eccellenza all'offerta turistica enogastronomicaDal prodotto di eccellenza all'offerta turistica enogastronomica
Dal prodotto di eccellenza all'offerta turistica enogastronomica
Ida Paradiso
 
Horror movie call sheet
Horror movie call sheetHorror movie call sheet
Horror movie call sheet
nickdixon1
 
Mapa otwartych zasobów edukacyjnych
Mapa otwartych zasobów edukacyjnychMapa otwartych zasobów edukacyjnych
Mapa otwartych zasobów edukacyjnych
aniakosm
 
Affinity Diagrams (Diagramy pokrewieństwa)
Affinity Diagrams (Diagramy pokrewieństwa)Affinity Diagrams (Diagramy pokrewieństwa)
Affinity Diagrams (Diagramy pokrewieństwa)
Tomasz Skórski
 
Horror Genre Conventions
Horror Genre ConventionsHorror Genre Conventions
Horror Genre Conventions
JamesAllann
 
Recce of Location
Recce of Location Recce of Location
Recce of Location
nickdixon1
 
ENCYCLOPEDIA- Volume 2
ENCYCLOPEDIA- Volume 2 ENCYCLOPEDIA- Volume 2
ENCYCLOPEDIA- Volume 2
Leizel Despi
 
History of Horror
History of HorrorHistory of Horror
History of Horror
JamesAllann
 

Andere mochten auch (20)

asdfghujk
asdfghujkasdfghujk
asdfghujk
 
Ciclo del agua
Ciclo del aguaCiclo del agua
Ciclo del agua
 
ENCYCLOPEDIA- Volume 3..final
ENCYCLOPEDIA- Volume 3..finalENCYCLOPEDIA- Volume 3..final
ENCYCLOPEDIA- Volume 3..final
 
Treatment sheet 4
Treatment sheet 4Treatment sheet 4
Treatment sheet 4
 
ixtract - Tears of the sun
ixtract - Tears of the sunixtract - Tears of the sun
ixtract - Tears of the sun
 
Dal prodotto di eccellenza all'offerta turistica enogastronomica
Dal prodotto di eccellenza all'offerta turistica enogastronomicaDal prodotto di eccellenza all'offerta turistica enogastronomica
Dal prodotto di eccellenza all'offerta turistica enogastronomica
 
Newsletter - AMA
Newsletter - AMANewsletter - AMA
Newsletter - AMA
 
Horror movie call sheet
Horror movie call sheetHorror movie call sheet
Horror movie call sheet
 
Mapa otwartych zasobów edukacyjnych
Mapa otwartych zasobów edukacyjnychMapa otwartych zasobów edukacyjnych
Mapa otwartych zasobów edukacyjnych
 
La computadora
La computadoraLa computadora
La computadora
 
Affinity Diagrams (Diagramy pokrewieństwa)
Affinity Diagrams (Diagramy pokrewieństwa)Affinity Diagrams (Diagramy pokrewieństwa)
Affinity Diagrams (Diagramy pokrewieństwa)
 
Game design
Game designGame design
Game design
 
Horror Genre Conventions
Horror Genre ConventionsHorror Genre Conventions
Horror Genre Conventions
 
Recce of Location
Recce of Location Recce of Location
Recce of Location
 
Bab i
Bab  iBab  i
Bab i
 
Tips for proofreading
Tips for proofreadingTips for proofreading
Tips for proofreading
 
ENCYCLOPEDIA- Volume 2
ENCYCLOPEDIA- Volume 2 ENCYCLOPEDIA- Volume 2
ENCYCLOPEDIA- Volume 2
 
Melanie dean pecha kucha
Melanie dean pecha kuchaMelanie dean pecha kucha
Melanie dean pecha kucha
 
WUD Trójmiasto - Specjaliści User Experience w Polsce w 2015 roku
WUD Trójmiasto - Specjaliści User Experience w Polsce w 2015 rokuWUD Trójmiasto - Specjaliści User Experience w Polsce w 2015 roku
WUD Trójmiasto - Specjaliści User Experience w Polsce w 2015 roku
 
History of Horror
History of HorrorHistory of Horror
History of Horror
 

Ähnlich wie ENCYCLOPEDIA -Volume 1

05 elements _ros__org._comp
05 elements _ros__org._comp05 elements _ros__org._comp
05 elements _ros__org._comp
MUBOSScz
 

Ähnlich wie ENCYCLOPEDIA -Volume 1 (20)

History, Classification, Uses of organic chemistry
History, Classification, Uses of organic chemistryHistory, Classification, Uses of organic chemistry
History, Classification, Uses of organic chemistry
 
Intro to Organic chemistry
Intro to Organic chemistryIntro to Organic chemistry
Intro to Organic chemistry
 
biomolecules-6.pdf
biomolecules-6.pdfbiomolecules-6.pdf
biomolecules-6.pdf
 
Introduction of organic chemistry
Introduction of organic chemistryIntroduction of organic chemistry
Introduction of organic chemistry
 
important slide related to physiology as well different cell structure.pptx
important slide related to physiology as well different cell structure.pptximportant slide related to physiology as well different cell structure.pptx
important slide related to physiology as well different cell structure.pptx
 
Respiration presentation
Respiration presentationRespiration presentation
Respiration presentation
 
Ionization and Ph of Amino Acid
Ionization and Ph of Amino AcidIonization and Ph of Amino Acid
Ionization and Ph of Amino Acid
 
Biochemistry 2015
Biochemistry 2015Biochemistry 2015
Biochemistry 2015
 
Biochemistry
BiochemistryBiochemistry
Biochemistry
 
Assigment In Human Anato
Assigment In  Human  AnatoAssigment In  Human  Anato
Assigment In Human Anato
 
Assigment In Human Anato
Assigment In Human AnatoAssigment In Human Anato
Assigment In Human Anato
 
lecture 4 basic concepts of organic chemistry.pptx
lecture 4 basic concepts of organic chemistry.pptxlecture 4 basic concepts of organic chemistry.pptx
lecture 4 basic concepts of organic chemistry.pptx
 
Biomolecules
BiomoleculesBiomolecules
Biomolecules
 
Microbial nutrient requirements (part 2)
Microbial nutrient requirements  (part 2)Microbial nutrient requirements  (part 2)
Microbial nutrient requirements (part 2)
 
Chapt03 lecture
Chapt03 lectureChapt03 lecture
Chapt03 lecture
 
05 elements _ros__org._comp
05 elements _ros__org._comp05 elements _ros__org._comp
05 elements _ros__org._comp
 
Introduction to organic compounds
Introduction to organic compoundsIntroduction to organic compounds
Introduction to organic compounds
 
Hydrocarbons
HydrocarbonsHydrocarbons
Hydrocarbons
 
Bt 202 aug 19 2011new
Bt 202 aug 19 2011newBt 202 aug 19 2011new
Bt 202 aug 19 2011new
 
ecosystem - Copy.pptx
ecosystem - Copy.pptxecosystem - Copy.pptx
ecosystem - Copy.pptx
 

Mehr von Leizel Despi

Visuals- Educational Technology 9
Visuals- Educational Technology  9Visuals- Educational Technology  9
Visuals- Educational Technology 9
Leizel Despi
 
Article in Logic and Ethics
Article in Logic and EthicsArticle in Logic and Ethics
Article in Logic and Ethics
Leizel Despi
 
ENCYCLOPEDIA -Volume 3
ENCYCLOPEDIA -Volume 3ENCYCLOPEDIA -Volume 3
ENCYCLOPEDIA -Volume 3
Leizel Despi
 
Volume 2 with pages. .
Volume 2 with pages. .Volume 2 with pages. .
Volume 2 with pages. .
Leizel Despi
 
Article iii section 2
Article iii section 2Article iii section 2
Article iii section 2
Leizel Despi
 
I knew i loved you
I knew i loved youI knew i loved you
I knew i loved you
Leizel Despi
 
Catch my breath lyrics final
Catch my breath lyrics finalCatch my breath lyrics final
Catch my breath lyrics final
Leizel Despi
 
SCIENCE DEVELOPMENTS DURING 18TH AND 19TH CENTURY
SCIENCE DEVELOPMENTS DURING 18TH AND 19TH CENTURYSCIENCE DEVELOPMENTS DURING 18TH AND 19TH CENTURY
SCIENCE DEVELOPMENTS DURING 18TH AND 19TH CENTURY
Leizel Despi
 
DEFINITION AND IMPORTANCE OF PRINCIPLES OF TEACHING
DEFINITION AND IMPORTANCE OF  PRINCIPLES OF TEACHINGDEFINITION AND IMPORTANCE OF  PRINCIPLES OF TEACHING
DEFINITION AND IMPORTANCE OF PRINCIPLES OF TEACHING
Leizel Despi
 
LEV VYGOTSKY SOCIO-CULTURAL THEORY OF DEVELOPMENT
LEV VYGOTSKY SOCIO-CULTURAL THEORY OF DEVELOPMENTLEV VYGOTSKY SOCIO-CULTURAL THEORY OF DEVELOPMENT
LEV VYGOTSKY SOCIO-CULTURAL THEORY OF DEVELOPMENT
Leizel Despi
 
Major foundations of Curriculum Development
Major foundations of Curriculum DevelopmentMajor foundations of Curriculum Development
Major foundations of Curriculum Development
Leizel Despi
 
Literary Appreciation
Literary AppreciationLiterary Appreciation
Literary Appreciation
Leizel Despi
 
Science during 18th and 19th century
Science during 18th and 19th centuryScience during 18th and 19th century
Science during 18th and 19th century
Leizel Despi
 
Development of Science in 18th to 19th century
Development of Science in 18th to 19th centuryDevelopment of Science in 18th to 19th century
Development of Science in 18th to 19th century
Leizel Despi
 
Energy from fatty acid oxidation
Energy from fatty acid oxidationEnergy from fatty acid oxidation
Energy from fatty acid oxidation
Leizel Despi
 

Mehr von Leizel Despi (20)

Cupid and Psyche
Cupid and Psyche Cupid and Psyche
Cupid and Psyche
 
Visuals- Educational Technology 9
Visuals- Educational Technology  9Visuals- Educational Technology  9
Visuals- Educational Technology 9
 
Indian literature- RAMAYANA
Indian literature- RAMAYANAIndian literature- RAMAYANA
Indian literature- RAMAYANA
 
Article in Logic and Ethics
Article in Logic and EthicsArticle in Logic and Ethics
Article in Logic and Ethics
 
ENCYCLOPEDIA -Volume 3
ENCYCLOPEDIA -Volume 3ENCYCLOPEDIA -Volume 3
ENCYCLOPEDIA -Volume 3
 
Volume 2 with pages. .
Volume 2 with pages. .Volume 2 with pages. .
Volume 2 with pages. .
 
Giving Titles
Giving TitlesGiving Titles
Giving Titles
 
Article iii section 2
Article iii section 2Article iii section 2
Article iii section 2
 
I knew i loved you
I knew i loved youI knew i loved you
I knew i loved you
 
Catch my breath lyrics final
Catch my breath lyrics finalCatch my breath lyrics final
Catch my breath lyrics final
 
SCIENCE DEVELOPMENTS DURING 18TH AND 19TH CENTURY
SCIENCE DEVELOPMENTS DURING 18TH AND 19TH CENTURYSCIENCE DEVELOPMENTS DURING 18TH AND 19TH CENTURY
SCIENCE DEVELOPMENTS DURING 18TH AND 19TH CENTURY
 
DEFINITION AND IMPORTANCE OF PRINCIPLES OF TEACHING
DEFINITION AND IMPORTANCE OF  PRINCIPLES OF TEACHINGDEFINITION AND IMPORTANCE OF  PRINCIPLES OF TEACHING
DEFINITION AND IMPORTANCE OF PRINCIPLES OF TEACHING
 
LEV VYGOTSKY SOCIO-CULTURAL THEORY OF DEVELOPMENT
LEV VYGOTSKY SOCIO-CULTURAL THEORY OF DEVELOPMENTLEV VYGOTSKY SOCIO-CULTURAL THEORY OF DEVELOPMENT
LEV VYGOTSKY SOCIO-CULTURAL THEORY OF DEVELOPMENT
 
Major foundations of Curriculum Development
Major foundations of Curriculum DevelopmentMajor foundations of Curriculum Development
Major foundations of Curriculum Development
 
Literary Appreciation
Literary AppreciationLiterary Appreciation
Literary Appreciation
 
Science during 18th and 19th century
Science during 18th and 19th centuryScience during 18th and 19th century
Science during 18th and 19th century
 
Steam engine
Steam engineSteam engine
Steam engine
 
Development of Science in 18th to 19th century
Development of Science in 18th to 19th centuryDevelopment of Science in 18th to 19th century
Development of Science in 18th to 19th century
 
Energy from fatty acid oxidation
Energy from fatty acid oxidationEnergy from fatty acid oxidation
Energy from fatty acid oxidation
 
Middle age -STS
Middle age -STSMiddle age -STS
Middle age -STS
 

Kürzlich hochgeladen

Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Sérgio Sacani
 
Formation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disksFormation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disks
Sérgio Sacani
 
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune WaterworldsBiogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Sérgio Sacani
 
Pests of mustard_Identification_Management_Dr.UPR.pdf
Pests of mustard_Identification_Management_Dr.UPR.pdfPests of mustard_Identification_Management_Dr.UPR.pdf
Pests of mustard_Identification_Management_Dr.UPR.pdf
PirithiRaju
 
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 bAsymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Sérgio Sacani
 
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Lokesh Kothari
 
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdfPests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
PirithiRaju
 

Kürzlich hochgeladen (20)

Justdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts Service
Justdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts ServiceJustdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts Service
Justdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts Service
 
Feature-aligned N-BEATS with Sinkhorn divergence (ICLR '24)
Feature-aligned N-BEATS with Sinkhorn divergence (ICLR '24)Feature-aligned N-BEATS with Sinkhorn divergence (ICLR '24)
Feature-aligned N-BEATS with Sinkhorn divergence (ICLR '24)
 
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
 
Formation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disksFormation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disks
 
Proteomics: types, protein profiling steps etc.
Proteomics: types, protein profiling steps etc.Proteomics: types, protein profiling steps etc.
Proteomics: types, protein profiling steps etc.
 
Vip profile Call Girls In Lonavala 9748763073 For Genuine Sex Service At Just...
Vip profile Call Girls In Lonavala 9748763073 For Genuine Sex Service At Just...Vip profile Call Girls In Lonavala 9748763073 For Genuine Sex Service At Just...
Vip profile Call Girls In Lonavala 9748763073 For Genuine Sex Service At Just...
 
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 60009654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
 
GBSN - Microbiology (Unit 3)
GBSN - Microbiology (Unit 3)GBSN - Microbiology (Unit 3)
GBSN - Microbiology (Unit 3)
 
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune WaterworldsBiogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
 
GBSN - Microbiology (Unit 2)
GBSN - Microbiology (Unit 2)GBSN - Microbiology (Unit 2)
GBSN - Microbiology (Unit 2)
 
module for grade 9 for distance learning
module for grade 9 for distance learningmodule for grade 9 for distance learning
module for grade 9 for distance learning
 
Factory Acceptance Test( FAT).pptx .
Factory Acceptance Test( FAT).pptx       .Factory Acceptance Test( FAT).pptx       .
Factory Acceptance Test( FAT).pptx .
 
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bNightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
 
9999266834 Call Girls In Noida Sector 22 (Delhi) Call Girl Service
9999266834 Call Girls In Noida Sector 22 (Delhi) Call Girl Service9999266834 Call Girls In Noida Sector 22 (Delhi) Call Girl Service
9999266834 Call Girls In Noida Sector 22 (Delhi) Call Girl Service
 
Nanoparticles synthesis and characterization​ ​
Nanoparticles synthesis and characterization​  ​Nanoparticles synthesis and characterization​  ​
Nanoparticles synthesis and characterization​ ​
 
Pests of mustard_Identification_Management_Dr.UPR.pdf
Pests of mustard_Identification_Management_Dr.UPR.pdfPests of mustard_Identification_Management_Dr.UPR.pdf
Pests of mustard_Identification_Management_Dr.UPR.pdf
 
Site Acceptance Test .
Site Acceptance Test                    .Site Acceptance Test                    .
Site Acceptance Test .
 
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 bAsymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
 
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
 
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdfPests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
 

ENCYCLOPEDIA -Volume 1

  • 1. A Acetylene Simplest alkyne, CH. A colourless, flammable, explosive gas, it is used as a fuel in welding and cutting metals and as a raw material for many organic compounds and plastics. It is produced by reaction of water with calcium carbide, passage of a hydrocarbon through an electric arc, or partial combustion of methane. Decomposing it liberates heat; depending on degree of purity, it is also an explosive. An acetylene torch reaches about 6,000 °F (3,300 °C), hotter than combustion of any other known gas mixture. Alcohols Alcohol, any of a class of organic compounds characterized by one or more hydroxyl (−OH) groups attached to a carbon atom of an alkyl group (hydrocarbon chain). Alcohols may be considered as organic derivatives of water (H2O) in which one of
  • 2. 1 the hydrogen atoms has been replaced by an alkyl group, typically represented by R in organic structures. For example, in ethanol (or ethyl alcohol) the alkyl group is the ethyl group, −CH2CH3. Alcohols are among the most common organic compounds. They are used as sweeteners and in making perfumes, are valuable intermediates in the synthesis of other compounds, and are among the most abundantly produced organic chemicals in industry. Perhaps the two best-known alcohols are ethanol and methanol (or methyl alcohol). Ethanol is used in toiletries, pharmaceuticals, and fuels, and it is used to sterilize hospital instruments. It is, moreover, the alcohol in alcoholic beverages. The anesthetic ether is also made from ethanol. Methanol is used as a solvent, as a raw material for the manufacture of formaldehyde and special resins, in special fuels, in antifreeze, and for cleaning metals. Amino Acid Amino acids are biologically importa nt organic compounds composed of amine (-NH2) and carboxylic acid (-COOH) functional groups, along with a side- chain specific to each amino acid. The key elements of an amino acid are carbon,hydrogen, oxygen, and nitrogen, though other elements are found in the side-chains of certain amino acids. About 500 amino acids are known and can be classified in many ways. They can be classified according to 2
  • 3. the core structural functional groups' locations as alpha- (α-), beta- (β-), gamma- (γ-) or delta- (δ-) amino acids; other categories relate to polarity, pH level, and side-chain group type (aliphatic, acyclic, aromatic, containing hydroxyl or sulfur, etc.). In the form of proteins, amino acids comprise the second-largest component (water is the largest) of human muscles, cells and other tissues. Outside proteins, amino acids perform critical roles in processes such as neurotransmitter transport and biosynthesis. Amino acids having both the amine and the carboxylic acid groups attached to the first (alpha-) carbon atom have particular importance in biochemistry. They are known as 2-, alpha-, or α- amino acids (genericformula H2NCHRCOOH in most caseswhere R is an organic substituent known as a "side-chain");often the term "amino acid" is used to refer specifically to these. They include the 22proteinogenic ("protein-building") amino acids, which combine into peptide chains ("polypeptides") to form the building-blocks of a vast array of proteins. These are all L- stereoisomers ("left-handed" isomers), although a few D-amino acids ("right-handed") occur in bacterial envelopes and some antibiotics. Twenty of the proteinogenic amino acids are encoded directly by triplet codons in the genetic code and are known as "standard" amino acids. The other two ("non-standard" or "non-canonical") are pyrrolysine (found inmethanogenic organisms and other eukaryotes) andselenocysteine (present in many noneukaryotes as well as most eukaryotes). For example, 25 human proteins include selenocysteine (Sec) in their primary structure, and the structurally characterized enzymes (selenoenzymes) employ Sec as the catalytic moiety in their active sites. Pyrrolysine and selenocysteine are encoded via variant codons; for example, selenocysteine is encoded by stop codon and SECIS element. 
Codon–tRNAcombinations not found in nature can also be used to"expand" the genetic code and create novel proteins known as alloproteins incorporating non-proteinogenic amino acids. Many important proteinogenic and non-proteinogenic amino acids also play critical non-protein roles within the body. For example, in the human brain, glutamate (standard glutamic acid) and gamma-amino-butyric acid ("GABA", non-standard gamma-amino acid) are, respectively, the main excitatory and inhibitory neurotransmitters; hydroxyproline (a major component of the connective tissue collagen) is synthesised from proline; the 3
  • 4. standard amino acid glycine is used to synthesise porphyrins used inred blood cells; and the non-standard carnitine is used in lipid transport. Nine of the 20 standard amino acids are called "essential" for humans because they cannot be created from other compounds by the human body and, so, must be taken in as food. Others may be conditionally essential for certain ages or medical conditions. Essential amino acids may also differ between species. Because of their biological significance, amino acids are important in nutrition and are commonly used in nutritional supplements,fertilizers, and food technology. Industrial uses include the production of drugs, biodegradable plastics, and chiral catalysts. Aromatic Hydrocarbon An aromatic hydrocarbon or arene (or sometimes aryl hydrocarbon)is a hydrocarboncharacterized by general alternating double and single bonds between carbons. The term 'aromatic' was assigned before the physical mechanism determining aromaticity was discovered, and was derived from the fact that many of the compounds have a sweet scent. The configuration of six carbon atoms in aromatic compounds is known as a benzene ring, after the simplest possible such hydrocarbon,benzene. Aromatic hydrocarbons can be monocyclic(MAH) or polycyclic (PAH). Some non-benzene-based compounds called heteroarenes, which follow Hückel's rule, are also aromatic compounds. In these compounds, at least one carbon atom is replaced by one of theheteroatoms oxygen, nitrogen, or sulfur. Examples of non- benzene compounds with aromatic properties are furan, a heterocyclic compound with a five-membered ring that includes an oxygen atom, andpyridine, a heterocyclic compound with a six- membered ring containing one nitrogen atom 4
  • 5. Atoms The atom is a basic unit of matter that consists of a dense central nucleussurrounded by a cloud of negatively charged electrons. The atomic nucleuscontains a mix of positively charged protons and electrically neutral neutrons(except in the case of hydrogen-1, which is the only stable nuclide with no neutrons). The electrons of an atom are bound to the nucleus by theelectromagnetic force. Likewise, a group of atoms can remain bound to each other by chemical bonds based on the same force, forming a molecule. An atom containing an equal number of protons and electrons is electrically neutral, otherwise it is positively or negatively charged and is known as an ion. An atom is classified according to the number of protons and neutrons in its nucleus: the number of protons determines the chemical element, and thenumber of neutrons determines the isotope of the element. Chemical atoms, which in science now carry the simple name of "atom," are minuscule objects with diameters of a few tenths of a nanometer and tiny masses proportional to the volume implied by these dimensions. Atoms can only be observed individually using special instruments such as the scanning tunneling microscope. Over 99.94% of an atom's mass is concentrated in the nucleus, with protons and neutrons having roughly equal mass. Each element has at least one isotope with an unstable nucleus that can undergoradioactive decay. This can result in a transmutation that changes the number of protons or neutrons in a nucleus. Electrons that are bound to atoms possess a set of stable energy levels, or orbitals, and can undergo transitions between them by absorbing or emitting photons that match the energy differences between the levels. The electrons determine the chemical properties of an element, and strongly influence an atom's magneticproperties. The principles of quantum mechanics have been successfully used to model the observed properties of the atom. 5
B

Bacteria
Bacteria (singular: bacterium) constitute a large domain of prokaryotic microorganisms. Typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. Bacteria were among the first life forms to appear on Earth, and are present in most of its habitats. Bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep portions of Earth's crust. Bacteria also live in symbiotic and parasitic relationships with plants and animals. They are also known to have flourished in manned spacecraft. There are typically 40 million bacterial cells in a gram of soil and a million bacterial cells in a milliliter of fresh water. There are approximately 5×10^30 bacteria on Earth, forming a biomass which exceeds that of all plants and animals. Bacteria are vital in recycling nutrients, with many of the stages in nutrient cycles dependent on these organisms, such as the fixation of nitrogen from the atmosphere and putrefaction. In the biological communities surrounding hydrothermal vents and cold seeps, bacteria provide the nutrients needed to sustain life by converting dissolved compounds such as hydrogen sulphide and methane to energy. On 17 March 2013, researchers reported data that suggested bacterial life forms thrive in the Mariana Trench, the deepest spot on Earth. Other researchers reported related studies that microbes thrive inside rocks up to 1,900 feet below the sea floor under 8,500 feet of ocean off the coast of the northwestern United States. According to one of the researchers, "You can find microbes everywhere — they're
extremely adaptable to conditions, and survive wherever they are." Most bacteria have not been characterized, and only about half of the phyla of bacteria have species that can be grown in the laboratory. The study of bacteria is known as bacteriology, a branch of microbiology. There are approximately ten times as many bacterial cells in the human flora as there are human cells in the body, with the largest number of the human flora being in the gut flora, and a large number on the skin. The vast majority of the bacteria in the body are rendered harmless by the protective effects of the immune system, and some are beneficial. However, several species of bacteria are pathogenic and cause infectious diseases, including cholera, syphilis, anthrax, leprosy, and bubonic plague. The most common fatal bacterial diseases are respiratory infections, with tuberculosis alone killing about 2 million people a year, mostly in sub-Saharan Africa. In developed countries, antibiotics are used to treat bacterial infections and are also used in farming, making antibiotic resistance a growing problem. In industry, bacteria are important in sewage treatment and the breakdown of oil spills, the production of cheese and yogurt through fermentation, and the recovery of gold, palladium, copper and other metals in the mining sector, as well as in biotechnology, and the manufacture of antibiotics and other chemicals. Once regarded as plants constituting the class Schizomycetes, bacteria are now classified as prokaryotes. Unlike cells of animals and other eukaryotes, bacterial cells do not contain a nucleus and rarely harbour membrane-bound organelles. Although the term bacteria traditionally included all prokaryotes, the scientific classification changed after the discovery in the 1990s that prokaryotes consist of two very different groups of organisms that evolved from an ancient common ancestor. These evolutionary domains are called Bacteria and Archaea.
Biodiversity
Biodiversity is the degree of variation of life. This can refer to genetic variation, species variation, or ecosystem variation within an
area, biome, or planet. Terrestrial biodiversity tends to be highest at low latitudes near the equator, which seems to be the result of the warm climate and high primary productivity. Marine biodiversity tends to be highest along coasts in the western Pacific, where sea surface temperature is highest, and in a mid-latitudinal band in all oceans. Biodiversity generally tends to cluster in hotspots and has been increasing through time, but will likely slow in the future. Rapid environmental changes typically cause mass extinctions. One estimate is that less than 1–3% of the species that have existed on Earth are extant. The earliest evidence for life on Earth is graphite found to be biogenic in 3.7-billion-year-old metasedimentary rocks discovered in western Greenland and microbial mat fossils found in 3.48-billion-year-old sandstone discovered in Western Australia. Since life began on Earth, five major mass extinctions and several minor events have led to large and sudden drops in biodiversity. The Phanerozoic eon (the last 540 million years) marked a rapid growth in biodiversity via the Cambrian explosion—a period during which the majority of multicellular phyla first appeared. The next 400 million years included repeated, massive biodiversity losses classified as mass extinction events. In the Carboniferous, rainforest collapse led to a great loss of plant and animal life. The Permian–Triassic extinction event, 251 million years ago, was the worst; vertebrate recovery took 30 million years. The most recent, the Cretaceous–Paleogene extinction event, occurred 65 million years ago and has often attracted more attention than others because it resulted in the extinction of the dinosaurs. The period since the emergence of humans has displayed an ongoing biodiversity reduction and an accompanying loss of genetic diversity. Named the Holocene extinction, the reduction is caused primarily by human impacts,
particularly habitat destruction. Conversely, biodiversity impacts human health in a number of ways, both positively and negatively.

Biomass
Biomass is biological material derived from living, or recently living, organisms. It most often refers to plants or plant-derived materials, which are specifically called lignocellulosic biomass. As an energy source, biomass can either be used directly via combustion to produce heat, or indirectly after converting it to various forms of biofuel. Conversion of biomass to biofuel can be achieved by different methods, which are broadly classified into thermal, chemical, and biochemical methods. Wood remains the largest biomass energy source today; examples include forest residues (such as dead trees, branches and tree stumps), yard clippings, wood chips and even municipal solid waste. In the second sense, biomass includes plant or animal matter that can be converted into fibers or other industrial chemicals, including biofuels. Industrial biomass can be grown from numerous types of plants, including miscanthus, switchgrass, hemp, corn, poplar, willow, sorghum, sugarcane, bamboo and a variety of tree species, ranging from eucalyptus to oil palm (palm oil). Plant energy is produced by crops specifically grown for use as fuel that offer high biomass output per hectare with low input energy. Some examples of these plants are wheat, which typically yields 7.5–8 tonnes of grain per hectare, and straw, which typically yields 3.5–5 tonnes per hectare in the UK. The grain can be used for liquid transportation fuels while the straw can be burned to produce heat or electricity. Plant biomass can also be degraded from cellulose to glucose through a
series of chemical treatments, and the resulting sugar can then be used as a first-generation biofuel. Biomass can be converted to other usable forms of energy, like methane gas, or transportation fuels, like ethanol and biodiesel. Rotting garbage, and agricultural and human waste, all release methane gas—also called "landfill gas" or "biogas." Crops, such as corn and sugar cane, can be fermented to produce the transportation fuel ethanol. Biodiesel, another transportation fuel, can be produced from left-over food products like vegetable oils and animal fats. Also, biomass-to-liquids (BTL) and cellulosic ethanol are still under research.

Biosynthesis
Biosynthesis (also called biogenesis or anabolism) is a multi-step, enzyme-catalyzed process where substrates are converted into more complex products. In biosynthesis, simple compounds are modified, converted into other compounds, or joined together to form macromolecules. This process often consists of metabolic pathways. Some of these biosynthetic pathways are located within a single cellular organelle, while others involve enzymes that are located within multiple cellular organelles. Examples of these biosynthetic pathways include the production of lipid membrane components and nucleotides. The prerequisite elements for biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes, which may require coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include proteins, which are composed of amino acid monomers
joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds.

Buoyancy
Buoyancy is an upward force exerted by a fluid that opposes the weight of an immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid. Thus a column of fluid, or an object submerged in the fluid, experiences greater pressure at the bottom of the column than at the top. This difference in pressure results in a net force that tends to accelerate an object upwards. The magnitude of that force is proportional to the difference in the pressure between the top and the bottom of the column, and (as explained by Archimedes' principle) is also equivalent to the weight of the fluid that would otherwise occupy the column, i.e. the displaced fluid. For this reason, an object whose density is greater than that of the fluid in which it is submerged tends to sink. If the object is either less dense than the liquid or is shaped appropriately (as in a boat), the force can keep the object afloat. This can occur only in a reference frame which either has a gravitational field or is accelerating due to a force other than gravity defining a "downward" direction (that is, a non-inertial reference frame). In a situation of fluid statics, the net upward buoyancy force is equal to the magnitude of the weight of fluid displaced by the body. The center of buoyancy of an object is the centroid of the displaced volume of fluid.
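Archimedes' principle described above can be sketched in a few lines of code. This is an illustrative sketch, not from the text: the function names, the sample volume, and the density values are my own assumptions.

```python
# Sketch of Archimedes' principle: the buoyant force equals the
# weight of the fluid displaced by the submerged body.

RHO_WATER = 1000.0  # density of fresh water, kg/m^3 (assumed reference)
G = 9.81            # gravitational acceleration, m/s^2

def buoyant_force(displaced_volume_m3, fluid_density=RHO_WATER):
    """Weight of the displaced fluid, in newtons."""
    return fluid_density * displaced_volume_m3 * G

def floats_in(object_density, fluid_density=RHO_WATER):
    """A uniform solid floats when it is less dense than the fluid."""
    return object_density < fluid_density

# A fully submerged 0.002 m^3 block displaces 2 kg of water:
print(buoyant_force(0.002))   # ~19.62 N upward
print(floats_in(500.0))       # less dense than water, so it floats
print(floats_in(7870.0))      # denser than water (e.g. iron), so it sinks
```

The comparison in `floats_in` restates the sentence above: an object denser than the surrounding fluid tends to sink, a less dense one is kept afloat.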
C

CELL
Cell, in biology, the unit of structure and function of which all plants and animals are composed. The cell is the smallest unit in the living organism that is capable of integrating the essential life processes. There are many unicellular organisms, e.g., bacteria and protozoans, in which the single cell performs all life functions. In higher organisms, a division of labor has evolved in which groups of cells have differentiated into specialized tissues, which in turn are grouped into organs and organ systems. Cells can be separated into two major groups: prokaryotes, cells whose DNA is not segregated within a well-defined nucleus surrounded by a membranous nuclear envelope, and eukaryotes, those with a membrane-enveloped nucleus. The bacteria (kingdom Monera) are prokaryotes. They are smaller in size and simpler in internal structure than eukaryotes and are believed to have
evolved much earlier. All organisms other than bacteria consist of one or more eukaryotic cells. All cells share a number of common properties: they store information in genes made of DNA; they use proteins as their main structural material; they synthesize proteins in the cell's ribosomes using the information encoded in the DNA and mobilized by means of RNA; they use adenosine triphosphate as the means of transferring energy for the cell's internal processes; and they are enclosed by a cell membrane, composed of proteins and a double layer of lipid molecules, that controls the flow of materials into and out of the cell.

CELLULOSE
Cellulose is an organic compound with the formula (C6H10O5)n, a polysaccharide consisting of a linear chain of several hundred to over ten thousand β(1→4) linked D-glucose units. Cellulose is an important structural component of the primary cell wall of green plants, many forms of algae and the oomycetes. Some species of bacteria secrete it to form biofilms. Cellulose is the most abundant organic polymer on Earth. The cellulose content of cotton fiber is 90%, that of wood is 40–50% and that of dried hemp is approximately 45%. Cellulose is mainly used to produce paperboard and paper. Smaller quantities are converted into a wide variety of derivative products such as cellophane and rayon. Conversion of cellulose from energy crops into biofuels such as cellulosic ethanol is under investigation as an alternative fuel source. Cellulose for industrial use is mainly obtained from wood pulp and cotton.
Some animals, particularly ruminants and termites, can digest cellulose with the help of symbiotic micro-organisms that live in their guts, such as Trichonympha. Humans can digest cellulose to some extent; however, it mainly acts as a hydrophilic bulking agent for feces and is often referred to as a "dietary fiber". Cellulose was discovered in 1838 by the French chemist Anselme Payen, who isolated it from plant matter and determined its chemical formula. Cellulose was used to produce the first successful thermoplastic polymer, celluloid, by Hyatt Manufacturing Company in 1870. Production of rayon ("artificial silk") from cellulose began in the 1890s, and cellophane was invented in 1912. Hermann Staudinger determined the polymer structure of cellulose in 1920. The compound was first chemically synthesized (without the use of any biologically derived enzymes) in 1992, by Kobayashi and Shoda.

CLEOPATRA
Cleopatra VII Philopator (Greek: Κλεοπάτρα Φιλοπάτωρ; late 69 BC – August 12, 30 BC), known to history as Cleopatra, was the last active pharaoh of Ancient Egypt, only shortly survived by her son, Caesarion, as pharaoh. She was a member of the Ptolemaic dynasty, a family of Greek origin that ruled Ptolemaic Egypt after Alexander the Great's death during the Hellenistic period. The Ptolemies, throughout their dynasty, spoke Greek and refused to speak Egyptian, which is the reason that Greek as well as Egyptian languages were used on official court documents such as the Rosetta Stone. By contrast, Cleopatra did learn to speak Egyptian and represented herself as the reincarnation of an Egyptian goddess, Isis.
Cleopatra originally ruled jointly with her father, Ptolemy XII Auletes, and later with her brothers, Ptolemy XIII and Ptolemy XIV, whom she married as per Egyptian custom, but eventually she became sole ruler. As pharaoh, she consummated a liaison with Julius Caesar that solidified her grip on the throne. She later elevated her son with Caesar, Caesarion, to co-ruler in name. After Caesar's assassination in 44 BC, she aligned with Mark Antony in opposition to Caesar's legal heir, Gaius Julius Caesar Octavianus (later known as Augustus). With Antony, she bore the twins Cleopatra Selene II and Alexander Helios, and another son, Ptolemy Philadelphus (her unions with her brothers had produced no children). After losing the Battle of Actium to Octavian's forces, Antony committed suicide. Cleopatra followed suit, according to tradition killing herself by means of an asp bite on August 12, 30 BC. She was briefly outlived by Caesarion, who was declared pharaoh by his supporters but soon killed on Octavian's orders. Egypt became the Roman province of Aegyptus. To this day, Cleopatra remains a popular figure in Western culture. Her legacy survives in numerous works of art and the many dramatizations of her story in literature and other media, including William Shakespeare's tragedy Antony and Cleopatra, Jules Massenet's opera Cléopâtre and the 1963 film Cleopatra. In most depictions, Cleopatra is portrayed as a great beauty, and her successive conquests of the world's most powerful men are taken as proof of her aesthetic and sexual appeal.

CLIMATE
Climate is a measure of the average pattern of variation in temperature, humidity, atmospheric pressure, wind, precipitation, atmospheric particle count and other meteorological variables in a given region over long periods of time. Climate is
different from weather, in that weather only describes the short-term conditions of these variables in a given region. A region's climate is generated by the climate system, which has five components: atmosphere, hydrosphere, cryosphere, land surface, and biosphere. The climate of a location is affected by its latitude, terrain, and altitude, as well as nearby water bodies and their currents. Climates can be classified according to the average and the typical ranges of different variables, most commonly temperature and precipitation. The most commonly used classification scheme was originally developed by Wladimir Köppen. The Thornthwaite system, in use since 1948, incorporates evapotranspiration along with temperature and precipitation information and is used in studying animal species diversity and potential effects of climate changes. The Bergeron and Spatial Synoptic Classification systems focus on the origin of air masses that define the climate of a region. Paleoclimatology is the study of ancient climates. Since direct observations of climate are not available before the 19th century, paleoclimates are inferred from proxy variables that include non-biotic evidence such as sediments found in lake beds and ice cores, and biotic evidence such as tree rings and coral. Climate models are mathematical models of past, present and future climates. Climate change may occur over long and short timescales from a variety of factors; recent warming is discussed in global warming. Climate is commonly defined as the weather averaged over a long period. The standard averaging period is 30 years, but other periods may be used depending on the purpose. Climate also includes statistics other than the average, such as the magnitudes of day-to-day or year-to-year variations.

CRO-MAGNON
Cro-Magnon man (krō-măgˈnən, –mănˈyən), an early Homo sapiens
(the species to which modern humans belong) that lived about 40,000 years ago. Skeletal remains and associated artifacts of the Aurignacian culture were first found in 1868 in Les Eyzies, Dordogne, France. Later discoveries were made in a number of caverns in the Dordogne valley, Solutré, and in Spain, Germany, and central Europe. Cro-Magnon man was anatomically identical to modern humans, but differed significantly from Neanderthals, who disappear from the fossil record about 10,000 years after the appearance of Aurignacian and other upper Paleolithic populations (e.g. the Perigordian culture). The abrupt disappearance of Neanderthal populations and the associated Mousterian technologies, the sudden appearance of modern Homo sapiens (who had arisen earlier in Africa and migrated to Europe) and the associated upper Paleolithic technologies, and the absence of transitional anatomical or technological forms have led most researchers to conclude that Neanderthals were driven to extinction through competition with Cro-Magnon or related populations. Greater linguistic competence and cultural sophistication are often suggested as characteristics tilting the competitive balance in favour of upper Paleolithic groups. Finely crafted stone and bone tools, shell and ivory jewelry, and polychrome paintings found on cave walls all testify to the cultural advancement of Cro-Magnon man.
D

Density
A graduated cylinder containing various coloured liquids with different densities. The density, or more precisely, the volumetric mass density, of a substance is its mass per unit volume. The symbol most often used for density is ρ (the lower case Greek letter rho). Mathematically, density is defined as mass divided by volume:

ρ = m / V

where ρ is the density, m is the mass, and V is the volume. In some cases (for instance, in the United States oil and gas industry), density is loosely defined as its weight per unit volume, although this is scientifically inaccurate; this quantity is more specifically called specific weight. For a pure substance the density has the same numerical value as its mass concentration. Different materials usually have different densities, and density may be relevant to buoyancy, purity and packaging. Osmium and iridium are the densest known elements at standard conditions for temperature and pressure, but certain chemical compounds may be denser.
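The definition ρ = m/V and the related notion of specific gravity can be sketched as follows. The helper names and sample values are illustrative assumptions, not from the text.

```python
# Density as mass per unit volume, and relative density (specific
# gravity) as the ratio to a standard material, usually water.

WATER_DENSITY = 1000.0  # kg/m^3, assumed reference material

def density(mass_kg, volume_m3):
    """rho = m / V, in kg/m^3."""
    return mass_kg / volume_m3

def specific_gravity(rho, reference=WATER_DENSITY):
    """Dimensionless ratio; a value below 1 means the substance floats in water."""
    return rho / reference

rho_ice = density(917.0, 1.0)      # roughly the density of ice, kg/m^3
print(specific_gravity(rho_ice))   # about 0.917, so ice floats
```

A specific gravity below one reproduces the statement in the text that such a substance floats in water.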
To simplify comparisons of density across different systems of units, it is sometimes replaced by the dimensionless quantity "specific gravity" or "relative density", i.e. the ratio of the density of the material to that of a standard material, usually water. Thus a specific gravity less than one means that the substance floats in water. The density of a material varies with temperature and pressure. This variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density. Increasing the temperature of a substance (with a few exceptions) decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, due to the decrease in the density of the heated fluid. This causes it to rise relative to more dense unheated material. The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property in that increasing the amount of a substance does not increase its density; rather it increases its mass.

Destructive interference
Waves with the same frequency traveling in opposite directions. Once we have the condition for constructive interference, destructive interference is a straightforward extension. The basic requirement for destructive interference is that the two waves are shifted by half a wavelength. This means that the path difference for the two waves must be: R1 − R2 = λ/2. But, since we can always shift a wave by one full wavelength, the full condition for destructive interference becomes:
R1 − R2 = λ/2 + nλ.

Now that we have mathematical statements for the requirements for constructive and destructive interference, we can apply them to a new situation and see what happens. To create two waves traveling in opposite directions, we can take our two speakers and point them at each other, as shown in the figure above. We again want to find the conditions for constructive and destructive interference. As we have seen, the simplest way to get constructive interference is for the distance from the observer to each source to be equal. Using our mathematical terminology, we want R1 − R2 = 0, or R1 = R2. Looking at the figure above, we see that the point where the two paths are equal is exactly midway between the two speakers (the point M in the figure). At this point, there will be constructive interference, and the sound will be strong. It makes sense to use the midpoint as a reference, as we know that we have constructive interference. How far must we move our observer to get to destructive interference? If we move to the left by an amount x, the distance R1 increases by x and the distance R2 decreases by x. If R1 increases and R2 decreases, the difference between the two, R1 − R2, increases by an amount 2x. So, at the point x, the path difference is R1 − R2 = 2x. Now comes the tricky part. If 2x happens to be equal to λ/2, we have met the conditions for destructive interference. Therefore, if 2x = λ/2, or x = λ/4, we have destructive interference. To put it another way, in the situation above, if you move one quarter of a wavelength away from the midpoint, you will find destructive interference and the sound will sound very weak, or you might not hear anything at all. What happens if we keep moving our observation point? If the path difference, 2x, equals one whole wavelength, we will have constructive interference: 2x = λ. Solving for x, we have x = λ/2.
In other words, if we move by half a wavelength, we will again have constructive interference and the sound will be loud. As we keep moving the observation point, we will find that we keep going through points of constructive and destructive interference. This is a bit more complicated than the first example, where we had either constructive or destructive interference regardless of where we listened. In this case, whether there is constructive or destructive interference depends on where we are listening. However, the fundamental conditions on the path difference are still the same.
What does this pattern of constructive and destructive interference look like? We can map it out by indicating where we have constructive and destructive interference. What we see is a repeating pattern of constructive and destructive interference, and it takes a distance of λ/4 to get from one to the other. Where have we seen this pattern before? At a point of constructive interference, the amplitude of the wave is large, and this is just like an antinode. At a point of destructive interference, the amplitude is zero, and this is like a node. So, if we think of the points above as antinodes and nodes, we see that we have exactly the same pattern of nodes and antinodes as in a standing wave. From this, we must conclude that two waves traveling in opposite directions create a standing wave with the same frequency! You can get a more intuitive understanding of this by looking at the Physlet entitled Superposition. Translating the interference conditions into mathematical statements is an essential part of physics and can be quite difficult at first. Moreover, a rather subtle distinction was made that you might not have noticed. On the one hand, we have some physical situation or geometry. This refers to the placement of the speakers and the position of the observer. This really has nothing to do with waves and it simply depends on how the problem was set up. Given a particular setup, you can always figure out the path length from the observer to the two sources of the waves that are going to interfere, and hence you can also find the path difference R1 − R2. On the other hand, completely independent of the geometry, there is a property of waves called superposition that can lead to constructive or destructive interference. We can express these conditions mathematically as: R1 − R2 = 0 + nλ for constructive interference, and R1 − R2 = λ/2 + nλ for destructive interference. Again, R1 − R2 was determined from the geometry of the problem.
These two aspects must be understood separately: how to calculate the path difference and the conditions determining the type of interference.
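The two interference conditions can be turned into a small sketch that classifies a given path difference. The function name, the tolerance, and the sample wavelength are my own assumptions; the text itself defines only the conditions.

```python
# Classify interference from the path difference R1 - R2 and the
# wavelength: constructive when R1 - R2 = n*wavelength, destructive
# when R1 - R2 = wavelength/2 + n*wavelength.

def interference(path_difference, wavelength, tol=1e-9):
    """Return 'constructive', 'destructive', or 'partial'."""
    # Reduce the path difference to a fraction of one wavelength,
    # which removes the arbitrary n*wavelength shift.
    frac = (path_difference / wavelength) % 1.0
    if min(frac, 1.0 - frac) < tol:      # a whole number of wavelengths
        return "constructive"
    if abs(frac - 0.5) < tol:            # half a wavelength off
        return "destructive"
    return "partial"

wavelength = 0.68  # metres; assumed example value
print(interference(0.0, wavelength))             # midpoint M: constructive
print(interference(wavelength / 2, wavelength))  # quarter-wavelength shift of each path: destructive
print(interference(2 * wavelength, wavelength))  # two full wavelengths: constructive
```

The modulo step is what the text calls shifting a wave by one full wavelength: only the fractional part of the path difference, in wavelengths, decides the type of interference.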
Diffraction
Diffraction pattern of a red laser beam made on a plate after passing through a small circular hole in another plate. Diffraction refers to various phenomena which occur when a wave encounters an obstacle. In classical physics, the diffraction phenomenon is described as the apparent bending of waves around small obstacles and the spreading out of waves past small openings. Similar effects occur when a light wave travels through a medium with a varying refractive index, or a sound wave travels through one with varying acoustic impedance. Diffraction occurs with all waves, including sound waves, water waves, and electromagnetic waves such as visible light, X-rays and radio waves. As physical objects have wave-like properties (at the atomic level), diffraction also occurs with matter and can be studied according to the principles of quantum mechanics. Italian scientist Francesco Maria Grimaldi coined the word "diffraction" and was the first to record accurate observations of the phenomenon in 1660. Richard Feynman wrote: "No-one has ever been able to define the difference between interference and diffraction satisfactorily. It is just a question of usage, and there is no specific, important physical difference between them." He suggested that when there are only a few sources, say two, we call it interference, as in Young's slits, but with a large number of sources, the process is labelled diffraction. While diffraction occurs whenever propagating waves encounter such changes, its effects are generally most pronounced for waves whose wavelength is roughly similar to the dimensions of the diffracting objects. If the obstructing object provides
multiple, closely spaced openings, a complex pattern of varying intensity can result. This is due to the superposition, or interference, of different parts of a wave that travels to the observer by different paths. The formalism of diffraction can also describe the way in which waves of finite extent propagate in free space. For example, the expanding profile of a laser beam, the beam shape of a radar antenna and the field of view of an ultrasonic transducer can all be analyzed using diffraction equations. The effects of diffraction of light were first carefully observed and characterized by Francesco Maria Grimaldi, who also coined the term diffraction, from the Latin diffringere, 'to break into pieces', referring to light breaking up into different directions. The results of Grimaldi's observations were published posthumously in 1665. Isaac Newton studied these effects and attributed them to inflexion of light rays. James Gregory (1638–1675) observed the diffraction patterns caused by a bird feather, which was effectively the first diffraction grating to be discovered. Thomas Young performed a celebrated experiment in 1803 demonstrating interference from two closely spaced slits. Explaining his results by interference of the waves emanating from the two different slits, he deduced that light must propagate as waves. Augustin-Jean Fresnel did more definitive studies and calculations of diffraction, made public in 1815 and 1818, and thereby gave great support to the wave theory of light that had been advanced by Christiaan Huygens and reinvigorated by Young, against Newton's particle theory.

Distance
Distance, also known as farness, is a numerical description of how far apart objects are. In physics or everyday usage, distance may refer to a physical length, or an estimation based on other criteria (e.g. "two counties over"). In mathematics, a distance function or metric is a generalization of the concept of physical distance. A metric is a function that behaves
according to a specific set of rules, and is a concrete way of describing what it means for elements of some space to be "close to" or "far away from" each other. In most cases, "distance from A to B" is interchangeable with "distance between B and A". In analytic geometry, the distance between two points of the xy-plane can be found using the distance formula. The distance between (x1, y1) and (x2, y2) is given by:

d = √((x2 − x1)² + (y2 − y1)²)

Similarly, given points (x1, y1, z1) and (x2, y2, z2) in three-space, the distance between them is:

d = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²)

These formulas are easily derived by constructing a right triangle with a leg on the hypotenuse of another (with the other leg orthogonal to the plane that contains the first triangle) and applying the Pythagorean theorem. In the study of complicated geometries, we call this (most common) type of distance Euclidean distance, as it is derived from the Pythagorean theorem, which does not hold in non-Euclidean geometries. This distance formula can also be expanded into the arc-length formula.

Distance in Euclidean space
In the Euclidean space Rn, the distance between two points is usually given by the Euclidean distance (2-norm distance). Other distances, based on other norms, are sometimes used instead. For a point (x1, x2, ..., xn) and a point (y1, y2, ..., yn), the Minkowski distance of order p (p-norm distance) is defined as:

1-norm distance: |x1 − y1| + |x2 − y2| + ... + |xn − yn|
2-norm distance: (|x1 − y1|² + |x2 − y2|² + ... + |xn − yn|²)^(1/2)
p-norm distance: d = (|x1 − y1|^p + |x2 − y2|^p + ... + |xn − yn|^p)^(1/p)
infinity-norm distance: d = max(|x1 − y1|, |x2 − y2|, ..., |xn − yn|)

p need not be an integer, but it cannot be less than 1, because otherwise the triangle inequality does not hold.

The 2-norm distance is the Euclidean distance, a generalization of the Pythagorean theorem to more than two coordinates. It is what would be obtained if the distance between two points were measured with a ruler: the "intuitive" idea of distance. The 1-norm distance is more colourfully called the taxicab norm or Manhattan distance, because it is the distance a car would drive in a city laid out in square blocks (if there are no one-way streets). The infinity-norm distance is also called Chebyshev distance. In 2D, it is the minimum number of moves kings require to travel between two squares on a chessboard. The p-norm is rarely used for values of p other than 1, 2, and infinity, but see superellipse. In physical space the Euclidean distance is in a way the most natural one, because in this case the length of a rigid body does not change with rotation.

Variational formulation of distance

The Euclidean distance between two points in space (A and B) may be written in a variational form, where the distance is the minimum value of an integral taken along a path between them:

D = ∫ √((dx/dt)² + (dy/dt)² + (dz/dt)²) dt

Here (x(t), y(t), z(t)) is the trajectory (path) between the two points. The value of the integral (D) represents the
length of this trajectory. The distance is the minimal value of this integral, and is obtained when the trajectory is the optimal one. In the familiar Euclidean case (the above integral) this optimal trajectory is a straight line.

Direct current

Direct current (DC) is the unidirectional flow of electric charge. Direct current is produced by sources such as batteries, thermocouples, solar cells, and commutator-type electric machines of the dynamo type. Direct current may flow in a conductor such as a wire, but can also flow through semiconductors, insulators, or even through a vacuum as in electron or ion beams. The electric current flows in a constant direction, distinguishing it from alternating current (AC). A term formerly used for direct current was galvanic current.

Direct current may be obtained from an alternating current supply by use of a current-switching arrangement called a rectifier, which contains electronic elements (usually) or electromechanical elements (historically) that allow current to flow only in one direction. Direct current may be made into alternating current with an inverter or a motor-generator set.

The first commercial electric power transmission (developed by Thomas Edison in the late nineteenth century) used direct current. Because of the significant advantages of alternating current over direct current in transforming and transmission, electric power distribution is nearly all alternating current today. In the mid-1950s, HVDC transmission was developed, and is now an option instead of long-distance high-voltage alternating current systems. For long-distance undersea cables (e.g. between countries, such as NorNed), this is the only technically feasible option. For applications requiring direct current, such as third rail power systems, alternating current is distributed to a substation, which utilizes a rectifier to convert the power to direct current. See War of Currents.
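The Euclidean and Minkowski p-norm distances defined in the Distance entry above can be sketched in a few lines of Python. This is a minimal illustration; the function name and the sample points are my own, not from the text:

```python
import math

def minkowski(x, y, p):
    """Minkowski distance of order p between two points (requires p >= 1)."""
    if p == math.inf:
        # infinity-norm (Chebyshev) distance: largest coordinate difference
        return max(abs(a - b) for a, b in zip(x, y))
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

a, b = (1, 2), (4, 6)
print(minkowski(a, b, 1))         # 1-norm (taxicab/Manhattan): 7.0
print(minkowski(a, b, 2))         # 2-norm (Euclidean): 5.0
print(minkowski(a, b, math.inf))  # infinity-norm (Chebyshev): 4
```

For p = 2 this reduces to the familiar distance formula: the 3-4-5 right triangle between the two sample points gives a Euclidean distance of exactly 5.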
Direct current is used to charge batteries, and in nearly all electronic systems, as the power supply. Very large quantities of direct-current power are used in production of aluminum and other electrochemical processes. Direct current is used for some railway propulsion, especially in urban areas. High-voltage direct current is used to transmit large amounts of power from
remote generation sites or to interconnect alternating current power grids.

E

Earthquake

An earthquake (also known as a quake, tremor or temblor) is the result of a sudden release of energy in the Earth's crust that creates seismic waves. The seismicity, seismism or seismic activity of an area refers to the frequency, type and size of earthquakes experienced over a period of time.

Earthquakes are measured using observations from seismometers. The moment magnitude is the most common scale on which earthquakes larger than approximately 5 are reported for the entire globe. The more numerous earthquakes smaller than magnitude 5 reported by national seismological observatories are measured mostly on the local magnitude scale, also referred to as the Richter scale. These two scales are numerically similar over their range of validity. Magnitude 3 or lower earthquakes are mostly almost imperceptible or weak, and magnitude 7 and over potentially cause serious damage over larger areas, depending on their depth. The largest earthquakes in historic times have been of magnitude slightly over 9, although there is no limit to the possible magnitude.

(Figure: fault types.)

The most recent large earthquake of magnitude 9.0 or larger was a 9.0
magnitude earthquake in Japan in 2011 (as of October 2012), and it was the largest Japanese earthquake since records began. Intensity of shaking is measured on the modified Mercalli scale. The shallower an earthquake, the more damage to structures it causes, all else being equal.

At the Earth's surface, earthquakes manifest themselves by shaking and sometimes displacement of the ground. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can also trigger landslides, and occasionally volcanic activity.

In its most general sense, the word earthquake is used to describe any seismic event — whether natural or caused by humans — that generates seismic waves. Earthquakes are caused mostly by rupture of geological faults, but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests. An earthquake's point of initial rupture is called its focus or hypocenter. The epicenter is the point at ground level directly above the hypocenter.

Electromagnetic radiation

Electromagnetic radiation (EM radiation or EMR) is a form of radiant energy, propagating through space via photons. In a vacuum, it propagates at a characteristic speed, the speed of light, normally in straight lines. EMR is emitted and absorbed by charged particles. As an electromagnetic wave, it has both electric and magnetic field components, which oscillate in a fixed relationship to one another, perpendicular to each other and perpendicular to the direction of energy and wave propagation. EMR is characterized by the frequency or wavelength of its wave. The electromagnetic spectrum, in order of increasing frequency and decreasing wavelength, consists of radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays.
The eyes of various organisms sense a somewhat variable but relatively small range of frequencies of EMR called the visible spectrum or light. Higher frequencies correspond to proportionately more energy carried by each photon; for instance, a single gamma ray photon carries far more energy than a single photon of visible light.
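The statement that a photon's energy grows in proportion to its frequency follows from the Planck relation E = hf = hc/λ. The sketch below compares a visible and a gamma-ray photon; the wavelengths chosen (550 nm and 1 pm) are illustrative values of my own, not from the text:

```python
# Photon energy via the Planck relation E = h*f = h*c / wavelength.
h = 6.626e-34  # Planck constant, J*s
c = 2.998e8    # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy in joules of a photon with the given wavelength in metres."""
    return h * c / wavelength_m

visible = photon_energy(550e-9)  # green light, ~550 nm (illustrative)
gamma = photon_energy(1e-12)     # a 1 pm gamma ray (illustrative)

print(f"visible photon: {visible:.3e} J")        # ~3.6e-19 J
print(f"gamma photon:   {gamma:.3e} J")
print(f"energy ratio:   {gamma / visible:.0f}")  # 550000
```

Because energy is inversely proportional to wavelength, the ratio of the two energies is just the ratio of the two wavelengths: this gamma photon carries roughly half a million times the energy of the visible one.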
Electromagnetic radiation is associated with EM fields that are free to propagate themselves without the continuing influence of the moving charges that produced them, because they have achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far field. In this language, the near field refers to EM fields near the charges and current that directly produced them, as for example with simple magnets and static electricity phenomena. In EMR, the magnetic and electric fields are each induced by changes in the other type of field, and the combined field thus propagates as a wave. This close relationship assures that both types of fields in EMR stand in phase and in a fixed ratio of intensity to each other, with maxima and nodes in each found at the same places in space.

EMR carries energy—sometimes called radiant energy—through space continuously away from the source (this is not true of the near-field part of the EM field). EMR also carries both momentum and angular momentum. These properties may all be imparted to matter with which it interacts. EMR is produced from other types of energy when created, and it is converted to other types of energy when it is destroyed. The photon is the quantum of the electromagnetic interaction, and is the basic "unit" or constituent of all forms of EMR. The quantum nature of light becomes more apparent at high frequencies (thus high photon energy). Such photons behave more like particles than lower-frequency photons do.

In classical physics, EMR is considered to be produced when charged particles are accelerated by forces acting on them. Electrons are responsible for emission of most EMR because they have low mass, and therefore are easily accelerated by a variety of mechanisms. Rapidly moving electrons are most sharply accelerated when they encounter a region of force, so they are responsible for producing much of the highest frequency electromagnetic radiation observed in nature.
Quantum processes can also produce EMR, such as when atomic nuclei undergo gamma decay, and processes such as neutral pion decay.

(Diagram: a plane linearly polarized EMR wave propagating from left to right. The electric field is in a vertical plane and the magnetic field in a horizontal plane. The two types of fields in EMR waves are always in phase with each other, with a fixed ratio of electric to magnetic field intensity.)

The effects of EMR upon biological systems (and also on many other chemical systems, under standard conditions) depend both upon the radiation's power and frequency. For lower frequencies of EMR up to those of visible light (i.e., radio, microwave, infrared), the damage done to cells and also to many ordinary materials under such conditions is determined mainly by heating effects, and thus by the radiation power. By contrast, for higher frequency radiations at ultraviolet frequencies and above (i.e., X-rays and gamma rays) the damage to chemical materials and living cells by EMR is far larger than that done by simple heating, due to the ability of single photons in such high frequency EMR to damage individual molecules chemically.

Electron transport chain

The electron transport chain consists of a spatially separated series of redox reactions in which electrons are transferred from a donor molecule to an acceptor molecule. The underlying force driving these reactions is the Gibbs free energy of the reactants and products. The Gibbs free energy is the energy available ("free") to do work. Any reaction that decreases the overall Gibbs free energy of a system is thermodynamically spontaneous.

The function of the electron transport chain is to produce a transmembrane proton electrochemical gradient as a result of the redox reactions.[1] If protons flow back through the membrane, they enable mechanical work, such as rotating
bacterial flagella. ATP synthase, an enzyme highly conserved among all domains of life, converts this mechanical work into chemical energy by producing ATP, which powers most cellular reactions. A small amount of ATP is available from substrate-level phosphorylation, for example, in glycolysis. In most organisms the majority of ATP is generated in electron transport chains, while only some obtain ATP by fermentation.

The electron transport chain in the mitochondrion is the site of oxidative phosphorylation in eukaryotes. The NADH and succinate generated in the citric acid cycle are oxidized, providing energy to power ATP synthase.

Equation

In mathematics, an equation is a formula of the form A = B, where A and B are expressions that may contain one or several variables called unknowns, and "=" denotes the equality binary relation. Although written in the form of a proposition, an equation is not a statement that is either true or false, but a problem consisting of finding the values, called solutions, that, when substituted for the unknowns, yield equal values of the expressions A and B. For example, 2 is the unique solution of the equation x + 2 = 4, in which the unknown is x.[1]

Historically, equations arose from the mathematical discipline of algebra, but later became ubiquitous. "Equations" should not be confused with "identities", which are presented with the same notation but have a different meaning: for example 2 + 2 = 4 and x + y = y + x are identities (which implies they are necessarily true) in arithmetic, and do not constitute a values-finding problem, even when variables are present, as in the latter example.

(Illustration: a simple equation; x, y, z are real numbers, analogous to weights.)
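The distinction drawn above between an equation (a value-finding problem) and an identity (true for all values) can be illustrated with a brute-force check over integer candidates. This is a toy sketch; the helper name is my own, not a standard API:

```python
def solutions(f, candidates):
    """Return the candidate values x for which f(x) == 0."""
    return [x for x in candidates if f(x) == 0]

# The equation x + 2 = 4 is a value-finding problem with one solution:
print(solutions(lambda x: (x + 2) - 4, range(-10, 11)))  # [2]

# The identity x + 0 = x holds for every candidate value:
print(solutions(lambda x: (x + 0) - x, range(-10, 11)) == list(range(-10, 11)))  # True
```

The equation singles out x = 2, whereas the identity is satisfied by every value tried, which is exactly why it poses no values-finding problem.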
The term "equation" may also refer to a relation between some variables that is presented as the equality of some expressions written in terms of those variables' values. For example the equation of the unit circle is x² + y² = 1, which means that a point belongs to the circle if and only if its coordinates are related by this equation. Most physical laws are expressed by equations. One of the most famous ones is Einstein's equation E = mc². The = symbol was invented by Robert Recorde (1510–1558), who considered that nothing could be more equal than parallel straight lines with the same length.

Extinction

A species is extinct when the last existing member dies. Extinction therefore becomes a certainty when there are no surviving individuals that can reproduce and create a new generation. A species may become functionally extinct when only a handful of individuals survive, which cannot reproduce due to poor health, age, sparse distribution over a large range, a lack of individuals of both sexes (in sexually reproducing species), or other reasons.

Pinpointing the extinction (or pseudoextinction) of a species requires a clear definition of that species. If it is to be declared extinct, the species in question must be uniquely distinguishable from any ancestor or daughter species, and from any other closely related species. Extinction of a species (or replacement by a daughter species) plays a key role in the punctuated equilibrium hypothesis of Stephen Jay Gould and Niles Eldredge.

In ecology, extinction is often used informally to refer to local extinction, in which a species ceases to exist in the chosen area of study, but may still exist elsewhere. This phenomenon is also known as extirpation. Local extinctions may be followed by a replacement of the species taken from other locations; wolf reintroduction is an example of this. Species which are not extinct are termed extant. Those that are extant but threatened by extinction are referred to
as threatened or endangered species.

Currently an important aspect of extinction is human attempts to preserve critically endangered species. These are reflected by the creation of the conservation status "Extinct in the Wild" (EW). Species listed under this status by the International Union for Conservation of Nature (IUCN) are not known to have any living specimens in the wild, and are maintained only in zoos or other artificial environments. Some of these species are functionally extinct, as they are no longer part of their natural habitat and it is unlikely the species will ever be restored to the wild. When possible, modern zoological institutions try to maintain a viable population for species preservation and possible future reintroduction to the wild, through use of carefully planned breeding programs.

Extinct species

The extinction of one species' wild population can have knock-on effects, causing further extinctions. These are also called "chains of extinction". This is especially common with extinction of keystone species.
F

Facula

A facula (plural: faculae), Latin for "little torch", is literally a "bright spot". The term has several common technical uses. It is used in planetary nomenclature for naming certain surface features of planets and moons, and is also a type of surface phenomenon on the Sun. In addition, a bright region in the projected field of a light source is sometimes referred to as a facula, and photographers often use the term to describe bright, typically circular features in photographs that correspond to light sources or bright reflections in a defocused image.

Solar faculae are bright spots that form in the canyons between solar granules, short-lived convection cells several thousand kilometers across that constantly form and dissipate over timescales of several minutes. Faculae are produced by concentrations of magnetic field lines. Strong concentrations of faculae appear in areas of solar activity, with or without sunspots. The faculae and the sunspots contribute noticeably to variations in the "solar constant". The chromospheric counterpart of a facular region is called a plage.
Fecundity

Fecundity, derived from the word fecund, generally refers to the ability to reproduce. In demography, fecundity is the potential reproductive capacity of an individual or population. In biology, the definition is more equivalent to fertility, or the actual reproductive rate of an organism or population, measured by the number of gametes (eggs), seed set, or asexual propagules. This difference arises because demography considers human fecundity, which is often intentionally limited, while biology assumes that organisms do not limit fertility. Fecundity is under both genetic and environmental control, and is the major measure of fitness. Fecundation is another term for fertilization. Superfecundity refers to an organism's ability to store another organism's sperm (after copulation) and fertilize its own eggs from that store after a period of time, essentially making it appear as though fertilization occurred without sperm (i.e. parthenogenesis).

Fecundity is important and well studied in the field of population ecology. Fecundity can increase or decrease in a population according to current conditions and certain regulating factors. For instance, in times of hardship for a population, such as a lack of food, juvenile and eventually adult fecundity has been shown to decrease. Fecundity has also been shown to increase in ungulates with relation to warmer weather.

In sexual evolutionary biology, especially in sexual selection, fecundity is contrasted to reproductivity. It is the ability of an organism to breed.
In obstetrics and gynecology, fecundability is the probability of being pregnant in a single menstrual cycle, and fecundity is the probability of achieving a live birth within a single cycle.

Fahrenheit

On the Fahrenheit scale, the freezing point of water is 32 degrees Fahrenheit (°F) and the boiling point 212 °F (at standard atmospheric pressure). This puts the boiling and freezing points of water exactly 180 degrees apart.[9] Therefore, a degree on the Fahrenheit scale is 1⁄180 of the interval between the freezing point and the boiling point. On the Celsius scale, the freezing and boiling points of water are 100 degrees apart. A temperature interval of 1 °F is equal to an interval of 5⁄9 degrees Celsius. The Fahrenheit and Celsius scales intersect at −40° (−40 °F and −40 °C represent the same temperature). Absolute zero is −273.15 °C or −459.67 °F. The Rankine temperature scale uses degree intervals of the same size as those of the Fahrenheit scale, except that absolute zero is 0 R – the same way that the Kelvin temperature scale matches the Celsius scale, except that absolute zero is 0 K.[9]

The Fahrenheit scale uses the symbol ° to denote a point on the temperature scale (as does Celsius) and the letter F to indicate the use of the Fahrenheit scale (e.g. "Gallium melts at 85.5763 °F"),[10] as well as to denote a difference between temperatures or an uncertainty in temperature (e.g. "The output of the heat exchanger experiences an increase of 72 °F" and "Our standard uncertainty is ±5 °F").

A rule of thumb for conversion between degrees Celsius and degrees Fahrenheit is as follows:
Fahrenheit to Celsius: subtract 32 and halve the resulting number. Celsius to Fahrenheit: double the number and add 32. This rule gives an answer correct to within 1 °C for 50 °F (10 °C). At 0 °F (−17.8 °C) and 100 °F (37.8 °C), it gives answers of −16 °C and 34 °C, respectively. Outside this range, the error is larger. For an accurate conversion, consider that a 1 degree Celsius interval is equal to a 1.8 degree Fahrenheit interval: 32 °F = 0 °C, 50 °F = 10 °C, 68 °F = 20 °C, 86 °F = 30 °C, 104 °F = 40 °C, and so on.

Free-electron laser

A free-electron laser (FEL) is a type of laser that shares the same optical properties as conventional lasers, such as emitting a beam consisting of coherent electromagnetic radiation that can reach high power, but that uses some very different operating principles to form the beam. Unlike gas-, liquid-, or solid-state lasers such as diode lasers, in which electrons are excited in bound atomic or molecular states, free-electron lasers use a relativistic electron beam that moves freely through a magnetic structure, hence the term free electron as the lasing medium. The free-electron laser has the widest frequency range of any laser type, and can be widely tunable, currently ranging in wavelength from microwaves, through terahertz radiation and infrared, to the visible spectrum, ultraviolet, and X-ray.

Free-electron lasers were invented by John Madey in 1976 at Stanford University. The work emanates from research done by Hans Motz and his coworkers, who built an undulator at Stanford in 1953, using the wiggler magnetic configuration which is at the heart of a free-electron laser. Madey used a 24 MeV electron beam and a 5 m long wiggler to amplify a signal. Soon afterward, other laboratories with accelerators started developing such lasers. To create an FEL, a beam of electrons is accelerated to almost the speed of light.
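The exact Fahrenheit–Celsius conversions described in the Fahrenheit entry above (a 1 °C interval equals a 1.8 °F interval, with an offset of 32 °F) can be sketched as:

```python
def f_to_c(f):
    """Exact Fahrenheit-to-Celsius conversion."""
    return (f - 32) * 5 / 9

def c_to_f(c):
    """Exact Celsius-to-Fahrenheit conversion."""
    return c * 9 / 5 + 32

print(c_to_f(100))  # boiling point of water: 212.0 F
print(f_to_c(32))   # freezing point of water: 0.0 C
print(c_to_f(-40))  # the scales intersect at -40: -40.0
print(f_to_c(50))   # exact value where the rule of thumb gives 9: 10.0
```

The last line shows how close the "subtract 32 and halve" rule of thumb comes near 50 °F: it yields 9 against the exact 10 °C, within the 1 °C tolerance the entry states.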
The beam passes through an undulator, a side-to-side magnetic field produced by a periodic arrangement of magnets with alternating poles across the beam path. The general direction of the beam is called the longitudinal direction, and the direction across the beam path is called transverse. This array of magnets is commonly known as an undulator in the light source community, or a wiggler in the FEL community, because it forces the electrons in the beam to wiggle transversely along a sinusoidal path about the axis of the undulator.
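The wavelength this wiggling motion produces is fixed by the undulator period and the electron energy, via the resonance relation λ = λu(1 + K²/2)/(2γ²) discussed below. As a numerical sketch (using the entry's own values of a 1 cm undulator period and γ ≈ 2000, with K taken small):

```python
def fel_wavelength(lambda_u, gamma, K=0.0):
    """Emitted FEL wavelength (m) from the undulator period lambda_u (m),
    the relativistic Lorentz factor gamma, and the wiggler strength K."""
    return lambda_u * (1 + K**2 / 2) / (2 * gamma**2)

# The x-ray example from the text: 1 cm undulator period, gamma ~ 2000.
lam = fel_wavelength(lambda_u=0.01, gamma=2000)
print(f"{lam * 1e9:.2f} nm")  # 1.25 nm: x-ray wavelengths on the order of 1 nm
```

Raising the beam energy (larger γ) shortens the output wavelength quadratically, which is exactly the tunability the entry describes.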
The transverse acceleration of the electrons across this path results in the release of photons (synchrotron radiation), which are monochromatic but still incoherent, because the electromagnetic waves from randomly distributed electrons interfere constructively and destructively in time, and the resulting radiation power scales linearly with the number of electrons. If an external laser is provided or if the synchrotron radiation becomes sufficiently strong, the transverse electric field of the radiation beam interacts with the transverse electron current created by the sinusoidal wiggling motion, causing some electrons to gain and others to lose energy to the optical field. This energy modulation evolves into electron density (current) modulations with a period of one optical wavelength. The electrons are thus bunched into little clumps, called microbunches, separated by one optical wavelength along the axis. Whereas conventional undulators would cause the electrons to radiate independently, the radiation emitted by the microbunched electrons is in phase, and the fields add together coherently.

The FEL radiation intensity grows, causing additional microbunching of the electrons, which continue to radiate in phase with each other. This process continues until the electrons are completely microbunched and the radiation reaches a saturated power several orders of magnitude higher than that of the undulator radiation. The wavelength of the radiation emitted can be readily tuned by adjusting the energy of the electron beam or the magnetic field strength of the undulators.

FELs are relativistic machines. The wavelength of the emitted radiation, λ, is given by

λ = λu (1 + K²/2) / (2γ²),

or, when the wiggler strength parameter K, discussed below, is small,

λ ≈ λu / (2γ²),

where λu is the undulator wavelength (the spatial period of the magnetic field), γ is the relativistic Lorentz factor, and the proportionality constant depends on the undulator geometry and is of the order of 1.

(Photo: the free-electron laser FELIX at the FOM Institute for Plasma Physics.)

This formula can be understood as a combination of two relativistic effects. Imagine you are sitting on an electron passing through the undulator. Due to Lorentz contraction the undulator is shortened by a factor γ, and the electron experiences a much shorter undulator wavelength λu/γ. However, the radiation emitted at this wavelength is observed in the laboratory frame of reference, and the relativistic Doppler effect brings the second factor of γ to the above formula. Rigorous derivation from Maxwell's equations gives the divisor of 2 and the proportionality constant. In an x-ray FEL the typical undulator wavelength of 1 cm is transformed to x-ray wavelengths on the order of 1 nm by γ ≈ 2000, i.e. the electrons have to travel with the speed of 0.9999998c.

Friction

The classic rules of sliding friction were discovered by Leonardo da Vinci (1452–1519), but remained unpublished in his notebooks. They were rediscovered by Guillaume Amontons (1699). Amontons presented the nature of friction in terms of surface irregularities and the force required to raise the weight pressing the surfaces together. This view was further elaborated by Belidor (representation of rough surfaces with spherical asperities, 1737) and Leonhard Euler (1750), who derived the angle of repose of a weight on an inclined plane and first distinguished between static and kinetic friction. A different explanation was provided by Desaguliers (1725), who demonstrated the strong cohesion forces between lead spheres of which a small cap is cut off and which were then brought into contact with each other.

(Figure: block on a ramp (top) and corresponding free body diagram of just the block (bottom). For equilibrium, the line of action of the three force arrows must intersect at a common point.)
The understanding of friction was further developed by Charles-Augustin de Coulomb (1785). Coulomb investigated the influence of four main factors on friction: the nature of the materials in contact and their surface coatings; the extent of the surface area; the normal pressure (or load); and the length of time that the surfaces remained in contact (time of repose). Coulomb further considered the influence of sliding velocity, temperature and humidity, in order to decide between the different explanations on the nature of friction that had been proposed. The distinction between static and dynamic friction is made in Coulomb's friction law, although this distinction was already drawn by Johann Andreas von Segner in 1758. The effect of the time of repose was explained by Musschenbroek (1762) by considering the surfaces of fibrous materials, with fibers meshing together, which takes a finite time in which the friction increases.

John Leslie (1766–1832) noted a weakness in the views of Amontons and Coulomb. If friction arises from a weight being drawn up the inclined plane of successive asperities, why is it not then balanced by descending the opposite slope? Leslie was equally skeptical about the role of adhesion proposed by Desaguliers, which should on the whole have the same tendency to accelerate as to retard the motion. In his view friction should be seen as a time-dependent process of flattening, pressing down asperities, which creates new obstacles in what were cavities before.

Arthur Morin (1833) developed the concept of sliding versus rolling friction. Osborne Reynolds (1866) derived the equation of viscous flow. This completed the classic empirical model of friction (static, kinetic, and fluid) commonly used today in engineering. The focus of research during the last century has been to understand the physical mechanisms behind friction. F.
Phillip Bowden and David Tabor (1950) showed that at a microscopic level, the actual area of contact between surfaces is a very small fraction of the apparent area. This actual area of
contact, caused by "asperities" (roughness), increases with pressure, explaining the proportionality between normal force and frictional force. The development of the atomic force microscope (1986) has recently enabled scientists to study friction at the atomic scale.

INDEX

A
Acetylene, p.1
Alcohols, pp.1-2
Amino acid, pp.2-3
Aromatic hydrocarbon, p.4
Atoms, p.5

B
Bacteria, pp.6-7
Biochemistry, pp.7-8
Biomass, pp.9-10
Biosynthesis, p.10
Buoyancy, p.11

C
Cell, pp.12-13
Cellulose, pp.13-14
Cleopatra, pp.14-15
Climate, pp.15-16
Cro-Magnon, pp.16-17

D
Density, pp.18-19
Destructive interference, pp.19-21
Diffraction, pp.22-23
Direct current, p.26
Distance, pp.23-25

E
Earthquake, pp.27-28
Electromagnetic radiation, pp.28-30
Electron transport chain, pp.30-31
Equation, pp.31-32
Extinction, pp.32-33

F
Facula, p.34
Fecundity, pp.34-35
Fahrenheit, pp.35-36
Free-electron laser, pp.37-39
Friction, pp.39-40