Math 511 Problem Set 4, due September 21
Note: Problems 1 through 7 are the ones to be turned in. The remainder of the problems are
for extra functional analytic goodness.
1. Fix a, b ∈ R with a < b. Show that {1, t, t^2, . . . , t^n} is a linearly independent subset of
C[a, b]. From this conclude that {1, t, t^2, t^3, . . .} is a linearly independent set in C[a, b]. Give
an example of a function f ∈ C[a, b] so that f ∉ span{1, t, t^2, . . .}.
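The linear-algebra fact behind Problem 1 is that a combination c_0 + c_1 t + · · · + c_n t^n vanishing on all of [a, b] vanishes in particular at any n + 1 distinct points, and the resulting Vandermonde system forces every c_k = 0. The following Python sketch (an illustration only, not part of the assignment; the points 0, 1/5, . . . , 1 are an arbitrary choice) verifies exactly that the Vandermonde determinant of six distinct rational points is nonzero, matching the product formula ∏_{i<j}(t_j − t_i).

```python
from fractions import Fraction

def det(mat):
    # Determinant by Gaussian elimination over exact rationals.
    m = [row[:] for row in mat]
    n = len(m)
    d = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:                      # row swap flips the sign
            m[c], m[piv] = m[piv], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, n):
            factor = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= factor * m[c][k]
    return d

pts = [Fraction(i, 5) for i in range(6)]      # six distinct points in [0, 1]
V = [[t**j for j in range(6)] for t in pts]   # rows (1, t, t^2, ..., t^5)

prod = Fraction(1)                            # Vandermonde product formula
for i in range(6):
    for j in range(i + 1, 6):
        prod *= pts[j] - pts[i]

assert det(V) == prod != 0                    # nonsingular => only c = 0 works
```

Since V is invertible, the only coefficient vector with Vc = 0 is c = 0, which is the independence of {1, t, . . . , t^5}; the same argument works for every n.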
2. Prove that if 1 ≤ p_1 ≤ p_2 ≤ ∞ then l^{p_1} ⊆ l^{p_2}.
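A concrete sequence showing that the inclusion in Problem 2 is proper is x_n = 1/n, which lies in l^2 but not in l^1. The quick Python check below (illustrative only, using a partial sum as a stand-in for the divergent series) exhibits the harmonic partial sums growing past 14 while the l^2 norm settles near π/√6.

```python
import math

N = 10**6
l1_partial = sum(1.0 / n for n in range(1, N + 1))            # harmonic series: diverges like log N
l2_norm = math.sqrt(sum(1.0 / n**2 for n in range(1, N + 1))) # converges: sum 1/n^2 = pi^2/6

assert l1_partial > 14.0                                      # keeps growing without bound
assert abs(l2_norm - math.pi / math.sqrt(6)) < 1e-3           # (1/n) is in l^2 with norm pi/sqrt(6)
```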
3. Consider C[0, 2] with the function ‖·‖_1 defined by
‖f‖_1 = ∫_0^2 |f(x)| dx,  for f ∈ C[0, 2].
(a) Prove that ‖·‖_1 is a norm.
(b) Prove that the normed linear space (C[0, 2], ‖·‖_1) is not complete (and thus not a Banach
space) by considering the sequence of functions
f_n(x) = 1 for x ≤ 1 − 1/n;  f_n(x) = n − nx for 1 − 1/n < x < 1 + 1/n;  f_n(x) = −1 for x ≥ 1 + 1/n.
Show that these are continuous functions and that this sequence is a Cauchy sequence in the metric
derived from ‖·‖_1, but that it does not converge in C[0, 2] with this metric.
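Before proving the Cauchy estimate in 3(b), it can help to see it numerically: f_n and f_m differ only on an interval of width 2/min(n, m) around x = 1, so ‖f_n − f_m‖_1 shrinks like 1/min(n, m). The sketch below (a Riemann-sum approximation in Python, not part of the assignment; the grid size is an arbitrary choice) confirms the shrinking distances.

```python
def f(n, x):
    # The tent-joined step functions from Problem 3(b).
    if x <= 1 - 1/n:
        return 1.0
    if x >= 1 + 1/n:
        return -1.0
    return n - n*x

def dist(n, m, pts=200_000):
    # Left Riemann sum approximating the integral norm ||f_n - f_m||_1 on [0, 2].
    dx = 2.0 / pts
    return sum(abs(f(n, i*dx) - f(m, i*dx)) for i in range(pts + 1)) * dx

# Distances between successive pairs shrink toward 0 (exactly 1/(2n) for the pair (n, 2n)).
assert dist(10, 20) > dist(100, 200) > dist(1000, 2000)
assert dist(1000, 2000) < 1e-2
```

The limit in ‖·‖_1 would have to be the discontinuous sign function 1 − 2·[x ≥ 1], which is not in C[0, 2]; that is the heart of the incompleteness proof.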
4. Let V be a vector space over R or C. A subset A ⊆ V is convex if for any v, w ∈ A and any
λ ∈ [0, 1] we have λv + (1 − λ)w ∈ A, i.e. the segment connecting v and w is also in A.
(a) Let W be a vector subspace of V . Show that W is convex.
(b) Let X be a normed linear space. Show that the unit ball B1(0) is convex.
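The argument for 4(b) is one line: ‖λv + (1 − λ)w‖ ≤ λ‖v‖ + (1 − λ)‖w‖ ≤ λ + (1 − λ) = 1, by the triangle inequality and homogeneity. The randomized Python check below (illustrative only; the choice of (R^3, ‖·‖_1) is an arbitrary concrete example) exercises exactly this inequality.

```python
import random

random.seed(1)
for _ in range(1000):
    # Draw two points and project each into the closed l^1 unit ball of R^3.
    v = [random.uniform(-1, 1) for _ in range(3)]
    s = sum(abs(c) for c in v)
    if s > 1:
        v = [c / s for c in v]
    w = [random.uniform(-1, 1) for _ in range(3)]
    s = sum(abs(c) for c in w)
    if s > 1:
        w = [c / s for c in w]
    lam = random.uniform(0, 1)
    # The convex combination stays in the ball, as the triangle inequality predicts.
    comb_norm = sum(abs(lam*a + (1 - lam)*b) for a, b in zip(v, w))
    assert comb_norm <= 1.0 + 1e-12
```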
5. Show that c ⊆ l^∞ is a vector subspace of l^∞ (see 1.5-3 for the definition of c), and so is c_0, the
set of all sequences (x_n) such that lim_{n→∞} x_n = 0.
6. Let 1 ≤ p < ∞ and let e_n ∈ l^p be the sequence with 1 in the nth place and 0 in all other coordinates.
Show that {e_n : n ∈ N} is a Schauder basis for l^p.
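The point of Problem 6 is that the distance from x to its N-th partial expansion Σ_{k≤N} x_k e_k is the l^p norm of the tail, (Σ_{k>N} |x_k|^p)^{1/p}, which must tend to 0. The Python sketch below (illustrative only; x_k = 1/k and p = 2 are arbitrary choices, with a long finite truncation standing in for the infinite sequence) watches those tail norms decay like 1/√N.

```python
import math

M = 10**6  # truncation length standing in for the infinite sequence x_k = 1/k

def tail_norm(N):
    # l^2 norm of the tail (x_{N+1}, x_{N+2}, ...) = distance from x
    # to its N-th partial sum against the basis (e_n).
    return math.sqrt(sum(1.0 / k**2 for k in range(N + 1, M + 1)))

tails = [tail_norm(N) for N in (10, 100, 1000, 10000)]
assert all(a > b for a, b in zip(tails, tails[1:]))  # strictly decreasing
assert tails[-1] < 0.02                              # tail ~ 1/sqrt(N) -> 0
```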
7. Now suppose X is a Banach space and (y_n) is a sequence in X. Prove that ∑_{n=1}^∞ ‖y_n‖ < ∞ implies
the convergence of ∑_{n=1}^∞ y_n. Thus in Banach spaces, absolute convergence of a series implies
convergence of the series.
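The engine of the proof of Problem 7 is the Cauchy estimate ‖s_N − s_M‖ ≤ ∑_{n=M+1}^{N} ‖y_n‖ for the partial sums s_N, so summable norms make (s_N) Cauchy, and completeness finishes the job. The Python sketch below (illustrative only; X = R^2 with the Euclidean norm and a particular summable sequence are arbitrary choices) checks that estimate numerically.

```python
import math

N = 10_000
# A sequence in R^2 with summable norms: ||y_n|| <= sqrt(2)/n^2.
y = [((-1)**n / n**2, math.sin(n) / n**2) for n in range(1, N + 1)]
norms = [math.hypot(a, b) for a, b in y]

# Partial sums s_1, ..., s_N.
sx = sy = 0.0
partial = []
for a, b in y:
    sx += a
    sy += b
    partial.append((sx, sy))

M, K = 99, 999  # compare s_100 and s_1000 (0-based indices)
gap = math.hypot(partial[K][0] - partial[M][0], partial[K][1] - partial[M][1])

# Triangle inequality: the gap between partial sums is at most the tail of norms.
assert gap <= sum(norms[M + 1 : K + 1]) + 1e-12
assert sum(norms) < 2.4   # absolutely summable (bounded by sqrt(2) * pi^2 / 6)
```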
The following questions are for you to think about and not to be turned in.
1001. What is the completion of (0, 1) as a metric subspace of R with the Euclidean metric?
Explain.
1002. Show that the discrete metric on a nontrivial vector space cannot be obtained from a norm.
1003. Show that if a normed vector space has a Schauder basis, then the space is separable. (You
can use an argument similar to your proof that l^p is separable for 1 ≤ p < ∞.)
1004. Prove the general Hölder inequality: Suppose 1 ≤ r < p < ∞, and assume that
1/p + 1/q = 1/r.
Show that if x = (x_1, x_2, . . .) and y = (y_1, y_2, . . .), and we define the componentwise product
xy = (x_1y_1, x_2y_2, . . .), then
‖xy‖_r ≤ ‖x‖_p ‖y‖_q.
You may assume that x ∈ l^p and y ∈ l^q, although this is not necessary. (Hint: 1 = 1/(p/r) + 1/(q/r), and
use the regular Hölder inequality on particular sequences.)
(Note: We can extend this to allow p = r, in which case q = ∞. The result will still hold.)
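A randomized spot check of the inequality in 1004 can catch errors before you commit to a proof. The Python sketch below (illustrative only; the exponents r = 2, p = q = 4, which satisfy 1/4 + 1/4 = 1/2, and the finite random vectors are arbitrary choices) tests ‖xy‖_2 ≤ ‖x‖_4 ‖y‖_4, the special case that reduces to Cauchy-Schwarz applied to x^2 and y^2.

```python
import random

def lp_norm(v, p):
    # Finite-dimensional l^p norm.
    return sum(abs(c)**p for c in v)**(1.0 / p)

random.seed(0)
for _ in range(100):
    x = [random.gauss(0, 1) for _ in range(50)]
    y = [random.gauss(0, 1) for _ in range(50)]
    xy = [a * b for a, b in zip(x, y)]
    # Generalized Hölder with r = 2, p = q = 4 (so 1/p + 1/q = 1/r).
    assert lp_norm(xy, 2) <= lp_norm(x, 4) * lp_norm(y, 4) + 1e-9
```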
1005. Give an example of a subspace of l^∞ which is not closed. Repeat for l^2. (Hint: Look at
problem 3, p. 70.)
1006. Let X be a normed vector space. Show that the convergenc ...
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...ssuser7dcef0
Power plants release a large amount of water vapor into the
atmosphere through the stack. The flue gas can be a potential
source for obtaining much needed cooling water for a power
plant. If a power plant could recover and reuse a portion of this
moisture, it could reduce its total cooling water intake
requirement. One of the most practical way to recover water
from flue gas is to use a condensing heat exchanger. The power
plant could also recover latent heat due to condensation as well
as sensible heat due to lowering the flue gas exit temperature.
Additionally, harmful acids released from the stack can be
reduced in a condensing heat exchanger by acid condensation. reduced in a condensing heat exchanger by acid condensation.
Condensation of vapors in flue gas is a complicated
phenomenon since heat and mass transfer of water vapor and
various acids simultaneously occur in the presence of noncondensable
gases such as nitrogen and oxygen. Design of a
condenser depends on the knowledge and understanding of the
heat and mass transfer processes. A computer program for
numerical simulations of water (H2O) and sulfuric acid (H2SO4)
condensation in a flue gas condensing heat exchanger was
developed using MATLAB. Governing equations based on
mass and energy balances for the system were derived to
predict variables such as flue gas exit temperature, cooling
water outlet temperature, mole fraction and condensation rates
of water and sulfuric acid vapors. The equations were solved
using an iterative solution technique with calculations of heat
and mass transfer coefficients and physical properties.
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERSveerababupersonal22
It consists of cw radar and fmcw radar ,range measurement,if amplifier and fmcw altimeterThe CW radar operates using continuous wave transmission, while the FMCW radar employs frequency-modulated continuous wave technology. Range measurement is a crucial aspect of radar systems, providing information about the distance to a target. The IF amplifier plays a key role in signal processing, amplifying intermediate frequency signals for further analysis. The FMCW altimeter utilizes frequency-modulated continuous wave technology to accurately measure altitude above a reference point.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Forklift Classes Overview by Intella PartsIntella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
HEAP SORT ILLUSTRATED WITH HEAPIFY, BUILD HEAP FOR DYNAMIC ARRAYS.
Heap sort is a comparison-based sorting technique based on Binary Heap data structure. It is similar to the selection sort where we first find the minimum element and place the minimum element at the beginning. Repeat the same process for the remaining elements.
Planning Of Procurement o different goods and services
Optimization Methods for Machine Learning and Engineering: Optimization in Vector Spaces
1. Optimization Methods for Machine Learning and Engineering
Lecture 7 – Optimization in Vector Spaces
Julius Pfrommer
Updated February 12, 2021
CC BY-SA 4.0
2. Agenda
1. Vector Spaces
2. Norms and Banach Spaces
3. Inner Products, Hilbert Spaces and the Projection Theorem
4. Applications
4. Vector Spaces
A set of elements X with the operations
Addition: ∀x, y ∈ X, x + y ∈ X
Scalar Multiplication: ∀x ∈ X, α ∈ R, αx ∈ X
is called a vector space if in addition the following axioms are fulfilled
for any elements x, y, z ∈ X and scalars α, β ∈ R:
1. x + y = y + x (Commutative Law)
2. (x + y) + z = x + (y + z) (Associative Law)
3. (αβ)x = α(βx) (Associative Law)
4. α(x + y) = αx + αy (Distributive Law)
5. (α + β)x = αx + βx (Distributive Law)
6. ∃0 ∈ X such that x + 0 = x, ∀ x ∈ X (Null Vector)
7. 0x = 0, 1x = x
[Figure: the operations on vectors from Rn: addition x + y and scalar multiplication 2x]
The elements of X could be from Rn, keeping with our previous notion of a “vector”. But many other types
of mathematical objects also form vector spaces. And not all X ⊂ Rn obey the axioms.
5. Quiz: Is X a Vector Space?
X = Rn with n ∈ N
Yes
6. Quiz: Is X a Vector Space?
[Figure: the plane M through the origin and its translate X = M + p]
M = {m ∈ R3 | m3 = 0}, p ∈ R3, X = M + p = {m + p | m ∈ M}
No (for p ∉ M, the translate X does not contain the null vector)
7. Quiz: Is X a Vector Space?
[Figure: the set X of non-negative multiples of elements of M]
M ⊆ Rn with n ∈ N
X = {x | ∃m ∈ M, α ≥ 0 : x = αm}
No (α ≥ 0 only, so negative scalar multiples can leave X)
8. Quiz: Is X a Vector Space?
[Figure: the functions sin(t) and cos(t)]
X = {f | ∃α, β ∈ R, f = t ↦ α sin(t) + β cos(t)}
Yes
For the addition of functions use (f + g)(t) = f(t) + g(t).
The null vector 0 of function-space is f0(t) = 0.
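To see the closure concretely, here is a small Julia sketch (our own, with made-up helper names): elements of X are represented by their coefficient pairs (α, β), and pointwise addition of two elements agrees with the element given by the summed coefficients.

```julia
# An element of X = {t ↦ α·sin(t) + β·cos(t)} from its coefficients
mkfun(α, β) = t -> α*sin(t) + β*cos(t)

f = mkfun(1.0, 2.0)
g = mkfun(3.0, -1.0)
h = mkfun(1.0 + 3.0, 2.0 + (-1.0)) # coefficient-wise sum

# (f + g)(t) = f(t) + g(t) coincides with h, so f + g ∈ X
maximum(abs(f(t) + g(t) - h(t)) for t in 0:0.1:6) # ≈ 0.0
```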
9. Linear Combinations, Linear Dependence, Basis and Dimensions
A vector x from a vector space X is a linear combination
of the vectors {y1, y2, . . . } ⊆ X if there exist scalars
{α1, α2, . . . } so that x = ∑i αi yi. Note that we could
have infinitely many yi for the linear combination.
The vectors {x1, . . . , xn} from a vector space X are linearly
independent if ∑i αi xi = 0 implies αi = 0 for all i.
A set of linearly independent vectors {x1, x2, . . . } is called
a basis of X if its linear combinations span X.
The dimension of a vector space X is defined by the number
of elements in its basis.
We first encountered these concepts in the context of Linear
Algebra in Rn. But they are more general and can be applied
to any vector space. This is a common theme for this lecture.
[Figure: a two-dimensional subspace X with basis vectors x1, x2 and a vector y outside of X]
• Vectors x1 and x2 are a basis for the vector space X
• y is linearly independent from x1 and x2 and is therefore not an element of X
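For vectors in Rn these notions can be checked numerically; a quick sketch using the rank function from Julia's standard LinearAlgebra library:

```julia
using LinearAlgebra

# Columns are candidate vectors in R^3
A = [1.0 0.0 0.0;
     0.0 1.0 0.0;
     0.0 0.0 1.0]
rank(A) # 3: the columns are linearly independent and form a basis of R^3

# Replace the third column by the sum of the first two
B = [1.0 0.0 1.0;
     0.0 1.0 1.0;
     0.0 0.0 0.0]
rank(B) # 2: the columns are linearly dependent
```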
11. Normed Vector Spaces
A normed vector space additionally has a real-valued function that maps
each element x ∈ X to a real number ‖x‖ called the norm of x, where
the following axioms hold:
1. ‖x‖ ≥ 0, and ‖x‖ = 0 iff x = 0
2. ‖x + y‖ ≤ ‖x‖ + ‖y‖ ∀x, y ∈ X (Triangle Inequality)
3. ‖αx‖ = |α| · ‖x‖ ∀α ∈ R
Every norm implies a metric, i.e. a distance function d between vectors
x, y ∈ X:
d(x, y) := ‖x − y‖
Then, from the norm axioms, we have
1. d(x, y) = 0 ⇔ x = y (Identity of Indiscernibles)
2. d(x, z) ≤ d(x, y) + d(y, z) (Triangle Inequality)
3. d(x, y) = d(y, x) (Symmetry)
[Figure: the norm ‖x‖ of a vector as its length; the distance d(x, y) between vectors]
12. The p-Norms
For elements x ∈ Rn, the previously encountered Euclidean
Norm ‖·‖2 is only a special case from the family of p-Norms
‖x‖p = (∑i |xi|^p)^(1/p)
for p ≥ 1. Other common values for p are:
p = 1 The Manhattan Norm is simply the sum of the absolute
values.
p = ∞ The Maximum Norm arises in the limit when p is
increased. It can be defined alternatively as ‖x‖∞ = maxi |xi|.
In the example on the right-hand side, there is a unique shortest
path in the Euclidean distance (implied by the Euclidean Norm)
across the grid (red). In Manhattan distance there are several
paths with the same length.
[Figure: distances in the Manhattan Norm (p = 1); unit circles for p = ∞, p = 2, p = 1, p = 1/2]
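The family of p-norms fits in one line of Julia (a sketch of ours; the standard LinearAlgebra library also provides norm(x, p) directly):

```julia
# p-norm for p ≥ 1; p = Inf gives the Maximum Norm
pnorm(x, p) = p == Inf ? maximum(abs.(x)) : sum(abs.(x).^p)^(1/p)

x = [3.0, -4.0]
pnorm(x, 1)   # 7.0 (Manhattan Norm)
pnorm(x, 2)   # 5.0 (Euclidean Norm)
pnorm(x, Inf) # 4.0 (Maximum Norm)
```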
13. Convergence and Banach Spaces
In the context of open/closed sets, we previously saw a convergent sequence.
Now we can make this notion of convergence more precise.
Let {xi} ⊆ X be an infinite sequence from the normed vector space X. The
sequence converges if there exists some element y ∈ X for which ‖y − xi‖
converges to zero. More precisely, for every ε > 0 there exists an index m
such that ‖y − xi‖ < ε for all i ≥ m. We then write xi → y.
A sequence {xi} is said to be a Cauchy sequence if ‖xi − xj‖ → 0 as
i, j → ∞; i.e., given ε > 0, there is an index m such that ‖xi − xj‖ < ε
for all i, j ≥ m.
In a normed space every convergent sequence is a Cauchy sequence.
A normed vector space X is complete if every Cauchy sequence
from X has a limit in X. A complete normed vector space is called
a Banach space.
Stefan Banach (1892 – 1945)
[Figure: a non-Cauchy sequence]
14. Completeness and the existence of Fixed Points
In a normed vector space, any finite-dimensional
subspace is complete. So all normed vector spaces
embedded in Rn are Banach spaces.
Completeness is a prerequisite for many of the
optimization algorithms we saw prior. For example, to
show convergence of Gradient Descent and the
Newton Method in general normed vector spaces.
Let S be a subset of a normed vector space X and let
f be a transformation f : S → S. Then f is a
contraction if there exists an α with 0 ≤ α < 1 such
that ‖f(x) − f(y)‖ ≤ α‖x − y‖ for all x, y ∈ S.
Banach Fixed Point Theorem
If f is a contraction on a closed subset S of a Banach
space, there is a fixed point x∗ ∈ S satisfying
x∗ = f(x∗). Furthermore, x∗ can be obtained by
starting with an arbitrary x0 ∈ S and following the
sequence xi+1 = f(xi).
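The construction in the theorem is directly executable; a minimal sketch (our own helper, not from the slides) iterating the contraction f(x) = cos(x) on [0, 1], where |f′(x)| = |sin(x)| ≤ sin(1) < 1:

```julia
# Iterate x_{i+1} = f(x_i) from an arbitrary starting point
function fixpoint(f, x0; iters = 100)
    x = x0
    for _ in 1:iters
        x = f(x)
    end
    return x
end

fixpoint(cos, 0.5) # ≈ 0.739085, the unique solution of x = cos(x)
```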
A Non-Complete Normed Space [Luenberger69]
Consider the normed vector space of continuous functions
L2[0, 1]. Let a sequence of functions fi from this space:
fi(t) = 0 for 0 ≤ t ≤ 1/2 − 1/i,
fi(t) = it − i/2 + 1 for 1/2 − 1/i ≤ t ≤ 1/2,
fi(t) = 1 for t ≥ 1/2.
Each function fi is continuous for finite i. However the
sequence converges in the limit to the step function, which
is not continuous and hence not in L2[0, 1].
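Numerically (a sketch with our own helper names), the L2-distance between the fi and the discontinuous step function shrinks like 1/√(3i), so the continuous fi approach a limit outside the space:

```julia
# The functions f_i from the example and the step-function limit
fi(i, t) = t <= 0.5 - 1/i ? 0.0 : (t <= 0.5 ? i*t - i/2 + 1 : 1.0)
stepf(t) = t < 0.5 ? 0.0 : 1.0

# Approximate the L2[0,1] distance with a Riemann sum
l2dist(f, g; dt = 1e-4) = sqrt(sum((f(t) - g(t))^2 * dt for t in 0:dt:1))

l2dist(t -> fi(10, t),  stepf) # ≈ 1/sqrt(30), about 0.18
l2dist(t -> fi(100, t), stepf) # ≈ 1/sqrt(300), about 0.06 and shrinking
```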
16. The Inner Product
Let X be a vector space. The inner product ⟨x | y⟩ is a function defined on X × X that maps each pair of
vectors x, y ∈ X to a scalar while satisfying the following axioms:
1. ⟨x + y | z⟩ = ⟨x | z⟩ + ⟨y | z⟩ (Linearity in the first argument)
2. ⟨λx | y⟩ = λ⟨x | y⟩ (Linearity in the first argument)
3. ⟨x | y⟩ = conj(⟨y | x⟩) (Conjugate Symmetry)
4. ⟨x | x⟩ ≥ 0 and ⟨x | x⟩ = 0 iff x = 0 (Positive Definiteness)
Here conj denotes complex conjugation (complex-valued vector spaces are not considered in the course).
A vector space with an inner product defined is a pre-Hilbert space.
Every inner product implies a norm ‖x‖ = √⟨x | x⟩.
Euclidean Inner Product
A vector space X ⊆ Rn with elements x, y and
the inner product
⟨x | y⟩ = ∑_{i=1}^{n} xi yi.
Function Spaces
The vector space L2[a, b] of continuous functions
f, g with ∫_a^b f(t)² dt < ∞ and the inner product
⟨f | g⟩ = ∫_a^b f(t) g(t) dt.
17. Orthogonality and the Projection Theorem
Two elements x, y from a pre-Hilbert space are said to be orthogonal if ⟨x | y⟩ = 0, denoted as x ⊥ y.
If x, y are orthogonal, x ⊥ y, then ‖x + y‖² = ‖x‖² + ‖y‖².
Proof: ‖x + y‖² = ⟨x + y | x + y⟩ = ⟨x | x + y⟩ + ⟨y | x + y⟩ = ⟨x + y | x⟩ + ⟨x + y | y⟩ =
⟨x | x⟩ + ⟨y | x⟩ + ⟨x | y⟩ + ⟨y | y⟩ = ‖x‖² + ‖y‖²
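The identity is easy to check numerically for the Euclidean inner product (a sketch using Julia's standard LinearAlgebra library):

```julia
using LinearAlgebra

x = [1.0, 2.0, 0.0]
y = [-2.0, 1.0, 3.0]
dot(x, y) # 0.0, so x ⊥ y

# Pythagoras: ‖x + y‖² = ‖x‖² + ‖y‖²
norm(x + y)^2 ≈ norm(x)^2 + norm(y)^2 # true
```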
Consider the following optimization problem: Let a pre-Hilbert space
X and a subspace M ⊂ X be given. For an element x ∈ X, what is the
element m ∈ M that minimizes ‖x − m‖?
Projection Theorem for pre-Hilbert Spaces, see [Luenberger69]
If there is an element m∗ ∈ M such that ‖x − m∗‖ ≤ ‖x − m‖
for all m ∈ M, then m∗ is unique. The element m∗ is the unique
minimizer in M iff the residual x − m∗ is orthogonal to M.
[Figure: projection m∗ of x onto the subspace M with residual x − m∗]
18. Hilbert Spaces
A complete pre-Hilbert space is called a Hilbert space.
Concerning the Projection Theorem, we know that a unique minimizer must exist
for Hilbert spaces.
Results from Linear Algebra are generalized to infinite-dimensional Vector Spaces.
Linear Operators translate between different Hilbert Spaces. Matrix multiplication
is a special case for linear operators in the finite-dimensional case.
Hilbert Spaces are used in many different fields:
John Von Neumann. Mathematische Grundlagen der Quantenmechanik. Springer, 1932
Bernhard Schölkopf and Alexander J Smola. Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press, 2002
Kevin W Cassel. Variational methods with applications in science and engineering. Cambridge University Press, 2013
David Hilbert (1862 – 1943)
The last person who knew all of mathematics (Folklore)
19. Gram-Schmidt-Orthogonalization
In an orthogonal set S all elements are mutually orthogonal: ∀x, y ∈ S,
x ≠ y ⇒ x ⊥ y.
If S is orthonormal (in addition to orthogonal), then ∀x ∈ S, ‖x‖ = 1.
Given x, y ∈ X and ‖y‖ = 1, then ⟨x | y⟩y is the projection of x on y.
The residual of the projection r = x − ⟨x | y⟩y is orthogonal to y.
Proof: ⟨x − ⟨x | y⟩y | y⟩ = ⟨x | y⟩ − ⟨x | y⟩⟨y | y⟩ = 0
[Figure: the projection ⟨x | y⟩y of x on y and its residual r]
Let {b1, b2, . . . , bn} be a finite basis for the subspace M of a Hilbert space H ⊇ M. We can construct an
orthonormal basis {e1, e2, . . . , en} for M using Gram-Schmidt-Orthogonalization:
e1 = b1 / ‖b1‖,   en = (bn − ∑_{i=1}^{n−1} ⟨bn | ei⟩ei) / ‖bn − ∑_{i=1}^{n−1} ⟨bn | ei⟩ei‖
By the Projection Theorem we find m∗ ∈ M with minimum distance to some x ∈ H as
m∗ = arg min_{α1,α2,...,αn} ‖x − ∑_{i=1}^{n} αi bi‖ = ∑_{i=1}^{n} ⟨x | ei⟩ei
20. The Normal Equations
Again, we look at the minimum norm projection m∗ = arg min_{α1,α2,...,αn} ‖x − ∑_{i=1}^{n} αi bi‖ where the bi span
a subspace of a Hilbert space H. But instead of just m∗ we are also interested in the αi. Gram-Schmidt-Orthogonalization does not immediately give us those.
From the Projection Theorem we know that the residual x − ∑_{i=1}^{n} αi bi is orthogonal to all bj.
⟨x − ∑_{i=1}^{n} αi bi | bj⟩ = 0, ∀j = 1, . . . , n
⟨∑_{i=1}^{n} αi bi | bj⟩ = ⟨x | bj⟩, ∀j = 1, . . . , n
We can further unpack the left-hand side to get a system Gα = c of n linear equations with n unknowns.
These are known as the Normal Equations. Note that only c depends on the vector x that we want to project.
⟨b1 | b1⟩α1 + ⟨b2 | b1⟩α2 + . . . + ⟨bn | b1⟩αn = ⟨x | b1⟩
⟨b1 | b2⟩α1 + ⟨b2 | b2⟩α2 + . . . + ⟨bn | b2⟩αn = ⟨x | b2⟩
⋮
⟨b1 | bn⟩α1 + ⟨b2 | bn⟩α2 + . . . + ⟨bn | bn⟩αn = ⟨x | bn⟩
21. The Gram Matrix
Let {b1, b2, . . . , bn} be a set of vectors from a Hilbert space. Its Gram matrix can be precomputed as
G(b1, b2, . . . , bn) =
[ ⟨b1 | b1⟩ ⟨b2 | b1⟩ . . . ⟨bn | b1⟩
  ⟨b1 | b2⟩ ⟨b2 | b2⟩ . . . ⟨bn | b2⟩
  ⋮
  ⟨b1 | bn⟩ ⟨b2 | bn⟩ . . . ⟨bn | bn⟩ ]
Theorem: The determinant of the Gram matrix is non-null, |G(b1, b2, . . . , bn)| ≠ 0, iff the bi are
linearly independent.
In that case, the matrix is invertible and we can solve Gα = c for α with standard methods.
Hence, for every finite basis embedded in a Hilbert space, we can compute the minimum distance projection
and express it by coefficients αi for the basis elements.
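With the Euclidean inner product the Gram matrix is G = BᵀB for the matrix B holding the bi as columns, and the theorem can be observed directly (a sketch of ours):

```julia
using LinearAlgebra

# Gram matrix wrt the Euclidean inner product; columns of B are the b_i
gram(B) = B' * B

B_indep = [1.0 1.0;
           0.0 1.0]      # linearly independent columns
B_dep   = [1.0 2.0;
           1.0 2.0]      # second column is twice the first

det(gram(B_indep)) # 1.0, non-null, so G is invertible
det(gram(B_dep))   # 0.0, the b_i are linearly dependent
```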
22. Minimum Distance in Julia
# Norm and distance function
norm(L, x) = sqrt(inner(L, x, x))
dist(L, x, y) = norm(L, x-y)

# Example for the Euclidean p2-Norm
inner(::Val{:P2}, x, y) = x' * y
dist(Val(:P2), [0,0], [1,1]) # 1.4142

function gram_schmidt(L, b)
    e = copy(b)
    for i = 1:length(b)
        for j = 1:i-1
            e[i] -= e[j] * inner(L, b[i], e[j])
        end
        nn = norm(L, e[i])
        if nn > 0.0001 # Normalize if non-zero
            e[i] = e[i] / nn
        end
    end
    return e
end

# Projection on the subspace defined by a (not
# necessarily orthogonal) basis
function proj(L, x, basis)
    ob = gram_schmidt(L, basis) # orthonormal basis
    return sum([ob[i] * inner(L, x, ob[i]) for i = 1:length(ob)])
end

# Returns the projection and its coefficients
# for the basis elements
function proj_normal(L, x, basis)
    nb = length(basis)
    G = zeros(nb, nb) # Gram matrix, always symmetric
    for i = 1:nb, j = 1:i
        G[i,j] = inner(L, basis[i], basis[j])
        G[j,i] = G[i,j]
    end
    c = [inner(L, x, basis[i]) for i = 1:nb]
    alpha = G \ c # Solve G * alpha = c
    return sum(basis .* alpha), alpha
end
24. Catching Bad Guys with Eigenfaces
[Figure: a face image decomposed as the average face plus α1, α2, . . . times the Eigenfaces]
• From a database of face images, compute the “average face” and n Eigenfaces.
• The Eigenfaces are extracted using the Eigen-decomposition technique already encountered
for Fibonacci-in-constant-time (not further discussed here).
• The Eigenfaces are a basis for a (finite) n-dimensional vector space.
• For every face image, we can find a minimum-distance projection on the face-space.
This gives us n-dimensional coefficients α that we can use as features.
• Recognize a person by nearest-neighbor lookup for the Eigenface coefficients of known faces.
Lawrence Sirovich and Michael Kirby. “Low-dimensional procedure for the characterization of human faces”.
In: Journal of the Optical Society of America A 4.3 (1987), pp. 519–524
25. Approximating sin with a Polynomial
The vector space L2[0, 1] contains continuous functions f : [0, 1] → R
• with the inner product ⟨f | g⟩ = ∫_0^1 f(t)g(t)dt and the corresponding
• norm ‖f‖ = √(∫_0^1 f(t)² dt) (restrict to f where ‖f‖ < ∞).
Let Pn ⊂ L2[0, 1] be the vector space of polynomials of nth degree.
• A polynomial of nth degree can be represented by an (n + 1)-vector of
its coefficients (including the intercept).
• Any set of polynomial functions spans a subspace of L2[0, 1]. We can
compute an orthonormal basis for it.
With this, we can perform a minimum-distance projection from the
continuous functions on the polynomials of nth degree.
Which polynomial of nth degree most closely represents g(t) = sin(πt)?
Solve as a minimum norm problem fn = arg min_{h ∈ Pn} ‖h − g‖.
f2(t) ≈ −0.050 + 4.121t − 4.121t²
f4(t) ≈ 0.001 + 3.087t + 0.536t² − 7.247t³ + 3.623t⁴
[Figure: sin(πt) on [0, 1] together with its projections poly-2 and poly-4]
26. Approximating sin with a Polynomial in Julia
import Base: +,-,*,/

# Polynomial representation and evaluation
struct Poly
    c::Vector{Float64} # Coefficients (intercept 1st)
end
(f::Poly)(x) = sum([x^(i-1)*f.c[i] for i=1:length(f.c)])

# Addition and subtraction
+(f::Poly, g::Poly) = Poly(f.c .+ g.c)
-(f::Poly, g::Poly) = Poly(f.c .- g.c)

# Multiplication and division with a real scalar
*(f::Poly, y::T) where T<:Real = Poly(f.c * y)
/(f::Poly, y::T) where T<:Real = Poly(f.c / y)

# Examples
pp = Poly([1,2,0])
pp(2.0) # 1 + 2 * 2.0 + 0 * 2.0^2 = 5.0
pp2 = pp*2 + Poly([1,1,1])
pp2(1.5) # 3 + 5 * 1.5 + 1 * 1.5^2 = 12.75

# Inner product for functions from L2[0,1]
function inner(::Val{:L2}, f, g)
    dt = 0.001 # Approximate the integral
    return sum([f(t)*g(t)*dt for t=0.0:dt:1.0])
end

# Project sin on the second degree polynomials
g(t) = sin(t*pi)
p_basis = [Poly([1,0,0]),
           Poly([0,1,0]),
           Poly([0,0,1])]
g_proj, g_coeff = proj_normal(Val(:L2), g, p_basis)
# g_coeff = [-0.05016328783041,
#             4.12100032032210,
#            -4.12100032032211]

# How is the sine function actually computed
# by the OS / standard math library (libm)?
# - http://www.netlib.org/fdlibm/k_sin.c
# - http://www.netlib.org/fdlibm/s_sin.c
# Or via CORDIC algorithms in hardware
27. Quadratic Optimization with Equality Constraints
x∗ = arg min_{x ∈ Rn} xᵀQx subject to Ax = b
All solutions fulfilling the equality constraint lie in a linear variety
V. Linear varieties contain elements from some vector space
with an additional offset away from the null vector.
Note that ⟨x | y⟩_Q = xᵀQy is a valid inner product (for symmetric
positive definite Q). Which element of V is closest to 0 wrt. the
implied distance metric?
1. Find some v that fulfills the constraint Av = b.
2. Let the Hilbert space H̃ be the nullspace of A with the inner
product ⟨· | ·⟩_Q. H̃ is parallel to V. Project v onto H̃:
h = arg min_{g ∈ H̃} ‖v − g‖_Q
3. The solution is x∗ = v − h.
[Figure: projection with equality constraints: the variety V, the parallel nullspace H̃, the projection h of v, and x∗ = v − h]
Application Example: Sea-of-Gates
VLSI Optimization [Kleinhans1991]
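The three steps can be sketched in a few lines of Julia (our own function names; assumes Q symmetric positive definite, and uses that the Q-projection onto a nullspace basis N solves (NᵀQN)α = NᵀQv):

```julia
using LinearAlgebra

# min xᵀQx subject to Ax = b via projection in the ⟨·|·⟩_Q geometry
function minquad(Q, A, b)
    v = A \ b            # 1. some v fulfilling Av = b
    N = nullspace(A)     # basis of H̃ = null(A) as columns
    α = (N' * Q * N) \ (N' * Q * v)
    h = N * α            # 2. Q-projection of v onto H̃
    return v - h         # 3. x* = v − h
end

Q = [2.0 0.0;
     0.0 1.0]
A = [1.0 1.0]            # constraint x1 + x2 = 1
b = [1.0]
minquad(Q, A, b)         # ≈ [1/3, 2/3]
```

By hand: minimizing 2x1² + x2² on x1 + x2 = 1 gives x = (1/3, 2/3), which matches the projection result.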
28. Conjugate Gradient (CG)
Similar to Gradient Descent, but with an additional processing step for
the gradient [Hestenes1952; Hestenes1980].
First step direction: d(1) = −∇f(x(0))
Later step directions:
1. Start with d̃(k) = −∇f(x(k−1)).
2. Compute d(k) by orthogonalization of d̃(k) wrt. the previous
step directions {d(1), . . . , d(k−1)}.
3. Additional linesearch (specialized linesearch methods for CG exist).
For an unconstrained quadratic optimization problem in n dimensions,
Conjugate Gradient converges within n steps.
The Newton method would solve it in one step. But with the added
cost of computing the Hessian and solving a linear equation for it.
Note that Hestenes et al. developed CG on a Zuse Z4 computer.
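For the quadratic objective f(x) = ½xᵀQx − bᵀx (so −∇f(x) = b − Qx), the classical linear-CG recursion can be sketched as follows (our own implementation, using the standard β-update in place of explicit orthogonalization against all previous directions):

```julia
using LinearAlgebra

# Conjugate Gradient for min ½xᵀQx − bᵀx with Q symmetric positive definite
function cg(Q, b; x = zero(b), iters = length(b))
    r = b - Q * x                     # negative gradient at x
    d = copy(r)                       # first step direction
    for _ in 1:iters
        Qd = Q * d
        α = dot(r, r) / dot(d, Qd)    # exact line search along d
        x = x + α * d
        rnew = r - α * Qd
        β = dot(rnew, rnew) / dot(r, r)
        d = rnew + β * d              # Q-conjugate to previous directions
        r = rnew
    end
    return x
end

Q = [4.0 1.0;
     1.0 3.0]
b = [1.0, 2.0]
cg(Q, b) # converges within n = 2 steps to Q⁻¹b = [1/11, 7/11]
```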
[Figure: Conjugate Gradient for a Quadratic Objective. Image Source: Wikipedia]
29. Summary of what you learned today
• Vector spaces and their axioms
• Banach spaces and norms beyond Euclidean distances
• Hilbert spaces and inner products with a notion of orthogonality
• Computing an orthonormal basis with the Gram-Schmidt Algorithm
• Minimum-Norm Projection on the subspace of a Hilbert space via the Normal
Equations
• Applications for Minimum-Norm Projection
• Catching Bad Guys with Eigenfaces
• Approximating the sine function with a polynomial
• Quadratic Optimization with Equality Constraints
• Conjugate Gradient
30. That’s it for today.
See you next week for Lecture 8 on Duality
31. References
[Cassel2013] Kevin W Cassel. Variational methods with applications in science and engineering. Cambridge
University Press, 2013.
[Hestenes1980] Magnus Rudolph Hestenes. Conjugate direction methods in optimization. Springer, 1980.
[Hestenes1952] Magnus R Hestenes, Eduard Stiefel, et al. “Methods of conjugate gradients for solving linear
systems”. In: Journal of research of the National Bureau of Standards 49.6 (1952), pp. 409–436.
[Kleinhans1991] Jürgen M Kleinhans et al. “GORDIAN: VLSI placement by quadratic programming and slicing
optimization”. In: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
10.3 (1991), pp. 356–365.
[Luenberger69] David G Luenberger. Optimization by Vector Space Methods. John Wiley & Sons, 1969.
[Sirovich1987] Lawrence Sirovich and Michael Kirby. “Low-dimensional procedure for the characterization of
human faces”. In: Journal of the Optical Society of America A 4.3 (1987), pp. 519–524.
[Schölkopf2002] Bernhard Schölkopf and Alexander J Smola. Learning with kernels: support vector machines,
regularization, optimization, and beyond. MIT press, 2002.
[VonNeumann1932] John Von Neumann. Mathematische Grundlagen der Quantenmechanik. Springer, 1932.