Bahirdar University
Bahirdar institute of technology
Faculty of computing
Department of computer science
Complexity Theory Assignment
TESFAHUNEGN MINWUYELET
Date of Submission: 13/05/2016 G.C
Table of Contents
1. What is the characteristic function of a set? Explain it with a short note; elaborate it with not less than 3 examples.
2. What is meant by a complexity class? Define the basic deterministic complexity classes.
   Time and Space Complexity Classes
   The following are the canonical complexity classes:
   L (complexity)
   P (complexity)
3. Define Big-O notation and illustrate it with not less than 3 examples.
   Formal definition
   Theorems you can use without proof
4. Write a short note about the similarity and difference between a Turing machine and a random access machine.
5. Prove by mathematical induction that n! > 2^n, for n ≥ 4.
1. What is the characteristic function of a set? Explain it with a short note; elaborate it with not less than 3 examples.
The characteristic function (also called the indicator function) of a set A, taken as a subset of a universe U, is the function χ_A : U → {0, 1} defined by χ_A(x) = 1 if x ∈ A and χ_A(x) = 0 if x ∉ A. It encodes membership in A as a 0/1-valued (yes/no) function, so deciding membership in A is exactly the problem of computing χ_A. This is why, in complexity theory, a set (language) A is placed in a class such as L, P or PSPACE according to the resources a Turing machine needs to compute its characteristic function.
Example 1
Let E be the set of even natural numbers. Then χ_E(n) = 1 if n is even and χ_E(n) = 0 otherwise; for instance χ_E(4) = 1 and χ_E(7) = 0.
Example 2
Let A = {a, b, c} be a subset of the universe {a, b, c, d, e}. Then χ_A(a) = χ_A(b) = χ_A(c) = 1, while χ_A(d) = χ_A(e) = 0.
Example 3
Let I = [0, 1] be an interval of real numbers. Then χ_I(x) = 1 for 0 ≤ x ≤ 1 and χ_I(x) = 0 otherwise; for instance χ_I(0.5) = 1 and χ_I(2) = 0.
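As a small illustration, the three characteristic functions above can be written directly as 0/1-valued functions in Python (a minimal sketch; the helper name char_fn is just an illustrative choice):

def char_fn(A):
    # Return the characteristic function of the set A: 1 on members, 0 otherwise.
    return lambda x: 1 if x in A else 0

chi_even = lambda n: 1 if n % 2 == 0 else 0        # Example 1: even numbers
chi_A = char_fn({"a", "b", "c"})                   # Example 2: finite subset of {a, b, c, d, e}
chi_I = lambda x: 1 if 0 <= x <= 1 else 0          # Example 3: the interval [0, 1]

print(chi_even(4), chi_even(7))   # 1 0
print(chi_A("b"), chi_A("d"))     # 1 0
print(chi_I(0.5), chi_I(2.0))     # 1 0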
2. What is meant by a complexity class? Define the basic deterministic complexity classes.
Typically, a complexity class is defined by (1) a model of computation, (2) a resource (or collection of resources), and (3) a function known as the complexity bound for each resource. The models used to define complexity classes fall into two main categories: (a) machine-based models, and (b) circuit-based models. Turing machines (TMs) and random-access machines (RAMs) are the two principal families of machine models. When we wish to model real computations, deterministic machines and circuits are our closest links to reality. Then why consider the other kinds of machines? There are two main reasons.

The most potent reason comes from the computational problems whose complexity we are trying to understand. The most notorious examples are the hundreds of natural NP-complete problems. To the extent that we understand anything about the complexity of these problems, it is because of the model of nondeterministic Turing machines. Nondeterministic machines do not model physical computation devices, but they do model real computational problems. There are many other examples where a particular model of computation has been introduced in order to capture some well-known computational problem in a complexity class.

The second reason is related to the first. Our desire to understand real computational problems has forced upon us a repertoire of models of computation and resource bounds. In order to understand the relationships between these models and bounds, we combine and mix them and attempt to discover their relative power. Consider, for example, nondeterminism. By considering the complements of languages accepted by nondeterministic machines, researchers were naturally led to the notion of alternating machines. When alternating machines and deterministic machines were compared, a surprising virtual identity of deterministic space and alternating time emerged. Subsequently, alternation was found to be a useful way to model efficient parallel computation. This phenomenon, whereby models of computation are generalized and modified in order to clarify their relative complexity, has occurred often through the brief history of complexity theory, and has generated some of the most important new insights.

Other underlying principles in complexity theory emerge from the major theorems showing relationships between complexity classes. These theorems fall into two broad categories. Simulation theorems show that computations in one class can be simulated by computations that meet the defining resource bounds of another class. The containment of nondeterministic logarithmic space (NL) in polynomial time (P), and the equality of the class P with alternating logarithmic space, are simulation theorems. Separation theorems show that certain complexity classes are distinct. Complexity theory currently has precious few of these. The main tool used in those separation theorems we have is called diagonalization.
Time and Space Complexity Classes
DTIME[t(n)] is the class of languages decided by deterministic Turing machines of time complexity t(n).
DSPACE[s(n)] is the class of languages decided by deterministic Turing machines of space complexity s(n).
The following are the canonical complexity classes:
L (complexity)
In computational complexity theory, L (also known as LSPACE) is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of memory space. Logarithmic space is sufficient to hold a constant number of pointers into the input and a logarithmic number of Boolean flags, and many basic log-space algorithms use the memory in this way.

L is a subclass of NL, which is the class of languages decidable in logarithmic space on a nondeterministic Turing machine. A problem in NL may be transformed into a reachability problem in a directed graph representing the states and state transitions of the nondeterministic machine, and the logarithmic space bound implies that this graph has a polynomial number of vertices and edges, from which it follows that NL is contained in the complexity class P of problems solvable in deterministic polynomial time. Thus L ⊆ NL ⊆ P. The inclusion of L in P can also be proved more directly: a decider using O(log n) space cannot use more than 2^O(log n) = n^O(1) time, because this is the total number of possible configurations.

L further relates to the class NC in the following way: NC¹ ⊆ L ⊆ NL ⊆ NC². In words, given a parallel computer C with a polynomial number O(n^k) of processors for some constant k, any problem that can be solved on C in O(log n) time is in L, and any problem in L can be solved in O(log² n) time on C.
Important open problems include whether L = P, and whether L = NL.
L is low for itself, because it can simulate log-space oracle queries (roughly speaking, "function
calls which use log space") in log space, reusing the same space for each query.
The related class of function problems is FL. FL is often used to define log-space reductions.
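To give the flavor of a logarithmic-space computation, here is a minimal Python sketch (an illustrative example, not from the original notes) that decides whether a binary string contains equally many 0s and 1s; its only work space is two counters, which take O(log n) bits for an input of length n:

def equal_zeros_ones(w):
    # Decide { w in {0,1}* : #0(w) = #1(w) } using two O(log n)-bit counters.
    zeros = ones = 0                  # the entire work space
    for ch in w:                      # one read-only, left-to-right scan of the input
        if ch == "0":
            zeros += 1
        else:
            ones += 1
    return 1 if zeros == ones else 0

print(equal_zeros_ones("0101"))       # 1
print(equal_zeros_ones("0111"))       # 0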
P (complexity)
In computational complexity theory, P, also known as PTIME or DTIME(n^O(1)), is one of the most fundamental complexity classes. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.
A language L is in P if and only if there exists a deterministic Turing machine M, such that
 M runs for polynomial time on all inputs
 For all x in L, M outputs 1
 For all x not in L, M outputs 0
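A minimal sketch of a decider in this style, using the palindrome language as an assumed example; it runs in polynomial (in fact linear) time on every input and outputs 1 or 0:

def palindrome_decider(x):
    # Polynomial-time decider: output 1 if x is a palindrome, 0 otherwise.
    n = len(x)
    for i in range(n // 2):           # O(n) comparisons on every input
        if x[i] != x[n - 1 - i]:
            return 0
    return 1

print(palindrome_decider("abba"))     # 1
print(palindrome_decider("abc"))      # 0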
P can also be viewed as a uniform family of Boolean circuits. A language L is in P if and only if there exists a polynomial-time uniform family of Boolean circuits {C_n : n ∈ ℕ} such that for every n, C_n takes n bits as input and outputs one bit, C_|x|(x) = 1 for all x in L, and C_|x|(x) = 0 for all x not in L. The circuit definition can be weakened to use only a logspace-uniform family without changing the complexity class.
A deterministic Turing machine T can be represented as a tuple ⟨Q, Σ, δ, s⟩ where Q is a finite set of internal states, Σ is a finite tape alphabet, s ∈ Q is T's start state, and δ is a transition function mapping state-symbol pairs ⟨q, σ⟩ into state-action pairs ⟨q, a⟩. Here a is chosen from the set of actions {σ, ⇐, ⇒}, i.e. write the symbol σ ∈ Σ on the current square, move the head left, or move the head right. Such a function is hence of type δ : Q × Σ → Q × A, where A denotes this set of actions. On the other hand, a non-deterministic Turing machine N is of the form ⟨Q, Σ, Δ, s⟩ where Q, Σ, and s are as before but Δ is now only required to be a relation, i.e. Δ ⊆ (Q × Σ) × (Q × A). As a consequence, a machine configuration in which N is in state q and reading symbol σ can lead to finitely many distinct successor configurations; e.g. it is possible that Δ relates ⟨q, σ⟩ to both ⟨q′, a′⟩ and ⟨q″, a″⟩ for distinct states q′ and q″ and actions a′ and a″.
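To make the tuple definition concrete, the following Python sketch (a toy machine, assumed for illustration) stores δ as a dictionary from (state, symbol) pairs to (state, action) pairs and applies a single step; a nondeterministic Δ would instead map each pair to a set of possible successors:

# delta: (state, symbol) -> (state, action); an action is a symbol to write, or "L"/"R" to move.
delta = {
    ("s", "0"): ("s", "1"),           # toy rule: rewrite a 0 as 1 and stay in state s
    ("s", "1"): ("s", "R"),           # toy rule: skip over a 1
}

def step(state, tape, head):
    # Apply one transition of the deterministic machine.
    state, action = delta[(state, tape[head])]
    if action == "L":
        head -= 1
    elif action == "R":
        head += 1
    else:
        tape[head] = action           # write the symbol on the current square
    return state, tape, head

print(step("s", list("0110"), 0))     # ('s', ['1', '1', '1', '0'], 0)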
3. Define Big-O notation and illustrate it with not less than 3 examples.
Big O notation (with a capital letter O, not a zero), also called Landau’s symbol, is a symbolism
used in complexity theory, computer science, and mathematics to describe the asymptotic
behavior of functions. Basically, it tells you how fast a function grows or declines. Landau’s
symbol comes from the name of the German number theoretician Edmund Landau who invented
the notation. The letter O is used because the rate of growth of a function is also called its order.
For example, when analyzing some algorithm, one might find that the time (or the number of steps) it takes to complete a problem of size n is given by T(n) = 4n² − 2n + 2. If we ignore constants (which makes sense because those depend on the particular hardware the program is run on) and slower-growing terms, we could say "T(n) grows at the order of n²" and write T(n) = O(n²). In mathematics, it is often important to get a handle on the error term of an approximation. For instance, people will write e^x = 1 + x + x²/2 + O(x³) for x → 0 to express the fact that the error is smaller in absolute value than some constant times x³ if x is close enough to 0. For the formal definition, suppose f(x) and g(x) are two functions defined on some subset of the real numbers. We write f(x) = O(g(x)) (or f(x) = O(g(x)) for x → ∞ to be more precise) if and only if there exist constants N and C such that |f(x)| ≤ C|g(x)| for all x > N. Intuitively, this means that f does not grow faster than g. If a is some real number, we write f(x) = O(g(x)) for x → a if and only if there exist constants d > 0 and C such that |f(x)| ≤ C|g(x)| for all x with |x − a| < d. The first definition is the only one used in computer science (where typically only positive functions with a natural number n as argument are considered; the absolute values can then be ignored), while both usages appear in mathematics. Here is a list of classes of functions that are commonly encountered when analyzing algorithms. The slower-growing functions are listed first; c is some arbitrary constant.
Notation        Name
O(1)            constant
O(log n)        logarithmic
O((log n)^c)    polylogarithmic
O(n)            linear
O(n²)           quadratic
O(n^c)          polynomial
O(c^n)          exponential
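A short numerical comparison (illustrative values only) makes the ordering of these classes visible:

import math

print(f"{'n':>6} {'log n':>8} {'n^2':>12} {'n^3':>16}")
for n in (10, 100, 1000, 10000):
    print(f"{n:>6} {math.log2(n):>8.1f} {n**2:>12} {n**3:>16}")
# 2^n is left out of the table: already for n = 100 it has 31 digits,
# which is exactly why O(c^n) eventually dominates every O(n^c).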
Note that O(n^c) and O(c^n) are very different. The latter grows much, much faster, no matter how big the constant c is. A function that grows faster than any power of n is called superpolynomial. One that grows slower than an exponential function of the form c^n is called subexponential. An algorithm can require time that is both superpolynomial and subexponential; examples of this include the fastest algorithms known for integer factorization. Note, too, that O(log n) is exactly the same as O(log(n^c)). The logarithms differ only by a constant factor, and big O notation ignores that. Similarly, logs with different constant bases are equivalent. The above list is useful because of the following fact: if a function f(n) is a sum of functions, one of which grows faster than the others, then the faster-growing one determines the order of f(n).

Example: If f(n) = 10 log(n) + 5(log(n))³ + 7n + 3n² + 6n³, then f(n) = O(n³).

One caveat here: the number of summands has to be constant and may not depend on n. This notation can also be used with multiple variables and with other expressions on the right side of the equal sign. The notation f(n,m) = n² + m³ + O(n+m) represents the statement: ∃C ∃N ∀n,m > N : f(n,m) ≤ n² + m³ + C(n+m).
Formal definition:
Given f, g : ℕ → ℝ⁺, we say that f ∈ O(g) if there exist constants c > 0 and n₀ ≥ 0 such that for every n ≥ n₀, f(n) ≤ c·g(n). That is, for sufficiently large n, the rate of growth of f is bounded by g, up to a constant c. Here f and g might represent arbitrary functions, or the running time or space complexity of a program or algorithm.
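The definition can be checked numerically for a concrete pair of functions; the sketch below is only evidence over finitely many n, not a proof, and uses the running time from Example 1 below with the constants c = 22 and n₀ = 1:

def holds(f, g, c, n0, n_max=10000):
    # Check f(n) <= c*g(n) for all n0 <= n <= n_max (evidence, not a proof).
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

T = lambda n: n**3 + 20*n + 1
print(holds(T, lambda n: n**3, c=22, n0=1))    # True: consistent with T(n) = O(n^3)
print(holds(T, lambda n: n**2, c=22, n0=1))    # False: n^2 is too weak a bound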
Theorems you can use without proof
The following standard facts can be used without proof: if f ∈ O(g) and g ∈ O(h), then f ∈ O(h) (transitivity); if f₁ ∈ O(g₁) and f₂ ∈ O(g₂), then f₁ + f₂ ∈ O(max(g₁, g₂)) and f₁·f₂ ∈ O(g₁·g₂); every polynomial of degree k is O(n^k); and logarithms to different constant bases differ only by a constant factor, so they define the same O-class.
Example 1: Prove that the running time T(n) = n³ + 20n + 1 is O(n³). Proof: by the Big-Oh definition, T(n) is O(n³) if T(n) ≤ c·n³ for all n ≥ n₀. Let us check this condition: if n³ + 20n + 1 ≤ c·n³, then 1 + 20/n² + 1/n³ ≤ c. Therefore, the Big-Oh condition holds for n ≥ n₀ = 1 and c ≥ 22 (= 1 + 20 + 1). Larger values of n₀ result in smaller factors c (e.g., for n₀ = 10, c ≥ 1.201, and so on), but in any case the above statement is valid.
Example 2: Prove that the running time T(n) = n³ + 20n + 1 is not O(n²). Proof: by the Big-Oh definition, T(n) is O(n²) if T(n) ≤ c·n² for all n ≥ n₀. Let us check this condition: if n³ + 20n + 1 ≤ c·n², then n + 20/n + 1/n² ≤ c. Therefore, the Big-Oh condition cannot hold (the left side of the latter inequality grows without bound, so there is no such constant factor c).
Example 3: Prove that the running time T(n) = n³ + 20n + 1 is O(n⁴). Proof: by the Big-Oh definition, T(n) is O(n⁴) if T(n) ≤ c·n⁴ for all n ≥ n₀. Let us check this condition: if n³ + 20n + 1 ≤ c·n⁴, then 1/n + 20/n³ + 1/n⁴ ≤ c. Therefore, the Big-Oh condition holds for n ≥ n₀ = 1 and c ≥ 22 (= 1 + 20 + 1). Larger values of n₀ result in smaller factors c (e.g., for n₀ = 10, c ≥ 0.1201, and so on), but in any case the above statement is valid.
Example 4: Prove that the running time T(n) = n³ + 20n is Ω(n²). Proof: by the Big-Omega definition, T(n) is Ω(n²) if T(n) ≥ c·n² for all n ≥ n₀. Let us check this condition: if n³ + 20n ≥ c·n², then n + 20/n ≥ c. The left side of this inequality has a minimum value of 2√20 ≈ 8.94 at n = √20 ≈ 4.47. Therefore, the Big-Omega condition holds for n ≥ n₀ = 5 and c ≤ 9. Larger values of n₀ result in larger factors c (e.g., for n₀ = 10, c ≤ 12), but in any case the above statement is valid.
4. Write a short note about the similarity and difference between a Turing machine and a random access machine.
 First, we define the basic deterministic complexity classes L, P and PSPACE.
 Definition 1: Given a set A, we say that A ∈ L iff there is a Turing machine which
computes the characteristic function of A in space O(log n).
 Note that big O notation usually only provides an upper bound on the growth rate of a function.
 Definition 2: Given a set A, we say that A ∈ P iff there is a Turing machine which for some constant k computes the characteristic function of A in time O(n^k).
 Definition 3: Given a set A, we say that A ∈ PSPACE iff there is a Turing machine which for some constant k computes the characteristic function of A in space O(n^k).
 Theorem 1: L ⊂ PSPACE
 Theorem 2: P ⊆ PSPACE
 since a Turing machine cannot use more space than time.
 Theorem 3: L ⊆ P
 a machine which runs in logarithmic space also runs in polynomial time.
 The Turing machine seems incredibly inefficient, and thus we will compare it to a model of computation which is more or less a normal computer (programmed in assembly language).
 This type of computer is called a Random Access Machine (RAM).
 A RAM has a finite control, an infinite number of registers, and two accumulators.
 Both the registers and the accumulators can hold arbitrarily large integers.
 We will let r(i) be the content of register i and ac1 and ac2 the contents of the
accumulators.
 The finite control can read a program and has a read-only input-tape and a write-only
output tape.
 In one step a RAM can carry out the following instructions.
 Add, subtract, divide or multiply the two numbers in ac1 and ac2; the result ends up in ac1.
 Make conditional and unconditional jumps. (Condition ac1 > 0 or ac1 = 0).
 Load something into an accumulator, e.g. ac1 = r(k) for constant k or ac1 = r(ac1); similarly for ac2.
 Store the content of an accumulator, e.g. r(k) = ac1 for constant k or r(ac2) = ac1, similarly
for ac2.
 Read input ac1 = input(ac2).
 Write an output.
 Use constants in the program.
 Halt
 Definition 4: The time to do a particular instruction on a RAM is 1 + log(k + 1), where k is the least upper bound on the integers involved in the instruction. The time for a computation on a RAM is the sum of the times for the individual instructions.
 Definition 5:
 Intuitively the RAM is more powerful than a Turing machine.
 The size of a computer word is bounded by a constant, and operations on larger numbers require us to access a number of memory cells which is proportional to the logarithm of the number used.
 Theorem 4: If a Turing machine can compute a function in time T(n) and space S(n), for T(n) ≥ n and S(n) ≥ log n, then the same function can be computed in time O(T²(n)) and space O(S(n)) on a RAM.
 In fact, a Turing machine is not that much less powerful than a RAM.
 Theorem 5: If a function f can be computed by a RAM in time T(n) and space S(n), then f can be computed in time O(T²(n)) and space S(n) on a Turing machine.
 Example 1: Given two n-digit numbers x and y written in binary, write the instructions that compute their sum.
 This can be done in logarithmic space.
 We have x = Σ_{i=0}^{n−1} x_i·2^i and y = Σ_{i=0}^{n−1} y_i·2^i.
 x + y is computed by the following instructions:
carry = 0
For i = 0 to n − 1
bit = xi + yi + carry
carry = 0
If bit ≥ 2 then carry = 1, bit = bit − 2
write bit
next i
write carry
 This can clearly be done in O(log n) space and thus addition belongs to L.
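A runnable Python version of the addition loop, assuming the digits are given as lists with the least significant bit first (mirroring the pseudocode's index order):

def add_binary(x, y):
    # Add two n-bit numbers given as bit lists, least significant bit first.
    n = len(x)
    result, carry = [], 0
    for i in range(n):                # only i, bit and carry are kept between steps
        bit = x[i] + y[i] + carry
        carry = 1 if bit >= 2 else 0
        result.append(bit % 2)
    result.append(carry)
    return result

print(add_binary([1, 0, 1, 0], [1, 1, 0, 0]))   # 5 + 3 = 8 -> [0, 0, 0, 1, 0]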
 Example 2: Given two n-digit numbers x and y written in binary, write the instructions that compute their product.
◦ This can be done in O(n²) time, so multiplication belongs to P.
carry = 0
For i = 0 to 2n − 2
low = max(0, i − (n − 1))
high = min(n − 1, i)
For j = low to high, carry = carry + xj ∗ yi−j
write lsb(carry)
carry = carry/2
next i
write carry with least significant bit first
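A runnable Python version of the multiplication loop, again assuming least-significant-bit-first bit lists as in the addition sketch:

def multiply_binary(x, y):
    # School multiplication of two n-bit numbers (bit lists, LSB first), O(n^2) bit operations.
    n = len(x)
    result, carry = [], 0
    for i in range(2 * n - 1):        # output position i collects the products x_j * y_(i-j)
        low, high = max(0, i - (n - 1)), min(n - 1, i)
        for j in range(low, high + 1):
            carry += x[j] * y[i - j]
        result.append(carry % 2)      # write the least significant bit of the accumulator
        carry //= 2
    while carry:                      # write out the remaining carry, LSB first
        result.append(carry % 2)
        carry //= 2
    return result

print(multiply_binary([1, 1, 0], [1, 0, 1]))    # 3 * 5 = 15 -> [1, 1, 1, 1, 0]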
5. Prove by mathematical induction that n! > 2^n, for n ≥ 4.
Basis: 4! = 24 > 16 = 2^4.
Induction:
IH: n! > 2^n for some n ≥ 4.
NTS: (n + 1)! > 2^(n+1).
(n + 1)! = n! · (n + 1) (definition of !)
> 2^n · (n + 1) (IH)
> 2^n · 2 (since n + 1 > 2 when n ≥ 4)
= 2^(n+1)
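A quick numerical check of the claim for small n (a sanity check, not a substitute for the induction proof):

from math import factorial

for n in range(1, 11):
    print(n, factorial(n) > 2**n)     # False for n <= 3, True from n = 4 onward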