TCP Fairness for Uplink and Downlink Flows in WLANs
Table 1: Simulation Parameters

Parameter                 Value
Simulator                 ns-2 (version 2.29)
Simulation Time           15 min
Packet Interval           0.01 s
Background Data Traffic   CBR / TCP
Packet Size               512 bytes
Transmission Range        100, 200, 300, 400 Kbytes
Routing Protocol          DSDV
MAC Protocol              IEEE 802.11
1. The simulation scenario follows exactly this layout.
2. The example screen shows single-queue management; under congestion at the AP, the dual-queue scheme reduces the traffic instead.
3. Queue management can be observed directly in wired networks, but not in wireless networks.
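The dual-queue idea at the AP can be sketched as follows. This is a simplified toy model, not the ns-2 implementation: the queue capacity, tail-drop policy, and round-robin service order are illustrative assumptions. The point it shows is that congestion in the downlink queue no longer causes uplink packets to be dropped:

```python
from collections import deque

class DualQueueAP:
    """Toy model of an AP with separate uplink/downlink queues.

    Capacity and round-robin service are illustrative assumptions,
    not the exact policy used in the simulation.
    """
    def __init__(self, capacity=50):
        self.queues = {"uplink": deque(), "downlink": deque()}
        self.capacity = capacity
        self._next = 0  # round-robin pointer between the two queues

    def enqueue(self, packet, direction):
        q = self.queues[direction]
        if len(q) >= self.capacity:   # congestion: tail-drop in this queue only,
            return False              # the other direction is unaffected
        q.append(packet)
        return True

    def dequeue(self):
        """Serve the two queues in round-robin order for fairness."""
        for _ in range(2):
            name = ("uplink", "downlink")[self._next]
            self._next ^= 1
            if self.queues[name]:
                return name, self.queues[name].popleft()
        return None

ap = DualQueueAP(capacity=2)
for i in range(4):
    ap.enqueue(f"down-{i}", "downlink")   # downlink congested: down-2/3 dropped
ap.enqueue("up-0", "uplink")              # uplink packet still accepted
print(ap.dequeue())  # ('uplink', 'up-0'): uplink served despite downlink congestion
```

With a single shared queue, the four downlink packets would have filled the buffer and the uplink packet would have been dropped along with them.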
Ambit lick Solutions
Mail Id : Ambitlick@gmail.com , Ambitlicksolutions@gmail.Com
NETWORK MODULE
Client-server computing, or client-server networking, is a distributed application
architecture that partitions tasks or workloads between service providers
(servers) and service requesters (clients). Clients and servers often operate
over a computer network on separate hardware. A server machine is a high-
performance host that runs one or more server programs which share their
resources with clients. A client, by contrast, does not share any of its resources;
clients instead initiate communication sessions with servers, which await (listen
for) incoming requests.
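A minimal sketch of this request/response pattern, using Python's standard socket library (the localhost address, the message contents, and the one-shot server are illustrative assumptions):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port (illustrative)

def serve_once(server_sock):
    """Server side: await (listen for) one incoming request and answer it."""
    conn, _addr = server_sock.accept()
    with conn:
        request = conn.recv(1024)          # read the client's request
        conn.sendall(b"ACK: " + request)   # share a resource: an echo reply

server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind((HOST, PORT))
server_sock.listen(1)                      # the server awaits incoming requests
port = server_sock.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server_sock,))
t.start()

# Client side: the client initiates the communication session.
with socket.create_connection((HOST, port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

t.join()
server_sock.close()
print(reply.decode())  # ACK: hello
```

The asymmetry described above is visible in the code: only the server calls `listen()` and `accept()`, and only the client opens the connection.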
PACKET SCHEDULING
This packet scheduling policy is simple to implement and yields good
performance in the common case where node schedules are known and
information about node availability is accurate. A potential drawback is that a
node crash (or other failure event) can lead to a number of wasted RTSs to the
failed node. Summed across channels, this number may exceed the limit of 7
retransmission attempts allowed for a single channel in the IEEE 802.11
standard.
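The drawback can be illustrated with a short calculation (the channel names and per-channel attempt counts below are hypothetical inputs, not measured values):

```python
RETRY_LIMIT = 7  # retransmission attempts allowed for a single IEEE 802.11 channel

def total_wasted_rts(channels_tried, rts_per_channel):
    """Count RTS attempts sent toward a crashed node across several channels
    and compare the total against the single-channel retry limit."""
    total = sum(rts_per_channel[ch] for ch in channels_tried)
    exceeds_single_channel_limit = total > RETRY_LIMIT
    return total, exceeds_single_channel_limit

# Three channels, a few failed RTS attempts on each toward the crashed node.
attempts = {"ch1": 3, "ch2": 3, "ch3": 3}
total, exceeded = total_wasted_rts(["ch1", "ch2", "ch3"], attempts)
print(total, exceeded)  # 9 True: more attempts in total than the 7 allowed per channel
```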
BANDWIDTH SHARING
We use an approach in which each node requests and grants as much
bandwidth as possible at each turn. Additionally, we compare the RENO algorithm
for packet scheduling against a First-In-First-Out (FIFO) scheduler in which all
SDUs with the same next-hop are enqueued into the same buffer. For this purpose
we simulate a network with an increasing number of nodes, from 2 to 10, arranged
in a chain topology. Each node has one traffic flow directed to the chain end-point
node, carried as a constant-bit-rate stream of 1000-byte packets emulating
infinite bandwidth demands. Congestion control has been extensively studied for
networks running a single protocol. However, when sources sharing the same
network react to different congestion signals, the existing duality model no longer
explains the behavior of bandwidth allocation. We therefore examine the existence
and uniqueness properties of equilibrium in the heterogeneous-protocol case.
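The FIFO baseline can be sketched as follows. This is a simplified model of the comparison scheduler only (the class name, buffer layout, and service interface are assumptions for illustration); its defining property is that all SDUs with the same next-hop share one buffer and are served in arrival order:

```python
from collections import defaultdict, deque

class PerNextHopFIFO:
    """FIFO scheduler: all SDUs with the same next-hop share one buffer.

    A simplified sketch of the baseline; buffer sizing and the order in
    which next-hops are granted transmission turns are not modeled.
    """
    def __init__(self):
        self.buffers = defaultdict(deque)  # next-hop -> shared FIFO buffer

    def enqueue(self, sdu, next_hop):
        self.buffers[next_hop].append(sdu)

    def dequeue(self, next_hop):
        """Serve the buffer for a granted next-hop in strict arrival order."""
        if self.buffers[next_hop]:
            return self.buffers[next_hop].popleft()
        return None

sched = PerNextHopFIFO()
sched.enqueue("sdu-A1", next_hop="B")
sched.enqueue("sdu-C1", next_hop="B")   # same next-hop, same shared buffer
sched.enqueue("sdu-A2", next_hop="D")
print(sched.dequeue("B"))  # sdu-A1: FIFO within the shared buffer
```

Because SDUs from different flows are interleaved in one buffer per next-hop, a FIFO scheduler cannot differentiate flows, which is exactly the property the per-flow scheduling algorithm is compared against.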
BURSTY TRAFFIC
We measure the end-to-end throughput (or throughput, for short), defined as
the number of bits received by the destination node per second for a given traffic
flow, excluding any MAC overhead. As can be seen, the throughput decreases
steeply as the number of nodes increases, regardless of the scheme adopted. This
is because an increasing fraction of the channel capacity is used to relay
packets at intermediate nodes. For instance, with three nodes the end-to-end
throughput is about 2/3 of the available raw bandwidth: 1/3 of the channel is
consumed by the flow that is one hop from the destination, and 2/3 by the flow
that is two hops away; each therefore delivers 1/3 end-to-end, but the two-hop
flow needs two transmissions per delivered packet.
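The 2/3 figure can be checked with a short calculation. Under the simplifying assumptions that all hops share one channel and every flow receives the same end-to-end rate, a flow originating h hops from the end-point consumes h transmissions per delivered packet, so the common rate r satisfies r * (1 + 2 + ... + (n-1)) = 1:

```python
from fractions import Fraction

def chain_aggregate_throughput(n_nodes):
    """Aggregate end-to-end throughput, as a fraction of raw channel capacity,
    for a chain where every node sends one flow to the chain end-point.

    Simplifying model: a single shared channel and equal per-flow rates.
    """
    hop_counts = range(1, n_nodes)        # the flow from node i travels i hops
    # Equal per-flow rate r: sum(r * h) over all flows = 1  =>  r = 1 / sum(h)
    r = Fraction(1, sum(hop_counts))
    return r * len(hop_counts)            # total delivered rate across flows

print(chain_aggregate_throughput(3))  # 2/3, matching the three-node example
```

The same function reproduces the steep decrease noted above: for the 10-node chain the aggregate share drops to 9/45 = 1/5 of the raw bandwidth.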