INTERNATIONAL INSTITUTE OF MANAGEMENT AND
TECHNICAL STUDIES
INTERNET
All the questions are compulsory. The first five questions shall be of 16 marks each and
the last question shall be of 20 marks.
Q1. A. Should a client use the same protocol port number each time it begins? Why or why
not?
Servers are normally known by their well-known port number. For example, every TCP/IP
implementation that provides an FTP server provides that service on TCP port 21. Every Telnet
server is on TCP port 23. Every implementation of TFTP (the Trivial File Transfer Protocol) is on
UDP port 69. Those services that can be provided by any implementation of TCP/IP have well-
known port numbers between 1 and 1023. The well-known ports are managed by the Internet
Assigned Numbers Authority (IANA).
A client usually doesn't care what port number it uses on its end. All it needs to be certain
of is that whatever port number it uses be unique on its host. Client port numbers are
called ephemeral ports (i.e., short lived). This is because a client typically exists only as long
as the user running the client needs its service, while servers typically run as long as the host
is up.
A "port" is just a number. All a "connection to a port" really represents is a packet which has
that number specified in its "destination port" header field. For a stateless protocol (UDP), there
is no problem because "connections" don't exist - multiple people can send packets to the same
port, and their packets will arrive in whatever sequence. Nobody is ever in the "connected" state.
For a stateful protocol (like TCP), a connection is identified by a 4-tuple consisting of source
and destination ports and source and destination IP addresses. So, if two different machines
connect to the same port on a third machine, there are two distinct connections because the
source IPs differ. If the same machine (or two behind NAT or otherwise sharing the same IP
address) connects twice to a single remote end, the connections are differentiated by source
port (which is generally a random high-numbered port).
Simply put, if I connect to the same web server twice from my client, the two connections have
the same destination port (the server's well-known port) but different source ports. So there is no
ambiguity, even though both connections have the same source and destination IP addresses.
Ports are a way to multiplex IP addresses so that different applications can listen on the same
IP address/protocol pair. Unless an application defines its own higher-level protocol, there is no
way to multiplex a port. If two connections using the same protocol have identical source and
destination IPs and identical source and destination ports, they must be the same connection.
When an Ethernet frame is received at the destination host it starts its way up the protocol
stack and all the headers are removed by the appropriate protocol box. Each protocol box looks
at certain identifiers in its header to determine which box in the next upper layer receives the
data. This is called demultiplexing.
Most networking applications are written assuming one side is the client and the other the
server. The purpose of the application is for the server to provide some defined service for
clients.
We can categorize servers into two classes: iterative or concurrent. An iterative server iterates
through the following steps.
I1. Wait for a client request to arrive.
I2. Process the client request.
I3. Send the response back to the client that sent the request.
I4. Go back to step I1.
The problem with an iterative server is when step I2 takes a while. During this time no other
clients are serviced. A concurrent server, on the other hand, performs the following steps.
C1. Wait for a client request to arrive.
C2. Start a new server to handle this client's request. This may involve creating a new process,
task, or thread, depending on what the underlying operating system supports. How this step is
performed depends on the operating system.
This new server handles this client's entire request. When complete, this new server
terminates.
C3. Go back to step C1.
The advantage of a concurrent server is that the server just spawns other servers to handle
the client requests. Each client has, in essence, its own server. Assuming the operating system
allows multiprogramming, multiple clients are serviced concurrently.
The reason we categorize servers, and not clients, is because a client normally can't tell
whether it's talking to an iterative server or a concurrent server.
As a general rule, TCP servers are concurrent, and UDP servers are iterative, but there are a
few exceptions.
B. Write a program that uses “execve” to change the code a process executes?
#include <unistd.h>

int main(void)
{
    /* argument vector; by convention arg[0] is the program name */
    char *arg[] = { "AA", "BB", 0 };
    /* list of environment variables for the new program */
    char *env[] = { "PATH=/home/xyx", "ENV=/***/***", 0 };

    /* replace this process's code with the named executable;
       the path here is a placeholder */
    execve("/path/to/executable", arg, env);
    return 1; /* reached only if execve fails */
}
Q2. Write down the data structures and message formats needed for a stateless file server.
What happens if two or more clients access the same file? What happens if a client
crashes before closing a file?
A stateless system is one in which the client sends a request to a server, the server carries it out,
and returns the result. Between these requests, no client-specific information is stored on the
server. A stateful system is one where information about client connections is maintained on the
server. State may refer to any information that a server stores about a client: whether a file is open,
whether a file is being modified, cached data on the client, etc.
In the context of servers, the question of whether a server is stateless or stateful centers on
the application protocol more than on the implementation. If the application protocol specifies
that the meaning of a particular message depends in some way on previous messages, it may be
impossible to provide a stateless interaction.
In essence, the issue of statelessness focuses on whether the application protocol assumes
the responsibility for reliable delivery. To avoid problems and make the interaction reliable, an
application protocol designer must ensure that each message is completely unambiguous. That is,
a message cannot depend on being delivered in order, nor can it depend on previous messages
having been delivered. In essence, the protocol designer must build the interaction so the server
gives the same response no matter when or how many times a request arrives. Mathematicians
use the term idempotent to refer to a mathematical operation that always produces the same
result.
We use the term to refer to protocols that arrange for a server to give the same response to a
given message no matter how many times it arrives. In an internet where the underlying network
can duplicate, delay or deliver messages out of order or where computers running client
applications can crash unexpectedly, the server should be stateless. The server can only be
stateless if the application protocol is designed to make operations idempotent.
Message Creation and Stateless Operations
Data Structures
struct pjsip_send_state
struct pjsip_response_addr
Functions
pj_status_t
pjsip_endpt_send_request_stateless (pjsip_endpoint *endpt, pjsip_tx_data *tdata, void
*token, pjsip_send_callback cb)
Sends an outgoing request statelessly. The function determines which destination and
transport to use from the information in the message, taking into account the URI in the request
line and the Route header.
This function differs from pjsip_transport_send() in that it adds or modifies the Via header as
necessary.
Parameters
endpt The endpoint instance.
tdata The transmit data to be sent.
token Arbitrary token to be given back on the callback.
cb Optional callback to notify transmission status (it also gives the application a chance
to discontinue retrying sending to alternate addresses).
Returns
PJ_SUCCESS, or the appropriate error code.
In a stateless system:
 Each request must be complete — the file has to be fully identified and any offsets specified.
 If a server crashes and then recovers, no state was lost about client connections because
there was no state to maintain. This creates a higher degree of fault tolerance.
 No remote open/close calls are needed (they only serve to establish state).
 There is no server memory devoted to storing per-client data.
 There is no limit on the number of open files on the server; they aren't "open" since the
server maintains no per-client state.
 There are no problems if the client crashes. The server does not have any state to clean up.
Q3. Write a server algorithm that combines delayed allocation with preallocation. How can
you limit the maximum level of concurrency?
Delayed allocation
Allocation is setting aside, or reserving, space for use. On a computer, it means setting aside
space on a hard drive to store files, whether newly created or being modified. Data that needs
to be written to the hard disk can first be held in RAM or cache, which can be read and written
much faster than a hard drive. At certain intervals, the data is taken from RAM/cache and written
to the hard disk; the writeback time interval sets how often this writeback occurs. If power is
lost or the system is shut off, any changes in RAM/cache that have not yet been written to disk
are lost, so it is usually best to set the writeback time interval to a low value. Delayed allocation
means the data blocks are allocated and written at the writeback time interval.
There are three advantages to Delayed Allocation:
1. Larger sets of blocks are processed before being written. This reduces processor utilization
by performing the processing all at once, as discussed in Multi-Block Allocation.
2. Fragmentation is reduced by allocating a large number of blocks at once, which are most likely
contiguous.
3. Processor time and disk space are saved for short-term temporary files, which are used and
deleted in RAM/cache before they are ever written.
For files where the file size is unknown at the time of writing, usually since it is still being
modified or created, this is the best method.
Delayed allocation is a performance feature (it does not change the on-disk format) found in a few
modern filesystems such as XFS, ZFS, btrfs and Reiser 4. It consists in delaying the allocation of blocks
as much as possible, contrary to what traditional filesystems (such as Ext3, reiser3, etc.) do: allocate
the blocks as soon as possible. For example, if a process calls write(), the filesystem code immediately
allocates the blocks where the data will be placed, even if the data is not written to disk right away and
is kept in the cache for some time. This approach has disadvantages. For example, when a process is
writing continually to a file that grows, successive write()s allocate blocks for the data without knowing
whether the file will keep growing. Delayed allocation, on the other hand, does not allocate the blocks
when the process calls write(); rather, it delays the allocation while the file is kept in cache, until the
data is really going to be written to disk. This gives the block allocator the opportunity to optimize the
allocation in situations where the old scheme could not. Delayed allocation plays very nicely with the
two features mentioned earlier, extents and multiblock allocation, because in many workloads, when
the file is finally written to disk, it is allocated in extents whose block allocation is done with the mballoc
allocator. Performance is much better, and fragmentation is much improved in some workloads.
Pre-Allocation
Similar to Delayed Allocation, the file is in RAM/cache, but the kernel will allocate the space
needed on the hard drive. The file is written with all zeroes and should hopefully be contiguous.
The method guarantees that the storage space is available for the file.
For files whose sizes are known in advance, this method is best because the required space
can be reserved. Keep in mind that if the file is accessed before it is written from RAM/cache, the
result will be a file of all zero bits.
Preallocation rules are processed before the placement rules.
When a file is created, the preallocation value from the policy will be used instead of the default
allocation of one block. The preallocation value is rounded up to the number of blocks required for the
specified amount. For example, a value of 1 byte can be specified, but SAN File System will allocate
one 4-kilobyte block. The maximum preallocation value is 128 megabytes.
Limiting the Maximum Level of Concurrency
If your program uses web services, the number of simultaneous connections is limited by the
ServicePointManager.DefaultConnectionLimit property, which is only 2 by default. If you want,
say, five simultaneous connections, limiting concurrency in your own code is not enough; you
must also increase this property.
Semaphores can also be used to limit the maximum level of concurrency.
private void RunAllActions(IEnumerable<Action> actions, int maxConcurrency)
{
    using (SemaphoreSlim concurrencySemaphore = new SemaphoreSlim(maxConcurrency))
    {
        List<Task> tasks = new List<Task>();
        foreach (Action action in actions)
        {
            tasks.Add(Task.Factory.StartNew(() =>
            {
                concurrencySemaphore.Wait();
                try
                {
                    action();
                }
                finally
                {
                    concurrencySemaphore.Release();
                }
            }));
        }
        // Wait for all tasks before the using block disposes the semaphore.
        Task.WaitAll(tasks.ToArray());
    }
}
Q4. A. Under what circumstances might a programmer need to pass opaque data objects
between a client and a server?
When you look at Security Builder Crypto function definitions, you will see there are a number
of data types whose names begin with sb_. These types are declared in sbdef.h, and are actually
pointers to undefined data structures. These pointers are used by the library to refer to internally-
defined structures. The actual definitions of the data structures are irrelevant, as they are only
used within the library. These types of data structures are often referred to as opaque or abstract
data types.
When creating and destroying these opaque data types, you must pass a pointer to
an sb_ type to the API function — for example, a pointer to an sb_Params or an sb_Key object. In
other cases, where the value of the pointer itself is not changed, you simply supply the value of
the sb_ type variable to the interface functions. In order to understand the function call
sequence, you must become familiar with the opaque data types, or objects.
Some objects cannot be created without other objects; they must be created and destroyed in
a particular order.
The main sb_ types are:
 Global Context (sb_GlobalCtx)
 Yield Context (sb_YieldCtx)
 RNG Context (sb_RNGCtx)
 Parameters Object (sb_Params)
 Key Objects (sb_Key, sb_PublicKey, sb_PrivateKey)
 SB Context (sb_Context)
In computer science, an opaque data type is a data type that is incompletely defined in
an interface, so that its values can only be manipulated by calling subroutines that have access to
the missing information. The concrete representation of the type is hidden from its users. A data
type whose representation is visible is called transparent.
Typical examples of opaque data types include handles for resources provided by
an operating system to application software. For example, the POSIX standard for threads
defines an application programming interface based on a number of opaque types that
represent threads or synchronization primitives like mutexes or condition variables.
An opaque pointer is a special case of an opaque data type: a data type declared to be
a pointer to a record or data structure of some unspecified type. For example, the standard
library that forms part of the specification of the C programming language provides functions
for file input and output that return or take values of type "pointer to FILE" representing file
streams (see C file input/output), but the concrete implementation of the type FILE is not
specified.
In some protocols, handles are passed from a server to the client. The client passes the
handle back to the server at some later time. Handles are never inspected by clients; they are
obtained and submitted. That is, handles are opaque. The xdr_opaque() primitive is used for
describing fixed-sized opaque bytes.
bool_t
xdr_opaque(xdrs, p, len)
XDR *xdrs;
char *p;
u_int len;
The parameter p is the location of the bytes; len is the number of bytes in the opaque object.
By definition, the actual data contained in the opaque object is not machine portable. The
SunOS/SVR4 system has another routine for manipulating opaque data. This routine,
xdr_netobj(), sends counted opaque data, much like xdr_opaque(). The following code example
illustrates the syntax of xdr_netobj().
struct netobj {
u_int n_len;
char *n_bytes;
};
typedef struct netobj netobj;
bool_t
xdr_netobj(xdrs, np)
XDR *xdrs;
struct netobj *np;
The xdr_netobj() routine is a filter primitive that translates between variable-length opaque data
and its external representation. The parameter np is the address of the netobj structure containing
both a length and a pointer to the opaque data. The length may be no more
than MAX_NETOBJ_SZ bytes. This routine returns TRUE if it succeeds, FALSE otherwise.
B. What are the major advantages and disadvantages of using a port mapper instead of
well known ports?
Port mapping (also called port forwarding) is a name given to the combined technique of
1. translating the address or port number of a packet to a new destination,
2. possibly accepting such packet(s) in a packet filter (firewall), and
3. forwarding the packet according to the routing table.
The destination may be a predetermined network port (assuming protocols like TCP and UDP,
though the process is not limited to these) on a host within a NAT-masqueraded, typically private
network, based on the port number on which it was received at the gateway from the originating
host.
The technique is used to permit communications by external hosts with services provided
within a private local area network.
Advantages:
Port mapping basically allows an outside computer to connect to a computer in a private
local area network. Commonly forwarded ports include port 21 for FTP access and port 80 for
web servers. To achieve this, operating systems such as Mac OS X and BSD (Berkeley Software
Distribution) use ipfirewall (ipfw), pre-installed in the kernel, to conduct port forwarding; Linux,
on the other hand, uses iptables.
Disadvantages:
There are a few downsides to, and precautions to take with, port forwarding.
 Only one port can be used at a time by one machine.
 Port forwarding also allows any machine in the world to connect to the forwarded port at will,
making the network somewhat less secure.
 The port forwarding technology itself is built in a way so that the destination machine will see the
incoming packets as coming from the router rather than the original machine sending out the
packets.
Q5. A. Compare DEC RPC to ONC RPC. How do the two differ?
Remote procedure call (RPC) is a method of supporting the development of applications that
require processes on different systems to communicate and coordinate their activities. This
answer pursues a comparison of three important RPCs, namely Open Network Computing (ONC),
Distributed Computing Environment (DCE), and the ISO specification of an RPC.
A general discussion of the RPC model and its implementation is followed by a description of
the features and capabilities of the three RPCs, such as the model used, the mechanism of
information transfer, and the call semantics. The implementations of ONC and DCE are
discussed. Whereas a normal procedure call takes place between the procedures of a single
process in the same memory space on a single system, an RPC takes place between a client and
a server, which are two different systems connected by a network. An important feature
discussed while describing the RPC model is that of data representation.
The client stub, creates a message packet to be sent to the server by converting the input
arguments from the local data representation to a common data representation. On the server
side when the server stub is called by the server runtime, the input arguments are taken from the
message and converted from the common data representation to the local data representation.
ONC RPC: This was one of the first commercial implementations of RPC. A modified
implementation called TI-RPC is available, the difference being that the latter can use different
transport-layer protocols; still, the success of the more widely used original RPC is due to the
wide use of NFS (Network File System, a client/server application that allows a user to view and
optionally store/update files on a remote computer). ONC supports at-most-once and
idempotent call semantics. It also supports no-response and broadcast RPC. The types of
authentication supported are none (the default), UNIX user ID/group ID, and secure RPC.
Secure RPC uses DES (the Data Encryption Standard, an IBM-developed cipher with more
than 72 quadrillion possible encryption keys). ONC RPC has a reduced procedure declaration,
supporting only one input parameter and one output parameter.
The RPC language compiler is called rpcgen; it generates an include file, a client stub, and a
server stub. The client stub produced by rpcgen is incomplete, and in some cases the client stub
code must be completed by the developer. The server stub produced is nearly complete.
B. Examine the specifications for NFS versions 2 and 3. What are the chief differences?
Does version 3 make any changes that are visible or important to a programmer?
The NFS protocol provides transparent remote access to shared file systems across
networks. It is designed to be independent of machine, operating system, network architecture,
security mechanism, and transport protocol. This independence is achieved through the use of
ONC Remote Procedure Call (RPC) primitives built on top of an eXternal Data Representation
(XDR). NFS protocol version 2 is specified in the Network File System Protocol Specification.
Version 2 of the protocol (defined in RFC 1094, March 1989) originally operated only over
UDP. Its designers meant to keep the server side stateless, with locking (for example)
implemented outside of the core protocol. People involved in the creation of NFS version 2
include Russel Sandberg, Bob Lyon, Bill Joy, Steve Kleiman, and others. The decision to make
the file system stateless was a key one, since it made recovery from server failures trivial: all
network clients would freeze up when a server crashed, but once the server repaired the file
system and restarted, all the state needed to retry each transaction was contained in each RPC,
which was retried by the client stub(s). This design decision allowed UNIX applications (which
could not tolerate file server crashes) to ignore the problem.
Version 3 added the following:
 support for 64-bit file sizes and offsets, to handle files larger than 2 gigabytes (GB);
 support for asynchronous writes on the server, to improve write performance;
 additional file attributes in many replies, to avoid the need to re-fetch them;
 a READDIRPLUS operation, to get file handles and attributes along with file names when
scanning a directory;
 assorted other improvements.
At the time version 3 was introduced, vendor support for TCP as a transport-layer protocol
was increasing. While several vendors had already added support for NFS version 2 with TCP
as a transport, Sun Microsystems added TCP support for NFS at the same time it added support
for version 3. Using TCP as a transport made using NFS over a WAN more feasible.
Q6. A. Is it possible to make the server side of the dictionary program concurrent? Why or
why not?
In principle, yes: the server can spawn a process or thread per client, but all clients then share the
single dictionary data structure, so access to it must be synchronized, and writing correct concurrent
programs is harder than writing sequential ones. This is because the set of potential risks and failure
modes is larger: anything that can go wrong in a sequential program can also go wrong in a concurrent
one, and with concurrency come additional hazards not present in sequential programs, such as race
conditions, data races, deadlocks, missed signals, and livelock.
Testing concurrent programs is also harder than testing sequential ones. This is trivially true: tests for
concurrent programs are themselves concurrent programs. But it is also true for another reason: the
failure modes of concurrent programs are less predictable and repeatable than for sequential
programs. Failures in sequential programs are deterministic; if a sequential program fails with a given
set of inputs and initial state, it will fail every time. Failures in concurrent programs, on the other hand,
tend to be rare probabilistic events.
Because of this, reproducing failures in concurrent programs can be maddeningly difficult. Not only
might the failure be rare, and therefore not manifest itself frequently, but it might not occur at all in
certain platform configurations, so a bug that happens daily at a customer's site might never appear
in your test lab. Further, attempts to debug or monitor the program can introduce timing or
synchronization artifacts that prevent the bug from appearing at all. As with Heisenberg's uncertainty
principle, observing the state of the system may in fact change it.
B. Under what condition will read from a terminal return the value 0?
A read from a terminal device returns the value zero at end-of-file. After a modem disconnect
(hangup), any subsequent read from the terminal device returns zero; a read also returns zero
when the user types the end-of-file character (typically Control-D) at the beginning of a line.
Thus, processes that read a terminal file and test for end-of-file can terminate appropriately
after a disconnect. If the [EIO] condition specified for read() also exists, it is unspecified
whether an end-of-file indication or [EIO] is returned.
C. If you had a choice of debugging a deadlock problem or a livelock problem, which would
you choose? Why? How would you proceed?
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
A Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source MilvusA Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source Milvus
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptx
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor Presentation
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 

Internet

  • 1. INTERNATIONAL INSTITUTE OF MANAGEMENT AND TECHNICAL STUDIES

INTERNET

All the questions are compulsory. The first five questions shall be of 16 marks each and the last question shall be of 20 marks.

Q1. A. Should a client use the same protocol port number each time it begins? Why or why not?

Servers are normally known by their well-known port numbers. For example, every TCP/IP implementation that provides an FTP server provides that service on TCP port 21. Every Telnet server is on TCP port 23. Every implementation of TFTP (the Trivial File Transfer Protocol) is on UDP port 69. Services that can be provided by any implementation of TCP/IP have well-known port numbers between 1 and 1023, managed by the Internet Assigned Numbers Authority (IANA).

A client usually doesn't care what port number it uses on its end; all it needs to be certain of is that whatever port number it uses is unique on its host. Client port numbers are called ephemeral ports (i.e., short-lived), because a client typically exists only as long as the user running it needs its service, while servers typically run as long as the host is up.

A "port" is just a number. All a "connection to a port" really represents is a packet that has that number specified in its "destination port" header field. For a stateless protocol (UDP) there is no problem, because "connections" don't exist: multiple senders can send packets to the same port, and their packets will arrive in whatever sequence. Nobody is ever in a "connected" state.

For a stateful protocol (like TCP), a connection is identified by a 4-tuple consisting of source and destination ports and source and destination IP addresses. So, if two different machines connect to the same port on a third machine, there are two distinct connections because the source IPs differ. If the same machine (or two behind NAT or otherwise sharing the same IP address) connects twice to a single remote end, the connections are differentiated by source port (which is generally a random high-numbered port). Simply put, if I connect to the same web server twice from my client, the two connections will have different source ports on my side but the same destination port on the web server's side. So
  • 2. there is no ambiguity, even though both connections have the same source and destination IP addresses.

Ports are a way to multiplex IP addresses so that different applications can listen on the same IP address/protocol pair. Unless an application defines its own higher-level protocol, there is no way to multiplex a port. If two connections using the same protocol have identical source and destination IPs and identical source and destination ports, they must be the same connection.

When an Ethernet frame is received at the destination host, it starts its way up the protocol stack and all the headers are removed by the appropriate protocol box. Each protocol box looks at certain identifiers in its header to determine which box in the next upper layer receives the data. This is called demultiplexing.

Most networking applications are written assuming one side is the client and the other the server. The purpose of the application is for the server to provide some defined service for clients. We can categorize servers into two classes: iterative or concurrent. An iterative server iterates through the following steps:

I1. Wait for a client request to arrive.
I2. Process the client request.
I3. Send the response back to the client that sent the request.
I4. Go back to step I1.

The problem with an iterative server is when step I2 takes a while. During this time no other clients are serviced. A concurrent server, on the other hand, performs the following steps:

C1. Wait for a client request to arrive.
C2. Start a new server to handle this client's request. This may involve creating a new process, task, or thread, depending on what the underlying operating system supports. How this step is performed depends on the operating system. This new server handles this client's entire request. When complete, this new server terminates.
C3. Go back to step C1.
  • 3. The advantage of a concurrent server is that the server just spawns other servers to handle the client requests. Each client has, in essence, its own server. Assuming the operating system allows multiprogramming, multiple clients are serviced concurrently. The reason we categorize servers, and not clients, is that a client normally can't tell whether it's talking to an iterative server or a concurrent server. As a general rule, TCP servers are concurrent and UDP servers are iterative, but there are a few exceptions.

B. Write a program that uses "execve" to change the code a process executes.

#include <unistd.h>

int main(void)
{
    /* argument vector; by convention arg[0] is the program name */
    char *arg[] = { "AA", "BB", 0 };
    /* list of environment variables */
    char *env[] = { "PATH=/home/xyx", "ENV=/***/***", 0 };
    /* replace this process's code with the named executable */
    execve("/path/to/executable", arg, env);
    return 1; /* reached only if execve fails */
}

Q2. Write down the data structures and message formats needed for a stateless file server. What happens if two or more clients access the same file? What happens if a client crashes before closing a file?

A stateless system is one in which the client sends a request to a server, the server carries it out, and returns the result; between these requests, no client-specific information is stored on the server. A stateful system is one where information about client connections is maintained on the server. State may refer to any information that a server stores about a client: whether a file is open, whether a file is being modified, cached data on the client, and so on.

In the context of servers, the question of whether a server is stateless or stateful centers on the application protocol more than the implementation. If the application protocol specifies that the meaning of a particular message depends in some way on previous messages, it may be impossible to provide a stateless interaction. In essence, the issue of statelessness focuses on whether the application protocol assumes the responsibility for reliable delivery.
To avoid problems and make the interaction reliable, an application protocol designer must ensure that each message is completely unambiguous. That is, a message cannot depend on being delivered in order, nor can it depend on previous messages having been delivered. In essence, the protocol designer must build the interaction so the server gives the same response no matter when or how many times a request arrives. Mathematicians
  • 4. use the term idempotent to refer to a mathematical operation that always produces the same result. We use the term to refer to protocols that arrange for a server to give the same response to a given message no matter how many times it arrives. In an internet where the underlying network can duplicate, delay, or deliver messages out of order, or where computers running client applications can crash unexpectedly, the server should be stateless. The server can only be stateless if the application protocol is designed to make operations idempotent.

Message Creation and Stateless Operations

Data structures:

struct pjsip_send_state
struct pjsip_response_addr

Functions:

pj_status_t pjsip_endpt_send_request_stateless(pjsip_endpoint *endpt,
                                               pjsip_tx_data *tdata,
                                               void *token,
                                               pjsip_send_callback cb);

Sends an outgoing request statelessly. The function decides which destination and transport to use based on the information in the message, taking care of the URI in the request line and the Route header. This function differs from pjsip_transport_send() in that it adds/modifies the Via header as necessary.

Parameters:

endpt: The endpoint instance.
tdata: The transmit data to be sent.
token: Arbitrary token to be given back on the callback.
cb: Optional callback to notify transmission status (also gives the application a chance to discontinue retrying sending to an alternate address).

Returns PJ_SUCCESS, or the appropriate error code.
  • 5. In a stateless system:

 Each request must be complete — the file has to be fully identified and any offsets specified.
 If a server crashes and then recovers, no state about client connections was lost, because there was no state to maintain. This creates a higher degree of fault tolerance.
 No remote open/close calls are needed (they only serve to establish state).
 There is no server memory devoted to storing per-client data.
 There is no limit on the number of open files on the server; they aren't "open", since the server maintains no per-client state.
 There are no problems if the client crashes. The server does not have any state to clean up.

Q3. Write a server algorithm that combines delayed allocation with pre-allocation. How can you limit the maximum level of concurrency?

Delayed allocation

Allocation is setting aside, or reserving, space for use. On a computer, it is setting aside space on a hard drive to store files. The files can be newly created or those being modified. Data which needs to be written to the hard disk can first be held in RAM or cache, which can be read and written much faster than a hard drive. At certain intervals, the data is taken from RAM/cache and written to the hard disk. The Writeback Time Interval sets how often the writeback occurs. If there is a loss of power or the system is shut off, the data changes in RAM/cache are lost, since they have not been written to disk, so it is usually best to set the Writeback Time Interval to a lower value.

Delayed allocation means the data blocks are allocated when they are written out at the Writeback Time Interval. There are three advantages to delayed allocation:

1. Larger sets of blocks are processed before being written. This reduces processor utilization by performing the processing all at once, as discussed in multi-block allocation.
2. It reduces fragmentation by allocating a large number of blocks at once, which are most likely contiguous.
3. It reduces processor time and disk space for short-term temporary files which are used and deleted in RAM/cache before they are ever written. For files where the file size is unknown at the time of writing, usually because the file is still being modified or created, this is the best method.
  • 6. Delayed allocation is a performance feature (it doesn't change the disk format) found in a few modern filesystems such as XFS, ZFS, Btrfs, and Reiser4. It consists of delaying the allocation of blocks as much as possible, contrary to what traditional filesystems (such as ext3 and ReiserFS) do: allocate the blocks as soon as possible. For example, if a process calls write(), the filesystem code will immediately allocate the blocks where the data will be placed, even though the data is not being written to the disk right now and is going to be kept in the cache for some time. This approach has disadvantages. For example, when a process is writing continually to a file that grows, successive write()s allocate blocks for the data without knowing whether the file will keep growing. Delayed allocation, on the other hand, does not allocate the blocks immediately when the process calls write(); rather, it delays the allocation of the blocks while the file is kept in cache, until the data is really going to be written to the disk. This gives the block allocator the opportunity to optimize the allocation in situations where the old system couldn't. Delayed allocation plays very nicely with the two features mentioned previously, extents and multi-block allocation, because in many workloads, when the file is finally written to the disk, it will be allocated in extents whose block allocation is done with the mballoc allocator. The performance is much better, and fragmentation is much improved in some workloads.
  • 7. Pre-allocation

Similar to delayed allocation, the file is in RAM/cache, but the kernel allocates the space needed on the hard drive up front. The file is written as all zeroes and should, with luck, be contiguous. The method guarantees that the storage space is available for the file. For files where the file size is known, this method is best, because the needed space can be "reserved". Keep in mind that if the file is accessed before it is written from RAM/cache, the result will be a file with all bits set to zero. Preallocation rules are processed before the placement rules.
  • 8. When a file is created, the preallocation value from the policy will be used instead of the default allocation of one block. The preallocation value is rounded up to the number of blocks required for the specified amount. For example, a value of 1 byte can be specified, but SAN File System will allocate one 4-kilobyte block. The maximum preallocation value is 128 megabytes.

Limiting the Maximum Level of Concurrency

If your program uses web services, the number of simultaneous connections will be limited by the ServicePointManager.DefaultConnectionLimit property. If you want 5 simultaneous connections, it is not enough to use Arrow_Raider's solution; you also need to increase ServicePointManager.DefaultConnectionLimit, because it is only 2 by default. We can also use a semaphore to limit the maximum level of concurrency:

private void RunAllActions(IEnumerable<Action> actions, int maxConcurrency)
{
    using (SemaphoreSlim concurrencySemaphore = new SemaphoreSlim(maxConcurrency))
    {
        var tasks = new List<Task>();
        foreach (Action action in actions)
        {
            tasks.Add(Task.Factory.StartNew(() =>
            {
                concurrencySemaphore.Wait();
                try
                {
                    action();
                }
                finally
                {
                    concurrencySemaphore.Release();
                }
            }));
        }
        // wait for all tasks so the semaphore is not disposed while still in use
        Task.WaitAll(tasks.ToArray());
    }
}

Q4. A. Under what circumstances might a programmer need to pass opaque data objects between a client and a server?

When you look at Security Builder Crypto function definitions, you will see there are a number of data types whose names begin with sb_. These types are declared in sbdef.h, and are actually pointers to undefined data structures. These pointers are used by the library to refer to internally-
  • 9. defined structures. The actual definitions of the data structures are irrelevant, as they are only used within the library. These types of data structures are often referred to as opaque or abstract data types.

When creating and destroying these opaque data types, you must pass a pointer to an sb_ type to the API function, for example a pointer to an sb_Params or an sb_Key object. In other cases, where the value of the pointer itself is not changed, you simply supply the value of the sb_ type variable to the interface functions. In order to understand the function call sequence, you must become familiar with the opaque data types, or objects. Some objects cannot be created without other objects; they must be created and destroyed in a particular order. The main sb_ types are:

 Global Context (sb_GlobalCtx)
 Yield Context (sb_YieldCtx)
 RNG Context (sb_RNGCtx)
 Parameters Object (sb_Params)
 Key Objects (sb_Key, sb_PublicKey, sb_PrivateKey)
 SB Context (sb_Context)

In computer science, an opaque data type is a data type that is incompletely defined in an interface, so that its values can only be manipulated by calling subroutines that have access to the missing information. The concrete representation of the type is hidden from its users; a data type whose representation is visible is called transparent. Typical examples of opaque data types include handles for resources provided by an operating system to application software. For example, the POSIX standard for threads defines an application programming interface based on a number of opaque types that represent threads or synchronization primitives like mutexes or condition variables. An opaque pointer is a special case of an opaque data type: a data type that is declared to be a pointer to a record or data structure of some unspecified data type.
For example, the standard library that forms part of the specification of the C programming language provides functions for file input and output that return or take values of type "pointer to FILE" that represent file streams (see C file input/output), but the concrete implementation of the type FILE is not specified.
  • 10. In some protocols, handles are passed from a server to the client. The client passes the handle back to the server at some later time. Handles are never inspected by clients; they are obtained and submitted. That is, handles are opaque. The xdr_opaque() primitive is used for describing fixed-sized opaque bytes:

bool_t xdr_opaque(xdrs, p, len)
    XDR *xdrs;
    char *p;
    u_int len;

The parameter p is the location of the bytes, and len is the number of bytes in the opaque object. By definition, the actual data contained in the opaque object is not machine-portable. The SunOS/SVR4 system has another routine for manipulating opaque data. This routine, xdr_netobj(), sends counted opaque data, much like xdr_opaque(). The following code example illustrates the syntax of xdr_netobj():

struct netobj {
    u_int n_len;
    char  *n_bytes;
};
typedef struct netobj netobj;

bool_t xdr_netobj(xdrs, np)
    XDR *xdrs;
    struct netobj *np;

The xdr_netobj() routine is a filter primitive that translates between variable-length opaque data and its external representation. The parameter np is the address of the netobj structure containing both a length and a pointer to the opaque data. The length may be no more than MAX_NETOBJ_SZ bytes. This routine returns TRUE if it succeeds, FALSE otherwise.

B. What are the major advantages and disadvantages of using a port mapper instead of well-known ports?

Port mapping is a name given to the combined technique of:

1. translating the address or port number of a packet to a new destination;
2. possibly accepting such packet(s) in a packet filter (firewall);
  • 11. 3. forwarding the packet according to the routing table.

The destination may be a predetermined network port (assuming protocols like TCP and UDP, though the process is not limited to these) on a host within a NAT-masqueraded, typically private, network, based on the port number on which the packet was received at the gateway from the originating host. The technique is used to permit communications by external hosts with services provided within a private local area network.

Advantages: Port mapping basically allows an outside computer to connect to a computer in a private local area network. Commonly forwarded ports include port 21 for FTP access and port 80 for web servers. To achieve such results, operating systems like Mac OS X and BSD (Berkeley Software Distribution) use ipfirewall (ipfw), pre-installed in the kernel, to conduct port forwarding; Linux, on the other hand, uses iptables.

Disadvantages: There are a few downsides or precautions to take with port forwarding.

 Only one port can be used at a time by one machine.
 Port forwarding also allows any machine in the world to connect to the forwarded port at will, thus making the network slightly less secure.
 The port forwarding technology itself is built in such a way that the destination machine will see the incoming packets as coming from the router rather than from the machine that originally sent them.

Q5. A. Compare DCE RPC to ONC RPC. How do the two differ?

Remote procedure call is a method of supporting the development of applications that require processes on different systems to communicate and coordinate their activities. This article pursues a comparison of three important RPCs, namely Open Network Computing (ONC), the Distributed Computing Environment (DCE), and the ISO specification of an RPC.
A general discussion of the RPC model and its implementation is followed by a description of the features and capabilities of the three RPCs: the model used, the mechanism of information transfer, and the call semantics. The implementations of ONC and DCE are discussed. Whereas a normal procedure call takes place between the procedures of a single process in the same memory
  • 12. space on a single system, an RPC takes place between a client and a server, which are two different systems connected to a network. An important feature discussed while describing the RPC model is that of data representation. The client stub creates a message packet to be sent to the server by converting the input arguments from the local data representation to a common data representation. On the server side, when the server stub is called by the server runtime, the input arguments are taken from the message and converted from the common data representation to the local data representation.

ONC RPC: This was one of the first commercial implementations of RPC. Although a modified implementation called TI-RPC is available, which differs in being able to use different transport-layer protocols, the success of the more widely used original RPC is due to the wide use of NFS (Network File System, a client/server application that allows a user to view and optionally store/update files on a remote computer). ONC supports at-most-once and idempotent call semantics. It also supports no-response and broadcast RPC. The types of authentication supported are none (the default), UNIX user ID/group ID, and secure RPC. Secure RPC uses DES (the Data Encryption Standard, developed at IBM, which has more than 72 quadrillion possible encryption keys). ONC RPC has a reduced procedure declaration, supporting only one input parameter and one output parameter. The RPC language compiler is called rpcgen, and it generates an include file, a client stub, and a server stub. The client stub produced by rpcgen is incomplete, and in some cases the client stub code needs to be finished by the developer. The server stub produced is nearly complete.

B. Examine the specifications for NFS versions 2 and 3. What are the chief differences? Does version 3 make any changes that are visible or important to a programmer?
The NFS protocol provides transparent remote access to shared file systems across networks. The NFS protocol is designed to be machine, operating system, network architecture, security mechanism, and transport protocol independent. This independence is achieved through the use of ONC Remote Procedure Call (RPC) primitives built on top of an eXternal Data Representation (XDR). NFS protocol Version 2 is specified in the Network File System Protocol Specification. Version 2 of the protocol (defined in RFC 1094, March 1989) originally operated only over UDP. Its designers meant to keep the server side stateless, with locking (for example) implemented outside of the core protocol. People involved in the creation of NFS Version 2 include Russel Sandberg, Bob Lyon, Bill Joy, Steve Kleiman, and others. The decision to make
  • 13. the file system stateless was a key decision, since it made recovery from server failures trivial: all network clients would freeze up when a server crashed, but once the server repaired the file system and restarted, all the state needed to retry each transaction was contained in each RPC, which was retried by the client stub(s). This design decision allowed UNIX applications (which could not tolerate file server crashes) to ignore the problem.

Version 3 added the following:

 support for 64-bit file sizes and offsets, to handle files larger than 2 gigabytes (GB);
 support for asynchronous writes on the server, to improve write performance;
 additional file attributes in many replies, to avoid the need to re-fetch them;
 a READDIRPLUS operation, to get file handles and attributes along with file names when scanning a directory;
 assorted other improvements.

At the time of the introduction of Version 3, vendor support for TCP as a transport-layer protocol was increasing. While several vendors had already added support for NFS Version 2 with TCP as a transport, Sun Microsystems added support for TCP as a transport for NFS at the same time it added support for Version 3. Using TCP as a transport made using NFS over a WAN more feasible.

Q6. A. Is it possible to make the server side of the dictionary program concurrent? Why or why not?

Writing correct concurrent programs is harder than writing sequential ones, because the set of potential risks and failure modes is larger: anything that can go wrong in a sequential program can also go wrong in a concurrent one, and with concurrency come additional hazards not present in sequential programs, such as race conditions, data races, deadlocks, missed signals, and livelock. Testing concurrent programs is also harder than testing sequential ones. This is trivially true: tests for concurrent programs are themselves concurrent programs.
But it is also true for another reason: the failure modes of concurrent programs are less predictable and repeatable than those of sequential programs. Failures in sequential programs are deterministic; if a sequential program fails with a given set of inputs and initial state, it will fail every time. Failures in concurrent programs, on the other hand, tend to be rare probabilistic events. Because of this, reproducing failures in concurrent programs can be maddeningly difficult. Not only might the failure be rare, and therefore not manifest itself frequently, but it might not occur at all in certain platform configurations, so a bug that happens daily at your customer's site might never happen at all in your test lab. Further, attempts to debug or monitor the program can introduce timing or synchronization artifacts that prevent the bug from appearing at all. As with Heisenberg's uncertainty principle, observing the state of the system may in fact change it.
  • 14. B. Under what condition will a read from a terminal return the value 0?

After a disconnect, any subsequent read from the terminal device shall return the value zero, indicating end-of-file. Thus, processes that read a terminal file and test for end-of-file can terminate appropriately after a disconnect. If the [EIO] condition as specified in read() also exists, it is unspecified whether an end-of-file condition or [EIO] is returned.

C. If you had a choice of debugging a deadlock problem or a livelock problem, which would you choose? Why? How would you proceed?