Definition: In network communication, packet switching is a technique in which data is transferred in small, manageable chunks called packets. No dedicated circuit is reserved, so a single link can carry many different transmissions at the same time. Each packet sent through the network contains the source address, the destination address, a portion of the original data, and other control information. Once the packets reach their destination, they are reordered and reassembled to recover the original data sent by the source node.
Packet switching can send packets over any available path, rather than a fixed path as in circuit switching: once the data is divided into packets, each packet can take its own route. One packet might travel over path 1 while another travels over path 2; the choice is made by routing algorithms based on current network traffic. Routers and switches on the network use the header information to determine an efficient route for moving each packet toward its destination. Packet switching therefore allows efficient use of network bandwidth.
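The split-and-reassemble idea above can be sketched in a few lines of Python. This is an illustrative model, not a real protocol: the field names (`src`, `dst`, `seq`, `payload`) and the `packetize`/`reassemble` helpers are invented for this example.

```python
def packetize(data: bytes, payload_size: int, src: str, dst: str):
    """Split data into packets, each carrying header info and a sequence number."""
    packets = []
    for seq, offset in enumerate(range(0, len(data), payload_size)):
        packets.append({
            "src": src,      # source node address
            "dst": dst,      # destination node address
            "seq": seq,      # used to reorder packets at the destination
            "payload": data[offset:offset + payload_size],
        })
    return packets

def reassemble(packets):
    """Reorder packets by sequence number and rebuild the original data."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

pkts = packetize(b"hello, packet switching", payload_size=5, src="A", dst="B")
# Packets may arrive out of order; reversing the list simulates that.
assert reassemble(list(reversed(pkts))) == b"hello, packet switching"
```

Even when the packets arrive in reverse order, the sequence numbers in the headers let the destination rebuild the original byte stream.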
Packet Switching Modes:
Two major packet switching modes exist: datagram switching (the connectionless model used by IP, on top of which transport protocols such as UDP run) and virtual circuit switching. In datagram switching, the destination address in each packet determines the next hop; routes may change during a session, and packets may be delivered out of order. In virtual circuit switching, each packet carries a tag (a virtual circuit ID) that determines the next hop, and a fixed path is established at call setup time and remains fixed for the duration of the call. Routers maintain per-call state, and packets are therefore delivered in order.
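The difference between the two modes shows up in what state a router keeps. A minimal sketch, with made-up table contents and link names purely for illustration: a datagram router looks up the destination address, while a virtual-circuit router looks up the (incoming link, VC ID) pair installed at call setup.

```python
# Datagram switching: the forwarding table is keyed by destination address.
# Each packet is routed independently; no per-call state is kept.
datagram_table = {"B": "link_to_router2", "C": "link_to_router3"}

def forward_datagram(packet):
    """Choose the next hop from the packet's destination address alone."""
    return datagram_table[packet["dst"]]

# Virtual circuit switching: the table is keyed by (incoming link, VC ID),
# i.e. per-call state installed when the circuit was set up.
vc_table = {("in_link1", 7): ("out_link2", 12)}  # maps to (outgoing link, new VC ID)

def forward_vc(in_link, vc_id):
    """Look up the per-call entry; the packet leaves with a rewritten VC ID."""
    return vc_table[(in_link, vc_id)]

assert forward_datagram({"dst": "B"}) == "link_to_router2"
assert forward_vc("in_link1", 7) == ("out_link2", 12)
```

Note that the virtual-circuit lookup also rewrites the tag: VC IDs have link-local meaning, which is why the table stores an outgoing ID alongside the outgoing link.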
Store And Forward:
Packets are transmitted over each communication link at the full transmission rate of the link. Most packet switches use store-and-forward transmission at each node: the node first receives the entire packet (store) and then transmits it to the next node (forward).
Store And Forward Delay:
Store-and-forward transmission introduces a delay at each node. If a packet is L bits long and is to be forwarded onto an outbound link of R bps, the store-and-forward delay is L/R seconds. Consider the network in the diagram given below; it will help in understanding the concept:
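The L/R formula extends naturally to a path of several links: with store-and-forward, each link adds a full L/R before the next transmission can begin. A small sketch (the numeric values are example figures, and propagation, queuing, and processing delays are ignored):

```python
def store_and_forward_delay(L_bits, R_bps, links):
    """End-to-end store-and-forward delay over `links` links of rate R_bps,
    for a packet of L_bits. Ignores propagation, queuing, and processing."""
    return links * L_bits / R_bps

# Example: a 7.5 Mb packet over three 1.5 Mbps links (source -> 2 routers -> dest)
delay = store_and_forward_delay(7.5e6, 1.5e6, 3)
print(delay)  # 15.0 seconds: 5 s per link, paid once at every hop
```

The key point is the multiplier: a single link would take only L/R = 5 seconds, but because each router must receive the whole packet before forwarding it, the delay is paid again at every hop.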
Within each router there are multiple buffers (also called queues): each link has an input buffer and an output buffer. The input buffer stores packets that have just arrived on that link. If a packet needs to be transmitted across a link but finds the link busy transmitting another packet, it must wait in the output buffer. Because buffers are finite, an arriving packet that finds the buffer full is discarded, and packet loss occurs.
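The output-buffer behavior can be modeled as a finite FIFO queue that drops packets on overflow. The `OutputBuffer` class and its capacity are invented for this sketch:

```python
from collections import deque

class OutputBuffer:
    """Finite FIFO output queue; packets arriving to a full queue are dropped."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        """Queue a packet for transmission, or drop it if the buffer is full."""
        if len(self.queue) >= self.capacity:
            self.dropped += 1   # buffer full: this is packet loss
            return False
        self.queue.append(packet)
        return True

    def transmit(self):
        """Send the packet at the head of the queue, if any."""
        return self.queue.popleft() if self.queue else None

# Three packets arrive while the link is busy, but the buffer holds only two.
buf = OutputBuffer(capacity=2)
for p in ["p1", "p2", "p3"]:
    buf.enqueue(p)
assert buf.dropped == 1          # p3 found the buffer full and was lost
assert buf.transmit() == "p1"    # packets leave in arrival (FIFO) order
```

This is exactly the scenario described above: loss happens not because a packet is corrupted, but because it arrives at a moment when the queue has no room left.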