In the past, all LANs were shared. The bus topology behaved much like a hub: every unicast packet is sent out of all connected interfaces (except the one it came in on), as if it were a broadcast. The destination MAC address is what lets each host decide whether the frame is meant for it.
In a hub, there is only one collision domain. That is to say, communication is half-duplex and only one device may transmit at a time. If two devices begin transmission at the same time, there is a collision. That is why we need CSMA/CD to detect and resolve collisions. If two hosts attempt to communicate at the same time, each host gets only 50% of the bandwidth on average.
When a collision is detected, a jam signal is generated, and both parties wait for a random amount of time before retransmission.
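The random wait described above is the truncated binary exponential backoff from IEEE 802.3. A minimal sketch of the slot-count calculation (the 51.2 µs slot time is the 512-bit-time figure for 10 Mbps Ethernet):

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, in microseconds

def backoff_delay_us(attempt):
    """Return a random backoff delay (in microseconds) after the
    given collision attempt number (1-based). The station picks a
    random slot count in [0, 2^k - 1], where k = min(attempt, 10)."""
    k = min(attempt, 10)                 # the exponent is capped at 10
    slots = random.randint(0, 2 ** k - 1)  # random number of slot times
    return slots * SLOT_TIME_US

# After the first collision, each host waits either 0 or 1 slot time,
# so there is still a 50% chance both pick the same slot and collide again.
print(backoff_delay_us(1) in (0.0, SLOT_TIME_US))  # True
```

Because the range of slots doubles after each successive collision, repeated collisions become progressively less likely.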
In a full-duplex environment, both hosts can communicate at 100% of the link's bandwidth. That is to say, in full-duplex Fast Ethernet, a host can send and receive at 100 Mbps each, for 200 Mbps of aggregate throughput. There can be no collisions in a full-duplex environment.
Latency is the time it takes for a packet to travel from source to destination. Sometimes latency is given as the round-trip time (RTT), the time taken from source to destination and back; a weighted average of successive RTT measurements is known as the SRTT (Smoothed Round-Trip Time).
Segmentation of the collision domain can be performed through switches (or traditionally, bridges). Bridges connect to hubs which in turn connect to devices. Each port of the bridge (or switch) is a collision domain. As the number of users in each collision domain decreases, so do the collisions.
Switches work like bridges, in that they maintain a list of Layer 2 addresses called the CAM table. The CAM table is then used to determine whether a frame should be forwarded out of a particular port. Bridges, however, are software based, so they add latency when forwarding across different segments.
The segmentation of collision domains by a bridge or a switch is known as microsegmentation. In microsegmentation, the switch effectively creates a point-to-point segment between two communicating hosts. In a switch, every station is given dedicated bandwidth, and there are almost no collisions.
Ethernet Switch latency is the time it takes for a frame to enter and exit a switch. This is said to be negligible as switches perform switching at "wire speed", but as traffic increases, the switch may need to buffer some frames, which introduces latency. Latency is also incurred while the switch decides which port to forward a frame out of.
Now, even though switches segment collision domains, they do not segment broadcast domains. That is to say, a broadcast will still reach everyone connected to the switch. A normal switch accounts for one LAN, so we would need to use Routers to separate hosts into multiple LANs. This is the segmentation of Broadcast Domains through the use of Routers.
As we recall, there are three modes of IPv4 communication:
Unicast - One to One communication
Broadcast - One to All communication (destination MAC address is all binary 1s, i.e. FF:FF:FF:FF:FF:FF)
Multicast - One to Group communication
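The three modes above can be distinguished at Layer 2 just by inspecting the destination MAC address: all-ones means broadcast, and a set I/G bit (the least significant bit of the first octet) means a group (multicast) address. A small sketch:

```python
def classify_dest_mac(mac):
    """Classify an Ethernet destination MAC address as
    'broadcast', 'multicast', or 'unicast'."""
    octets = [int(part, 16) for part in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"      # all 48 bits set to binary 1
    if octets[0] & 0x01:        # I/G bit set => group address
        return "multicast"
    return "unicast"

print(classify_dest_mac("ff:ff:ff:ff:ff:ff"))  # broadcast
print(classify_dest_mac("01:00:5e:00:00:01"))  # multicast (IPv4 group range)
print(classify_dest_mac("00:1a:2b:3c:4d:5e"))  # unicast
```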
Frame Forwarding refers to a Frame being forwarded out of a particular port. Frame Filtering refers to a Frame being prevented from exiting a particular port. Switches and bridges perform both.
From this point onwards, switches and bridges will be used interchangeably unless specified.
Switches perform Frame Forwarding and Filtering only if they know enough information to do so. This information, most importantly the MAC addresses associated with each port, is stored in the CAM (Content Addressable Memory) table. The CAM table is actually the MAC Address Table being stored in the CAM. Switches can perform filtering based on any Layer 2 field.
Initially, all MAC address tables are empty. A switch learns MAC addresses from the Source MAC address of each Frame. Frames are forwarded out of all ports (except the one they came in on) if the Frame is a Broadcast, a Multicast or an Unknown Unicast.
A switch only Forwards a Frame if it determines that the destination lies on a different interface from the one the Frame came in on. If the destination MAC belongs to the port it came in on, the Frame is said to be Filtered.
Let's look at an example CAM Table:
MAC - Port
A - 1
B - 3
C - 4
Assume that A sends a packet to B. The switch does not need to learn A because it already knows it. Since it knows the destination, the Frame is forwarded out of port 3.
If A sends a packet to D, the switch would check its CAM table and realize that it doesn't know the destination. The Frame is forwarded out of all ports except the one it came in from.
When D replies, the switch learns D's MAC address from the interface the reply came in on. Since the destination of the reply is A, which is known, it is forwarded only out of port 1.
The end result would look like this:
MAC - Port
A - 1
B - 3
C - 4
D - 2
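The learn-then-forward sequence above can be sketched as a tiny transparent-bridge model (a hypothetical illustration, not any vendor's implementation):

```python
class Switch:
    """Minimal sketch of transparent-bridge MAC learning and forwarding."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.cam = {}  # MAC -> port

    def receive(self, src, dst, in_port):
        """Learn the source MAC, then decide where to send the frame.
        Returns the list of output ports ([] means the frame is filtered)."""
        self.cam[src] = in_port              # learn/refresh the source MAC
        out = self.cam.get(dst)
        if out is None:                      # unknown unicast: flood
            return [p for p in range(1, self.num_ports + 1) if p != in_port]
        if out == in_port:                   # destination on same port: filter
            return []
        return [out]                         # known destination: forward

sw = Switch(4)
sw.cam.update({"A": 1, "B": 3, "C": 4})      # the initial CAM table above
print(sw.receive("A", "B", 1))   # [3]        known destination
print(sw.receive("A", "D", 1))   # [2, 3, 4]  unknown unicast, flooded
print(sw.receive("D", "A", 2))   # [1]        D is learned, reply sent to port 1
print(sw.cam["D"])               # 2          the end-result CAM entry
```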
Switches can perform symmetric switching (switching between interfaces of the same speed) or asymmetric switching (switching between interfaces of different speeds). A real-life example of asymmetric switching is using a Gigabit link to connect to an uplink switch while all the computers are connected to Fast Ethernet ports. Asymmetric switching requires memory buffering.
Memory buffering can be port-based. Port-based buffering is done per incoming port: each incoming frame is queued on its ingress port. If the queuing mechanism is FIFO, a single frame whose destination port is busy can block every frame behind it, even frames bound for idle ports (head-of-line blocking).
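The blocking effect of a per-port FIFO can be sketched as follows (a hypothetical model for illustration): the frame at the head of the queue is destined for a busy port, so nothing behind it moves, even though the other destinations are idle.

```python
from collections import deque

def drain(queue, busy_ports):
    """Forward frames from the head of a FIFO port buffer until the head
    frame's destination port is busy. Frames are (dst_port, name) tuples.
    Returns the names of the frames forwarded."""
    sent = []
    while queue and queue[0][0] not in busy_ports:
        dst, name = queue.popleft()
        sent.append(name)
    return sent

q = deque([(3, "f1"), (2, "f2"), (4, "f3")])
# Port 3 is busy: f1 blocks the whole queue, so nothing is sent,
# even though ports 2 and 4 (the destinations of f2 and f3) are idle.
print(drain(q, busy_ports={3}))  # []
print(len(q))                    # 3
```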
Another type of buffering is shared buffering, where all frames use a common memory pool. Frames in the buffer are linked to the appropriate destination port. This helps balance between ports of different speeds, because a burst of traffic toward a slow or busy port can dynamically claim more of the common buffer instead of overflowing a small fixed per-port queue.
There are also different ways of forwarding frames. A cut-through switch forwards a Frame immediately after reading the destination MAC. This results in lower latency. However, there is no error checking.
Another way is the Store-and-Forward method, which requires a switch to fully receive a Frame before it is processed. This allows proper processing of the Frame, such as CRC checks. The Frame is fully copied onboard and checked via CRC, the output interface (or interfaces) is determined, and then it is forwarded. This is the most reliable method, but it results in the highest latency.
The third mode is Fragment-Free switching, which waits for the first 64 bytes of the Frame (the minimum valid Ethernet frame size) before forwarding. Because most collision fragments are shorter than 64 bytes, this filters out most collision-damaged frames while keeping latency close to cut-through.
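The three forwarding modes differ only in how much of the frame must be received before forwarding can begin, which can be summarized in a short sketch (illustrative thresholds: 6 bytes for the destination MAC, 64 bytes for the minimum valid frame):

```python
# Bytes a switch must receive before it may start forwarding, per mode.
# None means the entire frame, whatever its length.
BYTES_BEFORE_FORWARDING = {
    "cut-through": 6,          # just the destination MAC
    "fragment-free": 64,       # minimum frame size: filters collision fragments
    "store-and-forward": None, # the whole frame, so the CRC can be checked
}

def can_start_forwarding(mode, bytes_received, frame_len):
    """Return True once enough of the frame has arrived for this mode."""
    threshold = BYTES_BEFORE_FORWARDING[mode]
    if threshold is None:
        return bytes_received >= frame_len  # must buffer the entire frame
    return bytes_received >= threshold

print(can_start_forwarding("cut-through", 6, 1500))        # True
print(can_start_forwarding("fragment-free", 60, 1500))     # False
print(can_start_forwarding("store-and-forward", 1500, 1500))  # True
```

This makes the latency/reliability trade-off explicit: the lower the threshold, the sooner forwarding starts, but the less error checking is possible.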
When we are determining bandwidth in shared environments, remember to count the switch as a device (i.e. If there are 10 computers, divide the bandwidth by 11).
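The rule above as a one-liner (the divisor counts the switch/uplink port as an extra device on the shared segment):

```python
def avg_bandwidth_mbps(link_mbps, num_computers):
    """Average per-device bandwidth on a shared segment,
    counting the switch as one more device on the wire."""
    return link_mbps / (num_computers + 1)

print(avg_bandwidth_mbps(100, 10))  # 100 Mbps shared by 10 PCs + the switch
```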
Friday, February 25, 2011