The first thing to understand is packet structure. As each packet passes through the network, additional bits of information are added as it moves through the layers of the OSI model. At the bottom of the stack, the physical layer, the complete packet has been formed: all network and MAC addresses have been added, segment placement counters are included for reassembly at the receiving device, and the transmitting network card sends the packet out to the network cabling with the proper encoding scheme for electrical transmission on the cable. Each of these steps, at each layer, has its own intricacies and, as such, its own inherent potential for issues.
Let's assume that the user is requesting a 1 MB file from the network. Several packets will be used to initiate the request; these depend largely on your network operating system and are not covered here. The actual file transmission will depend on your topology and protocols. Assuming a standard frame, the payload (the data portion of your packet) is carried in bytes 42-1495, or 1,454 bytes maximum. The first 40 octets of an IP packet are reserved for overhead: the portion of the packet that carries protocol information, sender and receiver addresses, and other transmission information such as sequencing to assure that packets are sent and received in order. Smaller payloads are either padded (stuffed with zeros) until a minimum size is reached or, depending on the protocol, transmitted as-is with a length field included in the frame.
Therefore a 1 MB file will require roughly 704 frames to transmit (1,024,000 / 1,454 ≈ 704.3). Each frame also carries overhead for destination, source, length, DSAP, SSAP and control in a raw frame. With the added overhead, the fully transmitted file becomes 1,053,184 bytes (704 × 1,496). If this seems simple -- just wait -- we have to request the file (more packets), find the file on the network (more packets), report that the file is there (more packets) and verify access to the file. Get the picture?
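The frame arithmetic above can be sketched in a few lines of Python. The constants follow the article's figures (42 bytes of per-frame overhead, a 1,454-byte payload, 1,496-byte frames); note that rounding up, the last partial payload still needs a frame of its own, so the count comes out to 705 rather than the rounded 704.

```python
import math

OVERHEAD_BYTES = 42    # header bytes preceding the payload (article's figure)
PAYLOAD_BYTES = 1454   # maximum payload per frame
FRAME_BYTES = OVERHEAD_BYTES + PAYLOAD_BYTES  # 1,496 bytes per frame on the wire

def frames_for(file_bytes: int) -> int:
    """Number of frames needed to carry file_bytes of data."""
    return math.ceil(file_bytes / PAYLOAD_BYTES)

file_size = 1_024_000  # the 1 MB file from the example
frames = frames_for(file_size)
wire_bytes = frames * FRAME_BYTES

print(frames)      # 705 (704 full frames plus one partial)
print(wire_bytes)  # 1,054,680 bytes on the wire
```

And this still counts only the file itself, not the request, lookup and access-verification packets mentioned above.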
In an Ethernet environment, the above would be simple for one workstation and one server. However, Ethernet uses the CSMA/CD protocol. Carrier Sense Multiple Access with Collision Detection allows a workstation to listen for an idle cable and then send its communication. With multiple workstations on the network, one workstation has no way of knowing whether another happens to send at the same time (multiple access). This creates what is known as a collision domain: all workstations whose packets could collide on the same network segment. The process works much like a group of people wanting to speak: each listens for quiet and then begins speaking. If someone else speaks at the same time, each quiets down and waits for silence before speaking again. The same principle governs retransmissions. On a busy network, this file may have to be retransmitted several times before successful completion. In a full-duplex network this is not an issue; however, if you are running at half duplex due to design, electronics limitations, or because you have autonegotiated down due to poor cabling, collisions will cause retransmissions and can eat away at your bandwidth.
Token-ring does not exhibit this problem because a workstation must receive a token (the right to transmit) from the network before sending. This is analogous to the same room full of people, but with a moderator at the front of the room calling on people to speak. Switches and full-duplex operation on your network can minimize or eliminate these retransmissions; however, a network that is too busy or has other problems will still exhibit retransmissions, just for other reasons. Today's version of Ethernet resembles token-ring more and more: we prioritize applications and packets, causing some to flow at a higher priority than others. The data packets that are throttled back may retransmit continuously if the network is too busy to handle all the requests.
One needs to determine what the users will be transmitting. This is especially important in database operations, as these types of operations have critical requirements; if a data file is shared, only one user at a time can make changes to a record. Very large files, such as CAD (computer-aided drafting) files, graphics-intensive files and executable files stored on the network, are demanding of bandwidth. To properly size switches and speed links, it is wise to assume the largest file will be transmitted on a regular basis. This is critical if you are adding voice, video and other real-time applications. One way to size these is to look at the probability of zero frames -- in other words, the probability that the network will have no frames in transit at any given time.
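One common way to put a number on the "probability of zero frames" is to model frame arrivals as a Poisson process -- an assumption of this sketch, not something the text specifies -- in which case the probability of seeing no frames in a window of t seconds is e^(-λt), where λ is the average frame rate.

```python
import math

def prob_zero_frames(frames_per_second: float, interval_s: float) -> float:
    """P(zero frame arrivals in interval_s), assuming Poisson arrivals."""
    return math.exp(-frames_per_second * interval_s)

# e.g. 50 frames/s on a quiet segment, observed over a 10 ms window
print(round(prob_zero_frames(50, 0.010), 3))  # e^(-0.5) ≈ 0.607
```

As load rises, this probability collapses toward zero, which is a compact way of saying the segment is rarely idle when a real-time stream needs it.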
For example, if users will be opening and closing records in a database and the average record length is 512 bytes, a good capacity planner will examine the number of records their users open and close per day. The equation may look something like this:
10 users * 512 bytes * 100 transactions per hour * 8 hours per day = 4,096,000 bytes per day

4,096,000 bytes / 1,454-byte payload = 2,817.06 frames per day
The result is expressed in frames because most switch manufacturers express the speed of their backplanes in frames. This is a very simplistic view based on the initial transaction -- and note that because each 512-byte record fits in a single frame, counting one frame per transaction (10 * 100 * 8 = 8,000 frames per day here) gives the more realistic figure; dividing total bytes by the payload size understates the frame count for small records. The amount calculated should be doubled to allow for basic network printing functions. For files with large graphics, where the print file is three or four times greater than the file size, the result should be scaled accordingly.
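The department sizing equation can be wrapped in a small helper. `frames_per_day` is a hypothetical function name; the 2x print multiplier follows the rule of thumb above, and the inputs are the article's example department.

```python
PAYLOAD_BYTES = 1454  # maximum payload per frame, per the earlier discussion

def frames_per_day(users: int, bytes_per_txn: int, txns_per_hour: int,
                   hours: float, print_factor: float = 2.0) -> float:
    """Estimate daily frame load for a department, scaled for printing."""
    total_bytes = users * bytes_per_txn * txns_per_hour * hours
    data_frames = total_bytes / PAYLOAD_BYTES
    return data_frames * print_factor

# 10 users * 512 bytes * 100 transactions/hour * 8 hours
print(round(frames_per_day(10, 512, 100, 8, print_factor=1.0), 2))  # 2817.06
print(round(frames_per_day(10, 512, 100, 8), 2))  # doubled for printing
```

Running it per department and summing the results gives a first-pass number to compare against a switch backplane rating.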
This equation must be calculated for each department. E-mail and other functions must also be addressed. It is important to look not only at daily transactions but also to plan for end-of-month and end-of-year functions. One will also want to examine after-hours functions, such as network backups, to assure that everything designed to process after hours can finish in its allotted time frame. As you can see, 10 Mbps network segments can easily become bogged down, particularly on very large networks with many users. Attention must also be paid to wide area network (WAN) links, where bandwidth is expensive and less plentiful.
When planning capacity, overall utilization statistics are relatively meaningless on their own. If I examine all the packets through a port for eight hours, I cannot assume that the user was actually using the machine for the entire eight-hour period. Within the span of this statistic, one needs to factor out any periods of inactivity (for instance, lunch breaks, or time a user is on the phone and not working).
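Factoring idle time out of a utilization figure is simple arithmetic; the sketch below assumes an eight-hour window and an illustrative 1.5 hours of idle time, neither of which comes from measured data.

```python
def effective_utilization(bytes_observed: int, link_bps: int,
                          window_hours: float, idle_hours: float) -> float:
    """Link utilization computed over only the hours the user was active."""
    active_seconds = (window_hours - idle_hours) * 3600
    capacity_bytes = link_bps / 8 * active_seconds  # bits/s -> bytes over the window
    return bytes_observed / capacity_bytes

# 500 MB seen on a 10 Mb/s port over 8 hours, with ~1.5 hours of it idle
print(round(effective_utilization(500_000_000, 10_000_000, 8, 1.5), 4))  # 0.0171
```

The same traffic averaged over the full eight hours would look even lighter, which is exactly how raw utilization statistics understate the load a link sees while someone is actually working.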