Basic network concepts and the OSI model in simple terms

In this chapter of the journey to learn computer networking technology we explain the OSI Reference Model in simple terms, and expand on the different layers of the OSI model.

The OSI model defines the basic building blocks of computer networking, and is an essential part of a complete understanding of modern TCP/IP networks. The theoretical OSI Reference Model is the creation of the Geneva-based International Organization for Standardization (ISO), an independent, non-governmental membership organization that creates standards in numerous areas of technology and industry.

Why is the OSI Reference Model important?

An understanding of the concepts of the OSI Reference Model is absolutely necessary for someone learning the role of the Network Administrator or the System Administrator. The OSI model is important because many certification tests use it to determine your understanding of computer networking concepts.

The Open Systems Interconnection Reference Model (OSI Reference Model or OSI Model) was originally created as the basis for designing a universal set of protocols called the OSI Protocol Suite.  This suite never achieved widespread success, but the model became a very useful tool for both education and development. The model defines a set of layers and a number of concepts for their use that make understanding networks easier.

The Internet and the TCP/IP family of protocols evolved separately from the OSI model. You will often find teachers and websites making direct comparisons between the two models. Don't spend too much time trying to compare one versus the other. The two models were developed independently of each other to describe the standards of computer networking.

The TCP/IP Reference Model is not merely a reduced version of the OSI Reference Model with a straight-line comparison of the four layers of the TCP/IP model to the seven layers of the OSI model. The TCP/IP Reference Model does NOT always line up neatly against the OSI model. People try too hard to make neat comparisons of one model versus the other when there is not always a neat one-to-one correlation of each aspect.

Simply put, the OSI Reference Model is a THEORETICAL model describing a standard of computer networking. The TCP/IP Reference Model is based on the ACTUAL standards of the internet, which are defined in the collection of Request for Comments (RFC) documents started by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications, as discussed in the article What is the difference between the Internet and OSI reference model.

To learn more about the evolution of the TCP/IP model check out the Geek History article: The 1980s internet protocols become universal language of computers

If you are looking for something less technical that focuses more on using a computer network, rather than understanding the core concepts of how it works, please visit our companion website The Guru 42 Universe where we discuss managing technology from the perspective of a business owner or department manager.

Check out the section Business success beyond great ideas and good intentions and specifically the article The System Administrator and successful technology integration.


The role of the Network Administrator or the System Administrator

On a small to mid size network there may be little, if any, distinction between a Systems Administrator and a Network Administrator, and the tasks may all be the responsibility of a single position. As the size of the network grows, the distinction between the areas becomes more clearly defined.

In larger organizations the administrator level technology personnel typically are not the first line of support that works with end users, but rather only work on break and fix issues that could not be resolved at the lower levels.

Network administrators are responsible for making sure computer hardware and the network infrastructure itself is maintained properly. The term network monitoring describes the use of a system that constantly monitors a computer network for slow or failing components and that notifies the network administrator in case of outages via email, pager or other alarms.

The typical Systems Administrator, or sysadmin, leans towards the software and NOS (Network Operating System) side of things. Systems Administrators install software releases, upgrades, and patches, resolve software related problems, and perform system backups and recovery.

What is the difference between networking and telecommunications?

In a large organization the distinction of telecommunications and networking can vary depending how the organization is structured.  I've worked in smaller companies where anything technology related came under the responsibility of the IT (information technology) department. In larger organizations the roles get a bit more defined and separated.  For instance, in a large organization someone specializing in telecommunications may have little or no role in understanding computer servers and network operating systems.

I am answering this from my very personal perspective.  I began working in the 1970s in telecommunications.  In the military that meant I installed and repaired radio communications and telephone equipment.  In the commercial world I had an FCC (Federal Communications License) which allowed me to work on radio communications equipment.

In the 1990s I began working in computer networking, which would be IT (information technology). I see the distinction there as information being data driven. My responsibilities are computer servers and network operating systems. The basic premise of a computer network is to share a resource. The device which allows the resource to be shared is a server. For instance, a print server allows a printer to be shared; a file server allows files to be shared.

In my current position my title includes "telecommunications and networking." My telecommunications responsibilities include telephones. Now with IP (internet protocol) based phones, you face the question: is it a phone system problem, or a network problem? The separation of responsibilities was a lot "cleaner" before IP based phones. My telecommunications responsibilities also include things like the internal network wiring and dealing with the external issues regarding the connectivity to the building. My networking (IT) responsibilities are the maintenance of the computer servers and the network operating systems that allow resources to be shared.

 



Basic computer networking explained in simple terms


Whether you are a business manager learning the language of technology to better communicate with IT staff, or just beginning your IT career, don't overlook a basic understanding of computer networking.

What is computer networking?

The simplest definition of a computer network is a group of computers that are able to communicate with one another and share a resource. A computer network is a collection of hardware and software that enables a group of devices to communicate and provides users with access to shared resources such as data, files, programs, and operations.

Common networking terms

Each device on a network is called a node. In order for communication to take place, you need software, the network operating system (NOS), and a means of communication between network computers, known as the media.

In computer networking the term media refers to the actual path over which an electrical signal travels as it moves from one component to another. The media can be physical such as a specialized cable or various forms of wireless media such as infrared transmission or radio signals.  A network interface card (NIC) enables two computers to send and receive data over the network media.

What is a protocol?

A network protocol is an agreed-upon set of rules that define how networked computers communicate. Different types of computers, using different operating systems, can communicate with each other, and share information, as long as they follow the network protocols.

The Internet protocol suite commonly known as TCP/IP is a set of communications protocols used for the Internet and similar networks. You will often see the terms protocol suite or protocol stack used interchangeably. The protocol stack is an implementation of a computer networking protocol suite.

What is a LAN (Local Area Network) versus a WAN (Wide Area Network)?

In a typical LAN (Local Area Network) a group of computers and devices are connected together by a switch, or stack of switches, using a private addressing scheme as defined by the TCP/IP protocol. You may not be familiar with the specific function of a network switch or the definition of a private addressing scheme; those are more advanced topics of computer networking.

Private addresses are unique in relation to other computers on the local network. Routers are found at the boundary of a LAN, connecting them to the larger WAN.
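The distinction between private and public addresses can be checked programmatically. Here is a minimal sketch using Python's standard `ipaddress` module; the sample addresses are chosen for illustration and do not come from any particular network:

```python
import ipaddress

# Illustrative sample addresses: three from the private ranges
# (192.168.0.0/16, 10.0.0.0/8, 172.16.0.0/12) and one public address.
samples = ["192.168.1.10", "10.0.0.5", "172.16.4.2", "8.8.8.8"]

for a in samples:
    ip = ipaddress.ip_address(a)
    scope = "private" if ip.is_private else "public"
    print(f"{a}: {scope}")
```

The first three addresses fall inside the private ranges reserved for local networks, so a router at the LAN boundary must translate them before traffic crosses into the WAN.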

In a WAN (Wide Area Network) you will have multiple LANs connected together using routers. I was taught many years ago that a WAN had nothing to do with the size of a computer network, but was simply connecting multiple LANs together across the public highway system, such as the internet.

People often try to explain concepts like LAN and WAN using terms and descriptions that have nothing to do with the definition. I often see people put numbers of computers into their definitions of LAN and WAN. If you have a three computer LAN that uses the public highway, as in the internet and internet addressing, to connect to another three computer network, the two LANs working together form a WAN.

What is the client server network model?

In the most common network model, client server, at least one centralized server manages shared resources and security for the other network users and computers. A network connection is only made when information needs to be accessed by a user. This lack of a continuous network connection provides network efficiency.

The client requests services or information from the server computer. The server responds to the client's request by sending the results of the request back to the client computer.

Security and permissions can be managed by administrators which cuts down on security and rights issues when dealing with a large number of workstations. This model allows for convenient backup services, reduces network traffic and provides a host of other services that come with the network operating system.
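The request-and-response exchange between client and server can be sketched with a toy echo service. This is a minimal illustration, not a production pattern: the server handles a single connection on localhost, and the port is assigned automatically so the example is self-contained:

```python
import socket
import threading

def echo_server(sock):
    # Accept one client, echo back whatever it sends, then close.
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Bind to an ephemeral port on localhost so the sketch runs anywhere.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

# The client side: request a service (here, an echo) from the server.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

t.join()
server.close()
print(reply)  # b'hello'
```

The client initiates the request, the server responds, and the connection exists only for the duration of the exchange, which is the efficiency point made above.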

What are Peer-to-Peer Networks?

In a peer-to-peer network, such as a typical home network, computers simply share resources among themselves, and every computer acts as both a client and a server. Any computer can share resources with another, and any computer can use the resources of another, given proper access rights.

This is a good solution when there are 10 or fewer users in close proximity to each other, but it is difficult to maintain security as the network grows. This model can be a security nightmare, because permissions for shared resources must be set and maintained at each individual workstation, and there is no centralized management. This model is only recommended in situations where security is not an issue.

Other Models

Before microcomputers became cost effective, dumb terminals were used to access very large mainframe computers in remote locations. The local terminal was dumb in the sense that it was nothing more than a keyboard and monitor used to access another computer remotely, with all the processing occurring on the remote computer. This model, sometimes referred to as a centralized model, is not very common.

The all encompassing footnote

A LAN could use something other than a TCP/IP addressing scheme, but the illustration of a LAN and WAN based network as I describe it is a typical implementation.

These definitions were written off of the top of my head based on many years of networking experience. Any resemblance to Wiki or any other website is merely coincidental. (Since I am defining basic terms I would hope that they are at least similar!)

I realize that I may have oversimplified some terms, but the goal here at Computerguru.net is to deliver a basic understanding of the concepts in simple terms and not deliver a lecture on computer networking fundamentals to define each term. IMHO, I see many answers on various forums that overcomplicate matters as well as add quite a bit of stray information.


OSI model explained in simple terms

As you begin your quest to learn computer networking, one of the first tasks before you is gaining a basic understanding of the OSI model.

For many folks understanding the OSI model is like trying to understand some mystical formula that controls the way computer networks operate. 

As we help you begin your journey to understanding computer networking, we will tackle the complex subject of the OSI model in simple terms, in hopes that you will gain an understanding of the reasons behind the definitions.

You can find a lot of resources that define the components of the OSI model, but an understanding of the reasons behind the definitions will go a long way toward fully understanding this complex technology model.

The acronym and the organization behind it can get confusing. The formal name for the OSI model is the Open Systems Interconnection model. Open Systems refers to a cooperative effort among many vendors to develop hardware and software that could be used together. The model is a product of the International Organization for Standardization (2), which is often abbreviated ISO.

The logic behind ISO


Before we delve into the OSI model, let us take a moment to understand the organization behind it. You may have seen the term ISO certified in various technology areas. ISO, the International Organization for Standardization, (1) is the world's largest developer and publisher of International Standards. ISO helps to manage and create international standards in many technical areas to ensure the same quality of a product or process regardless of location or company.

The OSI (Open Systems Interconnection) model provides a set of general design guidelines for data communications systems and gives a standard way to describe how various layers of data communication systems interact. Applying the logic of the ISO standards to computer networking, a computer component or piece of computer software needs to comply with a set of standards so that the product or process will work no matter where in the world we are, and no matter who in the world is producing it.

Putting the OSI model into perspective

Strive for a good understanding of the intent of the model and a few of the core principles; that will go a long way toward an overall understanding of computer networking. Do not focus on the intricate details of the OSI model at first, as the more you read the more confused you may get. The model was created in the 1970s and the technology is ever changing. Many textbooks will contradict each other on some aspects of the upper layers. Some of the reasoning behind the upper layers relates to processes that are not nearly as useful today as they were many years ago, and for that reason many other network models blend the upper three layers into a single layer.

Basic definitions of the OSI Model

The seven layers of the OSI Model  can be remembered by using the following memory aide: All People Seem To Need Data Processing. As you say the phrase, write down the first letter of each word, and that will help you to remember the seven layers in order from highest to lowest: Application, Presentation, Session, Transport, Network, Data Link, and Physical.  We will briefly discuss the lower four layers from the bottom up.
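The memory aide lines up word for word with the layers. As a quick sketch, a few lines of Python can verify that the first letter of each word in the phrase matches the first letter of the corresponding layer, from layer 7 down to layer 1:

```python
mnemonic = "All People Seem To Need Data Processing"
layers = ["Application", "Presentation", "Session", "Transport",
          "Network", "Data Link", "Physical"]

# Each word in the phrase starts with the same letter as its layer,
# ordered from layer 7 (Application) down to layer 1 (Physical).
for word, layer in zip(mnemonic.split(), layers):
    assert word[0] == layer[0]
    print(f"{word[0]} -> {layer}")
```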

Layer one, the Physical layer provides the path through which data moves among devices on the network.

Layer two, the Data Link layer provides a system through which network devices can share the communication channel.

Layer three, the Network layer's main purpose is to decide which physical path the information should follow from its source to its destination.

Layer four, the Transport layer provides the upper layers with a communication channel to the network.

An analogy to understand the model

Some of the reasons behind the OSI model are to break network communication into smaller, simpler parts that are easier to develop, and to facilitate standardization of network components to allow multiple vendor development and support.

Let's take the reasons behind the OSI model and apply them to something totally different to illustrate how they are used. If we wanted to start a railroad and build a new type of train from scratch, and we wanted this train to be able to use existing train tracks and existing train stations so our new system could get up and running quickly, we would need to understand what standards are currently in place.

Even if we never had to build a set of train tracks, we would need to understand the standards by which train tracks were built and designed so we could ensure our train could operate on them, and how the track is shared. Likewise, in order for components to operate, manufacturers must understand the track, layer one, and how the track is shared, layer two.

If we are building trains, not train stations, we need to know the size and shape of other vehicles using the tracks so our trains could use the same track as all the other trains. Layer one of the OSI model gives us the path, or the track we use for communication.  Layer one, referred to as the media, is the wire, or anything that takes the place of the wire, such as fiber optic, infrared, or radio spectrum technology.

Once you have more than one train on the track, you need to find a way to share the track.  Layer two provides a system through which network devices can share the communication channel, or in the case of our analogy, share the track. One of the functions of layer two is called media access control (MAC). If you think about the term media access control you can break it down into the two parts it represents, the media or the track, and access control, or the sharing of the track.

In the OSI model layers one and two represent the media, or the physical components. Layers three through seven represent the logical, or the software, components.

In layer three of the OSI model, the Network layer, the logical decision is made as to which physical path the information should follow from its source to its destination.

In order to continue our analogy to understand this complex set of rules, think of the track system that has already been built as layers one and two. Once this track system is in place we need a system to control the routing of the train system that runs on the tracks. Think of layers three through seven as processes which affect the train itself, which would represent the actual package of information being transported along the tracks.  The main purpose of layer three is switching and routing.

Layer four of the OSI model, the transport layer ensures the reliability of data delivery by detecting and attempting to correct problems that occurred. In terms of our analogy, think of this as a set of standards and procedures that allows our train to arrive safely at its destination in a timely manner.

Learning and understanding the OSI model can be confusing. The goal of this article was not to define the layers of the OSI model from a purely technical nature, but to offer an analogy to understand why it is needed and how it is used to establish standards for data communications. In our next article we will go over the basic definitions of all the layers of the OSI model.
 

Sources:

(1) http://www.iso.org/iso/about.htm
(2) http://www.iso.org/iso/home.html

 

Learn basic networking concepts: the details of the OSI model

OSI Model illustrated
In our previous article, OSI model explained in simple terms, we kept it simple. 

We will continue on from there to explain some of the details of  the OSI model.  We will do our best to break it down into bite sized chunks to help you understand the concepts.

The OSI (Open Systems Interconnect) reference model was developed in the late 1970s by the International Organization for Standardization (ISO). It provides a set of general design guidelines for data-communications systems and also gives a standard way to describe how various portions (layers) of data-communication systems interact.

The hierarchical layering of protocols on a computer that forms the OSI model is known as a stack. A given layer in a stack sends commands to  layers below it and services commands from layers above it.
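The idea of data passing down and back up the stack can be sketched as a toy encapsulation model: on the way down, each layer prepends its own header to the payload, and on the way up, each layer strips its header off again. The layer names are real; the bracketed headers are purely illustrative:

```python
# Toy encapsulation: each layer wraps the payload with its own header
# as data moves down the stack, and removes it on the way back up.
layers = ["Transport", "Network", "Data Link"]

def send_down(payload: str) -> str:
    for layer in layers:
        payload = f"[{layer}]{payload}"   # this layer adds its header
    return payload

def receive_up(frame: str) -> str:
    for layer in reversed(layers):
        header = f"[{layer}]"
        assert frame.startswith(header)   # verify the expected header
        frame = frame[len(header):]       # this layer strips its header
    return frame

frame = send_down("hello")
print(frame)                # [Data Link][Network][Transport]hello
print(receive_up(frame))    # hello
```

The outermost header belongs to the lowest layer, which mirrors how a real frame on the wire carries the Data Link header outside the Network header, which in turn wraps the Transport segment.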

The Purpose of the OSI Model:

  • breaks network communication into smaller, simpler parts that are easier to develop.
  • facilitates standardization of network components to allow multiple-vendor development and support.
  • allows different types of network hardware and software to communicate with each other.
  • prevents changes in one layer from affecting the other layers so that they can develop more quickly.
  • breaks network communication into smaller parts to make it easier to learn.


The seven layers, in order from highest to lowest, are Application, Presentation, Session, Transport, Network, Data Link, and Physical. They can be remembered by using the following memory aide: All People Seem To Need Data Processing.

The Application layer includes network software that directly serves the user, providing such things as the user interface and application features. The Application layer is usually made available by using an Application Programmer Interface (API), or hooks, which are made available by the networking vendor.

The Presentation layer translates data to ensure that it is presented properly for the end user, also handles related issues such as data encryption and compression, and how data is structured, as in a database.

The Session layer comes into play primarily at the beginning and end of a transmission. At the beginning of the transmission, it makes known its intent to transmit. At the end of the transmission, the Session layer determines if the transmission was successful. This layer also manages errors that occur in the upper layers, such as a shortage of memory or disk space necessary to complete an operation, or printer errors.

The Transport layer provides the upper layers with a communication channel to the network. The Transport layer collects and reassembles any packets, organizing the segments for delivery and ensuring the reliability of data delivery by detecting and attempting to correct problems that occurred.
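Detecting and correcting delivery problems, as the Transport layer does, usually boils down to attaching a checksum and retransmitting when the received data does not verify. Here is a minimal sketch of that idea; the lossy channel, the CRC-32 checksum, and the fixed random seed are all illustrative choices, not how any real transport protocol is specified:

```python
import random
import zlib

def send_with_retransmit(payload: bytes, max_tries: int = 5) -> bytes:
    """Toy model of transport-layer reliability: attach a checksum,
    'transmit' over a lossy channel, and retransmit on corruption."""
    checksum = zlib.crc32(payload)
    rng = random.Random(42)  # fixed seed so the sketch is repeatable
    for attempt in range(max_tries):
        received = bytearray(payload)
        if rng.random() < 0.5:            # simulate a corrupted segment
            received[0] ^= 0xFF
        if zlib.crc32(bytes(received)) == checksum:
            return bytes(received)        # delivery verified
    raise RuntimeError("delivery failed after retries")

print(send_with_retransmit(b"segment 1"))
```

The key point is that the sender only accepts a delivery once the checksum verifies, so corrupted attempts are silently retried rather than passed up to the higher layers.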

The Network layer's main purpose is to decide which physical path the information should follow from its source to its destination.

The Data Link layer provides a system through which network devices can share the communication channel. This function is called media-access control (MAC).

The Physical layer provides the electro-mechanical interface through which data moves among devices on the network.

In the articles that follow we will break down each layer in more detail, covering topics you will need to know as a networking professional.

The Physical Layer of the OSI model

The Physical Layer is the lowest layer in the seven layer OSI model of computer networking.

The Physical Layer consists of the basic hardware transmission technologies of a network, sometimes referred to as the physical media. Physical media provides the electro-mechanical interface through which data moves among devices on the network.

Initially physical media is thought of as some sort of wire, but as technology progresses the types of media grow.

Bounded media transmits signals by sending electricity or light over a cable. Unbounded media transmits data without the benefit of a conduit; it might transmit data through open air, water, or even a vacuum. Simply put, media is the wire, or anything that takes the place of the wire, such as fiber optic, infrared, or radio spectrum technology.

Definitions from the wired world of data transmission:

Public Switched Telephone Network (PSTN), also referred to as Plain Old Telephone Service (POTS), connections run over the standard copper phone lines found in most homes.

Integrated Services Digital Network (ISDN) uses a single wire or fiber optic line to carry voice, data, and video signals.

Basic Rate Interface (BRI) is most commonly used in residential ISDN connections. It's composed of two bearer (B) channels at 64 Kbps each for a total of 128 Kbps (used for voice and data) and one delta (D) channel at 16 Kbps (used for controlling the B channels and signal transmission). The total bandwidth is up to 144 Kbps.

Primary Rate Interface (PRI) is most commonly used between a PBX (Private Branch Exchange) at the customer's site and the central office of the phone company. It is composed of  23 B channels at 64 Kbps and one D channel at 64 Kbps. The total bandwidth is up to 1,536 Kbps.
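The BRI and PRI totals above are simple channel arithmetic, which a few lines of Python make explicit:

```python
# BRI: two 64 Kbps bearer (B) channels plus one 16 Kbps delta (D) channel
bri_total = 2 * 64 + 16
# PRI: twenty-three 64 Kbps B channels plus one 64 Kbps D channel
pri_total = 23 * 64 + 64

print(bri_total)  # 144 Kbps
print(pri_total)  # 1536 Kbps
```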

Digital Subscriber Line (DSL) technologies use existing, regular copper phone lines to transmit data. DSL hardware can transmit data using three channels over the same wire. In a typical setup, a user connected through a DSL hookup can send data at 640 Kbps, receive data at 1.5 Mbps, and still carry on a standard phone conversation over one line.

T-Carrier Technology is a digital transmission service used to create point-to-point private networks and to establish direct connections to Internet Service Providers. It uses  four wires, one pair to transmit and another to receive.

T-1 lines support data transfer at rates of 1.544 megabits per second. Each T-1 line contains 24 channels. The E1 line is the European counterpart that transmits data at 2.048 Mbps.

T-3 has 672 (64 Kbps) channels, for a total data rate of 44.736 Mbps. The E3 line is the European counterpart that transmits data at 34.368 Mbps.
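The T-1 rate is also channel arithmetic: 24 channels of 64 Kbps give 1,536 Kbps of payload, and the commonly quoted 1.544 Mbps line rate includes 8 Kbps of framing overhead. A quick sketch:

```python
channels = 24
per_channel_kbps = 64
payload = channels * per_channel_kbps  # 1536 Kbps of voice/data capacity
framing = 8                            # 8 Kbps of framing overhead
line_rate = payload + framing

print(line_rate)  # 1544 Kbps, i.e. the familiar 1.544 Mbps T-1 rate
```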

Cable connections provide access to the Internet through the same coaxial cable that brings cable TV into your home. A signal splitter installed by the cable company isolates the Internet signals from the TV signals. The two-way cable connection is always available and can be very fast. Speeds up to 30 Mbps are claimed to be possible, although speeds in the 1 to 2 Mbps range are more typical.

Unbounded media examples of data transmission:

Narrow band radio, laser, and microwave transmission cannot occur through steel or load-bearing walls.

Satellite has a transmission delay of 240 to 300 milliseconds.

Terrestrial microwave  is commonly used for long distance voice and video transmissions, and for short distance high speed links between buildings.

Laser is resistant to eavesdropping and capable of high transmission rates; susceptible to attenuation and interference.

Spread spectrum radio frequencies are divided into channels or hops.

The Physical Layer: data communications definitions

While in your world many of the older technologies of data communications may be replaced with modern ones, there are many reasons why you may need to know about them. You may get a better understanding of how things are done on your current network if you understand the evolution of the network.

If you ever work in consulting you may be surprised to find out how much of what you call obsolete is still in use.  You will also find questions on older technologies on various certification tests.

Data communications definitions:

In the early days of connecting your computer to the internet most folks had Public Switched Telephone Network (PSTN), also referred to as Plain Old Telephone Service (POTS),  and all connections were run over the standard copper phone lines. In order for the digital world of computers to talk over analog phone lines you needed to use a MODEM.

The term MODEM comes from the words modulator and demodulator; it is a device that modulates a carrier signal to encode digital information, and also demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data.

Modem standards, or V dot modem standards, are defined by the ITU (International Telecommunications Union). The FCC has limited the speed of analog transmissions to 53 Kbps.

Twisted pair cabling is a common form of wiring in which two conductors are wound around each other for the purpose of canceling out electromagnetic interference, which can cause crosstalk. The number of twists per meter makes up part of the specification for a given type of cable.

The two major types of twisted-pair cabling are unshielded twisted-pair (UTP) and shielded twisted-pair (STP).

UTP - Unshielded Twisted Pair; uses RJ-45, RJ-11, RS-232, and RS-449 connectors, max length is 100 meters, speed is up to 100 Mbps. Cheap and easy to install, but length becomes a problem. Can be CAT 2, 3, 4, or 5 quality grades.

In shielded twisted-pair (STP) the inner wires are encased in a sheath of foil or braided wire mesh. Shielded twisted pair uses RJ-45, RJ-11, RS-232, and RS-449 connectors, max length is 100 meters, speed is up to 500 Mbps. Not as inexpensive as UTP, easy to install, but length becomes a problem. Can be CAT 2, 3, 4, or 5 quality grades.

Category 1 Traditional UTP telephone cable can transmit voice signals but not data. Most telephone cable installed prior to 1983 is Category 1.

Category 2 UTP cable is made up of four twisted-pair wires, certified for transmitting data up to 4 Mbps (megabits per second).

Category 3 UTP cable is made up of four twisted-pair wires, each twisted three times per foot. Category 3 is certified to transmit data up to 10 Mbps.

Category 4 UTP cable is made up of four twisted-pair wires, certified to transmit data up to 16 Mbps.

Category 5 UTP cable is made up of four twisted-pair wires, certified to transmit data up to 100 Mbps.

Twisted-pair Ethernet cable has the following specifications:

  • a maximum of 1,024 attached workstations
  • a maximum of 4 repeaters between communicating workstations
  • a maximum segment length of 328 feet (100 meters).

100BASE-TX specification uses two pairs of Category 5 UTP or Type 1 STP cabling at a 100 Mbps data transmission speed. Each segment can be up to 100 meters long.

100BASE-T4 specification uses four pairs of Category 3, 4, or 5 UTP cabling at a 100 Mbps data transmission speed with standard RJ-45 connectors. Each segment can be up to 100 meters long.

Fiber optic cable (IEEE 802.8) has a center core of glass surrounded by a cladding composed of varying layers of reflective glass, which refracts light back into the core. Max length is 25 kilometers, speed is up to 2 Gbps, but it is very expensive. Best used for a backbone due to cost.

100BASE-FX specification uses two-strand 62.5/125 micron multi- or single-mode fiber media. Half-duplex, multi-mode fiber media has a maximum segment length of 412 meters. Full-duplex, single-mode fiber media has a maximum segment length of 10,000 meters.

 

The Physical Layer: Fast Ethernet Specifications

Continuing next with data communications definitions let us cover some Fast Ethernet specifications and definitions.

Coaxial cable (coax) was commonly used for thick Ethernet, thin Ethernet, cable TV, and ARCnet. Coaxial cabling uses BNC connectors; its heavy shielding protects data, but the cable is expensive and the connectors are hard to make.

Through the first half of the 1980s, Ethernet's 10BASE5 implementation used a coaxial cable 0.375 inches (9.5 mm) in diameter, later called "thick Ethernet" or "thicknet". Its successor, 10BASE2, called "thin Ethernet" or "thinnet", used a cable similar to cable television cable of the era.

10BASE5, also called Thicknet or Thick Ethernet, uses thick coaxial cable. Thick coax cable (RG-8) requires the following:

  • a 50-ohm terminator on each end of the cable;
  • a maximum of 3 segments with attached devices (populated segments);
  • a network board using the external transceiver;
  • a maximum of 100 devices on a segment, including repeaters;
  • a maximum length of 1,640 feet (500 meters) per segment;
  • a maximum of 4,920 feet (1500 meters) per segment trunk;
  • one ground per segment;
  • a maximum of 16 feet (5 meters) between a tap and its device; and
  • a minimum of 8 feet (2.5 meters) between taps.


10BASE2 uses thin Ethernet cable. Thin coax cable, or Thin Ethernet, implemented with T-connectors and terminators using cable such as RG-58 A/U or RG-58 C/U, has the following specifications:

  • a 50-ohm terminator on each end of the cable;
  • a maximum length of 607 feet (185 meters) per segment;
  • a maximum of 30 devices per segment;
  • a network board using the internal transceiver;
  • a maximum of 3 segments with attached devices (populated segments);
  • one ground per segment;
  • a minimum of 1.5 feet (.5 meters) between T-connectors;
  • a maximum of 1,818 feet (555 meters) per trunk segment; and
  • a maximum of 30 connections per segment.


A BNC connector, sometimes expanded as British Naval Connector or Bayonet Nut Connector but properly named Bayonet Neill-Concelman after its inventors, is usually used for thinnet coaxial cable. A terminator is a resistor attached to the end of the cable. Its purpose is to prevent signal reflections, effectively making the cable "look" infinitely long to the signals being sent across it.

Physical Layer Topology

A network topology refers to the layout of the transmission medium and devices on a network. As a networking professional for many years, I can honestly say that about the only time network topology has come up is for certification testing. Here are some basic definitions.

Physical Topology:


Bus: Uses a single main bus cable, sometimes called a backbone, to transmit data. Workstations and other network devices tap directly into the backbone by using drop cables that are connected to the backbone.

This topology is an old one and essentially has each of the computers on the network daisy-chained to each other. This type of network is usually peer to peer and uses Thinnet (10BASE2) cabling. It is configured by connecting a "T-connector" to the network adapter and then connecting cables to the T-connectors on the computers to the right and left. At both ends of the chain the network must be terminated with a 50 ohm impedance terminator.

Advantages: Cheap, simple to set up.
Disadvantages: Excess network traffic, a failure may affect many users, and problems are difficult to troubleshoot.

Star: Branches out via drop cables from a central hub (also called a multiport repeater or concentrator) to each workstation. A signal is transmitted from a workstation up the drop cable to the hub. The hub then transmits the signal to other networked workstations.

The star is probably the most commonly used topology today. It uses twisted-pair cabling such as 10BASE-T or 100BASE-T and requires that all devices are connected to a hub.

Advantages: Centralized monitoring, failures do not affect others unless it is the hub, easy to modify.
Disadvantages: If the hub fails, then everything connected to it is down.

Ring: Connects workstations in a continuous loop. Workstations relay signals around the loop in round-robin fashion.

The ring topology looks the same as the star, except that it uses special hubs and ethernet adapters. The Ring topology is used with Token Ring networks, (a proprietary IBM System).

Advantages: Equal access.
Disadvantages: Difficult to troubleshoot; network changes affect many users; a failure affects many users.

Mesh: Provides each device with a point-to-point connection to every other device in the network.

True mesh topologies are expensive to cable and are rare outside of network backbones. What is common on very large networks is a hybrid topology that combines the layouts above. For example, a star-bus network has hubs connected in a row (like a bus network) and has computers connected to each hub.

Cellular: Refers to a geographic area, divided into cells, combining a wireless structure with point-to-point and multipoint design for device attachment. 


Logical Topology:

Ring: Generates and sends the signal on a one-way path, usually counterclockwise.

Bus: Generates and sends the signal to all network devices.

Physical topology defines the cable's actual physical configuration (star, bus, mesh, ring, cellular, hybrid). Logical topology defines the network path that a signal follows (ring or bus).

The Data Link Layer of the OSI model

The Data Link Layer is Layer 2 of the seven-layer OSI model of computer networking.  The Data Link layer deals with issues on a single segment of the network.

The IEEE 802 LAN/MAN Standards Committee develops Local Area Network standards and Metropolitan Area Network standards. In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks (LAN). IEEE 802 splits the OSI Data Link Layer into two sub-layers named Logical Link Control (LLC) and Media Access Control (MAC).

The lower sub-layer of the Data Link layer, the Media Access Control (MAC), performs Data Link layer functions related to the Physical layer, such as controlling access and encoding data into a valid signaling format.

The upper sub-layer of the Data Link layer, the Logical Link Control (LLC), performs Data Link layer functions related to the Network layer, such as providing and maintaining the link to the network.

The MAC and LLC sub-layers work in tandem to create a complete frame. The portion of the frame for which LLC is responsible is called a Protocol Data Unit (LLC PDU or PDU).

IEEE 802.2 defines the Logical Link Control (LLC) standard that performs functions in the upper portion of the Data Link layer, such as flow control and management of connection errors.

LLC supports the following three types of connections for transmitting data:

  • Unacknowledged connectionless service: does not perform reliability checks or maintain a connection; very fast and the most commonly used.
  • Connection-oriented service: once the connection is established, blocks of data can be transferred between nodes until one of the nodes terminates the connection.
  • Acknowledged connectionless service: provides a mechanism through which individual frames can be acknowledged.


IEEE 802.3 is an extension of the original Ethernet and includes modifications to the classic Ethernet data packet structure.

The Media Access Control (MAC) sub-layer contains methods that logical topologies can use to regulate the timing of data signals and eliminate collisions.

The MAC address concerns a device's actual physical address, which is usually designated by the hardware manufacturer. Every device on the network must have a unique MAC address to ensure proper transmission and reception of data. The MAC sub-layer communicates directly with the network adapter card.

Carrier Sense Multiple Access / Collision Detection (CSMA/CD) is a set of rules determining how network devices respond when two devices attempt to use a data channel simultaneously (called a collision). Standard Ethernet networks use CSMA/CD. This standard enables devices to detect a collision.

After detecting a collision, a device waits a random delay time and then attempts to re-transmit the message. If the device detects a collision again, it waits twice as long before trying to re-transmit. This is known as exponential backoff.
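The doubling of the waiting window can be sketched in a few lines of Python. This is a toy model, not a driver implementation: the 51.2 microsecond slot time is the value classic 10 Mbps Ethernet used, and the cap of ten doublings follows the traditional truncated binary exponential backoff rule.

```python
import random

def backoff_delay(attempt, slot_time=51.2e-6):
    """Return a random backoff delay in seconds after the Nth collision.

    After N consecutive collisions, a station picks a random number of
    slot times in the range [0, 2**N - 1] (capped at 10 doublings) and
    waits that long before retransmitting.
    """
    k = min(attempt, 10)                  # cap the doubling at 2**10 - 1 slots
    slots = random.randint(0, 2**k - 1)   # random slot count in [0, 2**k - 1]
    return slots * slot_time

# The waiting window doubles with each successive collision:
for attempt in range(1, 5):
    max_slots = 2**min(attempt, 10) - 1
    print(f"collision {attempt}: wait between 0 and {max_slots} slot times")
```

Because the window doubles, two stations that keep colliding become progressively less likely to pick the same delay again.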

IEEE 802.5 uses token passing to control access to the medium. IBM Token Ring is essentially a subset of IEEE 802.5.

The IEEE 802.11 specifications are wireless standards that specify an "over-the-air" interface between a wireless client and a base station or access point, as well as among wireless clients. The 802.11 standards can be compared to the IEEE 802.3™ standard for Ethernet for wired LANs.

The IEEE 802.11 specifications address both the Physical (PHY) and Media Access Control (MAC) layers and are tailored to resolve compatibility issues between manufacturers of Wireless LAN equipment

The IEEE 802.15 Working Group provides, in the IEEE 802 family, standards for low-complexity and low-power consumption wireless connectivity.

IEEE 802.16 specifications support the development of fixed broadband wireless access systems to enable rapid worldwide deployment of innovative, cost-effective and interoperable multi-vendor broadband wireless access products.


 

The Network Layer of the OSI model

The Network Layer is Layer 3 of the seven layer OSI model of computer networking. The key elements of the Network Layer are addressing and routing.

The Network Layer defines how information moves to the correct network address, how messages are addressed, and how logical addresses and names are translated into physical addresses. It also enables the option of specifying a service address, known as a socket or port, to point the data to the correct program on the destination computer.

Addressing

Each computer on a TCP/IP network has to have a unique, numeric IP address. The IP address is like a mailing address: some of the bits represent the network segment that the computer is on, like the street name of a mailing address, while other bits represent the particular host on the segment, like the house number.

IP addresses have 4 bytes, each of which is referred to as an octet. Since each byte in the address has 8 bits, an IP address is 32 bits long. IP addresses are usually displayed in decimal format where the value of each byte is converted from binary to decimal. This makes them easier to remember. For example, an IP address of 74.52.151.178 is much easier to remember than its binary equivalent of: 01001010.00110100.10010111.10110010
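The decimal-to-binary conversion described above can be sketched in a couple of lines of Python, using `format(n, "08b")` to pad each octet to 8 bits:

```python
def ip_to_binary(dotted):
    """Convert a dotted-decimal IPv4 address to its 32-bit binary form."""
    octets = dotted.split(".")
    return ".".join(format(int(octet), "08b") for octet in octets)

print(ip_to_binary("74.52.151.178"))
# → 01001010.00110100.10010111.10110010
```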

If an IP address represents a mailing address, think of the service address as a specific room in the house. The service address is a number that is appended to the IP address, such as 74.52.151.178:25, where 74.52.151.178 is the IP address and 25 is the service address. In the early days of computer networking the term socket number was used. A well-known range of port numbers is reserved by convention to identify specific service types on a host computer.

On most IP networks, computers have not only IP addresses, but also descriptive names that are easier for people to remember and use. This name is called the host name. It is a friendly name assigned to a computer that people can use instead of the numeric IP address.

Routing

Routing is the process of selecting which physical path the information should follow from its source to its destination. The Network Layer manages data traffic and congestion involved in packet switching and routing.

Routers are devices that play a significant role in directing the flow of data between two or more networks. Routers make sure that information makes it to the intended destination as well as ensure that information does not go where it is not needed. This is crucial for keeping large volumes of data from clogging connections.

One of the tools a router uses to decide where a packet should go is a configuration table. A configuration table identifies which connections lead to particular groups of addresses and sets priorities for connections to be used and establishes rules for handling both routine and special cases of traffic.

A configuration table can be as simple as a half-dozen lines in the smallest routers, but can grow to massive size and complexity in the very large routers that handle the bulk of Internet messages.
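The idea behind such a table can be sketched with a toy forwarding table in Python, using the standard-library `ipaddress` module. The prefixes and interface names here are made up for illustration; real routing tables also follow the longest-prefix-match rule shown in `next_interface`:

```python
import ipaddress

# A toy forwarding table: network prefix -> outgoing interface.
TABLE = {
    ipaddress.ip_network("10.0.0.0/8"):  "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"):   "eth0",   # default route
}

def next_interface(dest):
    """Pick the longest (most specific) prefix that contains dest."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return TABLE[best]

print(next_interface("10.1.2.3"))    # most specific match wins -> eth2
print(next_interface("10.9.9.9"))    # matches only the /8 -> eth1
print(next_interface("8.8.8.8"))     # falls through to the default -> eth0
```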

Internet Protocol (IP) envelopes and addresses the data, enables the network to read the envelope and forward the data to its destination and defines how much data can fit in a single packet. 

Internet Protocol (IP)   is a connectionless protocol, which means that a session is not created before sending data. IP is responsible for addressing and  routing of packets between computers. It does not guarantee delivery and does not give acknowledgement of packets that are lost or sent out of order as this is the responsibility of higher layer protocols such as Transmission Control Protocol (TCP).

Time To Live (TTL) is a concept in IP that prevents packets from endlessly looping around the Internet. When a packet leaves a computer, the TTL is set to a maximum of 255. Each router decreases the TTL by one or more. If the TTL reaches zero, the router sends the source computer an ICMP Time Exceeded message and discards the packet.
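A toy sketch of the TTL mechanism: the TTL field is 8 bits, so values run from 0 to 255, and ICMP type 11 is the real "Time Exceeded" message type. The packet dictionary here is an illustration, not a real packet format.

```python
ICMP_TIME_EXCEEDED = 11   # ICMP message type for "Time Exceeded"

def forward(packet):
    """One router hop: decrement TTL, drop the packet when it hits zero."""
    packet["ttl"] -= 1
    if packet["ttl"] <= 0:
        return ("drop", ICMP_TIME_EXCEEDED)   # notify the source, discard
    return ("forward", None)

packet = {"dst": "192.0.2.1", "ttl": 3}
for hop in range(1, 10):
    action, icmp = forward(packet)
    print(f"hop {hop}: ttl={packet['ttl']} -> {action}")
    if action == "drop":
        break
```

The `traceroute` utility exploits exactly this behavior, sending probes with deliberately small TTLs so each router along the path reveals itself with a Time Exceeded reply.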

Packet Switching

Throughout the standard for Internet Protocol you will see the description of packet switching, "fragment and reassemble internet datagrams when necessary for transmission through small packet networks." A message is divided into smaller parts known as packets before it is sent. Each packet is transmitted individually and can even follow different routes to its destination. Once all the packets forming a message arrive at the destination, they are recompiled into the original message.
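The divide-and-reassemble cycle can be sketched in Python. The packet layout here (a sequence number, a total count, and a data chunk) is a made-up illustration of the bookkeeping, not a real IP header:

```python
import random

def packetize(message, size):
    """Split a message into numbered packets of at most `size` characters."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"seq": n, "total": len(chunks), "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    """Rebuild the original message, regardless of arrival order."""
    assert len(packets) == packets[0]["total"]   # all packets arrived
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("packet switching in action", 8)
random.shuffle(packets)          # packets may arrive out of order
print(reassemble(packets))       # → packet switching in action
```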

 

 

Packet switching explained in simple terms

As you begin your quest to learn computer networking, one of the important concepts you will need to understand is packet switching. One of the key differences between communications before the internet and the way information flowed with the new standards known as Internet Protocol is the concept of packet switching.

Internet data, whether in the form of a Web page, a downloaded file or an e-mail message, travels over a system known as a packet-switching network. The data is broken into packets, and each packet gets a wrapper that includes information on the sender's address, the receiver's address, the packet's place in the entire message, and how the receiving computer can be sure that the packet arrived intact.

There are two huge advantages to packet switching. First, the network can balance the load across various pieces of equipment on a millisecond-by-millisecond basis. Second, if there is a problem with one piece of equipment while a message is being transferred, packets can be routed around the problem, ensuring delivery of the entire message.

Packet switching, an integral part of internet technology and internet history explained in simple terms

In teaching the concept of packet switching in the classroom, I would take a piece of paper with a message written on it, and from the front of the classroom, ask the person in the front seat simply to turn around and pass the paper to the person behind him, and in turn continue the process until the paper made it to the person in the back row.

In the next phase of the illustration, I would take the same piece of paper that had the message written on it, and tear it into four pieces. On each individual piece of paper I would address it as if sending a letter through the postal service, by writing my name as the sender, and also the name of the person in the back of the room as the recipient. I would also label each individual piece of paper as one of four, two of four, three of four, and four of four.

This time I would take the four individual pieces of paper and walk across the front row, and as I handed one piece of paper to four different students, I would explain to them who was to receive the paper, and asked them to pass it to the person marked as the recipient by using the people behind them. When all four pieces of paper arrived at the destination, I would ask the recipient to read the label I had put on each piece of paper, and confirm they had received the entire message.

My original passing of the paper represented circuit switching, the telecommunications technology which used circuits to create a virtual path, a dedicated channel between two points, and then delivered the entire message.

My second passing of the "packets" or scraps of paper illustrated packet switching, and each individual in the room acted as a router. The key difference between the two methods was the additional routes that the pieces of the message took. A very primitive, but effective demonstration of packet switching and the way in which a message would be transmitted across the internet.

Once the concept of packet switching was developed the next stage in the evolution was to create a language that would be understood by all computer systems. This new standard set of rules would enable different types of computers, with different hardware and software platforms, to communicate in spite of their differences.


 

Geek History: In the 1960s Paul Baran developed packet switching

The transport layer is layer four of the OSI model.

In computing and telecommunications, the transport layer is layer four of the seven layer OSI model. It responds to service requests from the session layer and issues service requests to the network layer.

The Transport Layer is responsible for packet handling: it divides messages into smaller packets, repackages messages, and ensures error-free delivery.

The purpose of the Transport layer is to provide transparent transfer of data between end users, thus relieving the upper layers from any concern with providing reliable and cost-effective data transfer.

On the Internet there are a variety of Transport services, but the two most common are Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).

TCP is the more complicated, providing a connection and byte oriented stream which is almost error free, with flow control, multiple ports, and same order delivery. UDP is a very simple datagram service, which provides limited error reduction and multiple ports.

Transmission Control Protocol (TCP) breaks data up into packets that the network can handle efficiently, verifies that all the packets arrive at their destination, and reassembles the data.

Transmission Control Protocol (TCP) is connection oriented, which means an acknowledgement (ACK) verifies that the host has received each segment of the message, providing a reliable delivery service. Acknowledgements are sent by the receiving computer, and unacknowledged packets are resent. Sequence numbers are used with acknowledgements to track successful packet transfer.

If the ACK is not received after a given time period, then the data is resent. If segments are not delivered to the destination device correctly, then the Transport layer can initiate retransmission or inform the upper layers. TCP uses segmentation, flow control, and error checking to ensure packet delivery. The Transport layer also performs name resolution, translating a network protocol name to an IP or IPX address, which helps upper layer services communicate segment destinations to lower layer services.

User Datagram Protocol (UDP) carries application data just as TCP does, but is connectionless and unacknowledged. UDP lets applications send datagrams without the overhead involved in acknowledging packets and maintaining a virtual circuit. UDP is therefore used to broadcast messages across an internetwork, where acknowledgment is unnecessary and overhead is undesirable.
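The connectionless nature of UDP is easy to see with Python's standard-library `socket` module: there is no connection setup at all, just independent datagrams. This sketch runs both ends on the loopback interface in one process for illustration:

```python
import socket

# Receiver: bind a datagram socket; port 0 lets the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: no connect(), no handshake -- each sendto() stands alone.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello, datagram", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data.decode())
sender.close()
receiver.close()
```

Note that nothing in this exchange tells the sender whether the datagram arrived; any acknowledgement scheme would have to be built by the application itself.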
 

The Internet family of protocols the TCP/IP protocol suite

The Internet protocol suite, commonly known as TCP/IP, is a set of communications protocols used for the Internet and similar networks. TCP/IP is not a single protocol, but rather an entire family of protocols.

The network concept of protocols establishes a set of rules so that each system can speak the other's language in order to communicate. Protocols describe both the format that a message must take and the way in which messages are exchanged between computers.

Transmission Control Protocol (TCP) and Internet Protocol (IP) were the first two members of the family to be defined; consider them the parents of the family. A protocol stack is a layered set of protocols working together to provide a set of network functions. Each protocol/layer serves the layer above by using the layer below.

Internet Protocol (IP)


Internet Protocol (IP) envelopes and addresses the data, enables the network to read the envelope and forward the data to its destination and defines how much data can fit in a single packet. IP is responsible for routing of packets between computers.

Internet Protocol (IP) is a connectionless, unreliable datagram protocol, which means that a session is not created before sending data. An IP packet might be lost, delivered out of sequence, duplicated, or delayed. IP does not attempt to recover from these types of errors. The acknowledgment of packets delivered and the recovery of lost packets is the responsibility of a higher-layer protocol, such as TCP.

An IP packet, also known as an IP datagram, consists of an IP header and an IP payload. Among the IP header fields used for addressing and routing are the source IP address, identifying the original source of the datagram, and the destination IP address, identifying its final destination.

Time-to-Live (TTL) Designates the number of network segments on which the datagram is allowed to travel before being discarded by a router. The TTL is set by the sending host and is used to prevent packets from endlessly circulating on an IP internetwork. When forwarding an IP packet, routers are required to decrease the TTL by at least 1.

Transmission Control Protocol (TCP)


Transmission Control Protocol (TCP) breaks data up into packets that the network can handle efficiently, verifies that all the packets arrive at  their destination,  and reassembles the data. TCP is based on point-to-point communication between two network hosts. TCP receives data from programs and processes this data as a stream of bytes. Bytes are grouped into segments that TCP then numbers and sequences for delivery.

Transmission Control Protocol (TCP) is connection oriented, which means an acknowledgement (ACK) verifies that the host has received each segment of the message, providing a reliable delivery service. Acknowledgements are sent by the receiving computer, and unacknowledged packets are resent. Sequence numbers are used with acknowledgements to track successful packet transfer.

Before two TCP hosts can exchange data, they must first establish a session with each other. A TCP session is initialized through a process known as a three-way handshake. This process synchronizes sequence numbers and provides control information that is needed to establish a virtual connection between both hosts.

Once the initial three-way handshake completes, segments are sent and acknowledged in a sequential manner between both the sending and receiving hosts. A similar handshake process is used by TCP before closing a connection to verify that both hosts are finished sending and receiving all data.
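The handshake described above can be modeled as three messages exchanging sequence numbers. This is a toy model using Python dictionaries, not a real TCP implementation; it only shows how each side learns and acknowledges the other's initial sequence number (ISN):

```python
import random

def three_way_handshake():
    """Toy model of TCP session setup: SYN, SYN-ACK, ACK."""
    client_isn = random.randrange(2**32)   # each side picks a random ISN
    server_isn = random.randrange(2**32)

    syn     = {"flags": "SYN",     "seq": client_isn}
    syn_ack = {"flags": "SYN-ACK", "seq": server_isn,
               "ack": syn["seq"] + 1}       # acknowledges the client's ISN
    ack     = {"flags": "ACK",     "seq": syn["seq"] + 1,
               "ack": syn_ack["seq"] + 1}   # acknowledges the server's ISN
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake()
print(syn["flags"], "->", syn_ack["flags"], "->", ack["flags"])
```

After the third message, both hosts know where the other's byte stream starts, which is what makes the later sequenced, acknowledged data transfer possible.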

TCP ports

A TCP port identifies the specific program that should receive data sent by using Transmission Control Protocol (TCP). TCP ports are more complex and operate differently from UDP ports.

While a UDP port operates as a single message queue and the network endpoint for UDP-based communication, the final endpoint for all TCP communication is a unique connection. Each TCP connection is uniquely identified by dual endpoints.
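The "dual endpoints" idea means every connection is identified by a four-tuple of source IP, source port, destination IP, and destination port. A minimal sketch, with made-up addresses from the documentation ranges:

```python
# Each TCP connection is identified by the pair of endpoints:
# (source IP, source port, destination IP, destination port).
connections = {}

def register(src_ip, src_port, dst_ip, dst_port, state):
    connections[(src_ip, src_port, dst_ip, dst_port)] = state

# One client opens two connections to the same server port 80.
# They remain distinct because the source ports differ:
register("198.51.100.7", 50001, "203.0.113.5", 80, "ESTABLISHED")
register("198.51.100.7", 50002, "203.0.113.5", 80, "ESTABLISHED")

print(len(connections))   # → 2 distinct connections to one service
```

This is why a web server can hold thousands of simultaneous connections on a single listening port: the port alone is not the endpoint, the full four-tuple is.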

Comparison between the OSI and TCP/IP Models

TCP/IP Model Layer 4. Application Layer

The Application layer is the topmost layer of the four-layer TCP/IP model. It sits on top of the Transport layer and defines TCP/IP application protocols and how host programs interface with Transport layer services to use the network.

Application layer includes all the higher-level protocols:

DNS (Domain Name System) used to translate human-readable host names into numeric IP addresses.

HTTP (Hypertext Transfer Protocol) is the protocol used to transport web pages.

FTP (File Transfer Protocol) used to upload and download files.

TFTP (Trivial File Transfer Protocol) a simplified version of FTP that runs over UDP, commonly used for transferring boot images and configuration files.

SNMP (Simple Network Management Protocol) designed to enable the analysis and troubleshooting of network hardware. For example, SNMP enables you to monitor  workstations, servers, minicomputers, and mainframes, as well as connectivity devices such as bridges, routers, gateways, and wiring concentrators.
 
SMTP (Simple Mail Transfer Protocol) used for transferring email across the internet

DHCP (Dynamic Host Configuration Protocol) used to centrally administer the assignment of IP addresses, as well as other configuration information such as subnet  masks and the address of the default gateway. When you use DHCP on a TCP/IP network, IP addresses are assigned to clients dynamically instead of manually.

X Windows, Telnet, SSH, RDP (Remote Desktop Protocol) etc.


TCP/IP Model Layer 3. Transport Layer

Transport Layer is the third layer of the four layer TCP/IP model. The position of the Transport layer is between Application layer and Internet layer. The purpose of Transport layer is to permit devices on the source and destination hosts to carry on a conversation. Transport layer defines the level of service and status of the connection used when transporting data.

The main protocols included at Transport layer are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).

TCP/IP Model Layer 2. Internet Layer


The Internet Layer is the second layer of the four-layer TCP/IP model. The position of the Internet layer is between the Network Access layer and the Transport layer. The Internet layer packs data into data packets known as IP datagrams, which contain source and destination address (logical address or IP address) information used to forward the datagrams between hosts and across networks. The Internet layer is also responsible for routing of IP datagrams.

A packet switching network depends upon a connectionless internetwork layer, known as the Internet layer. Its job is to allow hosts to insert packets into any network and have them delivered independently to the destination. At the destination, data packets may appear in a different order than they were sent; it is the job of the higher layers to rearrange them before delivering them to the proper network applications operating at the Application layer.

The main protocols included at Internet layer are IP (Internet Protocol), ICMP (Internet Control Message Protocol), ARP (Address Resolution Protocol), RARP (Reverse Address Resolution Protocol) and IGMP (Internet Group Management Protocol).

Reverse Address Resolution Protocol (RARP) is adapted from the ARP protocol and provides the reverse functionality: it determines a software address from a hardware (or MAC) address. A diskless workstation uses this protocol during bootup to determine its IP address.

Address Resolution Protocol (ARP) translates a host's software address to a hardware (or MAC) address (the node address that is set on the network interface card).

Internet Control Message Protocol (ICMP) enables systems on a TCP/IP network to share status and error information such as with the use of PING and TRACERT utilities.

TCP/IP Model Layer 1. Network Access Layer


Network Access Layer is the first layer of the four layer TCP/IP model. Network Access Layer defines details of how data is physically sent through the network, including how bits are electrically or optically signaled by hardware devices that interface directly with a network medium, such as coaxial cable, optical fiber, or twisted pair copper wire.

The protocols included in Network Access Layer are Ethernet, Token Ring, FDDI, X.25, Frame Relay etc.

The most popular LAN architecture among those listed above is Ethernet. When operating over shared media, Ethernet uses an access method called CSMA/CD (Carrier Sense Multiple Access/Collision Detection). An access method determines how a host will place data on the medium.

In the CSMA/CD access method, every host has equal access to the medium and can place data on the wire when the wire is free from network traffic. When a host wants to place data on the wire, it first checks whether another host is already using the medium. If there is traffic on the medium, the host waits; if there is no traffic, it places the data on the medium. But if two systems place data on the medium at the same instant, the signals collide and the data is destroyed. Destroyed data must be retransmitted: after a collision, each host waits for a small, random interval of time and then retransmits.
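One contention round of this carrier-sense process can be sketched as a toy simulation. Hosts, slot counts, and the round structure here are all illustrative simplifications, not part of the 802.3 standard:

```python
import random

def csma_cd_round(senders):
    """One contention round on a shared medium.

    If exactly one host transmits, it succeeds. If two or more start
    at the same instant, their signals collide and every sender backs
    off for a random number of slot times before trying again.
    """
    if len(senders) == 1:
        return f"{senders[0]} transmits successfully"
    delays = {host: random.randint(0, 3) for host in senders}
    return f"collision! random backoff slots: {delays}"

print(csma_cd_round(["A"]))        # lone sender: the wire is free
print(csma_cd_round(["A", "B"]))   # simultaneous senders: collision
```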

 

 

 

 

What is the difference between the Internet and the OSI reference model?

When learning computer networking it is essential to have a general idea of the different computer networking reference models and the reasoning behind the layered approach. Both the TCP/IP network model and the OSI Model create a reference model for computer networking. The OSI model is widely used to teach students, as it was created in the mindset of a reference book. The TCP/IP standards were created to provide guidance to people actually implementing a networking technology, in the mindset of a service manual.

Much like the answer to the question of why the internet was created, the answer to why we need the OSI model depends on who you ask. Here at ComputerGuru.net we try to explain the basics of the OSI model as it relates to understanding basic computer networking.

The Internet and the TCP/IP family of protocols evolved separately from the OSI model. Often you find teachers, and websites, making direct comparison of the different models. Don't get too hung up on drawing direct comparisons between the two models. Our discussion here of the two networking reference models addresses some commonly asked questions and gives some historical perspective as to how the models have evolved.

The Open Systems Interconnection Reference Model (OSI Reference Model or OSI Model) was originally created as the basis for designing a universal set of protocols called the OSI Protocol Suite. This suite never achieved widespread success, but the model became a very useful tool for both education and development. The model defines a set of layers and a number of concepts for their use that make understanding networks easier. The theoretical OSI Reference Model is the creation of the European based International Organization for Standardization (ISO), an independent, non-governmental membership organization that creates standards in numerous areas of technology and industry. The OSI model was first published in 1984 as ISO 7498: Information processing systems -- Open Systems Interconnection -- Basic Reference Model.

The Internet model is often compared to the OSI model. This internet model has many names such as the DOD reference model or the ARPANET reference model, because like the internet itself the TCP/IP protocol suite has evolved over the years. The ARPANET was the original name of the network we now call the internet. ARPA, currently known as DARPA, the Defense Advanced Research Projects Agency, is funded by the DoD (Department of Defense).

Unlike the International Organization for Standardization (ISO), where one main library of information maintains specific standards, the internet is an ever-evolving network with many entities working together to maintain standards. A collection of documents known as Requests for Comments (RFCs), maintained by the Internet Engineering Task Force (IETF), describes various technology specifications.

Simple talk and some needed geek speak

Since TCP/IP is the primary networking language of the internet, everyone who works in the field of technology needs to have at least a simple understanding of how it works and its role in the big picture of the internet. In the spirit of the Guru 42 family of websites, we attempt to tackle the basic understanding using as simple terms as possible.

To understand the role of TCP/IP in the big picture of the internet, we need to delve just a bit into the geek speak of the internet. If you want to learn more, and really delve into how the internet works and its interesting history, an understanding of the IETF and RFCs is needed.

What is an RFC?

The concept of Request for Comments (RFC) documents was started by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications.

In computer network engineering, a Request for Comments (RFC) is a formal document published by the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), and the global community of computer network researchers to establish Internet standards. The Internet Engineering Task Force (IETF) develops and promotes voluntary Internet standards.

The IETF started out as an activity supported by the U.S. federal government, but since 1993 it has operated as a standards development function under the auspices of the Internet Society, an international membership-based non-profit organization.

Which came first the Internet model or the ISO model?

A question often asked is which network reference model came first. Various sources state that the groundwork for the Open Systems Interconnection model (OSI Model) was started in the 1970s by a group at Honeywell Information Systems. Other sources point to two projects that began independently in the 1970s to define a unifying standard for the architecture of networking systems: one administered by the International Organization for Standardization (ISO), and one by the International Telegraph and Telephone Consultative Committee (CCITT).

RFC 871, published in September 1982, is one of the first formal descriptions of the ARPANET Reference Model (ARM). The introduction of RFC 871 addresses the history of the internet model versus the ISO model.

"Since well before ISO even took an interest in "networking", workers in the ARPA-sponsored research community have been going about their business of doing research and development in intercomputer networking with a particular frame of reference in mind."

Is there an official document that explains the ARPANET Reference Model (ARM)?

RFC 871 was published in September 1982 as a recollection of the past by one of the developers of the ARPANET Reference Model, offered, as the author describes it, "as a perspective on the ARM." The author points out that the ARPANET Network Working Group (NWG), which was the collective source of the ARM, hasn't had an official general meeting since October 1971.

The four-layer internet model was defined in Requests for Comments 1122 and 1123. RFC 1122, published October 1989, covers the link layer, IP layer, and transport layer, and companion RFC 1123 covers the application layer and support protocols.

The TCP/IP Model is not merely a reduced version of the OSI Reference Model with a straight-line comparison of the four layers of the TCP/IP model to the seven layers of the OSI model. As you read through many of the RFC documents on IETF protocol development, you will see direct statements that the developers are not concerned with strict layering, such as Section 3 of RFC 3439, which is titled "Layering Considered Harmful."

The links below to RFC 1958 and RFC 3439 will help you understand the general mindset of the developers of TCP/IP. RFC 1122 and RFC 1123 are the definitions of the four protocol layers of the TCP/IP model. As the constantly growing library of RFCs illustrates, the concept of TCP/IP is an ongoing evolution.
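To make the "no neat one-to-one correlation" point concrete, here is a small Python sketch of the two layer stacks side by side. The mapping shown is the rough correspondence commonly drawn in textbooks, not an official equivalence; as noted above, the RFCs themselves caution against strict layering.

```python
# The four TCP/IP layers (RFC 1122/1123) and seven OSI layers (ISO 7498),
# bottom to top. The mapping below is approximate and commonly drawn,
# not an official one-to-one correspondence.
TCP_IP_LAYERS = ["link", "internet", "transport", "application"]

OSI_LAYERS = [
    "physical", "data link", "network",
    "transport", "session", "presentation", "application",
]

ROUGH_MAPPING = {
    "link":        ["physical", "data link"],
    "internet":    ["network"],
    "transport":   ["transport"],
    "application": ["session", "presentation", "application"],
}

for tcp_layer in TCP_IP_LAYERS:
    print(f"{tcp_layer:12} ~ {', '.join(ROUGH_MAPPING[tcp_layer])}")
```

Notice that the edges of the stacks line up loosely (several OSI layers collapse into one TCP/IP layer), which is exactly why forcing a neat comparison is misleading.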

References:

Request for Comments (RFC) http://www.ietf.org/rfc.html

Memos in the Requests for Comments (RFC) document series contain technical and organizational notes about the Internet. The Internet Engineering Task Force (IETF) is a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet.

RFC 871: September 1982 https://tools.ietf.org/html/rfc871

A perspective on the ARPANET REFERENCE MODEL
Abstract: The paper, by one of its developers, describes the conceptual framework in which the ARPANET intercomputer networking protocol suite, including the DoD standard Transmission Control Protocol (TCP) and Internet Protocol (IP), were designed.

RFC 1122: October 1989 https://tools.ietf.org/html/rfc1122
This RFC covers the communications protocol layers: link layer, IP layer, and transport layer.

RFC 1123: October 1989 https://tools.ietf.org/html/rfc1123
This RFC covers the applications layer and support protocols.

RFC 1958: June 1996 https://tools.ietf.org/html/rfc1958
Architectural Principles of the Internet

RFC 3439: December 2002 https://tools.ietf.org/html/rfc3439
Internet Architectural Guidelines
Extends RFC 1958 by outlining some of the philosophical guidelines to which architects and designers of Internet backbone networks should adhere.



Links to learn more:

Check out our site Geek History where we discuss the evolution of the ARPANET and TCP/IP

Why was the internet created: 1957 Sputnik launches ARPA

When was internet invented: J.C.R. Licklider guides 1960s ARPA Vision

In the 1960s Paul Baran developed packet switching

The 1980s internet protocols become universal language of computers

Photo: Interface Message Processor (IMP) ARPANET packet routing
The evolution of the Internet and the birth of TCP/IP

During the 1970s Vinton Cerf would collaborate with Bob Kahn as key members of a team to create TCP/IP, the building blocks of the modern internet. The creation of the protocol suite TCP/IP as the basic set of rules for computers to communicate was one of the last major phases in the development of this global network we now call the Internet.

The internet was not something born of a single idea, but rather a gradual evolution, and the work of many people over many years.

The idea started with a vision to create a decentralized computer network, whereby every computer was connected to every other, so that if one member of the system was knocked out, the others would remain unaffected.

From the initial idea of a decentralized computer network came the concept of packet switching. During the 1960s Paul Baran developed the concept of packet switching networks while conducting research at the historic RAND organization.
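The core of packet switching is that a message is broken into numbered pieces that can travel independently and be reassembled at the destination. Here is a toy Python sketch of that idea; the function names and packet size are invented for illustration and are not part of any real protocol.

```python
def to_packets(message: bytes, size: int):
    """Split a message into numbered packets: (sequence number, payload)."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Rebuild the original message even if packets arrived out of order."""
    return b"".join(payload for _, payload in sorted(packets))

msg = b"packets may take different routes"
pkts = to_packets(msg, 8)
pkts.reverse()  # simulate packets arriving in the wrong order
assert reassemble(pkts) == msg
```

Because each packet carries its own sequence number, the network is free to route packets along different paths, which is what makes the decentralized design above resilient.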

What is a Protocol?

Once the concept of packet switching was developed, the next stage in the evolution was to create a language that would be understood by all computer systems.

The network concept of protocols establishes a standard set of rules that enables different types of computers, with different hardware and software platforms, to communicate in spite of their differences. Protocols describe both the format that a message must take and the way in which messages are exchanged between computers.
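The two halves of that definition, an agreed message format plus agreed rules of exchange, can be sketched in a few lines of Python. The 4-byte header layout here is invented for illustration (it is not TCP, IP, or any real protocol), but the use of network byte order mirrors what real protocols do.

```python
import struct

# Hypothetical 4-byte header: 1-byte version, 1-byte message type,
# 2-byte payload length, in network (big-endian) byte order.
HEADER = struct.Struct("!BBH")

def encode(version: int, msg_type: int, payload: bytes) -> bytes:
    """Format a message the way the (made-up) protocol requires."""
    return HEADER.pack(version, msg_type, len(payload)) + payload

def decode(frame: bytes):
    """Parse a frame back into its agreed-upon fields."""
    version, msg_type, length = HEADER.unpack_from(frame)
    return version, msg_type, frame[HEADER.size:HEADER.size + length]

frame = encode(1, 7, b"hello")
assert decode(frame) == (1, 7, b"hello")
```

Because both sides agree on the header layout in advance, a machine with completely different hardware and software can still parse the message, which is the whole point of a protocol.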

During the 1970s Bob Kahn and Vinton Cerf would collaborate as key members of a team to create TCP/IP, Transmission Control Protocol (TCP) and Internet Protocol (IP), the building blocks of the modern internet.


TCP/IP RFC History

The creation of TCP/IP as the basic set of rules for computers to communicate was one of the last major phases in the development of this global network we now call the Internet. Many additional members of the TCP/IP family of protocols continue to be developed, expanding on the basic principles established by Bob Kahn and Vinton Cerf back in the 1970s.

In 1981 the TCP/IP standards were published as RFCs 791, 792, and 793 and adopted for use. On January 1, 1983, the TCP/IP protocols became the only approved protocols on the ARPANET, the predecessor to today's internet.

Vinton Cerf photo by Joi Ito
Attribution 2.0 Generic (CC BY 2.0)