A Brief History of Computer Networking

Introduction.  Now for a bit of background.  In the 1960s, computers could very seldom communicate with each other.  When someone needed to move data from one computer to another, it was done with physical media such as punch cards or magnetic tape, and even the formats for those media could be incompatible.

The US military, in particular, was very concerned about communications that could withstand war and espionage.  The Department of Defense wanted networks that could keep functioning even if some connections were destroyed.  ARPANET (the Advanced Research Projects Agency Network) came into being out of this effort, connecting major research universities and eventually carrying these ideas into the rest of the world.  ARPANET was built on two assumptions:

  • The network itself is unreliable and has to be designed to compensate for and overcome its own unreliability
  • All computers on the network need to be equally capable of communicating with other computers on the network

From the start, there was no central authority, since a central authority would make the network more vulnerable to attack in many ways.  These beliefs resulted in a peer-to-peer networking philosophy: data travels in packets that move independently, by any number of different paths and over all kinds of links, to their destination.
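
This routing idea can be sketched with a toy graph search.  The node names and topology below are invented for illustration; real routers use more sophisticated protocols, but the principle is the same: if one path is destroyed, packets can still reach their destination by another.

```python
from collections import deque

def find_path(links, start, dest, down=frozenset()):
    """Breadth-first search for any route from start to dest,
    skipping nodes in `down` (simulating destroyed hosts)."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dest:
            return path
        for nxt in links.get(node, []):
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

# A toy mesh: most nodes have more than one way to reach the others.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

print(find_path(links, "A", "E"))              # a route via B
print(find_path(links, "A", "E", down={"B"}))  # B destroyed: a route via C
```

Note that with node B "destroyed," traffic from A still reaches E through C; only taking out a cut point like D would partition this little network.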

Distributed/Centralized Control.  Think about how centralized authority/control actually makes any sort of operation more vulnerable to attack.  Some classic "military" strategies are:

  • cut off communications from the central authority
  • focus on immobilizing or eliminating the central authority

There are obviously many others.  But central control also has a number of other disadvantages.

  • more vulnerable to corruption
  • more vulnerable to a lack of empathy for those who are under "control"
  • more vulnerable to being out of touch with what is really happening
  • more vulnerable to rewarding only those who benefit those in "control"

Obviously, as more central control is exerted, all of these vulnerabilities increase.

As we will see throughout the semester, how much centralized control should be used is an issue of much complexity.  Unfortunately, the problem always seems to be at its worst when "authorities" are least willing to examine how much control they do and should exert.

But considering how much we see individuals striving to exert centralized control, it is obvious that it has its advantages, at least for some people.

Distributed control, of course, has its own advantages and disadvantages.  We will get into this to varying extents throughout the semester, but some of the advantages of more distributed systems are given in the following list.

  • taking out any "nodes" or sections of the network shouldn't take down the entire network
    • there are plenty of alternative routes/connections
  • expertise is spread out and developing, hopefully, through interaction and cross-fertilization
  • growth/depletion is more likely to be organic and adaptive to demand

What we will see in practice is that

  • the overall internet has developed as a distributed peer-to-peer sort of network
    • based on common standards
    • requiring common protocols
    • requiring demand sensitivity
    • requiring intelligent routing of information
  • more localized networks on this internet are more typically client-server in their basic structure, with fairly local administrative control
    • improved security
    • improved leveraging of expertise
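
To make the client-server contrast concrete, here is a minimal sketch of the structure: one server owns a resource and answers requests, while clients only initiate requests.  (The tiny key-value "store" and the one-request protocol are invented for illustration; real services layer much more on top of this.)

```python
import socket
import threading

# The server owns the resource: a tiny key-value store.
STORE = {"motd": "hello from the server"}

def serve_once(sock):
    """Accept a single connection, answer one request, then exit."""
    conn, _ = sock.accept()
    with conn:
        key = conn.recv(1024).decode()
        conn.sendall(STORE.get(key, "not found").encode())

# Bind to an OS-assigned port on localhost and run the server in a thread.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

# The client side: connect, ask for a key, read the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"motd")
    reply = client.recv(1024).decode()

t.join()
server.close()
print(reply)  # hello from the server
```

Note the asymmetry: the client must know where the server is, and if the server goes down the resource is gone.  In the peer-to-peer model every node plays both roles, which is exactly the redundancy ARPANET was designed around.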

But these are issues that we will discuss in much more detail over the entire semester.