The History of the Internet

 

The CloudINX History


The Internet we use today was switched on in January 1983. Its predecessor was ARPANET, a military research network constructed by the United States in the late 1960s.

Largely completed by 1972, ARPANET was designed and set up for advanced research and industrial laboratories in response to the Soviet Union's launch of the first artificial satellite, Sputnik, in 1957. Sputnik's launch came at the height of the Cold War and triggered fears within the Eisenhower administration that the US was falling behind its superpower rival in aerospace and other high-tech industries.
As part of this technology mission, ARPA was founded to fund research in computer science departments across the US.

The agency provided research grants used to purchase large and expensive mainframe computers. These mainframes, however, were incompatible with one another: they had different architectures, different operating systems, and incompatible interfaces, so the ARPA-funded projects deployed across various parts of the country could not work together. These inefficiencies led to the development of ARPANET. In 1965 a young Texan named Bob Taylor became director of ARPA's Information Processing Techniques Office (IPTO), the division of the agency responsible for advanced computing. The continued evolution of the connected Internet drives Internet business as we know it today; networking has grown into what we use every day and know as the Internet.

The first major network connecting universities and government was ARPANET (the Advanced Research Projects Agency Network). Its aim was to link universities and government sites to hubs and to provide a way to exchange data in the event of outages; pockets of connected hubs began to network across continents, ensuring connectivity during the Cold War. At first, universities and colleges reacted negatively to the idea of non-educational use of the networks.


However, as ISPs emerged and the cost of Internet services became more affordable, educational institutions were able to participate in new areas of science, development, and research.

The founding principle of all networks has not changed much: at the most basic level, information and communication requirements are met by exchanging data over a secured networked platform carried via satellite and undersea cables. By the end of the decade, the first Internet Service Providers (ISPs) were formed. These ISPs spearheaded the movement that grew into Internet data centers or colocation hubs, Internet network exchanges (INXs), and Internet exchange points (IXPs).


Around the world, INX (Internet Network Exchange) hubs formed to host clients' servers. Pioneering this movement was a company originally called UUNET Communications Services, initially formed as a non-profit entity in 1987.

The History of Connecting
Peering in the Cloud at Two Exchange Points

The first peering session traces back to the 1980s, when two government-funded network projects required interconnection: ARPANET (the Advanced Research Projects Agency Network) and CSNET (the Computer Science Network). The two networks were operated by different organizations, with different structures and equipment, and the motivation was to interconnect them as seamlessly as possible using Internet protocols understood by both, forming the basis of cloud peering at the exchange.

Founding members of CloudINX began their early careers at UUNET, spending many hours in the data centers of global ISPs and helping to develop the communities and infrastructure that shaped the Internet as we know it today. UUNET launched an independent IP backbone called AlterNet and became the fastest-growing ISP of the mid-90s. Surpassing MCI, Sprint, and PSI (one of the original ISPs), it was later purchased by WorldCom.

CloudINX is dedicated to providing our customers with innovative connectivity solutions while keeping a focus on their ROI and business alignment.

1994: A Silicon Valley Dot-Com Rush


Commercial restrictions that had previously prevented entrepreneurs and companies from fully utilizing the Internet fell away. ISPs and colocation facilities sprang up to take advantage of a new technological wave that would prove to be a breakthrough for the global businesses forming daily.
The CloudINX concept was inspired by the original 90s Silicon Valley ISPs, which helped pave the way for Internet business to be recognized as a viable means of commerce. They equally contributed to educational and medical institutions, implementing special programs that offered free Internet services to colleges and universities to help sustain research and educational growth.

The dot-com boom and bust forced many ISPs into bankruptcy. ISP colocation facilities struggled financially, and large corporations simply jumped ship to avoid further losses. Through our consolidated cloud and virtualized approach we survived the bust and run a profitable independent organization showing year-on-year growth, a landmark achievement that reflects a diligence and stability unlike much of our competition. Through our strategic global partnerships, we currently operate across some of the largest state-of-the-art colocation facilities, from Silicon Valley to other facilities worldwide. CloudINX is a business enabler for entrepreneurs, consumers, corporates, governments, militaries, and learning institutes. As technology changes, the networked environment creates opportunities to connect business globally.

Today, CloudINX delivers measurable financial returns.

Multinationals, entrepreneurs, ISPs, telecoms, virtual ISPs, VoIP providers, broadcasters, and companies like yours are turning to CloudINX as a crucial partner for cost savings and eco-friendly solutions that maximize their stability and financial growth. Prime, accessible locations combined with sophisticated maintenance and services are available at affordable pricing.

Global colocation hubs provide the means for any company to grow competitively, realize its connected vision with stability, and achieve the financial success it deserves. Real-time commerce and instant global communication are the driving forces of most business.

Partnering with CloudINX yields an ROI second to none.


Throughout their existence, colocation centers such as CloudINX have consistently driven Internet business to new heights by bringing technical reliability to entrepreneurs and cutting costs significantly. The Internet continues to play a distinctive role in a company's profitability and expansion, providing new opportunities in emerging markets. The Internet is an integrated business tool, and companies continue to rely on its growing network of connected hubs. The incomparable strengths of colocation make it the choice for companies wishing to compete in domestic and worldwide trade, leveraging networked communities and making cloud colocation (CloudINX) a global benchmark for Internet business.

For content and service providers who operate their own Autonomous System (AS), CloudINX provides an introduction to a global ecosystem of international ISPs and dedicated, diverse Internet connectivity via our global IP backbone partners, extending clients' reach and letting them benefit from the speed of direct connections.
Extend your reach in the ecosystem. Global IP connectivity at the core.

Direct connectivity between content and customers is the key to high-quality, low-latency delivery. Connect in the cloud: we have access to global carriers and networks, and interconnecting with their Tier 1 networks through CloudINX makes a full global routing table possible with minimal AS hops. CloudINX provides solutions via our partners for direct, uninterrupted paths between global PoPs throughout North America, Europe, Asia, and Africa. For service providers and other organizations that do not operate their own AS, CloudINX provides access to high-quality wholesale networks via a choice of global fiber backbone partners, delivered over an E-NNI solution.
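The value of minimal AS hops can be illustrated with a small sketch. The Python snippet below is an illustration only, not CloudINX's routing software; the prefix and AS numbers are made-up documentation values. It shows BGP-style best-path selection preferring the route with the shortest AS path, which is why a route learned directly at an exchange usually beats one learned via upstream transit.

```python
# Minimal sketch of shortest-AS-path route selection (illustrative only;
# the prefix and AS numbers below are hypothetical documentation values).

from typing import Dict, List

# Hypothetical routes for one prefix: each entry is the sequence of
# autonomous systems the announcement has traversed.
routes: Dict[str, List[List[int]]] = {
    "203.0.113.0/24": [
        [64500, 64496],                # learned directly at the exchange
        [64510, 64520, 64530, 64496],  # learned via an upstream transit provider
    ],
}

def best_path(paths: List[List[int]]) -> List[int]:
    """Pick the path with the fewest AS hops (shortest AS path)."""
    return min(paths, key=len)

for prefix, paths in routes.items():
    chosen = best_path(paths)
    print(f"{prefix}: prefer AS path {chosen} ({len(chosen)} hops)")
```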

Plug in to the world and gain the performance of direct connections backed by industry-leading reliability and security.

An Internet exchange point (also referred to as a network node, IX, CLOUDIXP, or CLOUDINX) is physical infrastructure through which Internet service providers (ISPs) and cloud providers exchange Internet traffic between their networks.

The primary purpose of an INX is to reduce the portion of an ISP's traffic that is delivered via upstream transit providers, thereby reducing the average per-bit delivery cost of its service. By increasing the number of network paths learned through the exchange, routing efficiency, network speed, data redundancy, service continuity, and fault tolerance are maintained. An INX allows networks to interconnect directly with each other via the exchange, rather than through one or more third-party networks.

The benefits of connecting at an INX are numerous, but cost, latency, and bandwidth are the key factors.

Traffic passing through an exchange is typically not billed by any party, whereas traffic to an ISP's upstream provider is. Direct interconnection, often located in the same city as both networks, avoids the need for data to travel to other cities (potentially on other continents) to deliver packets from one network to another, reducing latency. The aim of any INX is to keep local content local, ensuring faster delivery of data. The other advantage of connecting at an INX is network speed, which is most noticeable in areas with poorly developed long-distance connections. ISPs in these regions often pay between 10 and 100 times more for data transport than ISPs in North America, Europe, Africa, or Asia. These ISPs therefore typically have slower, more limited connections to the rest of the global Internet. A connection to a local node or INX, however, may allow them to transfer data without limit and without cost, vastly improving the bandwidth between customers of the two adjacent ISPs.
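As a rough illustration of the per-bit cost argument above, the Python sketch below compares an all-transit scenario with a mixed transit-plus-peering one. All prices, traffic volumes, and the peered-traffic fraction are assumed figures for the example, not CloudINX or any exchange's actual pricing.

```python
# Illustrative cost comparison: paid upstream transit vs. shifting part of
# the traffic onto settlement-free peering at an exchange. All numbers are
# assumptions chosen only to show the arithmetic.

transit_price_per_mbps = 10.0   # USD per Mbps per month via upstream transit (assumed)
ixp_port_fee = 500.0            # USD per month for a fixed-size exchange port (assumed)
total_traffic_mbps = 2000.0     # traffic the ISP must deliver (assumed)
peered_fraction = 0.6           # share of traffic reachable via peers at the exchange (assumed)

transit_only_cost = total_traffic_mbps * transit_price_per_mbps
mixed_cost = (total_traffic_mbps * (1 - peered_fraction) * transit_price_per_mbps
              + ixp_port_fee)

print(f"Transit only:      ${transit_only_cost:,.0f}/month")
print(f"Transit + peering: ${mixed_cost:,.0f}/month")
print(f"Average cost per Mbps drops from "
      f"${transit_only_cost / total_traffic_mbps:.2f} to "
      f"${mixed_cost / total_traffic_mbps:.2f}")
```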

A typical node consists of network switches, to which each of the participating ISPs connects. Prior to the existence of switches, these nodes/hubs typically employed fiber-optic inter-repeater link (FOIRL) hubs or Fiber Distributed Data Interface (FDDI) rings, migrating to Ethernet and FDDI switches as those became available in 1993 and 1994.

Asynchronous Transfer Mode (ATM) switches were briefly used at a few nodes in the late 1990s, accounting for approximately 4% of the market at their peak, and there was an abortive attempt by the Stockholm IXP, NetNod, to use SRP/DPT, but Ethernet technology and the TCP/IP protocol suite have prevailed, accounting for more than 95% of all existing Internet exchange switch fabrics.

Port speeds at modern IXPs and INXs range from 10 Mbit/s ports, still in use at small developing-country INXs, to 10+ Gbit/s ports in major metropolitan centers such as Seoul, New York, London, Frankfurt, Amsterdam, and Palo Alto, as well as across Africa. Ports of 100 Gbit/s are available at, for example, AMS-IX in Amsterdam and DE-CIX in Frankfurt.

The technical and business logistics of traffic exchange between ISPs are governed by mutual peering agreements. Under such agreements, traffic is often exchanged without compensation. When an exchange point incurs operating costs, they are typically shared among all of its participants.

At the more expensive exchanges, participants pay a monthly or annual fee, usually determined by the speed of the ports which they are using, or much less commonly by the volume of traffic which they are passing across the exchange. Fees based on volume of traffic are unpopular because they provide a counterincentive to growth of the exchange. Some exchanges charge a setup fee to offset the costs of the switch port and any media adaptors (gigabit interface converters, small form-factor pluggable transceivers, XFP transceivers, XENPAKs, etc.) that the new participant requires.

In their history of the Internet, Katie Hafner and Matthew Lyon described: “There, side by side, sat three computer terminals, each a different make, each connected to a separate mainframe computer running at three separate sites. There was a modified IBM Selectric typewriter terminal connected to a computer at the Massachusetts Institute of Technology in Cambridge. A Model 33 Teletype terminal, resembling a metal desk with a large noisy typewriter embedded in it, was linked to a computer at the University of California in Berkeley. And another Teletype terminal, a Model 35, was dedicated to a computer in Santa Monica, California, called, cryptically enough, the AN/FSQ 32XD1A, nicknamed the Q-32, a hulking machine built by IBM for the Strategic Air Command. Each of the terminals in Taylor’s suite was an extension of a different computing environment—different programming languages, operating systems, and the like—within each of the distant mainframes. Each had a different log-in procedure; Taylor knew them all. But he found it irksome to have to remember which log-in procedure to use for which computer. And it was still more irksome, after he logged in, to be forced to remember which commands belonged to which computing environment. Communicating with that community from the terminal room next to Taylor’s office was a tedious process. The equipment was state of the art, but having a room cluttered with assorted computer terminals was like having a den cluttered with several television sets, each dedicated to a different channel.” “It became obvious,” Taylor said many years later, “that we ought to find a way to connect all these different machines.”

A way to connect the computers was found. With initial funding of one million US dollars, the IPTO awarded a contract to the Boston consulting firm Bolt Beranek and Newman (BBN) to build the ARPANET. The initial design had to cope with incompatible mainframe computers located at secret, remote sites, and there were only two ways for them to communicate with each other: agree on a common language, a set of communication rules that we call a protocol, or build a subnetwork of identical computers. The chosen solution was the subnetwork of identical computers, which reduced the amount of additional software development required for all the mainframes to communicate. Each system was reprogrammed to interface with its Interface Message Processor (IMP), and after the rollout and implementation the system engineers adopted the new BBN design going forward.

Neutral Traffic Exchange Locations (The Internetworking Project)


With growing demand for information across the globe, additional networks adopted ARPANET's packet-switching solutions, which were implemented and rolled out to other countries at astonishing speed; this new network connected the military, the Pentagon, and other networks. Smaller networks that adopted the technology included the Cyclades network in France and the network at the National Physical Laboratory in Britain, led by Donald Davies and Roger Scantlebury, as well as the ALOHA network at the University of Hawaii, which used radio links to connect its nodes. Each of these new networks used the same backbone technologies as the ARPANET but was owned and operated independently, for vastly different purposes.


So ARPA's engineers took on a new challenge: to connect and integrate all of these autonomous networks with ARPANET to make one large global network, or cloud fabric, a network of all networks. The Internetworking Project was commissioned in 1973 and led by Vinton (Vint) Cerf and Robert Kahn.
At face value the project seemed simple, but it faced difficult design problems: how to design one network that interconnects other networks, or run software on other computers, when no one controls anyone else's network; and how to take a network originally designed to carry telephony (voice) traffic and make it carry data, while future-proofing it for video services and other big-data requirements that were only ideas at the time and are still developing today. The answer to the major internetworking problem was to keep the core network decentralized, with no central control, simple and effective, and not optimized for any one particular application (today we call this the end-to-end principle). The absence of central control allows freedom of speech and prevents monopolization, enabling constant innovation, discovery, and the free flow of information. With the future uses of the Internet unknown, the design needed to be simple: take a data packet in at one end and try its best to deliver the same packet to the intended destination at the other. The protocols enabled computers on the network to act as gateways, so messages could be passed from one node or routing station to the next reliably and unaffected.
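A toy Python sketch of that forwarding idea follows: gateways hold only a next-hop entry per destination and simply pass the packet along, leaving everything else to the end hosts. The node names and topology are invented purely for illustration.

```python
# Toy sketch of the "dumb core" idea: gateways do nothing but pass a packet
# toward its destination hop by hop. Node names and topology are hypothetical.

from typing import Dict

# Each gateway only knows the next hop toward a destination network.
next_hop: Dict[str, Dict[str, str]] = {
    "gateway-A": {"net-X": "gateway-B"},
    "gateway-B": {"net-X": "gateway-C"},
    "gateway-C": {"net-X": "deliver"},
}

def forward(packet: dict, start: str) -> None:
    """Pass the packet from gateway to gateway until it can be delivered."""
    node = start
    while True:
        hop = next_hop[node][packet["dst"]]
        if hop == "deliver":
            print(f"{node}: delivered payload {packet['payload']!r} to {packet['dst']}")
            return
        print(f"{node}: forwarding packet for {packet['dst']} to {hop}")
        node = hop

forward({"dst": "net-X", "payload": "hello"}, "gateway-A")
```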


Two protocols were used: the Transmission Control Protocol (TCP), which is responsible for ensuring each packet is sent and received in the correct order, and the Internet Protocol (IP), which specifies the format of packets and the addressing scheme. Known together as TCP/IP, and true to the basic architectural principles above, this foundation has enabled the Internet to grow into what we use daily. With no central control, a global network is possible; this open and permissive architecture allows anyone to connect to the Internet, as long as their computer has a TCP/IP protocol stack.
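As a minimal illustration of that openness, the short Python sketch below uses the standard socket library to test whether a TCP connection can be opened to a remote endpoint. The host and port shown are placeholders, not a real CloudINX service.

```python
# Minimal sketch: any host with a TCP/IP stack can attempt a connection to
# any reachable endpoint. The endpoint below is a placeholder example.

import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to (host, port) can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(tcp_reachable("example.com", 80))  # placeholder endpoint
```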

Colocation and Hosting at CloudINX

A colocation center (also known as co-location, collocation, colo, or coloc) is a type of data center where equipment, space, and bandwidth are available for rental to retail customers. Colocation facilities provide space, power, cooling, and physical security for the server, storage, and networking equipment of anyone wanting a direct, low-latency, cost-effective connection with no third-party involvement. Facilities such as CloudINX connect clients to a variety of telecommunications and network service providers with a minimum of cost and complexity. Colocation has become a popular option for companies with midsize IT needs, especially those in Internet-related business, because it allows the company to focus its IT staff on the actual work being done instead of the logistical support that underlies it. Significant economies of scale (large power and mechanical systems) result in large colocation facilities, typically 4,500 to 9,500 square meters (roughly 50,000 to 100,000 square feet).


Colocation facilities provide these services as a retail rental business, usually on a term contract.
Simply.Connect