INTRODUCTION

The emergence of the data center did not happen overnight; it took multiple decades for data centers to become what they are today. The relationship between data centers and cloud computing is something of a chicken-and-egg one: every cloud computing environment requires a data center, and for a data center to survive, its physical infrastructure needs to exist alongside it. I recently came across an article by Stuart Miniman titled ‘The Data Center: Past, Present and Future’, and it inspired me to dig deeper and write about the history and evolution of the data center. For tech followers, ‘data center’ is a familiar term, but even as a layperson you will find the topic interesting, and believe me, it is worth knowing what these terms really mean when they appear together, as it is a subject everyone should understand in the modern age of digitisation.

The article published on Wikibon also offered a simple definition of the data center, one I haven’t read elsewhere but have always believed in: “Data centers are at the center of modern software technology, serving a critical role in the expanding capabilities for enterprises.”

 

FROM PAST TO PRESENT:

The Electronic Numerical Integrator and Computer (ENIAC) is known as the first general-purpose electronic computer. It was built for the United States Army’s Ballistic Research Laboratory in 1945–46, and the whole machine weighed around 30 tons. That weight went hand in hand with ENIAC’s sheer size: it took up almost 1,800 square feet of floor space! Keeping the machine running required as many as six technicians throughout its operating hours. For all that, ENIAC could perform about 5,000 operations per second. This was a time when computers were not accessible to the general public, and government agencies and the army were making the best possible use of them. Nobody had coined the term ‘data center’ yet, but today the same room would have housed a complete data center unit enabling business efficiency.

ENIAC being unveiled by its creators

With the advent of the 1960s, the vacuum tubes in computing machines were replaced by solid-state components such as transistors, which were more durable and considerably smaller. The new components were also more cost-effective, more reliable and more space-efficient. Even so, these machines cost a fortune: in those days a single computer would sell for somewhere around $4–5 million apiece.

In the mid-1960s, computers started gaining commercial relevance, but a single computer was still shared by multiple companies. To cite an example, IBM and American Airlines jointly developed the Sabre system, known as one of the first airline reservation programs. The system was located at a computer center in New York.

A major transition came when companies replaced magnetic-core memory with solid-state static and dynamic semiconductor devices. This proved to be a game changer: the new semiconductor memory was far cheaper, used far less power than its predecessor and took up less space.

Intel® 4004 became the first general-purpose programmable processor on the market—a “building block” that engineers could purchase and then customize with software to perform different functions in a wide variety of electronic devices. (http://www.intel.com/)

With the evolution of computers under way, the 1970s can be called the decade when data center technology really took shape. In 1971, Intel released the 4004, the world’s first commercial microprocessor. Data security was already a major concern for businesses in that period, and formal disaster recovery plans started to be drawn up for data centers across the US.

The Xerox Alto made history on 1 March 1973 when it became the first computer designed from the start to support an operating system based on a GUI (graphical user interface). It created a lot of buzz with its new-age features:

First computer designed from the start to support an operating system based on GUI
  • Bit-mapped high resolution screen
  • Mouse
  • Dedicated software
  • Internal/External memory storage

Another great step toward the modern data center was the introduction of ARCnet (Attached Resource Computer NETwork) in 1977, which became an instant hit for office automation tasks and earned the tag of the first widely available networking system for microcomputers.

The IBM Personal Computer was created by a team of engineers and designers under the direction of Don Estridge of the IBM Entry Systems Division in Boca Raton, Florida.

No tech enthusiast can forget the contribution of the 1980s to the computing and technology world. Those of us born in the 1980s must thank the computer scientists, researchers and developers behind the IBM Personal Computer (PC). This can be called the decade when computers started reaching households, and the early adopters, at least, were thrilled about it. Yet although computers were found everywhere during the 1980s, too little consideration was given to the operating requirements of the devices.

Known today as IBM System i, the IBM Application System/400 (AS/400) became one of the most popular business computing systems of the late 1980s. This was also the period when IT operations started becoming complex and a better control mechanism for information technology resources was needed, which brings us to the 1990s.

The microcomputers we know as servers today were placed into computer rooms, which became the data centers of the 1990s. In this decade, enterprises started setting up server rooms within their own facilities. This became possible for two main reasons:

  • Networking equipment was readily available
  • The equipment required for the setup was affordable for companies

The dot-com boom also meant a boom in data centers. More and more businesses wanted fast, uninterrupted internet connectivity so they could run their operations smoothly and without a hitch. Every business shared the common goal of being present on the internet, and this is when companies came up with the idea of building large facilities, or ‘data centers’, to provide businesses with a wide array of services and solutions around systems deployment and operations.

In 1999, VMware started selling VMware Workstation, which was broadly similar to Virtual PC. The initial versions ran only on Windows, while later releases supported other operating systems of the time. In the same year, Salesforce.com pioneered the concept of delivering enterprise applications through a website. Just a few years later, in 2001, VMware launched ESX, a bare-metal hypervisor that ran directly on server hardware and did not require any underlying operating system.

In 2002, Amazon Web Services (AWS) launched a suite of cloud-based services that included computation, storage and even human intelligence through Amazon Mechanical Turk. By 2006, AWS was offering IT infrastructure services to businesses, largely in the form of web services, which today go by the name of cloud computing.

It was Sun Microsystems that, in 2007, introduced the concept of the modular data center, transforming the basic principles of corporate computing.

The Open Compute Project is an initiative started by Facebook to share more efficient server and data center designs with the general information technology (IT) industry.

Another landmark came in 2011, when Facebook launched the Open Compute Project. Google, too, invested heavily in internet infrastructure during 2013, driven by its plan for a massive expansion of its global data center network; the effort made history as the largest construction push the data center industry had ever seen.

The Current Scenario:

Data center locations by country

 

Today’s data centers are moving from the earlier hardware-and-software ownership model towards a capacity-on-demand model. Modern data centers are crucial to enterprise growth and success, and they remain entities in their own right.

  • By one approximation, 5.8 million new servers are deployed every year.
  • The number of government data centers has almost doubled since 1999.
  • U.S. data centers today consume about 1.5% of the nation’s total energy, and that demand is growing at around 10% every year (a rough projection of what that growth implies is sketched just after this list).
  • The new age calls for more energy-efficient data centers.
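
To make that growth figure a little more concrete, here is a minimal back-of-the-envelope sketch in Python. It is my own illustration rather than anything from the Wikibon article: it simply compounds the 1.5% share quoted above at 10% per year, under the simplifying assumption that total national energy consumption stays flat.

    # Rough projection of the U.S. data center share of total energy, assuming
    # demand keeps growing 10% per year while total consumption stays flat.
    BASELINE_SHARE = 0.015   # 1.5% of total energy today (figure quoted above)
    ANNUAL_GROWTH = 0.10     # 10% growth per year (figure quoted above)

    def projected_share(years: int) -> float:
        """Share of total energy after a given number of years."""
        return BASELINE_SHARE * (1 + ANNUAL_GROWTH) ** years

    for years in (5, 10, 15):
        print(f"After {years:2d} years: ~{projected_share(years):.1%} of total energy")
    # Prints roughly 2.4%, 3.9% and 6.3% respectively.

Even under that crude assumption the share roughly doubles every seven years or so, which is exactly why the call in the last bullet for more energy-efficient data centers matters.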

 

Interesting Data Center Facts:

  • The total number of data centers around the globe is around 509,147, occupying an area equivalent to as many as 6,000 football fields!
  • The total amount of data created in 2015 was 1.2 trillion GB (roughly 1.2 zettabytes).
  • As per EPRI (the Electric Power Research Institute), “For every watt of computer power consumed by a data center, it takes another watt to cool it”. This simply means that as much as 5-10 MW of power goes into cooling alone at an average data center (a quick back-of-the-envelope sketch of that arithmetic follows just below).
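
To put EPRI’s watt-for-watt rule into numbers, here is a small Python sketch, again my own illustration with made-up helper names rather than anything from the article. It takes an IT load, applies a roughly 1:1 cooling ratio, and works out the total draw and the resulting PUE (power usage effectiveness, i.e. total facility power divided by IT power).

    # Estimate cooling and total facility power from the IT load, using the
    # EPRI rule of thumb that every watt of compute needs roughly another
    # watt of cooling (a cooling-to-IT ratio of about 1.0).
    def facility_power(it_load_mw: float, cooling_ratio: float = 1.0):
        """Return (cooling_mw, total_mw) for a given IT load in megawatts."""
        cooling_mw = it_load_mw * cooling_ratio
        total_mw = it_load_mw + cooling_mw
        return cooling_mw, total_mw

    # Under the 1:1 rule, an average facility where cooling alone runs to
    # 5-10 MW (the article's figure) has a similarly sized IT load.
    for it_mw in (5, 10):
        cooling, total = facility_power(it_mw)
        pue = total / it_mw  # PUE: total facility power divided by IT power
        print(f"IT load {it_mw} MW -> cooling ~{cooling} MW, total ~{total} MW, PUE ~{pue:.1f}")

A PUE of around 2.0 is what the rule of thumb implies; highly optimised modern facilities report figures closer to 1.1–1.2, which gives a sense of the efficiency gap the industry is trying to close.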

The world’s largest data center facilities by floor area include:

  • Range International Information Group – 6,300,000 sq. ft. (China)
  • Switch SuperNAP – 3,500,000 sq. ft. (USA)
  • DuPont Fabros Technology – 1,600,000 sq. ft. (USA)
  • Utah Data Centre – 1,500,000 sq. ft. (USA)
  • Microsoft Data Centre – 1,200,000 sq. ft. (USA)
  • Lakeside Technology Centre – 1,100,000 sq. ft. (USA)
  • Tulip Data Centre – 1,000,000 sq. ft. (India)
  • QTS Metro Data Centre – 990,000 sq. ft. (USA)
  • Next Generation Data Europe – 750,000 sq. ft. (UK)
  • NAP of the Americas – 750,000 sq. ft. (USA)

  • For security reasons, data center companies keep moving data from one center to another. This is done to minimize intrusion and data leakage.
  • According to green computing norms, a data center that has been running for more than seven years is considered outdated. To maintain efficiency, equipment should be upgraded roughly every two years.
  • By one estimate, Google owns as many as 900,000 servers across more than 12 data centers around the world.
“I don’t need a hard disk in my computer if I can get to the server faster… carrying around these non-connected computers is byzantine by comparison.” – Steve Jobs

Cost is one factor that remains a concern for businesses. When they look for data center services, they need a partnership that proves cost-effective without compromising on quality or service. Market leaders have time and again chosen Iron Systems, which provides exactly that kind of cost-saving data center and cloud integration service. The company focuses on build, deployment and support services for any kind of IT infrastructure or data center set-up requirement, and is a global leader in turnkey design, integration, supply chain, deployment and field services.

How did you find this blog post? Let me know via your comments. Keep breathing tech!