
A brief history of the computer

With computers now commonplace in every home, workplace and pocket, Simon Handby traces the development of the technology

If you’re a typical Expert Reviews reader, the chances are that you use a computer at work, have one or two at home, and count more than a handful between your television, games console, car and mobile phone. Computers and computer technology have become an indispensable part of modern life, and their widespread uptake is changing the way we live, but computing for all is still relatively new – and it’s something that many early pioneers didn’t foresee.

The first true computers were electromechanical giants, developed by governments and institutions driven on by the desperate circumstances of the Second World War. Computers remained in the hands of universities, governments and big business for decades after the war’s end, but as the technology improved they became smaller, more affordable and more accessible until they came into our homes and ultimately our pockets. Here we chart the history of computing, telling the story of how such powerful tools have ended up in so many hands.

EARLY BEGINNINGS

Most histories of the computer start with the English mathematician and engineer Charles Babbage, whose unfinished ‘analytical engine’ was undoubtedly the first design for what we now think of as a computer: a machine that takes an input, mathematically manipulates it according to a customisable program, and produces an output. Babbage was a true visionary; it’s a somewhat macabre indication of the esteem in which he was held that one half of his brain remains on display at the Hunterian Museum in the Royal College of Surgeons, and the other at the Science Museum. Still, even his work built on some existing fundamentals.

Mankind had been using machines to aid calculation since at least the appearance of the abacus, thought to date back to before 2300BC, but it was in Renaissance Europe that engineers began to produce far more sophisticated calculating devices, some of which had a degree of programmability. In 1801, as the Industrial Revolution gathered pace, Joseph Marie Jacquard invented a weaving loom that could be programmed with punched cards to produce different patterns – the first machine to be given instructions in this way.

The reconstruction of Babbage’s difference engine at the London Science Museum

Babbage sought a way to remove human errors from the mathematical tables available in the early 19th century, devising his mechanical ‘difference engine’ to calculate polynomial functions (a common type of algebraic expression). Though it was never finished, the first difference engine would have contained more than 25,000 parts and weighed over 13 tonnes. A revised design was completed in 1991 by the Science Museum, and found to work perfectly.

More complex still, and also unfinished, Babbage’s analytical engine added features that define modern computers. It could be programmed with cards, but could also store the results of calculations and perform new calculations on those. Babbage intended it to support conditional branches and loops, concepts fundamental to all modern programming languages. His death in 1871 meant that he never finalised his designs for the engine, but his son Henry completed its core computing unit – ‘the mill’ – in 1888.

WARGAMES

While various inventions led to some early analogue, non-programmable calculating machines, the next major advances took place immediately before and during the Second World War, most notably at the Bletchley Park code-breaking site. We looked in detail at Bletchley’s wartime role in issue 272’s feature (on this month’s cover disc if you missed it), but its achievements in unlocking German ciphers were made possible partly by the mathematical genius of the people working there, and partly by the brute-force number-crunching provided by the first true computers.

The brilliant mathematician Alan Turing is acknowledged as the father of computer science, but he’s often wrongly credited with developing Colossus, the world’s first programmable electronic computer. In fact, Colossus was designed by Tommy Flowers and other Post Office research engineers to replace and improve on ‘Heath Robinson’, an earlier code-breaking machine used at Bletchley. Entering service in February 1944, the Colossus machines provided the calculating speed and power to rule out impossible Lorenz cipher settings – hugely speeding up the breaking of messages from the German high command.

ENIAC: huge numbers of valves made wartime computers massive, room-filling affairs

While the Colossi operated electronically – their only mechanical system was the tape reader through which encrypted messages were input – their construction would be unrecognisable next to a modern PC. Their huge size was due in part to the use of 2,400 big, hot and power-hungry thermionic valves for circuit switching. Valves were at the heart of other giant computers immediately after the war, with the American Army’s ENIAC ballistics computer having no fewer than 17,468 when it became operational in 1946.

Turing’s cryptographical genius was essential to the successes of Bletchley Park, but for many years after the war the site’s work remained a secret. In 1952 he was prosecuted for then-illegal homosexual acts and ‘treated’ with female hormones, before committing suicide in 1954. It wasn’t until the 1970s that Bletchley’s work, and Turing’s importance to it, became widely known. Gordon Brown’s official governmental apology for the way Turing was treated after the war didn’t come until 2009.


POST-WAR

The next three decades would see numerous inventions and innovations in electronics that set a pattern for computer technology which continues today: as the technology improves, computers become more complex, more affordable and more powerful, while their heat output and power consumption fall. Early milestones included the 1949 invention of random access memory (RAM) and the development of the trackball in 1952 for the Royal Canadian Navy’s DATAR project, but computers remained the preserve of governments, universities and large corporations that could afford the hardware and the expert staff needed to operate and maintain it.

The first trackball works very similarly to today’s examples, although it’s not as ergonomic

One of the most important breakthroughs came in 1947, when engineers at Bell Labs built the first working transistor – a semiconductor device that can perform the same functions as a valve. It took some years for the technology to be refined, but the first transistor computer appeared in 1953, heralding a second generation of more sophisticated machines, and the first fully transistorised computer followed in 1957. A second major innovation at the end of the 1950s, though, would play an equally important role in pushing computers towards the hands of the masses.

CIRCUIT TRAINING

While the earliest transistors were self-contained components, smaller than a valve but still challenging to build into a complex device, in the late 1950s an engineer at Texas Instruments, Jack Kilby, was working on ways to modularise them so that they could be assembled in grids. Kilby subsequently hit on the idea of building multiple components on a single piece of semiconductor substrate – the essence of the integrated circuit (IC). He built the first working IC from germanium, while in 1959 Robert Noyce independently built the first silicon example. Kilby’s discovery proved so revolutionary and important that by his death in 2005 he had received the Kyoto Prize and the Nobel Prize in Physics, and had been awarded no fewer than nine honorary doctorates.

By 1962 simple ICs containing just a few transistors were being manufactured in small numbers at high cost, and were used almost solely in ballistic guidance systems. However, growing demand helped reduce costs and improve manufacturing processes. Chips gained more and more transistors, prompting Gordon E. Moore – then at Fairchild Semiconductor and later a co-founder of Intel – to coin his famous Law in 1965. Originally Moore’s Law said that the number of transistors on a chip would double every year, although he later revised it to a doubling every two years – an estimate that has proved uncannily accurate. By the end of the 1960s, ICs were being mass-produced and the most advanced chips contained hundreds of transistors.
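
The arithmetic behind that estimate is simple compound doubling. As a rough sketch – the starting point of about four transistors per chip in 1962 is purely illustrative, not a quoted figure – the original yearly-doubling version of the Law squares neatly with chips reaching the hundreds of transistors by the end of the decade:

# A minimal sketch of Moore's doubling estimate. The 1962 starting count of
# roughly four transistors per chip is an illustrative assumption.
def projected_transistors(start_count, start_year, year, years_per_doubling=1):
    doublings = (year - start_year) / years_per_doubling
    return start_count * 2 ** doublings

# Doubling every year from ~4 transistors in 1962 gives hundreds by 1969...
print(projected_transistors(4, 1962, 1969))                        # 512
# ...while the later two-year version grows far more slowly.
print(projected_transistors(4, 1962, 1969, years_per_doubling=2))  # ~45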


One of the very first computers to use ICs was the Apollo Guidance Computer, introduced into NASA’s Apollo programme in 1966. Weighing more than 30 kilos, the electronic brain that first steered man to the moon had roughly 4kB of RAM and 72kB of ROM, and ran at just over 1MHz. It comprised 2,800 separate ICs, but by the beginning of the 1970s the first microprocessors had arrived – ICs that combined all the components needed for a computer’s central processing unit. While the costs were still considerable – Intel’s 4-bit 4004 cost thousands of dollars – building a computer was far cheaper than ever before.

In the early 1970s, the falling price and wider availability of ICs brought them within reach of electronics hobbyists, a small but significant group of people who used readily available components to build their own electronic devices such as calculators. Several magazines served the community, publishing projects that readers could undertake, discussing technological developments and, in some cases, helping to drive them forward. By 1974, Intel’s 8-bit 8008 microprocessor was affordable to hobbyists, and the July issue of Radio-Electronics magazine published a project to build the 8008-powered Mark 8, ‘your personal minicomputer’.

Computer magazines have come a long way since 1974

The computer was fairly daunting, and only around 100 of the specially produced circuit boards were sold, but the project inspired Popular Electronics magazine to take the idea further, commissioning Ed Roberts, the founder of Micro Instrumentation and Telemetry Systems (MITS), to design a computer in kit form that its readers could buy and build. MITS, established to supply rocketry and calculator kits, was heavily in debt; what followed not only rescued the company but laid the foundations of widespread personal computing.

It’s hard to overstate the impact of the Altair 8800 and the events of 1975 in the history of personal computing. Launched as a project in the January 1975 issue of Popular Electronics, the Altair was available from MITS for $397 in kit form, or $498 preassembled – equivalent to roughly £179 and £224 then, or £1,100 and £1,400 in today’s money. It had an 8-bit Intel 8080 processor and 256 bytes of memory and, optionally, came with a version of the Basic programming language. At a time when only a tiny proportion of society had ever been directly exposed to computers, here was one that people could go out and buy for themselves. Art Salsberg, the magazine’s editor, proclaimed in the accompanying editorial: “The home computer is here!”

EXPLOSION

While MITS had expected to sell 800 or so Altairs in total, they had taken 1,000 orders by the end of February 1975 and had delivered 2,500 computers by the end of May. MITS took on more employees and the Altair’s price went up. While the cheapest versions could be instructed in machine code, the true cost of a ‘Basic-speaking’ computer kit was nearly $1,900 (roughly £5,400 in today’s prices). Even so, in the context of the times the Altair 8800 was an incredible success in its own right, selling more than 10,000 units before MITS sold the design on. Its historical importance goes further. Its version of Basic was coded by Paul Allen and one William Henry Gates III (later known as Bill) and, though marketed as Altair BASIC, it was Microsoft’s founding product.

The influential Altair 8800 home computer

Bill Gates and Paul Allen had been friends since attending school together in Seattle, where Gates first learned to program in BASIC on a mainframe computer. The pair were later temporarily banned from another computer after they were caught exploiting bugs to get more time on the system; with two other students, they went on to offer to find and fix bugs in that system in return for more time on it. Gates’ colourful youth continued when, asked by his school to write a program to schedule students’ classes, he added code to make sure he was placed in classes with a disproportionate number of female students.


Microsoft isn’t the only company that can trace its history to the mid-1970s, though. At the start of 1975 there were two microcomputer manufacturers in the US, but by the end of the year this had risen to 27, accompanied by a burgeoning industry of software providers and expansion board manufacturers, two magazines, two computer stores and several computing clubs and groups. 1975 also saw the first integrated microcomputer, the Sphere, which contained the processor, keyboard and display in a single case and offered an optional floppy drive.

The following year saw the appearance of more and more companies and pioneering products – among them Apple’s first effort, the hand-built Apple I. Founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne, Apple was incorporated in 1977, but by then Wayne had already sold his share to Jobs and Wozniak for just $800 (equivalent to less than £3,000 today). In retrospect this doesn’t seem to have been the wisest decision: today Apple is among the world’s largest companies, with assets of more than $75 billion and profits in 2010 alone of $14 billion.

The explosion in companies and products would continue over the next few years into the 1980s, with new companies springing up and existing electronics companies such as Commodore switching to computer production. Commodore’s PET of 1977, with its integrated keyboard, ‘Datasette’ and display, sold alongside the similar Apple II and Tandy’s TRS-80, which, though less sophisticated, was widely distributed through the electronics chain’s stores.

In the UK, Clive Sinclair’s Science of Cambridge Ltd launched its first microcomputer kit, the MK14, for £40 in 1978. This was followed in February 1980 by the ZX80, which cost under £100 (roughly £320 today) in kit form, but which was also available pre-assembled. It went on to sell 50,000 units before its replacement a year later by the ZX81, which sold an astonishing 1.5 million units.

The ZX81 flew off the shelves back in 1981

While there were many buyers, the proliferation of incompatible systems was far from ideal. Each manufacturer had its own user base, running programs that were generally incompatible with other makes and models. This kept the personal computer community fragmented, and it gave developers a headache. Michael Shrayer, whose Electric Pencil became in 1976 the first word processor for home computers, reportedly compiled 78 versions to run on the different platforms, operating systems and display capabilities of the time. Even as the ZX81 was enjoying its enormous success, IBM announced a product that would refocus the market and become the bedrock of personal computing for at least the next 30 years: the IBM Personal Computer.


PC GONE MAD

Founded in 1911, IBM had been instrumental in the development and production of electronic computers since their earliest days, and in 1975 had produced its first desktop microcomputer – the IBM 5100. The 5100 was far too expensive for home buyers, however, and the company wanted to go head-to-head with Commodore, Apple and Atari in the growing home market, so it convened a special team to produce something more competitive.

IBM’s first desktop computer – the 5100

With permission to do things quickly and in new ways, the team made a series of decisions that not only kept costs and development time to a minimum, but also proved fundamental to the computer’s success. They used off-the-shelf components, including Intel’s 8088 processor, a pre-existing IBM monitor and an Epson printer, and settled on an open architecture – documenting the system and encouraging third parties to produce compatible expansion boards and software. This was in contrast to the proprietary approach of most rivals, and helped ensure that compatible products were available within weeks of the PC’s August 1981 introduction.

IBM’s engineers didn’t fully predict another consequence of their open approach. With the computer’s circuit schematics and other information available to developers, and with the processor and other key components not exclusive to IBM, the machine was easy to copy. By June 1982, Columbia Data Products had legally reverse-engineered IBM’s BIOS, produced its own version and begun selling an IBM PC clone – compatible with the same hardware and software as the original, but cheaper.

While this was bad news for IBM, which was powerless to stop a growing number of compatible systems competing with its own, it ultimately helped to establish the PC as the platform for the majority of home computers. It happened slowly, however. The home computer market was strong and diverse in the first years of the 1980s, and at more than $1,500 (roughly £2,200 in today’s money) the original PC was too expensive compared to rivals selling for less than half as much. Though designed as a home computer, it initially only sold well to businesses, with just 13,000 shipped by the end of 1981.

IBM continued to develop the PC, releasing the XT with a 10MB internal hard disk just 18 months later, and the more competitively priced PCjr in November 1983, but other factors would help to pave the way for the platform’s mainstream uptake. Commodore had bought the company that made the chips for its C64 and began an incredible price war that drew in almost all home computer makers. While it helped make the C64 the best-selling home computer model ever, it also destabilised many in the industry and helped precipitate a collapse. By 1984, Atari and Commodore were the only major survivors of the price war and both were in a parlous financial state. Users began to gravitate to IBM PC compatibles and Apple’s Macintosh. By the end of 1984 IBM had sold half a million PCs.


MODERN TIMES

Many PC enthusiasts will be familiar with the way home computing has developed since then. As PC uptake continued and manufacturers converged on a single, compatible platform, software vendors soon had access to a far wider and less complicated market. This was particularly true when it came to operating systems, the one piece of software that every computer needs. Just as it had with Basic for the Altair 8800, Microsoft got the contract to develop the operating system for the IBM PC. Although this was distributed as PC-DOS, Microsoft cunningly retained the right to market its own MS-DOS, which it could supply to makers of the growing number of IBM-compatible PCs.

An early photo of Microsoft founders Paul Allen and Bill Gates – courtesy of Microsoft

Microsoft has remained dominant ever since, but its operating systems, like the hardware they run on, have continued to evolve. While the PC’s essential architecture remains unchanged, with any modern example theoretically able to run any early program, its subsystems have improved almost beyond recognition. New devices such as optical drives and sound cards have appeared, while there have been several generations of data bus, disk interface and video card – each bringing faster speeds.


To date, Moore’s Law has held true. While cramming 2,300 transistors onto Intel’s first microprocessor was at the cutting edge in 1971, today’s six-core Core i7 processor has more than a billion transistors – more than half a million times as many. At the same time, better designs and materials mean that modern processors run at far higher clock speeds. Intel’s 4004 ran at a maximum 740kHz and the Apollo Guidance Computer managed just over 1MHz, but today’s desktops can exceed 3GHz – roughly three thousand times faster.
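
Those ratios are easy to sanity-check. In the rough sketch below, the 4004’s 2,300 transistors and 1971 debut come from the figures above, while the count of roughly 1.17 billion transistors is an assumption – the figure usually quoted for the six-core ‘Gulftown’ Core i7 of around 2010:

import math

# Back-of-envelope check of the ratios quoted above. The 4004 figures are from
# the text; the ~1.17 billion transistor count is an assumption, being the
# figure usually quoted for the six-core 'Gulftown' Core i7 of around 2010.
transistors_4004, year_4004 = 2_300, 1971
transistors_i7, year_i7 = 1_170_000_000, 2010

ratio = transistors_i7 / transistors_4004             # roughly 509,000x
doublings = math.log2(ratio)                          # roughly 19 doublings
print(f"{ratio:,.0f}x more transistors")
print(f"one doubling every {(year_i7 - year_4004) / doublings:.1f} years")

# Clock speed: a 3GHz desktop against the Apollo Guidance Computer's ~1MHz.
print(f"{3e9 / 1e6:,.0f}x faster")                    # 3,000x

One doubling roughly every two years is, of course, exactly the revised version of Moore’s Law.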

Improvements in hardware have enabled PCs to run anything from suites of office software through to graphics-rich games, but they’ve become more affordable in real terms too. At the same time, the public has become more computer-literate as computers have become more prevalent in our workplaces and schools. Cheap, compact processors have allowed digital technology to displace earlier standards in photography, music and other media, and our PCs help us edit, store and display the results.

Perhaps the most poignant illustration of the way in which massive computing power has become widely available came in 2007, after a working replica of a Mark II Colossus was completed at Bletchley Park. In a challenge to mark the occasion, enthusiasts were invited to compete against the mighty computer in a recreation of its wartime code-breaking role. German radio and computer enthusiast Joachim Schüth won the challenge, his 1.4GHz laptop decoding the Lorenz-encrypted message in just 46 seconds. The replica Colossus worked perfectly, but it took three and a quarter hours.


Such developments have helped make today’s PCs and Apple Macs truly mainstream objects: according to Office for National Statistics (ONS) figures, 75 per cent of all UK homes owned a computer by 2009. The internet has proven to be one of the most effective drivers of mainstream computer uptake. In 2009, 71 per cent of UK households had an internet connection, but home computers are no longer alone in being able to exploit it.

Games consoles, smart phones and other computerised devices increasingly support wireless networks and can access the internet: HTML was originally written to be platform-independent (see below), and more advanced web applications are made possible by frameworks such as Flash and Java, which provide a standardised environment for richer web apps. With these available for a range of devices, the browser, operating system and even the underlying hardware are quickly becoming less important to web users.

What this means is that, while full-sized computers have long faced competition from compact laptops, netbooks and most recently net tops, many of their most popular applications can now be tackled by a high-end smart phone. Larger alternatives such as the iPad and its Android-powered rivals may soon present a serious challenge to our current notion of the home computer. Just as advancing technology made the PC an indispensable tool for the modern home, such advances might ultimately make it obsolete. The future of personal computing might – literally – be in our hands.


MAKING A PACKET – THE DEVELOPMENT OF THE INTERNET

While the invention of transistors and integrated circuits and the move to mass production established the foundations of the personal computer revolution, it’s the internet that has truly unleashed the computer’s potential: a communication medium so powerful and desirable that it has helped push PC technology into everybody’s homes and beyond. But while mass internet use is a phenomenon of the last decade, the network’s foundations pre-date many of the technologies that made personal computing possible – including even the microprocessor.

ARPANET, the predecessor to the modern internet, as it was in 1973 when the UK got its first connection

The internet’s roots date back to the late 1960s and early work in the US on the Advanced Research Projects Agency Network (ARPANET), a computer communications network developed jointly by the Massachusetts Institute of Technology (MIT) and the Defense Advanced Research Projects Agency (DARPA). The goal was to find a way to share information between the users of various mainframe computers. Data transmission had previously relied on circuit switching, as in the established telephone network, where an electrical circuit is created between two parties for their exclusive use in exchanging information – but researchers had started considering something fundamentally different.

ARPANET was designed from the start around the concept of packet switching. Instead of information being sent over dedicated point-to-point connections, the network groups data into parcels that are electronically stamped with an address. Data packets are sent into the network and routed to their destination by nodes that read the address and forward the packet appropriately. The key advantage is that a single link can be used to send data concurrently to multiple recipients, with packets from several streams intermingled as necessary.
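
As a rough illustration – a toy sketch rather than a real network stack, with invented messages and host names – the short Python snippet below chops two messages into addressed packets, interleaves them over a single shared link and reassembles each one at its destination:

# Toy illustration of packet switching (not a real protocol implementation):
# two messages bound for different destinations are chopped into addressed,
# numbered packets, intermingled on one shared link, then reassembled from
# their headers at the receiving end.
from dataclasses import dataclass
from itertools import zip_longest
from collections import defaultdict

@dataclass
class Packet:
    dest: str      # the address stamped on each parcel of data
    seq: int       # sequence number, so the message can be reassembled in order
    payload: str   # a small chunk of the original message

def packetise(dest, message, size=4):
    return [Packet(dest, i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

# Two streams share one link: their packets are intermingled on the wire.
stream_a = packetise("host-a", "ATTACK AT DAWN")
stream_b = packetise("host-b", "HELLO WORLD")
shared_link = [p for pair in zip_longest(stream_a, stream_b) for p in pair if p]

# Each destination keeps only the packets addressed to it and reorders them.
inboxes = defaultdict(list)
for packet in shared_link:
    inboxes[packet.dest].append(packet)

for dest, packets in inboxes.items():
    message = "".join(p.payload for p in sorted(packets, key=lambda p: p.seq))
    print(dest, "received:", message)

In a real packet-switched network the forwarding happens hop by hop at each node, but the principle – stamp every parcel with an address, share the links, reassemble at the far end – is the one ARPANET pioneered.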

While the initial ARPANET linked just four nodes, it grew to 13 by the end of 1970 and continued to expand steadily. In 1973 the first UK node was added at University College London, and by 1981 there were 213 host computers worldwide. At the same time, the protocols and services used today began to emerge. The first use of email came in 1971, and the File Transfer Protocol (FTP) appeared in 1973. At the end of 1974 the term ‘internet’ was first used, as three Stanford University scientists published the specification of the Transmission Control Protocol (TCP). In 1983 ARPANET was converted to use TCP and the Internet Protocol (IP), which still carry the bulk of internet traffic today.

The next big step occurred in 1990. Tim Berners-Lee, an English research fellow at CERN, the particle physics laboratory near Geneva, began a project that would combine the internet and its Domain Name System (DNS) with the idea of hypertext. Working with the Belgian Robert Cailliau, Berners-Lee developed the world’s first web server, serving pages written in HyperText Markup Language (HTML) from Christmas Day that year. Incidentally, it ran on a workstation built by NeXT – a company founded by Steve Jobs after he was forced out of Apple in the mid-1980s.

While the web was originally used only within CERN, Berners-Lee publicised it in August 1991 and made his rudimentary server and browser software freely available for others to download. This decision was one of several that helped the web grow to its current ubiquity: it had been designed from the start to be platform-independent, suiting the variety of computers and operating systems at that time and since. Perhaps most significantly, in April 1993 Berners-Lee persuaded CERN to certify that the technology of the web was in the public domain – free for all to use.
