How the PlayStation-powered New Horizons probe flew to Pluto
As New Horizons explores the outer reaches of the Solar System, we reveal the technology that has made it all possible
By the time you read this, NASA’s New Horizons space probe will likely have just had its long-awaited ‘close encounter’ with Pluto, one of the most distant outposts of our Solar System. Its first pictures will have been splashed across the news: images it has travelled an epic 4.8 billion kilometres to capture.
This was the first mission to a new planet – or, more accurately, a dwarf planet, as Pluto is now classified – since Voyager 2 visited Neptune in 1989. The project is one of superlatives. It’s the furthest a spacecraft has ever flown to reach its primary target. It’s also the first mission to a double system, as Pluto and its moon Charon are collectively known: because Charon is more than half the diameter of Pluto itself, the two bodies orbit a shared point of balance, rather than the moon simply circling the planet.
Capturing photos billions of kilometres from Earth, of an astronomical object that’s little more than a point of light in the sky, is an awe-inspiring achievement. In all probability, though, this visual feast will be just the tip of the iceberg. As scientists analyse the volumes of data being returned by New Horizons, there’s every expectation that we’ll learn far more about this distant world than previous generations of astronomers could have dreamed of.
^ Work on New Horizons started in November 2001. Scientists from NASA, Johns Hopkins University and Southwest Research Institute are involved in the project
Our emphasis here isn’t primarily on astronomy, but on the technology that made all this possible. Here we explore the computer systems on board the spacecraft, designed to survive in one of the most hostile environments imaginable and soldier on if hardware faults threatened the success of the $700 million mission. We’ll also examine the imaging instruments – sophisticated digital cameras, if you like – that will furnish us with such amazing images, and we delve into the challenges of returning data from such vast distances.
Computer System
To say that, like all modern devices, New Horizons relies on computer systems to control all its functions may be stating the obvious, but this spacecraft isn’t exactly a modern device. New Horizons has taken nine years to reach Pluto, and it was on the drawing board for five years before it cleared the launch pad at Cape Canaveral in 2006. By computing standards that makes this incredible scientific achievement something of an antique.
At the time New Horizons was being designed, PCs were powered by 800MHz Pentium IIIs, but space scientists tend to be more conservative, choosing to go with something tried and tested instead of risking failure by opting for the latest high-performance chips. In addition, because radiation-hardened chips are required for operation onboard spacecraft, only processors that have been around long enough for a suitably engineered variant to be developed can be used.
^ The Mongoose-V is a radiation-hardened variant of the MIPS R3000 processor, most famously found in the original Sony PlayStation
In the case of New Horizons, the processor of choice was Synova’s Mongoose-V chip, a radiation-hardened version of the MIPS R3000 processor – a design first released in 1988 and clocked at just 12MHz. Despite this apparently lacklustre specification, though, Mongoose-V processors aren’t exactly cheap, costing from $23,000 each.
A MIPS-based processor may seem an odd choice in a world dominated by Intel and AMD chips on the desktop and ARM cores in smartphones and tablets, but MIPS was one of several processor architectures launched in the 1980s and early 1990s to power high-performance UNIX workstations and servers. Along with Sun’s SPARC, DEC’s Alpha, IBM’s POWER and PowerPC, and HP’s PA-RISC, these chips adhered to the RISC (Reduced Instruction Set Computer) design philosophy: the premise that a simple, streamlined design can run faster, even though it shifts more of the work onto the software.
At the time, these chips were much faster than processors based on the Intel x86 architecture, which was a CISC (Complex Instruction Set Computer) design. As the differences diminished over time, and Intel maintained a significant price advantage, a few of these architectures fell by the wayside, while others were repositioned. SPARC is still used in Fujitsu workstations and PowerPC can be found in some of IBM’s server and supercomputer lines, but it’s in embedded systems that you’re most likely to find MIPS.
^ When it launched in 1994, the original Sony PlayStation was powered by a MIPS R3000 processor
Embedded systems are no less fundamental to our computing infrastructure than the big-name Intel Core processors that take the limelight. MIPS designs may no longer boast the highest headline performance figures, but they consume far less power than more mainstream chips, an increasingly important consideration.
That efficiency doesn’t come at the expense of capability: modern designs offer a choice of 32- or 64-bit architecture, speeds of up to 2GHz and as many as six cores per chip. MIPS processors are found in networking equipment, storage devices and printers, as well as smart TVs, set-top boxes, home automation kit, in-car systems, industrial control and automation equipment, and wearables.
The first Sony PlayStation, launched in 1994, used the very MIPS R3000 on which New Horizons’ Mongoose-V chips are based. So the chip that took us to myriad alien worlds in games is now visiting real alien worlds out in space.
Doubling up
On top of this, New Horizons has two Integrated Electronics Modules (IEMs), each of which has two of these Mongoose-V processors – one for data handling and the other for guidance and control. Each IEM is equipped with a 64Gbit solid-state recorder, which can be thought of as a radiation-tolerant memory card or solid state disk.
The risk of radiation damage is just one of several threats which, if not properly managed, could have derailed New Horizons, wasting $700 million and shattering the dreams and expectations of its creators. We spoke to Chris Hersman, New Horizons mission systems engineer at Johns Hopkins University Applied Physics Laboratory, to learn how the computer systems were designed to survive a decade-long journey through space. He explained how a unique design philosophy addressed several of the most demanding challenges.
^ New Horizons was launched from Cape Canaveral in January 2006 aboard a Lockheed Martin Atlas V 551 rocket with a Boeing STAR 48B solid-propellant third stage
“Keeping it low power is essential since the whole spacecraft has only 200W to do everything,” he said. “This requires custom-built circuits. For example, the solid-state data recorder is built in a way that only those chips being written to or read from are powered up.” There are other advantages to this approach, too, as Hersman went on to explain. “Other benefits that this provides are that the chips last longer and they are less vulnerable to radiation.”
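Hersman doesn’t give details of the recorder’s circuitry, but the power-gating idea he describes – energise a memory chip only while it’s being accessed – can be sketched in a few lines of Python. The class and names here are invented for illustration, not taken from the flight design:

```python
# Hypothetical sketch of a power-gated recorder: each chip is switched
# on only for the duration of a read or write, so the average power
# draw stays low and unused chips sit unpowered (and less exposed).

class GatedRecorder:
    def __init__(self, num_chips):
        self.data = [None] * num_chips
        self.powered = set()          # chips currently energised

    def write(self, chip, value):
        self.powered.add(chip)        # power up just this chip
        self.data[chip] = value
        self.powered.discard(chip)    # power it straight back down

    def read(self, chip):
        self.powered.add(chip)
        value = self.data[chip]
        self.powered.discard(chip)
        return value

rec = GatedRecorder(16)
rec.write(3, b"telemetry")
assert rec.read(3) == b"telemetry"
assert rec.powered == set()           # nothing stays on between accesses
```

The same principle – pay for power only while a component is doing useful work – is what lets the whole spacecraft run on a 200W budget.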
But just what is the risk posed by radiation? Does it just cause circuits to malfunction or can it actually destroy them? “Both,” said Hersman, “unless properly managed.” Using memory as an example, he went on to describe how, even though New Horizons doesn’t suffer from as much radiation as spacecraft in Earth’s orbit or in the vicinity of Jupiter, the memory experiences an average of 10 ‘upsets’ per day. Because of this, the memory is designed both to detect and correct errors.
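The article doesn’t say which error-correcting code the memory uses, but the textbook approach to correcting a single flipped bit is a Hamming code. A minimal Hamming(7,4) sketch in Python shows how a daily ‘upset’ can be detected and reversed:

```python
# Hamming(7,4): 4 data bits are stored with 3 parity bits, so any
# single flipped bit (a radiation "upset") can be located and corrected.

def encode(d):
    """d: list of 4 data bits; returns 7-bit codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Returns the 4 data bits, correcting a single-bit error if present."""
    c = list(c)
    # Each parity check covers the 1-based positions whose binary
    # representation has that bit set; together they spell out the
    # position of any single error.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 means no error detected
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the damaged bit back
    return [c[2], c[4], c[5], c[6]]

word = encode([1, 0, 1, 1])
word[4] ^= 1                          # simulate a cosmic-ray upset
assert decode(word) == [1, 0, 1, 1]   # the data survives intact
```

Flight memories typically apply this kind of scheme across much wider words, and periodically ‘scrub’ the memory so single-bit errors are corrected before a second upset can land in the same word.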
Perhaps the most ambitious means by which New Horizons is designed to withstand this most hazardous of environments, though, is by duplication. Instead of a single computer system there are two, and this philosophy of redundant circuitry applies to many of the other key electronic systems, too. Hersman described how, in most cases, only one of the circuits can be powered up at once, and the command to switch between one circuit and the other either comes from ground control or is generated onboard as a result of abnormal conditions such as a high temperature in one of the circuits.
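The switchover rule Hersman outlines – run one redundant unit at a time and fail over when telemetry looks abnormal – can be sketched as follows. The unit names, threshold and structure are invented for illustration; the real fault-protection logic is far more elaborate:

```python
# Hypothetical sketch of redundant-unit switchover: one IEM is powered
# at a time, and an abnormal reading (here, over-temperature) triggers
# a swap to the backup. Threshold and names are illustrative only.

MAX_TEMP_C = 50.0                    # invented limit

class RedundantSystem:
    def __init__(self):
        self.units = ["IEM-A", "IEM-B"]
        self.active = 0              # index of the currently powered unit

    def check(self, temperature_c):
        """Examine the active unit's telemetry; fail over if abnormal."""
        if temperature_c > MAX_TEMP_C:
            self.active = 1 - self.active   # power down one, power up the other
        return self.units[self.active]

system = RedundantSystem()
assert system.check(35.0) == "IEM-A"   # nominal: stay on the primary
assert system.check(62.0) == "IEM-B"   # over-temperature: switch to backup
```

In flight, the same decision can also be taken on the ground, with commands taking hours to arrive, which is why the onboard rule has to be simple and conservative.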
Imaging Systems
Like most of NASA’s planetary probes, New Horizons carries a range of scientific instruments – seven, to be precise – several with strange-sounding names such as PEPSSI, described by the equally mysterious phrase ‘energetic particle spectrometer’. The two instruments that have probably done most to create public interest in the mission are LORRI and Ralph. The former is a long focal length panchromatic charge-coupled device (CCD) camera; the latter a multicolour imager/infrared imaging spectrometer. In plain English, they’re the digital cameras designed to bring Pluto and Charon to life. However, these instruments are markedly different from everyday digital cameras.
^ LORRI, the Long Range Reconnaissance Imager, is fitted into New Horizons
At first sight, New Horizons’ imaging instruments don’t look particularly special. With a resolution of 1,024×1,024 pixels, LORRI can be thought of as a one-megapixel camera, which compares unfavourably with the 16 megapixels provided by today’s cheap compact cameras. The high resolutions of modern cameras may be excessive for most uses (your monitor is only around two megapixels, after all), but even then we’re not comparing like with like.
This view is reinforced further when we look at Ralph. With its 5,024×32-pixel CCDs, one for each primary colour plus infrared, it has a resolution of just 0.16 megapixels. Even stranger, the image shape seems ridiculously long and thin, with an aspect ratio of 157:1. The fact is that, generally speaking, an exposure is not the same thing as a photograph. Instead, these imaging instruments often capture multiple exposures which are then stitched together as a mosaic to create a single high-resolution image.
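The article doesn’t describe the mosaicking pipeline, but the basic geometry – pasting each small exposure into a larger frame at its commanded offset – can be sketched like this. Real mosaicking also registers and blends overlapping frames; this purely illustrative version shows only the placement:

```python
# Illustrative sketch: stitching small exposures into one larger mosaic
# by pasting each frame at its (row, column) offset in the output grid.

def stitch(frames, positions, width, height):
    """frames: list of 2D pixel lists; positions: (row, col) per frame."""
    mosaic = [[0] * width for _ in range(height)]
    for frame, (r0, c0) in zip(frames, positions):
        for r, row in enumerate(frame):
            for c, pixel in enumerate(row):
                mosaic[r0 + r][c0 + c] = pixel
    return mosaic

# Two 2x2 exposures placed side by side form a single 2x4 mosaic.
a = [[1, 1], [1, 1]]
b = [[2, 2], [2, 2]]
m = stitch([a, b], [(0, 0), (0, 2)], width=4, height=2)
assert m == [[1, 1, 2, 2], [1, 1, 2, 2]]
```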
To understand more about this very different approach to digital photography, we put some questions to Dr Paul Jerram, head of engineering sensors at e2v, the British company that supplied the CCDs for New Horizons’ imaging instruments. He started by explaining the rationale behind Ralph’s long, thin sensor.
“Ralph operates in a scanning mode,” he said, “in a similar way to the sensor in a document scanner. These scanners generate the image on a line-by-line basis as the document passes under the sensor. However, in the case of Ralph, the sensor has 32 lines which are added together as the spacecraft scans the ground, thereby increasing the sensitivity by a factor of 32. The advantage of this is that very large multicolour images can be built up from relatively small sensors.”
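The line-summing readout Jerram describes is generally known as time delay integration (TDI): as the spacecraft’s motion carries a scene line across the sensor, its charge packet is shifted along in step with the image, picking up another exposure at each of the 32 stages. A simplified Python model of the accumulation (noise and charge-transfer losses are ignored):

```python
# Simplified model of a 32-stage TDI scan: each scene line's charge
# packet crosses all the stages, adding one exposure per stage, so the
# final signal is 32x what a single-line sensor would collect.

def tdi_scan(scene_lines, stages=32):
    """scene_lines: list of 1D pixel lists; returns accumulated lines."""
    output = []
    for line in scene_lines:
        packet = [0] * len(line)      # charge well tracking this line
        for _ in range(stages):       # one extra exposure per stage
            packet = [charge + pixel for charge, pixel in zip(packet, line)]
        output.append(packet)         # read out at the bottom edge
    return output

# A faint line of signal [3, 5, 7] emerges 32 times brighter.
assert tdi_scan([[3, 5, 7]]) == [[96, 160, 224]]
```

That 32x boost is exactly the sensitivity gain Jerram quotes, bought with sensor area rather than longer exposures, which matter when the scene is sweeping past.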
However, as Jerram went on to explain, the emphasis on small sensors isn’t only a matter of economics. “It is more that a scanning sensor is the most practical way to build up really large colour images with a high sensitivity array with large pixels. To produce a full-colour image with a staring array would need four sensors, each of which would be 5k x 5k (25 megapixels) or a single array of 100 megapixels. This would also greatly increase the size of the optics to an impractical degree.”
^ Preparing New Horizons for launch requires stringent ‘clean room’ conditions
At first sight, though, this still doesn’t tell the whole story: even though 100 megapixels is admittedly a large number, it’s only the equivalent of a handful of cheap point-and-shoot cameras. But these are no ordinary CCDs, as Jerram pointed out.
“Space is a harsh environment. The main challenges are surviving the launch process and the adverse environmental conditions that are encountered on a journey, such as radiation. This means that a large part of the project involves environmental testing to show that the sensor performance will not be damaged by shock, vibration, temperature extremes and the x-rays and proton radiation that are encountered outside the Earth’s atmosphere. So, while there is only a single sensor on board the spacecraft, many tens of sensors are produced and destructively tested to ensure that it will work as required and when required,” he said.