

MAY 14, 2015

What a Dinosaur's Mating Scream Sounds Like


BY RAFFI KHATCHADOURIAN

COURTESY HELLO GAMES

Two years ago, Sean Murray, a video-game developer from the town of Guildford, outside London, announced an ambitious game that he had been working on in secrecy with a small team: a fully explorable digital cosmos, called No Man's Sky (http://www.newyorker.com/magazine/2015/05/18/world-without-end-raffi-khatchadourian) (which I wrote about in the magazine this week). When the digital universe is complete, it will contain more than eighteen quintillion planets; it would take billions of years, Murray has calculated, for one player to explore them all. To create a representation of space that is so vast, Murray's studio, Hello Games, is using a technique called procedural generation: equations that draw upon random numbers to build naturalistic features, such as solar systems, planets, flora, and fauna.
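The technique is easiest to see in miniature. In the sketch below (illustrative only, not Hello Games' actual code; the function and field names are hypothetical), every property of a planet is derived deterministically from its coordinates and a universe seed, so the same place always regenerates identically and nothing needs to be stored:

```python
import hashlib
import random

def planet_at(x, y, z, universe_seed=42):
    """Derive a planet's traits deterministically from its coordinates.

    The coordinates always hash to the same seed, so the planet can be
    regenerated on demand rather than saved to disk.
    """
    key = f"{universe_seed}:{x},{y},{z}".encode()
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = random.Random(seed)
    return {
        "radius_km": rng.uniform(2_000, 12_000),
        "has_atmosphere": rng.random() < 0.6,
        "flora_density": rng.betavariate(2, 5),  # skewed toward sparse worlds
    }

# Revisiting the same coordinates reproduces the same planet.
assert planet_at(3, 1, 4) == planet_at(3, 1, 4)
```

This is why a universe of eighteen quintillion planets fits in a download: the game ships the equations, not the worlds.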
When I visited Hello Games earlier this year, I expected to meet people working to figure out how the universe would look, and how it would function. One thing I hadn't considered was the game's audio. But, it turns out, No Man's Sky poses a complicated question for its designers in this regard: How can you imbue this universe with naturalistic sound, from dinosaur grunts to whirring star-ship engines? The answer, initially, was unclear. Sound design in games, as in films, typically begins with the known; once Steven Spielberg decided that Jurassic Park would contain velociraptors and the Tyrannosaurus rex, his sound technicians knew they had to create plausible voicing for such creatures. But No Man's Sky's cosmos will be inhabited by countless imaginary organisms, based on random morphology, in environments shaped by chance. It will contain modular ships and architecture of unpredictable size and design. And everything in the game will be rendered only at the moment of encounter.

The job of solving the games sound problem, I learned, had fallen to Paul Weir, a composer and sound
designer who specializes in the creation of audio for films, games, banks, and large stores, like Harrods.
One morning at Hello Games, I climbed a narrow set of wooden stairs to the studios second floor, where
members of the audio team were working in a tiny trapezoidal room. Weir was seated at a computer
flanked by speakers on pedestals. He has a compact, wiry body, which he maneuvers with easy formality.
(He was the only person in the studio wearing a tie.) He maintains an affiliation with the London office
of Microsoft Studios, where he works as its audio director, and he is also well-versed in science fiction.
Among other projects, he has worked on a BBC radio drama based on The Hitchhiker's Guide to the Galaxy.
That morning, two programmers were with Weir, and a discussion was under way about how to fix
unanticipated reverb that was appearing in the game. Like many parts of No Man's Sky, Weir's system was evolving through cycles of disassembly and reassembly. Working on his own for months, Weir had been making adjustments to game sounds divorced from their graphical embodiments, but that week he was trying to stitch key elements of his system back into No Man's Sky's master build. When I walked in, he was testing audio for a spaceship. The sound was multi-dimensional; layered with noises thrumming at alternate pitches, and rich in overtones, it seemed to be the byproduct of a genuine mechanism: combustion at the intensity of high-energy rocket thrust. "I love harmonic complexity," Weir said. The source of the sound, which he had manipulated, was a hand dryer. "I always carry a recorder. For a lot of jobs, particularly for a game like this, I have this rule: the sounds have to be a hundred-per-cent original. We haven't sourced anything from sound libraries, but a lot of games would."
No Man's Sky will make use of a wide range of atmospheric sound. Fly past a cluster of stars in its 3-D galactic map, and you will hear a shimmering noise that gives the universe a bejewelled quality. The game will also contain a soundtrack by the band 65daysofstatic, recorded in an old church. Some of the band's music will be fragmented into micro-segments, which the game's algorithms will recombine into ambient soundscapes uniquely tailored to what players see.
Weir told me that No Man's Sky's biggest audio challenge was the creatures. Vocalizations for the dinosaurs in Jurassic Park were amalgams of field recordings, too: distorted utterances of whales or terriers. (The Tyrannosaurus rex's roar was from a baby elephant.) But Weir did not have the budget to make high-quality field recordings of exotic animals, nor did it make sense to do so, since there was no way to predict what individual animals in the game would look like, or do, or even what environment they would be in. Instead, he decided to bestow each creature with a unique digital vocal tract, to simulate sounds consistent with its appearance. Rather than working against the game's algorithmic chaos, he would embrace it. Weir knew of no other game that did such a thing, and he thought that a few programmers would be required to complete the coding on schedule. Then he reached out to a Scottish programmer and game designer named Sandy White. (In 1983, White's game Ant Attack (https://www.youtube.com/watch?v=nAZnXa1Eamk) was among the first to use 3-D graphics.) In two months, White wrote the necessary software.
White had flown down from Edinburgh for the week to work on the game, and he was seated next to
Weir. "The whole issue is: how do you synthesize creature vocals without them sounding synthetic?" he said. "Because the danger is that they will sound like Stephen Hawking."
Our brains are very adept at detecting patterns, and the reason synthetic voices typically sound artificial is that they are carried on sound waves that have a regular frequency: unvarying up-down-up-down modulations that are unmistakably inorganic. White suspected that if he built digital vocal cords (stimulated by columns of mathematically simulated air) the system would achieve naturalism. The first results were "a bit like the squeaker out of a dog toy," he said, which wasn't surprising: blow through the mouthpiece of a clarinet without the instrument, and the effect is similar. White then added a digital version of the pharynx, which sits behind the mouth and nasal cavity; it served as a resonator, amplifying sounds produced by the vocal cords, but also altering their texture. The squeaks became elongated. He called this the system's "trumpety-chicken-duck-whale-car-horn" phase. By the end of January, several weeks after he had started programming, he added a digital mouth, the final component necessary for a rudimentary virtual vocal tract. Then he set about giving his creation a voice.
Every vowel is defined by a narrow band of frequencies, known as a formant, which is created by the vocal tract as a whole: the way sound resonates throughout all its parts. White found a paper from 1962, titled "A Study of Formants of the Pure Vowels of British English." The paper, based on recordings of twenty-five male subjects, contained a table of the relevant data. Late one night, alone in his Edinburgh studio, he copied the values for a vowel labelled "/a/ hard" and plugged them into his system. The digital resonance that White had created, with vocal cords, pharynx, and mouth all affecting one another, caused the utterance to take on human character, and the result was a blood-curdling scream. The voice broke, twisted, and grew hoarse during moments of high intensity. White gave me an MP3 of it, and I later played it for two people without telling them what it was. Both thought it came from an animal; one wondered if it was a person being tortured, and the other wondered if it was a goat. White recalled, "Two o'clock in the morning, headsets in, and the thing went 'Aaaaahhhhh.' I was sweating because it was so scary. But I was also like, This is working!"
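White's simulation models the physics of a vocal tract; a far cruder cousin of the same idea is classic source-filter synthesis, in which a pulse train (standing in for the vocal cords) is shaped by resonant filters tuned to formant frequencies. The sketch below is illustrative, not White's code, and the formant values are rough textbook-style figures for an /a/-like vowel, not the 1962 paper's data:

```python
import numpy as np

def resonator(x, freq, bandwidth, fs):
    """Two-pole band-pass filter: a crude stand-in for one vocal-tract resonance."""
    r = np.exp(-np.pi * bandwidth / fs)
    a1, a2 = 2 * r * np.cos(2 * np.pi * freq / fs), -r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] + a1 * y[n - 1] + a2 * y[n - 2]
    return y

def vowel(formants, pitch=110.0, dur=0.5, fs=16000):
    """Source-filter synthesis: a glottal pulse train shaped by formant resonators."""
    n = int(dur * fs)
    source = np.zeros(n)
    source[:: int(fs / pitch)] = 1.0        # idealized glottal pulses
    out = sum(resonator(source, f, bw, fs) for f, bw in formants)
    return out / np.max(np.abs(out))        # normalize to [-1, 1]

# Illustrative formants for an /a/-like vowel: (center frequency Hz, bandwidth Hz).
signal = vowel([(700, 80), (1100, 90), (2450, 120)])
```

Because the pulse train is perfectly regular, this sketch sounds exactly as robotic as White feared; his breakthrough was coupling the stages so they perturb one another.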
From there, White set about making the system adaptable, so that it could be used for animals of myriad species and sizes. The week before I visited Guildford, he had designed a tool, using an off-the-shelf app on an iPad, to allow Weir to implement his software to generate the creature sounds, which would then be incorporated as code into the game. Sitting in his office, Weir held the iPad in his hand. The device had a hybrid user interface: part laptop trackpad; part MIDI board, the kind you might see in a sound studio, with many sliding levers; and part theremin. "A player would never see this," he said. Weir explained that he would use the iPad to perform vocalizations only for creature archetypes; then a set of algorithms would mutate each performance, to adapt it to the countless variations of that creature in the game. The process is analogous to the way the creatures are designed graphically: using tools like Photoshop, artists at Hello Games create archetypes that algorithms then transform.
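The archetype-plus-mutation scheme can be sketched in a few lines. This is a hypothetical illustration, not Hello Games' code; the parameter names merely echo the attributes Weir described. Seeding the perturbation on a creature's identity means a given animal always sounds the same without any per-creature data being stored:

```python
import random

# Hypothetical archetype: the hand-performed baseline parameters for one creature type.
ARCHETYPE = {"body_mass": 0.8, "windpipe_length": 0.6, "screechiness": 0.2}

def mutate_voice(archetype, creature_id, spread=0.15):
    """Deterministically perturb an archetype's vocal parameters for one creature.

    Each parameter is nudged by a random amount drawn from a generator seeded
    on the creature's ID, then clamped back into the valid [0, 1] range.
    """
    rng = random.Random(creature_id)
    return {k: min(1.0, max(0.0, v + rng.uniform(-spread, spread)))
            for k, v in archetype.items()}
```

One performed archetype thus fans out into countless stable variants, just as one Photoshop archetype fans out into countless creature bodies.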
Raising the iPad, Weir said, "It feels like an instrument." He offered to play it. Drawing his finger across the screen, he nudged the lever bars to indicate attributes like body mass, aggressiveness, windpipe length, wetness, screechiness, harshness. (The software makes sounds based on roughly a hundred different parameters.) Then, while moving his thumbs across two graphical boxes on the iPad (one labelled "vowel map," the other "pitch") and simultaneously twisting the device in space, he generated a vocalization. The iPad's physical movement determined the energy behind the utterance: the arc of the motion shaping the sound's arc.
Out came a tired, yawn-like rumbling, the deep grunt of a lumbering multi-ton herbivore. "I can change the size," Weir said. He tinkered with the iPad, and moved it differently, and the vocalization's over-all frequency became higher; the texture became rougher and wetter. After a few seconds, Weir gave it an upturn in pitch and intensity. The sound resembled the mating call of some tropical species.
"You see, you have to perform it," Weir said. He is collaborating with an engineering Ph.D. candidate to build bespoke hardware to replace the iPad; using this, he hoped, would give the vocalizations more fluidity and, thus, realism.
"We are trying to convey emotion," White said.

Weir adjusted the iPad again. "If I go way up, that's more like a bird sound," he said. Out came a long shrill bird call. White furrowed his brow. "That sounds slightly synthy," he said. "There is a vibrato in there that is very regular." "It's experimental." He had used the vibrato as a placeholder. It would later be replaced by Perlin noise, an algorithm often used in movies and in video games to create natural-seeming texture on computer-generated images.
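Perlin noise, and simpler relatives like value noise, produce modulation that is smooth but never periodic, which is exactly what a sine-wave vibrato lacks. A minimal one-dimensional value-noise sketch (an illustrative stand-in for true Perlin noise, not the game's implementation):

```python
import math
import random

def value_noise_1d(t, seed=0):
    """Smooth, non-repeating 1-D noise (value noise, a cousin of Perlin noise).

    Random values are fixed at integer lattice points and blended between them
    with a smoothstep curve, giving organic wander instead of a regular sine.
    """
    i = math.floor(t)

    def lattice(k):
        # Deterministic pseudo-random value in [-1, 1] for lattice point k.
        return random.Random(seed * 1_000_003 + k).uniform(-1.0, 1.0)

    f = t - i
    f = f * f * (3 - 2 * f)                       # smoothstep easing
    return lattice(i) * (1 - f) + lattice(i + 1) * f

# Irregular pitch contour: a base pitch wobbled by noise instead of a sine.
pitch = [440.0 * (1 + 0.05 * value_noise_1d(n * 0.01)) for n in range(1000)]
```

Plotted over time, the contour drifts like a living voice; a sine at the same depth would repeat identically every cycle, which is the regularity White's ear caught.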
Weir tinkered with various settings, and performed more sounds: the voices of an alien menagerie. There
was an ungulate experiencing sudden sharp pain; the vocalization descended from a screechy high pitch
to a lower, more guttural, exhalation. It sounded very real.
"That's it falling off a cliff," Weir said, laughing.
"I'd save that one," White said. "It's like a horse."
The scope of No Man's Sky, along with its openness, made it easy to get carried away. At one point, Weir suggested, "We could potentially have a situation where every tree in a forest could have a bird in it, which you would only hear if you are close enough to it. Then we would be automatically populating birds in forests, and spatially it would always sound correct." He became more excited, and White added, "They are fairly musical. Why couldn't they be musical, or at least in harmony?"
Since I visited Hello Games, Weir has integrated creature sounds into the master version of No Man's Sky. Above is a sample that he created for The New Yorker using the game's software.
Raffi Khatchadourian will participate in a Reddit Q. & A. about his piece on No Man's Sky today, Thursday, at 10 A.M.

Raffi Khatchadourian became a staff writer at The New Yorker in 2008.
