Developer commentary/Half-Life 2
This is a quote article. This article is a transcript of all of the quotes from a given character or entity. Unless noted otherwise, these transcripts are sourced directly from the official scripts, closed captions, or internal text data with only minimal modifications for typos and formatting purposes.
The following is a list of all of the developer commentary from Half-Life 2's commentary mode.
Commentary
Filename | Speaker | Subtitle |
---|---|---|
cn_001_physics_sound
|
Jay Stelly | Getting physics to feel right for the player meant that the sounds produced by physics had to sound right too. As humans, we're all used to how objects and materials sound during various interactions, even if it's not something we actively notice. While we knew we couldn't account for everything, we wanted to focus on the details that contributed most to making our physics sound realistic. We started with per-object sounds for collisions and impacts. This allowed an object, like a wooden crate, to have custom sounds when it collided with other objects or classes of objects. Next, we began dynamically modifying these sounds based on the characteristics of the physical event, such as the masses involved and the force of the collision. This worked great for collisions, but didn't hold up in other interactions, like objects scraping against each other. In reality, a wooden crate makes a different sound when it scrapes against a wall versus when it's thrown against that same wall. Let me tell you, nobody has done more research on the nuances of a wooden crate's physical properties than us. To get what we wanted out of these interactions, we implemented per-material collision sounds—metal against wood, metal against metal, and so on. These sounds were also dynamically modified based on details, like material hardness or type of metal. Getting this system to sound right required countless small tweaks, which we could only identify once it was running. We also experimented with custom sounds for rolling objects but ultimately cut the feature because we weren't happy with the results. |
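To illustrate the kind of system described above, here is a minimal, hypothetical Python sketch of choosing an impact sound per material pair and scaling its volume by collision energy. The material table, sound names, function name, and tuning constants are invented for the example and are not the engine's actual data.

```python
import math

# Hypothetical per-material-pair impact sounds; names and values are invented here.
IMPACT_SOUNDS = {
    ("wood", "wood"):     "physics/wood_impact_soft",
    ("wood", "concrete"): "physics/wood_impact_hard",
    ("metal", "wood"):    "physics/metal_impact_wood",
    ("metal", "metal"):   "physics/metal_impact_metal",
}

def impact_sound(mat_a, mat_b, mass_kg, rel_speed):
    """Pick a sound for a material pair and scale its volume by impact energy."""
    key = (mat_a, mat_b) if (mat_a, mat_b) in IMPACT_SOUNDS else (mat_b, mat_a)
    sound = IMPACT_SOUNDS.get(key, "physics/generic_impact")
    energy = 0.5 * mass_kg * rel_speed ** 2          # kinetic energy of the impact
    volume = min(1.0, math.log10(1.0 + energy) / 4)  # soft knee so big hits don't clip
    return sound, round(volume, 2)

# A light crate tapping a wall vs. the same crate thrown hard against it.
print(impact_sound("wood", "concrete", 10.0, 0.5))
print(impact_sound("wood", "concrete", 10.0, 8.0))
```

The logarithmic scaling here stands in for the "countless small tweaks" mentioned above; a plain linear mapping tends to make hard impacts clip and soft taps inaudible.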
cn_002_ramming_with_the_buggy
|
Jay Stelly | Our physics system is based on rigid bodies, which means it doesn't support any deformation or destruction of objects. If a physical object is hit by something, it can move if allowed, but otherwise, nothing happens. So, when you drive the buggy through a glass window, we have to employ some trickery to make things work as expected. Because the window is fixed in place, the physics simulation will actually bounce the buggy off the window, but it does notify our game system that a collision occurred. The game system handles destruction, so it breaks the window into glass shards and tells the physics system to start simulating them. Crucially, the game system also tells the physics system to restore the buggy's state to where it was before the collision. This ensures that the next time physics is simulating, the buggy will drive freely through the now destroyed window. This process does produce a one-frame hitch in the buggy's physics, as it briefly stops and is teleported back to the instant before the impact. We're sorry if knowing this has made Half-Life 2 totally unplayable. But after watching playtests at the time, we decided we actually liked the momentary hitch—it added to the sense of impact. |
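The window trick reads naturally as a save-and-restore wrapped around the physics step. The following toy sketch (1-D positions, invented names, not Source engine code) shows the shape of it: the physics step reports the blocked collision, the game breaks the glass, and the vehicle state is rolled back so the next step drives through.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class BodyState:
    position: float   # 1-D stand-in for the buggy's full transform and velocity
    velocity: float

def step(state, dt, blocked_at=None):
    """Integrate one frame; if a breakable pane sits in the way, report the hit."""
    new_pos = state.position + state.velocity * dt
    if blocked_at is not None and state.position < blocked_at <= new_pos:
        return replace(state, position=blocked_at, velocity=0.0), True   # physics "bounces"
    return replace(state, position=new_pos), False

# Game-side trickery: on the collision notification, break the glass and
# restore the buggy to its pre-collision state so the next frame drives through.
buggy = BodyState(position=0.0, velocity=10.0)
window_x = 0.45
saved = buggy
buggy, hit_breakable = step(buggy, dt=0.05, blocked_at=window_x)
if hit_breakable:
    print("spawn glass shards at", window_x)
    buggy = saved                     # the one-frame "hitch" mentioned above
buggy, _ = step(buggy, dt=0.05)       # window gone: drive freely through
print(buggy)
```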
cn_003_ramming_zombies_with_the_buggy
|
Jay Stelly | As human beings, we have expectations for our physical world, as mentioned a few times in this commentary. One of those expectations is if you hit a zombie with your car, it should get absolutely booted into the air. But getting this effect in game isn't as straightforward as it is in the real world. For performance reasons, enemies like the zombie use a simplified collision box during gameplay. This is a box that fits snugly around the zombie, with no part of the model sticking out. But when the zombie dies, it's converted into a ragdoll with more accurate, per-limb collision boxes, though this is more expensive to simulate. This change in collision representation adds further complexity when you ram a zombie with the buggy, beyond just the frame hitch. When the buggy first collides with the zombie, it touches the zombie's simplified collision box. The game then kills the zombie, converting it into a ragdoll that the physics system begins to simulate. Now, with more accurate per-limb collision, the closest part of the zombie may be several inches away from the buggy. As a result, the next time the physics system simulates, the zombie starts falling from gravity because the buggy is no longer pushing it. While it's only a short moment before the buggy reaches the zombie again, this small discrepancy made ramming zombies unsatisfying—they often fell out of the player's view. To fix this, we saved the force the zombie's collision box received when hit by the buggy, and applied that same force to the ragdoll after conversion. We also added a slight upward force to the ragdoll, launching it a bit. With these tweaks, zombies reliably folded over the front of the buggy when hit, which better matched player expectations. |
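A hedged sketch of the fix described above: the impulse received by the live NPC's simplified hull is saved and reapplied to the ragdoll, plus a small upward boost. The vectors, names, and boost size are illustrative only.

```python
# Illustrative only: the idea is just "carry the hull's impulse over to the ragdoll".

def to_ragdoll(hull_impulse, upward_boost=150.0):
    """Convert a live NPC to a ragdoll, reapplying the saved hull impulse."""
    x, y, z = hull_impulse
    ragdoll_impulse = (x, y, z + upward_boost)   # slight launch so it folds over the hood
    return ragdoll_impulse

# Impulse the zombie's simplified collision box received from the buggy.
saved_impulse = (900.0, 0.0, 50.0)
print("apply to ragdoll:", to_ragdoll(saved_impulse))
```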
cn_004_buggy_sound_engine
|
Kelly Bailey | The audio for the buggy was challenging because we wanted it to feel like a really powerful vehicle that's driving along at super high speed, even though the maps aren't really big enough to support that, and the buggy doesn't really leave first gear. And it has a top speed of maybe 35 mph. Now, in our defense, 35 mph is still pretty fast, especially back in 2004. In addition to the speed, the driving inputs are really simple. So with a keyboard as your target input device, the player's controls are basically binary—so the buggy is either accelerating or not accelerating, is turning fully in one direction or not. So essentially, we need to create all this audio complexity to represent a simulation where the complexity doesn't actually exist. So, we built a fake engine for the buggy's audio. It's got its own transmission, and we combine the player's inputs with data from the physics simulation. We measure the torque on each simulated wheel, whether it's spinning or making contact, and we create a set of looping sounds, and then carefully blend between them as the fake engine shifts gears and responds to the wheel data. And we just basically kept trying different things. We added more complexity. We did some fine-tuning until the buggy just feels more fun and sort of rewards you for that feeling of going fast. And most players, they don't comment on it, other than saying the buggy sounds good, and to us, that's the success, given the amount of craziness that was going on under the hood. |
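Here is a rough, speculative sketch of a purely cosmetic "fake engine": RPM is integrated from throttle input, a shift resets the rising tone, and wheel data blends in a strain loop. All constants are made up; the shipped system was certainly more elaborate.

```python
def fake_engine(rpm, gear, throttle, wheel_slip, dt,
                idle=800.0, redline=6000.0, gears=5):
    """Advance a purely cosmetic engine model one frame.

    Returns new (rpm, gear) plus a volume for a 'strained engine' loop that is
    blended in when the wheels are slipping. Constants are invented for the demo.
    """
    rpm += (3500.0 * throttle - 1200.0 * (1.0 - throttle)) * dt
    rpm = max(idle, min(redline, rpm))
    if rpm >= redline and gear < gears:          # "shift": restart the rising tone
        gear, rpm = gear + 1, idle + 1500.0
    strain_volume = min(1.0, wheel_slip)         # blend in a spinning-wheels loop
    return rpm, gear, strain_volume

rpm, gear = 800.0, 1
for _ in range(120):                              # two seconds at full throttle
    rpm, gear, strain = fake_engine(rpm, gear, throttle=1.0, wheel_slip=0.2, dt=1/60)
print(round(rpm), "rpm in gear", gear)
```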
cn_005_buggy_sound_design
|
Kelly Bailey | The sound design for the buggy was based on samples from an old 1968 Camaro. We chose that car because they have these amazing V8 engines with just huge pistons. And there's not really another sound like it, it has this sort of gutsy, visceral sound that we were looking for. The transmission also has this really wonderful mechanical whine when it stops accelerating. And that combination felt really responsive within this basically binary input system that we had to work with. So the fake engine transmission wasn't modeled after a real-world car. Instead, it was just designed as a simple system to reward the player for holding down the accelerator. So it's like a looping escalator that produces a rising tone, constantly building tension and shifting gears whenever the tone needs to restart. Since the player would quickly run out of space in our relatively small maps, the goal was just to make accelerating fun for a short period. |
cn_006_automating_dsp
|
Kelly Bailey | All the sounds the player hears are processed through our Digital Signal Processing system, or DSP. This just refers to how audio is altered to reflect the environment the player is in. For example, the same sound will differ when heard in a parking garage, a small room, outdoors. In Half-Life 1, we manually placed invisible nodes throughout the world to define these DSP settings for each space. For Half-Life 2, we knew that manual placement was just gonna be way too time-consuming, especially since we wanted a larger game and we still only had one audio person on the team. So, we built a system that automatically determines the DSP settings around the player in real-time. It does this by analyzing the space around the player, assessing the volume they're within, it examines the materials that make up the environment basically. Then the system matches that space to one of the 25 hand-authored basic DSP types, we adjust some parameters, and we create the fully detailed DSP that modifies the sounds the player hears. |
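The description above suggests a classify-by-probing approach. This toy sketch (invented presets, reflectivity table, and distance metric) shows one way probe results around the listener could be matched to the nearest hand-authored DSP preset.

```python
# Toy stand-in for "analyze the space around the player": a few probe distances
# and surface materials feed a room-size / liveness estimate, which is matched
# to the closest hand-authored preset. Presets and numbers are invented.

PRESETS = {                      # (room_size_m, liveness 0..1) -> preset name
    (3.0, 0.3):  "small_dead_room",
    (8.0, 0.7):  "parking_garage",
    (40.0, 0.1): "outdoors",
}

REFLECTIVITY = {"concrete": 0.9, "wood": 0.5, "dirt": 0.1}

def pick_dsp(probe_hits):
    """probe_hits: list of (distance_m, material) from rays cast around the player."""
    size = sum(d for d, _ in probe_hits) / len(probe_hits)
    liveness = sum(REFLECTIVITY.get(m, 0.3) for _, m in probe_hits) / len(probe_hits)
    return min(PRESETS, key=lambda k: (k[0] - size) ** 2 + 25 * (k[1] - liveness) ** 2)

hits = [(6.0, "concrete"), (7.0, "concrete"), (9.0, "concrete"), (10.0, "concrete")]
print(PRESETS[pick_dsp(hits)])   # -> parking_garage
```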
cn_007_dynamic_mixing
|
Kelly Bailey | During gameplay, many sounds are playing at the same time, but some are naturally more important than others and we need to set the volumes of the sounds relative to each other to ensure that the important ones stand out. In a movie, this sound mixing is done as a post-process. But in a game, it happens real-time, responding to the player's actions, so we can't predict ahead of time which sounds will play simultaneously. To handle this, all the sounds are grouped into types and tagged with priorities. During gameplay, the audio system monitors all the active sounds and lowers the volume of sounds in a group if a higher-priority sound is playing. This mixing is adjustable. If the gunshots need to be loud in one scene, we can make them higher priority there. If a specific line of dialogue needs to be heard clearly, it can then force all the other sounds to drop in volume. And we do the same thing with music. It's a subtle but very critical system, and it just prevents the soundscape from becoming too cluttered. |
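Priority-based ducking like this can be sketched in a few lines. The groups, priorities, and duck amount below are assumptions for illustration, not the game's actual mix data.

```python
# Invented group names and duck amounts; the point is "higher-priority active
# sounds pull the volume of lower-priority groups down".

PRIORITY = {"dialogue": 3, "gunfire": 2, "ambient": 1}
DUCK_PER_LEVEL = 0.35            # volume lost per priority level above a sound

def mixed_volumes(active_sounds):
    """active_sounds: list of (name, group). Returns name -> playback volume."""
    top = max((PRIORITY[g] for _, g in active_sounds), default=0)
    out = {}
    for name, group in active_sounds:
        levels_above = top - PRIORITY[group]
        out[name] = round(max(0.1, 1.0 - DUCK_PER_LEVEL * levels_above), 2)
    return out

print(mixed_volumes([("wind", "ambient"), ("ar2_burst", "gunfire")]))
print(mixed_volumes([("wind", "ambient"), ("ar2_burst", "gunfire"),
                     ("alyx_line", "dialogue")]))
```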
cn_008_soundscapes
|
Kelly Bailey | In Half-Life 1, ambient sounds were placed by hand and then individually tuned. For Half-Life 2, we wanted more ambient sounds in our environments, with higher quality and less manual effort for placement. So we built the system that we call the soundscape system. A soundscape is basically a small program that controls which ambient sounds play, where they come from, and then how they behave in a specific environment. So flies buzzing, wind blowing, birds, a distant train, distant combat. The soundscape system could then randomly select and play these sounds, and also really importantly, handle the smooth transition as the player moves through environments. Soundscapes can also contain other soundscapes, and that allows us to create reusable collections of sounds— so for instance, distant combat might be small arms fire, distant explosions, a passing helicopter. These soundscapes would dynamically assemble and play from many individual sounds, and that helps us avoid the artificial feel of a single looping track. Overall, the soundscape system was a real workhorse for audio in Half-Life 2 and pretty much responsible for every sound you hear that isn't directly driven by gameplay. |
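A soundscape as described is essentially data plus a periodic "think" that starts sounds. The following sketch uses an invented data layout to show random selection and nested, reusable soundscapes; it is not the engine's actual script format.

```python
import random

# Invented layout: a soundscape is a list of entries, each either a sound
# (with a chance of playing this "think") or a reference to another soundscape.

SOUNDSCAPES = {
    "coast.distant_combat": [
        {"sound": "weapons/distant_smg", "chance": 0.5},
        {"sound": "weapons/distant_explosion", "chance": 0.2},
    ],
    "coast.cliffside": [
        {"sound": "ambient/wind_gusts", "chance": 1.0},
        {"sound": "ambient/seagull", "chance": 0.3},
        {"subscape": "coast.distant_combat"},        # reusable nested collection
    ],
}

def think(name):
    """Return the sounds this soundscape chooses to start on one update."""
    started = []
    for entry in SOUNDSCAPES[name]:
        if "subscape" in entry:
            started += think(entry["subscape"])
        elif random.random() < entry["chance"]:
            started.append(entry["sound"])
    return started

random.seed(17)
print(think("coast.cliffside"))
```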
cn_009_audio_over_distance
|
Kelly Bailey | The large scale spaces in the Coast maps led us to seek an audio solution for how sound changes over distance. For instance, in these maps, an enemy can be well over 100 yards away, and their gun should sound different at that range than when it's up close. We decided to store our sounds in a single stereo file, with the near sound in the left channel and the far sound in the right channel. Then, during playback, we crossfade, or we change the proportion of sound you hear, between those two channels based on the player's distance from the sound. It was a simple solution that worked well, it helped us with memory constraints. You can easily hear it in action—just listen to how a Combine soldier's gunshots change as you drive towards them. |
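The near/far trick is a straightforward distance-based crossfade between the two channels of one file. A minimal sketch, with invented distances and a simple linear blend (a real mixer might prefer an equal-power curve):

```python
def near_far_gains(distance, near_dist=10.0, far_dist=100.0):
    """Crossfade between the 'near' and 'far' recordings of one stereo file.

    Below near_dist you hear only the near channel, beyond far_dist only the
    far channel, with a linear blend in between. Distances are illustrative.
    """
    t = (distance - near_dist) / (far_dist - near_dist)
    t = max(0.0, min(1.0, t))
    return round(1.0 - t, 2), round(t, 2)     # (near_gain, far_gain)

for d in (5, 30, 100):
    print(d, "m ->", near_far_gains(d))
```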
cn_010_flying_vehicle_audio
|
Kelly Bailey | The large flying enemies in the game present a unique audio challenge. During combat, they're much more mobile than other enemies, and they often move a significant distance away from the player. They're also really dangerous if they have line of sight. These factors led us to focus on an audio solution specifically around occlusion. In other words, when you can see the enemy and when the enemy can see you. We wanted to make a clear audio distinction between when they have that line of sight. The flying vehicle engine sounds are made up of multiple looping sounds that we play simultaneously. Some of those loops are processed by the DSP system based on the space you're in, while others are played directly and unprocessed. The DSP system adds more of an environment echoing effect when we run sound through that system. When the flying object is occluded behind something, and it can't see you, we increase the percentage of loops that go through the DSP processing, and it sounds like that object is a little more distant. Then, when that enemy emerges from behind an object, and it can see you, we play more of the direct sound. That overall just gives you a sense that the sound is either distant and directionless, or more directly in your line of sight. You can hear it when the helicopter emerges from behind a building, ideally you can really hear that difference. |
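One plausible reading of this is a routing decision: how many of the craft's loops go to the environmental DSP bus versus the dry bus, driven by line of sight. The split values and loop names below are invented.

```python
def route_engine_loops(has_line_of_sight, loops=("rotor_low", "rotor_high", "whine")):
    """Split a flying enemy's looping sounds between the dry and DSP-processed buses.

    When occluded, most loops go through the environmental DSP so the craft feels
    distant and directionless; with line of sight, most play direct. The 0.25/0.75
    split is a made-up tuning value.
    """
    dsp_share = 0.25 if has_line_of_sight else 0.75
    n_dsp = round(dsp_share * len(loops))
    return {"dsp": loops[:n_dsp], "direct": loops[n_dsp:]}

print("occluded:", route_engine_loops(False))
print("visible: ", route_engine_loops(True))
```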
cn_011_bullet_audio
|
Kelly Bailey | If there's one thing we're sure of, it's that a lot of bullets will fly past the player throughout the game, so we spent a bunch of time trying to perfect the audio treatment for it. Unfortunately, bullets in the game are essentially instantaneous, so we can't just attach a sound to them and rely on the movement for positioning. After some experimentation, we landed on this solution: By figuring out the closest point on the bullet's trajectory to the player's ear, we calculate the exact time and position the bullet would conceptually pass them. We then play two different sounds at that point to fake a Doppler effect—one for the bullet's approach and one for its departure. We delay the departure sound based on the calculated time the bullet passed by. |
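The geometry here amounts to projecting the listener onto the bullet's ray. A self-contained sketch, with made-up speeds and positions, of computing the pass-by delay used to schedule the departure sound:

```python
import math

def bullet_passby(start, direction, speed, listener):
    """Return (seconds until the bullet passes the listener, miss distance).

    Projects the listener onto the bullet's trajectory; the approach whiz plays
    immediately and the departure whiz is delayed by the returned time.
    Plain 3-D math, not engine code.
    """
    rel = [l - s for l, s in zip(listener, start)]
    length = math.sqrt(sum(d * d for d in direction))
    unit = [d / length for d in direction]
    along = max(0.0, sum(r * u for r, u in zip(rel, unit)))   # distance to closest point
    closest = [s + u * along for s, u in zip(start, unit)]
    miss = math.dist(closest, listener)
    return along / speed, miss

delay, miss = bullet_passby(start=(0, 0, 0), direction=(1, 0, 0),
                            speed=800.0, listener=(40.0, 1.5, 0.0))
print(f"play approach now, departure in {delay*1000:.0f} ms ({miss:.1f} m to the side)")
```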
cn_012_staircase
|
Aaron Seeler | There are situations throughout the game that stem from simple and quite straightforward design goals, and this staircase features two of them. The first was to frame the Citadel as it awakened, showing the player they've stirred up the hive. The second was to showcase our ragdoll system. While it's standard now, in 2004 it was still a novelty to see bodies tumble realistically, and stairs provided a perfect stage to make that happen. |
cn_013_alyx_and_mossman
|
Bill Fletcher | In our early plans for Half-Life 2's narrative, we wanted to take on the challenge of doing something less explicitly game-focused. Not just a story about the obviously good player defeating the obviously evil enemies. Instead, we aimed to focus on characters and their relationships. We thought it would be interesting to explore family as the central dynamic unit of Half-Life 2. The tension between Alyx and Mossman emerged from that idea, and we liked it because it felt relatable—something players could understand and empathize with. |
cn_014_exposition_board
|
Danika Rogers | We wanted to put this dense chunk of visual story here because it's the place the game is most explicit about telling players to explore a space. Even though its primary job is world building, we still take strides to highlight the relationship between Eli and Breen, giving context to the world's central conflict. |
cn_015_city_17_awakens
|
David Sawyer | The early train station section was designed to make the city feel indifferent toward the player, as if Gordon is just another nameless, faceless citizen, ripe for subjugation. But during the tenement raid and rooftop chase, the city identifies the player as an anomaly and starts to stir. Now, with the player as the aggressor, the city fully turns its attention on them. The Overwatch PA system, first heard during the raid, played a key role in delivering this message. If players listen closely, there's a lot of information about the city's awareness. Even when players didn't pay attention, though, the audio treatment still conveyed the emotional message we were aiming for. |
cn_016_elis_lab
|
Dhabih Eng | We built Eli's Lab after Dr. Kleiner's, which gave us a better understanding of how to set up and design effective choreo scenes. As a result, we were able to create a more complex scenario with more opportunities for it to respond to the player. Having multiple characters doing their own thing around the room made staging tricky, and it took a lot of iteration and playtesting to get it to run smoothly. Our experience showed that, generally, players wanted to play along with the scene—they just needed enough subtle clues to know where we wanted them. Similarly, if they got distracted, the scene had to gently pause and wait for them to return. |
cn_017_feel_free_to_look_around
|
Dhabih Eng | Balancing the player's desire to explore with the narrative goals of the scene was always a challenge. We wanted the room to be full of interesting things to look at, but many players wouldn't explore until they were sure they wouldn't miss any of the character performances. That's why it was important for Eli to tell the player to look around midway through the scene—it signaled that it was safe to explore. From there, we could trigger a variety of custom mini-scenes based on what caught the player's interest. |
cn_018_interstitial_scenes
|
Eric Kirchmer | As playtesting progressed, we often found areas where players lost track of their goal, which isn't uncommon in a game like Half-Life. To address this, we added smaller interstitial scenes to help guide players along their path and reinforce their objective. After finishing the bulk of development on our larger choreo scenes, like Kleiner's Lab, putting these smaller scenes together became relatively straightforward. We discovered that using characters in these moments worked much better than relying on passive storytelling or even some sort of reminder you'd find baked into a pause menu. |
cn_019_overwatch
|
Josh Weier | During development, we generated the Overwatch voice using a free text-to-speech tool. But as we neared shipping, we realized we didn't actually have the rights to use it. So, we held auditions with voice actors to find someone who could match the qualities we liked in the generated lines. It didn't take long before we found Ellen McLain, whose delivery was exactly what we were looking for. In her sessions, we played the generated lines for her to mimic, though in her own style. She was a bit sarcastic at first, given that she was being asked to replicate such a robotic delivery. But years later, when she returned for her iconic role as GLaDOS in *Portal*, she had found a way to creatively integrate that tone into her performance. |
cn_020_box_car_joe
|
Laura Dubuk | This small scene originally featured two human characters, but later in development, we replaced one with a Vortigaunt. We wanted to show that Vortigaunts were coexisting with humans and resisting the Combine as well. We also introduced their ability to heal the player, although we didn't fully capitalize on that feature throughout the game. Vortigaunts became a useful narrative tool for keeping dialogue brief, as their hive mind justified their full knowledge of the world and the player's goals. This meant they never had to ask questions, neatly sidestepping the challenge of writing dialogue for a silent protagonist. |
cn_021_mossman
|
Marc Laidlaw | Judith Mossman was originally based on a character we had planned for a section of Half-Life 1. That section was cut from the base game and reused in Uplink, but without the character. We consider her the most complex character in Half-Life 2. We knew we'd be walking a tightrope with how the player felt about her, even before Nova Prospekt, because the player's trusted sources don't agree—Eli likes her, but Alyx doesn't. We recognized there was a limit to how much complexity we could pull off in relationships, but we felt that a stepmother-daughter dynamic was something players would understand. Whether or not you agree with Mossman, she's doing something heroic. She risks being a double agent to get the information she needs to protect Eli and Alyx. She just didn't plan for Gordon. |
cn_022_hands_free
|
Matt Wright | At the time of Half-Life 2's release, most first person shooters gave the player a gun or a melee weapon right off the bat, but we opted to start them empty-handed. Without a weapon, it was easier to train the player on the more novel ways they're able to interact with the world and introduce them to our story. We wanted the player to transition from fugitive to aggressor, and arming them at the right time created that moment naturally. By the time they finally get a weapon, they're done running away. They say sometimes the best way to win is to walk away, but we all know that's not true. The best way to win is to run towards something while shooting at it. |
cn_023_metrocop_scene
|
Mike Dussault | This is another scene designed to make it clear that the Metrocops are the bad guys – except this time the player has a weapon and can finally do something about it. That 'something' comes out of the barrel of a gun. We knew we needed the player to feel righteous and this moment was part of creating that. The Metrocops are still human, unlike the rest of the Combine in the game, so we felt like we had to go the extra mile in making sure the player's actions felt like justice and not just wrath. |
cn_024_moving_the_gravity_gun
|
Ted Backman | When the gravity gun was first prototyped, we planned to give it to the player midway through the game. But as playtesting progressed, we saw the strength of its design and realized the gravity gun, and more importantly, the ability to launch saw blades at zombie faces, needed to come earlier. That meant Eli's Lab needed to move earlier too. It was a significant change, but it was crucial to Half-Life 2's success. It serves as a good reminder that storytelling and gameplay are intertwined, and neither can always take priority; they have to work together. |
cn_025_buggy_birds
|
Adrian Finol | In one memorable Coast playtest, a player returned to their vehicle after exploring a house and happened to see a bird flying past. They yelled at the bird, telling it to stay away from their buggy, complaining that birds often foul up their car in real life. In our post-playtest discussion, we realized this was the first time we'd had objects separate from the player that the player actually cared about. We saw it as an opportunity to explore, so we built a system to deliver the moment that playtester feared. When the player leaves their buggy alone for a while and is out of sight, we spawn a seagull at a random location on the vehicle frame. As time goes on, the seagull spawns poop decals on the car body beneath it and continues until the player returns, at which point it flies away. Naturally, playtesters wanted to be able to clean their vehicle, so we added the ability to wash the car by driving it into shallow water. It was late enough in development that we already had the tech for all of this, so the feature was quick to implement. It's one of those cases where we know only a small percentage of players will discover it, but if we can implement these kinds of details efficiently and have enough of them, most players will encounter at least one. |
cn_026_bridge_gunship
|
Adrian Finol | We were particularly happy with the gunship fight, despite it being tricky to set up given the level geometry. Being on the underside of the bridge allowed the gunship to move fully around the player, using all three dimensions to position itself above and below at different moments. The girders provided plenty of high-frequency cover, creating opportunities for the gunship's fire to produce a fun and dynamic visual and audio display. |
cn_027_floor_is_lava
|
Adrian Finol | Building gameplay around broadly understood concepts like 'the floor is lava' made training it easy. We liked how our physics system allowed for a high degree of creative freedom in how players chose to tackle it. Some built walkways out of many objects, while others focused on using just their two favorite sheets of metal. Some carefully followed the path up in the rocks, concerned about falling, while others moved directly towards the exit, perched on objects just above the sand. Playstyles varied—some players considered it a failure if they summoned antlions, reloading immediately, while others enjoyed scrambling back onto the rocks to recover. We also liked that it functioned as a navigation puzzle but could be turned into an arena if players preferred; some just jumped onto the sand and fought their way to the end. |
cn_028_opportunistic_moments
|
Ariel Diaz | Some memorable moments in the game come from the interactions of multiple game systems, but sometimes we realize these moments are likely to happen only rarely. When that's the case, we often wish more players would experience them. There's a tension here: the more we know players will encounter something, the more effort we can put into it. But if it feels too artificial, the moment can come across as forced, and players may not respond as positively. The gunship's cinematic death is an example where we nudged things a bit, trying to increase the chances of a memorable moment without being too obvious. The design of this area and the gunship's flight paths make it likely that the player and the gunship are in the right spots when it dies. If they are, we trigger the gunship's crash into the cars. If not, it dies in the usual way. It's totally 'gamey,' but playtesters didn't seem to mind when the moment played out well. |
cn_029_assault_on_nova_prospekt
|
Ariel Diaz | With an infinitely respawning antlion army on a coastline assaulting a fortified Combine base, it was hard not to try a Normandy-inspired level. Our initial prototype featured trenches leading up to Nova Prospekt, with lethal machine guns forcing the player and their antlions to stay within the trenches to survive. It was thrilling to have machine gun fire and leaping antlions constantly passing above the trenches. Unfortunately, we couldn't make it work within our performance budget, and ultimately had to set it aside in favor of a completely different layout. However, the mounted guns remained, as their lethality proved useful in encouraging players to rely on their antlions. |
cn_030_buggy_physics
|
Charlie Brown | The physics for the buggy took a lot of trial and error. We wanted it to feel fast, even though its top speed was only about 35 mph, so we had to do some cheating to make it work. We added extra vertical force to the turbo jump and a bit of kick to the rear when using the handbrake to powerslide. Without those, the low speed meant the buggy just wasn't fun to drive. On top of that, the world was full of unpredictable, physics-systems-driven chaos—players were bound to drive the buggy into all kinds of collisions, so we had to make sure they could always get out. This took a lot of experimenting and playtesting. In the end, we settled on a pretty goofy, unrealistic simulation that made the buggy more forgiving and harder to flip. Its center of gravity is super low, and the wheels, while they look like they turn a lot, actually don't. It was our first attempt at a physically simulated vehicle, and we learned a ton from it. Thanks to that, we were able to make the muscle car in Episode 2 much easier to drive. |
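Read literally, the "cheating" amounts to a few extra forces layered on top of the real simulation. The sketch below is purely illustrative; the numbers and the center-of-mass offset are guesses, not the shipped tuning.

```python
# Invented numbers: the point is that "feel" came from added forces, not realism.

def buggy_assists(velocity_z, yaw_rate, turbo_pressed, handbrake_pressed):
    """Apply the hand-tuned cheats layered on top of the real simulation."""
    if turbo_pressed:
        velocity_z += 4.0        # extra vertical shove so turbo jumps feel big
    if handbrake_pressed:
        yaw_rate += 1.2          # kick the rear out to start a powerslide
    return velocity_z, yaw_rate

CENTER_OF_MASS_OFFSET = -0.6     # meters below the visual center: harder to flip

print(buggy_assists(velocity_z=0.0, yaw_rate=0.1,
                    turbo_pressed=True, handbrake_pressed=True))
```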
cn_031_the_bridge
|
Charlie Brown | This bridge section was inspired by the Deception Pass Bridge, just up the road from Valve's office. We initially built the section as a pure navigation puzzle, gradually layering more gameplay into it. It also became the primary test case for our player's edge friction. In our movement system, we subtly increase friction on the player's deceleration as they approach the edge of a surface. Players have never noticed it—likely because it aligns naturally with their goal of stopping at the edge to look down, matching the human tendency to move more slowly and carefully near a drop. But it's a critical feature for this part of the game. We know this because at one point during development, we accidentally broke it, and suddenly all of our playtesters started repeatedly falling to their deaths. |
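Edge friction can be pictured as a ground probe just ahead of the player that, when it finds no floor, multiplies deceleration friction. A small hypothetical sketch; the multiplier and probe logic are assumptions, not the shipped movement code.

```python
def ground_friction(base_friction, walking_speed, probe_hit_ground, edge_multiplier=2.5):
    """Boost friction while decelerating near a ledge.

    probe_hit_ground is the result of a short downward trace just ahead of the
    player's feet; if it found nothing, the player is at an edge and we slow
    them harder. The multiplier is a guess, not the shipped value.
    """
    at_edge = not probe_hit_ground
    return base_friction * (edge_multiplier if at_edge and walking_speed > 0 else 1.0)

print("open floor :", ground_friction(4.0, walking_speed=2.0, probe_hit_ground=True))
print("near a drop:", ground_friction(4.0, walking_speed=2.0, probe_hit_ground=False))
```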
cn_032_coast
|
Chris Green | We wanted a wide-open section of the game for the buggy, but we still needed a way to constrain the player's movement. Choosing a coastline gave us a natural boundary along one side of the map, while still feeling open and expansive. It also helped visualize the effects of the Combine's extraction of Earth's resources. Playtesters still tried to swim out to sea, so we added leeches to deter them. The leeches couldn't be a real, avoidable enemy that players felt they could fight, so they're a bit magical. Players generally understood they were pushing the game's boundaries, so we didn't overdevelop them. |
cn_033_grenade_house
|
Dave Riller | A lot of a Half-Life game is about giving players slightly novel combat experiences around each corner, even if they're just small tweaks on previous ones. They don't all have to be skill-based challenges; sometimes they can just be fun moments where the player gets to enjoy themselves. This grenade house is one of those moments, with the twist being that we've inverted the usual setup. Instead of letting the player drop grenades on enemies below, we've put the enemies above. The unlimited grenade box is a somewhat clumsy way to encourage players to engage with the scenario, but its invulnerability also ensures the player can still climb up to the floor above, even if they've blown up everything else. |
cn_034_buggy_training_course
|
David Sawyer | This training section was designed to let the player play around with their buggy. At the time Half-Life 2 was released, this level of physical simulation in a vehicle was novel, so we wanted to give players ample opportunity to experiment with it. The player has also just left the tight confines of Ravenholm, and we liked giving them the contrasting reward of being able to move quickly through a large space, feeling safe inside a vehicle. We made sure to require a turbo jump to get out of this area, which doubled as a test to make sure they'd reached a level of comfort in their driving, and also ensured that they didn't leave their buggy behind. |
cn_035_ambush
|
David Sawyer | One of the general challenges of a Half-Life style FPS is that the player is constantly moving forward into spaces they don't know. This means that when they find themselves in combat, they're in unfamiliar areas with no knowledge of how to approach the fight. We use many different methods to help the player with this problem, and this ambush is one example. Upon arrival, the player fights a group of Combine among these houses. During the fight and the exploration afterward, the player has the chance to learn the arena layout. Then, when the player explores up into this attic, we spawn a second wave of Combine. In that second fight, the player now has the advantage of knowing the layout, allowing them to make more tactical decisions in defending against the ambushers. |
cn_036_bugbait_training
|
David Sawyer | This clumsy bugbait training is where we landed, having run out of time but needing players to understand the tool for Nova Prospekt's design to work. Generally, we aim to add enough flavor and context to training so it doesn't feel like training—it feels like a natural part of discovering something new. Here though, we didn't quite manage that. As obvious as it may seem, bugbait wasn't something players intuitively grasped. We're well into the game now, so players have plenty of experience dealing with enemies. Before we added any training, the first playtesters who collected bugbait entered the next level and instinctively shot the first antlion they saw—because that had been working up until then. Half-Life tends to encourage a 'shoot first, ask questions later' approach. Similarly, when players reached Nova Prospekt, they didn't think to throw bugbait at Combine soldiers and instead relied on their usual weapons. So, we needed this training not only to show how bugbait works but also to demonstrate that antlions would be close to the player without attacking. |
cn_037_vortigaunt_extraction
|
Dhabih Eng | This arena presented some tricky design problems for a couple of reasons. First, the Guard is a melee-only enemy, so we couldn't allow the player to reach areas where the Guard couldn't follow. We did experiment with a few ranged attacks, but didn't like the results—its goo-spitting attack, for instance, ended up looking ridiculous. The only non-melee attack that survived was its ability to bash physics objects at the player, but this requires objects to be in range, which isn't always guaranteed. The second challenge was ensuring that, no matter where the Guard dies, the Vortigaunt can reach its corpse. This was essential, as the game can't proceed if the Vortigaunt is unable to extract the bugbait. |
cn_038_dropship_deployment
|
Doug Wood | Deploying soldiers from the dropships was tricky, as it required NPCs to spawn inside another solid object. To get them into the open where they could start navigating normally, they immediately play an animation that has them leaping out of the container the moment they spawn. Once that was working, we encountered another issue—savvy players would simply just wait outside the container and shoot the soldiers during their exit animation. To counter this, we added a turret to the container to defend the soldiers as they exit. It doesn't fully prevent a player from taking advantage, especially if they're using an indirect weapon like a grenade or something, but that felt realistic and like a true-to-life counter-measure someone thinking strategically might take. |
cn_039_laser_guided_rpg
|
Eric Smith | The laser guidance feature of the RPG was a huge headache for us because it was difficult to teach. Many playtesters would fire the RPG and then immediately duck behind cover, which meant they didn't see the rocket's flight path. Even worse, they didn't realize the rocket was now turning around to fly back at them. We tried various HUD elements and laser dot indicators to help, but found that most players overlooked them, focusing on the enemy instead. We made as many small improvements as we could, like emphasizing the feature in Odessa's speech and enhancing the audio and visuals around how the gunship shoots down direct RPG shots. While these changes helped some players grasp the feature, we never reached a point where we felt confident that almost everyone understood it. We even considered cutting the feature entirely, but gunship combat without it was much less interesting. The laser guidance forced the player to keep the gunship in sight while the rocket was in flight, but that also meant the gunship had line of sight back, creating a tension we loved. |
cn_040_performance_constraints
|
Eric Smith | For the final Coast arena, we envisioned a big finale where the player and a squad of citizens would hold out against waves of Combine soldiers, culminating in a gunship battle. Unfortunately, the computational cost of having soldiers and friendly allies moving around on terrain meant that the battle kept being scaled back to fit within our performance budget. By the time we were down to featuring only a single squad of soldiers at any given time, we were already late in development. Too late to make large-scale changes to the arena to accommodate more enemies, we had to redesign the flow of the fight to keep the number of soldiers to a minimum. |
cn_041_lighthouse_gunship
|
Ido Magal | The gunship scenario here took some work to play out the way we envisioned it—we wanted it to unfold as it would in a movie. In the filmic version, a gunship would fire through the windows as the protagonist ascended the lighthouse, anticipating their movements. To simulate that anticipation in the game, we set up a complex series of trigger volumes along the staircase. As they detect the player moving up, they instruct the gunship to choose a new flight path that matches the player's altitude. The triggers also enable bullseye targets in the windows nearest the player -- these are invisible game objects that the enemy AI will immediately perceive as hostile -- and prompt the gunship to fire at the windows. This blend of AI and scripting is central to Half-Life's combat scenarios. The AI manages the general, adaptive behavior of the NPCs, while the scripting customizes each scenario's unique requirements. |
cn_042_crane
|
Jakob Jungels | The Crane took a lot of fine tuning to get right. Something that needed to generate so much physical force was always risky, given the limitations of our rigid body physics and destruction technology. For example, the magnet was never going to fully meet player expectations for something with that much mass. But because the Crane was stationary, we could carefully choose everything within its range and fine-tune as needed. Fortunately, most players didn't seem to notice its rougher edges, likely because of the sheer novelty of interacting with something like it in a vaguely realistic physics environment. |
cn_043_flashlight_power
|
Jeff Lane | The decision to have both sprint and the flashlight share the HEV suit's power was a controversial one in Half-Life 2. Initially, only sprint used suit power, which made sense as it was meant to be only a short burst of speed. The flashlight, on the other hand, could be used indefinitely. But playtesting revealed an issue—players often left dark areas without realizing their flashlight was still on. In brightly lit, open areas like Coast, our performance budget couldn't support the flashlight being on continuously, so we needed a way to turn it off without player involvement. Limiting its power seemed like a simple solution, with the benefit of adding some tension to dark areas too. The challenge then became: what resource should it use? We already had suit power, and didn't want to clutter the UI with a new resource just for the flashlight. We toyed with the idea of other player abilities using suit power, thinking it might be interesting to have players make tradeoffs between them. This is a good example of game design often being about choosing between imperfect solutions. We didn't love the idea of sprint and the flashlight sharing a resource, but we also didn't want multiple resources cluttering the UI. In Episode 2, with no plans to add anything else that consumed suit power, we ultimately decided to give the flashlight its own separate resource. |
cn_044_open_world_loot
|
Jeff Lane | The openness of Coast introduced a new problem for us in how we distributed item rewards. In a Half-Life game, we typically reward players for exploring. If you see the way forward but notice something off to the side, there's probably something useful if you take the time to search. When we first started playtesting our Coast maps, we tucked rewards away in the corners of the outdoor areas. However, this led to a worse experience with the buggy. The wide-open spaces meant there were always lots of small areas that looked like they might contain something, and players kept stopping and getting out of the buggy to search. It became tedious and interrupted the fun of the level. So, we removed those scattered rewards. But this raised a new problem: how do we tell the player when it's worth getting out of the buggy? After some thought, we established a rule: any kind of human construction would always come with gameplay content and exploration rewards, while purely natural areas would be left empty. We couldn't tell players this directly, but it aligned well with their natural assumptions, and we reinforced it consistently with areas like this. |
cn_045_gas_station
|
Jeff Lane | We had grand plans for this encounter. We envisioned building a gas station that the player would fight around, likely igniting the gas tanks and setting fire to everything. We imagined the player driving away from a raging fire. But budget and performance were harsh constraints in 2003, and the station had to keep getting smaller and simpler to fit into Coast's large levels. Our plans for a dynamic fire system didn't materialize, and building it no longer made sense—there's always a limit to how much work we can justify for a single encounter. In the end, the only remnants of our original vision are the explosive gas tanks. |
cn_046_buggy_introduction
|
John Cook | We wanted something more engaging for the buggy's introduction than just having the player hop in and drive away. The crane helped with that. Crucially, we needed players to learn how to recover the buggy when it flips over, and this sequence forced them into that situation. Training under pressure is always tricky, but we wanted to emphasize the antlion infestation as the reason to stay inside the buggy. So, we kept the antlion threat active and provided a couple of citizens on mounted guns for support. A small bonus: players also get the experience of being lifted by the crane—just before they'll get to operate one. |
cn_047_crossbow
|
John Cook | This is the player's first opportunity to get the crossbow in the game, and looking back, it's somewhat surprising that we didn't gate it. It's entirely possible for the player to drive right past it. To make that less likely, we set up the roadblock and vantage point to encourage players to get out of the buggy, at which point they'd spot the crossbow and item crate. Playtesting showed that almost every player stopped and collected the weapon, but this is a case where the difference between playtesting and real play might matter. A playtester who's been playing since the start of Coast may not be as fatigued with exploration as someone who's been playing from the very beginning of the game. So, we placed another crossbow a few levels ahead, directly on the player's path in Nova Prospekt. |
cn_048_rollermines
|
Josh Weier | Rollermines were created to fill the need for an enemy that could threaten players while driving the buggy. Since the player in the buggy was essentially a fully physically simulated entity, we thought it would be interesting to also have a physically simulated enemy. The manhack had already shown us how this approach could lead to an enemy with lots of interesting interactions with the world. So we built the rollermine—a ball whose movement is entirely driven by generating torque on itself—and iterated on its behavior until it was fun to drive against. Rollermines aren't particularly dangerous to the buggy, but if you let them stack up, they can become a problem, especially near a cliffside. We also made sure they worked well when the player is on foot, allowing level designers to get a few extra encounters out of them. |
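A ball that moves only by "generating torque on itself" steers by spinning about the axis perpendicular to where it wants to go. A minimal sketch of that steering math, with an invented torque limit and no claim to match the actual NPC code:

```python
import math

def rollermine_torque(pos, target, max_torque=40.0):
    """Steer a ball that can only push itself by spinning.

    To roll toward the target in the XY plane, spin about the horizontal axis
    perpendicular to the desired direction (right-hand rule). Values invented.
    """
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:
        return (0.0, 0.0, 0.0)
    dirx, diry = dx / dist, dy / dist
    # torque axis = up x direction -> (-diry, dirx, 0), scaled by available torque
    return (-diry * max_torque, dirx * max_torque, 0.0)

print(rollermine_torque(pos=(0.0, 0.0), target=(10.0, 0.0)))   # roll along +X
```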
cn_049_sandtrap_thumper
|
Laura Dubuk | The explosive barrels are provided for players to use against antlions while standing safely within the Thumper's protection. But we're also using this as a hint for the upcoming Antlion Guard encounter, where players often use similar tactics to win the fight. Optional player actions, like launching explosives with the gravity gun, can be forgotten if they aren't used regularly. With the wide range of offensive options we give players, there's no guarantee they've used this ability recently. So here, we're subtly suggesting they bring it back into their toolkit before they face the Guard. |
cn_050_coast_kickoff
|
Marc Laidlaw | This interstitial scene came along late in development to connect Ravenholm to Coast. Noticing a trend? Quite a bit of the connective tissue in our games comes together late. In this case, that meant cutting corners wherever possible—we didn't even have time to animate Alyx on the monitor. These kinds of scenes are always a challenge to make interesting, and we didn't succeed with this one. It's a good example of what happens when NPCs are treated like signposts, without adding character to them or the scene. The result is a forgettable moment where the game's seams show. But that's the nature of development—you only have so much time, and you have to decide where to spend it. Kleiner's Lab and Eli's Lab were much more important, so that's where we focused our efforts. |
cn_051_odessa_cubbage
|
Marc Laidlaw | Odessa Cubbage is an example of the kind of cameo character we found works well. He's an entertaining guy with some entertaining details, living inside his own little bubble. Since he doesn't leave his scenario, he's free to be exaggerated enough to stand out. We started with his model, which was repurposed from a cut section of the game, and named him after the title of a spam email we received. The model was originally a Norwegian fisherman, and it didn't quite fit as an Englishman, so we slapped a mustache on him—and in that moment, Odessa's character was born. He's ultimately a charlatan, with everything about him being fake—his accent, his mustache, everything. He would never put himself in danger, making him the perfect person to hand Gordon the RPG and explain exactly how to use it. |
cn_052_28_transitions_later
|
Marc Laidlaw | After stringing together a few levels with dark tunnels as transition spaces, we realized we could subvert player expectations by building another one that appeared to be a transition, but instead contained something else. Like everyone else, we saw 28 Days Later in 2002, loved it, and immediately wanted to create an escalating zombie encounter in one of our dark tunnels. It came together quickly, and we were pleased with the result. Years later, we met Alex Garland, the writer of the film. He'd played and enjoyed Half-Life 2 and we confessed that his movie inspired this section. |
cn_053_finest_mind_of_his_generation
|
Marc Laidlaw | Here is another small, interstitial scene that ended up working well. The only requirement for this moment was to teach the player to stay off the sand. With prior scenes, we were still focused on simply making them functional, but by the time we got to this one, we were becoming more comfortable and started enjoying the process of putting extra thought into these moments. In doing so, we pushed the characters beyond being mere signposts, creating something more interesting and entertaining. After release, Lazlo went on to develop a larger-than-life following, reminding us how hard it is to predict what players will hold onto. |
cn_054_bridge_playtesting
|
Matt Boone | The Bridge section was always a high point in playtests, with testers consistently having strong reactions to it. The tension they felt while slowly making their way over the girders was visible in their movements. Over time, we layered more challenges into the traversal, and playtests only became more entertaining to watch. From the train rumbling across the bridge and shaking the player, to a surprise headcrab in a shack that seemed like a safe haven midway through. We did feel a little guilty relishing the playtesters' visible stress, but we weren't total monsters: after players made it across the bridge, they'd breathe a sigh of relief, disable the Combine shield, and then realize they had to return across the underside. Concerned they might be bored on the second trip, we decided to add a gunship to keep them company out on the girders. |
cn_055_thumper_final_stage
|
Matt Boone | We liked the final stage of Thumpers, where their purpose is inverted. Now, instead of serving as a tool for safety, they act as a blocker for the player's antlion army. However, we probably should have incorporated more thorough Thumper training before this point. While it was acceptable for players to only partially understand Thumpers in the earlier Coast levels, the stakes are higher now because players are also trying out bugbait for the first time. Players who don't realize that this first Thumper prevents their antlions from moving forward may misunderstand how bugbait works. If they throw bugbait at soldiers in the first bunker and no antlions arrive, they may conclude that bugbait is unreliable or doesn't work as expected, and switch back to using their weapons instead. |
cn_056_buggy_tools
|
Matt Wright | We wanted the buggy to be more than just a means of travel, so we decided to attach a gun to it. We chose the Tau Cannon, a Half-Life 1 weapon we liked but hadn't found a place for in Half-Life 2. The Tau Cannon can shoot through walls, which makes it tricky for level design, so we limited it to the Coast section. Once you introduce wall-penetrating weapons, level design becomes much tougher. Furthermore, we mounted an ammunition crate on the back of the buggy, giving the player unlimited SMG ammo throughout Coast. This was especially helpful for playtesters who struggled with the infinite antlions spawning from the sand. |
cn_057_antlion_guard
|
Matt Wright | In rare cases, players could find themselves unable to kill the Antlion Guard in this arena. This would happen if they ran out of ammo and had also misused all the available physics objects, leaving nothing to launch with the gravity gun. To address this, we added a citizen on the mounted turret above the gate as a failsafe, allowing a stuck player to rely on the citizen to kill the Guard. |
cn_058_terrain_and_performance
|
Mike Dussault | Our performance budget was often a significant constraint when it came to combat, and even more so for combat on terrain surfaces. When NPCs move, they perform many small spatial probes to determine if they can safely navigate their chosen path. To help visualize it, imagine they can't see and are constantly waving a stick in front of themselves to check if the way is clear. The performance cost of these probes increases with the complexity of the world geometry. Our terrain displacement system creates uneven surfaces using many polygons—far more than the perfectly flat floors found elsewhere. This meant that NPCs moving on Coast's terrain were more resource-intensive than in other areas, and that constraint dictated the scale of many encounters in this section of the game. |
cn_059_area_portals
|
Mike Dussault | One performance challenge we faced in Coast was handling wide open spaces that included buildings with highly detailed interiors—or at least, they were detailed by 2003 standards. When the player was outside, we wanted to allocate as much of our budget as possible to rendering that space, without losing resources to the building interiors. But unless we were willing to board up every window and door, the player could still see inside. To solve this, we created a feature called Area Portals, placing one in every open window and door. An Area Portal pre-computes a flat image of the building's interior. During play, when the player is some distance away, the Area Portal displays this pre-computed image over the window or door, allowing us to skip rendering the actual interior. As the player approaches, we smoothly crossfade between the Area Portal's image and the building's actual contents. With careful use, Area Portals helped us meet our performance goals on Coast—along with the occasional door that closed itself behind you. |
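The described behavior maps onto a simple distance-gated crossfade between a cached impostor image and the real interior. The fade distances, function name, and return format below are invented for illustration.

```python
def area_portal_blend(player_dist, fade_start=25.0, fade_end=35.0):
    """Decide how to draw an open doorway/window into a detailed interior.

    Far away: draw only the pre-rendered flat image and skip the interior.
    Close up: draw the real interior. In between: crossfade. Distances invented.
    """
    if player_dist >= fade_end:
        return {"render_interior": False, "impostor_alpha": 1.0}
    if player_dist <= fade_start:
        return {"render_interior": True, "impostor_alpha": 0.0}
    alpha = (player_dist - fade_start) / (fade_end - fade_start)
    return {"render_interior": True, "impostor_alpha": round(alpha, 2)}

for d in (60.0, 30.0, 10.0):
    print(d, "m ->", area_portal_blend(d))
```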
cn_060_buggy_removal
|
Miles Estes | Once we reached the point where we wanted the player to leave the buggy behind, we had to be very explicit about it. Throughout Coast, players had been leaving the buggy and then returning to it, so we found that playtesters would try to bring it along if it looked even remotely possible. This small garage scene primarily exists to signal clearly that the player is done with driving and will be proceeding on foot. |
cn_061_d2_coast_04
|
Randy Lundeen | This map was actually the first area of Coast that we built, before the Crane and other elements existed. The drained seabed and rusted ships were some of the earliest pieces of art direction we developed. Years later, we showcased the map in our 2003 E3 demonstration, though much of the gameplay was still being figured out at the time. As a result, when we began building out the entire Coast section, that demonstration became a useful vision for us to work toward. |
cn_062_transition_tunnels
|
Randy Lundeen | Building the Coast required us to develop a way to handle level transitions. To move from one level to another, we need to save everything in the transition space as the player leaves the first level and restore it in the next level after loading. Critically, we also have to save and restore everything visible to the player in the transition area, even if it lies outside the space itself. For this reason, we aim to keep transition spaces as small and enclosed as possible. In most of the game, a small room works well for this purpose. But on the Coast, we needed a different solution. After some experimentation, we settled on dark tunnels. Besides being reusable in a believable way, the tunnels allowed us to add a bright light flare at the end to mask the fact that we can't render what's ahead of the player because it exists in an entirely different level. |
cn_063_sniper_alley
|
Robin Walker | This section of game track was added late in development, after we moved Eli's Lab and Ravenholm to come before Coast. While you might think that we build the game linearly, we actually split up into three different groups and worked in parallel on different sections, starting mostly in the middle of the game. Each group's first pass focused on nailing down the core of their section before they moved on. Then, once we had a rough version of the whole game laid out, we went back over each section to polish it—adding successful elements from other areas and removing what wasn't working. In some cases, such as in Ravenholm, entire sections were moved. Then, after Ravenholm was relocated, we needed a new piece to connect it to the Coast, which is the area you're entering now. Given the time constraints, this short section was built using existing, proven gameplay elements, like Snipers and Combine Soldiers. |
cn_064_battery_finding
|
Robin Walker | With the increased density of objects in Half-Life 2's gameplay spaces, we thought it would be interesting to have players search for a specific item, especially if there was some logic to where it might be found. This way, players could locate the item either through careful observation and deduction, or just by brute force searching the area. Since we already had models for cars and batteries, it was a natural step to create a small junkyard for players to explore. We also hid a couple of extra batteries in unexpected places, for the players who chose the brute force method. As with other item-based puzzles, this one needed a failsafe in case players lost the batteries. You can try throwing all of them over the cliff if you want to see what happens. |
cn_065_boathouse
|
Scott Dalton | In Half-Life 1, we discovered we could get a lot of value from scenarios where different AI enemies fight each other. It's fun for players to watch them engage, and it's much easier to observe enemy behavior when you're not the one that's being threatened. This first encounter between antlions and Combine soldiers is designed to be a fun opportunity to ram something with the buggy, but it also serves to highlight the Thumper. By seeing the soldiers use the Thumper to hold off antlions, we hoped more players would come to understand its purpose. We liked how the soldiers demonstrated the value of staying near the Thumper while engaging with antlions. |
cn_066_buggy_theft
|
Scott Dalton | As we built the end of Coast, we decided to have the Combine steal the player's buggy. It really doesn't make logical sense, but after watching so many playtests where players developed a strong attachment to their buggy, we wanted to play with that emotional connection. Once we implemented the theft, we had playtesters yelling at the Combine for stealing their ride—something we took as a sign of success. The soldier firing down from the clifftop is simply there to draw the player's eye up, triggering the look target and spawning the dropship with the buggy. Watching it now, 20 years later, we're struck by how ridiculous it all is. |
cn_067_snipers
|
Ted Backman | The Half-Life 2 team was never very large, so we were always looking for ways to create new gameplay without high asset production costs. The Sniper is a great example. It has no actual model—just a laser beam—so it required no art or animation support. When it dies, it simply spawns a Combine Soldier ragdoll and launches it out the window. Despite that simplicity, we got a lot of unique gameplay from it. We were especially happy with how it interacted with the physics objects in the game world. Players are used to exploiting explosive objects near enemies, so it felt fitting to have an enemy that flipped that dynamic when the player was in cover. |
cn_068_buggy_model
|
Ted Backman | The buggy's model went through many iterations before reaching its final version. We cycled through designs that satisfied both our artistic and physics simulation needs, but then hit an unexpected issue: many playtesters, even experienced FPS players, were getting motion sick while driving it. Along with some simulation tweaks, we discovered that altering the buggy model helped—the more the model obscured the world, the worse the motion sickness became. So we began stripping away parts of the model, removing more and more of the body until players stopped feeling sick. In particular, being able to see the ground through the missing undercarriage made a noticeable difference. |
cn_069_antlion_design
|
Ted Backman | In some of the earliest design work for the game, we had plans for multiple core alien races, with the Combine being just one of them. One of the other races was a religious insectoid species, and antlions were part of that. Later, as we developed Coast, we started viewing them more as pests that came through the Xen portals, along with headcrabs and other creatures. While they aren't actually from Xen, they infest the corners of any world they enter—a lot like cockroaches. |
cn_070_explosive_barrels
|
Brian Jacobson | Here, we introduce explosive barrels, a staple of first person shooters, but with a new twist: the second pistol shot sets the barrel on fire, starting a timed detonation. A third shot will detonate the barrel immediately. This way players could either delay the explosion or trigger it right away, depending on the situation. Players may want to feed an explosive barrel to a barnacle and then shoot it twice as a handy way to safely dispatch a cluster of them. The delayed explosion also made for more interesting chain reactions between multiple barrels. Exploding barrels were a fan favorite in the series, but also were internally favored by level designers who often felt like the game could never have enough of them. |
cn_071_ragdoll_magnets
|
Dario Casali | Half-Life 2's physics gave us a variety of tools to make combat more exciting. For instance, when enemies miss the player, we subtly redirect their shots toward breakable objects nearby, adding mayhem and creating that action-movie feel. We also employed physics to add hints. We created what we called 'ragdoll magnets' across the game to make enemy corpses fall in interesting and dramatic ways—such as off ledges or into the path of a train. |
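A minimal C++ sketch of the two tricks described above: redirecting a deliberate miss toward a nearby breakable, and pulling a fresh ragdoll toward a designer-placed magnet point. The structures, names, and numbers are illustrative assumptions, not the engine's actual code.

```cpp
// Hypothetical sketch; none of these names come from the Source engine.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float len(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// When an enemy is going to miss anyway, nudge the shot toward a nearby
// breakable prop so the "stray" bullet still creates visible mayhem.
Vec3 ChooseMissTarget(Vec3 intendedMiss, const std::vector<Vec3>& breakables,
                      float maxRedirectDist) {
    Vec3 best = intendedMiss;
    float bestDist = maxRedirectDist;
    for (const Vec3& b : breakables) {
        float d = len(sub(b, intendedMiss));
        if (d < bestDist) { bestDist = d; best = b; }
    }
    return best;  // falls back to the original miss point if nothing is close
}

// A "ragdoll magnet": on death, add an impulse that pulls the corpse toward
// a designer-placed point (a ledge edge, the train tracks, and so on).
Vec3 ApplyRagdollMagnet(Vec3 deathImpulse, Vec3 corpsePos, Vec3 magnetPos,
                        float magnetStrength) {
    Vec3 toMagnet = sub(magnetPos, corpsePos);
    float d = len(toMagnet);
    if (d < 1e-3f) return deathImpulse;
    // Scale the pull by the designer-set strength, independent of distance.
    return {deathImpulse.x + toMagnet.x / d * magnetStrength,
            deathImpulse.y + toMagnet.y / d * magnetStrength,
            deathImpulse.z + toMagnet.z / d * magnetStrength};
}
```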
cn_072_helicopter_peekaboo
|
Dave Riller | The Combine attack helicopter as a central antagonist was something we wanted to develop throughout the entire chapter. The relentless chopper became a character in its own right. Here, if you listen and look overhead, you can hear its engine and see it flying along the train tracks and into the distance. This first glimpse is a small piece of foreshadowing, with its presence becoming more pronounced over time—until, eventually, you're squaring off against it. |
cn_073_iterative_passes
|
David Speyrer | Some chapters of the game, such as the on-foot sections of the Canals and Ravenholm, were developed in multiple passes by different cabals. We found that handing off maps in this way led to greater variety in the experience and a higher density of interesting moments, as each cabal added their unique touches to the maps. In the on-foot portion of the Canals, the first pass established the art direction and the player path, along with the broad strokes of the experience and some major beats. But the team-wide Alpha playtest identified insufficient tension and a somewhat monotonous experience here. In our second pass, we returned to the map solely focused on creating that tension and greatly increased the density of moment-to-moment scenarios. |
cn_075_pistol_design
|
Josh Weier | The standard pistol is the workhorse of the game, functioning not only as the first ranged weapon you learn to use, but also as a method to interact with physics and destructible objects at a safe distance for the entirety of the game. To help make it feel more responsive and fun to use, the pistol employs a refire mechanic that is unique to it in the game. The gun will fire as fast as the player can click their input button, meaning that they feel an instant response to their input and the faster they can click, the faster they can shoot. |
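A small sketch of a click-driven refire rule along the lines described above: every fresh press fires immediately, while holding the button falls back to a fixed rate, so clicking faster shoots faster. The state layout and the held-fire interval are assumptions for illustration, not the shipped weapon code.

```cpp
// Hypothetical click-driven refire; assumed to be called once per frame.
struct PistolState {
    float nextAutoFireTime = 0.0f;  // earliest time a *held* button may fire
    bool  wasPressedLastFrame = false;
};

bool ShouldFirePistol(PistolState& s, bool attackHeld, float now,
                      float heldRefireInterval /* e.g. 0.5s, made up */) {
    const bool freshClick = attackHeld && !s.wasPressedLastFrame;
    s.wasPressedLastFrame = attackHeld;

    if (freshClick) {                    // respond instantly to every click
        s.nextAutoFireTime = now + heldRefireInterval;
        return true;
    }
    if (attackHeld && now >= s.nextAutoFireTime) {  // slower held-down rate
        s.nextAutoFireTime = now + heldRefireInterval;
        return true;
    }
    return false;
}
```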
cn_076_shatterglass
|
Josh Weier | When designing early levels where there are fewer mechanics in the player's toolkit, we found that it's just as engaging to spotlight new technical features instead, as we did here with breakable glass. For the shattering effect, glass was divided into a grid of small squares, allowing us to track which were intact or broken and to blow out larger sections once enough squares were no longer connected. Each square was rendered with a unique texture based on the status of its neighboring squares, creating the sharp, jagged edges between broken and intact glass. While breaking glass is now commonplace in games, it was something we hadn't seen much back when we were making Half-Life 2, and we were incredibly happy with the level of fidelity and dynamism it brought to the game. |
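A rough C++ sketch of the grid idea described above: a pane is a grid of small squares, any region that loses its connection to the frame is blown out, and each intact square's edge texture is picked from which of its neighbors are broken. The grid size, connectivity rule, and names are assumptions, not the actual breakable-glass implementation.

```cpp
#include <array>
#include <queue>
#include <utility>

constexpr int W = 16, H = 16;
using Grid = std::array<std::array<bool, W>, H>;  // true = square still intact

// After a hit breaks one or more squares, flood-fill from squares touching the
// frame (the grid border); anything intact that the fill never reaches has
// lost its support and is blown out as well.
void BlowOutDisconnected(Grid& g) {
    std::array<std::array<bool, W>, H> reached{};
    std::queue<std::pair<int, int>> q;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            if (g[y][x] && (x == 0 || y == 0 || x == W - 1 || y == H - 1)) {
                reached[y][x] = true;
                q.push({x, y});
            }
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [x, y] = q.front(); q.pop();
        for (int i = 0; i < 4; ++i) {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx >= 0 && ny >= 0 && nx < W && ny < H && g[ny][nx] && !reached[ny][nx]) {
                reached[ny][nx] = true;
                q.push({nx, ny});
            }
        }
    }
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            if (g[y][x] && !reached[y][x]) g[y][x] = false;  // spawn shards here
}

// 4-bit mask of broken neighbors, used to select a jagged-edge texture variant.
int EdgeTextureIndex(const Grid& g, int x, int y) {
    auto broken = [&](int nx, int ny) {
        return nx >= 0 && ny >= 0 && nx < W && ny < H && !g[ny][nx];
    };
    int mask = 0;
    if (broken(x, y + 1)) mask |= 1;
    if (broken(x, y - 1)) mask |= 2;
    if (broken(x - 1, y)) mask |= 4;
    if (broken(x + 1, y)) mask |= 8;
    return mask;  // 0 = fully surrounded by intact glass or the frame
}
```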
cn_077_lambda_caches
|
Laura Dubuk | Technically, the first lambda cache is at Kleiner's Lab, but this is the first one the player finds on their own. We added lambda caches to encourage exploration and to hint to players where hidden resources could be found, with the fiction being that they were being placed by the Resistance as part of their underground rebellion. In Half-Life, the contents of supply crates were predetermined by the level design. In HL2 we introduced adaptive item crates that give the player whatever they are low on at the time, based on resource targets set by the level designer. |
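A simplified sketch of the adaptive-crate idea described above: the crate compares the player's current resources against the level designer's targets and grants whichever is proportionally lowest. The struct names and scoring rule are assumptions, and the item classnames are only examples.

```cpp
#include <algorithm>
#include <string>

struct PlayerResources { int health; int pistolAmmo; int smgAmmo; };
struct ResourceTargets { int health; int pistolAmmo; int smgAmmo; };  // set per area

std::string PickCrateContents(const PlayerResources& p, const ResourceTargets& t) {
    // Deficit as a fraction of the target, so different scales compare fairly.
    float healthNeed = 1.0f - std::min(1.0f, p.health     / float(t.health));
    float pistolNeed = 1.0f - std::min(1.0f, p.pistolAmmo / float(t.pistolAmmo));
    float smgNeed    = 1.0f - std::min(1.0f, p.smgAmmo    / float(t.smgAmmo));

    if (healthNeed >= pistolNeed && healthNeed >= smgNeed) return "item_healthkit";
    if (pistolNeed >= smgNeed)                             return "item_ammo_pistol";
    return "item_ammo_smg1";
}
```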
cn_078_scanners
|
Kerry Davis | Scanners presented a tricky game design challenge. While we liked how their dystopian presence helped convey the Combine occupation as an oppressive surveillance state, we didn't actually want them to shoot at the player. We found that aiming into the sky to hit a small, mobile enemy with a constant height advantage was something we couldn't make fun. Since we loved their role in the story, we decided that in combat, scanners would serve primarily as eyes, alerting nearby enemies to the player's presence—a feature we reinforced with sound effects from the soldiers. Beyond that, they'd only be able to harass the player with bright lights and occasionally dive-bomb after taking enough damage. |
cn_079_surface_properties
|
Ariel Diaz | We play custom footstep sounds when the player walks on the slippery mud. This was a system called surface properties that used the material type to determine the sounds and other effects that should occur when walked on or shot at. Earlier in development, we had an effect that slowed players' movement when walking on mud surfaces. However this was removed in the shipping version as it was decided it was too punishing. |
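A minimal sketch of a surface-property lookup along these lines; the real system is data-driven from script files, and the sound names and friction field here are illustrative assumptions (the mud slowdown was cut, as noted above).

```cpp
#include <string>
#include <unordered_map>

struct SurfaceProperty {
    std::string footstepSound;   // sound event played per step
    float frictionScale;         // could slow or speed the player
};

const SurfaceProperty& LookupSurface(const std::string& materialType) {
    static const std::unordered_map<std::string, SurfaceProperty> table = {
        {"concrete", {"Player.FootstepConcrete", 1.0f}},
        {"mud",      {"Player.FootstepMud",      1.0f}},  // slowdown removed for ship
        {"metal",    {"Player.FootstepMetal",    1.0f}},
    };
    static const SurfaceProperty fallback = {"Player.FootstepDefault", 1.0f};
    auto it = table.find(materialType);
    return it != table.end() ? it->second : fallback;
}
```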
cn_080_combat_entrances
|
Dario Casali | Running around a corner into a new area just to see enemies standing there is always awkward and it makes the world feel less alive and more like a shooting gallery. So we were motivated to introduce enemies into combat spaces in more interesting and dynamic ways that would communicate planning and intentionality on their part, as if these soldiers had a plan for Gordon even before he showed up. The Metrocops in this scene jump down into the space via scripted sequences and later in the game we use rappelling in a similar way. It doesn't matter where they're coming from in reality because players end up telling that story to themselves. We found that in the end, combat should have a story implied in its setup: an ambush, a surprise, a surge, an assault – all of these scenarios use much of the same enemy AI but all felt different to playtesters. |
cn_081_mounted_gun_gallery
|
Dave Riller | There are a few things going on in this scene: the turret firing at the breakable crates behind them teaches players not to take the threat head-on and instead utilize the sewer route to defeat it. And when they reach the turret, players enjoy using it in a shooting gallery with a wave of Combine soldiers and dive-bombing scanners. We also begin to introduce the missile-firing Combine APC that pushes the players along and will show up again later in the chapter. |
cn_082_barnacle_bowling
|
Josh Weier | This is a satisfying example of how a bunch of design elements can come together in a matrix of interactions to create a memorable scenario for the player. We've got barnacles, our physics simulation, slippery mud surface properties, and last but not least: explosive barrels. You've encountered all of these on their own but they're able to snap together naturally to create a moment we called 'barnacle bowling.' |
cn_083_barnacle_introduction
|
Mike Dussault | Because barnacles can be confusing when first encountered, we wanted to introduce them in a way that showed players their behavior at a safe distance. We decided to do that through a 'nature show' style predator/prey scene. But as always, scenes like this took a lot of iteration to get players to reliably see. |
cn_084_physics_in_puzzles
|
Eric Smith | Often, simpler mechanics encountered earlier in the game are pulled from more complex ones as a form of training. The teeter-totter puzzle, the first physics-based contraption in the game, was actually derived from a later, more intricate washing machine puzzle in the Canals. The teeter-totter uses the same principles of mass and ramps but in a more straightforward way that players can directly interact with, helping them understand the game's physics and making later puzzles easier to tackle. This puzzle also highlights the analog nature of the physics system, with the placement of cement blocks and the player's own weight both factoring into the solution. |
cn_085_combine_attack_helicopter
|
Brian Jacobson | Players have had glimpses before, but here is where we fully introduce the main boss of the Canals chapter: the Combine attack helicopter. This chopper relentlessly destroys the player's cover, forcing them to move forward or risk being gunned down. We wanted this sequence to feel tense and menacing, with the helicopter seeming far too powerful to confront directly, and maintaining that intensity through the Canals until the final showdown in the reservoir. To achieve this, we heavily constrained the helicopter's movement based on specific gameplay goals. In this map, designed as a dash from cover to cover under helicopter fire, we created a ringed path in the sky above the arena. The helicopter always travels to the farthest point on that ring from the player, keeping it visible and making it easier for us to place cover in just the right spots. |
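A small sketch of the ringed-path rule described here: given the ring of points above the arena, the helicopter heads for whichever point is currently farthest from the player, keeping it across the arena and in view. Names are invented for illustration.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

static float DistSq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Assumes ringNodes has at least one entry.
Vec3 PickHelicopterGoal(const std::vector<Vec3>& ringNodes, const Vec3& playerPos) {
    Vec3 best = ringNodes.front();
    float bestDistSq = -1.0f;
    for (const Vec3& node : ringNodes) {
        float d = DistSq(node, playerPos);
        if (d > bestDistSq) { bestDistSq = d; best = node; }
    }
    return best;  // the chopper then flies along the ring toward this node
}
```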
cn_087_explosive_barrel_factory
|
David Speyrer | This room, filled with an absurd number of exploding barrels, inspired some good-natured debate within the team around how far to push the plausibility of our world. Frankly, one exploding barrel is probably too many in regards to realism – in the real world, barrels really shouldn't explode, even if you were to shoot them. |
cn_088_rising_water
|
Dhabih Eng | The floating bridge and wooden spools here introduce the concept of buoyancy as a tool for the player. In this case it's not really a puzzle, it's just some fun for the player interacting with the physics system, but later we revisit this gameplay element in the Canals to raise a ramp for the airboat. |
cn_089_manhack_matt
|
John Morello | We called this guy 'Manhack Matt'. We wanted to give players the sense that hundreds of citizens inhabited the hidden nooks and crannies of City 17. At this point in the game it had been a while since players had seen a citizen, and we wanted to explain the significance of the manhacks, which was the first physics-based enemy that we created. In the original concept, there was a video arcade in the city where players could fly manhacks around to attack virtual citizens. Eventually players would discover that the manhacks were real, and so were the citizens. As the Combine occupation became more fleshed out, this idea of a video arcade became an odd fit, and eventually fell by the wayside. |
cn_090_fighting_motion_sickness
|
David Speyrer | In early playtesting, many players developed motion sickness while driving the airboat. While trying to fix this, we discovered several factors that contributed to it. We found that reducing the intrusion of the airboat into the view helped, so we made several passes on the model to pare it back. Maintaining a consistent frame rate helped as well, so it was important to keep an eye on performance in these sections. However, the single most important factor was eliminating view roll to maintain a more stable horizon, no matter what the airboat was doing. One of the programmers on the team was super prone to motion sickness, and they valiantly offered to playtest several times to evaluate our progress towards a solution. I'm pretty sure they threw up at least once after playtesting, and their noble sacrifice was hugely appreciated. |
cn_091_headcrab_canisters
|
Jeff Lane | Used headcrab canisters are seen later in Ravenholm, so we decided to show them here to tell the story of how they are used by the Combine. The story is that the Combine have weaponized Xen fauna to use them against the humans. Burrowed headcrabs were added as a new feature for this section, which was expanded into a larger area meant to show a citizen shanty that was decimated by the headcrab assault. |
cn_092_airboat_intro
|
John Morello | This is where the player gets the first vehicle of the game, the airboat. We wanted to create a safe space for players to practice driving the airboat for as long as they wanted before moving on. The arena is sprinkled with opportunities for jumps and tricks, and we filled the arena with toxic slime to keep players in the airboat longer, while preventing them from leaving the vehicle behind and progressing forward on foot. |
cn_093_zombie_surprise
|
Jeff Lane | There's a little bit of a cheeky jumpscare in this sewer pipe. Players wander in here, all focused and curious, and then an underwater zombie pops up, almost always scaring the pants off of playtesters. One fell completely out of their chair and we marked that moment as a success. |
cn_094_airboat_time
|
Dave Riller | Finding the right balance between vehicle time and time on foot took some work. Driving at full speed propelled players through the world quickly, and in early playtests, we felt that players were missing out on exploring and taking in the game’s detail. Exploration and world examination are core values of Half-Life, so we didn’t want the vehicle to eliminate that. To address this, we added periodic spots where players could get out and explore small pockets of dense detail. Additionally, as the physics system evolved, we were learning new ways to use it. These spots off the airboat let us introduce an element of discovery around physics, like dropping crates into the water or combining barrels to raise a ramp. This gave players chances to interact with water and physics in different ways. |
cn_095_floating_supplies
|
Josh Weier | In the alpha playtest, the Canals were sprinkled with floating supply crates that players could smash with the airboat to collect the supplies inside. The mechanics were somewhat gamey, but helped keep the flow of gameplay by allowing the player to resupply on the go. To help make the premise more plausible, we wanted to tell the story of how the floating crates ended up so conveniently placed along the player's path. This scene of citizens dropping supply crates helps to sell that fiction while also making the canals feel more alive. It also made players feel more heroic and important to be helped by so many supporting characters. |
cn_096_grenade_intro
|
Brian Jacobson | We conceived of this Combine base as both a training area and a playground for grenades, introducing them here in infinite supply crates to signal that it’s safe for players to use grenades freely, without fear of running out. Go wild. The combat encounter here uses extensive markup to direct the soldier AI toward cover and provide good opportunities for players to make use of grenades. At the end of the area, the same infinite grenade technique helps players resolve the turret placement, as a direct assault is a lot more difficult if you don't. A bright red light and a particle trail were added to the grenades, making it significantly easier for players to follow their trajectory, whether they’re throwing them or dodging one from a Combine soldier. |
cn_097_puzzle_layers
|
Dave Riller | Because players are often totally unaware of the designer's intent, which we think is a good thing, they tend to respond in all kinds of surprising ways to our puzzles. Even seemingly simple puzzles can see countless failures along the way to success. That was the case with this puzzle to open the canal gates, which required many layers of hints and cues. The sparking wheel communicates that this is where someone would normally open the gate, but some other method is needed. A few crows draw the player's eye towards the solution - in this case a bundle of girders on a pendulum. As grenades have been thoroughly introduced, we originally wanted players to throw one to break the wooden support structure, but that wasn't obvious enough. We added exploding barrels as a much clearer clue. But even after solving the puzzle, some players didn't realize that the gates were now open, which led them to endlessly explore the base interior looking for another exit. So we had to remind them of their original problem, the broken gate. We added smoke, the alarm klaxon, and a spinning light, and it was then that we finally started seeing reliably good results in playtesting. |
cn_098_metrocop_strafing
|
Doug Wood | The initial design of the airboat levels aimed to challenge players with tight maneuvers under pressure from Metrocops. The Metrocops’ strafing fire pattern was meant to make players slalom around shots as they sped through. But in our review process, internally known as “Overwatch”, some playtesters felt frustrated being shot at without a way to retaliate. They wanted the airboat to feel powerful and unstoppable. Based on that feedback, we refocused the levels to emphasize empowerment over challenge, shifting from dodging attacks to fast-paced combat, speed and busting through barriers. |
cn_100_mounted_gun
|
Jeff Lane | The mounted guns were added here to help train the player on the gun's effectiveness against the Attack Helicopter for the final battle in the reservoir. This was necessary because up to this point, the helicopter was invulnerable to the player's weapons and we had to give some indication that it could be harmed with larger weaponry. This was only somewhat effective in playtests due to the amount of pressure the player was under at this moment, which led to us developing the particle effects and sounds that would sell to players that they were really doing damage. |
cn_101_357_intro
|
Josh Weier | The Magnum, unlike the basic handgun with its slower rate of fire, is meant to be fired in a more measured, calculated way, dealing catastrophic damage to your target. Earlier versions of the Magnum had a bit of random dispersion, which we felt fit the heft and less sensitive trigger pull of a larger pistol. However, this led to anticlimactic moments where players would carefully line up a shot, only to miss due to the accuracy penalty. To address this, we made the first bullet perfectly accurate, with each subsequent shot becoming less accurate as you blasted away. |
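A sketch of the "first shot is perfectly accurate" rule described above, with made-up numbers; the recovery time and spread values are assumptions, not the shipped weapon's tuning.

```cpp
#include <algorithm>

struct MagnumSpread {
    float spreadDegrees = 0.0f;   // half-angle of the accuracy cone
    float lastShotTime  = -1e9f;
};

// Returns the spread to apply to this shot; 0.0 means a perfectly accurate shot.
float FireMagnum(MagnumSpread& s, float now) {
    const float recoveryTime  = 0.75f;  // pause this long to reset accuracy
    const float spreadPerShot = 2.0f;   // degrees added per rapid follow-up shot
    const float maxSpread     = 6.0f;

    if (now - s.lastShotTime > recoveryTime)
        s.spreadDegrees = 0.0f;          // first (or well-spaced) shot: dead on

    float spreadForThisShot = s.spreadDegrees;
    s.spreadDegrees = std::min(maxSpread, s.spreadDegrees + spreadPerShot);
    s.lastShotTime = now;
    return spreadForThisShot;
}
```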
cn_102_helicopter_bombing_run
|
Brian Jacobson | This is another example of us creating movement hazards while driving the airboat. The helicopter attempts to stay in front of the player when dropping bombs, and if the player backtracks, it switches to hovering and turns to attacking the player. In one playtest, a bomb just so happened to drop between the pipe seams, creating a pretty cinematic moment. The playtester had a positive reaction, so the team decided to script it so that it happened every time. This process has become part of our organic design process: every so often, a cool thing randomly happens in a playtest and we find a way to make it happen more reliably. |
cn_103_smokestack_collapse
|
Doug Wood | When making a cinematic action game like the first Half-Life, we'd always wanted to make some big, spectacular destruction sequences, but we were never quite able to do anything like that. Half-Life 2's physics introduced a system for breaking small physics objects, such as crates, into pieces, which provoked the idea of doing something much bigger. We took a leap and tried breaking a smokestack into pieces as it fell. So we experimented with the masses until it felt better, refined the model, and added lots of smoke and audio to sell the effect, and there we had it: our big, spectacular destruction moment. Ultimately, our physics made the sequence just barely possible in a way that maybe seems quaint by modern standards, but felt really awesome at the time. Because the simulation was unique every time we ran it, we had to take steps to avoid completely blocking the player's path with the big chunks of debris. The success of this sequence whetted our appetites for the much higher-fidelity cinematic physics destruction effects next seen in Episode Two. |
cn_104_washing_machine_puzzle
|
David Speyrer | We created the washing machine physics puzzle late in development, after playtesting indicated that players needed some downtime from the relentless pressure from the helicopter. Pacing contrasts like this can accentuate the high moments by resetting the player's emotional state and warding off exhaustion or even boredom. It was a positive sign that players reacted so strongly to all the work done to make the helicopter truly menacing, but if we didn't manage the stress of it, fatigue set in which quickly started to feel like monotony rather than excitement. Originally, players quickly drove through this area, but because of the nice sunlight and composition of the canal here, we decided it was a good spot to pause the action and have the player spend some quiet time solving a puzzle. The puzzle itself, which was the first to use masses and pulleys, ended up giving rise to other physics navigation puzzles elsewhere in the game. |
cn_105_airboat_changeup
|
Jeff Lane | We heard from playtesters that they wanted more varied gameplay from the airboat. Up until this point in the game, the player had no real way to fight back against the helicopter and other enemies, so it felt right to introduce a new weapon, and in turn deliver a pacing change and set the stage for what would come next. In the subsequent areas, we added target practice to acquaint people with their new weapon. Extra physics force on the Air Gun Projectiles meant targets would get sent flying, making it feel way more powerful than anything in the player's loadout. Even then, when it came to playtesting combat against the helicopter, a lot of testers still thought it was invincible. They hadn't noticed our efforts to illustrate that the helicopter was taking damage from the Combine mounted guns earlier. To get over this, we did the most obvious thing: we put the helicopter directly outside the tunnel in which you get the gun and made it extremely clear that you can and should shoot it. And it worked. |
cn_106_donkey_kong_barrel_ramp
|
Eric Smith | This particular area, the drainage ramp, originally started as just an interesting piece of level architecture. But during playtesting, we quickly saw how fun it was to slide the airboat up and around the slope, so we added more elements to enhance that experience. This included Metrocops dropping barrels down toward the player, in a little nod to the classic arcade game Donkey Kong. |
cn_107_lambda_cache_interlude
|
Danika Rogers | A Half-Life game is an ultimately linear experience, but we're always looking for ways to break up the player path and create non-linearity at level-scale. This lambda cache, replaced with a zombie and headcrab ambush, was entirely optional and built only to add some of that non-linearity to this section of travel through the canals. |
cn_108_skybox_time_of_day
|
Eric Kirchmer | The skybox and fog color changes incrementally during this chapter. This is meant to visually represent the passage of time as the player journeys through the canals and the chapter. Two fog color values in opposite directions help sell the illusion of the oblique angle of the sun. You can see it getting closer to sunset by this point. This culminates in the helicopter combat arena with the sun setting and Ravenholm visible in the skybox. |
cn_109_helicopter_megabomb_origin
|
Brian Jacobson | The megabomb helicopter attack was based on a bug where a programmer accidentally removed the bomb drop cooldown, resulting in the chopper dropping hundreds of bombs all at once. It looked so cool that we decided we had to keep it for the final shipping bombing run attack. |
cn_110_helicopter_final_battle
|
Dave Riller | We designed this area as the arena for the final showdown between the player and the Combine Attack Helicopter. It’s the player’s chance to finally eliminate the relentless enemy that’s been hunting them since fleeing underground after leaving Kleiner’s lab. This area went through many iterations to get right. It needed ample space for fast-paced vehicle combat, a smooth movement flow with clear path options, and reliable performance. We also aimed for visual clarity suited to combat—striking but not overly noisy—while ensuring it would serve as a memorable vista once the helicopter is destroyed. |
cn_111_airboat_wreckage
|
David Speyrer | Originally, we destroyed the airboat in the jump over the dam, so this level opened with the player amidst the wreckage of their beloved airboat. In a team-wide playtest, people felt disappointed to lose the vehicle they had spent so much time in. There was a lot of internal debate about whether that feeling of loss was a valuable experience or not. Ultimately we decided to leave the airboat intact to allow for the possibility of the player reuniting with it someday. Some say that airboat is still out there waiting for you. |
cn_112_arena_exit_gate
|
Quintin Doroquez | The wheel mechanism to raise the exit gate was added specifically to force the player to defeat the helicopter before they can move forward. We often refer to elements like this as 'gating' the player—using design elements to keep them from progressing until they’ve completed a necessary task. Here, it’s quite literally a gate. |
cn_113_blocking_tutorials
|
Adrian Finol | At times we need to be certain the player has learned something, and the standard method – at least in game development, not in life – is to trap them behind a gate of some kind, where the player must demonstrate their knowledge to be able to continue. We try to disguise these gates as something natural, so it doesn't feel like the game has come to a halt. Of course, time, resources, or the environmental context itself can make that tricky. In this case, we needed to teach players how to move objects, so a simple stacking puzzle seemed like the best solution. We tested more complex designs, but they ended up being less effective at teaching the actual thing we needed the player to learn. |
cn_114_natural_storytelling
|
Bill Van Buren | To achieve our world-building and storytelling goals throughout the Trainstation, we needed a natural way to deliver dialogue without it feeling too 'gamey' or requiring tutorials. We noticed players often approached NPCs to get a closer look, so we turned that into an opportunity for the NPCs to acknowledge the player and deliver their lines. This kept interactions low-key, which was essential since the citizens aren't exactly too cheerful or lively. The simplicity of this approach also aligned with one of Half-Life’s core storytelling goals: making the narrative responsive to the player's interest. Some players want to dig into every detail, while others just want to know what to do next. Giving players control over how much story they engage with—without it blocking progress—is a goal we always try to maintain in a Half-Life game. Plus, the less complicated a feature is to implement, the more of it we can include. |
cn_115_trainstation
|
Charlie Brown | The beginning of a Half-Life game has a big job to do: we have to introduce players to a new world while teaching them how to interact with it. Players need to quickly grasp what's happening, who the key players are, what threats they'll face, and how they fit into it all. And above all else, it has to be fun and as interactive as possible. Nailing this takes a lot of iteration and playtesting, especially on the small moments that make up the bigger picture of the game. In this section, we'll highlight some key moments, game design problems and a few playtests where things didn't quite go as planned. |
cn_116_physics_objects
|
David Sawyer | It might be hard to remember it now, but back when Half-Life 2 came out, small objects in games, like these bits of trash, were usually static - as in, they didn't really move, or if they did, it was in a very simple way. If you were lucky, they might break apart into some sprites if you hit them. One of the goals for this initial Train Station section was to really make it clear to players that these kinds of things were properly, physically, actually being simulated in Half-Life 2. They could be picked up, dropped, thrown, broken, moved by other forces, you name it. Here, we're starting with just a small thing, showing that trash is being blown by the arrival of the train - or more specifically, an invisible volume that's triggered by the player leaving the train. |
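A bare-bones sketch of the invisible push volume described here: when triggered, it applies an impulse to every physics prop inside it, scaled by mass so light trash scatters furthest. The structures are illustrative, not the engine's actual trigger entities.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct PhysicsProp {
    Vec3 position;
    Vec3 velocity;
    float mass;
};

// Fired when the player leaves the train; scatters loose trash as if blown by
// the arriving train.
void FirePushVolume(std::vector<PhysicsProp*>& propsInsideVolume,
                    const Vec3& windImpulse) {
    for (PhysicsProp* prop : propsInsideVolume) {
        // Lighter props (paper, cans) get flung farther than heavier ones.
        prop->velocity.x += windImpulse.x / prop->mass;
        prop->velocity.y += windImpulse.y / prop->mass;
        prop->velocity.z += windImpulse.z / prop->mass;
    }
}
```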
cn_117_blocking_tutorials_3
|
Eric Smith | The second failure in this tutorial space is a case of misteaching—where we accidentally teach the player something we didn't intend. This can be hard to avoid without extensive playtesting, since the game is always teaching, even when it's not by design. We cover a more extreme example of this later in Trainstation. Some cases of misteaching are critical to fix, like if the player learns the wrong way to use a weapon or interact with a common element of the game. But this one was more subtle. While teaching players to move objects, we also taught them to stack physics objects to escape a space. Normally, that's fine—except we don't deploy that puzzle later in the game, and teaching something irrelevant just clutters up their mental toolkit and makes it harder to readily keep useful things in mind later on. Worse, stacking objects to escape the play space became a problem later on, something we had to actively prevent in multiple areas. With so many physics-based objects in the game, blocking this behavior without putting ceilings everywhere wasn't really an option. In the end, we hoped players wouldn't overthink this early lesson, and when playtests showed players getting stuck later on, we made sure not to remind them of it. |
cn_118_welcome
|
Gabe Newell | I'm Gabe Newell, and welcome back to Half-Life 2. It's been a long twenty years since its release. In this anniversary update, we've tried to respond to as many of the community's requests as possible. And while this might not be the absolute number one request from fans, I often hear how much people would like a commentary mode, so we're excited to finally make that happen. Not only was it a challenge to remember many of the details of Half-Life 2’s development, but it's not easy to crack open decades-old computer code and wonder what the hell you were thinking way back then. Along with the dev commentary, we've cleaned up a bunch of old bugs and tried to leave things in a better shape than we found them. And at least this time we have the luxury of knowing how things turned out in the end. To listen to a commentary node, put your crosshair over the floating commentary symbol and press your Use key. To stop a commentary node, put your crosshair over the rotating node and press the Use key again. And please let me know what you think. You can reach me at gaben@valvesoftware.com. Thanks and have fun! |
cn_119_metrocop_shoving_citizens
|
Josh Weier | We've only walked a few feet and we've already got problems to solve. We're introducing the Metrocop – an enemy that eventually we're going to ask players to kill – and giving you a glimpse of the world they're stepping into. It's a simple, clear moment that shows the relationship between the Metrocops and citizens and, by extension, the player. At the same time, we decided to reinforce the existence of the core physics simulation, showing the way in which NPCs can casually interact with objects in the world. Seeing something like a suitcase knock into another and dislodge was pretty novel back then. These moments at the Trainstation, where we combined multiple goals, were the most successful—they felt natural, like the game wasn't pausing just to teach you a lesson. |
cn_120_fall_damage
|
Kerry Davis | Teaching players how to do things is an essential thing for the game to do. But for them to make confident decisions, they also need to understand the more subtle rules. This small moment covers two of those: First, it shows how far the player can fall without taking damage. Second, it introduces a bit of the game's 'vocabulary'—how the game will communicate with them. Players often worry about taking fall damage, but they also worry about taking a step they can't take back, so here, a welcoming crate to land on signals that dropping down is safe and it's the way forward. And when the crate breaks, it reinforces the physics simulation and is likely the first time players will see that larger objects can be broken apart. |
cn_121_sweeping_vortigaunt
|
Laura Dubuk | Another key moment of storytelling, and another nod to the physics system: here we're showing how Vortigaunts fit into this new world, which is a big shift from their Half-Life 1 role as enemies. Like so many moments in Trainstation, the sweeping Vortigaunt came from our constant hunt for ways to show characters interacting naturally with the physics system. Why just tell players Vortigaunts are friendly now when you can show one casually sweeping the floor and knocking around some simulated objects? Nothing signals 'Hey, don't shoot me this time around' like someone willing to tidy up. |
cn_122_blocking_tutorials_2
|
Robin Walker | There are two failures of note in this tutorial. First, players can break it by tossing all the boxes out the window, which is obviously not great. This leaves them stuck, and unable to continue, which we consider a game-stopping bug that must be fixed. A clean solution would've been to tweak the window or crate sizes so that the player could fit through, but not the crates. The problem was we really only caught this late in development, and making even a simple change like that can have unpredictable, game-breaking consequences— and right before launch, no less. Also, this issue was purely theoretical. We'd never actually seen playtesters do it. Our playtests of this design space had gone smoothly, with players learning what they needed and moving on. Changing the room's design so late in the game could've disrupted that success in ways we couldn't foresee. So, we opted for a more conservative fix rather than something elegant. If you're curious, try tossing all the boxes out the window and see what happens. |
cn_123_playground
|
Brian Jacobson | When we discussed ways to highlight our physics simulation around the city environment, a playground felt like a natural fit. And for players paying attention to Dr. Breen, it's also a reminder of the cost of humanity's surrender to the Combine. We chose a few playground structures with interesting physics potential and began tuning them. Even with the advanced physics simulation we had at the time, setups like this always required hand-tuning, as we had fully simulated objects interacting with non- or partially simulated ones, such as the player or static world geometry. It might be easy to assume that a simulation can carry a lot of the load of implementation, but even now, twenty years later, that is rarely the case. Back when we were working on the playground, late in the game's development, we couldn't get the friction on the slide to feel right, so we decided to break the ladder so the player couldn't get up there. However, you can still just walk right up the front. As is commonly the case, you, the player, get the last laugh on us. |
cn_124_pick_up_that_can
|
Charlie Brown | Contextualizing tutorials – creating a story scenario in which they happen – lets us do more than just stop the game to teach players something. We get to add narrative value to the moment. Here, teaching the player how to throw objects is combined with a narrative setup that gives them a choice in how to respond. After watching the Metrocops casually abuse citizens, it felt right to put the player in a similar position next. If they had opinions on how the citizens should've reacted, now they get to express them. Playtesting showed most players either complied or refused the request to pick up the can, but some tried to avoid the confrontation entirely. Others would comply initially, then retaliate after feeling humiliated. By keeping the moment relatively constrained, in terms of both player actions and the space itself, we could generally react appropriately to whatever choice they made. |
cn_125_promises
|
Danika Rogers | Early in a Half-Life game, one of our goals is to make promises to the player, giving them a sense of what to expect later on. From movies, to TV shows and books – all great entertainment does this in some way, and video games are no different. The promises in a Half-Life game can take different forms. Some are simple, like offering just a glimpse of something intriguing. Others are more layered, like the interactions with Metrocops. By putting the player in a powerless position against them, we set the stage for that future moment when they can finally get to retaliate. |
cn_126_exploration
|
David Sawyer | After being herded through the Train Station by Metrocops, we wanted to give the player a chance to explore. This plaza offered several ways to interact with Citizens, Metrocops, Scanners, and the city infrastructure. We aimed for these interactions to continue building the world without needing explicit tutorials, and we refined them until they worked naturally, without any real training. |
cn_127_early_game_exploration
|
David Speyrer | Up until this plaza, the player likely hasn't had to think much about their path, as it's been clear and straightforward. But here, we wanted to encourage exploration, which meant making the way forward less obvious; as you might expect, that can produce a feeling of being lost or, worse, confused. These kinds of shifts can be tough, especially early in the game when the player is still learning the game's guidance cues. Here, it's further complicated by force-fielded checkpoints, which allow NPC citizens to move through but restrict the player, leading many playtesters to believe that there was a way to bypass them. Ultimately, it was extensive testing that helped us find the right visibility for the alley forward, using lighting, layout, and a wooden crate – an object players have already used as a sign they're headed in the right direction – all to subtly guide them forward. |
cn_128_strider
|
Doug Wood | This simple moment is a promise that you'll encounter this enormous thing later in the game. Even an uncomplicated scene such as this really does require a lot of playtesting and iteration because of Half-Life's goal of not taking control away from the player. As you can imagine, it takes a lot of small tweaks to ensure that as many players as possible are looking at the right place at the right time here. The street layout and the Metrocop scene are all set to guide the player's gaze, perfectly framing the Strider when it appears. |
cn_129_physics_simulation
|
Jay Stelly | Even though we began Half-Life 2 with a functional third-party physics simulation, it still took us years of additional work both on the simulation itself and its integration into the game until we felt like we could meet our designers' ambitions. Maintaining a shippable level of quality and performance was something we knew would be a challenge along the way, but the more subtle issue was actually figuring out what to simulate—and to what extent. We can't simulate everything, and even if we could, there are game design reasons why the experience is better when we don't. In many cases, we deliberately bend the simulation to improve the player's experience. These adjustments usually go unnoticed because they align with player intentions. For example, we disable collisions between the player and large objects to let them turn in narrow spaces. Or, we tweak the combine rifle's alternate fire to gently guide the combine ball towards enemies, making it more effective. |
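Two illustrative sketches of the kinds of adjustments described above: a mass cutoff that lets the player turn freely near heavy props, and a gentle steering blend on the energy ball. The threshold, blend math, and names are assumptions rather than the engine's actual collision-filter or weapon code.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// 1. Let the player turn around in tight spaces by ignoring collisions with
//    sufficiently heavy props (an assumed cutoff, not the engine's rule).
bool ShouldCollidePlayerVsProp(float propMassKg) {
    const float heavyPropThreshold = 300.0f;
    return propMassKg < heavyPropThreshold;
}

// 2. Gently blend the energy ball's direction toward the nearest enemy while
//    preserving its speed, so the alternate fire lands more often.
Vec3 SteerCombineBall(const Vec3& velocity, const Vec3& ballPos,
                      const Vec3& nearestEnemyPos, float steerFraction) {
    Vec3 toEnemy = {nearestEnemyPos.x - ballPos.x,
                    nearestEnemyPos.y - ballPos.y,
                    nearestEnemyPos.z - ballPos.z};
    float dist  = std::sqrt(toEnemy.x * toEnemy.x + toEnemy.y * toEnemy.y + toEnemy.z * toEnemy.z);
    float speed = std::sqrt(velocity.x * velocity.x + velocity.y * velocity.y + velocity.z * velocity.z);
    if (dist < 1e-3f || speed < 1e-3f) return velocity;

    Vec3 dir = {velocity.x / speed * (1.0f - steerFraction) + toEnemy.x / dist * steerFraction,
                velocity.y / speed * (1.0f - steerFraction) + toEnemy.y / dist * steerFraction,
                velocity.z / speed * (1.0f - steerFraction) + toEnemy.z / dist * steerFraction};
    float dirLen = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    if (dirLen < 1e-3f) return velocity;
    return {dir.x / dirLen * speed, dir.y / dirLen * speed, dir.z / dirLen * speed};
}
```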
cn_130_physics_simulation_2
|
Jeff Lane | Areas where game design or other constraints impacted the physical simulation often required extra tuning to keep the simulation from blowing up. As an example, early on we experimented with increasing the simulation on the player entity, which is to say, making player movement much more a part of the physical simulation of the world. We knew this might make movement feel less precise but hoped it could lead to some novel experiences. However, after tuning the movement to match the less simulated version, we hit a major problem: the physical force needed for the player to move at their usual speed was so high that simply running into other simulated objects could result in death from the impact. We iterated on it, but ultimately couldn't find a solution we liked and reverted to the less simulated movement. However, this wasn't exactly the easier path—it introduced new challenges with how the less simulated player interacted with the more simulated objects in the game. But it did allow us to deliver the precise movement experience we wanted. |
cn_131_playtesting
|
John Morello | Playtesting is a core part of how we make games – we can never fully predict how players will respond or what they'll take away from what we've built. So we test to learn and hopefully improve. During a playtest, we don't guide or talk to the player, even if they're stuck, as it's extremely unlikely that we'll be sitting there next to the player once the game ships. Our goal is for testers to forget they're playtesting altogether and play as if they're at home. Any interruptions risk skewing the data, so we only step in if there's a bug that's blocking progress or, more rarely, if their behavior is so confusing that we need to understand what they're thinking. |
cn_132_playtesting_anecdote_1
|
Matt Boone | One memorable playtest reminded us that we can never be certain what players are thinking or what lessons they've taken from their experience. This tester, after playing from the start and getting stuck trying to open a door, spent several minutes going back and forth between the game and the Keyboard Settings. When we asked what they were looking for, they said they were searching for the Inventory controls, thinking they could use an item they'd picked up earlier to unlock the door. While we'd seen them collect and drop several physics objects, they'd misunderstood what was happening. The tester believed that pressing the Use key while holding an object moved it into an inventory, because the item dropped straight down. We knew it was just gravity, but their assumption wasn't crazy; some adventure games of that era did slide items into an invisible inventory that way. We explained there was no inventory, and the test continued. Later, we considered whether we needed to address this, but since no other playtesters had the same confusion and all of our potential solutions were more heavy-handed than necessary, we decided it was an outlier that was OK to leave unaddressed. It served as a good reminder, though, of how even the most subtle, overlooked details can mislead a player. |
cn_133_cratebaby
|
Scott Dalton | This small baby doll has a long legacy. During the long months of internal testing, some of the team turned it into a challenge—a way to make the 47th playthrough of the game a bit more fun. They'd place the doll inside a nearby blue crate and see how far they could carry it throughout the game. After Half-Life 2 launched, the community started carrying cratebaby as well, adding their own stories and rules to the mix. A few years later, while working on Episode 2, this was the inspiration for the 'Little Rocket Man' achievement, where players had to carry a garden gnome dubbed 'Gnome Chompski' throughout the entire episode. Chompski even made a comeback for Left 4 Dead 2, in the 'Guardin' Gnome' achievement, where players had to carry him through the Dark Carnival campaign. These kinds of interactions with players are some of the most rewarding parts of game development. We design games with theories in mind, but you never really know where things are going until players get their hands on it. |
cn_134_the_citadel
|
Eric Kirchmer | We wanted the player's first view of City 17 to highlight the Citadel and its dominance over the city, representing the Combine's complete and total power over humanity. It also needed to serve as a clear goal—even at the start, you can see where your journey ends. But despite its massive size, playtesters often missed it. A player's sense of depth and scale can be tricky on a computer monitor. If you've played any games in VR, you'll likely have noticed that scale feels more natural. Anyway, in this case, the Citadel's position above the player didn't help things either, which may be because FPS games train players to focus straight ahead or down at their feet. To improve the chances of players taking in the Citadel, we added birds flying towards the structure, a subtle trick we've found helps guide the player's view. |
cn_135_close_captions
|
Yahn Bernier | During Half-Life 2 development, we were approached by folks interested in accessibility who encouraged us to go beyond the subtitle system we had built for Half-Life 1. They requested a full-featured closed captioning system to capture all game sounds, not just dialogue. This turned out to be a much tougher challenge than expected. Half-Life 2 produced a wide variety of sounds, and categorizing them for captions was crucial. Prioritization was also key: dialogue needed to stand out, and it had to be clear who was speaking. Ambient sounds, like distant combat, were important for setting the mood in some levels—but less so if there was an enemy shooting nearby. Our level designers also often reused sounds in different contexts, so a sound might require captioning in one area but not in another. Ultimately, every sound in the game ended up with its own caption data, formatted almost like a small version of HTML, giving us control over the visual style of each caption. As with the audio files, the caption data was too large to keep in memory, so we built an asynchronous caching system to handle it efficiently. |
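A rough sketch of the prioritization aspect described here: dialogue first, then effects, then nearby ambience, up to a visible-caption cap. The structures, the distance cutoff, and the markup example are assumptions; the shipped system's per-sound data and asynchronous caching are far more involved.

```cpp
#include <cstddef>
#include <string>
#include <vector>

enum class CaptionClass { Dialogue, Effect, Ambient };

struct CaptionRequest {
    std::string text;          // may carry markup tokens, e.g. "<clr:255,170,0>..."
    CaptionClass cls;
    float distanceToListener;
};

// Decide which pending captions actually appear, in priority order, until the
// caption panel is full.
std::vector<CaptionRequest> FilterCaptions(const std::vector<CaptionRequest>& pending,
                                           std::size_t maxVisible) {
    const float ambientCutoff = 900.0f;   // assumed: far-off ambience gets skipped
    const CaptionClass order[] = {CaptionClass::Dialogue, CaptionClass::Effect,
                                  CaptionClass::Ambient};
    std::vector<CaptionRequest> shown;
    for (CaptionClass cls : order) {
        for (const CaptionRequest& c : pending) {
            if (shown.size() >= maxVisible) return shown;
            if (c.cls != cls) continue;
            if (cls == CaptionClass::Ambient && c.distanceToListener > ambientCutoff)
                continue;                 // mood-setting sound, but too far to matter
            shown.push_back(c);
        }
    }
    return shown;
}
```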
cn_136_the_raid
|
David Sawyer | The section ahead, where the Metrocops raid the apartment while the player is inside, took a lot of iteration to get right. Generally, we design Half-Life to let players set their own pace, since people move through the game in a variety of ways. Some players run, ignoring anything that doesn't seem like a threat, while others methodically poke at everything. So, whenever we need to force a specific pace, it's always a challenge to ensure players perceive that pace and naturally play along. Since this is the first time we do this, we had to put in a lot of work to make it clear to the player that they needed to move quickly, and there's no time for exploration here. |
cn_137_materials_and_shaders
|
Gary McTaggart | Half-Life 2 came out in the midst of a significant shift in 3D graphics, where the industry moved from textures to materials and shaders. In Half-Life 1, every surface had a texture—essentially an image wrapped onto the polygons of the surface. But by Half-Life 2, just five years later, every surface had both a material and a shader. The shader was a lump of code that customized how the surface was rendered. The material defined properties for how the surface behaved physically and how it responded to gameplay features. It also held references to textures and other parameters for the shader to use for rendering. As a result, artists and designers had much more control over how things looked and behaved. However, this came with a tradeoff: Half-Life 1 could run on a computer with only a CPU, but Half-Life 2 required a GPU because the computations needed for materials and shaders were specifically designed for GPU hardware. The refracting glass door here is a clear example of a custom shader, but every surface in the scene is actually running a shader of some kind. |
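A toy sketch of the texture-to-material shift described here: the material names a shader, feeds it textures and parameters, and carries the gameplay-facing surface type. The struct and the texture paths are illustrative; the shipped materials are key/value text files parsed by the engine, not C++ structs.

```cpp
#include <string>
#include <unordered_map>

struct Material {
    std::string shaderName;                                   // code that renders the surface
    std::unordered_map<std::string, std::string> textures;    // named texture slots
    std::unordered_map<std::string, float> shaderParams;      // tuning values for the shader
    std::string surfaceProp;                                  // ties into physics, footsteps, bullets
};

// Something like the refracting glass door mentioned above (made-up paths).
Material MakeRefractingGlass() {
    Material m;
    m.shaderName = "Refract";
    m.textures["basetexture"] = "glass/station_glass001";
    m.textures["normalmap"]   = "glass/station_glass001_normal";
    m.shaderParams["refractamount"] = 0.2f;
    m.surfaceProp = "glass";    // shatters and tinks when shot
    return m;
}
```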
cn_138_the_raid_2
|
Miles Estes | In addition to making sure the player felt pressured to move forward, we had to ensure the raid itself was foolproof. There couldn't be any way for the player to avoid it or double back past the Metrocops. The real challenge wasn't that players would try to break the scene, but that many would panic and run in random directions once they realized they were under threat for the first time. This moment requires a delicate balance: we wanted a heart-pounding escape, the sense of real danger, but we wanted players to make it through on their first try and feel like they escaped by the skin of their teeth. To achieve this, we kept the path ahead clear and reminiscent of the one they had just taken through the previous floor. A series of carefully placed triggers controls the threats appearing ahead and behind, maintaining the pressure while still giving players enough time to stay ahead of their pursuers. |
cn_139_health_and_no_hud
|
Dave Riller | The rooftop chase sequence posed a unique design challenge. We didn't want it to be a true gameplay obstacle and our best case was for players to make it through on their first try. But we still wanted to create the sense of real danger, where players felt legitimately threatened if they didn't keep moving. The challenge was that, without the HEV suit, players don't technically have a heads-up display, or HUD—which means no way to show damage or remaining health. After some experimenting, we created a special health system for this section, using a red glow around the screen to indicate damage. Players reacted as we hoped, taking it as a sign to keep moving, and the chase unfolded as intended. To top it off, we made health regenerate quickly, so the glow wouldn't linger, reinforcing to players that running was indeed the right choice and ensuring players weren't taken down by a stray bullet at the end just because they dawdled or hesitated earlier on. |
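A small sketch of the rooftop-chase health treatment described above: damage feeds a red screen-edge glow instead of a HUD readout, and health regenerates quickly so earlier hesitation doesn't get the player killed at the end. All constants here are made up for illustration.

```cpp
#include <algorithm>

struct ChaseHealth {
    float health = 100.0f;
    float glowAlpha = 0.0f;   // 0 = no overlay, 1 = fully red screen edges
};

void TakeDamage(ChaseHealth& s, float damage) {
    s.health = std::max(0.0f, s.health - damage);
    // The overlay strength tracks how hurt the player currently is.
    s.glowAlpha = std::min(1.0f, s.glowAlpha + damage / 50.0f);
}

void UpdateChaseHealth(ChaseHealth& s, float dt) {
    const float regenPerSecond    = 25.0f;  // much faster than normal play
    const float glowFadePerSecond = 1.5f;
    s.health = std::min(100.0f, s.health + regenPerSecond * dt);
    s.glowAlpha = std::max(0.0f, s.glowAlpha - glowFadePerSecond * dt);
}
```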
cn_140_playtesting_anecdote_2
|
Jakob Jungels | In the upcoming section, we needed the player to be knocked down so Alyx could save them. With limited time and resources, we simplified the encounter as much as possible—re-using existing systems and letting it play out like a normal combat sequence, with a custom event triggered when the player 'died.' After a few playtests to fine-tune the timing, we moved on. But as we got close to shipping, we encountered a problem with a playtester who quicksaved constantly and immediately hit quickload whenever they thought they'd failed. When a Metrocop hit them, they quickloaded before Alyx could step in, repeating this more than ten times before we asked them to stop. At this stage of development, in the final bug-fixing phase, we had to limit ourselves to fixing only game-breaking issues—since even small changes could risk breaking something more important. Knowing this was a rare edge case, because most playtesters had no trouble, we made a few conservative changes. We shortened the delay between the knockdown and Alyx's first line, and changed the fade-out color to white instead of black, which wasn't ultimately that useful since players hadn't died enough yet to associate black with death. We considered disabling quickload for the few seconds the scene plays out, but we were worried that could introduce a bug where quickload became disabled permanently. |
cn_141_character_reactions
|
Ariel Diaz | Figuring out what characters should react to—and how—was an ongoing challenge. At the core, we always want the game to respond to player input. However, any response from the game can be seen as a reward by the player. So, what characters react to, and what they ignore, teaches the player about the kind of input the game expects. As a result, we made sure to respond to things we wanted more of, like players tinkering with devices in the Lab, and ignored behaviors we wanted less of, like players repeatedly slapping Kleiner in the face with a box over and over again. Seriously, we watched a playtester do it for five minutes straight. |
cn_142_choreography
|
Bill Fletcher | The system we built and iterated on to deliver Half-Life 2's storytelling was internally called Choreo, short for choreography. Early on, we found it useful to imagine we were writing a play where one of the actors didn't know the script. We needed a system that could control all the other actors, moving them around the stage, delivering their performances, and speaking their lines. To handle this, the other actors would need to seamlessly mix our directorial commands with dynamic responses to the player's position and behavior. Timing was clearly going to be a major challenge, as any part of the scene might need to lengthen, contract, or even pause, depending on what the player was doing. |
cn_143_phoneme_extraction
|
Bill Fletcher | Another critical tool we built into FacePoser was the Phoneme Editor. It processed the voice actors' dialogue lines and extracted phonemes to generate basic lip sync animations for our characters. For the hero characters, our animators used this as a starting point to create high-quality custom animations for each dialogue line, incorporating full facial animation. However, in a dynamic Choreo scene, the animators wouldn't know exactly what the rest of the actor's body would be doing. This meant it required careful layering of Choreo commands and timings in FacePoser, while still maintaining the actor's ability to dynamically respond to the player. |
cn_144_delivery
|
Ken Birdwell | It took a lot of iteration and playtesting to strike the right balance of information and emotional connection in our major Choreo scenes. Each scene had a lot of information we needed to convey to the player, but we couldn't just have characters spouting exposition—players would start thinking of them more as signposts than as people. We needed to take the time to let the characters interact with each other, allowing the player to observe their relationships. At the same time, we had to expand upon the world, giving the player a sense of everything that had happened while they were gone. All of this had to be done carefully, without taking too long—we wanted the game to work for players who were less invested in the narrative. Animation became a powerful tool in communicating the emotional connections between characters, often conveying the depth of a moment quickly without overstating it. Humor was another commonly used tool, helping to puncture a setup and disarm the player when things got a bit too heavy. |
cn_145_alyxs_kiss
|
Ken Birdwell | It's a small moment, but Alyx's kiss on Eli's cheek was important. Eventually, we're going to ask the player to save Eli, and we want them to care about him. To build that emotional connection, we needed to show the warmth and affection between Alyx and Eli. We didn't want to oversell it with dialogue or make it too big of an event—it had to feel genuine, not performative—and we were able to iterate on the subtleties of this moment until it struck just the right note. |
cn_146_storytelling
|
Bill Van Buren | Creating a sense of immersion was a core principle in our approach to Half-Life 1's storytelling, and it led us to the self-imposed constraints of never – or at least very rarely - taking control away from the player and of keeping the player's avatar silent. These limitations made storytelling more challenging at times, but we felt it was important to stay with them for Half-Life 2. Our hope was that, by maintaining these principles, we'd push ourselves to explore deeper, more interactive storytelling. |
cn_147_archetypes
|
Bill Van Buren | We wanted to create narrative links between Half-Life 1 and Half-Life 2 through our characters. Our idea was to design individuals who felt like representations of Half-Life 1's archetypes, so that players might find them familiar—almost as though they'd met them in Half-Life 1 at some point that they couldn't quite remember. We based Dr. Kleiner on the scientist model we felt was the most iconic, and his personality was largely shaped by Hal Robins, whose voice and performances had defined the Half-Life 1 scientists. One day, when we were in the thick of trying to figure out this character, Dr. Kleiner's face came into focus: we happened to share an elevator with an accountant from an office on another floor of our building, and looking at him, it was clear that this was Dr. Kleiner's face. Luckily, he was receptive when we asked if we could photograph him for a video game character. |
cn_148_tone
|
Bill Van Buren | Setting the right tone for our characters and their dialogue was critical. We aimed for a respectful, almost pleasant style of interaction with the player, while avoiding anything that felt too subservient. This balance was especially tricky with Alyx, who needed to support the player without compromising the strength of her character—something we felt was crucial if players were to respect her. Her relationship with Eli was a core driver of the player's motivations, so it was important that players liked her and wanted to help her. We also wanted to contrast the warmth and camaraderie among the player's allies with the hostility of Breen and the Combine, inspiring a sense of righteous indignation in players for the battles ahead. |
cn_149_voice-actors
|
Bill Van Buren | To create more responsive and multi-dimensional characters in Half-Life 2, we needed to evolve our approach to working with voice actors. While running through a script with a modicum of direction was sufficient for much of the NPC dialog for Half-Life 1, for Half-Life 2, we decided to give the actors more of an opportunity to be involved in helping to define the characters and bring them to life. We provided much more story context to the actors, showing them the game world, the environments their characters inhabited, and the other characters they interacted with. We sought actors who could take all that information and synthesize it into their performances to elevate their characters beyond what we had imagined. We held multiple sessions with the main actors, allowing them to digest and build upon their characters over time. For scenes where characters are interacting with each other, we’d often cue an actor with performances from other actors in the scene to create a better sense of connection and flow. Though we’d arrive at the studio with a script and plan for the recording session, we made sure that the actors felt empowered and had the space to bring their own ideas to the character and performances. Having animators attend the voice-over sessions also proved critical, as they were responsible for stitching the character's dialogue together with physical actions and expressions to form the resulting Choreo scene. Often, animators found that their plans for how a scene would play out evolved quite a bit after working with the voice actor. |
cn_150_citizens
|
Bill Van Buren | When it came time to design the human citizens of City 17 and the world outside the city, we chose to include a variety of races, ages, and genders. This not only provided narrative context for the Combine's enslavement of all of humanity; it was also important to us that anyone playing the game could see themselves represented in the Resistance, with everyone on Earth united against the Combine. While we found a number of candidates through our daily lives and connections to use as reference in creating the citizens, we ultimately resorted to advertising in the Seattle Times classifieds to achieve the range of faces that we’d envisioned. |
cn_151_alyx
|
Dhabih Eng | Nailing Alyx's design was a big challenge—we wanted the player to both respect and care for her. Saving Eli was a key player goal midway through the game, and we knew that getting players to care about that goal would be easier if they also cared about Alyx and her relationship with her father. We intentionally designed Alyx, wrote her dialogue, and cast her voice actor to make her savvy, capable, and relatable within the Combine-controlled world, while also making her attractive and charismatic. We aimed to avoid the objectification, exaggeration, and hyper-sexuality that many games of the 2000s featured, as we felt that would undermine our goals for her character. |
cn_152_lab_interactivity
|
Dhabih Eng | In our choreo scenes, we have the player trapped for a bit, allowing us to focus on the characters and narrative. To accommodate different players—some deeply invested in the story, others unable to stand still for more than a second—we felt like we needed a constant stream of interesting details on screen. We scattered narrative elements throughout the environment, ensuring some were interactive, essentially toys for the player to tinker with. In some cases, we embedded further storytelling into these, like the cactus mini-teleporter hinting at the upcoming teleporter failure. It was a delicate balance—too much visual interest could distract players from the core of the scene. Ultimately, Kleiner's Lab played it safe with interactivity, as it was the first major choreo scene. Later, in Eli's Lab, we explored it more deeply, ensuring the characters themselves paid attention to what players were tinkering with. |
cn_153_gestures
|
Doug Wood | Each of our characters had a library of simple animations called Gestures, which we could easily layer on top of other body movements. While originally planned as a production-saving measure to let us reuse animations throughout the game, we found they actually had more utility than that. Humans often repeat certain movements, and these can be markers of their specific personality. As a result, our Gestures library ended up capturing the essence of each character, helping to unify our animation team's understanding of who those characters were. This made it much easier for our animators to share any one character's workload among themselves, which production often required. |
cn_154_interactive_storytelling
|
Erik Johnson | When we started on Half-Life 2, storytelling in games was generally either high fidelity and linear or low fidelity and interactive. We knew players had seen high-quality storytelling in games, but they'd never been placed right in the middle of it—able to watch it unfold around them, interact with it, and examine any part closely. While that goal excited us, we didn't yet know how it would work or what kind of technology we'd need to make it happen. We had built a variety of storytelling technology in Half-Life 1, but it was mostly presented in front of the player and lacked the interactivity and fidelity we were aiming for. So, Kleiner's Lab became the section of the game we used as a test bed to figure out how our interactive storytelling would actually function. |
cn_155_monitors
|
Ido Magal | We use monitors a lot for storytelling in Half-Life 2 and the episodes, so much so that we even make fun of ourselves for it. The problem is they're just too damn useful. They're fantastic for expanding the world by showing events happening elsewhere, and they provide an easy way to have characters talk without needing to explain how they all get to the player's location. Anyone who has ever written any kind of story, especially one with an ensemble cast, understands the unique frustration of trying to create natural reasons for all of your characters to come together in the same place. In this case, we were really happy with how the monitor allowed Alyx to teleport away and then quickly appear in Eli's Lab, making everything feel real and connected. |
cn_156_scene_interactivity
|
Jeff Lane | At certain points in our choreo scenes, we require the player to take action. Beyond simply giving them something to do and keeping them engaged, these moments served a few key purposes. First, they allowed the player to catch up if they'd been distracted by something else in the environment. The scene could safely 'pause' until the player completed the task, ensuring they were back and paying attention. At the same time, it let us position the player where we needed them, which was useful for framing the scene and making sure they saw something important. In some cases, we got double value out of these moments by using them to train the player—here, they're learning to interact with plugs and sockets, a skill they'll need for puzzles later in the game. |
cn_157_kleiners_lab
|
Marc Laidlaw | Kleiner's Lab was the first piece of storytelling we worked on, and the one that took the longest to figure out. In addition to its role in launching the player into the game, it eventually became the unifying vision for the development team in terms of storytelling. As we began building our characters and narrative delivery around the player, many questions arose, all of which felt like major risks for the game. Would players be able to extract the important information from the scene? Could we get them to care about these characters? How would we support players with varying levels of investment in the story? How costly would it be to produce a scene like this? And how many more would we need? These, and many more, were questions we couldn't answer until we'd built enough of the scene and tested it with a lot of playtesters. Frustratingly, unlike many other problems we face, these weren't questions we could confidently answer until we'd done significant work. We can often test gameplay ideas with low-fidelity prototypes, but if a player isn't invested in our storytelling when the characters are just grey blocks, how confident can we be that they'll care once the characters are fully realized? |
cn_158_characters
|
Marc Laidlaw | In Half-Life 1, security guards and scientists could best be described as archetypes, not individuals. They were abstractions of characters that matched the visual fidelity we could achieve at the time. But as our technology improved with Half-Life 2, we saw an opportunity to move beyond abstractions. The increased visual fidelity of the character models, combined with improvements in animation, made us believe we could focus on creating individual characters: people you would actually care about. We allowed ourselves to write real dialogue and have characters deliver it in a way that felt realistic enough that players wouldn't think of them as pawns in a video game and that would, hopefully, propel our narrative forward. In 2001, a couple of years into development, we were energized by Japan Studio's Ico. Directed by an animator, it blew us away with its reliance on characters as the key connection to the player, giving us the courage to keep pushing forward. |
cn_159_writing
|
Marc Laidlaw | In the Half-Life 1 era, player goals in FPS games were still largely about fighting and defeating an obviously evil enemy. It was hard to get players to care about characters when they were as low fidelity as they were in Half-Life 1. With the planned improvements in character fidelity for Half-Life 2, we wanted to take a shot at making characters the driving force of the narrative. Our goal was for the player's motivations to align with those of our characters—for the player to care about these people, their hopes and fears, and to want them to succeed. With this in mind, we were able to assemble our approach to character and dialogue. We would create a set of characters who liked and cared about each other, hoping that the player would feel the same. These characters would interact in a way that felt inclusive, like a group of old friends who see the player as one of them, just someone who's been away for a while. If the characters demonstrated realistic emotional connections with each other, we believed we could build an emotional connection with the player as well. |
cn_160_animation
|
Miles Estes | Constructing believable human motion for our characters out of many discrete layers of animation required a lot of technological iteration. Even simple actions like turning to look at something or walking to a location became more complex when combined with the need to dynamically respond to the player and the requirements of Choreo commands. For example, turning to look at something had to be fully decomposed, allowing a character to use any combination of its eyes, head, shoulders, upper torso, or full body to face a target. In Half-Life 1, human characters had their bodies separated into upper and lower halves, allowing them to walk in one direction while facing another. However, this approach was too abstract for Half-Life 2's characters. Instead, our animators created walking animations in all eight directions of the compass, which could then be smoothly blended together at runtime. |
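As a quick illustration of the eight-direction walk blending described here, the following is a small C++ sketch. It only computes which two adjacent compass-direction walk cycles to mix and with what weight; the names and structure are assumptions for the example, not the shipped animation code.

```cpp
// Sketch: pick the two nearest 45-degree walk cycles and a blend weight
// from the character's movement direction relative to its facing.
#include <cmath>
#include <cstdio>

struct DirectionalBlend {
    int   animA;     // index of the nearest compass-direction walk cycle (0..7)
    int   animB;     // index of the adjacent walk cycle
    float weightB;   // how much of animB to mix in (0..1); animA gets 1 - weightB
};

// moveYaw: direction of travel relative to the character's facing, in degrees.
DirectionalBlend BlendEightWayWalk(float moveYaw) {
    float yaw  = std::fmod(std::fmod(moveYaw, 360.0f) + 360.0f, 360.0f);  // wrap into [0, 360)
    float slot = yaw / 45.0f;            // 8 compass directions, 45 degrees apart
    int a = static_cast<int>(slot) % 8;  // direction just "below" the movement angle
    int b = (a + 1) % 8;                 // adjacent direction just "above" it
    return { a, b, slot - static_cast<float>(a) };
}

int main() {
    // Moving 30 degrees off facing: mostly cycle 0, partly cycle 1.
    DirectionalBlend blend = BlendEightWayWalk(30.0f);
    std::printf("anim %d + anim %d (weight %.2f)\n", blend.animA, blend.animB, blend.weightB);
    return 0;
}
```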
cn_161_faceposer
|
Yahn Bernier | The primary tool we built to power the Choreo system was called FacePoser, an inaccurately named program that allowed our animators to lay out the entire structure of our scenes along a timeline. The timeline contained a series of Choreo commands for the actors, along with all the necessary information for them to perform those commands. By layering these commands on the timeline, animators could easily describe how an actor might need to face one thing while talking towards another, and walk to a different location—all at the same time. But the timeline wasn't a fixed, linear sequence. Actors would follow their commands, but they had to respond dynamically to the world around them. An actor might take longer to move to a location if the player is in the way, or they might need to wait to deliver a line because another actor has been delayed by the player. FacePoser allowed us to rapidly iterate on scenes, trying different layouts, dialogue deliveries, and timings. This ability to quickly experiment and adjust proved critical in finding the right performances to achieve our character and narrative goals. |
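To make the idea of a layered timeline concrete, here is a minimal C++ sketch of a choreographed scene as overlapping, timed commands per actor. The data layout, event names, and the example asset/target names are illustrative assumptions, not the actual Choreo or FacePoser file format.

```cpp
// Sketch: a scene is a list of per-actor commands with start times that can
// overlap, so an actor can look, move, and speak at the same time.
#include <string>
#include <vector>

enum class ChoreoEventType { Speak, LookAt, MoveTo, Gesture };

struct ChoreoEvent {
    ChoreoEventType type;
    std::string     actor;       // which actor performs the command
    std::string     parameter;   // dialogue asset, look target, destination, etc.
    float           startTime;   // seconds from scene start (may stretch at runtime)
    float           duration;
};

struct ChoreoScene {
    std::string              name;
    std::vector<ChoreoEvent> events;  // multiple events can overlap on the same actor
};

// Hypothetical example: an actor talks while looking at the player and walking
// to a marker. The target and sound names are made up for illustration.
ChoreoScene BuildExampleScene() {
    return {
        "example_scene",
        {
            { ChoreoEventType::LookAt, "kleiner", "player_target",   0.0f, 6.0f },
            { ChoreoEventType::MoveTo, "kleiner", "console_mark",    0.5f, 3.0f },
            { ChoreoEventType::Speak,  "kleiner", "kl_greeting.wav", 1.0f, 4.0f },
        }
    };
}
```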
cn_162_choreo_commands
|
Yahn Bernier | This debug visualization shows the two most common Choreo commands actors receive throughout a scene, internally known as look-ats and move-tos. Actors are given targets to look at—sometimes more than one at a time—and then try to focus on the one that makes the most sense given their current state. At the same time, they might be receiving move-to commands, which tell them to move to a location and face a specific direction or object. Even these seemingly simple commands involved a lot of complexity. A scene might require an actor to talk while typing on a keyboard, all while looking at an object the player is holding. Since we couldn't predict where the player would be during playback, authoring scenes like this required us to break each element of the scene into separate pieces and layers. These could then be reassembled as needed during playback, based on the player's actions, and interrupted if necessary. |
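The following is a small C++ sketch of one way to resolve several simultaneous look-at targets: keep the highest-priority target the actor can plausibly turn toward, and skip the rest. It is a simplified illustration under assumed names, not the engine's actual selection logic.

```cpp
// Sketch: choose among multiple active look-at commands by priority,
// discarding targets outside the actor's allowed turn range.
#include <cmath>
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };

struct LookTargetCmd {
    Vec3  position;
    int   priority;        // higher wins when several targets are active
    float maxTurnDegrees;  // how far off the current facing the actor may turn
};

// Absolute yaw difference (degrees) between the actor's facing and the target.
static float YawTo(const Vec3& from, float facingYawDeg, const Vec3& to) {
    float desired = std::atan2(to.y - from.y, to.x - from.x) * 180.0f / 3.14159265f;
    float delta   = std::fmod(desired - facingYawDeg + 540.0f, 360.0f) - 180.0f;
    return std::fabs(delta);
}

std::optional<LookTargetCmd> PickLookTarget(const Vec3& actorPos, float facingYawDeg,
                                            const std::vector<LookTargetCmd>& targets) {
    std::optional<LookTargetCmd> best;
    for (const LookTargetCmd& t : targets) {
        if (YawTo(actorPos, facingYawDeg, t.position) > t.maxTurnDegrees)
            continue;  // target is outside the actor's allowed turn range; skip it
        if (!best || t.priority > best->priority)
            best = t;
    }
    return best;  // empty if no target is currently usable
}
```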
cn_163_antlions
|
Kerry Davis | With the player driving the buggy throughout the Coast, we needed a pervasive enemy that could justifiably appear anywhere and act as a constant threat to keep the player in or near their buggy. Antlions spawning from the sand were the perfect solution. Their high mobility allowed them to keep up with the buggy for short bursts, unlike our humanoid enemies. Plus, they were very satisfying to run over. |
cn_164_thumpers
|
Adrian Finol | Whenever possible, there's a lot of value in players feeling like they've figured something out on their own. It makes the world feel more alive, rather than just a series of gameplay elements lined up for the player. If we can accept the risk that some percentage of players might miss an element entirely, we can take a more subtle approach. We took this approach with Thumpers, designing setups that players are likely to encounter but aren't forced to engage with. There are plenty of visual and audio cues, as well as antlion reactions, to help the player understand what's happening. It doesn't guarantee all players will figure it out, but playtesting showed that most did. |
cn_165_d2_coast_09
|
Aaron Seeler | Late in Coast development, we reached a point where our game code was far enough ahead of our level design that our gameplay programmers were looking for more code to write and new tasks that could help Half-Life 2 ship. This was largely due to the demands of Coast's levels, which placed a heavy burden on our level designers with their sheer scale and extensive terrain usage. As a result, our gameplay programmers began designing smaller-scale encounters. Later, when our level designers caught up, they created this level by stitching together three separate encounters from the programmers and performing all the necessary finishing work to prepare it for release. |
cn_166_final_scene
|
Bill Van Buren | Locking the player in a pod for the final confrontation in Breen’s office was a bit of a compromise, but it allowed for a focus on the characters that we really couldn’t provide elsewhere. All of the characters’ arcs reach their conclusions in this scene, and constraining the player to this point of view creates the opportunity for a real emotional payoff: a climactic scene powered by the combination of dialogue, voice performances, and animation. The scene was a challenge to pull off from a production standpoint: setting up the right blocking and motion, and getting effective timing, emotional synergy, and connection from individual animations and vocal performances that were recorded separately. It took a lot of tuning before the scene came together dramatically and really felt like all of the characters were actually connecting. |
cn_167_end_game_scramble
|
Dave Riller | There was originally a significantly larger choreography scene slated for the end of the game. However, the choreo team was very busy with the huge amount of storytelling work remaining throughout the game. To take some work off their plate, the Citadel gameplay cabal was asked to instead create a gameplay challenge to wrap up the game. It could be a puzzle, combat, anything. The resulting Breen boss battle leveraged existing super gravity gun game mechanics and a small amount of custom choreography to create a finished product in a very short amount of time. |
cn_168_combine_portal_effects
|
Gary McTaggart | A key part of our development process is the way that any team member can contribute their skill to any part of the game. Once the final battle gameplay was solid, artists and visual effects programmers helped elevate the presentation of the portal to the Combine homeworld and the stopped time effect at the very end of the game. A refractive shader similar to what we used for water was used for the Combine portal effect. |
cn_169_citadel_entrance
|
Jeff Lane | We built the route into the Citadel backwards: the interior geometry was created as an art prototype to perfect the look. We knew you were coming from the sewer pipe in our Streetwar map and had to create a section of slides and jumps into the Citadel entrance to connect them. |
cn_170_citadel_steep_landing
|
Randy Lundeen | The Citadel was built very late in development, and initially, we had no clear vision for its gameplay. This isn't uncommon—while game design usually drives our process, big creative choices we make early on, such as the existence of the Combine Citadel, need specific design work done to fit into the final product. After the previous chapter's climactic battle, we thought a different tone for the finale here in the Citadel would feel more rewarding. Even if we'd wanted to raise the stakes, creating new mechanics and NPCs was too risky this late. Our first plan was to make it a passive narrative spectacle—a train ride through the heart of the Combine war machine, ending in Breen's office. But after we developed that plan, we spent a few days prototyping super gravity gun mechanics, thinking that at the game's end, we could fulfill players' wishes by making them all-powerful without disrupting the game design's overall balance. The results were immediately promising, leading to a quick redesign to incorporate the super gravity gun. Rather than traditional end-game challenges, we wanted the player to feel godlike—grabbing enemies at will, ripping consoles off walls. In the end, the Citadel blends these two concepts, alternating between spectacle and unleashed power. |
cn_172_lightmap_pass
|
Mike Dussault | The Citadel interior is relatively simple geometrically, but the Combine metal material and the dramatic lighting help sell the immensity of the structure. Lightmap resolution was increased on important or visually interesting surfaces and reduced on ones that were fully shadowed or distant. Due to the large size of the Citadel spaces, unnecessarily dense lightmaps carried a significant performance cost. Near the end of development, we did a large pass throughout the Citadel levels to optimize these values for visual interest and performance. This was made easier by a new feature added to the Hammer level editor that displayed surfaces with their luxel density. |
cn_173_super_gravity_origins
|
Brian Jacobson | By this point in development, we were confident in the design of the gravity gun. We wanted to give the player a taste of feeling omnipotent and set to work on making a super-charged version. It didn't take long—truly, maybe a day of prototyping—before we had something we liked, and it became the core of the Citadel's gameplay. With our physics engine complete, we were free to find new and exciting ways to push it past its previous limits and see what fun results came of it. Our prototype object—an ultra-heavy watermelon that weighed hundreds of pounds in our physics sim—ultimately became the Combine balls you experience in the game. |
cn_174_combine_energy
|
Brian Jacobson | What started life as a watermelon in prototype form ended up turning into an energy ball, after the sounds made for its impact and explosion sounded less like produce and more like what we have here in the game. Once we started iterating on levels that weren't watermelon-based, we realized we could use the energy ball not just as a fun way to kill soldiers, but also as a puzzle element for powering various equipment up and down. Significantly later in development, we weren't happy with the AR2 alt-fire we had at the time, and since the team liked the Combine ball, we ended up incorporating it as the AR2's alt-fire. Looking back, we're always surprised at how things that were working independently of one another in the game came together late to solve problems. |
cn_175_combine_wall_and_balls_training
|
Ido Magal | In playtests, players were not noticing that the Combine balls dissolved the soldiers, or understanding why they were disappearing. We initially tried to solve this by making the balls prefer to bounce towards enemies, to increase the likelihood of dissolving kills. We eventually realized that the reason players didn't notice the effect was that it always happened during intense combat, and people have a very limited ability to learn new things when under stress. As a late remedy, we inserted a low-pressure training room with a forcefield that can only be opened by grabbing a Combine ball. This setup was added specifically to teach that Combine balls deactivate forcefields and that punting them at soldiers dissolves them. Even then, in playtests it was only moderately successful. Despite our efforts, some players still struggled to perceive the dissolving effect of the projectiles. We ran out of time and decided that, since players were still telling us they felt powerful, it was okay if not all of them perceived why they were succeeding. |
cn_176_weapon_strip
|
Josh Weier | Once we introduced the supergravity gun, we really didn't want players returning to their conventional weapons, so we did the only rational thing: we disintegrated them. It helps to have an inscrutable supernatural enemy like the Combine as the antagonist of your game because it stands to reason that they could simply just have a disintegrator. Why not? And if for some reason we had needed you to have your loadout back in the middle of the level, wouldn't you know it, they also have a reintegrator. |
cn_177_breen_monologue
|
Marc Laidlaw | Creative work is never done in a vacuum, as we're all inspired by other things. Breen's monologue that appears on the Citadel monitors here got some inspiration from an earlier game we all loved, Thief 2, in which the antagonist taunts the player while they're sneaking around. The execution is quite different, but the core inspiration is there. The real heavy lifting, however, comes from the performance by the late Robert Culp; it's what really sells this moment. His delivery was pitch perfect. He interpreted each of Breen's lines with a subtlety and flair that transformed the character on the page into something truly special. A voice session with Robert Culp was as easy as they come: he'd have the script, the engineer would hit record, and that was about it. The lesson reinforced here is to work with people whose talent exceeds yours, people you're certain to learn from. Don't go in with a fixed idea of how a line should be delivered or how a certain moment should play out; instead, collaborate with talented people and trust that they'll make it even better. |
cn_178_dissolved_weapons
|
Quintin Doroquez | All of the Combine's weapons were set to dissolve after the Combine soldier was ragdolled with the gravity gun, for two reasons: first, so players wouldn't pick them up and use them, and second, to improve performance in these maps. Additionally, a ragdoll manager was added to remove fallen soldiers' ragdoll corpses as the player progresses forward. |
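As a rough illustration of the ragdoll-manager idea mentioned here, this is a minimal C++ sketch that keeps only the most recent ragdolls alive and removes the oldest as new ones appear. The class, the cap value, and the logging are assumptions for the example, not the game's actual implementation.

```cpp
// Sketch: cap the number of live ragdolls; the oldest (the ones the player
// has most likely moved past) are removed first.
#include <cstdio>
#include <deque>

struct Ragdoll { int entityId; };

class RagdollManager {
public:
    explicit RagdollManager(size_t maxRagdolls) : maxRagdolls_(maxRagdolls) {}

    void OnRagdollCreated(Ragdoll r) {
        ragdolls_.push_back(r);
        while (ragdolls_.size() > maxRagdolls_) {
            std::printf("removing ragdoll %d\n", ragdolls_.front().entityId);
            ragdolls_.pop_front();
        }
    }

private:
    size_t              maxRagdolls_;
    std::deque<Ragdoll> ragdolls_;
};

int main() {
    RagdollManager mgr(2);             // assumed cap, for illustration only
    for (int id = 1; id <= 4; ++id)
        mgr.OnRagdollCreated({ id });  // ragdolls 1 and 2 get removed
    return 0;
}
```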
cn_179_elevator_fight
|
Brian Jacobson | Even though we wanted the player to feel all-powerful, we couldn't resist the temptation of adding a tough fight at the elevator shaft. The battle while waiting for the elevator to arrive here ended up being somewhat more difficult than originally envisioned for this part of the game. It became easier during playtesting, but there really wasn't enough playtesting time to balance it to the level it should have been. Though the player is never in that much danger, the sheer number of soldiers that appear makes it feel like more of a climactic battle than it is. |
cn_180_replacing_the_prefabs
|
Jeff Lane | The Combine energy balls are a relatively simple setup, with a force field holding the perpetually bouncing objects. However, during playtesting, there were repeated bugs with the energy balls bouncing around, going rogue, and 'escaping' the forcefield we set up to contain them. Due to the limits of our prefab tech at the time, this required them to be manually rebuilt multiple times, at every one of their locations, in all of the Citadel levels that included them. |
cn_181_strider_fight
|
Ted Backman | One of the original Citadel prototypes included being able to grab the strider with the super gravity gun, and for a long time, the fight with the strider was about getting close enough to grab it with the gun. Toward the end of development, we decided that grabbing the strider made striders feel too weak, especially after all the combat against them during Street War, so we replaced the encounter with one where you kill the strider with Combine balls. Model pieces were added so the strider could be gibbed and blown apart instead of its body being dragged around. This is the only time in the game a strider breaks apart like this. |
cn_182_getting_in_the_pod
|
Dhabih Eng | The pod ascension sequence that starts here necessitated taking full control of the scene and the player view. This is contrary to our usual desire to have the player retain as much control as possible, but for this final step, we felt the tradeoff was worth it. The player is so powerful at this point that controlling them was difficult, so we had to disable them. We still have all the characters in the scene talking to the player, keeping them the center of attention, and we were also able to provide some narration through the environment along the way. The train ride through the Citadel starting here contains several adversaries that we had art content for but that were cut from the main part of the game. From the safety of your cozy pod, you can see the processing of Stalkers and a quick glimpse of the Crab Synth. |
cn_183_turrets
|
Mike Dussault | Turrets were a useful design tool when the player had bugbait, as they could completely prevent antlions from approaching them head-on. This allowed us to create setups where the player couldn't just throw bugbait to proceed—they needed to circle around and clear the turrets so their antlions could follow. It didn't hurt that watching antlions get mowed down by turret fire was a great visual, too. |
cn_184_grenade_puzzle
|
Aaron Seeler | This small puzzle often caused players to get stuck during playtests because the unlimited grenade box was placed a bit behind them on their path. By this point in the game, players have fully learned to associate unlimited ammo boxes with puzzle solutions, so if a grenade box were here, they'd immediately know they needed a grenade to solve it. We wanted them to have to think a little first. Eventually, we added a subtle hint by making grenades visible on the desk through a window, just to remind players they're available. They also helped uncover a bug in a playtest when a player approached to look more closely and ended up collecting the grenades through the glass. |
cn_185_allied_turrets_balancing
|
Adrian Finol | Balancing this fight around the allied turrets was challenging. The turrets are the novel element here, so if players ignore them, the gameplay feels redundant because so much of it is similar to what's come before. We wanted to require turret usage and reward strategic placement, but it was highly binary—perfect placement made the fight easy, while poor placement made it very difficult. Tuning and custom soldier AI logic helped; soldiers became better at toppling turrets by throwing grenades or flanking to knock them down. These changes made the fight less about a perfect setup and more about actively maintaining turrets, as any setup would be disrupted at some point. Playtests varied widely, with some players finding it too hard and others calling it their favorite fight. Running out of time, we ultimately made it a bit easier and shipped it, though we were unsatisfied with the final balance. |
cn_186_meeting_mossman
|
Ariel Diaz | Reuniting the player and Alyx with Mossman after the betrayal scene was tricky, as many players felt they should shoot Mossman on sight. We couldn't allow this, as we didn't have time to develop alternate story branches. However, we also didn't want to highlight that we were preventing this choice, so we aimed to reduce the likelihood of players trying it. The bulletproof glass here buys time for dialog between Alyx and Mossman, during which it becomes clear that Alyx doesn't think shooting Mossman is the right thing to do. |
cn_187_eli_in_pod_choreography
|
Ken Birdwell | We wrote and recorded dialog for this scene while still building out Nova Prospekt. At that stage, we knew Alyx and Gordon would teleport out of the prison, but none of the specifics had been worked out, so the dialog needed to be broad enough to fit whatever we ultimately built. This led us to focus less on the player's actions and more on Alyx's vulnerability and her connection to her father. We hoped the emotion would carry the scene more than the narrative itself. Ultimately, we're aiming for the player to care about these characters—to want to prioritize what's important to Alyx. |
cn_188_mossman_and_eli
|
Bill Van Buren | This choreo scene required significant iteration. We wanted to deliver a plot twist in the middle of the scene while keeping player control unrestricted. It required a delicate balance to make that twist clear, yet still make sure that it was uninterruptible. If players followed Alyx to the console, they needed to understand what was happening when they turned around. If they stayed back and watched Mossman, they needed to understand what was happening without feeling it was something they should try to prevent. It required extensive tuning and playtesting to get the timing right, but ultimately we employed a combination of distraction, delivering the crucial moment quickly, and then finally providing an explanation after the event. |
cn_189_nova_prospekt_entry
|
Danika Rogers | The player has just completed two combat-heavy levels, so we wanted to slow things down once they got inside. These early levels were designed as a pacing change, allowing players to explore the layout of Nova Prospekt's interiors. Along the way, we began to introduce the turret gameplay, first as an enemy to the player and later as a tool they can use. Much of this level was built by reusing prototype and art spaces created earlier in the project, when we were focused on shaping the look of Nova Prospekt. |
cn_190_turret_training_exit
|
Doug Wood | This pile of trash, along with the cluttered descent down the stairs, is designed to discourage players from bringing a turret with them. While it wouldn't break anything if they did, we wanted to make it less likely. Making it inconvenient to keep the turret isn't an ideal design strategy, but it was better than nothing. Many playtesters eventually gave up and tossed the turret aside once they felt the game was trying to tell them something. |
cn_191_allied_antlions
|
Eric Kirchmer | Having an endlessly respawning antlion army attacking turrets and soldiers meant we had to spend time figuring out the player's role in Nova Prospekt's combat. The player had already faced plenty of Combine soldiers, so we decided to make combat here feel more like a puzzle and a spectacle instead of a combat challenge. With that in mind, we created setups that remained in a steady state until the player intervened. By directly killing enemies, the player could tip the balance in favor of the antlions—but we wanted to make bugbait usage especially rewarding. In many scenarios, when the player threw bugbait onto enemies, we activated additional antlion spawners, unleashing a rapid rush of new antlions into the fight. The resulting carnage was the reward that made bugbait satisfying to use. |
cn_192_laser_mines
|
Erik Johnson | We really enjoyed bringing these laser tripmines into Nova Prospekt. They were a weapon from Half-Life 1 that we hadn't managed to include in Half-Life 2, but at least they found a place as a puzzle element. We liked how they served as a kind of personal morality test when the player had an antlion army. Some players immediately threw bugbait past the tripmines, sending antlion bodies at the problem, while others held their little buddies back until they'd safely removed the tripmines with other tools. We didn't judge. |
cn_193_final_turret_arena
|
Erik Johnson | When building the arc of a gameplay element, like allied turret usage, we often build a prototype that represents the arc's endpoint first. We iterate on that prototype until the playtests show that it works. Then, we work backward, adding preceding scenarios to introduce the element and build player skill. This approach ensures the final arc is engaging before we invest substantial time. It also means that, by the time we create training scenarios, we've observed enough playtests to know which skills need guidance and which players will grasp naturally. This final turret scenario was our goal, emphasizing the player's defense against enemies from multiple directions, requiring dynamic turret placement and recovery. |
cn_194_one_way_drops
|
Jakob Jungels | Players never know what's ahead of them, but they're familiar with what's behind them, so it's common for them to retreat to familiar ground when they encounter an enemy. If we've built an arena where we want the player to engage—especially for an enemy with specific geometry requirements, like the Antlion Guard—we need a way to prevent that retreat. A common solution is to require the player to enter the arena through a one-way drop, something you've likely noticed a few times already. |
cn_195_npcs_on_elevators
|
Jay Stelly | We often spend a significant amount of time solving problems that players might not even notice. Many of these are subtle design challenges, while others stem from multiple systems interacting with each other. For example, getting Non-Player Characters to ride in elevators posed a complex problem because it involved various systems—NPC AI decision-making, navigation planning, and movement simulation, to name a few. Additionally, the elevator itself had its own simulation requirements, unrelated to NPCs. One of the more subtle challenges came from the fact that we simulate NPCs at a slower rate than we simulate elevators. This meant that an NPC on a descending elevator would simulate—to make sure it was properly standing on the elevator—and then the elevator would simulate a few times, each time moving further down the elevator shaft, subtly leaving the NPC behind. Shortly after that, the NPC would simulate again and pop itself down onto the elevator floor once more. You can imagine that it was hard to take Alyx seriously when she was constantly falling down in small increments while talking to you. |
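A toy C++ program can show the rate mismatch being described: the elevator updates every tick, the NPC only every few ticks, so a visible gap opens until the NPC's next think snaps it back onto the platform. All of the numbers and names below are assumptions for illustration only.

```cpp
// Toy illustration of the simulation-rate mismatch between an elevator and
// an NPC riding it.
#include <cstdio>

int main() {
    const float tick = 0.05f;          // assumed elevator simulation step, seconds
    const int   npcEveryNTicks = 3;    // assumed slower NPC think rate
    const float elevatorSpeed = 1.0f;  // metres per second, descending

    float elevatorZ = 10.0f;
    float npcZ      = 10.0f;

    for (int t = 1; t <= 6; ++t) {
        elevatorZ -= elevatorSpeed * tick;  // elevator simulates every tick
        if (t % npcEveryNTicks == 0)
            npcZ = elevatorZ;               // NPC think: snap back down onto the floor
        std::printf("tick %d: elevator %.2f, npc %.2f (gap %.2f)\n",
                    t, elevatorZ, npcZ, npcZ - elevatorZ);
    }
    return 0;
}
```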
cn_196_looktargets
|
John Cook | One of the key tools in our design toolkit is the Looktarget—an invisible point in space that activates when the player looks at it. Designers can specify how accurately the player has to look at it, the duration of the look, and other conditions. Once activated, the Looktarget triggers various events in the game. For example, Alyx might wait for the player to look at her before delivering a line of dialog, a spectacle might begin only when it's in view, or a Combine dropship might fly over when the player is looking in its general direction. Once we implemented Looktargets, designers found endless ways to use them to make gameplay feel more reliable for players, especially since players move at very different speeds and in difficult-to-predict ways. |
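To sketch how such a trigger might be checked each frame, here is a minimal C++ example: the "how accurately the player has to look" condition becomes a dot-product cone test, and the "duration" condition becomes accumulated look time. The class and parameters are illustrative assumptions, not the entity's actual implementation.

```cpp
// Sketch of a Looktarget-style trigger: fire once the player's view stays
// within a cone around the target for a required amount of time.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Normalize(Vec3 v)   { float l = std::sqrt(Dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

class LookTarget {
public:
    LookTarget(Vec3 pos, float coneDot, float requiredSeconds)
        : pos_(pos), coneDot_(coneDot), requiredSeconds_(requiredSeconds) {}

    // Call every frame; returns true on the frame the target activates.
    bool Update(Vec3 eyePos, Vec3 viewDir, float dt) {
        if (fired_) return false;
        Vec3 toTarget = Normalize(Sub(pos_, eyePos));
        // "How accurately" maps to a dot-product threshold; reset when the player looks away.
        lookedSeconds_ = (Dot(Normalize(viewDir), toTarget) >= coneDot_) ? lookedSeconds_ + dt : 0.0f;
        if (lookedSeconds_ >= requiredSeconds_) { fired_ = true; return true; }
        return false;
    }

private:
    Vec3  pos_;
    float coneDot_;          // e.g. 0.95, roughly an 18-degree cone
    float requiredSeconds_;
    float lookedSeconds_ = 0.0f;
    bool  fired_ = false;
};
```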
cn_197_nova_prospekt_yard
|
John Morello | The combat throughout the Nova Prospekt yard was largely inspired by our early observations of antlions fighting Combine soldiers. The antlions' high mobility and melee attacks paired well with the soldiers' ranged lethality and their vulnerability up close. Antlions look great under fire, with particle effects flying everywhere, and when they finally reach a soldier, the resulting ragdoll effect is incredibly satisfying. |
cn_198_vent
|
John Morello | You probably know this, but if there's a vent, there has to be a headcrab in it. We don't make these rules; they, like the vents themselves, come straight from the top. |
cn_199_kitchen
|
Kerry Davis | It's a well-known custom among game developers that all video game kitchens must feature a gas explosion. |
cn_200_alyxs_tool
|
Laura Dubuk | Alyx's multi-tool was incredibly useful for letting us use her as a key. Often, we needed to keep the player from moving forward until we were ready, and her multi-tool allowed her to deliver dialog while unlocking the way ahead. We probably relied on it a bit too much, but it was hard to resist. There's always a tradeoff—using Alyx to gate a room, instead of creating a custom solution, saved time that we could spend on other parts of the game, often more critical ones. |
cn_201_mossmans_betrayal
|
Marc Laidlaw | Our original plan for Judith Mossman's arc was for her to betray the player around this point. But by the time we reached this scene, having developed other parts of the game, we felt she was more likely just trying to protect Eli. We envisioned her reconsidering, realizing she couldn't actually betray Gordon. We had ideas to clarify this shift later, but ultimately, we never finished her arc. We liked the ambiguity in Mossman, but players didn't seem to—they almost universally hated her. That might be a success, though; we worked hard to build the player's affinity for Alyx, and Alyx wasn't exactly a fan of Judith, either. |
cn_202_d2_prison_05_start
|
Yahn Bernier | With the player able to move a large number of objects, we often ran into challenges, especially given the limitations of 2003's CPUs. One limitation was that our AI navigation system wasn't aware of dynamic physics objects, like these metal beds. While our NPC movement system could detect and avoid physics objects while moving, they were essentially invisible during navigation queries for strategic planning before movement. This was primarily due to performance limitations—we couldn't afford anything more complex. We also faced design challenges without clear solutions, like how a soldier should respond if a player completely blocked a corridor with large physics objects. To address this, we gave NPCs additional strength to push physics objects around. We then mitigated the issue through level design, widening corridors and ensuring enough empty space to handle props landing anywhere within them. This setup is a high-risk one: the combination of soldiers hiding behind beds and antlions charging for melee attacks creates a good chance of awkward interactions between the NPC movement system and the beds. |
cn_203_alyx_as_companion
|
Miles Estes | This section became an unintentional prototype for what Episode 1 would eventually feature: Alyx as a companion, moving smoothly along a path that blends dynamic combat with scripted choreography. We didn't yet have a clear model for how she would fit alongside the player in these scenarios, but even this brief section taught us a lot, despite not having enough time to iterate on it. |
cn_204_laundry
|
Mike Dussault | By the time the player reaches this level, known as Laundry, we're confident they're comfortable using bugbait to solve combat scenarios. So, we wanted to raise the stakes a bit by creating a scenario that encourages the player to get even more involved, combining bugbait with their full set of weapons. |
cn_205_side_zombie
|
Quintin Doroquez | This poison zombie was always a highlight in playtests. Players, so focused on Combine soldiers, antlions, and turrets, hadn't seen a poison zombie in quite some time, so its sudden appearance in the dark often created a moment of shock. |
cn_206_flares
|
Quintin Doroquez | This section is a nice counterpoint to the previous turret segment in terms of production cost and iteration. Built very late, it came together quickly as the AI for Alyx and the Combine Soldier was already mature. We liked the darkness and flares and filed it away as something to explore further. Later, we revisited it when planning the dark sections in Episode 1. |
cn_207_magic_wall
|
Robin Walker | As with many of our combat arenas, we needed a gate of some kind to prevent players from moving on before they finished this fight. Unfortunately, we were out of time and had to move on to the Train Station, so we ended up with this bit of magical bullshit: a wall that just explodes after the second gunship is defeated, revealing the way forward. We reassured ourselves that most players wouldn't notice, as they'd be busy fighting the gunships, and our playtesting had shown they generally finished the fight before reaching this point. With enough explosions throughout the arena, the destroyed, burning wall felt pretty natural, so we threw in a couple of simple gas puzzles on the other side to justify it. But no, we're not exactly proud of it. |
cn_208_ragdoll_vortigaunt
|
Scott Dalton | There's a lot of pain to see in this dead vortigaunt: the pain of our level designers. At the time, our development tools didn't allow us to directly pose a ragdoll, which is to say, when an NPC dies and systemically ragdolls to the ground, we didn't have any control over what pose it ended up in. If we'd had some animator time, they could have given us a single-frame animation in roughly the pose we wanted, but we were late and our small animation team was already overloaded. So a level designer positioned the vortigaunt above this chair, told it to die as the level loaded, and waited to see what pose the ragdoll settled into. Then the designer moved the vortigaunt slightly and tried it again. We can't remember exactly how many attempts it took, but it was way, way more than you'd think. |
cn_209_nova_prospekt_process
|
Eric Smith | Like Ravenholm, Nova Prospekt was initially designed with significantly different gameplay than what ultimately shipped. The first version focused on player-versus-Combine soldier combat, with no involvement of antlions. Unlike other areas of the game, where gameplay and art evolved together, Nova Prospekt received a large art push early on. This produced a set of reusable, polished sections of level geometry with art already applied. Assembling Nova Prospekt from these pieces worked well, as the prison's layout was naturally repetitive. And then later, when we returned with antlions and bugbait, we had to repurpose the geometry to fit the new gameplay. |
cn_210_d2_prison_04
|
Eric Kirchmer | Throughout development, our artists created small vertical slices of environments, which we called art zoos. These zoos laid out how to use textures and props to create gameplay spaces that achieved our visual goals. However, when building the prison's gameplay track, the repetitive layout allowed us to use the art more directly. This map was initially built as the primary art zoo for Nova Prospekt, but after being used to test various gameplay prototypes—including those that eventually led to bugbait—it became part of the actual level architecture. |
cn_211_autosave_dangerous
|
Dave Riller | In Half-Life 1, we had a set of autosaves throughout the game, hand-placed by level designers and triggered when the player reached a specific moment or position in the game. This ensured the game was automatically saved for you as you traversed Black Mesa's corridors. In Half-Life 2, we found ourselves needing a new tool. The problem arose in some of the more complex arenas we'd built. Some of those encounters would last for quite a while and featured multiple sets of enemies. Others were long, sprawling mixes of exploration and combat without any clear separation between the two. We never want an autosave where the player is in danger of dying right after loading the game, because that could place the player in a death loop; if you played enough games back then, you definitely found yourself in this situation at least once. So unless we deliberately crafted pauses in every arena's flow, there wasn't always a safe place to put an autosave. After some experimentation, we implemented a feature called Autosave Dangerous. This was a level designer tool that combined an autosave with an associated time frame. When triggered by the level designer, it immediately saved the game, but kept the save off to the side, not available as a savegame to load from. If the player then stayed alive for the specified time frame, the save would be committed and used as the most recent autosave. This allowed level designers to put an autosave at a reasonable place or time in an arena, specify a window like thirty seconds, and if the player was still alive after those thirty seconds, the autosave would be used. |
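The deferred-commit behaviour described here can be sketched in a few lines of C++. The class name, method names, and slot handling below are assumptions for illustration, not the shipped save system.

```cpp
// Sketch of an "Autosave Dangerous"-style deferral: write a provisional save
// immediately, and only promote it to the real autosave if the player is
// still alive after the specified window.
#include <optional>
#include <string>

struct SaveGame { std::string slot; };

class AutosaveDangerous {
public:
    // Called by the level designer's trigger: save now, but keep it off to the side.
    void Trigger(float commitAfterSeconds) {
        pending_ = SaveGame{ "autosavedangerous" };  // hypothetical slot name
        secondsRemaining_ = commitAfterSeconds;
    }

    // Called every frame with the elapsed time and whether the player is alive.
    void Update(float dt, bool playerAlive) {
        if (!pending_) return;
        if (!playerAlive) { pending_.reset(); return; }  // player died in the window: discard
        secondsRemaining_ -= dt;
        if (secondsRemaining_ <= 0.0f) {
            committedAutosave_ = *pending_;              // promote to the most recent autosave
            pending_.reset();
        }
    }

    const std::optional<SaveGame>& CommittedAutosave() const { return committedAutosave_; }

private:
    std::optional<SaveGame> pending_;
    std::optional<SaveGame> committedAutosave_;
    float                   secondsRemaining_ = 0.0f;
};
```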
cn_212_we_dont_go_to_ravenholm
|
Dario Casali | As we finished up our work on Ravenholm, we couldn't shake the feeling that the transition from Eli's Lab just wasn't working. It was far too abrupt—Eli's lab connected directly to the streets of the town, where you'll be shortly, and the section you're standing in didn't exist. We had just made this promise at the end of Eli's Lab: 'We don't go there anymore,' and we liked the overall buildup we'd created. But when the player got to Ravenholm, they were immediately dropped into combat with zombies without any room for tension building. So, we built this section and the subsequent tight corridors to address that in a way we were happy with. We slowed the pacing, focused on art and lighting, and got you ready to experience why we don't go to Ravenholm anymore. |
cn_213_lever_shenanigans
|
Dario Casali | Even with all the work we did to try and make a connection between this lever and the outside electric fence in the player's minds, some testers would still flip the lever and not know what they'd done. They'd even wander around, flip the lever back on, and wander around some more. With not much time left to spend on making the connection clearer, we elected to just have the lever break off. This prevented players from undoing their progress, and added to the feeling that Ravenholm was an unreliable and decaying environment. |
cn_214_town_arenas
|
Dario Casali | At this point in Ravenholm, the player has learned all the gameplay elements available to them and can predict their interactions. This allows us to build more complex setups, providing a greater level of threat and more freedom in how the player can approach each scenario. This arena is a deliberately entwined space that the player will need to explore, and we use that exploration time to slowly ramp up combat intensity. The faster the player explores, the faster we ramp it up. The free-form nature of this design made it challenging to tune, given the large dynamic range in how players played through it. |
cn_215_mines
|
Dario Casali | The complex route the player takes to safely navigate this mineshaft gives them plenty of time to see what's waiting for them at the bottom. And surprise: it's headcrabs. Honestly, it's often headcrabs. But in this case it's Poison Headcrabs, which are fun by themselves, but a real riot when you add fast headcrabs into the mix. These two make a great combination: neither is particularly dangerous on its own, but together, they're incredibly lethal. Once poisoned, a single hit from a Fast Headcrab is enough to kill the player. And that is what biologists call symbiosis. At least that's what we think. |
cn_216_poison_headcrabs
|
David Speyrer | This room unintentionally introduces poison headcrabs. We'd spent a lot of time crafting their proper introduction later in the level, but once we saw how well they were playtesting, our excitement got the better of us, and before too long we'd scattered them throughout the level. By the time we realized they'd appeared before their intro, it was really too late to rework it. Fortunately, a tight room of poison headcrabs works as decent training because their design avoids a common issue with combat training. Typically, players are stressed, and they can get frustrated if they die repeatedly without understanding why; sure enough, we encountered that problem when trying to train the player with both poison headcrabs and their source, the poison zombie. However, poison headcrabs can't kill the player by themselves. So while stressful, a room full of them at least avoids that frustration. |
cn_217_poison_zombie_introduction
|
David Speyrer | Constraining the player in a tight space while providing them with a view outward is a useful way to increase the chances that they'll notice something we want them to see. Here, we draw the player's attention with the sound of a wail before the Poison Zombie itself becomes visible. Originally, this encounter was also the first time players faced Poison Headcrabs, but, as mentioned in an earlier node, we got far too enthusiastic in proliferating Poison Headcrabs around the level and ended up spoiling their joint introduction here. In the end, however, these first encounters worked better as separate instances anyway. |
cn_218_final_arena
|
Eric Smith | When we finished the first pass over Ravenholm, this rooftop was intended to be the culminating fight. Like other spaces in Ravenholm, it had begun life as a space purely focused on being an interesting visual location. We felt it was important that, as the player left Ravenholm, they had a moment to look back over the town they had just survived and get a glimpse of where they were headed next. Multiple gameplay groups took a shot at building a fight here, but the tight geometry and performance constraints meant we never really created a finale we were happy with. Not unlike the Half-Life saga itself, it seemed as if Ravenholm was destined to not have a real ending. |
cn_219_tables_and_bodies
|
Ido Magal | Towards the end of Ravenholm's production, with all the major pieces in place, we went through and tried to fill out every bit of empty space with something interesting. In doing so, we had to be careful to use what we already had and not create anything new, as we couldn't risk introducing more bugs. Often, as in this room, we were trying to add new moments of gameplay while also adding more detail and storytelling to the area. Small scenes like this allow an attentive player to think about what they're seeing and what it might mean for what they'll encounter next. |
cn_220_physics_object_consistency
|
Kerry Davis | All physics objects have a variety of parameters that define their behavior: their mass, the amount of damage they take before breaking, what they break into, and so on. During development, these parameters were specified by level designers for each item, which is how most things in the game worked. But later in development, we realized we'd created a problem for ourselves—there were thousands of physics objects in the game, and no method to ensure consistency across them. As a result, a cardboard box might break apart with a single pistol shot in one level, and be completely invulnerable in another. Internally, we generally referred to these inconsistencies as level designer crimes. To fix these, and to deliver some justice, we built a system that enforced consistency on all physics objects based on their visual model. Since most of the game had already been built, we were forced to allow level designers to explicitly opt out of this system for rare edge cases where they needed to do something criminal to preserve what they'd designed. Internally, we refer to these designers as recidivists, some of whom perpetuate their life of crime at Valve to this day. |
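The consistency system described here can be illustrated with a small C++ sketch: defaults are keyed by visual model, and a level designer override is only honoured when the entity explicitly opts out. The class, parameter set, and fallback values are assumptions for the example, not the actual system.

```cpp
// Sketch: physics parameters resolved from a per-model table, with an
// explicit opt-out path for designer overrides.
#include <map>
#include <optional>
#include <string>

struct PhysicsParams {
    float       mass;        // kilograms
    float       health;      // damage absorbed before breaking
    std::string breakModel;  // what the object breaks into
};

class PhysicsConsistency {
public:
    void RegisterModelDefaults(const std::string& model, PhysicsParams p) {
        defaults_[model] = p;
    }

    PhysicsParams Resolve(const std::string& model,
                          const std::optional<PhysicsParams>& designerOverride,
                          bool optOutOfConsistency) const {
        if (optOutOfConsistency && designerOverride)
            return *designerOverride;  // the explicit opt-out ("recidivist") path
        auto it = defaults_.find(model);
        if (it != defaults_.end())
            return it->second;         // consistent behaviour for this model everywhere
        // No registered defaults: fall back to the override or an assumed baseline.
        return designerOverride.value_or(PhysicsParams{ 10.0f, 20.0f, "" });
    }

private:
    std::map<std::string, PhysicsParams> defaults_;
};
```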
cn_222_fast_zombie_introduction
|
Randy Lundeen | This space was another area originally built for E3 2003, where it showcased the player fighting against a squad of Combine soldiers. When we returned to use it to introduce the Fast Zombie, we tried to keep as much of the previous work as possible. We wanted an iconic moonlit shot, with zombies leaping past the moon like werewolves to draw the player's eye. Once that beat is over, we introduce the first Fast Zombie to the player space and send it at them. It took a lot of iteration to find the right balance of lighting for the scene to look good while ensuring it also worked for the gameplay goals. |
cn_223_ravenholm_gameplay_focus
|
Mike Dussault | After watching many playtests of Ravenholm, we knew the core gameplay of the level revolved around the gravity gun and physics. Once equipped with that knowledge, we felt we could appropriately build the transition into Ravenholm and help the player learn and understand a bit about what the upcoming experience would be. This happens often—we build the level track, figure out what's working, and then design its prologue. We built the following set of rooms with that table-setting in mind. The brutality of the tools we provide the player seemed to aid in setting the mood we were striving for, helping shift the tone from the safety of Eli's Lab to something darker and more frightening. |
cn_224_player_tools
|
Steve Bond | By this point in the game, players have a lot of experience with Half-Life 2's physics, but they've just received the gravity gun, which provides a whole new way to interact with the physics objects in the game. We iterated on a variety of physics objects the player could play with: saw blades, catapults, explosive canisters, traps, spears, and so on. We homed in on the ones that worked well with the gravity gun and cut the ones that didn't. Then we went about stripping the levels of ammunition items so that the player had to rely on these objects to survive. |
cn_225_contrived_setups
|
Kerry Davis | There's a pretty obvious contrivance in a bunch of the gravity gun and physics object training. The player pulls a saw blade out of the wall with the gravity gun, and right on cue, a zombie stumbles around the corner, just begging to have a saw blade launched at it. Then the player crawls under a spinning blade trap to turn it off, and suddenly, three zombies appear, taunting the player to switch the trap back on. We didn't have much time to iterate on this section, so we needed setups that worked for all players, not more open-ended ones that relied on them making the 'right' move. While this kind of design has the potential to reduce players' creativity, in this case, we found that they didn't seem to mind that we forced their hand a little bit. |
cn_226_misteaching_flinches
|
Kerry Davis | Another memorable playtest and example of the danger of misteaching players occurred in Ravenholm. One of our design principles is for the game to respond to the player as much as possible. So when designing the gravity gun, it seemed obvious that if the player pointed it at a zombie and pulled the trigger, something should happen. It was simple enough to make it zap the zombie for a tiny amount of damage, causing the zombie to play its flinch animation, which felt good and responsive. Later, we watched a playtest where the tester discovered this interaction immediately upon arriving in Ravenholm, and interpreted it as feedback that the gravity gun was hurting the zombie... which it was, just not very much. So they zapped the zombie again, repeatedly, shocking it over and over until it died. This is a slow, un-fun, inhumane way to fight zombies, especially where you're surrounded by saw blades and explosive canisters. Having learned our lesson, we removed that flinch animation, even though it meant losing that interaction between the zombie and the gravity gun. In this case, it was better to not have this simple interaction rather than give the false impression that the gravity gun was an effective way to fight zombies. Even simple principles can require tradeoffs when you get into the details. |
cn_227_fire_trap
|
Steve Bond | This fire trap was originally intended as a visual showpiece for the fire system before a level designer repurposed it as a gameplay element for Ravenholm. It was almost cut many times during development due to the performance impact on low-end PCs because of all the overdraw involved with the transparent sprites. In the end, there was a programmer who felt that it was important enough to the gameplay that it was worth doing the work to solve the performance issues and make sure we could get it into the game. |
cn_228_hallway_cupboards
|
Yahn Bernier | These junk-filled hallways were a simple solution to our need to slow the player down. With the constraint of preserving as much of the existing geometry as possible, combined with slow-moving zombies as the main enemy, it was tricky to stop players from just sprinting through here. Luckily, having Grigori as the crazed inhabitant of Ravenholm made these setups easy to justify—it makes good sense that he'd clog the stairwell to keep himself safe on the rooftops. In the end, Grigori's solution and ours were one and the same: just throw a bunch of furniture in the way—because if it stops zombies, it'll probably slow the player down too. |
cn_229_pacing_switch
|
Steve Bond | Now that the player's through the training section, we can start shifting the pacing from moody, claustrophobic spaces to more combat-focused action arenas. With the player familiar with the various physics tools in the environment, we can start mixing those into open spaces and just give the player the freedom to choose their approach. |
cn_230_car_crushers
|
Steve Bond | These car crushers required some adjustments and quite a bit of work on our physics damage system. There is always a challenge when big physics objects interact with Non-Player Characters. For instance, if the physics can push the NPC, it might shove them somewhere they're not supposed to go, leaving them unable to recover. This isn't a big issue for something like a headcrab, which can just hop out of a tricky situation, but for something like a zombie, it could mean getting stuck and ending up helpless, flailing away in whatever gap it managed to wedge itself into. Which kind of kills the survival horror mood we're going for here. |
cn_231_car_crusher_lifts
|
Eric Smith | Getting the car crushers to lift the player was also tricky. See, the player is always being simulated by both the physics system and the game's movement system. Each time we update the player's state, we compare the results from both systems and decide which one is 'right,' then sync them up and repeat. This allows the two systems to influence each other—like when the player stands on a seesaw, the physics lets the board give way, or when they stand on the car crusher, it lifts them up. While more complex, this dual-system approach was easier to tune than trying to handle both physics and player movement in a single system. By keeping them separate, we could fine-tune the movement system to feel good on its own, only reconciling the two when physics was affecting the player. |
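As an illustrative aside (not part of the commentary), here is a minimal sketch of the reconciliation idea described above: each tick the player is advanced by both a game movement step and a physics step, one result is chosen, and both systems are synced to it. All type and field names are hypothetical.

```cpp
struct Vec3 { float x, y, z; };

// Hypothetical results from the two independent simulations of the player.
struct MoveResult { Vec3 position; bool onGround; };
struct PhysResult { Vec3 position; bool pushedByObject; };

// Pick whichever result is "right" for this tick, then sync both systems to it.
// Assumption: when a physics object (lift, seesaw) is pushing the player,
// the physics result wins; otherwise the hand-tuned movement result wins.
Vec3 ReconcilePlayer(const MoveResult& game, const PhysResult& phys)
{
    Vec3 chosen = phys.pushedByObject ? phys.position : game.position;
    // Both the movement code and the player's physics shadow would then be
    // moved to 'chosen' before the next tick, so the two never diverge far.
    return chosen;
}
```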
cn_232_paint_cans
|
Steve Bond | Once we started to succeed at making the gravity gun and physics gameplay the core of the Ravenholm experience, we started to go through the levels and look for objects that we might have overlooked, as we were busy working with the really aggressive things like saw blades and exploding canisters. One of the things we found was the paint can, and it was simple enough for us to reinforce your expectations by having the paint cans splatter paint on the zombies when you threw one at them. |
cn_233_bringing_physics_to_the_enemies
|
Tom Leonard | The physics traps in Ravenholm were some of the first gameplay elements in the level that we were really happy with, but they lacked player agency. It was the saw blades that helped us figure out what Ravenholm was truly about. With saw blades, the player was able to bring physics to the zombies instead of leading zombies into the physics. Once we had that understanding, Ravenholm evolved rapidly. We kept combing through the map looking for anything, especially anything unusual, that could be turned into a tool or weapon for the player. |
cn_234_ravenholm_development
|
Tom Leonard | We began developing Ravenholm early in the production of Half-Life 2, so it went through more than one full development pass. The first version predated the gravity gun, with gameplay focused on traditional zombie combat using guns and some of the physics traps. We even planned to showcase Ravenholm at E3 in 2002, but the game was ultimately unveiled a year later in 2003. By that point, we had invested heavily in Ravenholm's art, something which we generally avoid until the gameplay is fully proven. So once we went back to Ravenholm with the gravity gun mechanic in mind, we were working with a lot of pre-existing level geometry that had been designed with a very different gameplay goal in mind. |
cn_235_grigori_plaza
|
Steve Bond | This open plaza is one of the areas that was fully detailed in our first development pass, and when we returned to it, we wanted to preserve as much of the existing art as possible while designing a new player path that used all three dimensions: moving players up and down through windows, across rooftops, in between buildings. It wasn't a large space, so we had to squeeze as much track out of it as possible. And since it started as an art-focused area, it was light on gameplay, and we needed to address the fact that playtesters tended to fly right through it. To top that off, it was already pushing our rendering performance budget with little room left for anything new. The challenge became finding small additions to slow the player down by adding threats around corners and into every nook and cranny of the level. |
cn_236_ravenholm_development_2
|
Tom Leonard | When fans talk to us about Half-Life 2, Ravenholm is often the most remembered section. Thanks to its multiple development passes and the density of the material in it, more members of the team are represented in Ravenholm than anywhere else. It was where we figured out how to apply our artistic pipeline to the set pieces and mood we wanted, how physics and gameplay combined with the gravity gun, how headcrabs and zombies needed to evolve for Half-Life 2, and much more. Looking back on it now, it reaffirms our belief in individual empowerment over centralized design. Ravenholm is dense, highly efficient in its gameplay and artistic execution, and the sum of many small contributions by individuals who understood how they could push it just a little further with their own unique capabilities. |
cn_237_playtester_archetypes
|
Tom Leonard | Iteration and regular playtests were the lifeblood of Half-Life 2's production. Every week, we'd quietly watch over the shoulder of a player going through the section we were working on, taking notes on what worked and what didn't. It was important for us to test with a wide range of skill levels to be sure that we weren't making decisions that only worked for some. Over time, we learned to choose specific types of playtesters based on the development stage we were at. Early on, we used experienced gamers, usually teammates, who could handle rough, unfinished areas and were good at explaining their thought process. For combat testing, we’d find players of all skill levels, especially those who were creative in their use of the tools. Finally, as sections became more polished, we'd focus on beginners: people who didn't play games much or weren't familiar with first-person shooters. Even if they weren't necessarily someone who might buy the game, we often learned something valuable from watching them. |
cn_238_the_real_final_arena
|
Tom Leonard | Very late in the development of Half-Life 2, when we were reviewing the entire game for final improvement opportunities, we decided we did, in fact, want a real finale for Ravenholm. The rooftop fight wasn't enough. After some discussion, we chose to add this final fight through the graveyard with Grigori alongside the player. It was risky development-wise, but we felt the game needed it. Since we were near the end of production, we had a mature set of gameplay tools to work with, but even more importantly, by this point we understood our own game. Making a game is often a process of learning what it is you're actually creating. Why is our game fun? How is it different? What are the most important elements? As we built the game and watched playtesters go through it, we learned answers to those questions and in turn got better at making it. So the graveyard came together quickly and required far less iteration than many earlier areas of the game. |
cn_239_cities
|
Eric Kirchmer | One of the trickier challenges in this section was figuring out how to give the player the sense of being in a sprawling city, when our performance budgets wouldn't allow us to render anything like one. This led us to focus on tightly packed tenements, underground garages, subway tunnels, culverts, and the like: all places that exist in cities but generally keep you from seeing all the way to the horizon. We then supplemented that style of level design with vista locations, where the whole space is dedicated to providing an expansive view to the player. In those, we could devote our entire performance budget to the view, and not have to save any of it for enemy AI or combat. |
cn_240_strider_4
|
Eric Smith | While this map was the production map built for the Strider, we still had an original development map for testing its AI. We’ve always found a lot of value in having a solid test map for an AI, as it allows us to quickly try out new gameplay ideas and tune them easily. If there are bugs or design issues, we can isolate and iterate on them faster. Once the AI features are working well, we transplant them into the production maps for level designers to work with. The Strider’s test map included rough outlines of ruined buildings, with infinite citizen spawners at various locations and heights to provide a constant stream of targets. This setup allowed us to refine the Strider’s AI and stance changes to make it interesting and engaging, even to just sit back and watch it in action. |
cn_241_perf_arena
|
Aaron Seeler | This arena came with a major challenge: performance. Its size and the combined Strider and citizen AI pushed our limits here. When we ported Half-Life 2 to the Xbox, this arena couldn’t ship without some design adjustments. Preserving the original intent and delivering the same player experience with fewer tools and space was a tricky balance. |
cn_242_space_re-use
|
Chris Green | Returning to this space brought multiple advantages. First, the obvious one—production costs for new areas are always high, so if we can add more gameplay to an existing area, it’s a win. But for a hard fight like this one, where the player faces multiple Striders, re-use has extra benefits. Since players have been through it before, they already have a sense of the arena layout. This time, they’re moving back-to-front, but the path largely follows their original route. This familiarity is useful when players need to move forward to grab resources for the fight. City 17’s war-torn state also gave us the flexibility to adjust cover and add smaller obstacles along the path, keeping the environment dynamic and surprising. |
cn_243_tenements_and_courtyards
|
Danika Rogers | We used a lot of small tricks and built various features to give players the impression of a city at war, with much of the work aimed at providing glimpses of skirmishes or the sounds of distant fighting. Looktargets helped make something happen every time a player looked through a window, while tools like Tracer Makers allowed designers to create fake tracer fire between two points. Paired with audio tools, this made it seem like two groups of enemies were exchanging fire, without the need to run any AI. We also developed small storylines for groups of citizens the player would glimpse multiple times while progressing through the level. All these moments were fine-tuned through playtests to ensure as many players as possible would experience them. |
cn_244_hopper_mine_training
|
Dario Casali | This corrugated metal panel was deliberately placed over the hole to encourage players to switch to the gravity gun and remove the obstruction. It’s a bit of manipulation on our part because we found that training players on hopper mines worked much better if they immediately tried using the gravity gun on them, and the metal panel significantly increased the chance they’d have it ready. We placed multiple mines in the hole so players would have several opportunities to interact with them, increasing the chances they’d discover alternative ways of disposing of them. Furthermore, this is the first time we’re doing training while the player has a whole squad of citizens with them, so we took advantage of that opportunity to include some dialog lines concerning the hopper mines as well. |
cn_245_dog_combat
|
Eric Kirchmer | Dog is a really fun character, and we were always looking for ways to include him in combat alongside the player. However, getting the right scale of combat for him—meaning the appropriate environment, new enemy AI, and potentially even new enemies—quickly fell beyond the project's scope. To create a combat scenario that did justice to Dog and could stand up against the rest of the game, it almost felt like we'd be building a new game inside of the game. We knew he couldn't just run over to a soldier and throw a punch—he'd feel less impressive, without any sense of awe. He needed bigger threats to match his scale, along with interesting ways to interact with them. In the end, we chose to create a large choreographed scene, delivering on the cinematic goals for Dog in combat without incorporating extensive gameplay design. |
cn_246_strider_2
|
Eric Kirchmer | Iterating on the Strider here finally gave us an experience we were happy with, as players needed to dash from cover to cover, avoiding its fire while hunting for rockets. We wanted players to feel like they were diving into cover at the last second, so we tuned the Strider’s stitching gunfire pattern to telegraph impending danger. We also built a system of tracks for the Strider to follow, allowing it to dynamically switch between them to pursue the player or establish line of sight. This setup enabled level designers to carefully align Strider tracks with player cover for a balanced experience. The Strider’s leg positioning was driven entirely by code, and its unpredictable movements meant we often had to fine-tune things to ensure it reliably kicked a car or troop container when needed. |
cn_247_manhack_tunnels
|
Eric Smith | When developing the AI behavior for an NPC, we found it was useful for the AI programmer to have a test map in which to experiment. That map would often evolve into an environment specifically designed to showcase the NPC’s unique capabilities. Later, when the AI was ready for use by level designers, the test map served as a valuable point of reference—without it, designers could accidentally build scenarios that made an otherwise interesting AI less engaging. In some cases, we were able to incorporate these test maps directly into the game. This small section, for instance, started as the tunnels used to develop the manhack’s AI. |
cn_248_sniper_streets_2
|
Erik Johnson | Bullets in Half-Life 2 don’t actually follow a flight path—they arrive at their target instantly. But for the Sniper, with its slow rate of fire and the player’s close attention to every shot, we wanted to simulate bullet travel. This approach also worked better with the laser sight, where the sight flicks off, and the bullet lands shortly after, depending on the distance to the Sniper. However, this change introduced a new issue: we didn’t want players to just sprint down the street to evade shots, so the Sniper had to lead a moving target, predicting where the bullet should hit. Predicting the player’s future position was tricky, as players can change direction much faster than would be possible in the real world. So we added extra checks to catch behaviors like players quickly tapping left and right as they ran. |
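To make the leading behavior concrete, here is a rough, hypothetical illustration (not the shipped code) of aiming ahead of a moving target when the simulated bullet has a finite travel speed. The extra checks for players rapidly tapping left and right would sit on top of however the target velocity is estimated.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 Scale(Vec3 a, float s){ return {a.x * s, a.y * s, a.z * s}; }
static float Dist(Vec3 a, Vec3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Hypothetical leading calculation: iterate a few times so the travel time
// accounts for how far the target moves while the bullet is in flight.
Vec3 LeadTarget(Vec3 sniperPos, Vec3 targetPos, Vec3 targetVel, float bulletSpeed)
{
    Vec3 aim = targetPos;
    for (int i = 0; i < 3; ++i)
    {
        float travelTime = Dist(sniperPos, aim) / bulletSpeed;
        aim = Add(targetPos, Scale(targetVel, travelTime));
    }
    return aim;
}
```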
cn_249_triangle_plaza
|
Ido Magal | This plaza battle was a complex setup using the Assault behavior with two converging battle lines, one for each force-field. As the soldiers enter from behind the fields, their battle lines prevent them from individually rushing into the plaza. As the fight escalates, and the soldiers build up, the battle lines start to move towards the center of the plaza, which will cause the soldiers to overwhelm the player if they’re not repelled. |
cn_250_sniper_streets_3
|
Jakob Jungels | One feature we tried in these streets was suppression fire. Many players, hunkered down at the end of the street, would fire a burst at the Sniper’s window to see if it had any effect. So we experimented with making the laser sight flicker off briefly, giving players a moment to relocate. But after a number of playtests, we encountered two issues: First, it disrupted the tense, move-and-hide gameplay we aimed to create with the Sniper. Sprinting while firing wasn’t much different from other actions we ask of the player in City 17. The second, more subtle issue was that players who discovered suppression fire first assumed it was required and would burn through ammo fast. If they ran out halfway through, they’d often give up, thinking they’d failed; if they made it, they’d frequently be left with no ammo. On the other hand, players who completed the scenario by stealth used almost no ammo at all. This resource gap between approaches made it nearly impossible to balance ammo for the sections that followed, so the feature didn't make it into the final game. |
cn_251_assault_behavior
|
John Cook | The most complex behavior we built for Half-Life 2 was the Assault behavior, which is used throughout this section of the game. It allowed level designers to set up combat scenarios with Combine soldiers and citizens, establishing battle lines between them and controlling how those battles played out based on the player’s actions. NPCs involved in an Assault would try to find smart firing positions and use cover based on the battle line’s location. As the player advanced down a street, they might push the battle line forward, causing allied citizens to advance while enemy soldiers fell back. Alternatively, the player might need to knock over some Combine turrets or take out a soldier behind a barricade to move the battle line forward. |
cn_252_bank_roof_mortar
|
John Morello | By the time we’d built the level track up to this point, it was obvious that it’d be fun for the player to get to use the mortar. I suppose that's the type of thing that should've occurred to us earlier. 'Do you think anyone will want to launch explosives into the sky and have them drop down onto the bad guys? Yeah, I think so too.' Anyway – players have spent all this time on the receiving end of it, so it would've been a nice reward to get to serve the Combine some of their own medicine. But as soon as we started thinking about it, we realized how much work there’d be – which, to be fair, is a thought you have every single day when you set out to make a video game. The mortar would need animations and effects now that the player is next to it, and there’d be a fair bit of work to implement a nice escalation of enemies down in the plaza to use it on. But even more fundamentally, we had no interface for a weapon this unique. How would players aim and fire it? It’s hard to feel good about spending lots of time on a single moment in the game, so we decided our efforts were better spent elsewhere, and moved on. Sorry! Perhaps we'll solve the design conundrum of the mortar in a future installment of the Half-Life series. |
cn_253_strider
|
Kerry Davis | This area in front of the Bank was where we figured out the Strider. When we first saw the concept art for it, we knew we wanted to get it into the game, but there were a ton of questions about how it would practically work. We’d never moved something that size through our game world, and at that scale, it needed to be dangerous, but how would players actually fight it? As with all of our enemies, there was always a question of how we’d get gameplay out of it. What kind of tools would level designers need to build scenarios with it? These were the kinds of questions we set out to answer in our experiments in this map. |
cn_254_sniper_streets
|
Eric Kirchmer | These sniper-filled streets are where we developed both the enemy and the core of the experience we hoped players would have. We knew we wanted the Sniper to shoot things near the player, so we first focused on finding a variety of interesting targets for them to hit. We found it worked well if we used the player’s view as a factor in selecting objects. The Sniper’s always-on laser sight meant that players had a constant indicator of the Sniper’s attention, so having it drift to a target that the player was looking at felt really responsive, and provided great moments of tension. |
cn_255_citizen_squad_details
|
Matt Wright | Once we'd established the citizen player squad and started building scenarios around it, we found it created other interesting opportunities. Playtesters began to care about specific squad members, so we added more to their design. Squad members would pick up better weapons if they found them, pass the player ammo when needed, and medics would heal the player as necessary. We also had to ensure the squad could keep up with a player who often took unpredictable paths. Off-camera, the squad would 'cheat' in various ways—regenerating health between fights, moving quickly to catch up, and even teleporting when necessary. |
cn_256_citizen_squads
|
Ken Birdwell | Our early tests with Combine-versus-citizen street fights revealed an issue: without boundaries, combat AI could drift significantly away from the player. Soldiers might retreat to reload or take cover, and citizen AI would give chase. We needed a way to align the citizens' actions more closely with the player's goals, keeping them aware of where the player was heading and which enemies the player wanted to engage. This led to the creation of a citizen player squad. The citizen AI used this squad to move with the player, engaging the same enemies and providing a basic interface for player commands. We initially had a more complex command system, but it quickly became unwieldy, so we scaled it back. |
cn_257_ai_behaviors
|
Mike Dussault | One piece of AI technology we developed for Half-Life 2 was the Behavior system. Behaviors were chunks of AI gameplay code that guided NPCs in specific situations. For example, we had a Lead behavior that enabled an NPC to guide the player, and a Follow behavior allowing an NPC to follow the player or another NPC. Behaviors worked alongside the NPC’s base actions, so a Medic squad mate using the Follow behavior to keep up with the player would still know how to fire at enemies, dodge grenades, or offer a medkit. |
cn_258_occlusion
|
Brian Jacobson | In Half-Life 1 and much of Half-Life 2, the world consisted mainly of enclosed tunnels and corridors, which worked well with a Binary Space Partition, or BSP tree. Essentially, this tech allowed us to quickly sort through the geometry and determine what might be visible in the player’s view. But the open areas in Coast and other outdoor sections didn’t suit that approach, so we needed some new tech. Area Portals were one, allowing us to carve the world up into discrete chunks separated by portals - doors and windows, that is, not Aperture Science ones. We also developed an additional occlusion system, which basically said 'if the player can't see through, over, or around a large object, don't render what's on the other side.' This might seem an obvious thing not to do, but it takes some work to make sure that calculating what's on the other side of the big thing isn't slower than just rendering everything. So to make that calculation fast, level designers placed large occlusion volumes inside major terrain shapes and big occluding objects like buildings, and during rendering we could then test the bounds of entities against these volumes to avoid rendering anything the player couldn't see. |
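As an illustrative aside, here is a simplified sketch of that kind of occlusion test, assuming each hand-placed occluder is reduced to a single quad: an entity is skipped when its bounding box sits entirely inside the region the occluder blocks from the current eye position. The names and the single-quad simplification are assumptions, not the engine's actual code.

```cpp
#include <array>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // points with dot(n, p) >= d count as "inside"

static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3  Cross(Vec3 a, Vec3 b)
{ return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }

// A hand-placed occluder, reduced here to one quad. Corners are assumed to be
// wound counter-clockwise as seen from the eye so the plane signs work out.
struct OccluderQuad { std::array<Vec3, 4> corners; };
struct AABB { Vec3 mins, maxs; };

// Planes bounding the region hidden behind the occluder as seen from 'eye':
// one plane per quad edge (through the eye) plus the quad's own plane.
static std::array<Plane, 5> BuildShadowPlanes(Vec3 eye, const OccluderQuad& q)
{
    std::array<Plane, 5> planes{};
    for (int i = 0; i < 4; ++i)
    {
        Vec3 a = q.corners[i], b = q.corners[(i + 1) % 4];
        Vec3 n = Cross(Sub(a, eye), Sub(b, eye));   // plane through eye, a, b
        planes[i] = {n, Dot(n, eye)};
    }
    Vec3 n = Cross(Sub(q.corners[1], q.corners[0]), Sub(q.corners[2], q.corners[0]));
    planes[4] = {n, Dot(n, q.corners[0])};          // the occluder's own plane
    return planes;
}

// The entity is hidden if every corner of its bounding box lies inside every plane.
bool IsOccluded(Vec3 eye, const OccluderQuad& q, const AABB& box)
{
    auto planes = BuildShadowPlanes(eye, q);
    for (const Plane& p : planes)
        for (int i = 0; i < 8; ++i)
        {
            Vec3 c = { (i & 1) ? box.maxs.x : box.mins.x,
                       (i & 2) ? box.maxs.y : box.mins.y,
                       (i & 4) ? box.maxs.z : box.mins.z };
            if (Dot(p.n, c) < p.d)
                return false;                       // a corner pokes out; render it
        }
    return true;
}
```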
cn_259_detail_objects
|
Brian Jacobson | Once we had outdoor sections running, it was clear they needed some form of high-frequency detail, so we added small foliage elements, like grass and shrubs, across the displacement terrain. But we knew we couldn't afford – which is to say, we didn't have the time – to hand-place them all. So we built the detail object system, which generates small foliage elements automatically based on the displacement material. Each material specifies the type of foliage to place, along with a set of parameters. At map compile time, the system calculates random points on the surface and selects foliage to place there. Given performance constraints, there was still a fairly complex hand-authoring process for foliage texture sheets, allowing us to render all of them in a single pass. |
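A hypothetical sketch of a compile-time scatter pass like the one described: the material names a foliage model and a density, and random points on each terrain patch receive a detail prop. The structures, parameter names, and flat-patch simplification are illustrative rather than the real compile tool's code.

```cpp
#include <cstdio>
#include <random>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical per-material foliage parameters, as might be read from a material.
struct DetailParams {
    std::string model;         // e.g. a grass sprite or shrub card
    float density;             // props per square unit of surface
    float minScale, maxScale;
};

struct DetailProp { Vec3 origin; float scale; std::string model; };

// Scatter detail props over one flat terrain patch at compile time. A real
// system would sample the displacement surface; a flat patch keeps this short.
std::vector<DetailProp> ScatterPatch(Vec3 mins, Vec3 maxs, const DetailParams& p,
                                     unsigned seed)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> ux(mins.x, maxs.x);
    std::uniform_real_distribution<float> uy(mins.y, maxs.y);
    std::uniform_real_distribution<float> us(p.minScale, p.maxScale);

    float area = (maxs.x - mins.x) * (maxs.y - mins.y);
    int count = static_cast<int>(area * p.density);

    std::vector<DetailProp> props;
    props.reserve(count);
    for (int i = 0; i < count; ++i)
        props.push_back({{ux(rng), uy(rng), mins.z}, us(rng), p.model});
    return props;
}

int main()
{
    DetailParams grass{"detail/grass_card", 0.02f, 0.8f, 1.2f};
    auto props = ScatterPatch({0, 0, 0}, {512, 512, 0}, grass, 1234);
    std::printf("placed %zu detail props\n", props.size());
}
```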
cn_260_displacements_1
|
Charlie Brown | After Half-Life 1 shipped, our level designers and artists wanted a better solution for outdoor environments, so we built the first version of our terrain system. It was based on subdivided quads. To visualize this, imagine a flat square divided into a grid, where each point on the grid can be moved up or down. Level designers could lay out multiple squares and then use a paintbrush tool to shape hills and valleys by adjusting the grid points. For rendering, we used a special blended texture, which combined two different textures. Each grid point specified how much of each texture to use, allowing smooth transitions between surfaces like sand, grass, dirt and gravel. This type of terrain system wasn't unusual in game engines at the time, and we naively thought it would be enough for our game designers. |
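As a small illustrative aside, this is roughly the data that first terrain system implies: a grid of points, each carrying a height offset and a 0-to-1 blend value choosing between the two textures. The class and field names are invented for illustration.

```cpp
#include <vector>

// One grid point of the early terrain system: a vertical offset and a blend
// weight saying how much of texture B (versus texture A) shows at that point.
struct TerrainVertex {
    float height;   // offset up or down from the flat quad
    float alpha;    // 0 = all texture A (e.g. sand), 1 = all texture B (e.g. grass)
};

class TerrainPatch {
public:
    explicit TerrainPatch(int size) : size_(size), verts_(size * size, {0.0f, 0.0f}) {}

    // The "paintbrush" tools adjusted these two values per grid point.
    void RaiseVertex(int x, int y, float amount) { At(x, y).height += amount; }
    void PaintBlend (int x, int y, float alpha)  { At(x, y).alpha = alpha; }

    // At render time the pixel color is a simple interpolation of the two
    // textures using the interpolated alpha: color = lerp(texA, texB, alpha).
    const TerrainVertex& Vertex(int x, int y) const { return verts_[y * size_ + x]; }

private:
    TerrainVertex& At(int x, int y) { return verts_[y * size_ + x]; }
    int size_;
    std::vector<TerrainVertex> verts_;
};
```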
cn_261_displacements_2
|
Charlie Brown | Once the initial terrain system was up and running, we expected level designers to use it for outdoor spaces. Instead, they began experimenting, adding detail in unexpected places. They used the subdivided quads on top of walls to create ruined edges, rounded off street curbs, and even attempted to make pipes. This met with varying success since the terrain system was designed for landscapes, not architectural detail. So we returned to the code mines, transforming the height-based terrain system into a fully 3D displacement system. Now, instead of only moving grid points up or down, designers could move them in any direction. This allowed them to add extra detail to all kinds of surfaces, using blended textures to smooth transitions between the displacement quads and the BSP geometry. However, this required additional tech work to ensure consistent lighting between the two renderers, or those edges would have been noticeably mismatched. But with all this done, the level designers seemed satisfied and we naively thought that this time, it really would be enough. |
cn_262_displacements_3
|
Charlie Brown | With the general displacement system in hand, level designers didn't stop at adding detail to the world and returned to their original goals of building large-scale outdoor sections. This led to another round of updates to balance performance, ensuring the system could handle both fine detail and larger ground terrain. Texturing had to improve as well, especially to support complex structures like overhanging cliffs that weren't possible with the original height-based system. Designers, never ones to rest on their laurels, also started using displacements to create caves and tunnels. While these additions looked great, they required even more technical adjustment. Eventually, we managed to wrangle the competing requirements of all the ways that displacements were being used, and they shipped in large sections of our game world. Looking back, displacements are a clear example of how technology and game design evolve together. It's a chicken-and-egg problem—we can't foresee all the ways a piece of tech will impact the game, but we also can't build the tech without knowing what it needs to accomplish. So, as is always the case, we have to build something. We start with a solid guess and see where it leads us. |
cn_263_particles
|
Chris Green | For the Source engine, we invested in expanding particle and special effects technology. This allowed for more realistic and complex particle movement, as well as more intricate shapes and shader effects. In particular, adding bumpmapping to our particle features made a huge difference in their volumetric look. While later iterations of the Source engine included a powerful particle system editor, the original version used in Half-Life 2 relied on hand-crafted effects—each requiring a programmer to meticulously build the effect manually, which was a pretty labor-intensive process. |
cn_264_cubemaps
|
Chris Green | With the addition of materials and shaders, we needed a solution for reflective surfaces like glass and steel. These surfaces had to know what to reflect, but in 2003, we couldn't do any of that rendering in realtime. So, we developed the cubemap system for precomputed reflections. A cubemap is a set of six images on the inside of a cube, capturing the environment in every direction—like a panoramic photo that surrounds you on all sides. Level designers placed invisible marker entities throughout the levels, and during map compilation, we generated a cubemap at each marker. Then, during gameplay, when rendering a reflective surface, we locate the nearest precomputed cubemap to create the reflection. |
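A minimal illustration of the lookup described: at render time, pick the precomputed cubemap whose capture point is closest to the surface being drawn. The structures here are hypothetical stand-ins.

```cpp
#include <cfloat>
#include <vector>

struct Vec3 { float x, y, z; };

// A precomputed environment map baked at a designer-placed marker entity.
struct CubemapSample {
    Vec3 origin;        // where the six faces were captured
    int  textureHandle; // handle to the baked cubemap texture
};

static float DistSq(Vec3 a, Vec3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// When rendering a reflective surface at 'surfacePos', use the nearest bake.
int FindNearestCubemap(const std::vector<CubemapSample>& samples, Vec3 surfacePos)
{
    int best = -1;
    float bestDistSq = FLT_MAX;
    for (size_t i = 0; i < samples.size(); ++i)
    {
        float d = DistSq(samples[i].origin, surfacePos);
        if (d < bestDistSq) { bestDistSq = d; best = static_cast<int>(i); }
    }
    return best; // index of the cubemap to bind, or -1 if none were placed
}
```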
cn_265_water_shader
|
Gary McTaggart | The most complex shader we made for the game was the water shader, which made full use of the newly available programmable graphics cards. When water was present, the scene was rendered three times: first, the underwater world for refraction; second, a reflected view; and third, the parts above the water. When rendering the water surface, we blended the refractive and reflective views, showing more reflection at glancing angles using the Fresnel equation. We also tracked the ground depth below the water's surface, making it more transparent and less refractive in shallow areas. Additionally, we had to create two alternate water rendering techniques for lower-end graphics cards: mid-range cards displayed a lower-detail reflection without refraction or transparency, while low-end cards only rendered a partially transparent surface. |
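As a rough sketch of that per-pixel blend, written in C++ rather than the actual shader language: a Schlick-style Fresnel approximation mixes the refraction and reflection renders, and shallow water leans toward plain transparency. The constants and function names are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>

struct Color { float r, g, b; };

static Color Lerp(Color a, Color b, float t)
{
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t, a.b + (b.b - a.b) * t };
}

// Schlick's approximation to the Fresnel term: more reflection at glancing angles.
static float Fresnel(float cosViewAngle, float f0 = 0.02f)
{
    float m = 1.0f - cosViewAngle;
    return f0 + (1.0f - f0) * m * m * m * m * m;
}

// One water pixel: blend the refracted (underwater) and reflected renders,
// fading toward the unrefracted scene where the water is shallow.
Color ShadeWaterPixel(Color refraction, Color reflection, Color sceneBehind,
                      float cosViewAngle, float waterDepth, float shallowDepth = 16.0f)
{
    Color surface = Lerp(refraction, reflection, Fresnel(cosViewAngle));
    float shallowness = 1.0f - std::min(waterDepth / shallowDepth, 1.0f);
    return Lerp(surface, sceneBehind, shallowness); // shallow water shows the bottom
}
```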
cn_266_normal_maps
|
Gary McTaggart | Half-Life 2 was developed during an exciting time for graphics, as the first programmable shader-supported graphics cards were hitting the market. Using this technology, we became one of the first games to implement normal maps, a technique that enhanced the lighting on surface geometry. The normal map described the surface's roughness, allowing each pixel of a polygon to be lit in a way that made surfaces appear bumpier and more detailed than was ever possible in Half-Life 1. |
cn_267_rendering_systems
|
Gary McTaggart | In the early 2000s, we couldn't afford a general-purpose renderer with the broad capabilities of a modern engine. Instead, CPU, GPU, and memory limitations required highly specific solutions to rendering challenges so we could squeeze every bit of performance out of players' hardware. As a result, Half-Life 2 actually has three different rendering engines: one for world geometry, another for outdoor terrain, and a third for dynamic entities like characters and objects. These three renderers had to support a complicated matrix of features—collisions, ray traces, decals, offline lighting, visibility calculations, and more. On top of that, each renderer needed to support DirectX 7, 8, and 9. The differences among these DirectX versions were significant, often requiring different data storage and rendering approaches; they even used different programming languages! Altogether, we essentially had nine different renderers, each needing unique solutions and getting them all to look consistent was one of the most challenging tech problems we faced during development. |
cn_268_dynamic_shadows
|
Gary McTaggart | With a focus on characters and physics objects, we felt a dynamic shadow system was necessary to render them in all of their glory. In 2003, GPUs didn't have any of the dedicated shadow capabilities that they have today, so performance was a tricky problem. Our eventual solution went something like this, running every frame. First, find all the objects that might have shadows in view. Doing this meant figuring out which objects were in the view frustum, or slightly behind it, to catch entities that were offscreen but whose shadow might end up onscreen. Then, we'd render a black shadow version of each one of those objects into a single shadow texture. For that render, we'd use the lowest level-of-detail model for the object, to be as fast as possible. Finally, we'd use the shadow texture to draw each of the shadows into the player's view, darkening the already rendered pixels. We'd carefully cut around each shadow in that shadow texture, to ensure they correctly overlaid each other in the case where multiple shadows overlapped. It's neat to look back on how we did these things, but ultimately we're a lot happier today, where GPU hardware makes it a much more straightforward problem. |
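A stub-level sketch of that per-frame loop, with the real rendering work replaced by print statements; every name here is hypothetical and only the ordering of the steps follows the description above.

```cpp
#include <cstdio>
#include <vector>

// Stand-in types; in the real engine these are the renderer's own structures.
struct Entity { const char* name; };
struct ShadowTexture { int width, height; };

// Hypothetical stubs marking where the real rendering work would happen.
static void RenderBlackLowLodModel(const Entity& e, ShadowTexture&) {
    std::printf("  draw low-LOD black silhouette of %s into shadow texture\n", e.name);
}
static void ProjectShadowIntoView(const Entity& e, const ShadowTexture&) {
    std::printf("  darken scene pixels under %s's shadow\n", e.name);
}

// Per-frame loop: callers pass the casters found in (or just outside) the view.
void RenderDynamicShadows(const std::vector<Entity>& casters, ShadowTexture& atlas)
{
    // Render each caster as a flat black, lowest-LOD model into one shared
    // texture, clipping around each shadow so overlaps don't double-darken.
    for (const Entity& e : casters)
        RenderBlackLowLodModel(e, atlas);

    // Composite each shadow into the player's view over the rendered scene.
    for (const Entity& e : casters)
        ProjectShadowIntoView(e, atlas);
}

int main()
{
    ShadowTexture atlas{1024, 1024};
    RenderDynamicShadows({{"zombie"}, {"barrel"}}, atlas);
}
```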
cn_269_combinatorial_shader_explosion
|
Gary McTaggart | Rendering became much more complex during Half-Life 2’s development with the addition of hardware shaders—small bits of code controlling how each pixel on a surface should render. In 2003, shaders were far more limited than today: Half-Life 2–era shaders lacked conditionals and had severe code size restrictions. This meant no if/else branching logic to adjust based on different conditions, and no large shaders with broad feature sets. Also, compiling shaders as needed was too slow to be useful, so we had to precompile them. To work within these limits, we had to compute every possible shader combination and store each one individually as its own shader. For example, if a shader affected a surface based on a single light source, we’d need another shader version for two lights, another for three, and so on. And it was combinatorial—if that shader also supported a normal map, we’d need two versions per light, one with the normal map and one without. With features like texture blending, vertex tinting, specular highlights, alpha testing, and various light types (directional, point, spotlights), our shader count exploded. By release, we had thousands of individual shaders. Shortly after, we released Lost Coast, adding HDR support, which doubled the shader count again. Once compiling shaders on individual developers' machines started taking too long, we switched to distributing the shader compiles over the entire network using VMPi. We also had a dynamic shader compilation mode that was useful for developing shaders, which gave fast turnaround at the expense of framerate hitching when a shader had to be compiled. |
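To make the combinatorial growth concrete, here is a tiny worked example with invented feature counts; the specific dimensions are assumptions, but multiplying them together shows how a single shader family balloons into hundreds of precompiled variants.

```cpp
#include <cstdio>

int main()
{
    // Hypothetical feature dimensions for one shader family; each combination
    // had to be precompiled and stored as its own shader on 2003-era hardware.
    const int lightCounts  = 4;  // 0..3 lights handled by the shader
    const int lightTypes   = 3;  // directional, point, spotlight paths
    const int normalMap    = 2;  // with / without
    const int textureBlend = 2;
    const int vertexTint   = 2;
    const int specular     = 2;
    const int alphaTest    = 2;

    int variants = lightCounts * lightTypes * normalMap * textureBlend *
                   vertexTint * specular * alphaTest;
    std::printf("one shader family -> %d precompiled variants\n", variants);  // 384
    std::printf("adding HDR on/off -> %d variants\n", variants * 2);          // 768
}
```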
cn_270_gpu_wild_west
|
Jay Stelly | During development, we faced numerous decisions influenced by our choice of minimum spec—the least powerful CPU and GPU combination that would still deliver a good experience for customers. In the early 2000s, there was far more variety among GPUs than today, with wide differences not only in speed but in fundamental approaches to rendering. But at the time, we had no real data on the hardware our customers were using. What CPUs and GPUs did they have? How much RAM? Which version of Windows? We reached out to Microsoft, hoping they might know answers to questions like, 'How many DX7 cards are in use? Or DX8?' Unfortunately, they didn’t have the data either. Realizing we were at risk of making bad decisions without these insights, we developed an analysis tool that allowed players to report their hardware specs to us, and integrated it into the early version of Steam. The data was so useful that we decided to make it public, launching the Steam Hardware Survey in April 2003. It’s been helping us—and hopefully other developers—make informed decisions ever since. |
cn_271_3d_skybox
|
Jay Stelly | We’ve always needed to render a world that feels larger than our map limitations allow. In Half-Life 1, we used a cubemap texture to let players see beyond the playable area. But in Half-Life 2, we wanted that view into the larger world to be dynamic, not just a static texture. Our map size limitation was about half a mile in each dimension, but we realized we could create a much larger outer world if we stored it at a reduced scale. So, we invented what we called the 3D Skybox. The 3D Skybox is a small box hidden within the actual game map, containing geometry at 1/16th scale. Inside it, a reference point corresponds to a matching origin in the main map. Then, when we render the space beyond the playable area, we use the geometry from the 3D Skybox, positioned by that reference point and scaled up 16 times. Getting the lighting and fog to blend seamlessly across the transition from the main map to the skybox presented some sticky challenges, but it worked. The result made the world feel significantly larger and more immersive than it had in Half-Life 1. |
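A minimal sketch of the camera trick described: to render the skybox pass, the camera is moved into the miniature world by dividing its offset from the shared reference point by the scale factor. The entity and field names are illustrative, not the engine's own.

```cpp
struct Vec3 { float x, y, z; };

// The skybox is authored at 1/16th scale inside the map, so geometry drawn
// from it appears 16 times larger once projected around the real world.
constexpr float kSkyboxScale = 16.0f;

struct SkyCamera {
    Vec3 worldReference;  // origin in the full-size map
    Vec3 skyboxOrigin;    // matching reference point inside the miniature skybox
};

// Where to place the camera when rendering the 3D skybox pass for a player
// whose eye is at 'playerEye' in the full-size world.
Vec3 SkyboxEyePosition(const SkyCamera& cam, Vec3 playerEye)
{
    return {
        cam.skyboxOrigin.x + (playerEye.x - cam.worldReference.x) / kSkyboxScale,
        cam.skyboxOrigin.y + (playerEye.y - cam.worldReference.y) / kSkyboxScale,
        cam.skyboxOrigin.z + (playerEye.z - cam.worldReference.z) / kSkyboxScale,
    };
}
```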
cn_272_missing_material
|
Jay Stelly | Early on in developing our low-level graphics code, we realized it would be difficult to spot polygons that weren’t rendering due to a missing texture. When this happened, you’d typically just see black where the polygon should be, which could easily go unnoticed in darker scenes. To catch this bug more reliably, we created an error texture that would be visually unmistakable and generated automatically whenever a texture failed to load. This gave rise to the now-iconic purple and black checkerboard texture, which has since taken on a life of its own well beyond Half-Life 2. |
cn_273_air_resistance
|
Jay Stelly | One important feature we had to add to our physics engine for Half-Life 2 was air resistance. At the time, physics engines would typically implement a general damping feature for slowing objects down as they moved. But this merely removes energy in a symmetric way. In order to create more believable motion we decided to simulate air resistance instead. This allowed us to get asymmetric behavior, where the amount the object was slowed by moving through air was different depending on how much surface area of the object was facing the direction of movement or rotation. This feature was key to making explosions and objects thrown by the gravity gun look believable. Fast moving and fast spinning objects are among the most expensive things to simulate in Half-Life 2's physics engine, so air resistance also improves CPU performance in cases like explosions. It is also key to presenting different materials and masses of objects. More dense objects can push aside air more easily, and this is visible. Having this effect in the simulation helps make the difference in density between cardboard boxes and wooden planks more apparent and really helps make the objects in Half-Life 2's world much more convincing. |
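A simplified sketch of the asymmetric drag idea: the drag force depends on how much of the object's surface area currently faces the direction of travel, so a plank falling face-on slows more than one knifing through edge-on. The quadratic drag form, the box-face approximation, and the names are assumptions, not the engine's exact model.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float Length(Vec3 v)         { return std::sqrt(Dot(v, v)); }
static Vec3  Scale(Vec3 v, float s) { return {v.x*s, v.y*s, v.z*s}; }

// Face areas of a box-like object along its three axes (assumed already
// rotated into world space here, to keep the sketch short).
struct DragShape { Vec3 axisX, axisY, axisZ; float areaX, areaY, areaZ; };

// Projected area facing the direction of motion: a thin plank presents a big
// area when moving face-first and a tiny one when moving edge-first.
static float ProjectedArea(const DragShape& s, Vec3 dir)
{
    return s.areaX * std::fabs(Dot(dir, s.axisX)) +
           s.areaY * std::fabs(Dot(dir, s.axisY)) +
           s.areaZ * std::fabs(Dot(dir, s.axisZ));
}

// Quadratic drag opposing the velocity, scaled by the projected area.
Vec3 AirResistanceForce(const DragShape& s, Vec3 velocity, float dragCoefficient)
{
    float speed = Length(velocity);
    if (speed < 1e-4f)
        return {0, 0, 0};
    Vec3 dir = Scale(velocity, 1.0f / speed);
    float force = dragCoefficient * ProjectedArea(s, dir) * speed * speed;
    return Scale(dir, -force); // always opposes the direction of motion
}
```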
cn_274_early_tech_planning
|
Ken Birdwell | Shipping a game always teaches us a lot - we finally get to find out which choices we made were good ones, and which were ones we should do differently next time. So after Half-Life 1, when we sat down to start thinking about what kind of sequel we could build, we had all kinds of player feedback to digest. From it, we tried to build a master list of things that Half-Life 2 was going to focus on. We didn’t try to design a specific player experience, instead, we found it more useful to approach the question from a technological standpoint. Black Mesa’s security guards and scientists had worked well. What could we do if we pushed really, really hard on characters? We liked our interactive storytelling. What could we build that’d allow us to do even more of it, and at greater variety? The exploration and interaction within Black Mesa was compelling to players. Maybe a physics system would allow us to take that further? In all of these areas, we tried to imagine what the best version of it could look like, years in the future, and backsolve from that a set of technological steps that might get us there. |
cn_275_character_lighting
|
Ken Birdwell | In the first year of development, we were struggling to get our characters to feel as realistic as we wanted them to. There were actually quite a number of different problems, all of which eventually required specific technological and artistic improvements to fix. The first major problem was lighting. At the time, lighting technology in games had many shortcuts, some due to performance, but many due to misunderstandings of how it all works. To make characters look good at the time, models and textures had to be created with much of their lighting solution baked into them - like shadows and glossiness. The characters looked cool - but they also looked like video game characters. We started building Alyx with this approach, because that was how you were supposed to do it. We used the baked solutions and lit her like a movie actor - essentially, as if she had a custom lighting rig that followed her around and ensured she always looked good. But it didn’t really work. The range of lighting environments, and the dynamic nature of video games, meant that she’d look great in some environments, but always end up looking terribly unrealistic at some point. We tried iterating on her lighting rig, increasing its capabilities, but eventually it felt like we were at a dead end. Returning to the drawing board, we decided to try a different approach. We built her and the lighting and the textures from the ground up, throwing everything away. Focusing not on what we wanted her to look like under ideal lighting, but on how things were supposed to work in the real world. It might seem obvious, but it wasn’t how it was done, at least not in video games. When looked at outside the game’s renderer, these models and textures actually looked worse than the previous versions. But we managed to methodically pull out all the shortcuts and misconceptions about how light and texture and graphics cards worked and put it all back together. After all that was done, when we were finally able to look at Alyx in game, with her new models and textures, in a dynamic environment that kept her visually consistent at all times, well, she looked great. Totally seamless and part of the Half-Life 2 world. |
cn_276_gamma_correction
|
John Morello | Our game engine has a rendering system that’s responsible for taking the entire state of the game, and calculating a color value for every pixel on your screen to represent that state. But code and software from other people are involved in the process of calculating those pixels as well. As an example, the software and hardware in your GPU and monitor. At one point during development, we were unhappy with how our characters looked. In particular, we thought they looked waxy, or sickly. Their skin was wrong. We spent some time tuning our lighting, but we just couldn’t get both our characters and our world to look right. Luckily, we still had a fully featured software renderer, which allowed us to run the game without a GPU, because this was an era where many players still didn’t have one. And looking at the game on that renderer, we noticed that our lighting didn’t have the same problem. So we went through our code with a fine-toothed comb, trying to figure out where our rendering code was doing something wrong when it was working with a GPU. After failing to do so, we tried talking to the GPU manufacturers, asking them to double-check they weren’t doing anything wrong with the pixel values we were passing them. That was the start of a long journey - and, long story short, it was a journey of almost 2 years of back-and-forth communication with GPU manufacturers before we were able to convince them that they were doing gamma correction incorrectly in the low-level math on their cards. During this, we’d even had to build a system that validated the color values of an individual pixel throughout the entirety of our rendering pipeline. But eventually, at the end, we finally got new drivers back from them, and our characters immediately looked like they should, matching the software renderer. |
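As a short illustration of the math at the heart of that dispute: display colors are gamma-encoded, and lighting has to be computed on linear values and converted back correctly. This sketch uses the simple power-law form rather than the exact sRGB piecewise curve, and the 2.2 exponent is the usual approximation.

```cpp
#include <cmath>
#include <cstdio>

// Convert a display (gamma-encoded) color channel to linear light and back.
// Lighting math must happen in linear space; mixing the two spaces is what
// produces washed-out or waxy-looking results.
static float ToLinear(float gammaValue, float gamma = 2.2f)
{
    return std::pow(gammaValue, gamma);
}
static float ToGamma(float linearValue, float gamma = 2.2f)
{
    return std::pow(linearValue, 1.0f / gamma);
}

int main()
{
    // Halving brightness must be done on the linear value, not the encoded one.
    float encoded = 0.5f;
    float wrong   = encoded * 0.5f;                      // darkened in gamma space
    float right   = ToGamma(ToLinear(encoded) * 0.5f);   // darkened in linear space
    std::printf("gamma-space halving: %.3f  linear-space halving: %.3f\n", wrong, right);
}
```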
cn_277_facial_technology
|
Ken Birdwell | We wanted players to care about our characters, and to even have a chance at achieving that, we knew we had to invest a lot in technology for our characters’ faces. It’s well understood that humans are wired to pay attention to faces, subconsciously reading all kinds of information from them. So it’s an extremely high bar, and if we couldn’t reach it, we’d probably push players away from our characters, not towards them as we hoped. We started out doing a lot of research. It was not heartening. The best examples we could find weren’t anywhere near being able to run in realtime, while the best realtime ones weren’t close to the quality we wanted. We wanted to look as good as an animated movie, but they often took hours to render frames and were totally linear, authored by up to a hundred animators, whereas ours had to be dynamic - a character might be trying to speak dialog over a facial mood derived from the player’s actions and created by just a handful of artists. These requirements eventually led us to Dr. Paul Ekman’s Facial Action Coding System, a system for taxonomizing human facial movements developed back in the 1970s. It gave us a vocabulary for describing facial movements throughout our tools and code, and a target to aim for when designing our characters’ faces. With it as a framework, we started a years-long process of iterating on how we built the faces and heads of our characters. Where others had tried taking bits of Ekman’s work, we wanted to try to use it all. That meant a lot of time spent figuring out how to do each piece within the tight performance constraints we had. But we also spent a lot of time just looking at people and faces, trying to find the extra features that would add the most value - like skin elasticity and teeth shadowing. In the end, while we weren’t able to do everything we wanted, we did manage to get the pieces we thought were the most important to a convincing facial performance working on our characters. |
cn_278_eye_shader
|
Ken Birdwell | Another critical part of getting character faces to be convincing was in the eyes. Humans are great at reading eyes. There is a part of your brain dedicated to it, down to knowing where somebody is looking based on the reflections on the eyeballs. But there are all kinds of subtleties that matter. Eyes aren’t actually spheres. Eyelids get distorted by the cornea moving around underneath. Eye movement can increase the odds of blinking. Eyes don't actually line up directly behind the pupil; they're shifted slightly. All these little details might feel small, or unnoticeable, but their absence hurts the believability of the character. We worked on eyes for a long time, iterating on our eyeball model and feature set little by little. Some things required external interfaces, like what things in the environment were interesting to look at, while others were automatic, at a subconscious level in the AI - like blinking, an obvious requirement, but one that’s subtly confusing if it’s wrong. Eye reflections had a particularly large impact - they're ultimately a tiny amount of real estate on the screen, but the difference they made was enormous. We knew we were finally done when we reached the point where, with our eye shader off, a character looked like a doll, human-like but unconvincing in some way that was hard to describe - but when we turned it on, suddenly, it felt like a human was looking at us. |
cn_279_combine_gunship
|
Ken Birdwell | The gunship was an interesting problem. We had a helicopter flight model already, and didn’t really have time to build something completely new. So, we tuned the helicopter’s flight model until it felt different enough, and then derived the gunship’s organic animation state from the values coming out of that simulation. We also tried to keep it as truthful as possible. For example, there’s a limit to how much it can turn its head to aim its cannon, and if it needs to aim beyond that, it has to alter its flight path to bring that cannon around to bear - it’s not allowed to just cheat. The other key feature was its engine sound. We have sound spatialization, so the player can tell roughly where it is. But the gunship is a fast moving object, and it’s up in the air, away from any nearby reference points. As a result, it can be hard to tell exactly how far away it is, and how fast it’s moving. To fix that, we added a doppler effect to the engine sound, pitch shifting it based upon the gunship’s velocity relative to you. It’s a subtle effect, but it’s extremely effective at subconsciously helping you perceive the gunship’s location and direction. Critical information letting you know that you’re being hunted by it. |
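A small sketch of that doppler idea: the engine sound's pitch scales with the gunship's velocity component along the line to the listener. The function, the clamp, and the speed-of-sound value (in game units) are illustrative assumptions, not the engine's audio code.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float Length(Vec3 v)      { return std::sqrt(Dot(v, v)); }

// Pitch multiplier for a moving sound source and a roughly stationary listener:
// greater than 1 while the gunship closes on you, less than 1 as it flies away.
float DopplerPitch(Vec3 listenerPos, Vec3 sourcePos, Vec3 sourceVel,
                   float speedOfSound /* game units per second, illustrative */)
{
    Vec3 toListener = Sub(listenerPos, sourcePos);
    float dist = Length(toListener);
    if (dist < 1e-3f)
        return 1.0f;

    // Component of the source velocity toward the listener, clamped so the
    // classic moving-source formula never divides by a near-zero value.
    float closingSpeed = Dot(sourceVel, toListener) / dist;
    if (closingSpeed > 0.9f * speedOfSound)
        closingSpeed = 0.9f * speedOfSound;

    return speedOfSound / (speedOfSound - closingSpeed);
}
```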
cn_280_risk
|
Ken Birdwell | Looking back on Half-Life 2 now, 20 years later, it’s sobering to reflect on how terrifying it was throughout much of the development. Game development is hard. You never really get what you’d like to have, which is to be able to validate risk before you start to pile more on top of it. For example, our facial animation technology. It took probably around 3 or 4 years of work before we were confident it was actually going to work. But we couldn’t wait until then. In that first year, we immediately started making all kinds of other choices in our game design, our asset production, and our other technologies, all based on the assumption that our facial animation technology was going to work in the end, from a quality and a performance standpoint. If it hadn’t, many of those other choices wouldn’t have worked either. And the same risk profile is true of other large choices we made early on. We can’t wait, we just have to build, and hope our guesses will be true, years from now. Terrifying. |
cn_281_hugs_and_pickups
|
Ken Birdwell | Sometimes even little things can be a nightmare to get working. For example, getting a character to hug another character, or even more simply, just to pick up an object from a table can cause a lot of issues. Even worse, in our dynamic environment, where the characters and objects aren’t in pre-set positions, and the player’s running around getting in the way, these require a level of synchronization that can be quite tricky. They’re made harder because humans handle them without thinking. As you’re moving to the table to pick up a cup, you’re doing all kinds of preparation with your body to get your feet and hands in the right position to allow you to do it comfortably. Everything ready right when you get there. If you’re moving in to hug someone, you’re both doing it at the same time, adjusting constantly based on each other’s movement. To get these features to work in the game involved a lot of synchronization work, making characters aware of each other and the world. And at the same time, getting them to know more about their future plans. It’s tempting to want to break the pick-up-a-cup example into discrete steps - walk to table, face cup, pick up a cup - because it’s much simpler that way, and it makes the steps nicely reusable. But that’s how robots work, not humans. Our characters had to know they were moving to a table with the intention to pick up a cup, so they could work on multiple parts of it simultaneously, making sure their hand grabs the cup right when they get there. With two characters it's even more of a negotiation. Both need to agree on where and when it's gonna happen, and everything needs to fall into place in a seamless fashion. While these moments of simple interaction between our characters in the world are often short, they were critical to making them seem like believable people. |
cn_282_ropes
|
Mike Dussault | One of the things we noticed in all of the reference images that we gathered for City 17 was that there were cables and wires everywhere, and that they were a large part of the high frequency detail in view. We loved the look of these older, European cities enmeshed in the haphazard web of modern life. We felt like we needed a solution for cables, ropes, and similar elements, and we wanted them to be dynamic. Our level designers were eager to create plug puzzles, and connecting a cord to each plug helped players better understand the mechanics. Early in production, we built a rope system and iterated on it throughout development. Under the hood, each rope is made of multiple segments stretched between the two endpoints, and each segment can move independently. Each segment acts like a spring and responds to forces in the game, such as a gentle wind or the passing of a Combine gunship overhead. When rendered, each segment is shown facing the camera and is bump-mapped, which is crucial for giving the ropes a realistic, rounded appearance. |
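A compact sketch of a segment-based rope of the kind described, using simple Verlet-style integration with distance constraints between points; the integration scheme, constants, and names are illustrative rather than the engine's actual rope code.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Add(Vec3 a, Vec3 b)    { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
static Vec3 Sub(Vec3 a, Vec3 b)    { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 Scale(Vec3 a, float s) { return {a.x*s, a.y*s, a.z*s}; }
static float Length(Vec3 v)        { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// One rope: a chain of points between two pinned endpoints. Each segment tries
// to keep its rest length, and external forces (gravity, a gentle wind, the
// wash of a passing gunship) push the free points around.
struct Rope {
    std::vector<Vec3> pos, prevPos;
    float segmentLength;

    Rope(Vec3 a, Vec3 b, int segments)
    {
        segmentLength = Length(Sub(b, a)) / segments;
        for (int i = 0; i <= segments; ++i)
            pos.push_back(Add(a, Scale(Sub(b, a), float(i) / segments)));
        prevPos = pos;
    }

    void Simulate(Vec3 acceleration /* gravity plus wind */, float dt)
    {
        // Verlet integration: the first and last points stay pinned.
        for (size_t i = 1; i + 1 < pos.size(); ++i)
        {
            Vec3 velocityStep = Sub(pos[i], prevPos[i]);
            prevPos[i] = pos[i];
            pos[i] = Add(Add(pos[i], velocityStep), Scale(acceleration, dt * dt));
        }
        // A few relaxation passes pull neighboring points back to the rest
        // length, which makes the chain hang and sway like a slack cable.
        for (int pass = 0; pass < 4; ++pass)
            for (size_t i = 0; i + 1 < pos.size(); ++i)
            {
                Vec3 delta = Sub(pos[i + 1], pos[i]);
                float len = Length(delta);
                if (len < 1e-6f) continue;
                float diff = (len - segmentLength) / len;
                bool pinA = (i == 0), pinB = (i + 1 == pos.size() - 1);
                if (!pinA) pos[i]     = Add(pos[i],     Scale(delta, 0.5f * diff * (pinB ? 2.0f : 1.0f)));
                if (!pinB) pos[i + 1] = Sub(pos[i + 1], Scale(delta, 0.5f * diff * (pinA ? 2.0f : 1.0f)));
            }
    }
};
```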
cn_283_vmpi
|
Mike Dussault | During development, we became increasingly frustrated with our slow gameplay iteration speeds, mainly because map compilation—needed to precompute lighting and visibility—was taking hours. Computers in 2000 were far slower than today’s, and they had only a single CPU core each. Inspired by SETI@Home, a distributed program that used PCs worldwide to analyze radio waves for signs of extraterrestrial life, we built VMPi, which was our own distributed system that spread the map compilation process across many office computers. Today, these kinds of distributed systems run on giant GPU clusters, but at the time, we just ran it on all the computers in our office. To deal with the fact that our lighting code was evolving as quickly as our maps were, VMPi also distributed new .exe files to everyone’s computer too. In retrospect, this was a pretty terrifying security concept, just awfully bad, it's like the stuff of nightmares for IT folks. Because we were just sending an arbitrary executable to all of our work PCs and they would just run it. It was absolute madness. But it worked for us and it did significantly increase our map iteration speed, and as a result, ended up being a crucial tool. |
cn_284_field_of_view
|
Kerry Davis | In Half-Life 1, we used a 90-degree Field of View, or FOV, which was fairly standard for first-person shooters at the time. But during Half-Life 2’s development, we grew unhappy with it. With our game’s focus on characters, we’d put extensive effort into detailed facial and body animations, but the 90-degree FOV just didn’t allow players to get close enough to fully appreciate that detail. So, we began experimenting with a tighter FOV and ultimately landed on 75 degrees. It took some adjustment for both us and players, and it required an additional FOV for viewmodels, which are the player’s weapon models held at the bottom of the screen. They were originally built with 90 degrees in mind, so they looked distorted at 75. But this change succeeded in doing what we wanted: putting our characters front and center in the game. [A sketch of rendering with a separate viewmodel FOV appears after the commentary table.] |
cn_285_reference_maps
|
Randy Lundeen | As our characters grew more lifelike between Half-Life 1 and Half-Life 2, we wanted the game world to feel equally realistic. In Half-Life 1, the limitations of first-person games often made it easier to lean on stylized or abstracted environments. But with Half-Life 2, we set out to create a setting that felt grounded in reality, which meant focusing on a lot of small details we hadn’t before. One of our main steps was creating reference maps to standardize how we’d map real-world proportions into the game. These maps established details like wall and door dimensions, doorknob heights, interior lighting, furniture scale, and the general scale of common objects. We created standardized textures for level designers, complete with scale and lighting values, so layouts would be consistent from the start. Besides setting up the physical world, these reference maps defined how the player would move through it—how high they could jump, how low they could duck, and the size of spaces like vents. Using these maps as a foundation, we were able to keep our world geometry consistent and ensure realistically sized spaces across the game. |
cn_286_response_rules
|
Scott Dalton | We knew early on that we wanted a lot of dynamic dialogue in the game, so we needed a system to handle it effectively. We aimed for a system that allowed sound designers to add and tweak lines without involving programmers, made the dialogue tightly responsive to the game’s state, and minimized repetition. With these requirements, we created the Response Rules system. Whenever a character speaks outside of a choreographed scene, this system determines which line to deliver. Rather than picking lines directly, the character AI is constantly flagging “opportunities for speech” whenever a relevant event happens: spotting the player, getting hurt, firing the last round in a gun, encountering overwhelming enemies, and so on. With each opportunity, it sends the Response Rules system a packet of data about the game’s current state, and the system returns the most fitting dialogue line, if there is any. This setup allowed our sound designers to create a detailed decision matrix for dialogue. For example, a character might have a different set of lines for announcing a reload while under fire, versus after the end of a fight, or if only one ally is left standing, or if there’s a Strider in sight. By the time we shipped Half-Life 2, we felt we’d only scratched the surface of this system’s potential. It became a core feature of Episode One’s Alyx and went on to power dynamic dialogue in our later games, including TF2’s classes and the cast of Left 4 Dead. [A minimal matcher sketch in this style appears after the commentary table.] |
cn_287_audio_caching
|
Yahn Bernier | Another problem we ran into was the sheer amount of audio in the game. With all the gameplay sounds, music, and dialogue, we had too much audio to store it all in memory, and memory was at a premium in 2003. But, as you can imagine, we needed to be able to play sounds immediately, especially if they were gameplay related. So we built an audio cache, which stored only the first 125ms of every audio file and kept that in memory. That 125ms was our best guess for how long it'd take to stream an audio file in from disk. Then, whenever a sound was played in game, we'd play the start of it from our audio cache while simultaneously kicking off a load of the full audio file from disk. By the time our cached start ended, we'd have the file loaded and could continue playing the rest of the sound. There were some other minor features, like keeping recent sounds fully in memory so that high-traffic ones didn't need to be loaded constantly. In today's engines, all kinds of assets, beyond just sounds, typically use the same kind of strategy. [A minimal caching sketch in this style appears after the commentary table.] |
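Ken Birdwell's commentary on character interactions above contrasts a robot-like sequence of discrete steps with characters that know their overall intent and overlap those steps. The following is only a minimal sketch of that idea, with hypothetical names and timings rather than the game's animation code; it just shows how knowing the goal up front lets the reach blend into the final stretch of the walk instead of waiting for the walk to finish.

```cpp
// Sketch only: an intent-aware "pick up the cup" plan versus discrete steps.
// All types, names, and timings here are invented for illustration.
#include <algorithm>
#include <cstdio>

struct PickupIntent {
    float distanceToTable;  // how far the character starts from the table
    float reachDistance;    // distance at which the hand can start reaching
};

// Robot-like version: walk all the way there, then reach; steps run back to back.
float SequentialTime(const PickupIntent& intent, float walkSpeed, float reachTime) {
    float walkTime = intent.distanceToTable / walkSpeed;
    return walkTime + reachTime;
}

// Intent-aware version: because the character knows it is walking in order to
// pick up the cup, the reach overlaps the last stretch of the walk, so the hand
// lands on the cup right as the body arrives at the table.
float BlendedTime(const PickupIntent& intent, float walkSpeed, float reachTime) {
    float walkTime = intent.distanceToTable / walkSpeed;
    float overlap  = std::min(reachTime, intent.reachDistance / walkSpeed);
    return walkTime + reachTime - overlap;
}

int main() {
    PickupIntent intent{3.0f /*meters*/, 1.0f /*meters*/};
    std::printf("discrete steps: %.2fs\n", SequentialTime(intent, 1.5f, 0.8f));
    std::printf("intent-aware:   %.2fs\n", BlendedTime(intent, 1.5f, 0.8f));
}
```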
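The cn_282_ropes entry describes each rope as a chain of segments between two endpoints, with each segment acting like a spring that responds to forces such as wind. Below is a minimal, self-contained mass-spring sketch in that spirit, using invented constants and a simple explicit integrator rather than the engine's actual solver; the camera-facing, bump-mapped rendering it mentions is not shown.

```cpp
// Mass-spring rope sketch (illustrative constants, not engine code).
// Nodes hang between two pinned endpoints; each segment pulls its neighbors
// toward a rest length, and external forces (wind, gravity) push the nodes.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec2 { float x = 0, y = 0; };

struct Rope {
    std::vector<Vec2> pos, vel;
    float restLen, stiffness, damping;

    Rope(Vec2 a, Vec2 b, int segments, float stiff, float damp)
        : restLen(0), stiffness(stiff), damping(damp) {
        for (int i = 0; i <= segments; ++i) {
            float t = float(i) / segments;
            pos.push_back({a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t});
            vel.push_back({});
        }
        float dx = (b.x - a.x) / segments, dy = (b.y - a.y) / segments;
        restLen = std::sqrt(dx * dx + dy * dy);
    }

    void Step(float dt, Vec2 wind) {
        // Every node starts with the external forces: wind plus gravity.
        std::vector<Vec2> force(pos.size(), Vec2{wind.x, wind.y - 9.8f});
        for (size_t i = 0; i + 1 < pos.size(); ++i) {
            float dx = pos[i + 1].x - pos[i].x;
            float dy = pos[i + 1].y - pos[i].y;
            float len = std::sqrt(dx * dx + dy * dy);
            if (len < 1e-6f) continue;
            // Hooke's law along the segment: pull both ends toward the rest length.
            float f = stiffness * (len - restLen) / len;
            force[i].x     += f * dx; force[i].y     += f * dy;
            force[i + 1].x -= f * dx; force[i + 1].y -= f * dy;
        }
        // Integrate interior nodes only; the two endpoints stay pinned.
        for (size_t i = 1; i + 1 < pos.size(); ++i) {
            vel[i].x = (vel[i].x + force[i].x * dt) * damping;
            vel[i].y = (vel[i].y + force[i].y * dt) * damping;
            pos[i].x += vel[i].x * dt;
            pos[i].y += vel[i].y * dt;
        }
    }
};

int main() {
    Rope rope({0, 0}, {4, 0}, 8, 40.0f, 0.98f);    // 8 segments, pinned ends
    for (int step = 0; step < 200; ++step)
        rope.Step(0.016f, {0.5f, 0.0f});           // light constant "wind"
    std::printf("midpoint sag: %.2f units\n", rope.pos[4].y);
}
```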
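The cn_283_vmpi entry is about fanning map compilation out to every machine in the office. The commentary does not describe VMPI's actual networking, so the sketch below only shows the general work-queue shape of such a system, with local worker threads standing in for office PCs and all names invented.

```cpp
// Work-queue sketch standing in for a distributed compile (hypothetical; the
// real VMPI sent work units and even executables over the office network,
// while this just fans fake lighting chunks out to local worker threads).
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int kJobs = 64;                 // e.g. chunks of a lightmap to compute
    const int kWorkers = 4;               // stand-ins for office PCs
    std::atomic<int> nextJob{0};
    std::vector<long long> results(kJobs, 0);

    auto worker = [&](int workerId) {
        for (;;) {
            int job = nextJob.fetch_add(1);   // grab the next unclaimed chunk
            if (job >= kJobs) return;
            long long acc = 0;                // fake "radiosity" busywork
            for (int i = 0; i < 100000; ++i) acc += (job + 1) * (i % 7);
            results[job] = acc;
            std::printf("worker %d finished chunk %d\n", workerId, job);
        }
    };

    std::vector<std::thread> pool;
    for (int w = 0; w < kWorkers; ++w) pool.emplace_back(worker, w);
    for (auto& t : pool) t.join();

    long long total = 0;
    for (long long r : results) total += r;
    std::printf("all %d chunks done, checksum %lld\n", kJobs, total);
}
```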
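For the cn_284_field_of_view entry, one common way to give the viewmodel its own FOV (a sketch under assumptions, not necessarily how the engine does it) is to build two projection matrices each frame: a 75-degree one for the world and a wider one for the weapon. The quoted figures are treated as vertical FOV here for simplicity, and the 90 degrees the viewmodels were originally authored for is used purely as an illustrative viewmodel value.

```cpp
// Sketch: separate projection matrices for the scene and the viewmodel.
// Column-major, OpenGL-style layout; all values are illustrative only.
#include <cmath>
#include <cstdio>

struct Mat4 { float m[16] = {}; };

// Standard perspective projection from a vertical FOV given in degrees.
Mat4 Perspective(float fovYDeg, float aspect, float zNear, float zFar) {
    Mat4 p;
    float f = 1.0f / std::tan(fovYDeg * 3.14159265f / 360.0f);  // cot(fov/2)
    p.m[0]  = f / aspect;
    p.m[5]  = f;
    p.m[10] = (zFar + zNear) / (zNear - zFar);
    p.m[11] = -1.0f;
    p.m[14] = (2.0f * zFar * zNear) / (zNear - zFar);
    return p;
}

int main() {
    float aspect = 4.0f / 3.0f;
    Mat4 world     = Perspective(75.0f, aspect, 8.0f, 16384.0f);  // scene camera
    Mat4 viewmodel = Perspective(90.0f, aspect, 1.0f, 64.0f);     // weapon only

    // In a frame loop you would draw the scene with `world`, then clear or
    // restrict the depth range and draw the weapon model with `viewmodel`.
    std::printf("world focal scale %.3f, viewmodel focal scale %.3f\n",
                world.m[5], viewmodel.m[5]);
}
```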
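The cn_286_response_rules entry describes handing the system a packet of facts about the game's current state and getting back the most fitting line. Here is a minimal sketch of that kind of matching, with invented rule data and a simple most-criteria-wins score; the shipped system's criteria files, weighting, and repetition controls are considerably richer.

```cpp
// Minimal "response rules" style matcher sketch (hypothetical rule data):
// the AI hands in a bag of facts about the current situation, each rule lists
// the criteria it requires, and the most specific rule that fully matches wins.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

using Facts = std::map<std::string, std::string>;

struct Rule {
    Facts criteria;       // every key/value here must be present in the facts
    std::string line;     // dialogue to speak if this rule is chosen
};

const std::string* PickLine(const std::vector<Rule>& rules, const Facts& facts) {
    const std::string* best = nullptr;
    size_t bestScore = 0;
    for (const auto& rule : rules) {
        bool matches = true;
        for (const auto& [key, value] : rule.criteria) {
            auto it = facts.find(key);
            if (it == facts.end() || it->second != value) { matches = false; break; }
        }
        // Prefer the rule with the most matched criteria, i.e. the most specific.
        if (matches && rule.criteria.size() >= bestScore) {
            bestScore = rule.criteria.size();
            best = &rule.line;
        }
    }
    return best;   // may be null: "no fitting dialogue line"
}

int main() {
    std::vector<Rule> rules = {
        {{{"concept", "reload"}}, "Reloading!"},
        {{{"concept", "reload"}, {"in_combat", "1"}}, "Cover me, reloading!"},
        {{{"concept", "reload"}, {"in_combat", "1"}, {"enemy", "strider"}},
         "Reloading... that Strider's still out there!"},
    };

    Facts facts = {{"concept", "reload"}, {"in_combat", "1"}, {"enemy", "strider"}};
    if (const std::string* line = PickLine(rules, facts))
        std::printf("%s\n", line->c_str());
}
```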
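The cn_287_audio_caching entry describes keeping only the first 125ms of each sound resident, starting playback from that cached prefix, and streaming the remainder from disk behind it. The sketch below shows the shape of that idea with invented types and a made-up file path; a real mixer would hand buffers to the audio device on its own thread rather than printing.

```cpp
// Sketch of a "play the cached first 125 ms while the rest streams in" scheme.
// Types, sizes, and the file path are illustrative; a real engine would pool
// and limit outstanding loads and feed an actual audio device.
#include <cstdio>
#include <future>
#include <string>
#include <vector>

constexpr int kSampleRate    = 44100;
constexpr int kPrefixSamples = kSampleRate / 8;   // roughly 125 ms of audio

struct CachedSound {
    std::string path;
    std::vector<short> prefix;    // always resident: the first ~125 ms
};

// Stand-in for reading the whole file from disk (the slow path).
std::vector<short> LoadFullSound(const std::string& path) {
    std::printf("streaming rest of %s from disk...\n", path.c_str());
    return std::vector<short>(kSampleRate, 0);    // pretend: 1 s of silence
}

void PlaySound(const CachedSound& sound) {
    // 1. Kick off the disk load immediately, in the background.
    std::future<std::vector<short>> rest =
        std::async(std::launch::async, LoadFullSound, sound.path);

    // 2. Submit the resident prefix to the mixer so playback starts this frame.
    std::printf("mixing %zu cached samples of %s\n",
                sound.prefix.size(), sound.path.c_str());

    // 3. By the time the prefix runs out, the full file should be available.
    std::vector<short> full = rest.get();
    std::printf("continuing with %zu streamed samples\n",
                full.size() - sound.prefix.size());
}

int main() {
    CachedSound gunshot{"sound/gunshot.wav",          // made-up asset path
                        std::vector<short>(kPrefixSamples, 0)};
    PlaySound(gunshot);
}
```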
List of speakers
- Aaron Seeler
- Adrian Finol
- Ariel Diaz
- Bill Fletcher
- Bill Van Buren
- Brian Jacobson
- Charlie Brown
- Chris Green
- Danika Rogers
- Dario Casali
- Dave Riller
- David Sawyer
- David Speyrer
- Dhabih Eng
- Doug Wood
- Eric Kirchmer
- Eric Smith
- Erik Johnson
- Gabe Newell
- Gary McTaggart
- Ido Magal
- Jakob Jungels
- Jay Stelly
- Jeff Lane
- John Cook
- John Morello
- Josh Weier
- Kelly Bailey
- Ken Birdwell
- Kerry Davis
- Laura Dubuk
- Marc Laidlaw
- Matt Boone
- Matt Wright
- Mike Dussault
- Miles Estes
- Quintin Doroquez
- Randy Lundeen
- Robin Walker
- Scott Dalton
- Steve Bond
- Ted Backman
- Tom Leonard
- Yahn Bernier