With each consecutive hardware generation it takes time to achieve what was possible at the end of the previous generation. New hardware requires new software techniques and often a return to first principles. The initial move from sprite-based to polygon-based games saw a marked increase in the spatial complexity of environments, but was accompanied by a dramatic decrease in the size and number of objects that could exist within those environments. The clearest example of this can be seen when comparing Doom and Quake, two games separated by three years and an entire dimension. It wouldn’t be until five years later that the release of Serious Sam saw a return to the sprawling environments and hundreds of enemies that Doom boasted.
Twenty years ago I was playing a game that allowed me to explore thousands of square miles of virtual terrain. I was driving snowmobiles down mountains in order to meet one of over thirty non-player characters, each with their own personality and skills, whom I would hopefully convince to join the fight against the invading forces of General Masters. This was Midwinter, prequel to the game I still consider my favourite game of all time, Midwinter II: Flames Of Freedom.
Since then, with each hardware generation, the scale of the environments in which I’ve been able to play has decreased. Only recently has the trend started to reverse, allowing me to have an experience similar to the one I had twenty years ago. Far Cry 2 is the nearest I’ve come to recapturing the experience of first playing Midwinter, yet even though Far Cry 2 shows a significant increase in graphical fidelity over Midwinter, the range of options available to me, the possibility space of the game, feels reduced.
It would be extremely narrow-minded of me to ignore the impact that advances in technology have had on my reaction to the game, or to underestimate how subtle changes in the available mechanics have altered the dynamics. Despite these advancements in both technology and design, it’s still difficult to ignore the feeling that I’m somehow playing a version of the same game I played twenty years ago, and that the core experience has changed little in that time.
Twenty years of technological advancement and several hardware generations, all so I can have essentially the same experience that was available on my Atari ST. I can’t help but wonder if that time has really been put to the best use.
This is not the only example I can think of where a recent title has felt like it could have been created years previously. Last year saw the release of Left 4 Dead; a major factor in its appeal is the ability to face off against hordes of zombies alongside three companions. Four players together fighting off dozens of mindless enemies is a fantasy that holds a lot of appeal. Yet that sense of four players against overwhelming odds is an experience I can distinctly remember having eight years ago. Alongside three friends I faced down hundreds of enemies in the twisted ancient Egyptian setting of Serious Sam. The sheer number of enemies that game is able to throw at the player is absurd; the final level is subtitled “Infinite Bodycount” and I honestly wonder how much of that is hyperbole.
The mechanics of Left 4 Dead could have been implemented seven years earlier in Serious Sam or even fifteen years earlier in Doom. The graphical fidelity of such an implementation would be much lower, but would the experience itself be that much different?
Of course it’s not only technology that has changed in that time. Those seven years have allowed artists, sound designers and level designers to hone their craft to the extent that even if Left 4 Dead or something similar had appeared earlier it would not possess the same level of craft. It takes time to learn and apply the techniques of filmic art direction and indirect training that make Left 4 Dead the holistic experience that it is.
None of this completely dispels the sensation that twenty years of technological advancement have done little for the actual design of games, and that this is a wasted opportunity. Commercial video games are approaching their fortieth anniversary, and with the first few years of each hardware generation spent trying to recreate the experiences that were possible before, it’s little wonder that it can feel like video games have had trouble growing up in that period.
8 replies on “Two steps forward…”
It would be interesting to mod an older game to play similarly to Left 4 Dead, to see if it does hold up.
How much of the problem is that you’re talking only about first-person shooters? The possibilities of the genre are well explored (I’m not saying entirely explored) and what the general audience demands is well understood (not entirely understood). The truth is that innovation isn’t going to happen that fast in a domain with so much path dependence.
Innovation in game design happens all the time, it’s just that it happens in the design of new games, not when you refine a well understood formula like the first-person shooter.
I don’t think this is specific to FPS games. No, I believe that this is relevant to all sorts of games.
I think a key point is that the cost and resources needed to create a game—and, as the author focuses on, to create and maintain large worlds populated by many characters—are increasing. And there’s no doubt that this is true.
I’ve heard many times that the biggest opportunity that comes from the “next-gen” hardware is that it unlocks more ambitious types of gameplay—complex physical systems or AIs or worlds or whatever that respond to the player in more complicated ways—ways that would not have been practical with less powerful machines. Yet most of the time, those technologies have been harnessed to make their graphics look more realistic.
At the same time, this shifts resources away from using processing power to break gameplay paradigms. And every time another “generation” comes around, teams are set back again. Monetarily, the cost has apparently been low enough for most large developers that it doesn’t hurt them too much. But I do wonder. It’s true, for instance, that when you render people with more detail, you can render fewer people. The cost falls both on your computer’s processor and on the artists and modellers who have to make their textures look sufficiently good in high definition.
“With each consecutive hardware generation it takes time to achieve what was possible at the end of the previous generation.” This opening sentence resonates deeply with me.
Dwarf Fortress.
Not that I’ve played it; its mind-boggling complexity scares the bejeezus out of me.
Still, as (or if) the priority for graphical realism plateaus, hopefully that energy will be diverted to everything else that makes up a gameworld. Maybe that’s speciously thinking this era’s tech is special; to paraphrase Bill Gates, 1GB of video RAM ought to be enough for anybody. But when, for example, Thief 3’s levels are smaller than its four year-old predecessor’s, something’s gone wrong. Why is every console effectively retired just when it’s starting to be fully exploited? It looks like this generation might be different, and maybe we won’t have to reinvent a higher poly-count wheel at the expense of any other possible advance.
@Charles
My concern is not with a lack of innovation so much as with the push for higher fidelity in graphical representation causing a retrograde progression in what is possible. I’ll admit my focus on first person shooters might have skewed my statements; however the same situation does occur in other genres.
The number of units available in real-time strategy games peaked with Total Annihilation, and after that most games never approached the scale of battle it was able to put on screen at once. It took three years before that scale of battle could be depicted again, in Shogun: Total War, and a further four until Rome: Total War was able to achieve the same effect with polygonal character models.
The size and complexity of levels in the Tomb Raider series increased with each title until a change of technology in Tomb Raider: Angel of Darkness caused that trend to stop, and by Tomb Raider: Legend the size of levels had been greatly reduced. Even the latest title, Tomb Raider: Underworld, struggles to feature levels as sprawling and complex as those in the earlier titles. In a game primarily focused on exploration and spatial navigation, that has a big influence on the play experience.
My more specific concern is that the technology we have at our disposal isn’t being used to create new designs, only to recreate the ones that have come before, or ones that could have been made before. What does Zelda: The Wind Waker do that was not possible at a lower fidelity on the Nintendo 64?
That said, as you rightly pointed out, there have been noticeable design innovations, the most relevant of these being games that make significant use of real-time physics; Rag Doll Kung Fu simply couldn’t have been made ten years ago, nor could physics-heavy puzzle platformers like Trine or LittleBigPlanet.
Let me clarify. I understand that the phenomenon you’re pointing out happens in the other genres, but my point is that the problem itself is that you’re basing your disappointment on genres that are old and well understood. There’s always going to be less innovation happening in them.
There’s a lot of innovative game design happening out there, but if you only look for it in first-person shooters, real-time strategy games, platformers, etc., you’re going to be disappointed. The big moves have already been made in those types of games, and no increase in the amount of computing power is going to change that.
If you want innovation you should search out new kinds of games! Don’t limit yourself to waiting around for old kinds of games to surprise you.
@Charles
Heh, I think I’ve had this argument with myself several times. I have a natural affinity for certain genres, and it frustrates me to see what feels like backward progression. It’s not so much about waiting for these genres to surprise me as it is a frustration that each hardware generation leads to a reset. That said, I don’t accept that these genres are in any way “played out”, or that new ideas and new uses of technology are not possible within them.
Outside of these genres I’ve seen some really interesting things done on the iPhone; however, I’ve also seen an overwhelming number of games retreading old tropes. The same is true of Flash games and XBLA titles.
There is innovation in game design out there, though we may disagree on how much; I think a lot of smaller developers have been able to thrive by working on platforms that have remained stable for several years. But if we see another mass generational change in technology, how much time will be wasted simply getting those once-innovative designs to work with the new technology? Time that could be better spent creating something that couldn’t have existed without that very same new technology.
Could you argue, then, that as the graphics cycle (or whatever it should be called) seems to be decelerating (i.e. console generations seem to be lasting longer, new games are running on slightly older PCs, etc.), perhaps we are now heading into a new era (idealistically) of accelerated game innovation, one which is allowing games such as L4D that were technically possible years ago to finally be conceived?
That said, I almost completely agree with you. Perhaps I am just that bit too young to truly be able to enjoy really old games despite my respect for them, and I certainly hope I am able to enjoy a game regardless of its graphics but still… Equally good gameplay with better visuals and sound is still, well, really good.
Kotaku had an article last week, which I didn’t really like, asking “Where are all the next-gen games?” and claiming we have seen no “next-gen” games this generation. It uses a very narrow definition of ‘next-gen’ which I do not entirely agree with, but it may be worth hunting down as it parallels what you are saying a little bit. I would find the link if it wasn’t 5am and I wasn’t on my way out the door to work!