These notes discuss innovation within the video games industry. The factual information is primarily drawn from sessions (and conversations) with Jessica Mulligan (executive producer and one of “the five most important people in the virtual world”), Jason Rutter (University of Manchester), and Brian Baglow (Indoctrimat). These notes are my personal interpretation of what was discussed, not a transcript of the event.
On this page:
- Why the games industry can’t innovate
- Breaking the cycle
- It’s like partnerships, all over again
- Future development and finance
- Thoughts on alternative models for development
Why the games industry can’t innovate
Innovation probably isn’t what you think it is. It isn’t the latest console. Or making it all look prettier (a higher polygon count). At the core of video games, we’re all still basically playing Pong (or if you prefer, “Spacewar!” or OXO) – just a more complex version of Pong. And the last major online innovation was MUD1 in 1978 (to misquote Ms. Mulligan).
The video games industry largely can’t innovate, in spite of being quite profitable. It doesn’t collectively even do basic things like use the data generated by its games to analyse players’ behaviour, and in turn build a better game. Corporately, it doesn’t care – it doesn’t want to know. Given the inherently creative abilities required to produce games, this seems odd.
The explanation for this is slightly complex. Essentially power lies with the publishers, who are risk-averse. The publishers’ power is a combination of factors:
- the absolute development cost of games ($10 million for a console title, $20 million for a massively multiplayer online game, $40+ million for World of Warcraft),
- tendency of publishers to buy out and then hold intellectual property/franchise rights, and
- for consoles, the need to meet the costs of approving games with console manufacturers.
The result is that publishers dictate to developers. Developers now tend to “work for hire” and are increasingly interchangeable (different studios working on different sequels, for example). They’re producing games designed, de facto, by publishers. Except nobody designs – they just copy whatever sold last year, because anything else represents a risk.
Publishers also control the main retail channels (stores), and the media (the impartiality of the print media was… questioned). Yuck.
That’s largely happened over the last 10 years. It was noted that developers are no longer promoted as “heroes”, rather the “games are heroes”. All the developer personalities (Sid Meier, Blizzard) existed before this period – nobody new has been allowed to appear. That isn’t because these people don’t want to talk. Rather publishers prevent them from doing so, because the publishers control the whole marketing side.
From what I can glean, aside from their lack of freedom, the people working for developers aren’t treated terribly well. The average career lasts only 4 years, mostly due to the burnout of working 60+ hour weeks. Yet there is always an eager supply of high-quality talent ready to take their place, so the situation persists.
Breaking the cycle
There are a few trends that may break this.
Low-budget, simple, online/mobile, casual games are breaking through. And increasingly professional developers are working on these, not just kids with garage projects. They are able to make the games they think will be popular, plus keep the intellectual property rights. The logical conclusion is that the “good” games of the near future (the next ten years) are likely to be quite small projects, perhaps built by mixing up blocks of pre-existing technology, rather than spending 3-5 years writing huge immersive games from the ground up.
But that’s not the only approach. Steam was cited as a clever way of using the publisher’s distribution channel (for Half-Life) to subsequently allow the developer to sell content direct to the consumer.
The really interesting one was Nintendo’s Wii. Not only is it designed for more of a casual/family market (and as the average age of game players rises, the amount of time they have decreases, so the casual market will be the one that grows with time), but the business model has subtly changed: Instead of subsidising the console and profiting on the sale of games, Nintendo makes its money from the sale of the console. This has allowed them to offer access to the console to smaller developers. Clever, huh?
It’s like partnerships, all over again
The lack of internal industry innovation underscores requests for academics to do that research – above and beyond training the “cube monkeys” (the people labouring away in the sweatshops to code the games). There is money in the system to fund it, albeit not held by developers.
The problem is precisely the same as with academics and industry in the transport sector: Essentially neither can understand a word the other says. Not just language – fundamentally different cultures. Industry can’t see value in most of what academics study. And the academic concepts that do become fundamental to the industry, aren’t seen as coming from academia. Plus the games industry isn’t trusted by journalists (in the social hierarchy game developers are down there with drug dealers), while academics are respected: Problems emerge when those academics don’t actually know what they’re talking about.
There are some parallels here with Susan Wu’s frustration in trying to get the “Web 2.0” developers and the games developers to work together.
My view here is that while the two groups can be brought closer together, few individuals will genuinely be able to bridge the divide. And right now it is hard to see anyone who can – from either side – so they clearly have a long way to go.
Future development and finance
There’s going to be a big problem getting the games industry to move forward. First, it can’t do revolution. Jessica Mulligan highlighted that the mainstream games industry had completely missed the rise of kid/teen-centric virtual worlds, in spite of this market being right next door to its core market.
But the bigger problem is capital. We’ve established that you’d have to look outside of publisher channels, due to risk-aversion. The next port of call – venture capitalists – can’t finance a project for long enough to develop it. World of Warcraft is essentially 10 years of experience, reputation and development time, with a return over the following 10 years. A Silicon Valley web-based startup might come from nowhere, take 6 months to build, and a year or two more to profit – assuming it is going to be profitable. Because the failure rate is so high (for every Amazon you back, there will be a dozen Webvans), the expected returns must also be huge.
So, we’re faced with a move towards a product that fundamentally takes longer to make, yet has no history to suggest it will build a market that lasts more than a decade. My conclusion is that it won’t happen – at least not quickly.
Thoughts on alternative models for development
Pure evolution will take a long time – gradually venture capitalists learn to support ever more complex products, which get progressively quicker to make, because most of the innovation was already done in the previous iteration. In turn, each iteration becomes less risky, because fewer and fewer companies remain in the market – only those able to keep pace with the complexity of the product. And at the same time, increasing numbers of consumers will start using these products. Rather than try to build huge worlds right away, we’ll focus on very discrete uses.
Or perhaps all this forces the development of either a platform model (development of basic structure on which useful applications are then built by others), or systems with a high degree of inter-operability. “Or”? Probably both are the same thing. The potential strength of the platform right now is that there are lots of little developers waiting to build things on it, and those potentially can be funded, because instead of building an entire world, they’ll just be building a small part of it.
Or maybe (cheaper, better understood) web-based interfaces still have decades of development to go, and will turn out to be a far more flexible tool for dealing with the huge volumes of incoming data? What’s more important, the ability to digest too much information, or the ability to interact with one another? Although again, that may not be an “or” – collectively, other people have a fantastic (if sometimes wildly skewed) filtering effect on information.