Developing The Concept

Remember at the tail end of last year when usually based Swen Vincke of Larian got in hot water for an interview in which he came off as kind of AI-positive?
Don't worry, I'm not here to re-litigate Vincke—as far as I understand, Larian committed to stop using GenAI in their development, so he clearly understood how unpopular his take was. But while it was going on, Edmond Tran wrote an article for This Week In Videogames that I was recently reminded of, because it planted the seed of this post in my head.

In the article, Tran interviewed a number of concept artists working in the games industry about how using generative AI as reference material affects their work. The sentiment is negative, to say the least; not only do the artists consider GenAI unhelpful, they find it downright detrimental to the quality of their work. When describing why, two words come up that I hear repeatedly in discussions about AI: "process" and "discovery".

Trust the process

Let me share some quotes from the article. Here's Lucy Mutimer, an Australian illustrator and concept artist:

“Something that I have found difficult for non-artists to understand is that the ‘early messy stuff’ that non-arty folks insist can be ‘fixed by a human artist later’ is where the best work is done. You cannot brute force your way to the end conclusion of an idea – you gotta work that out.”

Here's Kim Hu, Lead Concept Artist on Rollerdrome (and the artist behind Aftermath's dope "Destroy AI" T-Shirt):

[Hu] says outsourcing even part of the early ideation stage to AI “robs you of discovery, as it will likely more or less give you exactly what you asked of it.”

“On the other hand, going through archives and real world references will allow you to stumble upon things you have never thought of before, informing and branching out your ideas further. Going down these accidental rabbit holes is a pivotal step of concept and world building to me.”

Notice the word "discovery" in there? Here's Paul Scott Canavan, concept artist and occasional art director:

“I’m seeing more and more clients generate something approximating their desired outcome and essentially asking me to make ‘something like this' […] It sucks. This practice absolutely invalidates the entire creative process, in my opinion, and makes my job harder and more frustrating. The job of an illustrator or concept artist is to draw from their years of experience to interpret a brief in a creative way.”

One thing these quotes (and many others in the article) all highlight is the value that comes from the process of iteration. Starting with nothing but an idea, then iterating on it over and over again based on your own evaluation and feedback from others, is the essence of how good art is made. Artists do not "see the art on the paper, and draw until it is set free", to paraphrase Michelangelo. They make drafts, try ideas on paper, then throw those ideas out and move on as they think of better ones. This isn't a waste of time—as Lucy Mutimer said in the quote above, it's "where the best work is done".

Finding The Voice

When Embark Studios' Arc Raiders released last year to great success, the fanfare was dampened by their use of generative AI for voice lines in the game—something they'd already gotten in hot water for when they used the same technology in their previous game The Finals.

Embark have been unapologetic about their use of AI in this way, and the justifications they've given for it come off as rather lazy to me. Those justifications also didn't hold up according to the voice actors interviewed by Maddie Agne for This Week In Videogames.

I want to take a moment to give props to the people over at This Week In Videogames. These two articles I'm referencing came out within a week of each other, and do a great job of highlighting the perspectives of the industry professionals who are most able to speak on these issues.

The article is full of refutations of the various arguments presented by Embark to defend their AI use (and is worth reading in full), but I want to focus on one specific quote from Sarah Elmaleh, a voice actor and advocate with SAG-AFTRA:

This is a process where so much of the discovery happens in the booth with an actor […] You make these discoveries and you set them in place and the actor starts to feel ground underneath their feet… And then you see them settle into a pocket and they lock in, and it’s so beautiful and joyful.

There's "discovery" again. Just like with concept art, starting the process of iteration from a rough idea provides a value that enhances the final product. Skipping that process, or starting it from a "near-finished" draft like you might get from GenAI, limits the creativity of the people involved—this leads to less novel outcomes and, in turn, less interesting games.

Code Monkeys

So far, I've written about AI in the context of fields I'm unfamiliar with. From the statements in the interviews I've linked—as well as countless similar sentiments you can find online if you're even remotely curious—it's clear to me that the use of generative AI robs the creative process of something valuable by restricting a creative's freedom to explore ideas and discover things as they go.

When I started writing this post, I wasn't sure how this concept of "discovery" applies to programming. Unfortunately, there was no convenient article with interviews from programmers I could reference. Fortunately, I am myself a game developer with nearly seven years in the industry, mostly in engineering roles, so I feel more qualified to speak about programming than I do about art or voice acting.

My initial thought was actually that AI may be less harmful to the output of programmers. The purpose of code in game development is more functional than voice acting, art, or game design—less art, more frame, canvas, and brushes. Looking only at the immediate output of an individual task, I can imagine a sufficiently advanced AI model producing code of similar quality.

But in programming—like any other discipline—going through the process leads to experience. The failures are just as important as (if not more important than) the successes; I couldn't tell you how much time we've saved on projects I worked on because someone said "I've tried that approach in the past, and it didn't work because of XYZ".

This kind of learning doesn't happen when you have the computer magic up the entire solution for you. AI may be capable of producing good code, but it also produces worse coders. Coders who don't read the documentation, who explore fewer solutions, and who are less familiar with the code they ship to production. Coders who sacrifice process in the name of speed, and so inevitably sacrifice their own learning along the way.

I have heard people argue that this means coders who are already good can leverage AI to be more productive. Maybe this is true—though I think relying too much on AI will cause your skills to stagnate and eventually deteriorate—but what I can guarantee you is this: if you're a junior programmer who relies on AI for your work, you will never be a senior. If the AI doesn't work, you're under-performing; if it does, your employer will attribute your productivity to the AI instead of you.

Joe Wintergreen put it well in a blog post last year where he compares a "Genius" programmer who uses AI to a "Layperson" who doesn't:

As any type of boss, it’s now hard to argue that you did the right thing employing the Genius. If you think AI is bad, you should have hired the Layperson. If you think AI is good, you shouldn’t have hired the Genius. If you’re the Genius, you’re now starting to realise you’ve fucked yourself in a pretty comical way: you just pitched your workplace on your own irrelevance, and you nailed it.

I can't stop you from using AI, but keep that in mind if you want to make a career in this field.

AI And The Loss of Discovery