Episode 290 — Digital Domain — Doug Roble

 


Doug Roble is the Senior Director of Software R&D at Digital Domain and leads a world-class team of software developers. An original member of Digital Domain, Doug has been writing software and conducting research to advance the artistry and technology for feature films since the studio’s inception.

Doug was awarded two Academy Sci-Tech Awards: one for “Track”, an integrated software program that extracts critical 2D and 3D information about a scene and the camera used to film it; and the second for his work on the fluid simulation system at Digital Domain which incorporates innovative algorithms and refined adaptations of published methods to achieve large-scale water effects.

He has participated in SIGGRAPH nearly continuously, in one form or another, for 25 years, leading courses and presenting papers and talks. An accomplished speaker and advocate for technological advancement, Doug speaks regularly at premier conferences and events, including TED 2019, and has given keynotes at the Symposium on Computer Animation and the DigiPro Conference. He also served as Editor-in-Chief of the Journal of Graphics Tools for several years.

Doug is an active member of the Academy of Motion Picture Arts and Sciences in the Visual Effects branch. He is the chair of the Sci-Tech Awards and a member of the Sci-Tech Council.

In this Podcast, Allan McKay and Doug Roble discuss the future of visual effects, machine learning, digital humans — and what it means for VFX Artists.

Doug Roble at Digital Domain: https://www.digitaldomain.com/leadership/doug-roble/

Digital Domain’s Website: https://www.digitaldomain.com

Doug Roble on LinkedIn: https://www.linkedin.com/in/doug-roble-752a081

TED2019 Talk and Digi Doug: https://www.ted.com/talks/doug_roble_digital_humans_that_look_just_like_us

Digital Domain on Facebook: https://www.facebook.com/DigitalDomainVFX/

Digital Domain on Twitter: @DigitalDomainDD

 

HIGHLIGHTS:

[04:38] Doug Roble Talks About the Beginnings of His Career

[09:04] Doug as One of the Original Employees at Digital Domain

[15:09] Passion at the Beginning of a Career

[21:35] The Current State of Visual Effects

[28:40] Doug Discusses His Favorite Projects

[33:16] Machine Learning and Digital Humans

[48:53] Doug’s TED2019 Talk and DigiDoug

[55:01] Doug Talks About the Future of Animation

 

EPISODE 290 — DIGITAL DOMAIN — DOUG ROBLE

Hi, everyone! 

This is Allan McKay. Welcome to Episode 290!

I’m sitting down with one of the first employees of Digital Domain, Doug Roble, to talk about digital humans, artificial intelligence and the future of animation. Doug and I recently did a panel for SIGGRAPH about artificial intelligence. I found Doug really captivating. 

Being familiar with the work of Digital Domain, I admire them for pioneering so much of the research on digital humans. Doug actually did a TED Talk with an avatar of himself, DigiDoug. Doug Roble is the Senior Director of Software R&D at Digital Domain.

I love this Episode. For everyone, this should be a fascinating one!

Let’s dive in! 

 

FIRST THINGS FIRST:

[01:14]  Have you ever sent in your reel and wondered why you didn’t get the callback or what the reason was you didn’t get the job? Over the past 20 years of working for studios like ILM, Blur Studio, Ubisoft, I’ve built hundreds of teams and hired hundreds of artists — and reviewed thousands of reels! That’s why I decided to write The Ultimate Demo Reel Guide from the perspective of someone who actually does the hiring. You can get this book for free right now at www.allanmckay.com/myreel!

[1:01:30] One of the biggest problems we face as artists is figuring out how much we’re worth. I’ve put together a website. Check it out: www.VFXRates.com! This is a chance for you to put in your level of experience, your discipline, your location — and it will give you an accurate idea what you and everyone else in your discipline should be charging. Check it out: www.VFXRates.com!

 

INTERVIEW WITH DOUG ROBLE

[04:38] Allan: Doug, thank you for taking the time to chat! Do you want to quickly introduce yourself?

Doug: Sure! My name is Doug Roble. I work at Digital Domain. I’m the Senior Director of R&D there. I manage a group of people who are developing tools in the realm of visual effects, because that’s what Digital Domain does: movies, TV, games, commercials, all sorts of stuff. I’ve had a long career at Digital Domain. I’ve been writing software for 30 years now. It’s all about developing cool new tech.

[05:37] Allan: With your background, did you always aspire to work in film? When did you first discover computer graphics?

Doug: I’ve told this story before. Given my age, I was 14 years old when Star Wars came out. I was the perfect age to watch it! For you youngsters out there, it’s hard to describe how game-changing Star Wars was. Right now, everything looks like Star Wars. But in 1977, when it came out, it was stunning! That’s kind of gotten lost. People were watching it over and over again in the theaters. I saw that movie 17 times that summer! It was just a game changer of a movie at the time. My father is a scientist. He studies the upper atmosphere. I knew I would get into science. But when I saw Star Wars, I realized that you could use technology to tell stories. For my undergraduate degree, I studied computer science and electrical engineering at the University of Colorado. I was heading to be a full-on nerd. But at the end of that, I realized that there were some programs just starting up where you could learn computer science and make images. I tailored my graduate work toward that. So I knew I wanted to do this since I was a teenager.

[09:04] Allan: So you pretty much just went straight from a Ph.D. to Digital Domain?

Doug: Yup! I met my wife at grad school and we got married. She graduated first, so I hung out at OSU and got my Master’s and Ph.D. because they had a sweet computer science program! I was teaching at Ohio State and then this opportunity at Digital Domain opened up. I was the 33rd employee hired at this brand new company. It had just opened its offices in Venice. There were boxes all over the place because they were still building it. And I’ve stuck around ever since.

[05:40] Allan: I’m just curious, where was the location back then? It wasn’t next to the Firehouse?

Doug: Yup, that was the exact location! When I got there, we were just starting to work on True Lies. They were building this enormous set of ski slopes. Even though it was called Digital Domain, a lot of movies were still made with sets and practical explosions back then. We hadn’t gone fully digital. They took up most of the parking lot with that ski slope, which Arnold Schwarzenegger was going to use to get away from some bad guys. I have a photo of myself hanging out on that ski slope. It was monstrous! I knew I was in a good spot.

[11:38] Allan: Looking back at the dawn of visual effects, DD was the pinnacle. What was the hiring process back then when it was hard to find VFX people? 

Doug: It was super casual. The head of the software department at that time came from a very hardcore software background. DD was a partnership back then between Scott Ross, James Cameron and Stan Winston — and IBM! There was a lot of good input from IBM, including some employees. The interview process at that point was kind of like an IBM interview. I was asked about my computer science background and my Ph.D. thesis. But then I also talked to Scott Ross, who was very much a product of ILM and the traditional way of making movies. But he had embraced the fact that computers would take us into the future. Here I am, coming from Ohio State, naive as all get-out, a fresh Ph.D., wearing a suit to my interview, which made me stand out. I remember Scott was in his office. This company was just being formed! It was a Thursday afternoon interview and the head of the software department was barefoot. (Remember that Rose Ave is just three blocks from the beach.) He was leaning back and I was in a suit, so nervous. There I was. The interview process was weird.

[15:09] Allan: I still remember I was talking to Gini Santos at Pixar and how Steve Jobs popped into the interview (www.allanmckay.com/173). I love that! I got into computer graphics as a young kid in the 90s. I always had an obsession with particle systems, long before we were using them. At the beginning of your career, were there any areas you were obsessed about?

Doug: Absolutely! The way I’ve traveled through my career is by getting obsessed with one thing or another. At the very beginning of my career, I was all about computer vision and how it could help make movies. My Ph.D. was on computer vision, actually. This was the first stuff I worked on at DD. One of the first things I worked on was removing wires and harnesses from a scene. I built a tool that let you indicate where the wire was and what the background needed to be. But then I went deep into scene understanding: How can we figure out where the objects are in the scene, where the camera was, what the lens information is? That way you could insert things into the scene more easily. Because back in the day, they were doing things by hand. If you wanted to put a character into the scene, you had to place that character frame by frame. I thought there had to be a better way. After that, I got fascinated by a paper I saw at SIGGRAPH about fluids. I took a deep dive into physical simulation of fluids, just like you. I was fascinated by that for six years. After that, it was muscle simulations and cloth simulations. My new thing is machine learning and using that technology to reinvent the way we do some things in visual effects.

[19:09] Allan: Back in the 90s, what type of papers were you looking at? For me, I was interested in Ron Fedkiw.

Doug: There were a couple of seminal papers. There was a paper from the University of Toronto, from Jos Stam. It was called Stable Fluids. It was a beautiful paper! Fluid simulation is this enormous field that has been around since the 1700s. Even computer simulations have been around since the 60s. Jos took a lot of that math and made it easy to understand. It was very pedagogical; it taught you about fluids. And the results were amazing. That, along with some of the stuff that was coming out of UCLA, from Ron’s group and Stan Osher. (Ron was a student of Stan’s.) At least for fluid simulations, that was one of the papers that jumped out at me.
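
For readers who want a feel for what made Stable Fluids so approachable, here is a minimal sketch of its central trick: semi-Lagrangian advection, which stays stable no matter how large the time step is. This is only an illustration in Python/NumPy; the grid layout, time step and function names are assumptions of mine, not code from the paper or from Digital Domain's tools.

```python
# Minimal sketch of the core idea in Jos Stam's "Stable Fluids" (SIGGRAPH 1999):
# semi-Lagrangian advection, which remains stable for any time step.
# Grid layout, dt, and names are illustrative assumptions, not production code.
import numpy as np

def advect(field, vel_x, vel_y, dt):
    """Advect a scalar field through a velocity field by tracing each cell backwards."""
    n, m = field.shape
    ys, xs = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")

    # Trace each grid cell backwards along the velocity field...
    back_x = np.clip(xs - dt * vel_x, 0, m - 1)
    back_y = np.clip(ys - dt * vel_y, 0, n - 1)

    # ...and bilinearly interpolate the field at that departure point.
    x0, y0 = np.floor(back_x).astype(int), np.floor(back_y).astype(int)
    x1, y1 = np.minimum(x0 + 1, m - 1), np.minimum(y0 + 1, n - 1)
    sx, sy = back_x - x0, back_y - y0

    top = (1 - sx) * field[y0, x0] + sx * field[y0, x1]
    bot = (1 - sx) * field[y1, x0] + sx * field[y1, x1]
    return (1 - sy) * top + sy * bot

# A full solver would also diffuse the velocity and project it to be
# divergence-free each step; this shows only the advection piece.
```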

[21:35] Allan: I love that! How far things have come in the last 30 years! Knowing back then the possibilities, how do you feel about visual effects and computer graphics? We’ve tackled so much! Or do you think there is more that we could do?

Doug: We have to deal with this every year whenever we decide on the next project we want to work on. At the beginning of computer graphics, all the problems were out there. Everything you did was new. And computer graphics — and I say this lovingly — is a field of theft. There are all these fields that have been doing things, and computer graphics will go, “Look at what they’re doing, and use it.” With fluids, there was so much stuff; we picked and chose parts of the fluid simulation research and adapted them. For instance, in the engineering realm, it’s all about accuracy. You want to know how much water is going to go through this pipe, when turbulence will start, and how much of it it takes for the pipe to break. For us, we don’t care. We just want to make sure it looks good and that it can be finished fast. It’s about taking that detail and pushing it toward what we want it to do.

So at the beginning, there was a lot of low hanging fruit. There was a lot of research that we brought in to solve particular problems. Then in the 2000s, there were all sorts of inventions. Before, you could do fluids in movies — you just built a big set and filmed it at high speed. But it was really expensive and hard to do. We started coming up with new ways, like taking fluids and controlling how they moved. Nobody in research cares about that. There are so many papers out there now on manipulating water and fire. And then, there is now. Most of the big problems are solved. Back in the 2000s, a director would want an effect and we couldn’t do it. Now, I don’t know if that ever happens. We can do [anything]. It might be expensive, but it’s not impossible. Now, it’s about: Can we do it in real time? Can we do it faster? Can we make it look perfect? It’s all about refinement and speeding things up. It’s rare to see something you haven’t seen before.

[27:12] Allan: I feel like the big focus these days is the producer brain: How can we do it on schedule and as cheaply as possible?

Doug: Just think about it: Back in the 90s, a show with lots of visual effects would have 150 shots. Now you have movies where every shot has multiple visual effects. It’s all about scale now and being able to do things like hair. We’re doing a lot with hair and hair simulation. We’ve been able to do it for ages, but now we can do it all the time. It’s pretty cool!

[28:40] Allan: I’m just curious, having worked on so many things, are there any projects that stood out over time? Maybe the ones that were challenging or fun?

Doug: Titanic was an amazing experience, for a whole bunch of reasons. The scale of that show was insane for the time! I don’t know if you remember that it was supposed to come out in the summer but it didn’t. It finally came out half a year late. It turned out fine, but there was this enormity of that movie and how much needed to be done, all the way up and down the chain. There was a camaraderie on the show and the ability to do things that were new. Apollo 13 was really interesting. Some of my software had been used in the film. That was a lovely movie with some really good models and practical effects that blended well. 

Benjamin Button was staggeringly scary and it required a lot of software. That one, we didn’t know if it would work. With visual effects, you can hide something that isn’t working with editing. You can add some motion blur. With this, the first 45 minutes of the movie focused on a digitally created character, and if it didn’t look good, people would be walking out of the movie. And the fact that it worked was extraordinarily satisfying! Working in software, I’m a little bit removed from working on the movies. I’m rarely involved with shots. Artists will be working on shots and using my tools. It isn’t the movie so much for me; it’s more that collaboration with the artists. You have the software developers who are really interested in the mathematics and the science of making something happen — and the artists taking the tool and helping figure out whether it does what they need it to do. I really like that feedback loop between artists and tech!

[33:16] Allan: I remember one of my friends at DD telling me about Benjamin Button. The idea of what you were trying to pull off sounded so ambitious. That was, what, 2006? It’s amazing to see that in that film no one paid attention to the effects. Machine learning has been around for years. Have you always had your eye on it? When did it spark a lightbulb in your mind?

Doug: Way back in grad school, Ohio State was a leader in artificial intelligence. They had a couple of really good faculty members. I was introduced to neural networks back then, though I never took classes on it. The one thing I remember about my classmates who were taking those classes is their being frustrated as all get-out. It had great potential but it was so hard to get it to do what you needed. That was something I remembered. I filed it away as, “Cool, but kind of sucks!” I was working on fluids! Then around 2016 — when everyone else noticed it as well — a couple of cool papers came out. There were techniques that people started to notice. The ability to train these networks made them more sophisticated. At the same time, graphics cards and the parallelism you need to train a neural network were becoming much more accessible. Like everything in my career, it worked out that when I look at something, I think, “That’s cool! Let’s see if I can adapt it!” The same thing happened here. I saw a paper on using neural networks to accelerate a fluid simulation. That just took me down a rabbit hole. Over the last 5-6 years, we’ve been experimenting with how to use machine learning in visual effects. And like I said, everything we’re doing now has something to do with it. The thing is, it changes the way you look at problems. For instance, with fluids, you now look at collecting data and letting the system learn how to do this. All of a sudden, it’s less about math and more about, “How do I use the data to train the system?” It’s a totally different way of looking at problems.

[38:38] Allan: What are some setbacks in terms of approaching deep learning? What are the limitations?

Doug: These are things that people are addressing throughout the industry. Training takes a long time and requires a lot of data. There are new techniques. Say we want to train something to recognize your face. We need a lot of pictures of your face and a long time to train the system. What if we wanted to train it to recognize me instead? There are new techniques that allow you to nudge the system to understand something similar: things like few-shot learning and one-shot learning. That is a huge problem. You still need data. That is one of the biggest things. We have to ask an actor, “We’re going to put together a digital version of you. In order to do that, we have to learn how your face moves. What we want you to do is do stuff with your face for 20 minutes to an hour. At the end of that, we’ll have enough data.” It’s different from what people are used to; people are used to getting scanned. All that data creates a system that understands how you behave.
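
To make Doug's "nudge the system" idea a little more concrete, here is a hedged, minimal sketch of transfer learning in PyTorch: a backbone pretrained on lots of generic images is frozen, and only a small new head is trained on a handful of images of the new person. The model choice (ResNet-18), data loader and hyperparameters are illustrative assumptions on my part, not Digital Domain's pipeline or the specific few-shot methods Doug mentions.

```python
# Hedged sketch of the "nudge" idea: reuse a pretrained network and fine-tune
# only a small head on a few new images. Model, data loader and hyperparameters
# are illustrative assumptions, not Digital Domain's actual tools.
import torch
import torch.nn as nn
from torchvision import models

# Backbone pretrained on a large, generic image set.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False          # keep the learned features fixed

# Replace the final layer with a small head for the new task
# (e.g. "is this the new actor's face or not?").
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(few_shot_loader, epochs=5):
    """few_shot_loader yields (images, labels) batches from a tiny dataset."""
    backbone.train()
    for _ in range(epochs):
        for images, labels in few_shot_loader:
            optimizer.zero_grad()
            loss = loss_fn(backbone(images), labels)
            loss.backward()
            optimizer.step()
```

Proper few-shot and one-shot methods go further than this, but the economics Doug describes are the same: most of the learning happens once, up front, and only a small adjustment is needed for each new face.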

[41:07] Allan: For you, how early did you get involved with digital humans? 

Doug: We’ve been an all-purpose digital effects place. We can do anything. We decided to spend a lot of time developing technology for digital humans before Benjamin Button. Ever since, we’ve made a concerted effort to make better digital humans. You can follow the movies. We had Tron, which was challenging, but it was harder to do. We had Beauty and the Beast, Maleficent. You should see some of the stuff we’re working on now. Now we’re working on how to make it easier to create a believable expression. Can we make it easier to create and manipulate the character? The way we did faces for Benjamin Button has changed and morphed. The addition of machine learning changed how we do all this stuff. Now, we’re doing some video games [for which] we’re doing thousands and thousands of minutes of capture. With whole chunks of it, artists don’t even have to get involved. The stuff we’re capturing on stage gets translated right into the application. I just think about the trouble we had to go through with each and every shot of Benjamin Button. Now, we go, “Yeah, we can do that!”

[44:10] Allan: Going more on that subject, what are some of the big challenges you’re facing now? 

Doug: Real time is a big new challenge. While machine learning takes a long time to train, it runs extraordinarily fast. That means that when you’re building an engine, you have to teach it slowly, “slowly” being the operative word. Problem sets aren’t as big. A long time for us is 1-5 days of training. The amazing thing is that once it’s built, it runs really, really quickly. If I have a model that’s trying to figure out how to mask out my face, it will need a lot of data to train. But then it will mask out my face in milliseconds. The more we can do with machine learning, the faster this will go. That’s why we were able to do the TED Talk where I had a digital version of myself mimicking everything I was doing. We have taken everything in that pipeline and machine learned the hell out of it. That’s a huge challenge.

The other big challenge has to do with whether there is a way to reduce the training. We don’t want to wait as long to build this stuff. There are really smart people working on this and it’s getting easier. At the same time, machine learning has become a big thing with manufacturers like NVIDIA. That helps. Keep that up!

[47:29] Allan: I was surprised to hear that. With how much they’re investing in GPUs, I’m surprised to see the number of variations they’ve come up with. A lot of people are innovating. It’s an exciting time for sure!

Doug: NVIDIA has been a great help to us. They’ve been a great partner! I have to give a shout-out to their developers who are developing some amazing tools. You can tell why they’re doing that. It’s helped us enormously.

[48:53] Allan: You’ve mentioned that TED Talk that you did. With DigiDoug, where did that idea come from? Did you just want to have a test character?

Doug: The creation of DigiDoug and why it was me: First of all, I have no problem being in front of an audience. I’ve given a lot of talks. I’ve been the public face of DD. I understand the need for data for the software to work. If you’re going to put dots on a person, it’s nice for that person to know how it works. And I was also fascinated by it! One of the things is that it’s been in the back of our minds. We’ve sort of seen glimpses of it since working on Avengers. It’s been three years now since Mike Seymour did MeetMike. Do you remember that?

[50:46] Allan: I always give Mike a hard time about that one! (www.allanmckay.com/135).

Doug: That was all SIGGRAPH was talking about. We looked at that and thought, “Holy shit! Look at what real time is doing!” And we realized we had the technology for it. We could even take it up a notch. We had our own solution for it. Then we just ran with it.

[51:47] Allan: What are some of the features of it? Besides the fact that you can talk to DigiDoug, what else can it do?

Doug: One of the key things is the fidelity of the facial model. The way we represent the face and take an image of me — taken with a single camera — and translate it into a 3D model, that was the new thing. We thought, “What if we took the Masquerade system we used on Thanos and created Masquerade Live?” That was it! There are a ton of details in this. How do you track the eyes? The details of the speed. Making sure that the audio is synced up. There is a lot of stuff going on in DigiDoug. Then we thought about making it autonomous. Now, this character can move on its own.

[53:58] Allan: How long did it take from the beginning of the project?

Doug: The capture happened in 2018, in the fall. I did the TED Talk in 2019. In 2020, stupid COVID [happened]! That’s when we thought about getting autonomous Doug going. That’s where we are now. 

[55:01] Allan: You mentioned GPT-3 earlier. Those are controversial topics. This is where a lot of this stuff is going. With holograms, do you see this becoming a new way of doing performances?

Doug: We’re already doing some of that. We did Tupac, but that wasn’t live; we’d prerecorded the performance. We can now do this stuff live, which is really exciting. Whether these characters are human-looking or not, they can interact with other performers. In Japan, there are entire concerts done with digital characters. But now we can do that with real people. The potential for entertainment is really, really neat! You could have characters from history looking exactly how they did. We did a project for a museum in Chicago featuring Dr. Martin Luther King, Jr. So the educational and entertainment aspects are really cool!

[58:10] Allan: My final question would be about where things are going. We did the talk with Google at SIGGRAPH about how animation would change. Can you elaborate on that?

Doug: In terms of animating characters and the human-machine interface (and I hesitate to say this), you will have the ability to use more than a mouse. There have been things like armatures. But now you can just step back from your web camera and dance around, and your character will do that. Or you can use your face to animate characters. Or you can use actors’ faces to animate characters. The characters you’re seeing aren’t just quick rigs; they will be in real time. I still think people will use their mouse because it gives them a sense of control. But you can get a long way with some really neat things. 

[1:00:29] Allan: Every animator used to have a mirror on their desk. Now, they can have a camera.

Doug: Exactly! Right! You’re always going to have to use a mirror, too.

[1:00:41] Allan: I want to thank you for your time! I had so many questions I wanted to ask. It’s been really fun to sit down and chat.

Doug: I had a great time, Allan! Thank you! 

 

I hope you enjoyed this Episode. I want to thank Doug for his time. This was so much fun!

I’d love to hear your feedback. Please let me know what other guests you’d like to have on the Podcast. And please take a moment to share it. 

I’ll be back next week. Until then — 

Rock on!

 

Click here to listen on iTunes!

Get on the VIP insiders list!

Download The Productive Artist e-book.

Allan McKay’s Facebook Fanpage.

Allan McKay’s YouTube Channel.

Allan McKay’s Instagram.

 

 
