Episode 135 — FXGuide Founder Mike Seymour

 

Click here to listen on iTunes!

Get on the VIP insiders list!

Check out www.VFXRates.com.

Download The Productive Artist e-book.

 


Hey, everyone!

This is Allan McKay. Welcome to Episode 135! I’m speaking with Mike Seymour from FXGuide. I’m excited about this one. This Episode is focused on the Virtual Humans project that Mike has been working on with a group of experts, using technology that was demonstrated at the MeetMike event at SIGGRAPH.

This Episode is really cool. We talked about some other things as well, but I wanted to keep this Episode super specific.

Let’s dive in!

 

FIRST THINGS FIRST: 

[-46:14] Later this month, I will be opening the doors for the Mentorship 2018. You don’t want to miss out! Please sign up for announcements at: allanmckay.com/inside/.

 

INTERVIEW WITH MIKE SEYMOUR

Mike Seymour is a VFX Supervisor and Compositor. He is also a Co-Founder of — as well as a writer and consultant for — FXGuide, FXPhD and FXPhD Academy, online resources for the visual effects community and innovators.

Mike holds a Bachelor’s and a Master’s degree in CGI and Pure Math from the University of Sydney. He is currently doing his PhD on interactive realtime faces in new forms of Human Computer Interfaces. For the Wikihuman Project, he assembled a group of researchers and artists from Epic Games, Cubic Motion, 3Lateral, Loom.ai, Pixar, Tencent — and many others — to build a digital version of himself that could be directed and rendered in realtime by the real-life Mike. That resulted in the MeetMike project presented at SIGGRAPH in 2017.

In this Podcast, Mike talks about the challenges of current-day VR — realtime rendering, the uncanny valley, neurological evolution — his research in the field, and the successful fruition of the MeetMike project, a collaboration with the Wikihuman Project team.

 

Mike Seymour on IMDb: http://www.imdb.com/name/nm1921346/

FXGuide: https://www.fxguide.com

FXPhD: https://www.fxphd.com

MeetMike Project: https://www.fxguide.com/featured/meetmike/

MeetMike Digital Human Showcase: https://www.youtube.com/watch?v=6MIkoLBWRv0

MeetMike Showcase at SIGGRAPH: https://www.roadtovr.com/siggraph-2017-meetmike-sets-impressive-new-bar-for-real-time-virtual-human-visuals-in-vr/

Wikihuman Project: https://www.fxguide.com/quicktakes/alsurface-shader-wikihuman-project/

Mike Seymour on Twitter: @mikeseymour

 

[-45:43] Allan: Obviously, for you, one of the things that’s been cool to see come to fruition was the MeetMike project. Within that realm, can you talk a little bit about how that came to be?

Mike: Yeah, well in general, from a career point of view, you’ve given some interesting advice to people: It’s not just about knowing the tools but also how you present yourself. There is a third dimension to that triangle, which is positioning your expertise. I find that artists don’t always think a long way down the road: the tool set that they develop, as well as how they present themselves. I applaud the stuff that you’ve done, Allan!

[-43:19] Allan: Positioning is everything. I agree with what you’re saying.

Mike: But the other thing is one needs to have a long-term plan. I moved away from Flame because it wasn’t going to take me down a new path. There wasn’t anything wrong with it. If I wanted to go further along the compositing path, I would have had to go heavily into some parts of Nuke. That was definitely an option. I love the guys at the Foundry. I totally love them to death! [But] my original research was in 3D. I wanted to get back to it. I decided that one of the areas that would be important in the future of storytelling — and was just outside of films — was virtual humans / digital humans. That’s an area that overlaps with cognitive science: actors, agents and avatars. Those are the three applications. In a sense, you could argue that a bunch of problems in 3D have been solved and that the problems in compositing have been solved. But you can’t say that about virtual humans. I’ve often found that if you can find an area that has a lot of unknowns, it’s a rich field to plow — and then you can benefit from it later on.

Plus, I really wanted to get on this Podcast with you. You invited Chris Nichols on (allanmckay.com/26/) 3 years ago — and it’s taken me that long to get around to it. Chris and I get to work together, on and off, a lot. I admire him a lot and he works on our Wikihuman project. With the other people on this project, I’ve really enjoyed pushing into this particular space.

[-42:09] Allan: I was going to touch on that: Who is involved in that project? Is it Dan Roarty or Paul Debevec? It sounds like you were able to cultivate a big group of some of the greatest minds and their expertise. Who are the people involved?

Mike: Let’s start with Paul Debevec because Paul is a great guy and a good friend. He took me out [when] I was thinking about doing my PhD. He [advised that] I should do something incredibly hard. He said I could either report on a lot of stuff — which is obviously what we do a lot of on FXGuide with John Montgomery and Jeff Heusser — but at some point you need to get off the side bench and invent some new stuff. So I ripped up my plan and started doing research for a period of time. With Chris and the rest of the team on Wikihuman, we built a virtual human. We used me because ethics approval is really easy on yourself. I did my head at USC, but I did my eyes at Disney Research Zurich. If I were to use you, I’d have to find you, free up your schedule and then fly you and me around.

I’ve gotten to know a lot of people at FXGuide over the years. I’d like to think that we tend to cover positive things and illuminate problems, which has gained us a lot of respect from the research community. That meant that we had a healthy [sized] audience, but not a mega audience. I prefer to have tens of thousands of readers who are genuinely interested in the story, rather than a million reading clickbait. That served us really well. We write stories that we know people won’t necessarily find easy to read. It’s good to have some place that’s not talking down at people. That meant we just knew people who could help us do research.

Kim [Libreri] got the Epic team involved. We got the Cubic Motion team in Manchester. We got the 3Lateral team in Serbia; Vlad gave us the best possible face rig. We got the Loom.ai team in San Francisco; they built the avatar face from AI. We pulled all these things together. And as you know, filmmaking is a collaborative sport. At no point in life should you try to do this by yourself. You should try to hang out with people who are better than you — and you’ll end up on better projects.

It all came together in LA at SIGGRAPH in 2017. We produced this realtime virtual human.

[-38:05] Allan: I loved that you brought together all these specialists and you were the cultivator. What were some of the biggest hurdles, and how has this whole thing been received?

Mike: Well, let me say the big thing about what we did: I’m not trying to spin this into a company and I’m not trying to make money from it. One of the commitments was to give away all the data to the community to use. And that was the main aspect. When you go to a university and ask them for research, they trust us. It’s to everyone’s benefit. Everyone involved — even the lawyers in the field — wants to move the ball down the field. No one is going to be ripping anyone off. And I can’t overstate how hardworking and collaborative these people have been.

Actually, I forgot to mention Tencent in China, which is another major contributor. So all of these companies came together and contributed; and we’re going to release it to the community. That’s why Pixar showed up at SIGGRAPH and the Foundry posts things for free. If you’re doing this with the right intention, you tend to get a lot more in return.

What was the second question?

[-35:56] Allan: What were some of the biggest hurdles and challenges? Let’s say eyes alone, or realistic skin that’s well lit and reactive.

Mike: The biggest thing I would say is render time. Pixar contributed some amazing stuff, [for example]. But if you look at the problems, they’re inside the framework of a 2.5-hour render. We initially got it down to about 9 milliseconds. Realtime rendering is the single biggest problem. The real pivotal aspect of this research is interactivity of the face. It’s not enough to render a still of a good-looking face. There is a whole other thing that comes into play when you can actually talk to, interact with and deal with that face. That’s a big difference!

Let’s [talk about] the uncanny valley. Tom Hanks in The Polar Express, [for example]. We hate that conductor, but Tom Hanks is the most beloved actor of his generation. If you add movement, it accentuates the valleys and the peaks: The realistic stuff becomes more realistic, and the low end becomes more uncanny. But that changes with interactivity. It isn’t [uncanny] when you interact. My argument is that you can have an interactive face that is no longer uncanny. You may not think it’s real, but you can find it agreeable, fun and sympathetic. And that’s really significant. There is a whole area in life where we could use virtual humans who were appealing. If they looked realistic — and not at the bottom of the uncanny valley — our lives would be significantly different. So we had to go realtime, we had to go interactive: stereo, 90 frames a second, high res, full head tracking. Compared to a cartoon, the photorealistic face was more sympathetic and friendly. That’s really the different narrative.

[-31:46] Allan: I love it! It’s really easy to pass judgement on The Polar Express. But when you’re interacting, you let your guard down.

Mike: We took it even further: What’s happening neurologically, and which part of the brain changes? Where is psychology — combined with neurochemistry — processing this stuff? We believe that it’s involved with the neurons. You accept it in the moment, [but] you need to be able to articulate why. The why is really hard if we don’t understand these things. Let’s say you have a really good VFX Sup. They will be able to articulate what’s wrong with an image. But getting that skill is so hard because we can’t articulate it. You have to see past what your brain is seeing. If you’re looking at blood flow on the face, you have to look at it in blue, because if you look at it in red, you see emotional triggers. Pupil dilation — it’s hard to read that in the eye. We had to defeat our own neurological evolution to be able to see what’s being triggered — and then come back and refute it.

[-29:31] Allan: You’re totally right! People will pixel fuck the hell out of a shot to figure out what is realistic. When you get onto this subject, you’re talking more about behavioral psychology. All these micro gestures we’re interpreting — these things we take for granted: Like, do I like this person?

Mike: Let me give you an example of this. There was a client who was looking at the head. The head was isolated. The client said the upper teeth were too rigid. If you think about that, that’s absurd! The teeth are connected to the skull. That’s the only place you see what your skull is doing. What the client was really saying was that the skull was frozen, so he said the teeth should move more — which is something a human body couldn’t do.

So we’re turning to these directors, and they aren’t able to defeat their own neurological evolution to articulate a solution. What a ridiculous comment! Tell them to articulate stuff about a laptop — no problem. We can analyze it free of emotion. Any Director or Supervisor is hard up against that. We need to give them the tools. They should be good at articulating what emotion they want in the scene.

[-26:08] Allan: What are some of the other fascinating — but not obvious — things you’ve discovered when you started digging?

Mike: One of the things that I found early on was this: There was a terrific 3D scan done for The New Yorker. It was a bald Paul [Debevec]. If you know Paul, you think the scan looks really bad. People who don’t know Paul think it looks really good. I was looking at this and trying to work out why [that is]. Some people think it’s familiarity. But what we discovered was: The New Yorker didn’t want to publish what we call “the hockey mask” Paul. They wanted the whole head of Paul. They were working with Activision at the time. They scanned some other bald guy’s head and put Paul’s face on it, and that made it look real. Except [that] the silhouette of that skull isn’t Paul’s. People who knew him were looking at the face and they weren’t able to tell what was wrong. But there is a part of our brain that processes silhouettes. I have a Paul Debevec silhouette in my head. But it wasn’t his silhouette [in The New Yorker]. My brain knew something was wrong. I tried to focus on the eyes or the ears.

Why is it that when I watch an old episode of television — one that was shot for the television of the day, in 4:3 aspect ratio — and it’s been stretched to 16:9 so the heads are stretched out horizontally, I don’t say “uncanny valley”? I know the head is grossly wrong. So how can I be okay with that, but not okay with Paul Debevec’s head being subtly wrong?

[-23:02] Allan: I guess it’s not just one thing, you’re right. It’s an infinite checklist. Valve incorporated us into a study about silhouettes for their game Team Fortress 2. They wanted the silhouettes to be easily identified. It’s interesting to see what we identify as a friend or a threat.

Mike: Yes, because you have these gross things that are interesting, and then you have these micro things. War for the Planet of the Apes has some of the best facial expressions. But if you look at some of the performances of Bad Ape, it came down to literally fine-tuning the render output that’s causing these emotional [reactions]. The nuanced performances are coming from the smallest glints of moisture around the eyes that just wouldn’t be there unless you were in the final render. I take nothing away from the actors doing the motion capture. But motion capture doesn’t capture glints on eyes. And yet that’s what’s giving me some of the emotional reaction. It’s good to understand these things.

I’m looking at a thumbnail image of you right now. We have no video…

[-20:36] Allan: I’m looking at a virtual one of you.

Mike: Right. [The thumbnail] is totally realistic. It’s small, not high res, so we don’t need massive amounts of rendering. But that doesn’t look like a virtual you. How is that possible? How do I read that thumbnail of you as real, while you detect that the one of me is virtual? It’s so important to get it right. A film like War for the Planet of the Apes is a masterpiece.

[-19:57] Allan: You’re totally right. I remember seeing both War and Rise. It’s so rare that I watch a movie without thinking about the CG.

Mike: In the second film, you’ve got Koba as a character who is acting within the plot. We as an audience understand that he has this persona with a whole different motivation. We know he’s really smart, acting stupid. Koba’s performance with the gun is a moment for a virtual character. There are some really good virtual humans: Joi was great in Blade Runner. These moments are hard to get right. And if you don’t get them right — like Superman’s mustache — they just rip you to shreds. The film gets categorized as a disaster.

[-18:13] Allan: Whenever I see bad CG, I know there is a story behind it. I remember watching the making of The Scorpion King. They had such a shoestring budget. You’re right, there are obvious things, but then there are the subtleties. I remember looking at Davy Jones [in Pirates of the Caribbean] and thinking, “Okay, they kept his eyes” — and everything else is CG. And then I read later on that his eyes were also CG. There are all these pivotal moments, even if it is such a small thing. I watched a Q&A with ILM and James Cameron. I think it was Fox that wouldn’t let him put up posters of Avatar because the characters looked unrealistic. It was the motion that made them real.

Mike: The cleverest thing he did was make them blue. Our brain can’t assess that color for realism. I run the Virtual Humans Day at the FMX Conference in Germany. A couple of years ago, we invited an eye surgeon to discuss eyes. He did a magnificent job. The most fun part of the talk was [him diagnosing] CG characters: “This character probably has cataracts. This character is probably going blind in the next six months.” It was so funny — and there were so many things that people got wrong. But this was an eye surgeon. He’d been trained to look at eyes — not the whole person — to see the issue. When he articulated that, I thought, “Of course! That would make Gollum’s eyes so much more realistic!” Don’t get me wrong: I think Gollum is magnificent! Being able to diagnose what’s wrong with a shot is what makes a great Supervisor: being able to give notes that will improve the shot, rather than random ones. It’s a pivotal skill.

[-14:15] Allan: I love it! How has this been received?

Mike: It’s been great! It’s done what it should have done. I think we’re going to see some great stuff from Epic this year. Are you going to GDC this year?

[-13:39] Allan: I’m probably going to stay put. There’s been so much travel this year!

Mike: Given the aspect of interactivity, GDC is super important. We’re obviously continuing the research. We’re looking at some other issues, some of them less relevant to your audience:

– Application of this technology beyond games or film;

– Some of the ethical issues;

– As well as some of the cognitive issues, like what it means to be a person.

I’ll give you an example. If you were [evaluating] Shotgun and I asked you to fill out a survey, and at the last second I asked you to do it on my laptop [instead of the machine running the software]: Statistically, you’re much more likely to give it a harsher score on my laptop. Even though it has nothing to do with humans, you don’t want to be mean to the software on its own computer. That’s a tiny window into how much power this stuff has. It would be impractical to spend money on a two-legged android that comes in and drives my car, rather than on a car that is robotic. Or, the room could become the robot. You might have a face on the dashboard that can give you directions. You will be much more willing to trust your safety to a relatable face.

We’re interested in how to integrate this stuff into our lives. For example, I’m a dad. Would it be okay to have a Virtual Mike read a bedtime story to my daughter? You have to put this stuff into practice.

[-09:09] Allan: What’s been happening is a lot of people theorize about this stuff. You’re cultivating this group of people and taking it for a test drive — that’s when you realize all these problems. I look at iPhone 1 (and it’s a terrible analogy): iPhone 1 needed to exist to make all the other amazing stuff possible. For you, to create that first step — it’s been an amazing opportunity.

Mike: That’s kind of you to say! To wave the flag for my friends at Disney Research Zurich: They’re very concerned with the whole process. I’d hate for people to think that all those guys were doing was eyes. They were working on other stuff. There are four teams in the world that get this stuff.

[-07:22] Allan: Absolutely! There is a lot of hard work and years of research. For you, this being realtime — what’s been your experience seeing yourself as a virtual human?

Mike: That’s a great question! It’s been pointed out that Virtual Mike has better hair than I do. On a joking level: Between scanning my head and presenting my head at SIGGRAPH, almost a year had elapsed. The trouble was that when I was scanned, I was super fit. I had this first-world problem: I had to diet to get back down to my scanned weight. I couldn’t change my hair and had to wear the same clothes. We wanted me to be identical.

Alan Kay, the Apple researcher and Fellow, gave a lecture when I was young: If you want to see the future, take a look at the labs today. Things don’t come out of nowhere. The research that Hao Li is doing at USC helped develop ILM’s pipeline (the face monster stuff) and fed into the technology that Apple bought for the iPhone X. In a sense, his work was in a lab — and now it’s in my pocket. If you want to see what’s coming, get into the labs. It’s insanely cool! The more I’ve been doing this research work, the more I’ve been talking to these researchers. I cannot wait until this stuff moves out into the world! Of course, it takes a long time. That’s the fun stuff for me. The frustrating part is that it doesn’t work very easily. I was one of the team, and the team did the work.

[-03:38] Allan: That’s great! If people want to find out more about you — and the virtual you — where should they go?

Mike: I tend to publish stuff on FXGuide. And FXGuide tends to have what interests us. On FXPhD, we have more stuff that’s relevant to people’s jobs. My research, and other stuff people are doing, we tend to publish on FXGuide. Most of each movie is three-quarter shots or close-ups. Most of the story is told on the faces of the actors. So take any film. Take Coco: While the wild vista shots are impressive, most of the story is played out on the face of the child. That’s what gives it its humanity. So I can’t imagine us not publishing stuff about the research.

[-02:04] Allan: FXGuide is such a great resource! There are articles that have extremely insightful stuff. I definitely recommend it to anyone! Thanks for taking the time to chat about this stuff.

Mike: Thanks a lot, Allan!

 

I want to thank Mike for taking the time to chat. I had a lot of fun talking about all the mind-blowing discoveries. I think the outcome looks amazing. I’m excited to see where all of this goes.

– Next Episode, I’m hoping to release the interview with one of the VFX Supervisors at Scanline. We’re getting the final approval right now.

– My Mentorship is opening registration later this month. To get on the waiting list, go to allanmckay.com/inside/.

– Please review this Episode on iTunes.

That’s it for now.

Rock on!

 

 
