Friday, August 1, 2008

The Ruby demo continued

One week after the unveiling of the Ruby teaser, Jules Urbach recorded a new video, in which he gave a bit more information on the voxel-based rendering used in the Ruby demo and announced his other company, LightStage LLC. LightStage is a technology used by Hollywood film studios to capture lifelike 3D representations of actors, which can be relit afterwards. It has been used in Spider-Man 3 and will also be used to motion-capture and digitize the actress playing Ruby. Her digital facsimile will then replace the Ruby character from the teaser.

Video: http://youtube.com/watch?v=Bz7AukqqaDQ

Full transcript:

Hi, I’m Jules Urbach and this is a follow-up to the presentation we did a week ago at Cinema 2.0, the launch for the 770. We are showing a couple of new things that we weren’t able to talk about last week that I think are really interesting. So we’re announcing today that we are developing LightStage. LightStage LLC is a separate company for capturing and rendering really high-quality 3D characters. LightStage is an interesting technology that was developed at USC by Paul Debevec and Tim Hawkins, and it solves the problem of the uncanny valley as far as characters go. So we’re very pleased to announce that we can actually take this data and start working with it and applying it in our projects.

So, if you take a look at what we were doing years ago, doing characters and animation was limited by the fact that our artists could only create so much detail in a head or a human form. And this is the normal map for that head, and this is a really complex skin shader that we wrote to try to recreate what humans look like. In a lot of ways, both of these problems go away with the LightStage.
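As a rough illustration of what a normal map buys you, here is a minimal sketch of a normal-mapped diffuse term in Python/NumPy. The function and array names are illustrative, and a real skin shader like the one he describes would add far more on top (specular highlights, subsurface scattering, and so on):

```python
import numpy as np

def shade_with_normal_map(normal_map, light_dir, base_color):
    """Lambertian term driven by a normal map (all names illustrative).

    normal_map: (H, W, 3) texture values in [0, 1].
    light_dir:  unit 3-vector in the same space as the decoded normals.
    base_color: (3,) RGB albedo.
    """
    # Decode [0, 1] texture values into [-1, 1] normal vectors.
    n = normal_map * 2.0 - 1.0
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # N·L diffuse term, clamped so back-facing normals go dark.
    ndotl = np.clip(n @ light_dir, 0.0, None)
    return ndotl[..., None] * base_color
```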

The LightStage, first of all, can capture a real person. So, to generate this head, an artist doesn’t even sit there and sculpt it. We can essentially put a woman in the LightStage, which is a domed capture environment, and it captures all the surfaces and all the normals and all the details, including the hair. And it captures in full motion. You have to understand that, unlike traditional motion capture, where either makeup or dots are applied to the face, we just put somebody in the LightStage and they can do their lines and speak, and it does full motion capture optically.

So it gets the full data set of essentially all the points in their face. And this is in fact the rendered version of the LightStage capture: all the data captured in the LightStage, accumulated. It’s not a photograph. This is a fully relit head, based on the model you just saw. It’s obviously a lot of data, but we’re going to apply the GPU compression/decompression work we announced last week to LightStage and have data sets that we can start loading and rendering in real time, not just for film work but in games as well.
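The published light stage papers describe relighting as a linear combination of one-light-at-a-time basis photographs: light transport is linear, so any novel lighting environment is just a weighted sum of the captured images. A toy version of that idea, with illustrative names and array shapes:

```python
import numpy as np

def relight(basis_images, light_colors):
    """Relight a reflectance-field capture as a weighted sum of basis images.

    basis_images: (N, H, W, 3) — one photograph per light direction on the dome.
    light_colors: (N, 3) — RGB intensity of the target environment sampled
                  at each of those same directions.
    """
    # Light transport is linear, so a novel environment is a per-pixel
    # linear combination of the one-light-at-a-time captures.
    return np.einsum('nhwc,nc->hwc', basis_images, light_colors)
```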

So the LightStage data, I think, really closes the uncanny valley. Particularly this kind of data, where we have all the relighting information stored for every single pixel; it’s exciting, and it gives us really high-quality characters that I think look completely real. One of the things we showed last week at the Cinema 2.0 event was that we could do scenes and cities and things that look really good, and this essentially gives us people. And it goes even further than that, but we will certainly be announcing more as we get further along working on LightStage.

I’m going to show one more thing that we didn’t really get a chance to show last week, which is some of the real-time stuff, the voxel rendering. We basically had two separate demos, one just showing the fact that we can look around the voxelized scene and render it. And this demo is a slight update of the original one, where I’m able to actually look around and essentially place voxels in the scene, but also relight it as well.

So this is part two of three that we’re releasing. It shows us essentially going through the scene, selecting an object, and either rendering it through the full lighting pipeline or just doing the global illumination pass. We can also use just the normals to do totally new, novel reflections. But you can see that the voxels, even in this fairly low-resolution form, can capture lighting information and capture all these different details. And we can do that as we’re navigating through the scene.
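To make the voxel pass a bit more concrete, here is a minimal fixed-step ray march through a dense voxel grid that shades the first hit from a stored per-voxel normal and derives a reflection direction from it. This is only a sketch under assumed data layouts; OTOY’s renderer presumably uses a proper acceleration structure rather than a naive march:

```python
import numpy as np

def march_voxels(grid, normals, origin, direction, light_dir,
                 step=0.5, max_steps=512):
    """March a ray through a dense voxel grid and shade the first hit.

    grid:    (X, Y, Z) occupancy volume (nonzero = filled voxel).
    normals: (X, Y, Z, 3) unit normals stored per voxel.
    All vectors are in grid coordinates; names are illustrative.
    """
    pos = origin.astype(float)
    d = direction / np.linalg.norm(direction)
    for _ in range(max_steps):
        i, j, k = pos.astype(int)
        if not (0 <= i < grid.shape[0] and
                0 <= j < grid.shape[1] and
                0 <= k < grid.shape[2]):
            return None  # ray left the volume without hitting anything
        if grid[i, j, k]:
            n = normals[i, j, k]
            # Diffuse term from the stored normal ...
            diffuse = max(float(n @ light_dir), 0.0)
            # ... and a mirror direction for the "novel reflections" pass:
            # R = D - 2(D·N)N.
            reflection = d - 2.0 * float(d @ n) * n
            return diffuse, reflection
        pos += d * step
    return None
```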

So this is sort of part two of our Ruby real-time demo, and part three is going to show, as a next step, full navigation through the voxel scenes. And that’s going to depend on the compression we’re developing, because right now the reason we are not loading the entire animation is that the frame data is about 700 megabytes for every frame. We can easily compress that down to 1/100th of the size, and we’re looking to get to about 1/1000th. With that we’ll be able to load much larger voxel data sets and actually have you navigating pretty far through the scene, while still keeping the ray tracing and voxelization good enough that you don’t see any pixelized or voxelized data up close.
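The arithmetic behind those ratios is worth spelling out. Using the figures quoted in the video, and assuming a 30 fps playback rate (my assumption; the talk doesn’t state one), the bandwidth works out as follows:

```python
frame_mb = 700   # raw voxel data per frame, per the talk
fps = 30         # assumed playback rate (not stated in the video)

for ratio in (100, 1000):
    per_frame = frame_mb / ratio
    print(f"1/{ratio}: {per_frame:.1f} MB/frame, "
          f"{per_frame * fps:.0f} MB/s at {fps} fps")

# 1/100:  7.0 MB/frame, 210 MB/s at 30 fps
# 1/1000: 0.7 MB/frame, 21 MB/s at 30 fps
```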

So one more demo that’s worth showing, I think, is related to the LightStage. You can actually see that on the right here. That is a mesh generated from one person in the LightStage. It doesn’t have all the LightStage data in there, but what you can see from this example is one reason why voxel rendering may be important. This is using a very simple polygonal mesh, even though it’s about 32 million triangles, just to render the scene. So I’m going to show the wireframe of it, and you can see that the data is so dense that everything from the eyes to the eyelashes is all there, and we’re only really using a small subset of the point cloud that’s generated from the LightStage. So if we move to voxel rendering, which I’m planning to do for LightStage as soon as we’re done with the Ruby demo, we’ll be able to have voxelized assets rendering in real time at much higher resolutions than this. And that’s going to give us characters that look better than anything we can show in any of these videos. We should have that ready probably before the end of the year, so it’s exciting stuff! Thank you for watching, hope you enjoyed the demos.
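The point-cloud-to-voxel step he mentions is conceptually simple. Here is a minimal sketch of binning a scanned point cloud into a dense occupancy grid; the names and the uniform-grid layout are my assumptions, and a production system would use a sparse structure such as an octree instead:

```python
import numpy as np

def voxelize_points(points, resolution):
    """Bin a point cloud into a dense boolean occupancy grid.

    points:     (N, 3) positions, e.g. a subset of a LightStage scan.
    resolution: number of voxels along each axis.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    # Map each point to integer voxel coordinates inside the bounding box.
    idx = ((points - lo) / (hi - lo) * (resolution - 1)).astype(int)
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```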
