CAN I USE MY VR MODELS IN OTHER PROGRAMS LIKE ZBRUSH?
Hatana El Jarn, Leeds
Most VR creation tools now feature a way to export your models, often with a texture option in the form of vertex paint, so you can use them in other programs. There are many new and exciting VR modelling programs available, like Adobe Medium, Kodon, Gravity Sketch and very recent ones like Shapelab VR. Most of them allow you to export what you have created and simply import it into programs like Blender or ZBrush to make a more usable model or texture map. If you want to use your model for something like a game or a VR/AR experience, you are going to need good topology and also UVs to enable you to create a texture map.
Vertex paint is where each point (or vertex) is assigned a colour. If you have enough points, it looks like you have painted your model. That data can be captured onto a texture map in programs like ZBrush.
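To make the idea concrete, here is a minimal sketch of what vertex paint looks like on disk. It uses the common but non-standard OBJ extension where three RGB values (0 to 1) follow each vertex position on a `v` line; tools such as MeshLab and Blender read this form. The file name and function are illustrative, not from any particular VR app.

```python
# Write a tiny OBJ file with per-vertex colours ("v x y z r g b").
# This is the widely supported, non-standard OBJ vertex-colour extension.

def write_coloured_obj(path, vertices, colours, faces):
    """vertices: [(x, y, z)], colours: [(r, g, b)] in 0..1, faces: 1-based index triples."""
    with open(path, "w") as f:
        for (x, y, z), (r, g, b) in zip(vertices, colours):
            f.write(f"v {x} {y} {z} {r} {g} {b}\n")
        for i, j, k in faces:
            f.write(f"f {i} {j} {k}\n")

# A single triangle with one red, one green and one blue corner:
write_coloured_obj(
    "triangle.obj",
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    colours=[(1, 0, 0), (0, 1, 0), (0, 0, 1)],
    faces=[(1, 2, 3)],
)
```

With enough vertices, those colour triplets blend across the surface and read as painted detail, which is exactly the data ZBrush can later bake to a texture map.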
VR programs come in a few distinct types. For example, Gravity Sketch is primarily NURBS (CAD or spline-based technology), while Adobe Medium is 100% voxels. When you want to export your models, the software has to convert them to polygons with vertex paint data so that other polygon-based programs can read them. The file formats used include OBJ, FBX and, more recently, glTF, and these formats carry all that data in most cases. There are some programs, such as Kodon and Shapelab VR, where the model is already ‘true’ polygons inside the program, and they simply have to export in the correct format to be used in other programs.
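On the receiving side, an importer only has to recognise that extra colour data to recover the paint. The sketch below parses the same assumed "v x y z r g b" OBJ extension used above; it is illustrative only and not the actual import code of any of the programs mentioned.

```python
# Read vertex positions and optional per-vertex colours from OBJ-style lines.

def read_coloured_obj(lines):
    vertices, colours = [], []
    for line in lines:
        parts = line.split()
        if parts and parts[0] == "v":
            nums = [float(p) for p in parts[1:]]
            vertices.append(tuple(nums[:3]))
            # The colour triplet is optional; default to white when absent.
            colours.append(tuple(nums[3:6]) if len(nums) >= 6 else (1.0, 1.0, 1.0))
    return vertices, colours

verts, cols = read_coloured_obj([
    "v 0 0 0 1 0 0",   # painted red
    "v 1 0 0",         # no colour data, treated as white
    "f 1 2 1",
])
```

Because the colour rides along on the same `v` lines as the geometry, any polygon program that understands the extension gets both the shape and the paint in one pass.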
In the example here I will use Shapelab VR, because the process of creating a model, painting it and then exporting it to almost any other program is seamless. You simply finish your modelling to a level you are happy with, then, using the hand controllers, pick the type of export and file format that works for you. In the example below we will explore taking a model out of Shapelab VR and importing it into ZBrush. This could just as easily be Shapelab VR to Blender, Cinema 4D, Maya or one of 100 other options.
‘Sitting’ a CGI object convincingly into a live scene, whether a photograph or video, has always been one of the most complex tasks for a CGI artist to master. This is because a lot of information from the actual scene needs to be gathered correctly and combined to give the ‘equation’ of how the computer will adjust the computer image to fit. Naturally, much of this comes down to the skill and experience of the CG artist, who can interpret the correct solution for the scene when all the information is not available – if, for example, the scene to match is a stock image.
While there are many toolsets within existing digital content applications to ‘match’ a scene, they do require a steep learning curve, especially for artists who only need 3D once in a while for a quick product shot. So for many designers, it is quicker to hire a 3D artist to create a 3D image of a product than to make it themselves.
Substance 3D Stager from Adobe goes a long way towards fixing this issue for those artists who only need 3D once in a while.
Substance 3D Stager can take existing 3D models, or models available from its library (or the online Substance 3D Assets site), and enables artists who are totally new to 3D to add materials and their own 2D designs created in apps like Adobe Photoshop, Illustrator or similar.
When the artist is ready to render, a backplate image can be dropped into the scene, from which Substance 3D Stager can, through Adobe’s AI and machine-learning expertise, match both the camera perspective and lighting with a single click.
This tutorial will integrate Substance 3D Stager model assets into a stock image to show the power of this fantastic new tool.