3D World

Create LIDAR-based landscapes

Discover how to build procedural landscapes in Houdini using real-world elevation data from LIDAR surveys

- Greg Barta Greg is a VFX artist and scientist with a passion for cinematic scientific visualisations. Details, along with Houdini resources like the UI mods in these screenshots, are on his ArtStation profile. scivfx.artstation.com

Build procedural landscapes in Houdini with survey data

Using landscape generator software, and also the terrain module in Houdini – especially with the enhancements in version 17 – we can effectively create fully procedural landscapes. However, none of these procedural workflows can compete with the complexity of the natural processes that shape real landscapes. Using real-world terrain elevation data as a starting point not only makes the result more grounded, but can also give you extra inspiration during the work. Although this may initially sound like a complex approach, you'll be surprised at how simple it is.

The renders shown in these tutorials are not finished projects but study scenes, and they are not complex setups. The goal is to create landscapes for look-dev, concept art or the base layer of a matte painting, using just the elevation data and procedural shading, without any further detailing.

In the first part of this tutorial we will learn some basic LIDAR concepts and how to convert this data into a more useful terrain object. Since even the free Apprentice version of Houdini can export various geometry files, users of other 3D software can use it as a converter. The second part of this tutorial is software-independent, so even though I used Clarisse to render the snowy scene, you can use any node-based raytracer.

01 Fly around the globe

First we need to find somewhere that can serve as the basis of our landscape. Google Earth is a good tool for finding this inspiration, as it gives quite a good sense of the landscape.

02 Gather data

Next we need to find out where we can obtain the elevation data for the chosen location. If we are lucky, it will be available online; there is a list of such sites on my ArtStation profile. In this screenshot we can see the UK's official site for many types of earth science data sets: data.gov.uk. Here we can download not just the more well-known DEM (Digital Elevation Model) files, but also the raw point clouds of the measurements that came directly from the devices, usually as LAS files.

03 What is LIDAR?

LIDAR is similar to RADAR (Radio Detection And Ranging), but it emits beams within the optical range of the electromagnetic spectrum, so the first two letters stand for light in this case. Since the speed of light is known, we can calculate distances using a laser and a sensor that detects the exact time delay of the reflected light pulse – similar to how laser rangefinders work. The main difference is that LIDAR equipment performs this operation very frequently, in some cases more than a million times a second.
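The time-of-flight principle above can be sketched in a few lines of Python; the pulse delay used here is an illustrative value, not from a real survey:

```python
# Time-of-flight ranging: distance = (speed of light * round-trip delay) / 2.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def lidar_range(delay_seconds):
    """Return the one-way distance to the target from a round-trip pulse delay."""
    return C * delay_seconds / 2.0

# A pulse that returns after ~6.67 microseconds hit a target roughly 1 km away.
print(round(lidar_range(6.671e-6)))  # -> 1000
```

Dividing by two accounts for the pulse travelling to the target and back; a device firing a million times a second repeats exactly this calculation per return.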

04 LIDAR for earth science

LIDAR has wide usage in the earth sciences – it measures range and altitude, atmospheric vertical profiles of aerosols and gas densities, temperature, cloud cover, wind velocity and direction, the shape and size of landscape features, the height and density of forests, sea surface roughness and so on. These devices are flown on aeroplanes or satellites to cover large areas.

We need the height measurements for this tutorial, though some of the other data can also be a good starting point for a VFX artist or game developer. This image is from a NASA Goddard visualisation; links are available on my ArtStation profile.

05 Houdini LAS import

Luckily Houdini can directly import LAS files with the dedicated Lidar Import SOP node, so we don't need to convert them with the GIS software used by earth science professionals. The problem with this direct import is that these point clouds are basically raw data. Similar to a raw photo, they contain all the original data of the acquisition, including errors and noise, so we need to process them for further use. If the extension is LAZ, the file is compressed, so we should first convert it to LAS, as Houdini reads only the latter format.

06 Process raw data

Although Houdini is VFX software, we can use various SOP nodes to achieve data processing similar to what GIS software provides. First we should use a Transform node to reorient the Z-up coordinate system to Y-up, then move the point cloud to the origin of our scene, otherwise it might sit thousands of miles away. These data sets usually use a Cartesian coordinate system, and at the usual sizes the curvature of the Earth doesn't matter much, so for CG work it's okay not to worry about it. LAS data products also ship with metadata files, which are basically the documentation and/or logbooks of the acquisition and include the coordinate system.

As usual with LAS point clouds, some points lie very close to each other, so we can use a Fuse node to snap them together and another one to merge them (or consolidate, in Houdini terms). This not only averages the noise of the individual data points, but also simplifies the parts of the point cloud that are too dense.
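The snap-and-consolidate idea can be sketched in pure Python: bin points into a voxel grid and average the members of each cell. This is a rough stand-in for the Fuse SOP, with an assumed tolerance in metres:

```python
# Fuse nearby points by snapping them to a voxel grid and averaging the
# points that land in the same cell - roughly what snapping plus
# consolidating does in Houdini. Tolerance is an assumed value.
from collections import defaultdict

def fuse(points, tolerance=0.5):
    cells = defaultdict(list)
    for p in points:
        key = tuple(round(c / tolerance) for c in p)  # voxel index per axis
        cells[key].append(p)
    # Collapse every cell's members into one averaged point.
    return [tuple(sum(c) / len(grp) for c in zip(*grp)) for grp in cells.values()]

pts = [(0.0, 0.0, 10.0), (0.1, 0.0, 10.2), (5.0, 0.0, 12.0)]
print(len(fuse(pts)))  # the two near-duplicates collapse -> 2
```

Averaging within a cell is what smooths out the per-point measurement noise while thinning the over-dense regions.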

07 Survey attributes

The Lidar Import node can also read some additional point attributes like return count and return index, not just the coordinates. However, to get the most valuable attribute of the LAS format, the classification, we need to use GIS software for conversion. We can use classification similarly to the shop_materialpath attribute, but it has an official standard of assignments and pertains more to the category of an object than to its material. We can use it not just for defining materials, but also to drive scatter nodes to lay down trees, buildings and so on according to the real-world layout.
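The standard classification codes come from the ASPRS LAS specification; a few of the common ones are shown below, with made-up sample points to show how the attribute can drive per-category scattering:

```python
# A few commonly used ASPRS classification codes from the LAS specification.
ASPRS_CLASSES = {
    1: "unclassified",
    2: "ground",
    3: "low vegetation",
    4: "medium vegetation",
    5: "high vegetation",
    6: "building",
    9: "water",
}

# Points as (x, y, z, classification); the values are illustrative only.
points = [(0, 0, 5, 2), (1, 0, 9, 5), (2, 0, 8, 6), (3, 0, 5, 2)]

# Split the cloud per class, e.g. ground points feed the terrain while
# high-vegetation points drive a tree scatterer.
ground = [p for p in points if p[3] == 2]
trees = [p for p in points if p[3] == 5]
print(len(ground), len(trees))  # -> 2 1
```

The same split could be done in Houdini with a Split or Blast node filtering on the imported classification attribute.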

08 DEM

The DEM format is more familiar to CG artists, as it is preprocessed and easier to use. DEMs are usually delivered in a special TIFF image format, which stores the elevation data for every pixel. In a nutshell, using CG terminology, these are orthographic top-view P.z AOV renders of the 3D elevation models, which are based on the LIDAR point cloud. There are two types: the DSM is the surface model and includes all the objects like trees and buildings; the DTM is the terrain model, a cleaned-up version using just the LIDAR returns from the ground, with the occluded areas interpolated.
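A useful consequence of the DSM/DTM split: subtracting the two rasters gives the height of whatever sits on the ground (trees, buildings). The tiny grids below stand in for real elevation rasters, with illustrative values in metres:

```python
# DSM minus DTM yields a normalised height model: how far each pixel's
# surface (trees, buildings) rises above the bare ground.
dsm = [[120.0, 135.0, 121.0],
       [119.0, 140.0, 122.0]]
dtm = [[120.0, 118.0, 121.0],
       [119.0, 119.0, 122.0]]

ndsm = [[s - t for s, t in zip(srow, trow)] for srow, trow in zip(dsm, dtm)]
print(ndsm)  # -> [[0.0, 17.0, 0.0], [0.0, 21.0, 0.0]]
```

Here the two non-zero pixels read as a ~17m and a ~21m tree or building; zeros are bare ground. Such a difference map is handy for masking where to scatter vegetation.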

09 Alien landscapes

While so many VFX and game scenes depict the surfaces of alien planets, we likely can't find appropriate landscapes in Google Earth for such uses. However, there is the option to switch to other planets like Mars. An enormous number of landscapes is freely available, many at 1m resolution – check out the resources page on my ArtStation site for relevant links. This render is based on a NASA data set.

10 Convert to surface

Houdini offers many options for processing the LIDAR point cloud further and generating a more useful terrain surface. Luckily the developers improved the Triangulate 2D node in Houdini 17; it is much faster and ideal for converting our point cloud elevation data into a polygonal surface. This is the method most similar to the one scientists use to convert the data to DEM formats. We should switch on the Restore Original Point Positions parameter to get the expected result. Point Cloud Iso is the dedicated node for converting scan data to a surface, but it needs normals on the points, which we don't have with LAS files.

11 Convert to SDF

An SDF (Signed Distance Field) is a kind of intermediate state between volumes and surfaces. It's based on a volume voxel grid, but instead of storing a density for each voxel, it stores a distance value, defining an implicit surface. So let's try it and convert our point cloud with a VDB from Particles node using the Distance VDB setting. To render this kind of geometry we can either convert it to polygons or, if we use Mantra, add two hidden parameters to the OBJ container node: vm_volumeiso and vm_volumedensity.
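The signed-distance idea can be sketched directly: for each sample, take the distance to the nearest cloud point and subtract a particle radius, so negative values fall inside the implied surface. This is a conceptual stand-in for VDB from Particles, with an assumed radius:

```python
# Signed distance for a union of spheres around the cloud points:
# distance to the nearest point minus the particle radius. Negative
# values are inside the implicit surface, zero lies exactly on it.
import math

def sdf(sample, points, radius=1.0):
    nearest = min(math.dist(sample, p) for p in points)
    return nearest - radius

cloud = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(sdf((0.0, 0.0, 0.0), cloud))  # at a cloud point -> -1.0 (inside)
print(sdf((1.0, 0.0, 0.0), cloud))  # midway -> 0.0 (on the surface)
```

A real Distance VDB evaluates this per voxel on a sparse grid rather than per arbitrary sample, but the stored quantity is the same.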

12 Keep the points

We can directly use the points as particles or instanced spheres for rendering. Some data sets are so dense that at particular distances this can work well, at least for quick look-dev purposes or even for scientific visualisations. With vegetation, this kind of direct usage makes the look more organic than the polygon surface. In Houdini the most memory-efficient way is to create a Copy to Points node and switch on the Pack and Instance parameter, then create a Sphere node with a Primitive type and use it as the instanced object.

13 Convert to metaballs

The ideal combination of the previous conversion options is to use metaballs. The points work like the droplets of a 3D printer – the metaballs stick together and fill the gaps between the points automatically. It is quite simple to achieve: we just need to create a Metaball SOP, then use a Copy to Points node to scatter this metaball onto all the points, and tweak the Radius and Weight parameters of the Metaball node to get a coherent surface. The GEO container node's Geometry tab has a Metaballs as Volume option; it's worth a try for rendering distant vegetation.
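The "sticking together" comes from metaballs being a summed density field: each point contributes a kernel that fades to zero at its radius, and the surface is wherever the sum crosses a threshold. A sketch using the classic soft-object falloff, with assumed radius values:

```python
# Metaballs as a summed density field. Each point contributes a smooth
# kernel; where kernels overlap, the field rises and the blobs merge.
import math

def kernel(distance, radius):
    if distance >= radius:
        return 0.0
    q = (distance / radius) ** 2
    return (1.0 - q) ** 3  # smooth falloff: 1 at the centre, 0 at the edge

def field(sample, points, radius=1.5):
    return sum(kernel(math.dist(sample, p), radius) for p in points)

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
# Midway between two close points the kernels overlap, so the field there
# is stronger than near an isolated edge of the cloud.
print(field((0.5, 0.0, 0.0), cloud) > field((2.0, 0.0, 0.0), cloud))  # -> True
```

Tweaking Radius and Weight in the Metaball SOP is effectively tuning this kernel until neighbouring points fuse into one coherent surface.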

14 Convert to height map

For further enhancements and detailing, the best method is to use the native terrain format of Houdini, the height field. Converting the point cloud directly is a bit complicated, so it's better to use the output of any of the previous conversions. A height field is basically similar to the DEM format, but the 2D image that stores the height data (technically one layer of a volume voxel grid) is automatically rendered/visualised as 3D geometry. If we have a DEM file, we can simply use the HeightField File node to import it directly. To convert the previously generated LAS-based geometry, we can use the HeightField Project node.
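Conceptually, projecting geometry into a height field is a rasterisation: bin samples into a 2D grid over X/Z and keep one height per cell. A minimal sketch (keeping the highest return per cell, DSM-style; cell size is an assumed value):

```python
# Rasterise a point cloud into a height field: bin points into a 2D grid
# over X/Z and keep the highest Y per cell.
def to_heightfield(points, cell=1.0):
    grid = {}
    for x, y, z in points:
        key = (int(x // cell), int(z // cell))
        grid[key] = max(grid.get(key, float("-inf")), y)
    return grid

pts = [(0.2, 10.0, 0.3), (0.7, 14.0, 0.1), (3.5, 11.0, 0.2)]
hf = to_heightfield(pts)
print(hf[(0, 0)], hf[(3, 0)])  # tallest return wins per cell -> 14.0 11.0
```

A DTM-style version would keep the lowest return instead, and empty cells would then need interpolating from their neighbours, as described for DEMs earlier.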

15 Untextured renders

It's worth doing some test renders with simple shading and lighting, using just our freshly generated terrain model. This is the point where the benefits of using real-world elevation data start to become clear. Even without any textures and objects, the pure model still looks natural and grounded. As we can see, reaching this step does not take too much time, and we now have a decent-looking, detailed terrain model. Now, using this pipeline, we can download other data sets, align them and simply re-render.

16 Define lakes/rivers

In this scene I used elevation data from the USGS (United States Geological Survey) – they have pretty high-resolution data products, especially for some areas of Alaska. This is the area of Blue Lake, a three-mile-long reservoir. With such high resolution and data accuracy it's easy to define the surface of lakes by selecting areas within a narrow range of elevation values. Since this is a DSM, you can even see the shapes of some individual pines. I exaggerated the heights to get higher mountains.
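The lake selection can be sketched as a simple mask: flat water reads as a nearly constant elevation, so cells inside a narrow height band are flagged as water. The lake level and tolerance below are made-up values:

```python
# Build a water mask by selecting grid cells within a narrow elevation
# band around the (assumed) lake surface level.
def water_mask(heights, lake_level, tolerance=0.2):
    return [[abs(h - lake_level) < tolerance for h in row] for row in heights]

heights = [[152.1, 152.0, 180.5],
           [151.9, 152.1, 210.0]]
print(water_mask(heights, lake_level=152.0))
# -> [[True, True, False], [True, True, False]]
```

In Houdini the equivalent is a HeightField Mask by Feature or a ramped height-based mask; the accuracy of survey data is what makes such a narrow band reliable.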

17 Procedural shading

We can use satellite images for the textures, but they usually don't fit the needs of production, as they are photographs with fixed lighting conditions. In this Alaska scene I built a shading network that simulates, to some extent, the behaviour of a real snow cover.

This screenshot shows the back end of the original shading network. In the next step you can see the first part, because I baked the textures to speed up the rendering. For snow it is important to use a high albedo, around 0.9, even if it looks too bright in the raw render. It doesn't need subsurface scattering at this distance, but a high roughness can make it more realistic.

18 Snow cover texture

This is the network that generates the pattern of the snow cover. A utility node defines the melting height, and a procedural noise texture adds some turbulent patterns. One Occlusion node gathers snow into the deeper areas, where the wind builds it up; the other is directional – representing the statistical average of the wind direction in this area – and leaves the wind-exposed areas uncovered.
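The logic of that network can be sketched as one function: snow above a melting height, broken up by noise, boosted in occluded hollows and stripped from wind-exposed faces. The weights and the cheap hash-style noise below are stand-ins for the real procedural textures and Occlusion nodes:

```python
# Sketch of the snow-cover pattern: height threshold + noise break-up
# + occlusion-driven drifts - wind exposure. All weights are assumptions.
import math

def noise(x, z):
    # Cheap repeatable pseudo-noise in [0, 1]; a stand-in, not Perlin.
    return math.sin(x * 12.9898 + z * 78.233) * 0.5 + 0.5

def snow_amount(height, x, z, occlusion, wind_exposure, melt_height=1500.0):
    base = 1.0 if height > melt_height else 0.0
    base *= 0.7 + 0.3 * noise(x, z)   # turbulent break-up of the edge
    base += 0.3 * occlusion           # drifts gather in sheltered hollows
    base -= 0.4 * wind_exposure       # wind strips exposed faces bare
    return min(max(base, 0.0), 1.0)

# A sheltered high-altitude sample holds more snow than a wind-blasted one.
print(snow_amount(2000, 0, 0, occlusion=1.0, wind_exposure=0.0) >
      snow_amount(2000, 0, 0, occlusion=0.0, wind_exposure=1.0))  # -> True
```

The resulting 0-1 value is what gets baked to a texture and reused later to drive the foliage scattering as well.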

19 Fur for distant foliage

For distant foliage it's not recommended to use detailed tree models, as they tend to flicker even with high sampling settings. Especially for look-dev and concept work, we can use the hair/fur primitives of the renderer. With proper shading they can look like a forest from a bird's-eye view, and to define their pattern we can use the same height- and slope-based scattering nodes as we used for the snow.

20 Atmos FX

For scenes of this scale it's important to include aerial perspective effects from the beginning, as they significantly affect the sense of scale. The fastest and easiest way is to simply use a blue-tinted version of the depth AOV. We can also layer this AOV on top of the image twice: once with a brown/amber tint and a Multiply layer mode, and once with a blue tint and an Add mode. Of course we can get more realistic results with volumetric objects, but this often results in far greater render times.
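The two-layer trick can be sketched per pixel: the depth value drives a multiply towards a warm tint and an additive lift towards haze blue. This works on normalised 0-1 values; the tint colours are assumptions, not the ones used in the article's comp:

```python
# Aerial perspective from the depth AOV: a warm Multiply pass plus a cool
# Add pass, both weighted by normalised depth. Tints are assumed values.
def aerial(pixel, depth, warm=(0.55, 0.45, 0.35), cool=(0.25, 0.45, 0.9)):
    out = []
    for channel, w, c in zip(pixel, warm, cool):
        multiplied = channel * (1.0 - depth + depth * w)  # lerp towards warm tint
        out.append(min(multiplied + depth * c, 1.0))      # add scaled cool tint
    return tuple(out)

near = aerial((0.5, 0.5, 0.5), depth=0.0)
far = aerial((0.5, 0.5, 0.5), depth=1.0)
print(near)             # unchanged up close -> (0.5, 0.5, 0.5)
print(far[2] > far[0])  # distant pixels shift towards blue -> True
```

Because the whole effect is driven by one AOV, it can be dialled in after rendering, which is exactly why it is so much cheaper than true volumetrics.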

21 Lighting

It's important to use real values for light sources and shaders to achieve renders with correct colour and global illumination. Without advanced colour management, as in this image, real values can look strange, so it's also recommended to use solutions like ACES, Filmic Blender or SPI-VFX. This is a golden hour scene, so I used a dome light for the sky, a distant light with a 0.5-degree spread for the sun, and another distant light with a few degrees of spread and an orange colour to simulate the scattering of the distant clouds around the sun. •

