Create LIDAR-based landscapes

Discover how to build procedural landscapes in Houdini using real-world elevation data from LIDAR surveys

3D World – Greg Barta. Greg is a VFX artist and scientist with a passion for cinematic scientific visualisations. Details, along with Houdini resources like the UI mods in these screenshots, are on his Artstation profile: scivfx.artstation.com

Build procedural landscapes in Houdini with survey data

Using landscape generator software, and also the terrain module in Houdini – especially with the enhancements in version 17 – we can effectively create fully procedural landscapes. However, none of these procedural workflows can compete with the complexity of the natural processes that shape real landscapes. Using real-world terrain elevation data as a starting point not only makes the result more grounded, but can also give you extra inspiration during the work. Although this may initially sound like a complex approach, you'll be surprised at how simple it is.

The renders shown in these tutorials are not finished projects but study scenes, and are not complex setups. The goal is to create landscapes for look-dev, concept art or the base layer of a matte painting, using just the elevation data and procedural shading, without any further detailing.

In the first part of this tutorial we will learn some basic LIDAR concepts and how to convert this data into a more useful terrain object. Since even the free Apprentice version of Houdini can export various geometry files, users of other 3D software can use it as a converter. The second part of this tutorial is software-independent, so even though I used Clarisse for the rendering of the snowy scene, you can use any node-based raytracer.

01 Fly around the globe

First we need to find somewhere that can serve as the basis of our landscape. Google Earth is a good tool for finding this inspiration, and it gives us quite a good sense of the landscape.

02 Gather data

We need to find where we can obtain the elevation data of the chosen location. If we are lucky, it will be available online; there is a link list of such sites on my Artstation profile. In this screenshot we can see the UK's official site for many types of earth science data sets: data.gov.uk. Here we can download not just the more well-known DEM (Digital Elevation Model) files, but also the raw point clouds of the measurements that came directly from the devices, usually LAS files.

03 What is LIDAR?

LIDAR (Light Detection And Ranging) is similar to RADAR (Radio Detection And Ranging), but it emits beams within the optical frequency range of the electromagnetic spectrum – hence the first two letters stand for 'light'. Since the speed of light is known, we can calculate distances using a laser and a sensor that detects the exact time delay of the reflected light pulse, similarly to how laser rangefinders work. The main difference is that LIDAR equipment performs this operation very frequently, some units more than a million times a second.
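The time-of-flight measurement above boils down to one formula – the pulse travels out and back, so the range is half the round trip. A minimal sketch (the helper name is ours, not part of any LIDAR toolkit):

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_time_of_flight(delay_s: float) -> float:
    """The pulse travels to the target and back, so halve the round trip."""
    return C * delay_s / 2.0

# A reflection detected after 2 microseconds puts the target roughly 300 m away.
print(range_from_time_of_flight(2e-6))
```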

04 LIDAR for earth science

LIDAR usage in the earth sciences is wide – these devices measure range and altitude, atmospheric vertical profiles of aerosols and gas densities, temperature, cloud cover, wind velocity and direction, the shape and size of landscape features, the height and density of forests, sea surface roughness and so on. They are mounted on aeroplanes or satellites to cover large areas.

We need the height measurements for this tutorial, though some of the other data can also be a good starting point, even for a VFX artist or game developer. This image is from a NASA Goddard visualisation; links are available on my Artstation.

05 Houdini LAS import

Luckily Houdini can directly import LAS files with the dedicated Lidar Import SOP node, so we don't need to convert them with the GIS software used by earth science professionals. The problem with this direct import is that these point clouds are basically raw data. Similarly to a raw photo, they contain all the original data of the acquisition, even errors and noise, so we need to process them for further use. If the extension is LAZ, it means the file is compressed, so we should first convert it to LAS, as Houdini reads only the uncompressed format.
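To get a feel for what the Lidar Import SOP is reading, here is a minimal sketch of pulling a few fields out of the LAS public header with Python's struct module. The byte offsets follow the ASPRS LAS 1.0–1.3 header layout; the helper name and the synthetic header are ours:

```python
import struct

def read_las_summary(header: bytes):
    """Parse a few fields of the LAS public header block (v1.0-1.3 layout):
    signature, version, point record format and point count."""
    if header[0:4] != b"LASF":
        raise ValueError("not a LAS file (LAZ files must be decompressed first)")
    major, minor = header[24], header[25]
    point_format = header[104]
    point_count = struct.unpack_from("<I", header, 107)[0]
    return (f"{major}.{minor}", point_format, point_count)

# Build a minimal fake 227-byte header just to demonstrate the parsing.
hdr = bytearray(227)
hdr[0:4] = b"LASF"
hdr[24], hdr[25], hdr[104] = 1, 2, 1
struct.pack_into("<I", hdr, 107, 5_000_000)
print(read_las_summary(bytes(hdr)))  # ('1.2', 1, 5000000)
```

Real survey tiles routinely hold millions of points per file, which is why the processing steps below matter.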

06 Process raw data

Even though Houdini is VFX software, we can use various SOP nodes to achieve data processing similar to what GIS software provides. First we should use a Transform node to orient the Z-up coordinate system to Y-up, then move the point cloud to the origin of our scene, otherwise it might be thousands of miles away. Surveys usually use a Cartesian coordinate system, and at the usual sizes the curvature of the Earth doesn't matter too much, so for CG work it's okay not to worry about this. These LAS data products ship with metadata files, which are basically the documentation and/or logbooks of the acquisition and include the coordinate system.
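The reorientation and recentring step can be sketched in a few lines – a rough stand-in for what the Transform SOP does (rotate −90° around X, then translate). The helper is hypothetical and the survey coordinates are made up:

```python
def to_scene_space(points):
    """Convert surveyed (easting, northing, elevation) Z-up points to a
    Y-up cloud centred on the scene origin. Z-up -> Y-up means elevation
    becomes scene Y and northing becomes -Z (a -90 degree turn around X)."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    zmin = min(p[2] for p in points)
    return [(x - cx, z - zmin, -(y - cy)) for x, y, z in points]

# Survey coordinates are often hundreds of kilometres from the origin.
pts = [(430000.0, 1250000.0, 12.0), (430010.0, 1250020.0, 15.0)]
print(to_scene_space(pts))  # [(-5.0, 0.0, 10.0), (5.0, 3.0, -10.0)]
```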

As usual with LAS point clouds, some points are close to each other, so we can use the Fuse node to snap them together and another one to merge them (or consolidate, in Houdini terms). This not only averages the noise of the individual data points, but also simplifies the parts of the point cloud that are too dense.
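The snap-and-consolidate idea can be approximated by bucketing points into grid cells and averaging each bucket – a rough stand-in for the Fuse SOP, with a hypothetical helper name:

```python
from collections import defaultdict

def fuse(points, snap=1.0):
    """Snap points to a grid of 'snap' size and average (consolidate) the
    points that fall into the same cell, denoising and decimating at once."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(round(c / snap) for c in p)
        cells[key].append(p)
    return [tuple(sum(c) / len(c) for c in zip(*grp)) for grp in cells.values()]

dense = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.2), (5.0, 0.0, 5.0)]
print(len(fuse(dense, snap=1.0)))  # the two nearby points merge -> 2
```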

07 Survey attributes

The Lidar Import node can read not just the coordinates but also some additional attributes of the points, like return count and return index. However, to get the most valuable attribute of the LAS format, the classification, we need to use GIS software for conversion. We can use classification similarly to the shop_materialpath attribute, but it has an official standard of assignments and pertains more to the category of the object than to its material. We can use it not just for defining materials, but also to drive scatterer nodes to lay down trees, buildings and so on according to the real-world layout.
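A sketch of using that standard: the codes below are a subset of the ASPRS LAS classification table, and the grouping helper (our own naming) is the kind of split you would feed to scatterers:

```python
# A subset of the ASPRS standard LAS classification codes.
LAS_CLASS = {1: "unassigned", 2: "ground", 3: "low vegetation",
             4: "medium vegetation", 5: "high vegetation",
             6: "building", 9: "water"}

def split_by_class(points):
    """Group (x, y, z, classification) records by category, e.g. to drive
    separate scatterers for trees and buildings."""
    groups = {}
    for *pos, cls in points:
        groups.setdefault(LAS_CLASS.get(cls, "other"), []).append(tuple(pos))
    return groups

cloud = [(0, 0, 1, 2), (1, 0, 8, 5), (2, 0, 6, 6), (3, 0, 1, 2)]
print(sorted(split_by_class(cloud)))  # ['building', 'ground', 'high vegetation']
```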

08 DEM

The DEM format is more familiar to CG artists, as it is preprocessed and easier to use. DEMs are usually available in a special TIFF image file format, which stores the elevation data for every pixel. In a nutshell, using CG terminology, these are orthographic top-view P.z AOV renders of the 3D elevation models, which are based on the LIDAR point cloud. There are two types: the DSM refers to the surface model and includes all the objects like trees and buildings; the DTM is the terrain model, which is a cleaned-up version using just the LIDAR returns from the ground, with the occluded areas interpolated.
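That "top-view P.z render" idea can be shown directly: rasterise the points into a grid and keep the highest hit per cell. This is a simplified sketch (helper name ours); filtering to ground-classified returns first would give a DTM instead of a DSM:

```python
def rasterize(points, size, cell=1.0):
    """Top-down elevation raster: keep the highest hit per cell (a DSM).
    Cells nothing falls into stay None (real pipelines interpolate these)."""
    grid = [[None] * size for _ in range(size)]
    for x, y, z in points:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < size and 0 <= j < size:
            if grid[j][i] is None or z > grid[j][i]:
                grid[j][i] = z
    return grid

pts = [(0.5, 0.5, 2.0), (0.6, 0.4, 9.0), (1.5, 0.5, 3.0)]
print(rasterize(pts, size=2))  # [[9.0, 3.0], [None, None]]
```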

09 Alien landscapes

While so many VFX and game scenes depict the surfaces of alien planets, it's likely that we can't find appropriate landscapes in Google Earth for such uses. However, there is the option to switch to other planets like Mars. There is an enormous amount of landscape data freely available, much of it at 1m resolution – check out the resources page on my Artstation site for relevant links. This render is based on a NASA data set.

10 Convert to surface

Houdini offers many options for processing the LIDAR point cloud further and generating a more useful terrain surface. Luckily the developers improved the Triangulate 2D node in Houdini 17; it is much faster and ideal for converting our point cloud elevation data to a polygonal surface. This is the closest method to the one scientists use to convert the data to DEM formats. We should switch on the Restore Original Point Positions parameter to get the expected result. Point Cloud Iso is the dedicated node for converting scan data to a surface, but it needs normals on the points, which we don't have with LAS files.
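Triangulate 2D performs a Delaunay triangulation of scattered points; for already-gridded samples the same connect-the-heights idea reduces to two triangles per cell. A simplified sketch of that special case (function name ours):

```python
def triangulate_grid(heights):
    """Connect a regular grid of height samples into triangles, two per
    cell -- a simplified, regular-grid take on point-cloud triangulation."""
    rows, cols = len(heights), len(heights[0])
    verts = [(x, heights[y][x], y) for y in range(rows) for x in range(cols)]
    tris = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            a = y * cols + x              # the cell's four corner indices
            b, c, d = a + 1, a + cols, a + cols + 1
            tris += [(a, b, c), (b, d, c)]
    return verts, tris

v, t = triangulate_grid([[0, 1], [2, 3]])
print(len(v), len(t))  # 4 vertices, 2 triangles
```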

11 Convert to SDF

An SDF (Signed Distance Field) is a kind of intermediate state between volumes and surfaces. It's based on a volume voxel grid, but instead of storing the density for each voxel, it stores a distance value in each voxel, defining an implicit surface. So let's try it and convert our point cloud with a VDB from Particles node using the Distance VDB setting. To render this kind of geometry we can either convert it to polygons or, if we use Mantra, add two hidden parameters to the OBJ container node: vm_volumeiso and vm_volumedensity.
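The core of a distance field is easy to sketch: sample the distance to the nearest point at every voxel centre, clamped to a narrow band. This toy version (names ours) is unsigned, since determining inside/outside needs normals or a closed surface:

```python
import math

def distance_field(points, size, voxel=1.0, band=2.0):
    """Sample the distance to the nearest point at each voxel centre,
    clamped to a narrow band -- the core idea behind a Distance VDB."""
    field = []
    for k in range(size):
        for j in range(size):
            for i in range(size):
                centre = ((i + 0.5) * voxel, (j + 0.5) * voxel, (k + 0.5) * voxel)
                d = min(math.dist(centre, p) for p in points)
                field.append(min(d, band))
    return field

f = distance_field([(0.5, 0.5, 0.5)], size=2)
print(min(f))  # 0.0 -- the voxel whose centre coincides with the point
```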

12 Keep the points

We can directly use the points as particles or instanced spheres for rendering. Some data sets are so dense that at particular distances this can work well, at least for quick look-dev purposes or even for scientific visualisations. With vegetation, this kind of direct usage makes the look more organic than the polygon surface. In Houdini the most memory-efficient way is to create a Copy to Points node and switch on the Pack and Instance parameter, then create a Sphere node with the Primitive type and use it as an instanced object.

13 Convert to metaballs

The ideal combination of the previous conversion options is to use metaballs. The points work like the droplets of a 3D printer – the metaballs stick together and fill the gaps between the points automatically. It is quite simple to achieve: we just need to create a Metaball SOP, then use a Copy to Points node to scatter this metaball onto all the points, and tweak the Radius and Weight parameters of the Metaball node to get a coherent surface. The GEO container node's Geometry tab has a Metaballs as Volume option. It's worth a try for rendering distant vegetation.
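Why metaballs bridge the gaps is easiest to see in the field function itself. A sketch using one common metaball kernel, (1 − (d/r)²)³ – Houdini's actual kernel and the helper name here are not taken from the source:

```python
import math

def metaball_field(points, radius, p):
    """Sum of falloff kernels: each point contributes (1 - (d/r)^2)^3
    inside its radius, so overlapping balls fuse into one surface."""
    total = 0.0
    for q in points:
        d = math.dist(p, q)
        if d < radius:
            total += (1.0 - (d / radius) ** 2) ** 3
    return total

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
# Midway between the two balls the kernels overlap, so the field stays
# well above zero and the surface bridges the gap between the points.
value = metaball_field(pts, 1.5, (0.5, 0.0, 0.0))
print(round(value, 3))
```

Growing the Radius parameter widens each kernel's support, which is exactly why tweaking it closes holes in sparse areas.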

14 Convert to height map

For further enhancements and detailing, the best method is to use the native terrain format of Houdini, the height field, but converting the point cloud directly is a bit complicated, so it's better to use the output of any of the previous conversions. A height field is basically similar to the DEM format, but the 2D image that stores the height data (technically one layer of a volume voxel grid) is automatically rendered/visualised as 3D geometry. If we have a DEM file, we can simply use the Heightfield File node to import it directly. To convert the previously generated LAS-based geometry, we can use the Heightfield Project node.

15 Untextured renders

It's worth doing some test renders with simple shading and lighting, just using our freshly generated terrain model. This is the point when the benefits of using real-world elevation data start to become clear. Even without any textures or objects, the pure model still looks natural and grounded. As we can see, reaching this step does not take too much time, and we now have a decent-looking and detailed terrain model. Now, using this pipeline, we can download other data sets, align them and then simply re-render.

16 Define lakes/rivers

In this scene I used elevation data from the USGS (United States Geological Survey) – they have pretty high-resolution data products, especially for some areas in Alaska. This is the area of Blue Lake, a three-mile-long reservoir. With such high resolution and data accuracy, it's easy to define the surface of lakes by selecting the areas within a narrow elevation value range. Since this is a DSM, you may see the shapes of some individual pines. I exaggerated the heights to get higher mountains.
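Selecting by a narrow elevation range is a one-liner in any language. A sketch on a tiny made-up DEM (helper name and tolerance are ours – a still water surface sits within centimetres of one level, so the band can be very tight):

```python
def lake_mask(heights, level, tolerance=0.25):
    """Mark the cells whose elevation sits within a narrow band around
    the water level -- a quick way to mask a reservoir in a DSM."""
    return [[abs(h - level) <= tolerance for h in row] for row in heights]

dem = [[101.3, 100.1], [100.2, 99.9]]
print(lake_mask(dem, level=100.0))  # [[False, True], [True, True]]
```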

17 Procedural shading

We could use satellite images for the textures, but they usually don't fit the needs of the production, as they are photographs with fixed lighting conditions. In this Alaska scene I built a shading network which to some extent simulates the behaviour of a real snow cover.

This screenshot shows the back end of the original shading network. In the next step you can see the first part, because I baked the results to speed up the rendering. For snow it is important to use a high albedo like 0.9, even if it looks too bright in the raw render. It doesn't need subsurface scattering at this distance, but high roughness can make it more realistic.

18 Snow cover texture

This is the network that generates the pattern of the snow cover. The utility node defines the melting height, and there is also a noise procedural texture which adds some turbulent patterns. One Occlusion node gathers snow into the deeper areas, where the wind builds it up; the other is directional – set to something like the statistical average of the wind direction in this area – leaving the wind-exposed areas uncovered.
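The same ingredients can be combined in a scalar weight function. This is our own simplification of the network described above, not its actual node logic – the inputs and weightings are illustrative:

```python
def snow_weight(height, occlusion, wind_exposure, melt_height, noise=0.0):
    """Toy snow-cover mask: nothing below the (noise-perturbed) melting
    height, extra build-up in occluded hollows, bare wind-exposed faces.
    occlusion and wind_exposure are assumed to be in the 0-1 range."""
    if height + noise < melt_height:
        return 0.0
    w = 1.0 + 0.5 * occlusion - wind_exposure
    return max(0.0, min(1.0, w))

# A high, sheltered hollow gets full cover; a low slope gets none.
print(snow_weight(1200.0, occlusion=0.6, wind_exposure=0.2, melt_height=800.0))
print(snow_weight(500.0, occlusion=0.0, wind_exposure=0.0, melt_height=800.0))
```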

19 Fur for distant foliage

For distant foliage it's not recommended to use detailed tree models, as they tend to flicker even with high sampling settings. Especially for look-dev and concept work, we can use the hair/fur primitives of the renderer. With proper shading they can look like a forest from a bird's-eye view, and we can use the same nodes for the scattering as we used for the snow to define their pattern, which is based on height and slope.

20 Atmos FX

For scenes of this scale it's important to include aerial perspective effects from the beginning, as they significantly affect the sense of scale. The fastest and easiest way is to simply use a blue-tinted version of the depth AOV. Additionally, we can put this AOV on top of the image twice: once with a brown/amber tint and Multiply layer mode, and once with a blue tint and Add mode. Of course we can get more realistic results with volumetric objects, but this can often result in far greater render times.
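Per pixel, that two-layer comp is a depth-weighted multiply followed by a depth-weighted add. A sketch with made-up tint colours (the function and its defaults are illustrative, not the article's exact settings):

```python
def aerial_perspective(rgb, depth, amber=(0.8, 0.6, 0.4), blue=(0.2, 0.35, 0.6)):
    """Cheap depth-AOV haze: multiply the pixel towards an amber tint
    with distance, then add a depth-weighted blue layer on top.
    depth is assumed to be normalised to the 0-1 range."""
    mult = [1.0 - depth * (1.0 - a) for a in amber]   # lerp(1, amber, depth)
    return tuple(c * m + b * depth for c, m, b in zip(rgb, mult, blue))

# A far pixel (depth 1.0) picks up the full amber multiply and blue add.
print(aerial_perspective((0.5, 0.5, 0.5), depth=1.0))
```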

21 Lighting

It's important to use real values for light sources and shaders to achieve renders with correct colour and global illumination values. In this image we can see the render without advanced colour management – with real values it can look strange, so it's also recommended to use solutions like ACES, Filmic Blender, SPI-VFX etc. This is a golden hour scene, so I used a dome light for the sky, a distant light with a 0.5-degree spread for the sun, and another distant light with a few degrees of spread and an orange colour to simulate the scattering of the distant clouds around the sun.
