Maggie Oh

LIVECGX brought CG-enhanced improvisational performance to London Fashion Week – and it was all controlled by the performer

3D Artist – Maggie Oh, Technical PM at Lucasfilm's ILMXLAB

The ILMXLAB technical PM talks motion capture in the fashion world

A little over a year ago, my manager pitched an idea to me. He said, "Just hear what I have to say with an open mind. If you're not in, okay, but it's impossible to imagine you won't be." The result was LIVECGX, which represents the first step towards a completely new type of live performance – one that connects real-time visual effects and human-driven expression. It's an evolution of Lucasfilm's rich history with technology and storytelling.

LIVECGX has the potential to be used for any film franchise, as well as non-movie applications like sports, music and fashion. The latter helped shape our first public deployment at London Fashion Week (LFW): a fashion and storytelling collaboration with designer Steven Tai (steventai) of the University of the Arts London's Fashion Innovation Agency, arranged via Matthew Drinkwater and the GREAT British campaign for 2018's LFW.

Steventai's Autumn/Winter 2018 fashion presentation marked the global debut of LIVECGX. It was used to digitally transform the venue as well as pieces from steventai's Macau-inspired collection. On a giant LED display in Durbar Court, located in the Foreign and Commonwealth Office in Westminster, London, we saw the setting transform in real time with elements from Macau layered onto the environment. While live models showed off steventai's latest Autumn/Winter collection on stage, another model performed in a motion-capture suit. Her avatar was visible within the environment on screen, modelling two steventai-designed digital garments.

While historically much has been done with sophisticated projection in highly choreographed performances, what made this presentation unique was that the digital elements were responding in real time to an improvisational performance. Rather than the digital presentation controlling the performance, now the performer was driving the presentation.

Microsoft Kinects were used to capture the depth buffer of the people on stage and in the audience, because we wanted to composite the CG elements with the live video feed of Durbar Court in real time. This allowed for convincing interaction between the avatar and the models – they could hug each other, look at each other and pass by one another. Eric Landreneau piped the depth buffer into Unreal Engine's compositing module, Composure, to accomplish this. Peter Malnai also devised a waypoint-driven traversal system so that a CG character could auto-locomote between waypoints without colliding into real-life objects. The system was set up so that a gestural trigger started the auto-movement, which was shown in the finale when the avatar walked among the on-stage models.
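The idea behind the depth-buffer compositing can be sketched in a few lines: for each pixel, the CG avatar is shown only where it is nearer to the camera than the depth the Kinect measured for the real scene, so a model standing in front of the avatar naturally occludes her. This is a minimal, hypothetical illustration – the function name, the flat list-of-pixels representation and the sample values are mine, not ILMXLAB's actual Composure setup.

```python
def depth_composite(live_rgb, live_depth, cg_rgb, cg_depth):
    """Per-pixel depth test: the nearer surface wins (smaller depth = closer).

    live_rgb/live_depth come from the camera feed and the Kinect depth buffer;
    cg_rgb/cg_depth come from the real-time render. Pixels where the CG layer
    has nothing to show carry an 'infinite' depth and never win the test.
    """
    out = []
    for live_px, live_d, cg_px, cg_d in zip(live_rgb, live_depth, cg_rgb, cg_depth):
        out.append(cg_px if cg_d < live_d else live_px)
    return out

# Toy frame, three pixels wide: a real model 2 m away, the avatar rendered
# at 3 m, and bare background at 5 m. The model occludes the avatar; the
# avatar covers the background where she is the nearest surface.
live  = ["model", "wall", "wall"]
ldep  = [2.0,     5.0,    5.0]
cg    = ["avatar", "avatar", None]
cdep  = [3.0,      3.0,     float("inf")]
print(depth_composite(live, ldep, cg, cdep))  # → ['model', 'avatar', 'wall']
```

In production this comparison runs per pixel on the GPU inside the engine's compositing pass, but the occlusion logic is the same depth test.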

The garments were created in Marvelous Designer, with cloth simulation in Unreal Engine. In Unreal, Omar Skarsvaag simulated the movement and drapery of the items of clothing. Yoon Kim and Mohammad Modarres modelled and look-dev'd the Origin and Destination garments shown in the presentation.

The environment enhancements were created by Ben Nadler and Tommy Alvarez Rodriguez. Throughout the show, the audience could see the environment change from Macau's jungle to Macanese neon signs. We wanted to show that we could transform both the environment and garments in real time.

Lucasfilm was extremely fortunate to have access to the ILM Chiswick Stage. Matt Rank, Chris Jestico, Jack Brown and Laura Millar were all very helpful in not only loaning us their set of Vicon Vero cameras, but also with the camera set-up and testing at Durbar Court. The Vicons were used to set up a motion capture stage at the show so that Vita Oldershaw, our mocap artist, could drive the performance of the avatar.

The next steps for LIVECGX will be to explore ways of letting audiences view the content through handheld devices. We would like to get this sandbox toolset into performers' hands to create a unique experience for their audience members.

We would also like to scale LIVECGX up to be part of a live sports event or concert in a larger venue. Ron Radeztsky is already planning a refined code architecture that is scalable enough for deployment at events like this, and we'd like to enhance the sandbox nature of LIVECGX as a platform. LFW was exhilarating for the team, and we would love to continue our fashion collaborations in the near future as well.

About the LFW2018 Collaborators

ILMXLAB is Lucasfilm's immersive entertainment division based in San Francisco, California. Steventai, London College of Fashion's Innovation Agency and the GREAT British campaign have collaborated with ILMXLAB to bring this unique fashion presentation to life.
