Linux Format

Nvidia Jetson AGX Xavier

Can a tiny black box really be at the heart of Nvidia’s attempt to crack open the artificial intelligence market? Gareth Halfacree investigates.

Mention the name Nvidia and most people will immediately think of gaming. Mention the name around Linux users, and some may bring to mind Linus Torvalds’ infamous gesticulation-punctuated rant against the company’s poor Linux support, during a 2012 question-and-answer session at the Finnish Aalto Centre for Entrepreneurship.

While the bulk of Nvidia’s revenue does indeed come from the gaming market, and the company’s support for Linux on its mainstream product families has traditionally been lacklustre, that’s not the full story. For several years now the company has been working on its own Linux platform, Linux4Tegra, a tweaked version of Canonical’s Ubuntu, as part of an attempt to become the de facto provider of high-performance embedded hardware for deep learning and artificial intelligence tasks.

“AI-powered autonomous machines and intelligent systems are going to impact nearly every industry,” Nvidia’s Jesse Clayton claimed during a recent press briefing. “In manufacturing, today about 10 per cent of tasks are automated and the remaining 90 per cent can’t be automated because they’re too hard for today’s fixed-function robots. But using GPUs and AI, our customers are starting to take on that remaining 90 per cent.”

Some AI tasks are best given over to cloud servers or high-end workstations, but autonomous vehicles, smart robots and other local devices don’t have that option. The solution is edge computing: high-performance, low-power hardware that can provide AI-focused computation without the space or power requirements of a traditional workstation.

Nvidia’s first attempt at breaking into the market was the Jetson TK1, a single-board computer based on its in-house Tegra K1 system-on-chip (SoC). Priced at £200 on its UK launch in 2014, it was a device that couldn’t decide quite what it wanted to be: it couldn’t compete directly with the far cheaper Raspberry Pi in the hobbyist market, but being distributed via high-street electronics retailer Maplin meant it was largely overlooked in the professional market.

The TK1 was soon replaced with the Jetson TX1, a much more powerful device. Gone was the Maplin presence in favour of more enterprise-friendly distributors, and the pricing began a steady climb too: the TX2 that followed was more expensive still, and the Jetson AGX Xavier’s launch brings the range up to a hobbyist-unfriendly £1,199.

There’s a definite method to Nvidia’s apparent madness: the company is targeting the autonomous machines market, and it’s splashing the cash to do so. While its partnership with Tesla to provide the hardware to power vehicles’ semi-autonomous Autopilot system didn’t work out, it has established offices and development centres dedicated to proving that its hardware can bring AI smarts to the edge – and it’s entirely focused on using Linux to do so.

The Jetson AGX Xavier itself is available in two variants: the core system-on-module (SOM), designed for integration into finished products; and the Developer Kit, a standalone device dominated by a black heatsink and fan assembly, which places the SOM onto a carrier board and breaks out its most common ports and pins. The Jetson AGX Xavier Developer Kit comes with Nvidia’s Linux4Tegra – based on Canonical’s Ubuntu 18.04.1 LTS release – pre-installed on its 32GB eMMC storage. Reinstallation isn’t just a case of inserting a live USB stick into the side and rebooting: all Jetson boards rely on a tool called JetPack, which downloads and installs the operating system via a USB or network connection, alongside toolchains and other supporting software.
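In broad strokes, that means putting the board into force-recovery mode, connecting it to an x86 Ubuntu host over USB, and letting the Linux_for_Tegra tree that JetPack downloads do the work. A minimal sketch, assuming the L4T release current at the Xavier’s launch (script and target names can shift between versions, so treat them as assumptions):

   # On the host PC, inside the Linux_for_Tegra directory that
   # JetPack unpacks, with the Xavier held in force-recovery mode:
   sudo ./flash.sh jetson-xavier mmcblk0p1
   # 'jetson-xavier' selects the board configuration;
   # 'mmcblk0p1' targets the root partition on the 32GB eMMC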

JetPack jitters

It’s here, five years after the first release of the JetPack tool, that some of the rougher edges of Nvidia’s rapid iteration approach can be found. Installing the software on an Ubuntu desktop adds ‘arm64’ as a foreign architecture, so that a cross-compilation toolchain can be installed. This then makes apt report ‘404’ errors on the default Ubuntu repositories until you manually exclude the architecture in your sources.list. The documentation, too, is not always the clearest, nor guaranteed to be up to date. For seasoned developers, however, especially those familiar with Nvidia’s CUDA general-purpose GPU (GPGPU) ecosystem, it’s good enough.
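The reason for those 404s is that Ubuntu’s main mirrors only carry amd64 packages (arm64 builds live on ports.ubuntu.com), so once arm64 is enabled apt starts requesting package indexes that simply aren’t there. A minimal sketch of the fix, assuming a stock Ubuntu 18.04 /etc/apt/sources.list, pins each default entry to amd64:

   # /etc/apt/sources.list – restrict the standard mirrors to amd64
   # so 'apt update' stops requesting non-existent arm64 indexes
   deb [arch=amd64] http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse
   deb [arch=amd64] http://archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse
   deb [arch=amd64] http://security.ubuntu.com/ubuntu bionic-security main restricted universe multiverse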

The hardware itself is undeniably impressive. A high-performance, eight-core, 64-bit processor, based on Nvidia’s in-house Carmel Arm cores, feeds data to the real heart of the system: a 512-core Volta-architecture GPU which shares the 16GB of LPDDR4 memory. As well as its graphics cores, the design includes 64 Tensor cores built specifically for deep learning workloads – which can be accelerated still further using the two onboard Nvidia Deep Learning Accelerator (DLA) cores. For computer vision projects, there’s even a seven-way VLIW vision processor, linked to hardware video encode and decode support operating at 4K resolution.

Actually harnessing this hardware, though, can be awkward. Using the GPU is easy enough across a range of neural network workloads, but moving those workloads onto the DLA cores limits you to FP16 operation, with INT8 promised for a future update. The vision processor, too, is currently unavailable, pending the release of supporting software.
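Targeting a DLA core is typically done through Nvidia’s TensorRT inference library. As a hedged sketch – the trtexec tool ships with TensorRT, but flag names have shifted between releases, and model.onnx here is a stand-in for your own network – building and timing an FP16 engine on the first DLA core looks something like this:

   # Build and benchmark an inference engine on DLA core 0 at FP16;
   # layers the DLA can't handle fall back to the GPU rather than
   # failing the build
   trtexec --onnx=model.onnx --fp16 --useDLACore=0 --allowGPUFallback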

Assuming this support lands as promised, the Jetson AGX Xavier is a powerhouse. Literally so: operating in ‘MAXN’ mode, which unlocks its full performance, the Developer Kit variant drew 52W under load – significantly higher than its claimed 30W design profile. This can be tuned in software from 10W – slightly above the 7.5W minimum of the bare module – up through 15W and a selection of 30W variants that enable from two to all eight of the processor cores. A separate shell script locks the processor cores at their top frequencies, and turns on the surprisingly loud fan hidden under the kit’s black housing.
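Those power profiles are exposed through Nvidia’s nvpmodel tool, with the clock-locking handled by the jetson_clocks script. The mode number below matches the L4T release on our review unit; treat it as an assumption on later images, and query the board first:

   sudo nvpmodel -q     # show the currently active power mode
   sudo nvpmodel -m 0   # mode 0 is MAXN on the AGX Xavier
   sudo jetson_clocks   # lock cores and GPU at top clocks, start the fan
                        # (shipped as jetson_clocks.sh on older releases)
   sudo tegrastats      # watch per-rail power draw while benchmarking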

Tuning the power tunes the performance as well: at 10W, the VGG19 neural network running exclusively on the GPU at FP16 precision hits a rate of 42 images per second; at 15W, 104; at 30W, 138; and at the 52W-peaking MAXN setting, 204. If those numbers mean little, the Jetson AGX Xavier is not for you. While it’s certainly possible to use it as a general-purpose workstation – and it performs admirably, thanks to the generous RAM and speedy processor – that’s not Nvidia’s intention.

If, on the other hand, you’re excited by the promise of 20 TOPS of INT8 compute from the GPU with an additional 5 TOPS available from each of the two DLAs (if the promised INT8 support ever arrives) – a combined 30 TOPS, equal to a 24-fold boost over last year’s Jetson TX2 – in a power envelope of more-or-less 30W, then the high asking price of the Jetson AGX Xavier may be easier to justify, to you or your boss.

This little black box packs serious smarts.

The bare module is designed for embedding into autonomous vehicles and other smart devices.

With its PCB exposed to the world, the Xavier has little to hide.
