JavaScript is the most expensive resource for web browsers to process on mobile phones. In this feature Addy Osmani covers how to load JavaScript quickly without throwing out the kitchen sink

net magazine – FEATURES

Building interactive websites can involve sending JavaScript to your users. Often, too much of it. Have you been on a web page on your phone that looked like it had loaded, only to tap on a link or try to scroll and nothing happens? We all have. Byte for byte, JavaScript is still the most expensive resource we send to mobile phones because it can delay interactivity in significant ways (bit.ly/interactivity-matters). Today we’ll cover some strategies for delivering JavaScript efficiently to your users on mobile while still giving them a valuable experience.

The web is bloated by too much user ‘experience’

When users access your site you’re probably sending down a lot of files, many of which are scripts. Perhaps you added a quick JavaScript library or plugin but didn’t have a chance to check just how much code it was pulling in? It’s happened to many of us. As much as I love JavaScript, it’s always the most expensive part of your site. I’d like to explain why this can be a major issue.

Many popular sites ship megabytes of JavaScript to their mobile web users. The median webpage today ships a little less – about 350kB of minified and compressed JavaScript (bit.ly/state-of-js). Uncompressed, that bloats up to over 1MB of script a browser needs to process. Experiences that ship down this much JavaScript take more than 14 seconds to load and get interactive on mobile devices (bit.ly/loading-speed).

A large factor in this is how long it takes to download code on a mobile network and then process it on a mobile CPU. Not only can the 350kB of script for the median site above take a while to download; if we look at popular sites, they actually ship down a lot more script than this. We’re hitting this ceiling across both the desktop and mobile web, where sites sometimes ship multiple megabytes of code that a browser then needs to process. The question to ask is: can you afford this much JavaScript (bit.ly/can-you-afford-it)?
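As a rough back-of-envelope illustration of the network half of that cost (the throughput figure below is an assumption, not a measurement), transfer time alone for the median payload adds up quickly on a slow connection:

```javascript
// Estimate transfer time in seconds for a payload of a given size.
// The 400kbps figure is an illustrative slow-3G assumption.
function downloadSeconds(kilobytes, kbps) {
  return (kilobytes * 8) / kbps; // kilobytes -> kilobits, then divide by throughput
}

console.log(downloadSeconds(350, 400)); // ~7 seconds, before any parse/compile cost
```

And that is only the download; parse, compile and execute time on a mid-range mobile CPU comes on top of it.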

Sites today will often send the following in their JavaScript bundles:

● A suite of user-interface components (for example, code for widgets, carousels or drawers)

● A client-side framework or user-interface library

● Polyfills (often for modern browsers that don’t need them)

● Full libraries vs only what they use (for example, Moment.js and locales vs a smaller alternative like date-fns or Luxon)

This code adds up. The more there is, the longer it will take for a page to load.

Loading a modern web page

Loading a web page is like a film strip that has three key moments: is it happening? Is it useful? And is it usable? ‘Is it happening?’ is the moment you’re able to deliver some content to the screen: has the navigation started, has the server started responding? ‘Is it useful?’ is the moment when you’ve painted text or content that enables the user to derive value from the experience and engage with it. And ‘is it usable?’ is the moment when a user can start meaningfully interacting with the experience and have something happen.

I mentioned the term ‘interactive’ earlier but what does that mean? For a page to be interactive, it must be capable of responding quickly to user input. A small JavaScript payload can ensure this happens fast. Whether a user clicks on a link or scrolls through a page, they need to see that something is actually happening in response to their actions. An experience that can’t deliver on this will frustrate your users.

When a browser runs many of the events you’re probably going to need, it’s likely going to run them on the same thread that handles user input. This thread is called the main thread. Too much (main-thread) JavaScript can delay interactivity for visible elements. This can be a challenge for many companies.

So what is a good target for interactivity?

We on the Chrome team feel your baseline should be getting interactive in under five seconds on a slow 3G or 4G connection on a median mobile device (bit.ly/time-to-interactive). You might say: ‘My users are all on fast networks and high-end phones!’ But are they? You may be on ‘fast’ coffee-shop WiFi but effectively only getting 2G or 3G speeds. Variability matters.

Who has managed to ship less JavaScript and reduce their time-to-interactive?

Pinterest reduced its JavaScript bundles from 2.5MB to under 200kB and reduced time-to-interactive from 23 seconds to 5.6 seconds (bit.ly/pinterest-pwa). Revenue went up 44 per cent, sign-ups are up 753 per cent and weekly active users on mobile web are up 103 per cent (bit.ly/pinterest-retrospective). AutoTrader reduced its JavaScript bundle sizes by 56 per cent and reduced time-to-interactive by ~50 per cent (bit.ly/autotrader-javascript).

Let’s design for a more resilient mobile web that doesn’t rely as heavily on large JavaScript payloads. Interactivity can be impacted by a lot of things: a person loading your site on a mobile data plan or coffee-shop WiFi, or just being on the go with intermittent connectivity.

Why is JavaScript so expensive?

A request is sent to a server, which then returns some HTML. The browser parses that markup and discovers the necessary code (CSS and JavaScript) and resources (images, fonts and so on) that compose it. Once that discovery is complete, the browser has to download and process these files.

If we want to be fast at JavaScript, we have to download it and process it quickly. That means we have to be fast at the network transmission and at the parsing, compiling and execution of our scripts. If you spend a long time parsing and compiling script in a JavaScript engine, that delays how soon a user can interact with your experience.

Keep in mind that resources on the web have different costs. A 200kB script has a different set of costs to a 200kB JPEG. They might take the same amount of time to download but when it comes to processing, the costs aren’t the same.

A JPEG image needs to be decoded, rasterised and painted on the screen. This can usually be done quickly. A JavaScript bundle needs to be downloaded and then parsed, compiled and executed. This can take longer than you might think on mobile hardware.

Different types of mobile

Mobile is a spectrum composed of low-end, median and high-end devices. If we’re fortunate, we may have a high-end phone but the reality is that not all users will have those devices.

They may be on a low-end or median phone, and the disparity between these classes of devices can be stark: thermal throttling and differences in cache sizes, CPU and GPU mean you can end up experiencing very different processing times for resources like JavaScript, depending on the device you’re using. Your users on low-end phones may even be in the US (bit.ly/android-go-usa).

Some users won’t be on a fast network or have the latest and greatest phone, so it’s vital that we start testing on real phones and networks. Fast devices and networks can actually sometimes be slow; variability can end up reducing the speed of absolutely everything. Test on a real phone or at least with mobile emulation. Developing with a slow baseline ensures everyone – on both fast and slow setups – benefits.

Checking your analytics to understand what devices your users are accessing your site with is a useful exercise. WebPageTest has a number of Moto G4 phones preconfigured under the Mobile profiles (webpagetest.org/easy). This is valuable in case you’re unable to purchase your own set of median-class hardware for testing.

It’s really important to know your audience. Not every site needs to perform well on 2G on a low-end phone. That said, aiming for a high level of performance across the entire spectrum ensures that every potential user accessing your site has a chance to load it up fast.

How to send less JavaScript

Code splitting helps you break up your JavaScript so you only load the code a user needs upfront and lazy-load the rest (bit.ly/js-code-splitting). This helps avoid shipping a monolithic “main.js” file to your users containing JavaScript for the whole site rather than just what the page needs.

The best approach to introduce code splitting into your site is using the dynamic import() syntax (bit.ly/js-dynamic-import). What follows is an example of using JavaScript Modules to statically ‘import’ some math code (bit.ly/javascript-modules-syntax). Because we’re not loading this code dynamically (lazily) when it’s needed, it will end up in our default JavaScript bundle.

import { add } from './math';
console.log(add(30, 15));

After switching to dynamic import(), we can lazily pull in the math utilities when they are needed. This could be when the user is about to use a component requiring them, or is navigating to a new route that relies on this functionality. Below we import “math” after a button click.

const btn = document.getElementById('load');
btn.addEventListener('click', () => {
  import('./math').then(math => {
    console.log(math.add(30, 15));
  });
});

When a JavaScript module bundler like Webpack (https://webpack.js.org/) sees this import() syntax, it starts code splitting your app. This means dynamic code can get pushed out into a separate file that is only loaded when it is needed.
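To make that concrete, here is a minimal webpack configuration sketch. The filenames and options are illustrative assumptions rather than a recommended setup; check the webpack documentation for your version:

```javascript
// webpack.config.js – a minimal code-splitting setup (illustrative)
module.exports = {
  entry: './src/index.js',
  output: {
    filename: '[name].[contenthash].js',           // entry bundle, hashed for caching
    chunkFilename: '[name].[contenthash].chunk.js' // lazily loaded split chunks
  },
  optimization: {
    // also pull shared vendor code out of the entry bundle
    splitChunks: { chunks: 'all' }
  }
};
```

With something like this in place, each dynamic import() call produces its own chunk that the browser only fetches on demand.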

Code splitting can be done at the page, route or component level. Tools like Create React App, Next.js, Preact CLI, Gatsby and others support it out of the box. Guides to accomplish this are available for React (bit.ly/react-code-splitting), Vue.js (bit.ly/vue-code-splitting) and Angular (bit.ly/angular-code-splitting).

If you’re using React, I’m happy to recommend React Loadable (https://github.com/jamiebuilds/react-loadable), a higher-order component for loading components efficiently. It wraps dynamic imports in a nice API for introducing code splitting into an app at a given component.

Here is an example statically importing a gallery component in React:

import GalleryComponent from './GalleryComponent';

const MyComponent = () => (
  <GalleryComponent/>
);

With React Loadable, we can dynamically import the gallery component as follows:

import Loadable from 'react-loadable';

const LoadableGalleryComponent = Loadable({
  loader: () => import('./GalleryComponent'),
  loading: () => <div>Loading...</div>,
});

const MyComponent = () => (
  <LoadableGalleryComponent/>
);

Many large teams have seen big wins off the back of code splitting recently. In an effort to rewrite their mobile web experiences to make sure users were able to interact with their sites as soon as possible, both Twitter (bit.ly/twitter-lite-case-study) and Tinder (bit.ly/tinder-pwa-study) saw up to a 50 per cent improvement in time-to-interactive when they adopted aggressive code splitting. Stacks like Next.js [React] (https://github.com/zeit/next.js/), Preact CLI (https://github.com/developit/preact-cli) and PWA Starter Kit (https://github.com/Polymer/pwa-starter-kit) try to enforce good defaults for loading quickly and getting interactive on average mobile hardware.

Another thing many of these sites have done is adopt auditing as part of their workflow. Thankfully, the JavaScript ecosystem has a number of great tools to help with bundle analysis. Tools like Webpack Bundle Analyzer (bit.ly/webpack-bundle-analyzer), Source Map Explorer (bit.ly/source-map-explorer) and Bundle Buddy (github.com/samccone/bundle-buddy) enable you to audit your bundles for opportunities to trim them down.
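As one sketch of wiring such an audit into a build, Webpack Bundle Analyzer can be added as a webpack plugin (option names may differ across plugin versions, so treat this as an assumption to verify against the plugin’s README):

```javascript
// webpack.config.js – emitting a bundle report (sketch)
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  plugins: [
    // writes an interactive treemap of each bundle's contents to an HTML report
    new BundleAnalyzerPlugin({ analyzerMode: 'static' })
  ]
};
```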

Measure, optimise, monitor and repeat

If you’re unsure whether you have any issues with JavaScript performance, check out Lighthouse (bit.ly/lighthouse-tools). Lighthouse is a tool baked into the Chrome Developer Tools and is also available as a Chrome extension (bit.ly/lighthouse-extension). It gives you an in-depth analysis that highlights opportunities to improve performance.

We’ve recently added support for flagging high JavaScript boot-up time to Lighthouse (bit.ly/lighthouse-js). This audit highlights scripts that might be spending a long time parsing and compiling, which delays interactivity. You can treat this audit as a list of opportunities to either split up those scripts or simply do less work.

Another thing you can do is make sure you’re not shipping unused code down to your users: Code Coverage (bit.ly/code-coverage) is a feature in Chrome DevTools (bit.ly/chrome-dt) that alerts you to unused JavaScript (and CSS) in your pages. Load up a page in DevTools and the Coverage tab will display how much code was executed versus how much was loaded. You can improve the performance of your pages by only shipping the code that a user needs.

This can be valuable for identifying opportunities to split up scripts and defer the loading of non-critical ones until they’re needed. Thankfully, there is a way to keep all of this in check over time: having a performance budget (bit.ly/tim-perf-budgets) in place.

Devise a performance budget

Performance budgets are critical because they keep everybody on the same page. They create a culture of shared enthusiasm for constantly improving the user experience, as well as team accountability. Budgets define measurable constraints so a team can meet its performance goals. Because you have to live within the constraints of a budget, performance is a consideration at each step, as opposed to an afterthought. Per Tim Kadlec, metrics for performance budgets can include:

● Milestone timings – timings based on the user experience of loading a page (eg time-to-interactive)

● Quantity-based metrics – based on raw values (eg weight of JavaScript, number of HTTP requests). These are focused on the browser experience

● Rule-based metrics – scores generated by tools such as Lighthouse or WebPageTest. Often a single number or series to grade your site
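A budget only helps if it is enforced. As a minimal sketch (the file names and thresholds here are illustrative assumptions, not recommendations), a quantity-based budget can be checked with a few lines of Node run in CI:

```javascript
// A minimal quantity-based budget check for CI.
// File names and thresholds below are illustrative assumptions.
const budgets = {
  'main.js': 170 * 1024,   // ~170kB of compressed JavaScript
  'vendor.js': 100 * 1024
};

// Returns the entries whose measured size exceeds their budget
function checkBudgets(sizes, budgets) {
  return Object.entries(budgets)
    .filter(([file, budget]) => (sizes[file] || 0) > budget)
    .map(([file, budget]) => ({ file, size: sizes[file], budget }));
}

const over = checkBudgets({ 'main.js': 210 * 1024, 'vendor.js': 90 * 1024 }, budgets);
console.log(over.length ? 'Over budget: ' + over.map(e => e.file).join(', ') : 'Within budget');
// in CI you would fail the build (process.exitCode = 1) when anything is over
```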

Performance is more often a cultural challenge than a technical one. Discuss performance during planning sessions. Ask business stakeholders what their performance expectations are. Do they understand how performance can impact the business metrics they care about? Ask engineering teams how they plan to address performance bottlenecks. While the answers here can be unsatisfactory, they get the conversation started.

What about tooling for performance budgets? You can set up Lighthouse scoring budgets in continuous integration with the Lighthouse CI project (bit.ly/lighthouse-ci). A number of performance monitoring services support setting performance budgets and budget alerts, including Calibre (https://calibreapp.com/), Treo (https://treo.sh) and SpeedCurve (https://speedcurve.com/about/).

Get fast, stay fast

Many small changes can lead to big gains. Enable users to interact with your site with the least amount of friction. Run the smallest amount of JavaScript needed to deliver real value. This can mean taking incremental steps to get there but, in the end, your users will thank you.

Top: JavaScript processing times for CNN.com as measured by WebPageTest. A high-end phone (iPhone 8) processes script in ~4s. Compare that to the ~13s an average phone (Moto G4) takes or the ~36s taken by a low-end 2018 phone (Alcatel 1X).

Above: Statistics from the HTTP Archive State of JavaScript report, July 2018, highlight that the median webpage ships ~350kB of minified and compressed script. These pages take up to 15s to get interactive.

Above: This chart from OpenSignal shows how consistently 4G networks are available globally and the average connection speed users in each country experience. As we can see, many countries still experience lower connection speeds than we might think. It’s also worth noting that rural broadband speeds, even in the US, can be 20% slower than in urban areas.

Left: We’re shifting to caring increasingly about user-centric happiness metrics. Rather than just looking at onload or domContentLoaded, we’re now asking when a user can use a page. If they tap on a piece of user interface, does it respond right away?

Top: Splitting large, monolithic JavaScript bundles can be done on a page, route or component basis. You’re even better set up for success if ‘splitting’ is the default for your toolchain from the get-go.

Above: JavaScript performance budgets for my site teejungle.net using SpeedCurve, which supports a range of budget metrics
