
WHEN WORDS ARE NOT ENOUGH


- ROBYN ARIANRHOD is an affiliate in the School of Mathematics at Monash University. Her last article for the magazine, on Ptolemy, appeared in Issue 88.

A genuinely new idea can change the way we understand the world, but how to explain it? As Robyn Arianrhod describes, the birth of calculus is an extraordinary story about the race between Newton and Leibniz, the intersection of big ideas and the creation of a new language that transformed mathematics.

“What’s in a name?” asked Shakespeare’s Juliet Capulet four centuries ago. Nothing, she decided: it is Romeo himself that matters, not his Montague name; after all, a rose will smell as sweet no matter what we call it. And yet, while Juliet may feel certain that Romeo’s true nature is worthy of her love regardless of his name, how do we know the essence of a new and abstract thought if we do not have a word for it? It’s a chicken-and-egg situation that raises a deeper question: what is the relationship between language and our perception of reality?

Scientists strive for the most objective possible interpretation of the physical world, but they have to create and communicate their ideas via language – and language, both verbal and mathematical, is itself a cultural creation. I’m happy enough to agree with Juliet that we probably all see a rose in the same way across language or culture, but the situation is different when it comes to more complex aspects of nature. For instance, “caring for Country”, a term created by Australia’s First Nations peoples, conveys a completely different idea of our relationship with the land we live on from a mainstream view in which the word “land” has lost its history, and has become synonymous with disconnected single ideas such as “geography” or “agriculture”, or “property” or “jobs”. Names do have power, and language not only reflects culture, it has the power to define it.

This is a story about the struggle to find just the right names and symbols to express some strange new scientific and mathematical ideas – and about the controversial consequences of not having a name at all. A simple but dramatic scientific example of what I mean by finding the “right” name concerns the term “global warming”. It has given way to the broader “climate change”, because people were confused by unseasonal cold snaps: they didn’t understand the nature of rising long-term averages and extreme local fluctuations in our changing weather patterns, so “global warming” turned out to be a valid but counterproductive name for an overwhelming existential threat.

We wouldn’t be able to make models and predictions to help us address this dire situation, though, without mathematics – and in particular, calculus.

Calculus is everywhere these days. It’s there not only in models of climate change and the spread of COVID-19, but also in the development of medical drugs, vaccines, and various other therapeutics and diagnostic techniques. It underpins our electromagnetic devices, and the cosmic discoveries enabled by radio and gravitational wave astronomy. Even in something as routine as crossing a bridge safely, navigating with GPS, charging an electric toothbrush, checking the weather forecast, and countless other everyday activities, there’s calculus embedded somewhere. So it might seem surprising that calculus itself was a troubling and controversial idea in its early days – before it had a suitable name, and before people understood its essence.

In fact, it took centuries to refine calculus into the powerful language it is today – and it is still evolving, as we’ll see. First, though, let’s go back to the aftermath of the dispute over who discovered modern algorithmic calculus first – Isaac Newton or Gottfried Leibniz. For the record, history has given both men the discoverers’ gong. Newton seems to have got there first: in 1669 he circulated among his friends a revolutionary paper outlining his method, although it wasn’t published until 1711. But Leibniz’s work is considered to be independent and he was the first to publish on the subject, which he did in 1684. By the 1730s, Newton and Leibniz were no more, but their disciples were working hard on their behalf, for there was still much to sort out – and not just about calculus, but the theory of gravity, too.

It was an acrimonious process at times. Scientists may strive for objectivity, but they are still human, and unfortunately scientific opinions in those early revolutionary days of “modern” science were sometimes tinged with nationalism. You tended to be for the Englishman or the German – or the older Frenchman René Descartes. But even more divisive than these nationalistic spats was the problem of names, and their relationship with reality.

The trouble had begun with the wrangle over gravity. Newton and his followers accepted mathematical definitions of physical concepts such as “force” – which today is a classic application of calculus – so it didn’t matter to them what gravity actually was. What mattered was the predictive and explanatory power of the equations describing the observed effects of “gravitational force” on the motion of planets and other falling objects.

A theory of gravity unmoored from any concrete mechanism for how gravity worked made no sense to Leibniz and many others. They preferred the older Cartesian hypothesis of planetary motion, in which space was filled with a swirling ethereal substance that physically pushed the planets around the Sun. Matter pushing matter via direct contact seemed self-evident, tangible – and far more satisfying to Leibnizians and Cartesians than Newton’s disembodied mathematical definition of gravitational “force”. But how else could you usefully define such a concept as “force”?

A modern dictionary definition is that force is “strength or energy as an attribute of physical action or movement”. It gives you the general idea, but it’s about as helpful as “matter pushing matter”: science needs more precise definitions. In Principia (published in 1687), where he presented his laws of mechanics and gravity, Newton’s general definition was that the force acting on a body was proportional to the amount of change the force produces in the body’s “quantity of motion” – what we now call its “momentum”, which Newton defined (adapting Descartes) as mass times velocity. And his definition of the force of gravity acting on two interacting bodies was that it is proportional to the product of the masses of the bodies, and inversely proportional to the square of the distance between them.
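
In modern symbols (mine, not Newton’s own – he stated these proportionalities in words, and the gravitational constant G was only measured much later), the two definitions read:

```latex
% Newton's second law: force is proportional to the rate of change
% of momentum (with the constant of proportionality set to 1):
\[
  F \;=\; \frac{d(mv)}{dt} .
\]
% Newton's law of gravitation for two bodies of mass m_1 and m_2
% a distance r apart (G is the much later gravitational constant):
\[
  F \;=\; \frac{G\,m_1 m_2}{r^2} .
\]
```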

Having these two definitions meant that when Newton wanted to consider the force of gravity in particular, he had an equation linking a falling body’s change of momentum with its mass and distance. With these definitions plus calculus, he could work out such fine details as how far a body would fall in a given time and what kind of orbit a planet makes.
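
To see what such “fine details” look like in practice, here is a minimal numerical sketch – my own illustration, not Newton’s method, assuming SI units and no air resistance. It steps the second law forward through tiny “instants” dt for a body falling near Earth’s surface, and compares the result with the exact answer, gt²/2, that calculus gives:

```python
# A body falling from rest: step Newton's second law forward through
# tiny "instants" dt, then compare with the exact calculus result.

g = 9.81           # gravitational acceleration near Earth (m/s^2)
dt = 1e-5          # one small "instant" of time (s)
t_end = 3.0        # total fall time (s)

x, v, t = 0.0, 0.0, 0.0
while t < t_end:
    v += g * dt    # change in velocity: dv = (F/m) dt = g dt
    x += v * dt    # change in position: dx = v dt
    t += dt

print(f"numerical fall distance: {x:.3f} m")
print(f"exact g*t^2/2:           {0.5 * g * t_end**2:.3f} m")
```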

If Cartesians and Leibnizians found Newton’s verbal but quantitative definitions too abstract when it came to explaining concrete phenomena such as planetary motion, they didn’t seem concerned by the philosophical problems associated with early calculus. Yet the symbols of calculus stood for concepts that no one at the time could define in words at all!

Monumental moments

Differential calculus is essentially about rates of change – as in Newton’s definition of force as the change in momentum, and as in speed, the rate of change of distance with time. All manner of smoothly changing phenomena can be modelled by differential calculus: heating and cooling, growth and decay (both biological and atomic), waves of various kinds, ecological systems and much more. But calculus had a rocky start, because it involved new and difficult-to-enunciate ideas.

For instance, to find the speed at any given time, you have to compare the moving object’s position at that time with its position an instant later, and then divide the difference by the instant of time. Simple enough, but what do you mean by an “instant”? Obviously, it’s a very small quantity, but how small?

Newton and Leibniz were both rather vague about definitions and fundamentals; instead, they were primarily concerned with finding algorithmic rules for applying these intuitive “instants” of time and “infinitesimally small” changes in position, momentum, and so on.

They found these rules by cleverly regarding most of these infinitesimal increments as zero, too small to worry about – a little like when you buy an item for $124.99 and don’t worry about the change out of $125. It works well in practice, except that, like a cent, an “instant” of time is not actually zero. If you act as though it is zero, you run into theoretical problems when you want to divide by an “instant”: to find the instantaneous speed, for example, you’d be doing the impossible and dividing by zero. Worse, the tiny change in distance is approximately zero, too, so you’d be trying to calculate 0/0.
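
A small worked example (my illustration, using the later language of limits) shows both the trick and the trouble. Take a body whose distance is x = t², and compute its speed over an “instant” o:

```latex
% Average speed over the "instant" o, for x(t) = t^2:
\[
  \frac{(t+o)^2 - t^2}{o} \;=\; \frac{2to + o^2}{o} \;=\; 2t + o .
\]
% Discard the leftover o as "too small to worry about" and the speed
% is 2t. But set o = 0 from the start and you face the meaningless
% 0/0; the modern limit o -> 0 is what makes the discarding rigorous.
```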

The theory of limits and functions would eventually take care of such problems. In the meantime, Newton and Leibniz wrestled not just with the rules of calculus but also with finding the “right” names and symbols, which would allow mathematicians to use calculus even though they didn’t fully understand its fundamental concepts.

Newton followed earlier British pre-calculus pioneers and called infinitesimal changes “moments”, denoting them by the symbol o (a “not quite zero” represented by the Greek letter omicron). Leibniz called them “differences” or “differentials”, and denoted them dt, dx, and so on. The rates of change themselves would eventually be named “derivatives”, following the work of Joseph Louis Lagrange in the late 1700s. Lagrange, an Italian-French mathematician and astronomer and the only one of his parents’ 11 children to survive beyond childhood, counted putting calculus on a firmer footing among his many achievements.

A century earlier, Newton had called these rates of change “fluxions” – from the idea of “flux” or flow, the kind of continuous change he was trying to convey. He denoted a fluxion, such as the rate of change of a distance x with respect to time, by a dot, as in ẋ. Leibniz, however, was a master of symbolism. He simply wrote his “ratios” – his rates of change of x with respect to time t, for instance, and of y with respect to x – as vertical fractions, written here for typographical ease as dx/dt, dy/dx, and so on.

Although Newton came closest to the modern conception of calculus, he created a rather forbidding and confusing symbolism.

Actually, Leibniz mostly wrote dx:dt, where the colon denotes a ratio; the fractional notation was popularised a little later. But dx/dt is not a ratio or fraction at all, in the sense of a number dx divided by a number dt. In fact, d/dt is an operator acting on a function x(t) – to use much later language that neither Leibniz nor Newton fully understood. What Leibniz did realise, though, was that these symbols could be manipulated as if they really were ordinary fractions.

For example, you can write v dt = (dx/dt) dt = dx, which shows that an object moving at speed v travels a tiny distance dx in the instant dt. By contrast, in Newton’s notation this would be vo = ẋo = ?, where the question mark suggests it is much harder to see what is going on in this version of the equation. And yet, because dx/dt is not an ordinary ratio, cancelling the dt terms in (dx/dt) dt = dx is a trick, not a mathematical operation.

It’s a trick that miraculously works, though. So, although Newton ultimately came closest to the modern conception of calculus, in trying for mathematical rigour he created a rather forbidding and confusing symbolism. Leibniz, on the other hand, was so excited at the algorithmic success of calculus, made crystal clear by his notation, that he was not shy in promoting it even though he knew its foundation was shaky.
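
A classic place where the fraction trick pays off – my example, using the method of “separation of variables” that grew out of Leibniz’s school – is growth and decay, where dx and dt are shuffled around exactly as if dx/dt were an ordinary fraction:

```latex
% Exponential growth or decay: the rate of change of x is
% proportional to x itself.
\[
  \frac{dx}{dt} = kx
  \quad\Longrightarrow\quad
  \frac{dx}{x} = k\,dt
  \quad\Longrightarrow\quad
  \ln x = kt + C
  \quad\Longrightarrow\quad
  x = A\,e^{kt} .
\]
% Splitting dx from dt is the Leibnizian "trick"; the later theory
% of limits explains why it gives correct answers.
```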

Not surprisingly, Leibniz’s symbols eventually won the day, although it was a slow path to acceptance. But Leibniz had a brilliant and feisty fighter at the battlefront: the redoubtable Johann Bernoulli.

Credit where credit’s due

The dispute over calculus – who invented it first, and who had the better notation – followed on the coattails of the gravitational furore. But it wasn’t just a nationalistic echo of the debate over gravity; it was also about the best way of putting calculus into the theory of gravity.

Newton’s first published calculus algorithm appeared in Principia, but he used little overt calculus in his treatment of motion and gravity. He apparently thought it was too difficult and controversial – gravity itself was controversial enough. So he expressed most of his mathematics in traditional geometrical style.

Geometry was a British specialty right through the 18th century, although Newton’s decision has long bewildered students of Principia – and none more so than those who first began to translate Newton’s masterpiece into the (algebraic) language of algorithmic calculus.

And here’s the irony: they were translating Newton into the Leibnizian language of calculus.

Johann Bernoulli had begun the process – and it was largely through him that Leibniz’s dy:dx became the fraction form dy/dx; he also helped popularise Leibniz’s symbol ∫ for integration, which is the inverse operation of differentiation. But Bernoulli was also an inspired teacher.

Perhaps his most gifted protégé was Leonhard Euler, the most prolific mathematician in history – he published over 500 books and papers. Not even blindness stopped him: he lost sight first in his right eye in 1735, when he was only 28, and became totally blind at 59, although he continued researching until he died suddenly at 76.

Back in the 1730s, Euler had shown what he was made of when he began to put calculus into the study of motion.

It seems such a simple concept, “motion”, but if you stop to think about it, it’s not so easy to come up with a useful definition – especially one that allows you to predict how a body will move under various forces. In his 1695 Essay On Dynamics, Leibniz had spent many wordy pages trying to figure it out. He assiduously avoided mentioning Newton’s work (and Newton later returned the favour by removing his acknowledgement of Leibniz’s calculus from the third edition of Principia) – yet Newton had made a great step forward when he defined “force” in terms of the change in a body’s “momentum”. Later, physicists recognised Newton’s achievement and named the unit of force the “newton”. But it was Euler who first put this definition – Newton’s second law of motion – into the modern differential form we learn today at school.
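
In today’s notation (not Euler’s exact symbols), that modern differential form of the second law is:

```latex
% Newton's second law as a differential equation, for constant mass m:
\[
  F \;=\; m\,\frac{dv}{dt} \;=\; m\,\frac{d^2x}{dt^2} .
\]
```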

This seemingly minor change opened the way for using calculus algorithms to solve almost any problem involving everyday motion that you could imagine.

For instance, in the 1730s and 1740s there was yet another heated, partisan physics debate going on, and Euler was one of the first to resolve it – mathematically, at least. The debate had begun with Leibniz’s controversial claim that actually there were two kinds of “force”. The first he called “inert” or “dead” force, which he associated with the tendency or potential for motion, and with the formula mv for the “quantity of motion” (the momentum). He associated the second with the force acting when a body is in full flight, so he called it “active” or “living” force, defined mathematically as mv².

While the Cartesians had known that momentum is conserved in many situations, Leibniz believed that “living force” is always conserved. (It isn’t.) So he and his followers claimed that mv² was the “true” measure of “force”, while his opponents – Cartesians, and Newtonians who hadn’t understood all of Principia – argued that mv² was superfluous and mv was the “true” measure. Much ado about nothing, we might say today with smug hindsight – but this debate shows how hard it was to understand the nature of motion, and the forces that produce it.

Eventually, with the help of Leibniz’s calculus and Newton’s definition of force, Euler cut through the confusion and symbolically showed that both mv and mv² – whatever physical concepts they may denote – are embedded in Newton’s second law. Integrate this law with respect to time and you get mv; integrate it with respect to distance and you get mv². So both these quantities were “true” measures of motion. It’s quite extraordinary: this debate had raged for years, and thousands of words had swirled around Europe as the leading lights of the day argued the case. But with the “right” definition of force, expressed in the “right” mathematical language and notation, you can resolve the debate in three lines of elementary calculus.
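
Here, in modern notation, are those three lines (with the modern factor of ½ that the original combatants lacked, and taking the body to start from rest so the integration constants vanish):

```latex
% Start from Newton's second law, F = m dv/dt.
% Integrate with respect to time t:
\[
  \int F\,dt \;=\; \int m\,\frac{dv}{dt}\,dt \;=\; mv
  \qquad \text{(momentum)} .
\]
% Integrate with respect to distance x instead, using dx = v dt:
\[
  \int F\,dx \;=\; \int m\,\frac{dv}{dt}\,v\,dt \;=\; \int m\,v\,dv
  \;=\; \tfrac{1}{2}\,m v^2
  \qquad \text{(kinetic energy)} .
\]
```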

So, it looks like one-nil to calculus as far as the “living force” debate goes – except there’s still a nagging question: what does it actually mean to integrate with respect to distance instead of time? How do the resulting mathematical quantities relate to the real world?

It took more than a century to answer these questions and to find the right verbal language for these new concepts. For a start, neither “dead” nor “living” force is actually a force: calculus showed that they arise from the motion caused by a force – that is, from Newton’s second law – but mv is momentum, as I mentioned, and mv² is twice the kinetic energy. It had taken long enough for physicists to understand the concepts of force and motion, but it took even longer to understand the fundamental nature of energy.

Leibniz had a brilliant and feisty fighter at the battlefront: the redoubtable Johann Bernoulli.

Both Leibniz and Newton intuited the idea, however, when they spoke of such things as the “effort” required to lift a heavy object, and the amount of power the object wields as it falls back again – as in a machine such as a water wheel. Which leads me to a cautionary tale about not having a name at all.

Leibniz’s “living force” was the “wrong” name for the quantity whose formula is mv² – kinetic energy is neither a force nor alive. But the debate may not have become so protracted and partisan if Newton had had a name for it, too. Because contrary to the belief of certain ardent anti-Leibnizian Newtonians, Principia also contained the quantity mv².

What’s more, Newton showed that it arises as the integral of force with respect to distance – just as Euler and others showed later – and he showed that mv² is conserved for motion due to any centripetal force. (As I implied earlier, it is not conserved for some other kinds of force, such as when friction is at play.)

So here’s the cautionary tale: Newton published an earlier, more rigorous and more general proof of the conservation of energy than Leibniz, but he did not draw attention to it – it was buried in a handful of obscure theorems in Principia (propositions 39–41 of Book I). So it is Leibniz who is regarded as the discoverer of this vital law, because he is the one who gave it a name. And that name, however misguided, enabled others to discuss and properly develop the idea.

By contrast, because Newton lacked both a name and the “right” (non-geometrical) language for his calculus application to energy conservation, no one brought this aspect of his work to public attention until 1867.

Calculus’s Next Big Thing

Not that Leibniz’s calculus notation was entirely foolproof. It works brilliantly for “first” derivatives, but not so well when you want to take the derivative of a derivative – say, (d/dt)(dx/dt) – which, in Leibnizian shorthand form, is d²x/dt². Newton’s symbolism was ẍ, which mathematicians still use when differentiating with respect to time. Newton added more dots when taking third and higher derivatives of x(t), but this quickly becomes unwieldy; in the Leibnizian form, however, you just replace the index 2 with 3, 4, and so on. It’s beautifully economical notation, but in the early days even Bernoulli had tied himself up in knots with it, cancelling and factorising the d’s as though they really did represent algebraic quantities rather than operations.
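
Modern computer algebra keeps Leibniz’s economy. In the sketch below – my illustration, using the sympy library and an arbitrary example function x(t) – climbing to higher derivatives is just a matter of changing one index, exactly as Leibniz’s notation suggests:

```python
# Higher derivatives in the Leibnizian spirit, via the sympy library.
import sympy as sp

t = sp.symbols('t')
x = sp.sin(t) * t**2              # an arbitrary example function x(t)

for n in (1, 2, 3, 4):
    # sp.diff(x, t, n) computes d^n x / dt^n: to go higher, you just
    # change n, as in Leibniz's index notation.
    print(f"d^{n}x/dt^{n} =", sp.diff(x, t, n))
```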

Sometimes, though, symbols can take you into a wholly unexpected place. E=mc² is a classic case, but an older example concerns Leibniz’s form for the nth derivative of a function x(t), which is dⁿx/dtⁿ, where n just means that you can choose whatever number of derivatives of x that you need. For 300 years, mathematicians have been taking first, second, third and higher derivatives, so that n is a whole number. But what if you made it a fraction?
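
One standard way to see how a fractional n can make sense (a textbook route, not Leibniz’s own reasoning): for powers of t, the whole-number pattern of derivatives can be rewritten using the Gamma function, which happily accepts fractions:

```latex
% Differentiating a power of t a whole number n of times:
\[
  \frac{d^n}{dt^n}\, t^k \;=\; \frac{k!}{(k-n)!}\, t^{\,k-n}
  \;=\; \frac{\Gamma(k+1)}{\Gamma(k-n+1)}\, t^{\,k-n} .
\]
% The Gamma function is defined for fractional arguments too, so we
% can set n = 1/2 and k = 1 to get a "half-derivative" of t:
\[
  \frac{d^{1/2}}{dt^{1/2}}\, t
  \;=\; \frac{\Gamma(2)}{\Gamma(3/2)}\, t^{1/2}
  \;=\; 2\sqrt{\frac{t}{\pi}} .
\]
```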

Leibniz himself had wondered about this, and Euler had begun working out the mathematical consequences. But it’s only recently that “fractional” calculus has become the Next Big Thing – it’s proving useful in making more precise models of certain complex physical situations. For instance, recent researchers have used it in modelling the circuitry of lithium-ion batteries in electric cars; analysing waves in viscous media; biomedical signal processing (such as data from EEGs); speech modelling; and much more. It can even give a neater solution to the brachistochrone problem! (See “Calculate This”, page 91.)

Which brings me back to the beginning and onto the end of my story. From the outside, mathematics can seem like a rigid way of thinking that arose “fully armed and named”, but the reality is much more interesting. As the story of calculus shows, mathematics is a rich language, with an equally fascinating history – and it is still evolving. Who knows what new surprises it will bring us in the future?

Newton published an earlier, more rigorous proof of the conservation of energy than Leibniz, but didn’t draw attention to it.

Who founded the language of calculus we use today? The debate has been going for centuries now – but it seems that Isaac Newton (pictured) arrived first.
Johann Bernoulli took Leibniz’s symbols and demonstrated their elegant superiority over Newton’s – then taught the “language” to the next wave of influential mathematicians.
Gottfried Leibniz’s symbols used a trick, but it was a trick that worked – and spearheaded the application of modern calculus.
