WHEN WORDS ARE NOT ENOUGH
A genuinely new idea can change the way we understand the world, but how to explain it? As Robyn Arianrhod describes, the birth of calculus is an extraordinary story about the race between Newton and Leibniz, the intersection of big ideas and the creation of a new language that transformed mathematics.
“What’s in a name?” asked Shakespeare’s Juliet Capulet four centuries ago. Nothing, she decided: it is Romeo himself that matters, not his Montague name; after all, a rose will smell as sweet no matter what we call it. And yet, while Juliet may feel certain that Romeo’s true nature is worthy of her love regardless of his name, how do we know the essence of a new and abstract thought if we do not have a word for it? It’s a chicken-and-egg situation that begs a deeper question: what is the relationship between language and our perception of reality?
Scientists strive for the most objective possible interpretation of the physical world, but they have to create and communicate their ideas via language – and language, both verbal and mathematical, is itself a cultural creation. I’m happy enough to agree with Juliet that we probably all see a rose in the same way across language or culture, but the situation is different when it comes to more complex aspects of nature. For instance, “caring for Country”, a term created by Australia’s First Nations peoples, conveys a completely different idea of our relationship with the land we live on from a mainstream view in which the word “land” has lost its history, and has become synonymous with disconnected single ideas such as “geography” or “agriculture”, or “property” or “jobs”. Names do have power, and language not only reflects culture, it has the power to define it.
This is a story about the struggle to find just the right names and symbols to express some strange new scientific and mathematical ideas – and about the controversial consequences of not having a name at all. A simple but dramatic scientific example of what I mean by finding the “right” name concerns the term “global warming”. It has given way to the broader “climate change”, because people were confused by unseasonal cold snaps: they didn’t understand the nature of rising long-term averages and extreme local fluctuations in our changing weather patterns, so “global warming” turned out to be a valid but counterproductive name for an overwhelming existential threat.
We wouldn’t be able to make models and predictions to help us address this dire situation, though, without mathematics – and in particular, calculus.
Calculus is everywhere these days. It’s there not only in models of climate change and the spread of COVID-19, but also in the development of medical drugs, vaccines, and various other therapeutics and diagnostic techniques. It underpins our electromagnetic devices, and the cosmic discoveries enabled by radio and gravitational wave astronomy. Even in something as routine as crossing a bridge safely, navigating with GPS, charging an electric toothbrush, checking the weather forecast, and countless other everyday activities, there’s calculus embedded somewhere. So it might seem surprising that calculus itself was a troubling and controversial idea in its early days – before it had a suitable name, and before people understood its essence.
In fact, it took centuries to refine calculus into the powerful language it is today – and it is still evolving, as we’ll see. First, though, let’s go back to the aftermath of the dispute over who discovered modern algorithmic calculus first – Isaac Newton or Gottfried Leibniz. For the record, history has given both men the discoverers’ gong. Newton seems to have got there first: in 1669 he circulated among his friends a revolutionary paper outlining his method, although it wasn’t published until 1711. But Leibniz’s work is considered to be independent and he was the first to publish on the subject, which he did in 1684. By the 1730s, Newton and Leibniz were no more, but their disciples were working hard on their behalf, for there was still much to sort out – and not just about calculus, but the theory of gravity, too.
It was an acrimonious process at times. Scientists may strive for objectivity, but they are still human, and unfortunately scientific opinions in those early revolutionary days of “modern” science were sometimes tinged with nationalism. You tended to be for the Englishman or the German – or the older Frenchman René Descartes. But even more divisive than these nationalistic spats was the problem of names, and their relationship with reality.
The trouble had begun with the wrangle over gravity. Newton and his followers accepted mathematical definitions of physical concepts such as “force” – which today is a classic application of calculus – so it didn’t matter to them what gravity actually was. What mattered was the predictive and explanatory power of the equations describing the observed effects of “gravitational force” on the motion of planets and other falling objects.
A theory of gravity unmoored from any concrete mechanism for how gravity worked made no sense to Leibniz and many others. They preferred the older Cartesian hypothesis of planetary motion, in which space was filled with a swirling ethereal substance that physically pushed the planets around the Sun. Matter pushing matter via direct contact seemed self-evident, tangible – and far more satisfying to Leibnizians and Cartesians than Newton’s disembodied mathematical definition of gravitational “force”. But how else could you usefully define such a concept as “force”?
A modern dictionary definition is that force is “strength or energy as an attribute of physical action or movement”. It gives you the general idea, but it’s about as helpful as “matter pushing matter”: science needs more precise definitions. In Principia (published in 1687), where he presented his laws of mechanics and gravity, Newton’s general definition was that the force acting on a body was proportional to the amount of change the force produces in the body’s “quantity of motion” – what we now call its “momentum”, which Newton defined (adapting Descartes) as mass times velocity. And his definition of the force of gravity acting on two interacting bodies was that it is proportional to the product of the masses of the bodies, and inversely proportional to the square of the distance between them.
Having these two definitions meant that when Newton wanted to consider the force of gravity in particular, he had an equation linking a falling body’s change of momentum with its mass and distance. With these definitions plus calculus, he could work out such fine details as how far a body would fall in a given time and what kind of orbit a planet makes.
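As a small illustration in modern notation – a sketch, not Newton’s own working – consider a body falling from rest under the roughly constant gravitational acceleration g near the Earth’s surface. The force on it is mg, and Newton’s definition says this equals the rate of change of its momentum:

m dv/dt = mg, so v = gt
dx/dt = v = gt, so x = ½gt²

Two quick integrations, and out pops the distance fallen in a given time t.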
If Cartesians and Leibnizians found Newton’s verbal but quantitative definitions too abstract when it came to explaining concrete phenomena such as planetary motion, they didn’t seem concerned by the philosophical problems associated with early calculus. Yet the symbols of calculus stood for concepts that no one at the time could define in words at all!
Monumental moments
Differential calculus is essentially about rates of change – as in Newton’s definition of force as the change in momentum, and as in speed, the rate of change of distance with time. All manner of smoothly changing phenomena can be modelled by differential calculus: heating and cooling, growth and decay both biological and atomic, waves of various kinds, ecological systems and much more. But calculus had a rocky start, because it involved new and difficult-to-enunciate ideas.
For instance, to find the speed at any given time, you have to compare the moving object’s position at that time with its position an instant later, and then divide the difference by the instant of time. Simple enough, but what do you mean by an “instant”? Obviously, it’s a very small quantity, but how small?
Newton and Leibniz were both rather vague about definitions and fundamentals; instead, they were primarily concerned with finding algorithmic rules for applying these intuitive “instants” of time and “infinitesimally small” changes in position, momentum, and so on.
They found these rules by cleverly regarding most of these infinitesimal increments as zero, too small to worry about – a little like when you buy an item for $124.99 and don’t worry about the change out of $125. It works well in practice, except that like a cent, an “instant” of time is not actually zero. If you act as though it is zero, you run into theoretical problems when you want to divide by an “instant”: to find the instantaneous speed, for example, you’d be doing the impossible and dividing by zero. Worse, the tiny change in distance is approximately zero, too, so you’d be trying to calculate 0/0.
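A modern example – in language that came well after Newton and Leibniz – makes the difficulty concrete. Suppose the distance travelled grows as x = t². Comparing positions an instant o apart and dividing by that instant gives:

speed ≈ [(t + o)² − t²] / o = (2to + o²) / o = 2t + o

Treat the leftover o as too small to matter and you get the sensible answer 2t; but set o equal to zero from the outset and the calculation collapses into the forbidden 0/0.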
The theory of limits and functions would eventually take care of such problems. In the meantime, Newton and Leibniz wrestled not just with the rules of calculus but also with finding the “right” names and symbols, which would allow mathematicians to use calculus even though they didn’t fully understand its fundamental concepts.
Newton followed earlier British pre-calculus pioneers and called infinitesimal changes “moments”, denoting them by the symbol o (a “not quite zero” represented by the Greek letter omicron). Leibniz called them “differences” or “differentials”, and denoted them dt, dx, and so on. The rates of change themselves would eventually be named “derivatives”, following the work of Joseph Louis Lagrange in the late 1700s. An Italian-French mathematician and astronomer, and the only one of his parents’ 11 children to survive beyond childhood, Lagrange counts among his many achievements putting calculus on a firmer footing.
A century earlier, Newton had called these rates of change “fluxions” – from the idea of “flux” or flow, the kind of continuous change he was trying to convey. He denoted a fluxion, such as the rate of change of a distance x with respect to time, by a dot, as in ẋ. Leibniz, however, was a master of symbolism. He simply wrote his “ratios” – his rates of change of x with respect to time t, for instance, and of y with respect to x – as fractions, with dx written over dt, and dy over dx. (For typographical ease, these are written here as dx/dt and dy/dx.)

Actually, Leibniz mostly wrote dx:dt, where the colon denotes a ratio; the fractional notation was popularised a little later. But dx/dt is not a ratio or fraction at all, in the sense of a number dx divided by a number dt. In fact, d/dt is an operator acting on a function x(t) – to use much later language that neither Leibniz nor Newton fully understood. What Leibniz did realise, though, was that these symbols could be manipulated as if they really were ordinary fractions. For example, you can write v dt = (dx/dt)dt = dx, which shows that an object moving at speed v travels a tiny distance dx in the instant dt. By contrast, in Newton’s notation this would be vo = ẋo = ?, where the question mark suggests it is much harder to see what is going on in this version of the equation. And yet, because dx/dt is not an ordinary ratio, cancelling the dt terms in (dx/dt)dt = dx is a trick, not a mathematical operation.
It’s a trick that miraculously works, though. So, although Newton ultimately came closest to the modern conception of calculus, in trying for mathematical rigour he created a rather forbidding and confusing symbolism. Leibniz, on the other hand, was so excited at the algorithmic success of calculus, made crystal clear by his notation, that he was not shy in promoting it even though he knew its foundation was shaky.
Not surprisingly, Leibniz’s symbols eventually won the day, although it was a slow path to acceptance. But Leibniz had a brilliant and feisty fighter at the battlefront: the redoubtable Johann Bernoulli.
Credit where credit’s due
The dispute over calculus – who invented it first, and who had the better notation – followed on the coattails of the gravitational furore. But it wasn’t just a nationalistic echo of the debate over gravity; it was also about the best way of putting calculus into the theory of gravity.
Newton’s first published calculus algorithm appeared in Principia, but he used little overt calculus in his treatment of motion and gravity. He apparently thought it was too difficult and controversial – gravity itself was controversial enough. So he expressed most of his mathematics in traditional geometrical style.
Geometry was a British specialty right through the 18th century, although Newton’s decision has long bewildered students of Principia – and none more so than those who first began to translate Newton’s masterpiece into the (algebraic) language of algorithmic calculus.
And here’s the irony: they were translating Newton into the Leibnizian language of calculus.
Johann Bernoulli had begun the process – and it was largely through him that Leibniz’s dy:dx became dy/dx; he also helped popularise Leibniz’s symbol ∫ for integration, which is the inverse operation of differentiation. But Bernoulli was also an inspired teacher.
Perhaps his most gifted protégé was Leonhard Euler, the most prolific mathematician in history – he published over 500 books and papers. Not even blindness stopped him: he lost the sight in his right eye in 1735, when he was only 28, and became totally blind at 59, although he continued researching until he died suddenly at 76.
Back in the 1730s, Euler had shown what he was made of when he began to put calculus into the study of motion.
It seems such a simple concept, “motion”, but if you stop to think about it, it’s not so easy to come up with a useful definition – especially one that allows you to predict how a body will move under various forces. In his 1695 Essay On Dynamics, Leibniz had spent many wordy pages trying to figure it out. He assiduously avoided mentioning Newton’s work (and Newton later returned the favour by removing his acknowledgement of Leibniz’s calculus from the third edition of Principia) – yet Newton had made a great step forward when he defined “force” in terms of the change in a body’s “momentum”. Later, physicists recognised Newton’s achievement and named the unit of force the “newton”. But it was Euler who first put this definition – Newton’s second law of motion – into the modern differential form we learn today at school.
This seemingly minor change opened the way for using calculus algorithms to solve almost any problem involving everyday motion that you could imagine.
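In today’s notation – a sketch of the modern form, not Euler’s own symbols – the second law reads:

F = d(mv)/dt = m dv/dt = m d²x/dt² (for a body of constant mass)

Write down the forces acting on a body, set them equal to m d²x/dt², and the algorithms of calculus turn the equation into a prediction of the body’s motion.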
For instance, in the 1730s and 1740s there was yet another heated, partisan physics debate going on, and Euler was one of the first to resolve it – mathematically, at least. The debate had begun with Leibniz’s controversial claim that actually there were two kinds of “force”. The first he called “inert” or “dead” force, which he associated with the tendency or potential for motion, and with the formula mv for the “quantity of motion” (the momentum). He associated the second with the force acting when a body is in full flight, so he called it “active” or “living force”, defined mathematically as mv².
While the Cartesians had known that momentum is conserved in many situations, Leibniz believed that “living force” is always conserved. (It isn’t.) So he and his followers claimed that mv² was the “true” measure of “force”, while his opponents – Cartesians, and Newtonians who hadn’t understood all of Principia – argued that mv² was superfluous and mv was the “true” measure. Much ado about nothing, we might say today with smug hindsight – but this debate shows how hard it was to understand the nature of motion, and the forces that produce it.
Eventually, with the help of Leibniz’s calculus and Newton’s definition of force, Euler cut through the confusion and symbolically showed that both mv and mv² – whatever physical concepts they may denote – are embedded in Newton’s second law. Integrate this law with respect to time and you get mv; integrate it with respect to distance and you get mv². So both these quantities were “true” measures of motion. It’s quite extraordinary: this debate had raged for years, and thousands of words had swirled around Europe as the leading lights of the day argued the case. But with the “right” definition of force, expressed in the “right” mathematical language and notation, you can resolve the debate in three lines of elementary calculus.
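Here, roughly, are those three lines in modern notation, for a body of constant mass starting from rest (a sketch of the argument, not Euler’s own derivation):

F = m dv/dt (Newton’s second law)
∫ F dt = ∫ m (dv/dt) dt = mv (integrating with respect to time)
∫ F dx = ∫ m (dv/dt) v dt = ∫ m v dv = ½mv² (integrating with respect to distance, since dx = v dt)

The factor of ½ in the last line was a later refinement; the point is that both mv and mv² drop straight out of the same law.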
So, it looks like one-nil to calculus as far as the “living force” debate goes – except there’s still a nagging question: what does it actually mean to integrate with respect to distance instead of time? How do the resulting mathematical quantities relate to the real world?
It took more than a century to answer these questions and to find the right verbal language for these new concepts. For a start, neither “dead” nor “living” force is actually a force: calculus showed that they arise from the motion caused by a force – that is, from Newton’s second law – but mv is momentum, as I mentioned, and mv² is twice the kinetic energy. It had taken long enough for physicists to understand the concepts of force and motion, but it took even longer to understand the fundamental nature of energy.
Both Leibniz and Newton intuited the idea, however, when they spoke of such things as the “effort” required to lift a heavy object, and the amount of power the object wields as it falls back again – as in a machine such as a water wheel. Which leads me to a cautionary tale about not having a name at all.
Leibniz’s “living force” was the “wrong” name for the quantity whose formula is mv² – kinetic energy is neither a force nor alive. But the debate might not have become so protracted and partisan if Newton had had a name for it, too. Because contrary to the belief of certain ardent anti-Leibnizian Newtonians, Principia also contained the quantity mv².
What’s more, Newton showed that it arises as the integral of force with respect to distance – just as Euler and others showed later – and he showed that mv² is conserved for motion due to any centripetal force. (As I implied earlier, it is not conserved for some other kinds of force, such as when friction is at play.)
So here’s the cautionary tale: Newton published an earlier, more rigorous and more general proof of the conservation of energy than Leibniz, but he did not draw attention to it – it was buried in a handful of obscure theorems in Principia (propositions 39-41 of Book I). So it is Leibniz who is regarded as the discoverer of this vital law, because he is the one who gave it a name. And that name, however misguided, enabled others to discuss and properly develop the idea.
By contrast, because Newton lacked both a name and the “right” (non-geometrical) language for his calculus application to energy conservation, no one brought this aspect of his work to public attention until 1867.
Calculus’s Next Big Thing
Not that Leibniz’s calculus notation was entirely foolproof. It works brilliantly for “first” derivatives, but not so well when you want to take the derivative of a derivative – say, d/dt(dx/dt) – which, in Leibnizian shorthand form, is d²x/dt². Newton’s symbolism was ẍ, which mathematicians still use when differentiating with respect to time. Newton added more dots when taking third and higher derivatives of x(t), but this quickly becomes unwieldy; in the Leibnizian form, however, you just replace the indices 2 with 3, 4, and so on. It’s beautifully economical notation, but in the early days even Bernoulli had tied himself up in knots with it, cancelling and factorising the d’s as though they really did represent algebraic quantities rather than operations.
Sometimes, though, symbols can take you into a wholly unexpected place. E=mc² is a classic case, but an older example concerns Leibniz’s form for the nth derivative of a function x(t), which is dⁿx/dtⁿ, where n just means that you can choose whatever number of derivatives of x you need. For 300 years, mathematicians have been taking first, second, third and higher derivatives, so that n is a whole number. But what if you made it a fraction?
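It turns out you can – here is a minimal sketch using what is now called the Riemann–Liouville approach, one of several modern definitions and not anything Leibniz himself wrote down. Start from the ordinary power rule for the nth derivative of tᵏ, rewritten with the gamma function Γ so that the factorials still make sense when n is not a whole number:

dⁿ(tᵏ)/dtⁿ = [Γ(k + 1) / Γ(k − n + 1)] t^(k−n)

Set k = 1 and n = ½ and the “half-derivative” of t comes out as 2√(t/π); apply the half-derivative twice and you recover the ordinary derivative, 1.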
Leibniz himself had wondered about this, and Euler had begun working out the mathematical consequences. But it’s only recently that “fractional” calculus has become the Next Big Thing – it’s proving useful in making more precise models of certain complex physical situations. For instance, researchers have recently used it in modelling the circuitry of lithium-ion batteries in electric cars; analysing waves in viscous media; biomedical signal processing (such as data from EEGs); speech modelling; and much more. It can even give a neater solution to the brachistochrone problem! (See “Calculate This”, page 91.)

Which brings me back to the beginning and on to the end of my story. From the outside, mathematics can seem like a rigid way of thinking that arose “fully armed and named”, but the reality is much more interesting. As the story of calculus shows, mathematics is a rich language, with an equally fascinating history – and it is still evolving. Who knows what new surprises it will bring us in the future?