Rovelli's is a thought-provoking and quite fun article to read (i happen to like Rovelli's writing quite a bit). The main idea is to get rid of a singled-out time variable in the Hamiltonian formulation of general relativistic mechanics and, by extension, quantum mechanics. It is argued that our usual time parameter, as it is used in Newtonian and quantum mechanics, as well as in special relativity, is not well-defined in a general relativistic context. Therefore, it must be replaced by a notion of coordinated events that form a configuration space. Physical systems follow special orbits in the configuration space, often parametrizable by a finite set of state variables (think for instance of the amplitude and phase of a pendulum), so that we can pair events and describe the evolution of one in terms of another. These special orbits are obtained from a variational principle, derived from a Hamiltonian function. When the latter has a separable time variable we're in a classical, non-relativistic regime. But this is not usually the case. It is then shown how our everyday notion of time can be given a statistical interpretation, derived in terms of the Gibbs theorem and the postulate of a Gibbs distribution for equilibrium states.
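For concreteness, here is a minimal sketch of the statistical ingredient as i understand it (my own gloss on the standard thermal-time story, not a formula taken from the article):

```latex
% Equilibrium is postulated as a Gibbs state, with H the
% Hamiltonian function on the space of states:
\rho = \frac{1}{Z}\, e^{-\beta H}, \qquad Z = \operatorname{Tr} e^{-\beta H}.
% The everyday "time" flow is then *defined* as the flow
% generated by H, i.e. the one with respect to which the
% equilibrium state \rho is stationary:
\frac{dA}{d\tau} = \{A, H\} \quad \text{(classically)}, \qquad
A(\tau) = e^{iH\tau} A\, e^{-iH\tau} \quad \text{(quantum mechanically)}.
```

The point being that time is read off the statistical state, rather than being a primitive ingredient of the theory.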

While i don't feel really qualified to properly criticise Rovelli's approach, i must say that it sounds reasonable and quite beautiful. Julian Barbour's The nature of time also seeks to get rid of time as a fundamental concept by defining it as a (quite different) derived quantity, although i don't find his arguments as compelling; the same happened to me with his book The end of time. And of course there are other physicists with some serious arguments in the opposite camp: Sean Carroll's essay What if time does really exist? in the same contest, and Lee Smolin's survey article The present moment in quantum cosmology: Challenges to the arguments for the elimination of time are some of the readings that could help you make up your mind (or, if you're like me, increase your incertitude!).

Or you can also watch all the talks in the seminar held at the Perimeter Institute last year, The Clock and the Quantum. Although i haven't had time to do much more than skim over two or three videos (for instance, Barbour's and Roger Penrose's), it looks like a pretty interesting set for those of you wondering what this queer thing we call time is.

As a student, i was in love with Kaluza-Klein theory and its extremely
elegant explanation of electromagnetism as the purely geometrical
effect of a fourth spatial dimension. The really magic thing is that
the electromagnetic energy momentum tensor (in four dimensions) arises
as a consequence of an *empty* five-dimensional space where particles
follow geodesics; in other words, photons are purely geometry, just like
gravitational forces. The problem, of course, was to explain why we
don't measure that fifth dimension. Kaluza just prescribed that no
physical quantity depended on it, while Klein tried a somewhat more
satisfactory solution by compactifying it to an unobservable size, and
making it periodic, just as the second dimension of a long hose, which
becomes one-dimensional when seen from a distance. Unfortunately, this
beautiful picture seemed to lead to insurmountable difficulties with
chirality or the mass of the electron, unless one goes the string way
and adds more compact dimensions to our universe. I thought
Kaluza-Klein theories were all but abandoned in their original
5-dimensional form these days, but following some links in the recent
review article by Orfeu Bertolami, The Adventures of Spacetime, proved
me utterly wrong. There's been quite a lot of activity in the area
during the last decade, leading even to a Space-Time-Matter
consortium, a sort of physicists' club promoting 5-dimensional gravity
theories without compactification. The consortium is coordinated by
P.S. Wesson, and has quite a few members and interesting publications:
see for instance this comprehensive review of KK theories of gravity
for an introduction to Wesson and friends' ideas. What i find
compelling about their approach (and what, at the same time, of course
reveals my prejudices) is that they tackle multidimensional physics
from the point of view of general relativity, rather than particle
physics. However, i guess that a word of caution is in order: i've
read very little about these (to me) novel approaches to KK theories,
and i'm not yet ready to endorse them; if they were right (and i
definitely wish they were), they'd be quite revolutionary: for
instance, they explain quantum indeterminacy as a result of particles
travelling in higher dimensions… that'd be extremely cool (and
actually make real one of my silly ideas of old), but perhaps too cool
to be true? Well, i'll leave it for you to decide (as for me, i think
i'm going to read Wesson's book, Five Dimensional Physics, just in
case student dreams can really come true!).
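To make the "electromagnetism from geometry" magic concrete, here is the textbook Kaluza-Klein ansatz (a standard sketch, not taken from any of the papers linked above; signs and normalisations vary by convention):

```latex
% The 5D metric is split so that its extra components carry the
% electromagnetic potential A_\mu and a scalar \phi
% (x^5 is the extra coordinate):
ds^2_{(5)} = g_{\mu\nu}\, dx^\mu dx^\nu
           + \phi^2 \left( dx^5 + \kappa A_\mu\, dx^\mu \right)^2 .
% With Kaluza's "cylinder condition" (nothing depends on x^5) and
% \phi = 1, the *empty*-space 5D equations R_{AB} = 0 reduce to
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}
  = \tfrac{\kappa^2}{2}\, T^{\mathrm{EM}}_{\mu\nu},
\qquad \nabla^\mu F_{\mu\nu} = 0 ,
% i.e. 4D Einstein-Maxwell theory: the electromagnetic
% energy-momentum tensor arises from pure 5D geometry.
```

Klein's compactification then makes x^5 periodic with an unobservably small radius, which is what hides the fifth dimension from view.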

Returning to Bertolami's paper, let me mention that it is part of a
forthcoming book entitled Relativity and the Dimensionality of the
World, the good news being that the above link points to freely
available versions of many of its chapters, written by various
authors, including Wesson and G.F.R. Ellis. The latter writes about
his rather original ideas on time in General Relativity, and the Block
Universe idea, familiar to all relativists, of a world represented as
a frozen 4-dimensional whole. Ellis observes that such a
representation clearly suggests that time is an illusion: the entire
universe just *is*. The problem is that such a view seems incompatible
with irreversible, macroscopic phenomena, as well as with the
fundamental indeterminism inherent to quantum mechanics. To take into
account these facts of life, Ellis proposes an *Evolving Block
Universe*: time passes; the past is fixed and immutable, and hence has
a completely different status than the future, which is still
undetermined and open to influence; the kinds of `existence' they
represent are quite different: the future only exists as a
potentiality rather than an actuality. The point being that our
regular, predictable universe models are based on too simplistic
assumptions and oversimplified systems, and that taking into account
realistic, emergent ones renders the future under-determined. Although
very interesting from a philosophical point of view, Ellis's ideas need
much fleshing out before becoming a solid theory of anything. But
still, he makes many a fine point, and raises quite a lot of good
questions worth thinking about.

Finally, Bertolami's paper drew my attention to a facet of Heinrich
Hertz's work i was totally unaware of, namely, his contributions to
the interpretation of classical mechanics. After gaining a place in
the history of physics with his experimental confirmation of the
existence of electromagnetic waves, and before his tragic death when
he was only 37, Hertz wrote a book, The Principles of Mechanics
Presented in a New Form, where he proposed a formulation of Newtonian
physics freed of forces, using instead a variational
principle. According to Hertz's principle, particles move along paths
of least curvature, where the (three dimensional) metric is defined by
constraints instead of forces. Similar principles were proposed by
Gauss and d'Alembert before Hertz, but the latter was notable (if
only ephemerally) for pushing to the forefront a view of space-time
defined by matter in a purely relational, Leibnizian fashion: Hertz
tries to derive his *system of the world* from material particles
alone. Unfortunately, i've found little information on-line on Hertz's
ideas, which seem to be better known to philosophers due to their
influence on Wittgenstein (who directly mentions Hertz in his
Tractatus). For those of you with a philosophical soft spot, this
paper presents a re-interpretation of some of Wittgenstein's ideas
under a Hertzian perspective. As a physicist, i find Hertz's ideas
interesting mostly as a historical curiosity, and don't know how
relevant they really are to modern epistemology: comments welcome! ;)

No figures will be found in this work. The methods I set forth require neither constructions nor geometrical or mechanical arguments, but only algebraic operations, subject to a regular and uniform procedure.

I'm stealing the quote above from a talk entitled
*Proofs and Pictures*^{3}, which started me re-thinking about diagrams
in physics (and maths) in the first place. It was given at the
Perimeter Institute by James Brown, a professor of Philosophy of
Science at the University of Toronto. In this fun talk, professor
Brown explores the use of geometrical reasoning in maths and physics
as a means of actually *proving* results. Some simple but instructive
(and, to me, somewhat surprising and definitely amusing) examples of
such "proving by diagrams" are given in the figure on the left (click
to enlarge), which shows how getting general formulas for arithmetic
and geometric sums may be as easy as counting squares. I'm giving away
just two of them, so that you can try your hand with the other two and
have a little fun (you can also try to invent your own, maybe going to
3- or even n-dimensional cubes, in which case, please, don't forget to
post your discoveries below! :)). Although elementary, these *proofs*
are intriguing: would you accept them as such? Brown argues that we
should, since they can be used to show the validity of the induction step
in the usual algebraic proofs. I'm not sure i buy the argument, but
it's a very interesting one.
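The figure itself isn't reproduced here, but the kind of identities those square-counting pictures establish can at least be spot-checked numerically. A minimal sketch (the two specific formulas are my guess at what the diagrams show, since the post doesn't spell them out):

```python
# Two classic "proof by picture" identities: the arithmetic sum
# (an n x (n+1) rectangle split into two staircases) and the sum
# of odd numbers (nested L-shaped borders tiling an n x n square).

def arithmetic_sum(n):
    """1 + 2 + ... + n, counted term by term."""
    return sum(range(1, n + 1))

def odd_sum(n):
    """1 + 3 + 5 + ... + (2n - 1), counted term by term."""
    return sum(2 * k - 1 for k in range(1, n + 1))

for n in range(1, 100):
    # What the rectangle picture claims:
    assert arithmetic_sum(n) == n * (n + 1) // 2
    # What the square picture claims:
    assert odd_sum(n) == n * n
```

Of course, checking finitely many cases is exactly what the diagrams are supposed to transcend: the picture, unlike the loop, is claimed to establish the general case at a glance.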

Turning our attention to physics, probably the most famous diagrams in
the field are Feynman's. As i'm sure you know, they offer a convenient
notation for manipulating terms in QED's perturbative
expansions. Taken at face value, or, one might say, analytically, they
represent just algebraic combinations of functions (propagators)
entering a power series expansion in a small parameter (the
interaction coupling constant, alpha). But they're usually *interpreted*
as providing the actual physical mechanism for the interaction of real
particles by means of exchanges of *virtual*, unobservable
photons. Albeit intuitive and appealing, this interpretation has
always bothered me. After reading about it in popular science books, i
expected QED to be somehow based on photon exchanges from
scratch. Instead, what one has is a principle of least action which
leads to differential equations unsolvable in exact analytical form.
Then, when calculating an approximate solution to a scattering problem
using a power series, one obtains (the analytical equivalent) of
Feynman diagrams and interprets them, so to speak, after the fact. I
would somehow feel more comfortable if the process were the other way
around: start with the (supposedly) physical underlying process (the
photon exchange) and derive the scattering amplitude. Each Feynman
diagram would then represent an actually possible scenario, in the
same sense that an electron choosing one slit in the two-slit
experiment is possible: one can break the superposition and observe
the electron in its way through the slit. But this is of course
impossible: virtual photons are unobservable, if only because they
travel faster than light and violate energy conservation. To add to my
uneasiness, a plain Feynman series leads to divergences to be cured,
non-diagrammatically, by renormalisation. Yet, everyone since Feynman
discusses this spooky photon ping-pong as the right interpretation^{4},
so probably i'm just showing off my lack of understanding! And,
besides, one could arguably point to measurable vacuum polarisation
effects like Casimir's as an experimental proof of the existence of
virtual particles (see for instance this recent, accessible account at
PR Focus). Or one could even see the situation as a derivation of the
interaction's underlying mechanism from first principles, a stunning
testament to their power^{5}. At any rate, and especially if one
accepts the mainstream interpretation, Feynman diagrams appear as a
good example of how diagrammatic tools can be more than just a
picture, and not only in mathematics. For more on Feynman diagrams and
pointers to further reading, see their Wikipedia entry, or get
Diagrammar, a CERN report by 't Hooft and Veltman with all the gory
details and a deliciously retro (as in written in 1973 using a
typewriter) flavour.
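Schematically (my own one-line summary of the standard textbook story, nothing specific to the references above), the analytic object behind the pictures is just a power series:

```latex
% A QED scattering amplitude is computed as a power series in the
% coupling e (equivalently, in \alpha = e^2 / 4\pi \approx 1/137):
\mathcal{M} = \sum_{n} e^{\,n}\, \mathcal{M}_n ,
% where each coefficient \mathcal{M}_n is a sum of integrals over
% products of propagators; each integral in that sum is labelled
% (and its structure encoded) by one Feynman diagram with n vertices.
% The "virtual photons" are the internal propagator lines of these
% integrals: they are read off the diagrams after the fact, which
% is precisely the interpretative step discussed in the text.
```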

Before leaving the subject of Feynman diagrams, let me mention two
bits of diagrammatic folklore stolen from Peter Woit's latest
book. Naturally enough, recurring diagrams have got pet names over the
years. The first one seems to have been the *tadpole* (for a diagram
shaped, well, like a tadpole), coined by Sidney Coleman and resignedly
accepted by the Physical Review editors after he proposed lollipop and
*spermion* as alternatives. The second anecdote involves a diagram
(depicted above) known as *penguin* since Melissa Franklin won a dart
match over John Ellis: Tommaso Dorigo has recently recounted the story
in his blog.

Roger Penrose's thought is thoroughly geometrical, and it comes as no
surprise that he has made many a contribution to the *physics by
drawing* camp. Every decent course on General Relativity touches on
conformal diagrams^{6}, a nifty method envisioned by Penrose and Brandon
Carter (back in the sixties) to bring infinity back into your drawing
board. The trick consists in scaling your metric by a global function
that vanishes quickly enough when your original coordinates go to
infinity. Such a scaling is known as a conformal transformation, and has
the virtue of preserving angles; in particular, null geodesics are
mapped into null geodesics and, therefore, the causal structure
(represented by null cones) is untouched. While beautiful and handy, i
think that conformal diagrams do not add anything really new from a
computational standpoint (as Feynman diagrams do), let alone serving as
the basis for actual proofs.
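In symbols (a standard-textbook sketch of the construction just described):

```latex
% Conformal rescaling of the metric by a positive function \Omega
% chosen to fall off fast enough to pull infinity in to finite
% coordinate distance:
\tilde{g}_{ab} = \Omega^2\, g_{ab}, \qquad \Omega > 0 .
% Null directions are untouched, since for any vector k^a
\tilde{g}_{ab}\, k^a k^b = \Omega^2\, g_{ab}\, k^a k^b ,
% which vanishes exactly when g_{ab} k^a k^b does. Light cones,
% and hence the causal structure drawn in the diagram, survive
% the rescaling intact.
```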

More interesting for our current musings is Penrose's graphical tensor
notation. Tensor indexes (especially in their abstract flavour, also
introduced by Penrose) are a quite convenient housekeeping device,
ensuring almost automatically the consistency of your equations and
even (once one has a bit of practice with them) suggesting their form^{7}.
But, convenient as they are, indexes seem to be confusing for
geometrical minds like Penrose's, who some fifty years ago devised a
pictorial representation for tensor equations^{8}.

As you can see in the figure, the idea is simple: choose a closed
polygon to represent the kernel letter of each tensor, and add an
upwards leg for each contravariant index, and a downwards one for each
covariant index. Index contraction is represented by joining the
respective legs. A wiggly horizontal line represents symmetrisation; a
straight one anti-symmetrisation. One can cross legs to indicate index
shuffling. The metric gets no kernel figure (it's just an arch), so
that contractions of indexes in the same tensor are easily depicted,
and raising and lowering indexes amounts to twisting the requisite leg up
or down. To indicate covariant differentiation, circle the tensor
being differentiated and add the corresponding downwards (covariant)
leg. And so on and so forth. Note also that commutative and
associative laws of tensor multiplication allow you to use any
two-dimensional arrangement of symbols that suits you, which aids in
compactifying expressions. Penrose explains the many details and
twists of the notation in *The Road to Reality* and in his (and
Rindler's) Spinors and Space-time I, where you'll find extensions to
deal graphically also with spinors and twistors. According to the
latter,

The notation has been found very useful in practice as it greatly simplifies the appearance of complicated tensor or spinor equations, the various interrelations expressed being discernible at a glance. Unfortunately the notation seems to be of value mainly for private calculations because it cannot be printed in the normal way.

Besides the (not so obvious nowadays) difficulty mentioned above, i
guess that the main hurdle in adopting Penrose's notation is habit.
After many years using indexes, my algebraic mind seldom finds
equations confusing because of their indexes. But after a little
practice it becomes easier, and i'd say that people who *see* equations
will find it quite natural after a very little while^{9}. I don't know
how popular Penrose graphics are among physicists for private use, but
there's many an example of their application and extension to related
fields. A few years after its introduction, the notation was
rediscovered by Predrag Cvitanovic, who used a variation of it in an
article on group theory and Feynman diagrams. More concretely,
Cvitanovic uses diagrams similar to Penrose's to represent the
structure constants of simple groups in the context of
non-abelian gauge theories, interestingly linking them with Feynman
diagrams (and closing a loop in this article!). Later on, he would use
the notation very extensively in his on-line book on Group Theory,
where the diagrams go by the name of *bird-tracks*. In a nutshell, the
book is devoted to answering, in Cvitanovic's words, a *simple* question:

"On planet Z, mesons consist of quarks and antiquarks, but baryons contain 3 quarks in a symmetric color combination. What is the color group?" If you find the particle physics jargon distracting, here is another way of posing the same question: "Classical Lie groups preserve bilinear vector norms. What Lie groups preserve trilinear, quadrilinear, and higher order invariants?"

From here, an amazing journey through the theory of Lie groups and
algebras ensues, a journey conducted almost exclusively by diagrams.
For, notably, Cvitanovic uses his bird-tracks (as mentioned, a very
evolved kind of Feynman diagrams) to actually *derive* his results. We
have here physics (and maths) by diagrams for real, actually replacing
algebraic reasoning (and, incidentally, a proof that Penrose's
reservations about his notation not being apt for publication are
unfounded nowadays; i wonder how Cvitanovic draws his diagrams).

Before leaving the subject, let me mention a couple more works inspired by Penrose's diagrammatic notation. Yves Lafont has greatly extended it and carefully analysed its application to mathematical problems in the context of category theory and term rewriting systems. If you're privy to the field, or simply curious, take a look at his articles Algebra and Geometry of Rewriting (PS) and Equational Reasoning With 2-Dimensional Diagrams, where Yves explores two-dimensional diagrams a la Penrose with an eye to (possibly automatic and computer-aided) derivations much in the spirit of Cvitanovic. And, turning back to physics, if there's a theory prone to diagrammatic reasoning it must be Loop Quantum Gravity, where the basic constituents are graphs and their transformations. Arguably, LQG is the most fundamental example discussed so far of graphical reasoning applied to physics, for here graphs (and their combinations in spin foams, an evolution of another Penrose invention, spin networks) do stand for themselves, as opposed to representing some underlying algebraic mathematical entity. Wandering into the marvels of LQG would carry us too far afield, so i'll just point out that Rovelli, Smolin and friends use not only Penrose's spin networks but, on occasion, also the graphical tensor notation we've been reviewing; see for instance their seminal paper Spin Networks and Quantum Gravity, where Rovelli and Smolin presented their famous derivation of exact solutions to the Wheeler-DeWitt equation. The notable thing is, again, the fact that graphic notation is key in many a derivation, and cannot be seen as just an aid to represent some calculations.

Our final example of physics by diagrams comes from the category theory-inspired view of Quantum Mechanics invented by Samson Abramsky, who has managed to do "quantum mechanics using only pictures of lines, squares, triangles and diamonds". This beautiful notation (or picture language, as its authors call it) is nicely explained in Bob Coecke's Kindergarten Quantum Mechanics, a very pedagogical set of lecture notes where it is applied to the problem of quantum teleportation. Bob's thesis is that teleportation was not discovered until the 90's (despite its being a relatively straightforward result in QM) due to the inadequacy of the low-level mathematical language commonly used to describe Hilbert spaces. Had lines, squares, triangles and diamonds been used from the beginning, teleportation would have followed almost immediately. Or so thinks Bob: go take a look at his article and see what your take is. In any case, its more than sixty full-color diagrams, used instead of boring algebraic formulae, make for fun reading (or, should i say, viewing). By the way, don't let the mention of category theory put you off: only very basic ideas (explained in the lecture notes) are needed, if at all, and the author's enthusiasm actually goes as far as making the bold claim that this new graphical formalism could be taught in kindergarten! Maybe that's going a bit far, since i, for one, find the notation hard to follow, undoubtedly due to my old-school, algebraic upbringing. Just to give you an idea of what this preschool notation looks like, and to close this long post as it deserves (i.e., with a diagram), here is how the teleportation protocol (including a correctness proof) looks:

My copy (Spanish translation) of the fifth edition of L&L's book has 500 pages and just 22 figures!

The link above points to Volume 11 of the collection at Oeuvres de Lagrange, a site that contains what seems to be the complete Lagrange corpus, conveniently scanned and downloadable too.

I would give you a direct link, did it exist. Unfortunately, PI's website is not up to the quality of their other activities. You'll find it by browsing to their Public Lectures Series and from there to page 2 (or search for James Brown). Another very unfortunate circumstance is that the videos are only available for those of you not using weird (as in freedom) operating systems :-(.

That's at least my impression. Penrose, for instance, advocates
for their reality in his *road*. The subject is however controversial
enough to warrant the existence of monographs like the recent Drawing
theories apart, by David Kaiser (which i cannot comment on since i've
just added it to my wish list).

But i find this argument hard to swallow. Think for instance of the interpretation of antiparticles as particles travelling backwards in time: it also follows naturally (for some definition of natural) from perturbative series and/or their diagrams, but it is not as easily accepted as the existence of virtual photons. One wonders: where's the limit?

If you don't have your favourite textbook at hand (Hawking and Ellis being mine when it comes to anything related to causal structure), you can find a pretty good introduction on-line in this chapter of Sean Carroll's lecture notes.

There are only so many ways of combining indexes, and if you know the free ones on, say, your LHS and the tensors entering the RHS and their general properties (e.g. symmetries), it's often easy to work out how their indexes should be combined. It reminds me, in a way, of dimensional reasoning, where knowing the target units and the ingredients gives an often quite accurate clue of how to combine them.

It was introduced in a chapter of the book Combinatorial
Mathematics and its Applications (Academic Press, London, 1971),
entitled *Application of Negative Dimensional Tensors*. But Penrose had
been using it (according to this letter to Cvitanovic (PDF))
since 1952.

An interesting (and not too far-fetched) software project would be to write a Penrose diagram editor, possibly with support for tablet input devices. Such a tool would also probably solve the publication issue. In an ideal world, one would use a stylus to draw equations which would get automatically imported as nice diagrams, regular tensor equations with indexes, or both. Any takers? ;-)

A case in point is his recent essay The Case for Background Independence, where the meaning, virtues and drawbacks of relationist theories of quantum gravity are explored in detail. More concretely, Smolin describes the close relationship between three key issues in fundamental physics, to wit:

- Must a quantum theory of gravity be background independent, or can there be a sensible and successful background dependent approach?
- How are the parameters of the standard models of physics and cosmology to be determined?
- Can a cosmological theory be formulated in the same language we use for descriptions of subsystems of the universe, or does the extension of physics from local to cosmological require new principles or a new formulation of quantum theory?

The article begins with a brief historical review of relationism, as
understood by Leibniz and summarized in his principles of sufficient
reason (there's always a rational cause for Nature's choices) and the
identity of the indiscernible (entities with exactly the same
properties are to be considered the same)^{1}. These principles rule
out absolute space-times (like Newton's) or a fixed Minkowskian
background (like perturbative string theory), since they single out a
preferred structure 'without reason', as do theories positing any number
of free parameters (think of the much debated *landscape*)^{2}. As is
well known, Newton won the day back in the seventeenth century, until
Mach's sharp criticism marked the resurgence of relationist
ideas. Mach rejected Newtonian absolute space-time, favouring a purely
relational definition of inertia^{3}, which ultimately would inspire
Einstein in his quest for the general theory of relativity^{4}.

Smolin's article continues with a careful definition, in modern terms, of relational space and time, and follows with a discussion of some current theories featuring background independence: general relativity, causal sets, loop quantum gravity, causal dynamical triangulation models and background independent approaches (by Smolin himself) to M-theory. In a nutshell, it is argued that any self-respecting relational theory should comply with three principles:

- There is no background.
- The fundamental properties of the elementary entities consist entirely in relationships between those elementary entities.
- The relationships are not fixed, but evolve according to law. Time is nothing but changes in the relationships, and consists of nothing but their ordering.

None of the theories above passes this litmus test of pure
relationism without problems. Take for instance general relativity. To
begin with, the dimension, topology and differential structure of
space-time are givens, and thus play the role of a background. And, on the other
hand, only when we apply GR to a compact universe without boundary can
we aspire to a relational view, since otherwise we would have
arbitrary boundary conditions (partially) determining the structure of
space-time. Once you abide by these preconditions, a proper
interpretation of general covariance (in which you identify
space-times related by arbitrary coordinate transformations) provides
a relational description of space-time (for an in-depth discussion of
the subtle interplay between gauge invariance and relationism, see also
this excellent article by Lusanna and Pauri, and references
therein). As a second example, loop quantum gravity is also background
dependent: the background being, in this case, the topological space
containing the spin networks of the theory. Other than that, loops are an almost
paradigmatic case of a relational description in terms of graphs, with
nodes being the *entities* and edges representing their relationships.

After his review of quantum gravity theories, Smolin takes issue with string theory. His subsequent train of thought heavily relies on the fact that relationism, or, more concretely, Leibniz's principle of the indiscernible, rules out space-times with global symmetries. For if we cannot distinguish this universe from one moved 10 feet to the left, we must identify the two situations, i.e., deny any meaning or reality to the underlying, symmetric structure. But, as it happens, the M-theory programme consists, broadly speaking, in maximizing the symmetry groups of the theories embodied in the desired unified description. More concretely, in background-dependent theories, the properties of elemental entities are described in terms of representations of symmetries of the background's vacuum state. Each of the five string theories embodied by M-theory (should it exist!) has its own vacuum, related with the others via duality transformations (basically, by compactifying spatial dimensions one way or another, one is able to jump from one string theory to the next). Thus, M-theory should be background independent (i.e., encompass different backgrounds), but, on the other hand, one expects that the unique unified theory will have the largest possible symmetry group consistent with the basic principles of physics, such as quantum theory and relativity. Smolin discusses some possible solutions to this contradiction (on which i lack, er, the background to comment intelligently), including some sort of (as yet unknown) dynamical mechanism for spontaneous symmetry breaking (which would result in a Leibniz-compliant explanation for the actual properties, such as masses and coupling constants, that we find in our universe).

After all the fuss, there is disappointingly little to be said about
relationist unified theories^{5}. Invoking again the principle of the
indiscernible, Smolin rules out symmetries that would make (unified)
entities indistinguishable (if two entities have the same
relationships with the rest, they are the same entity). By the same
token, a universe in thermal equilibrium is out of the question.
Reassuringly, our universe is not, and the negative specific heat of
gravitationally bound systems precludes its evolution to such a state.
The case is then made (after casting evolution theory as a relationist
one, which is OK by me) for Smolin's peculiar idea of cosmological
*natural selection*. In my view, it is an overly speculative idea, if
only for the fact that it depends on black holes giving rise to new
universes when they collapse^{6}. If that were the case, and provided
that each new universe is created with random values for the free
parameters of our theories, one would expect that a process similar to
natural selection would lead to universes with their parameters tuned to
favour a higher and higher number of black holes (which seems to be the
case in our universe). Nice as the idea is, i think we're a little far
from real physics here.

The article closes with a short section on the cosmological constant
problem (with the interesting observation that only causal set theory
has predicted so far a realistic value) and relational approaches to
(cosmological) quantum theory. Again, the author adheres to
non-orthodox ideas. This time, to recent proposals (see here and here)
of hidden-variable theories, albeit ones with far better grounds than
the reproducing universes idea. The possibility of a relational
hidden-variable theory is argued for with a simple and somewhat
compelling line of thought. In classical physics, the phase space of a
system of N particles is described by 6N variables, while a quantum
mechanical state vector would depend on 3N variables. On the other
hand, in a purely relational theory one would need to use N^{2}
variables, as that is the number of possible relations between N
particles. These would be the hidden variables completely (and
non-locally) describing our particles, which would obey merely
statistical laws when described using just 3N parameters.
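A trivial numerical illustration of that counting argument (the variable counts are the ones quoted above; the particular values of N are of course arbitrary):

```python
# Compare the number of variables needed to describe N particles
# under the three descriptions mentioned in the text.

def variable_counts(n):
    """Return (classical phase space, quantum state, relational) counts."""
    classical = 6 * n    # 3 positions + 3 momenta per particle
    quantum = 3 * n      # configuration-space arguments of the state vector
    relational = n * n   # one variable per (ordered) pair of particles
    return classical, quantum, relational

for n in (10, 100, 1000):
    c, q, r = variable_counts(n)
    # For any n > 6 the relational count dwarfs the other two,
    # leaving room for "hidden" relational variables behind the
    # coarser 3N quantum description:
    assert r > c > q
```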

An amazing journey, by all accounts.

See here for excellent (and free) editions of all relevant Leibniz works, including his Monadology, and here for commented excerpts of the Leibniz-Clarke correspondence.

See also here for an interesting take on Leibniz's principle under the light of Gödel's and Turing's incompleteness theorems as further developed by Gregory Chaitin.

Julian Barbour's "The Discovery of Dynamics: A Study from a Machian Point of View of the Discovery and the Structure of Dynamical Theories" is the definitive reference to know more about the history of the absolute/relative divide. (Another amazing book by Barbour on these issues is "The End of Time : The Next Revolution in Physics", thoroughly reviewed by Soshichi Uchii here. Smolin himself has many an interesting thing to say about Barbour's timeless Platonia.)

Barbour argues in his book that Einstein seems to have
misunderstood Mach's discussions on the concept of inertia, taking it
for the dynamical quantity entering Newton's second law instead of the
inertial motion *caused* by space-time according to Newton's *first* law.

I'm also a bit surprised by Smolin's uncritical acceptance of reductionism, which he simply considers, "to a certain degree", as common-sense.

Tellingly, the only reference where this *theory* is developed is
Smolin's popular science book "The Life of the Cosmos".

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

*Max Planck (1858–1947)*

If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations–then so much the worse for Maxwell's equations. If it is found to be contradicted by observation–well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can offer you no hope; there is nothing for it but to collapse in deepest humiliation.

Arthur Eddington (1882–1944)

I believe that every true theorist is a kind of tamed metaphysicist, no matter how pure a "positivist" he may fancy himself. The metaphysicist believes that the logically simple is also the real. The tamed metaphysicist believes that not all that is logically simple is embodied in experienced reality, but that the totality of all sensory experience can be "comprehended" on the basis of a conceptual system built on premises of great simplicity. The skeptic will say that this is a "miracle creed."

Albert Einstein (1879–1955)

Hertz's students were impressed, and wondered what use might be made of this marvelous phenomenon. But Hertz thought his discoveries were no more practical than Maxwell's. "It's of no use whatsoever," he replied. "This is just an experiment that proves Maestro Maxwell was right - we just have these mysterious electromagnetic waves that we cannot see with the naked eye. But they are there." "So, what next?" asked one of his students. Hertz shrugged. He was a modest man, of no pretensions and, apparently, little ambition. "Nothing, I guess."

Heinrich Hertz (1857–1894)