To all visitors: Kalvos & Damian is now a historical site reflecting nonpop
from 1995-2005. No updates have been made since a special program in 2015.

Chronicle of the NonPop Revolution


Review of Self-Similar Melodies by Tom Johnson

Reviewed by David Feldman


     John Cage insinuated Nature, understood broadly, into music. Unlike, say, late 19th century composers, Cage never merely depicts Nature with his music, but rather unfolds music in many ways from Nature, broadly conceived. Cage's Nature extends far beyond the wild, the savage, the pristine, beyond any mere antithesis to the urban or civilized. Cage's Nature embraces the whole of contingent experience, the world as what happens. The multiple radios of Imaginary Landscapes make audible the complexity of our radio wave environment, including the various human productions transmitted thereby; Child of Tree uses sounds made by amplified plant materials. In Cheap Imitation, Empty Words and elsewhere, Cage makes new works by processing the works of other artists much as he processes star charts to make the notes of Etudes Australes. Cage's famous anechoic chamber anecdote emphasizes the impossibility of experiencing true sonic neutrality because of the natural sounds our living bodies constantly make. Cage's silent piece 4'33" presents contingent experience minimally mitigated.

     Nevertheless, Cage's Nature suffers a profound limitation. By dint of his interest in meditation and contemplation, Cage privileges the actual and so neglects the potential. The laws of mathematics constrain Nature by logically delimiting the possible; confronting those laws brings us to Nature unbridled by the chains of history. Beyond the challenge of exploring the physical space about us lies the difficulty of charting the vast conceptual space that comprises Nature's mathematical objects which proliferate exponentially. Tom Johnson, and some other composers, make music by unfolding Nature in this larger aspect.

     The popular imagination reflexively links Mathematics and Music even as it habitually dissociates mathematics and music. On the one hand, the mythic marriage of Mathematics and Music, made in Platonic heaven, posits, perhaps, their primordial unity in the distant past, or else, their potential convergence in a distant future. On the other hand, popular wisdom reviles as mechanical, inhuman, unexpressive the direct application of mathematical methods employed in the quotidian business of actually composing music, indeed as tantamount to blasphemy against the myth. Popular wisdom permits the myth to manifest itself only by the occasional but telling coincidence of these multiple talents in certain individuals. Of course mathematical materials have entered the works of various composers over the course of this century, notably Schoenberg, Nancarrow, Babbitt and Xenakis, but the public celebrates these composers (to the extent that it does or doesn't) for the rich range of other resonances offered by their music; even devotees rarely claim to hear the mathematical aspect unfold in real time.

     Let us leave the myth aside and examine with an open mind the possible fundamental affinities between mathematics and music. Suppose for a moment that we don't know the meaning of either word, "mathematics" or "music." Suppose we have a knowledgeable informant, someone sensitized to the nuances raised by recent developments. Suppose we ask for definitions. Our informant's first approximations -- mathematics as the science of number and shape, music as the art of sound -- offer us little clue to any resonance between these two spheres of activity. But pressed, our informant confesses that both definitions fall short, both include too much, both exclude too much. Ultimately our informant might take refuge in circularity, defining mathematics as what mathematicians do, music as what musicians do. The seemingly empty circularity actually has unexpected content, though. Both mathematics and music emerge from historical processes, mathematicians of one generation spinning variations on ideas and concepts of mathematicians of the previous generation, just as the musicians do. Moreover, both music and mathematics have pretensions to the protean, to the universal, to the representation and even the ultimate synthesis, each in its own way, of virtually every aspect of human experience.

     Picture the body of knowledge and practice that constitutes either mathematics or music in the form of a neuron, a cell with a central nucleus graced with long dendritic extensions, trailing off into invisibility, that fill, however tenuously, a space reaching far beyond the nucleus. Calling music the art of sound amounts to seeing just the nucleus, the sonic, ignoring the performative, the temporal, the spatial, the notational, to say nothing of specific musical works formed only of light, or of concepts, of music symbols never meant for realization in sound, or just of silence. Every discipline evolves, of course, but the abstract essence of both mathematics and music especially allows their evolution to spin off eccentric eddies. Among scientists, only mathematicians have the pleasure of speaking cogently without knowing to what their words refer. Definiteness of media linearizes the evolution of many nonmusical art forms, blunting any serious pretensions towards the universal. (Architecture, say, has a reach similar to music's, but then one often hears architecture cited for its privileged connections to both music and mathematics.)

     As mathematics and music grow towards dendritic permeation of our conceptual universe, inevitably they make synaptic contact, whence energetic sparks commence to pass in both directions. Tom Johnson's investigation of "self-similar melodies" constitutes such a moment. As the synapse metaphor suggests, his book, at its most original, operates at some distance from the nuclei of either mathematics or music, forcing a rethinking of both spheres. I neither suggest here that Johnson employs exotic mathematical techniques nor that he generates musical materials which scorn traditional music values. Even so, the theorem, the commodity central to mathematical life, barely plays a role here. Theorems delineate the interrelations between the properties of mathematical objects, but for Johnson the interest lies directly with the mathematical objects themselves, rendered palpable and lambent by translation into music. Likewise, the gestalt, the center of traditional musical experience, hardly comes into the story. Most listeners immerse themselves in music for its total impact; typically only theorists dissect and disassociate in search of underlying principles. Yet Johnson propounds a music at least as rich for the mind as the ear. Indeed his example melodies manifest musical value even when he hasn't fully prepared them for our ears: mostly they lack instrumentation, dynamics, definite pitches and come only with rhythms Johnson deems arbitrary. While radical composers of the 20th century have long invited listeners to seek pleasure in new sound worlds, Johnson writes of a musical pleasure not essentially sonic in nature. Johnson's melodies engage our cognition; then we step back and take pleasure, even sensual pleasure, from our awareness of our own cogitation.

     For all that, a more central perspective also informs Johnson's inquiry. Though he seldom finds himself grappling directly with theorems and never with proofs, mathematical objects interest him when they do form the subjects and critical cases of interesting theorems, when they possess remarkable properties that he can make audible. Though he expounds a new form of musical pleasure, he doesn't seek to suppress more familiar pleasures. Indeed, he measures his melodies, whatever their mode of manufacture, against traditional metrics of shape and contour, such as he acquired from his composition teachers years ago, or so he says. And I find no contradiction here: interesting art always serves multiple masters.

     Before offering a taste of Johnson's mode of operation, let me first say what the reader doesn't find in his book. Though he suggests avenues for future exploration and hopes other composers will take up where he leaves off, he hasn't written a manifesto, he makes only the gentlest claims for directions he suggests, he condemns nothing. Neither has he written an overview of his own composition career, or even part of it; indeed he only mentions his own pieces in passing, when they have some particular relevance to his point; the examples throughout have the status of outtakes. Nor has he written a music theory book which purports to analyze any existing body of work. Nor a primer on mathematical composition, for he has little to say about spinning fully formed compositions out of his melodies; indeed he stops himself at the first signs of heading in that direction. Finally, Johnson doesn't give us a heavy treatise, but rather a very personal exploration of the material, replete with false starts, interesting asides and a pervasive arch humor that readers who know Johnson's music, say his Shaggy Dog Operas, will certainly recognize. In particular, Johnson assumes no mathematical background from the reader and explains whatever he does simply and completely.

     Johnson decides to concentrate on melody, to the exclusion of counterpoint, harmony and form. He systematizes his own diverse experiments turning certain mathematical objects into sound. He prizes mathematical objects possessing structural redundancy, objects where the whole manifests itself within its parts, within the parts of its parts, ad infinitum. The tendency of musical materials towards hierarchical repetition has a long history, as every student of Schenker's theories surely knows, but Johnson's melodies surrender to hierarchical repetition entirely and exactly. Let us have an example. Consider, as Johnson does, the numerical sequence:

0   1   -1   0   1   2   0   1   -1   0   -2   -1   0   1   -1   0
1   2   0    1   2   3   1   2    0   1   -1    0   1   2    0   1 ...

     Johnson happens to melodicize this sequence by letting the numbers represent diatonic scale degrees, but any other sequence of pitches would work equally well. How did he make this sequence? He started with the very short sequence 0 . Then he replaced every number n in this sequence with the four numbers

n   n+1   n-1   n.

     Doing that once gives

0   1   -1   0 .

     Doing it twice gives

0   1   -1   0   1   2   0   1   -1   0   -2   -1   0   1   -1   0 .

     Repeating this process generates arbitrarily long melodies. In this case each new melody begins with a statement of the previous one. This allows us to meld all these finite melodies into one infinite melody; mathematicians call this step "passing to the limit," the sense of "limit" generalizing that familiar from calculus. We'd never have time, of course, to hear an infinite melody; indeed even the finite melody at the 10th generation takes days to unfold. Johnson can call the resulting infinite melody "self-similar" because we can hear the whole melody in every fourth note. Self-similarity only applies approximately to the approximating melodies of finite length.
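     The substitution rule is easy to try directly. Here is a minimal sketch (mine, not Johnson's notation) that builds three generations and checks the self-similarity:

```python
# A sketch of the substitution rule: replace every number n by the four
# numbers n, n+1, n-1, n, and iterate.
def expand(seq):
    return [m for n in seq for m in (n, n + 1, n - 1, n)]

melody = [0]
for _ in range(3):            # three generations: 4**3 = 64 notes
    melody = expand(melody)

print(melody[:16])
# -> [0, 1, -1, 0, 1, 2, 0, 1, -1, 0, -2, -1, 0, 1, -1, 0]

# Self-similarity: every fourth note of each generation restates the
# previous generation, which is also the melody's own prefix.
assert melody[::4] == melody[:16]
```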

     Composers and listeners have long regarded economy and coherence as central musical values, yet these values conflict: repetition, including prolongation, generally reinforces coherence and comprehensibility even as it compromises economy. Deviations from the normative levels of musical repetition have engendered some of 20th-century music's hottest controversies, from the serialists to the minimalists. Indeed one might argue that the very story of 20th century (European art) music unfolds along just this axis. We shouldn't view the relative prioritization of economy and coherence merely as a matter of taste and fashion, like hemlines. Rather, we should think of economy and coherence as tools, not intrinsic values, and then search for the really basic values these tools can serve. What precious ideas does music communicate that we might lose to careless incoherence? What scarce resources, literal or metaphorical, does music consume that we shouldn't waste by profligacy?

     Johnson practices a music that rewards attention beyond its mere sonic profile: his sounds carry ideas. A music of ideas really does require coherence, which requires redundancy. By turning not to mere repetition, but to structural redundancy, Johnson has a tool that actually serves both economy and coherence. Johnson makes individual notes serve multiple roles. The casual listener, not knowing what to listen for, might miss the multiplicities that make the music meaningful, but a mere modicum of effort quickly cures this. Milton Babbitt's music also features structural redundancy, but most listeners have little chance of retraining their ears to discern it (nor need they in order to value Babbitt's music, which yields certain pleasures even if we listen as if Babbitt made incoherence his goal).

     Returning to the example above, the concept of the infinite has a vital link to the concept of theorem: nontrivial theorems of mathematics usually speak to infinitely many cases and thus do not admit verification by straightforward exhaustion, even in principle. (Occasionally a celebrated theorem speaks merely to an astronomical number of possibilities and thus admits verification by straightforward exhaustion in principle, but not in practice, e.g. the nonexistence of projective planes of order 10.) Only by the use of logic may one prove an infinity of facts with just a finite deduction.

     The words "logic"/"logical" appear many times in Johnson's book, but always in the more vernacular sense of system/systematic, sense/sensible. Mathematicians have a concept, Kolmogorov complexity, to formalize something like Johnson's sense of "logical." We say a long melody (or numerical sequence) has low Kolmogorov complexity if a comparatively short computer program can generate it. That short program would then embody the sequence's "logic," in Johnson's sense, or more precisely, one of its logics, since a given sequence generally has many programs which generate it. Of course the human mind might fail utterly to recover even a very short program from its output. Kolmogorov complexity doesn't take account of cognitive psychology, an important component in Johnson's concept "logical," but one over which he chooses to pass lightly in this book.

     When two different programs happen to generate one and the same infinite sequence, one has a theorem to prove: one needs logic in the mathematician's sense to account for infinitely many coincidences. As a composer myself, I've had a long interest in rendering this sort of logic audible, and I've often found myself working with materials similar to those in this book. Coincidentally, I wrote a piece based on the same sequence

0   1   -1   0   1   2   0   1   -1   0   -2   -1   0   1   -1   0
1   2   0    1   2   3   1   2    0   1   -1    0   1   2    0   1 ...

back in 1980. That piece emphasizes a property of this melody that Johnson doesn't mention: every third note comes from the set {...,-9,-6,-3,0,3,6,9,...}, followed by a note from the set {...,-8,-5,-2,1,4,7,10,...}, followed by a note from the set {...,-10, -7,-4,-1,2,5,8,...} . Thus the melody has a compelling sense when parsed into segments of 3 notes besides its evident sense when parsed into segments of 4 notes. This dual nature constitutes a simple theorem and my piece (Corpus Callosum II) makes the theorem audible by projecting the melody in different ways from two sides of the musical space. The explanation behind the theorem comes from another way of generating the melody: count in binary, and then generate numbers by treating the place values not as 1,2,4,8,16,... , but as 1,-1,1,-1,1... . The theorem follows because 3 divides all the numbers 1-1, 2-(-1), 4-1, 8-(-1), 16-1,... The same idea lies behind the "casting out nines" method children used to learn to check their arithmetic.
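     Readers with a computer handy can try this generation directly. A minimal sketch (mine, not from the book) of the binary counting with alternating place values, together with the mod-3 property:

```python
# Count in binary, but give the bit places the values 1, -1, 1, -1, ...
# instead of 1, 2, 4, 8, ...
def a(k):
    total, sign = 0, 1
    while k:
        total += sign * (k & 1)   # least significant bit first
        sign = -sign
        k >>= 1
    return total

seq = [a(k) for k in range(16)]
print(seq)
# -> [0, 1, -1, 0, 1, 2, 0, 1, -1, 0, -2, -1, 0, 1, -1, 0]

# Because 1, -1, 1, -1, ... agree with 1, 2, 4, 8, ... modulo 3,
# a(k) is congruent to k mod 3 -- the "parsed in threes" property.
assert all(a(k) % 3 == k % 3 for k in range(1000))
```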

     Here's another example of the tension between these two conceptions of "logical." A particular chapter discusses the famous Fibonacci sequence, 1, 1, 2, 3, 5, 8, 13, ..., generated by starting with 1, 1 and obtaining each new number by summing the previous two, ad infinitum. Johnson (re)discovered the fact (immediately proved for him by his mathematical collaborator Jean-Paul Allouche) that every Fibonacci number equals one more than the sum of all the earlier Fibonacci numbers except its immediate predecessor. For example 8 = 1 + (1+1+2+3) . Johnson omits the argument, so I'll explain it. Having just seen that 8 = 1 + (1+1+2+3) , say, the case 13 = 1 + (1+1+2+3+5) forms our next concern. We could check this with a little addition, but that wouldn't explain the general pattern. Instead, note that 13=8+5 , the sum of the two previous Fibonacci numbers, but that means 13=(1+(1+1+2+3))+5 , which gives what we want just by rearranging parentheses. Because we may repeat this process without obstruction, we see the general truth of the theorem. (Here I've hinted at the logical technique called "mathematical induction.")
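     Johnson's identity, and the induction behind it, can be checked mechanically. A minimal sketch (mine, not from the book):

```python
# Each Fibonacci number is one more than the sum of all earlier Fibonacci
# numbers except its immediate predecessor, e.g. 8 = 1 + (1 + 1 + 2 + 3).
fib = [1, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])

for i in range(2, len(fib)):
    assert fib[i] == 1 + sum(fib[:i - 1])   # fib[:i-1] skips fib[i-1] only
```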

     So now we have two programs to generate the Fibonacci sequence, two definitions of the sequence. Johnson takes this new definition as the basis of a melody, itself full of counterpoint:

1  1  2  3  1  1  5  1  1  2  8  1  1  2  3  1  1  5  1  1  2  8  1  2  3  5  8 .

     His method amounts to following each element of the Fibonacci sequence with its new definition and then iterating this. This makes a beautiful music, but we have lost the theorem since the other definition of the Fibonacci numbers has no role here.

     Let me suggest an alternative approach to this phenomenon. Johnson begins that chapter by constructing a (well-known) sequence/melody with just two pitches that he notates as 0 and 1 . He generates the melody by means of a certain automaton, but I'll approach it another way, in segments. I'll leave spaces between the segments so you can see what I'm doing:

0 1 01 101 01101 10101101 0110110101101...

     Here I obtain each segment as the concatenation of the previous two,

0 1 (0)(1) (1)(01) (01)(101) (101)(01101) (01101)(10101101)...

so the lengths of the segments give the Fibonacci sequence by dint of the original definition. Concatenation concretizes addition. I have another way to obtain each segment, illustrated as follows:

0 (1) (0)(1) (1)(0)(1) (01)(1)(0)(1) (101)(01)(1)(0)(1) (01101)(101)(01)(1)(0)(1)...

     Here each segment comes by concatenating all but its immediate predecessor and then an extra (1). Thus the lengths of the segments give the Fibonacci numbers by dint of Johnson's alternative definition. A sufficiently apt musical setting could let you hear both "logics" at once. Observe that the coincidence of these two sequences constitutes a somewhat more elaborate, more concrete theorem than the mere fact about Fibonacci numbers that inspired it. Making a mathematical idea audible often requires doing more mathematics. Having come this far, I can't resist showing you two other sequences that accomplish much the same thing. I can obtain each segment as the concatenation of the previous two, with the second previous segment reversed

0 1 (0)(1) (1)(01) (10)(101) (101)(10101) (10101)(10110101)

or by reading all but the immediate predecessor backwards followed by a (1)

0 (1) (0)(1) (1)(0)(1) (10)(1)(0)(1) (101)(10)(1)(0)(1) (10101)(101)(10)(1)(0)(1)

and I get the same result. Alternatively, I can concatenate the previous two in the other order

0 1 (1)(0) (10)(1) (101)(10) (10110)(101) (10110101)(10110)

or write a (1) followed by everything but the immediate predecessor read from left to right

0 (1) (1)(0) (1)(0)(1) (1)(0)(1)(10) (1)(0)(1)(10)(101) (1)(0)(1)(10)(101)(10110) .

     And one more strange fact: note the similarity of

0 1 (0)(1) (1)(01) (10)(101) (101)(10101) (10101)(10110101)

(again each segment equals the second previous, reversed, followed by the previous) and

0 1 (1)(0) (10)(1) (101)(01) (10101)(101) (10101101)(10101)

(each segment equals the previous followed by the second previous reversed). Two segments out of three coincide, and every third pair differs only at their two central bits! I've gone on at this length to illustrate a feature of Johnson's book: his open-ended approach makes the reader a collaborator; he constantly emphasizes how much remains to explore.
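     For readers who want to experiment, here is a small sketch (my own, not from the book) checking the first two segmentation rules against each other, with segment lengths running through the Fibonacci numbers:

```python
# Two ways to build the Fibonacci-word segments; their agreement encodes
# Johnson's identity relating each Fibonacci number to the sum of the rest.
def by_concatenation(m):
    seg = ["0", "1"]
    for i in range(2, m):
        seg.append(seg[i - 2] + seg[i - 1])   # previous two, in order
    return seg

def by_johnsons_rule(m):
    seg = ["0", "1"]
    for i in range(2, m):
        # all earlier segments except the immediate predecessor,
        # newest first, then an extra "1"
        seg.append("".join(reversed(seg[:i - 1])) + "1")
    return seg

a, b = by_concatenation(10), by_johnsons_rule(10)
assert a == b
assert [len(s) for s in a] == [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```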

     Limitations on space forbid my summarizing each of the book's 25 chapters, but one section particularly deserves detailed consideration, that concerning what Johnson calls "self-replicating melodies"; Johnson himself rightly regards this material as "the most original, and potentially the most useful of the book." Most of the rest of the book makes well-known mathematics (fractals, automata, weaving patterns) audible, but here musical concerns precede the mathematics. Indeed a more formal treatment would greatly clarify this section, answer some questions Johnson leaves open, and suggest further directions for exploration. I shall sketch such a treatment here at the risk of making certain demands upon the reader.

     First, to summarize Johnson's setup. He conceives of self-replicating melodies as certain special cycles of notes, beginning in the infinite past and repeating forever. Nevertheless, just for the sake of reference, we fix a starting point for our cyclic melodies. Let M denote such a melody. Write n=n(M) for the number of notes/events in M . Write the notes of M as

..., m0, m1, m2, ..., m{n-1}, m0, m1, ... ;

so m0 denotes our arbitrary starting point. The various mi need not be distinct. Now choose a number x , relatively prime to n(M) . (The mi could represent anything, but for now, think of them as numbers which stand for certain pitches.) Johnson calls

..., m0, mx, m{2x}, ..., m{(n-1)x}, m0, mx, ...

a related melody of M ; here we denote it Mx , so M=M1 . In plain talk, we get Mx by taking every x th note of M ; the relative primality of x and n guarantees that M and Mx have the same cycle length, the same event content. In other words, we can get the basic cycle of Mx by permuting that of M . Now if we happen to have an equality M=Mx we call M self-replicating at the ratio x:1 . For example, the melody M=1 2 2 3 2 3 3 , n(M)=7, self-replicates at ratio 2:1 , as in 1(2)2(3)2(3)3(1)2(2)3(2)3(3) .
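     A minimal sketch (mine, not Johnson's notation) of the related-melody operation, checked against his example:

```python
from math import gcd

# M_x takes every x-th note of the cyclic melody M, for x relatively
# prime to the cycle length n.
def related(M, x):
    n = len(M)
    assert gcd(x, n) == 1
    return [M[(k * x) % n] for k in range(n)]

M = [1, 2, 2, 3, 2, 3, 3]          # Johnson's example, n = 7
assert related(M, 2) == M          # self-replicating at the ratio 2:1
assert related(M, 3) != M          # but not at 3:1
```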

     How does one make melodies which self-replicate? To make the idea clear, we first reformulate the notion of cyclic melody. Mathematics offers a universal cycle of length n , the so-called ring Z/nZ , arithmetic modulo n . (Working in Z/nZ means considering two integers equivalent if n divides their difference, but otherwise performing addition and multiplication as usual. We can list representatives of the elements of Z/nZ as {0,1,2,3,..., n-1} .) Let E denote our universal set of musical events. Then we may think of a cyclic melody M as a function M:Z/nZ ----> E . Now given an integer x , we get a function Xx:Z/nZ ----> Z/nZ which just multiplies by x : Xx(m) = xm . Now Mx just equals the function composition M o Xx . Taking x relatively prime to n guarantees that the function Xx permutes Z/nZ . Mathematicians have (at least) two ways to write permutations. For example, considering X2 with n=7 , they might write

0   1   2   3   4   5   6
0   2   4   6   1   3   5

or just

(0)(1   2   4)(3   6   5)  .

In the first (functional) notation one reads down the columns to see that 0 goes to 0 , 1 goes to 2 , 2 goes to 4 , etc. In the second (orbit) notation one reads left to right, 0 goes to 0 , then 1 goes to 2 goes to 4 goes to 1 , then 3 goes to 6 goes to 5 goes to 3 . Here we call (0) , (1 2 4) and (3 6 5) the orbits of the permutation.

     The two notations capture the same information, but the second more immediately clarifies the structure of the permutation in a way we shall find useful. Indeed Mx=M precisely when the function M is constant on the orbits of Xx . For example, taking (the elements of) (0) to 1, (1 2 4) to 2 and (3 6 5) to 3 yields the example quoted above.

     A composer who would work with self-replicating melodies has a direct interest in the number and size of the orbits: the number of orbits making up a permutation limits the number of pitches over which the melody can range. Fortunately, mathematics tells us a lot about orbit decompositions in this context. For example, when we have n=n(M) prime, there must exist a z such that Xz decomposes into (0) and an orbit of size n-1 . (A mathematician would say, a bit more generally, that a finite field has a cyclic multiplicative group.) Then Xz^k decomposes, apart from (0) , into orbits of size (n-1)/gcd(k,n-1) , where gcd means greatest common divisor. So picking k equal to a large factor of n-1 yields many small orbits, thus allowing the composer considerable freedom when constructing the melody.
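     These orbit decompositions are easy to compute by brute force. A small sketch (mine, not Johnson's), reproducing the n=7, x=2 example:

```python
from math import gcd

# Orbit decomposition of Xx (multiplication by x) acting on Z/nZ,
# for x relatively prime to n.
def orbits(n, x):
    assert gcd(x, n) == 1
    seen, result = set(), []
    for start in range(n):
        if start in seen:
            continue
        orbit, k = [], start
        while k not in seen:
            seen.add(k)
            orbit.append(k)
            k = (k * x) % n
        result.append(orbit)
    return result

print(orbits(7, 2))
# -> [[0], [1, 2, 4], [3, 6, 5]]

# For n = 7, z = 3 generates the multiplicative group, and x = 3**2 % 7 = 2
# yields orbits of size (n-1)/gcd(2, n-1) = 3, as promised.
```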

     When we don't have n=n(M) prime, more complicated orbit structures can occur. These do admit a uniform description, well-known at least implicitly. Summarizing, for prime p>2 , Z/p^kZ has cyclic multiplicative group of order (p-1)p^(k-1) , so there exists x such that Xx on Z/p^kZ breaks into orbits of sizes 1 , p-1 , (p-1)p , (p-1)p^2 , (p-1)p^3 , ... , (p-1)p^(k-1) ; the orbit of size (p-1)p^i consists of the elements divisible by p^((k-1)-i) but not by p^(k-i) .
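     A spot check of the prime-power case (my own sketch, not from the book), for p=3, k=2, so n=9, where a suitable x should give orbits of sizes 1, p-1 = 2 and (p-1)p = 6:

```python
# Sizes of the orbits of multiplication by x on Z/nZ, sorted.
def orbit_sizes(n, x):
    seen, sizes = set(), []
    for start in range(n):
        if start in seen:
            continue
        size, j = 0, start
        while j not in seen:
            seen.add(j)
            size += 1
            j = (j * x) % n
        sizes.append(size)
    return sorted(sizes)

assert orbit_sizes(9, 2) == [1, 2, 6]   # the size-2 orbit is {3, 6},
                                        # the multiples of p = 3
```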

     One may use the higher algebraic concept of direct product to assemble this information concerning n=p^k a prime power, so as to describe the multiplicative group of Z/nZ for any n , and thereby the possible orbit structures. Putting all this together could give the composer very precise control in designing self-replicating melodies with particular features; space limitations prevent us from going into further detail.

     Besides permutations of Z/nZ , one may consider permutations rho of E as well. These arise when one generalizes the notion of self-replication, asking that Mx equal, not M itself, but some simple transformation of M , for example, an inversion or a transposition. Now we seek M such that rho o M o Xx = M and this requires the cycle structures of rho and Xx to intermesh in a precise way. We illustrate this by constructing a counterexample to a conjecture Johnson states on page 260. Indeed he writes:

     ... One thing is missing, however, and I bet it will surprise you when I point it out. In fact, the idea was not just a surprise for me, but a real shock. I spent several days in disbelief, trying to demonstrate that it is not true. I am now convinced that it is true, however, and I am going to state it as if it was a real theorem, even though it is barely a conjecture and I can't prove anything:

     A related melody, produced by playing a melodic loop at some ratio other than 1:1 , can never be the inversion of the original loop, unless it is also a retrograde of the original loop.

     Let us explore this. To make a related melody equal the original melody we make it constant on orbits. To make a related melody invert the original melody about note 0 , we make it alternate signs on orbits. This means the melody must use the note 0 on orbits of odd length, else we violate the alternation of signs when the orbit closes. We can easily find n and x so that Xx breaks Z/nZ into mostly even orbits; for example n=13 , x=8 generates the orbit structure

(0)(1    8   12    5)(2    3    11   10)(4    6   9    7) .

     Alas, any melody that alternates signs along these orbits must equal its own retrograde because the diametrical components of these orbits sum to 0 modulo 13 . This was no accident. The orbits have length 4 just because 8^4 = 1 modulo 13 , with 4 the smallest exponent making this so. That means 8^2 = -1 , since 1 has just the two square roots 1 and -1 modulo 13 . This follows because the primality of 13 makes Z/13Z a field, meaning a ring where we can divide by nonzero elements. In a field, the degree of a polynomial bounds the number of roots it possesses. Thus we must seek a ring where 1 has more than two square roots. The ring Z/15Z will do, because 1^2 = 4^2 = 11^2 = (-1)^2 there. Indeed, n=15, x=2 does the trick. Then X2 has the orbit structure

(0)(1   2   4   8)(3   6   12   9)(5   10)(7   14   13   11)

and we can make a melody that takes the value 0 on (0) , alternates between -1 and 1 on (1 2 4 8) , between -2 and 2 on (3 6 12 9) , between -3 and 3 on (5 10) , between -4 and 4 on (7 14 13 11) . The resulting melodic cycle has the form

0   1   -1   2   1   3   -2   4   -1   -2   -3   -4   2   4   -4

which clearly doesn't equal its own retrograde inversion. Observe

(0)   1   (-1)   2   (1)   3   (-2)   4   (-1)   -2   (-3)   -4   (2)   4   (-4)
 0   (1)   -1   (2)   1   (3)   -2   (4)   -1   (-2)   -3   (-4)   2   (4)   -4
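     A sketch (mine, not from the book) confirms both halves of the counterexample: the related melody M2 inverts M, yet no rotation of the reversed cycle reproduces it.

```python
# Verifying the n = 15, x = 2 counterexample.
M = [0, 1, -1, 2, 1, 3, -2, 4, -1, -2, -3, -4, 2, 4, -4]
n = len(M)

M2 = [M[(2 * k) % n] for k in range(n)]
assert M2 == [-m for m in M]            # related melody = inversion

# Check every rotation of the reversed cycle: none reproduces M2,
# so this inversion is not also a retrograde.
retrogrades = [[M[(r - k) % n] for k in range(n)] for r in range(n)]
assert M2 not in retrogrades
```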

     While the melody self-replicates at x=4 , a shorter counterexample which itself doesn't self-replicate nontrivially at all comes from n=8 , x=3 . Now the orbit structure (0) (1 3) (2 6) (5 7) (4) , with the value 0 forced on both singleton orbits, produces the melody

0  1  2  -1  0  4  -2  -4 .

     If we consider n=13 , x=2 , X2 has the orbit structure

(0)(1   2   4   8   3   6   12   11   9   5   10   7) .

     Here we may take as our E the twelve-tone chromatic and consider permuting by a half-step transposition, that is, by adding 1 modulo 12 . (Johnson does not discuss melodies self-similar up to transposition.) Now we assign the orbit (0) to silence, but the orbit

(1   2   4   8   3   6   12   11   9   5   10   7)

to notes

0   1   2   3   4   5   6   7   8   9   10   11

moving up the chromatic by half steps, and we get the melody

  silence    0    1    4    2    9    5    11    3    8    10    7    6

self-similar by half-step transposition, as follows,

(silence)   0   (1)   4   (2)   9   (5)   11   (3)   8   (10)   7   (6)
 silence   (0)   1   (4)   2   (9)   5   (11)   3   (8)   10   (7)   6
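     A sketch (mine, neither Morris's nor Johnson's) verifies both the half-step self-similarity and the all-interval property of the row:

```python
# The n = 13, x = 2 example: taking every other note replays the cycle a
# half step higher (mod 12), and the twelve pitches form an all-interval row.
REST = None
M = [REST, 0, 1, 4, 2, 9, 5, 11, 3, 8, 10, 7, 6]   # index 0 is the silence
n = len(M)

for k in range(1, n):
    assert M[(2 * k) % n] == (M[k] + 1) % 12       # up a half step
assert M[0] is REST                                # silence maps to itself

row = M[1:]
intervals = {(row[i + 1] - row[i]) % 12 for i in range(len(row) - 1)}
assert intervals == set(range(1, 12))              # all eleven intervals occur
```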

     Composer Robert Morris discovered this all-interval twelve-tone row back in the early '70s and made it the basis of Not Lilacs, a rigorous dodecaphonic composition in a driving jazz idiom. (Morris gave no duration to the silence, thereby making syncopation essential to the projection of his row's self-replicating aspect.) Johnson, I believe, moves in circles far from those of the latter-day strict serialists, but not surprisingly the universality of mathematics, with its complexities emerging from its simplicities, produces connections between enterprises emerging from very different concerns.