An arithmetic of rational numbers is developed organically from the idea of distinction. By explicitly representing distinctions as they unfold naturally, a hierarchy of increasingly complex arithmetic systems, including the boundary arithmetic of Jeffrey James, is shown to grow out of the same root. Conflating some distinctions produces the binary arithmetic of Laws of Form by Spencer-Brown. The present paper thus provides a broader contextual framework within which these systems can be related and seen as different branches of the same tree growing from the seed of a primordial distinction.
The first section of the paper provides background and motivation for the approach taken in this work. The second section describes the basic features of prior arithmetic systems that are based on distinction. The third section presents an informal organic development of the present arithmetic system and concludes with a set of axioms upon which an arithmetic of rational numbers may be rigorously based.
The Universe had to have a way to come into being out of nothingness. ...When we say “out of nothingness” we do not mean out of the vacuum of physics. The vacuum of physics is loaded with geometrical structure and vacuum fluctuations and virtual pairs of particles. The Universe is already in existence when we have such a vacuum. No, when we speak of nothingness we mean nothingness: neither structure, nor law, nor plan. ...For producing everything out of nothing one principle is enough. Of all principles that might meet this requirement of Leibniz nothing stands out more strikingly in this era of the quantum than the necessity to draw a line between the observer-participator and the system under view. ...We take that demarcation as being, if not the central principle, the clue to the central principle in constructing out of nothing everything. — John A. Wheeler
What are numbers, really? If everything is made of number, as Pythagoras declared, then it is necessary to understand the nature of number to understand the essence of all things. The concept of number, however, has evolved since Pythagoras. Just as the atom of Democritus is now understood to be made of more fundamental elementary particles, similarly the ancient notion of number is no longer viewed as a primitive notion. Today mathematicians describe numbers in terms of the more general “pre-numerical” notion of a set. The essence of number (and most any mathematical idea at all) can be understood in terms of the idea of set. The modern Pythagorean thus declares that everything is made of sets.
What, then, is a set? A set can be thought of intuitively as a conceptual container, unifying its contents into a single whole. Because sets themselves can be used as the contents for new sets, complex structures can be built up (e.g., numbers). The simple concept of a set is remarkably fertile: almost any mathematical object can be regarded as made of sets combined using logical operations, analogous to how we view all matter as made of elementary particles combined using fundamental forces. In fact, set theory provides the traditional foundation for all of modern mathematics.
Is it possible to provide an even deeper foundation for mathematics? A set is a particular type of distinction, namely, a distinction that creates a one from a many. But not every distinction is a set. For example, logical operators such as not are not sets. Thus, even more fundamental than the notion of a set is the notion of distinction, for every set is defined or created by making a distinction between what is contained in the set, and what is not (e.g., the set itself). So, the Pythagorean maxim then becomes: everything is made of distinction. But can mathematics be systematically based on distinction? It appears that this may be possible. In fact, some mathematicians have taken the first steps already, as we will now see.
Taking distinction as fundamental, let us now attempt to see exactly how some mathematical systems begin to emerge naturally. Just as we can make numbers by repeatedly creating sets, we should be able to make numbers from repeated distinctions. Indeed, in Laws of Form [1], G. Spencer-Brown develops a binary arithmetic based upon distinctions. One of the remarkable and unique features of his system is that the values and the operators that act on them are both made from the same distinctions. To give a sense of his system, we will take a brief look at some of its main features.
A distinction in Spencer-Brown’s arithmetic is represented by a cross, a right-angle mark that we render typographically here as ⟨ ⟩.
This cross at once represents both a value (the marked state) and an operation (crossing from the unmarked state to the marked state). An expression in the arithmetic is formed from a collection of crosses which are nested within or juxtaposed with each other. In addition, an empty space is itself a valid expression, whose value in the arithmetic is conventionally called the unmarked state, or the void. Expressions in the arithmetic are transformed into each other through the application of two initial axioms, the axiom of condensation and the axiom of cancellation:
⟨ ⟩⟨ ⟩ = ⟨ ⟩ (condensation)
⟨⟨ ⟩⟩ = (cancellation)
With these two axioms, any expression is equivalent to either the marked state or the unmarked state. The arithmetic presented in Laws of Form is thus a two-valued arithmetic, called the primary arithmetic.
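To make this reduction concrete, here is a minimal sketch in Python; the encoding is ours, not Spencer-Brown's. A cross is modeled as a list of its contents, and an expression (a space) as a list of the crosses standing in it.

```python
def cross_value(contents):
    # A cross is marked exactly when its interior reduces to the
    # unmarked state (cancellation: a cross over a mark cancels).
    return not space_value(contents)

def space_value(space):
    # Juxtaposed marks condense to a single mark (condensation),
    # so a space is marked if any cross standing in it is marked.
    return any(cross_value(c) for c in space)

assert space_value([[], []]) is True   # two empty crosses condense: marked
assert space_value([[[]]]) is False    # a cross within a cross cancels: unmarked
```

Every finite expression reduces to one of the two values, mirroring the claim that the primary arithmetic is two-valued.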
In Laws of Form, Spencer-Brown also developed an algebra (called the primary algebra) which governs how expressions in the primary arithmetic may be transformed. The primary algebra is based on two algebraic axioms called position and transposition which govern the transformation of expressions:
⟨⟨p⟩p⟩ = (position)
⟨⟨pr⟩⟨qr⟩⟩ = ⟨⟨p⟩⟨q⟩⟩r (transposition)
Remarkably, by interpreting the cross as the logical not and juxtaposition as the logical or this algebra may be interpreted as Boolean algebra, where the marked state and unmarked state correspond to the values true and false, respectively. Thus, the basic laws of logic can be seen as unfolding from the single idea of making a distinction.
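Under this reading the two algebraic axioms become Boolean tautologies, which can be verified by exhaustive enumeration. A small sketch with our own encoding (marked = True, the cross = not, juxtaposition = or):

```python
from itertools import product

def cross(*contents):
    # The cross negates the value of its interior; juxtaposition
    # inside a space is disjunction.
    return not any(contents)

# Position: <<p>p> is always unmarked (false).
assert all(cross(cross(p), p) is False for p in (True, False))

# Transposition: <<pr><qr>> = <<p><q>> r.
assert all(
    cross(cross(p, r), cross(q, r)) == (cross(cross(p), cross(q)) or r)
    for p, q, r in product((True, False), repeat=3)
)
```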
In an unpublished article [2], Spencer-Brown developed from Laws of Form an arithmetic of natural numbers by limiting the axiom of condensation, so that
⟨ ⟩⟨ ⟩ ≠ ⟨ ⟩
In this arithmetic system, the numbers 1, 2, and 3 correspond to the distinct expressions
1 ↔ ⟨ ⟩  2 ↔ ⟨ ⟩⟨ ⟩  3 ↔ ⟨ ⟩⟨ ⟩⟨ ⟩
The operation of addition is defined by juxtaposing two expressions in the same space. For example, 2+3=5 becomes simply
2+3 ↔ ⟨ ⟩⟨ ⟩ ⟨ ⟩⟨ ⟩⟨ ⟩ = ⟨ ⟩⟨ ⟩⟨ ⟩⟨ ⟩⟨ ⟩ ↔ 5
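As a sketch (the encoding is our own), this additive arithmetic can be modeled by representing a number as a row of empty crosses, so that addition is literally juxtaposition:

```python
def num(n):
    # n as a row of n side-by-side empty crosses
    return ('<>',) * n

def add(p, q):
    # addition = juxtaposing the two expressions in the same space
    return p + q

assert add(num(2), num(3)) == num(5)   # 2 + 3 = 5
```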
And the operation of multiplication of two expressions p and q is defined by
p * q ↔ ⟨⟨p⟩⟨q⟩⟩
Because of the nested crosses, this expression for the product of two numbers has no obvious interpretation as a natural number. In order that the operation of multiplication be closed, i.e., result in a value which is a natural number, it must be made equivalent to an expression that is a sequence of juxtaposed crosses. This requires a specification of what transformations of expressions are allowed in this arithmetic of natural numbers.
Just as the arithmetic of natural numbers is derived from the primary arithmetic by limiting condensation and allowing only cancellation, Spencer-Brown proposes that the transformations of expressions of natural numbers allow only the axiom of transposition, with the axiom of position being limited. He also makes use of an algebraic generalization of cancellation which is derived from both axioms of the primary algebra (whose justification is not clear since the axiom of position is limited). Using transposition, expressions for the product of two natural numbers then may be simplified to yield a numerical result. The one exception, however, is that the expression for 0*0 does not simplify to 0, i.e., the void. Another problem is that the unrestricted use of transposition implies distribution of addition over multiplication, i.e., (p+r) * (q+r) = (p*q) + r, which is false in conventional arithmetic. An ad hoc limitation to the application of transposition is thus needed to prevent the derivation of false statements in the arithmetic.
Kauffman [3] and Engstrom [4] suggest ways to solve some of the above problems with Spencer-Brown’s arithmetic system. Kauffman’s approach is the introduction of a secondary implicit distinction between additive and multiplicative spaces, i.e., dual interpretations for expressions. Engstrom, on the other hand, adopts one interpretation, makes an explicit restriction on transposition, and introduces a new symbol to represent 0 so that the expression for 0*0 then simplifies. Although these approaches address the obvious faults with Spencer-Brown’s arithmetic, there are still unresolved questions regarding the justification for its foundations. What, for example, is the justification for adopting the transposition law, which was derived from both axioms of the primary arithmetic, even though one of those axioms is limited? What is the justification for limiting the application of this law? And what is the justification for adopting some of the other algebraic results of the original system, while rejecting others? As it stands, the foundations of this arithmetic of natural numbers appear ad hoc. It would be desirable to have a more intuitive and natural connection with the system upon which it is based. In addition, it is not clear how this arithmetic may be extended beyond natural numbers to obtain an arithmetic of integers or rationals.
Although not directly connected to Laws of Form, Jeffrey James [5] has developed a powerful arithmetic based on distinctions (called boundary arithmetic). His arithmetic system includes integers, rational numbers, as well as some real and imaginary numbers. James assumes as given several different types of distinctions together with a set of axioms that govern their transformation. Rather than using Spencer-Brown’s cross notation, James represents a distinction (i.e., boundary) by one of three different types of bracket pairs: round brackets, ( ), square brackets, [ ], and pointed brackets, < >. Linear strings of these bracket pairs are expressions in his calculus. The virtue of this boundary arithmetic is that it provides a surprisingly powerful arithmetic based upon three distinctions and a few simple rules for their transformation. A drawback is that the transformative rules and various types of boundaries are introduced ad hoc without providing any intuitive basis for them. What is the basis for the adoption of three different boundaries? Are there more fundamental justifications for the axioms governing the transformation of expressions? Is there a deeper connection with Spencer-Brown’s arithmetic?
Much of the elegance of the primary arithmetic and algebra developed in Laws of Form is due to the organic development of its rules and axioms out of the simple act of distinction, making it more than an arbitrary set of axioms and definitions. Although based on Laws of Form, the natural number arithmetic of Spencer-Brown is somewhat ad hoc. The boundary arithmetic of James is much more powerful, but it appears even more ad hoc and has no clear connection that traces it back to the act of making a distinction. It is the object of this paper to show how an arithmetic can be developed that at once has the power of boundary arithmetic while also being organically and intuitively traced back to the act of distinction without any ad hoc axioms or restrictions. As will be shown, such a development does not follow a single unique path but has the possibility of branching out in different directions depending on choices made along the way.
This paper goes further than developing a single arithmetic as opposed to other alternatives: it provides a general context for understanding how distinction gives rise to various arithmetic systems of increasing complexity. From the fundamental idea of distinction grow various simple arithmetics, including both the primary arithmetic in Laws of Form and the boundary arithmetic of James. A contribution of this paper is to unify all these systems in the context of a coherent whole, showing how they are all naturally connected and can be traced back to a single distinction. This context is not itself a rigorous mathematical system, but instead provides a flowing intuitive framework within which axioms can be crystallized at various points to serve as the basis for various rigorous mathematical systems.
The arithmetics of Spencer-Brown and James take distinction as their starting point and presuppose its universal consistency and validity. In contrast, we begin prior to distinction and acknowledge that distinction is not the solid foundation that we normally take it to be. To illustrate this failure of distinction, consider the following paradoxes of set theory.
According to Cantor, “a set is a Many that can be thought of as a One.” If, however, we permit without limitation that every “Many” can be made into a set, we can generate an inconsistency. This is known as the Russell paradox. Consider the collection of all sets that are not members of themselves. Can this Many be thought of as a One? If so, then it is a set. Now if it is a set, it must either be a member of itself or not. Either case results in an inconsistency. Thus, this Many can not be consistently thought of as a One. Cantor, however, did not assert that any Many can be thought of as a One; he simply defined sets as a Many that can be thought of as a One. Thus, in order for the notion of a set to be consistent, one must make a distinction between collections that can consistently be made into sets and those that can not. But can this distinction itself be clearly identified and defined? Which collections can and can not be made into sets? There is no obvious criterion for determining whether or not a given collection can be a set. Russell’s solution to this problem was to introduce an ad hoc restriction on what can be a set. Starting from the most primitive set, the empty set, all other sets are explicitly constructed as collections of pre-existing simpler sets, creating a hierarchy of more and more complex sets. Because this method of set construction does not permit the construction of sets containing themselves, the Russell paradox is avoided. Apart from appearing ad hoc and arbitrary, this solution has a more fundamental problem which has to do with the fact that it is based on the empty set, i.e., the set that does not contain anything. The empty set is the bare idea of making a set, of creating something from nothing. This definition of the empty set presupposes we understand what is meant by “no thing.” If, however, we regard this “no thing” as some subtle “thing”, then it is not truly nothing.
If, on the other hand, we define nothing as the opposite of all things, then our definition becomes circular because we have defined things in terms of nothing.
The logical paradoxes that arise from trying to secure a consistent definition of a set perhaps derive from a more fundamental problem with the attempt to create any distinction at all. We normally assume that it is actually possible to make a perfect, consistent distinction. Could it be, though, that no distinction can in fact be rigorously created? Could it be that, due to the very nature of distinction, all our attempts to create consistent distinctions, whether as sets or anything else, will unavoidably fail? This proposal, although radical, provides an understanding of the root of the problems encountered with set theory. Moreover, because distinctions are the basis of not only mathematical objects, but also of philosophical concepts and all other objects of experience, this proposal may also shed light on a wide range of other problems and paradoxes. An examination of such topics, however, extends beyond the scope of the present paper.
While distinctions may have the appearance of being perfectly clear and definite, upon close examination every distinction will be found to fail. So, when we imagine a distinction to be complete, we ignore (consciously or unconsciously) the true nature of distinction. The imperfection of distinction implies that it is not a solid foundation for a mathematical system. Accordingly, we begin our story prior to distinction. We can not speak or think about what is prior to distinction, however, because words and concepts are based on distinction. But we can use words as pointers to this ineffable reality. We will call it the Absolute, denoted by Ω. Later on, we will also refer to it as the Void, and denote it with an empty space, .
Because the Absolute is prior to every distinction, it is prior even to the distinction between distinction and non-distinction. Because distinction is the basis of logic, the Absolute is prior to logic. Because distinction is the basis of all language, the Absolute is prior to words. The Absolute, however, is not opposed to or exclusive of anything, because it is not ultimately distinct from anything, including distinction. The Absolute encompasses and comprehends all distinction, language, and logic, and everything that is outside of distinction, language and logic. At this ultimate level of infinite comprehension, every distinction, every word, both indicates the Absolute and is itself not distinct from the Absolute. Everything is thus an indication of the Absolute, and is the Absolute. So, through distinctions, the Absolute indicates itself, refers to itself. There is not anything apart from the Absolute, so form and formlessness are indistinguishable from each other and from the Absolute.
This insight, when expressed in the language of logic and its distinctions, appears as contradiction (because we are using the distinctions of words to indicate that which is prior to distinction) and self-reference (because Ω symbolizes and indicates itself). Using pointed brackets < > to represent the shift from symbolized to symbol, from indicated to indicator, and using the symbol = to represent the taking of its two sides as identical, we can express this insight as <Ω>=Ω, meaning Ω seen as “thing” or “distinction” is identical to Ω seen as “nothing” or “non-distinction.” Whether we say something or say nothing, we indicate Ω. Note that the pointed brackets < > do not create a real distinction here; only the appearance of a distinction. Consequently, there is nothing yet and no structure or form; yet there is an appearance of the Absolute that is at once identical with the Absolute. The appearance of distinction, because it is not real, collapses into identity. This can be viewed as a trivial arithmetic of unity. All forms, all signs, all symbols, as well as the absence of these, are all identical with each other and with the Absolute. This arithmetic of unity has only one value, expressed by (and identical to) all forms and their formless essence. It is thus the ultimate in simplicity.
Insofar as everything is identical with the Absolute, there is nothing that has any ultimately distinguishable identity. Yet, although Ω=<Ω> expresses the equivalence of Ω as nothing (i.e., “nothing-everything”) with Ω as something, it also expresses the possibility of imagining two aspects of Ω.[6] Now, based on this possibility, suppose that the equivalence of these two aspects of Ω is forgotten or ignored. Then, suddenly, something appears to exist as distinct from nothing. Form is apparently severed from formlessness. We can express this by writing “<Ω>≠Ω” (where the scare quotes indicate that this inequality is only apparent). Thus, distinction appears now to be real as the result of an ignorance of the original nature of Ω, i.e., an ignorance of <Ω>=Ω. The transformed aspect of Ω denoted by <Ω> now appears as if it is real and independent of Ω. Note that in this new context, the meaning of the symbol Ω has shifted in a subtle but significant way. The original Ω is recognized as equivalent to <Ω>, while now Ω appears to be distinct from <Ω>. In other words, the original Ω includes both form and formlessness, while now Ω appears to represent formlessness as distinct from form. Because this veiled image of Ω should not be confused with the original Ω, to avoid confusion let us represent these two distinguished aspects of Ω as α and β, where α is the transformed aspect of Ω created by the distinction (i.e., “form”), and β is the aspect of Ω that it is distinguished from (i.e., “formlessness”). Thus, because α is distinct from β, and β is distinct from α, we can write α=<β> and β=<α>.
A suggestive, but imperfect, symbol of the above process may be given in a concrete geometric way as follows. Take a blank sheet of paper and draw a circle in the space of the page. We imagine the paper to symbolize an infinite plane and the circle to symbolize a partition of this space into two disjoint subsets: a closed disk and its open complement. (In this geometric illustration, the circle is the boundary of the disk, but strictly speaking the distinction is properly symbolized not as part of either of the two distinct regions in the space; the distinction is the very division which creates and defines the two regions.) Now the circle allows us to identify two regions of the blank space of the page: a space inside the circle and a space outside the circle. To distinguish these two regions of the total space, we mark these two spaces with α and β. The inside space marked by α represents the distinguished aspect of Ω, while the outside space marked by β represents the other aspect of Ω. The whole page represents Ω.
The equation α=<β> means that the space α is on the other side of the distinction from the space β, and β=<α> means that β is on the other side of the distinction from α. The distinction < > can thus be seen in the context of this concrete illustration as an instruction to cross from the space indicated inside the brackets to the space on the other side of the distinction. (Alternatively, it can be seen as a transformation that inverts the two spaces.) Now, since creating or crossing the distinction does not change the nature of the page itself, it is still true that Ω=<Ω>. If we use a blank space (typographically) instead of Ω to symbolize this total space of the page, then this becomes = < >. Since α and β are distinct from each other, we also have α≠β. And since α and β are only parts of the space, and not the whole space, we have α≠ , and β≠ .
It should be emphasized that the above development differs in an important respect from Laws of Form. We have distinguished the unmarked state, β, from the entire space of the page, Ω. In contrast, Laws of Form confuses the space β with Ω, the Void. In other words, Laws of Form has β= . And since α=<β>, it follows that α=<β>=< >. Thus, in Laws of Form the state α and the distinction < > are not separate. In Laws of Form, distinction is confused with the very thing that distinction creates, and the Absolute is confused with the opposite of distinction. In the present system, on the other hand, we recognize that these first must be distinguished before they can be confused, and we make these distinctions explicit to reveal the entire story, as well as other possibilities. In other words, we have α≠< > and β≠ while Laws of Form has α=< > and β= . Another difference is that Laws of Form has < >≠ , while we have < >= . This hints at how Laws of Form can be seen as arising through the process of identifying or confusing distinct aspects of the present system. Tracing explicitly how each system is rooted in the Absolute reveals how they are related to each other.
In summary, if we write < > or a blank space, we indicate Ω. If we write β or α, we indicate one or the other of the two distinguished aspects of Ω. If we write <α>, we indicate β. And if we write <β>, we indicate α.
The indications <α> and <β> are actually compound indications. Two indications are taken together as one. The indication <α> instructs us to indicate α and cross the distinction, i.e., indicate what is distinct from α. To perform this compound indication, we need to be able to combine indications. If we limit ourselves to just the simplest of indications, we have just α, β, and < >. To interpret compound indications, we need to take two indications and consider them as one indication. More generally, we need to be able to indicate multiple indications and take them as a single indication. This implicitly originated with the first distinction < > which distinguished α and β from each other (and from the Void), since identifying α as that which is distinct from β (and vice versa) is identifying a single indication with two indications taken together. This capacity to take a many as a one (the seed of the concept of a set) thus emerges when the first distinction is taken as real, and allows us to regard two indications as a single indication. The deep origin of this capacity derives from the nature of α and β as two aspects of the one Void. Thus, taking a many as a one is possible only because the many are in a deeper sense already one prior to taking them to be many.
Since β and α represent complementary aspects of Ω, if we want to indicate Ω, we can simply indicate both β and α together. Thus, it is natural to have αβ=βα= =< >. So the values of the double indications <α>, <β>, αβ, and βα are defined in terms of existing values. The values of the double indications αα and ββ, however, are as yet undefined. Indicating α twice obviously does not indicate Ω, or β. Nor is a double indication of α obviously the same as a single indication of α: By the capacity to distinguish two indications from one, αα indicates not simply α, but something new. It still indicates the state α, but it indicates it twice, and not just once. Thus, we take αα as distinct from α. Similarly, we take ββ as distinct from β, i.e., αα and ββ are new values distinct from Ω, α, and β. (If we choose, however, we could take the two identical indications as one and identify—or confuse—αα with α, and ββ with β, i.e., α=αα and β=ββ. This would yield a different system.)
Consider now threefold indications, such as <αβ>. Because αβ=Ω, we have <αβ>=<Ω>=Ω. Similarly, <βα>=Ω. So <αβ> and <βα> are defined. As for the expressions <αα> and <ββ>, note that the new values αα and ββ are double indications of the mutually distinct values α and β. Thus, just as <α> indicates the state distinct from α, and <β> indicates the state distinct from β, we may naturally consider the new values αα and ββ to be mutually distinct, so that ββαα=Ω, <αα>=ββ, and <ββ>=αα. Thus, we have now defined values for the threefold indications <βα>=<αβ>=Ω, <αα>=ββ, and <ββ>=αα.
Now observe that <αα>=ββ=<α><α>. Similarly, <ββ>=αα=<β><β>. In addition, <βα>=<Ω>=Ω=βα=<α><β>=<β><α>. Thus, for any two single indications x and y, we have <xy>=<x><y>.
Next consider the triple indications <<α>> and <<β>>. Because we know that β=<α> and α=<β>, it follows from substitution that β=<<β>> and α=<<α>>. Moreover, < >= implies that << >>= . So for any single indication x, we have x=<<x>>.
Also note that because αβ= , it follows that β<β>= , and α<α>= . Moreover, < >= . So, for any single indication x, we have <x>x= .
We thus have provided an intuitive basis for the following algebraic laws:
<xy>=<x><y>
x=<<x>>
<x>x= .
And the basic arithmetic laws are:
β=<α>
α=<β>
βα=< >= .
ββ=<αα>
αα=<ββ>
ββαα= .
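These laws can be checked against the ordinary integers under the obvious numeric model (the model is our choice: α ↔ +1, β ↔ −1, juxtaposition ↔ addition, < > ↔ negation, the void ↔ 0):

```python
ALPHA, BETA, VOID = 1, -1, 0   # α, β, and the void as signed integers

for x in (ALPHA, BETA, VOID):
    assert -(-x) == x                 # x = <<x>>
    assert -x + x == VOID             # <x>x = (void)
    for y in (ALPHA, BETA, VOID):
        assert -(x + y) == -x + -y    # <xy> = <x><y>

assert BETA == -ALPHA                    # β = <α>
assert BETA + ALPHA == VOID              # βα = < > = (void)
assert BETA + BETA == -(ALPHA + ALPHA)   # ββ = <αα>
```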
We can continue the above pattern of development to generate values α, αα, ααα, αααα, etc. and their inverses β, ββ, βββ, ββββ, etc. The resulting system then provides a system for integer arithmetic, where Ω corresponds to 0, α to 1, αα to 2, β to −1, ββ to −2, and so on. Juxtaposition corresponds to the operation of addition and enclosing an expression in brackets < > corresponds to the inverse operation under addition.
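A sketch of this integer arithmetic on strings of indications, writing 'a' for α and 'b' for β (an encoding of our own choosing):

```python
def enc(n):
    # n α's for a non-negative integer, |n| β's for a negative one
    return 'a' * n if n >= 0 else 'b' * (-n)

def inv(s):
    # enclosing in < > exchanges the two aspects: <α> = β, <β> = α,
    # distributed over the string by <xy> = <x><y>
    return s.translate(str.maketrans('ab', 'ba'))

def val(s):
    # cancel complementary pairs: βα = the void
    return s.count('a') - s.count('b')

assert val(enc(2) + enc(-3)) == -1   # αα βββ reduces to β
assert val(inv(enc(2))) == -2        # <αα> = ββ
```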
In the previous section, we created integers from collections of indications: α, αα, ααα, αααα, etc. Now observe that the operation of counting (or “iterating”) the indications of α’s is no different than the operation of counting (or “iterating”) the indications of β’s. We use the same counting operation on both indications. Just as < > is an instruction to cross the distinction from a given state, regardless of which state, this counting operator repeats indications, regardless of what those indications are. These counting operators do not by themselves indicate α or β. We have abstracted from indications of particular states here, and are now indicating the number of repetitions of indication itself, without any particular state being indicated. A tripling of a single instance of indication, for example, is the same as a single indication, and the doubling of a tripling of indication is a sextupling of indication. We have now distinguished a new level of abstraction of indication, where combination represents multiplication rather than addition, and the entities themselves are not particular indications but an indication of repetitions of indication. Thus, to avoid confusion of this new level of abstraction from the prior level, we use brackets [ ] around indications to indicate their multiplicative meaning, and we juxtapose them. In other words, whereas for addition we use bare juxtaposition of indications, for multiplication we enclose all expressions in brackets and juxtapose. For example, we write [α][β]=[β], [αα][β]=[ββ], [ααα][β]=[βββ], [αααα][β]=[ββββ], etc. and similarly, [α][α]=[α], [αα][α]=[αα], [ααα][α]=[ααα], [αααα][α]=[αααα], etc. Now we can write [ααα][αα]= [αα αα αα]. Note that because zero repetitions of any indication is zero repetitions, we have [ ][x]=[ ]. Similarly, any repetition of zero indications is zero, so [x][ ] =[ ].
Our new multiplicative level of abstraction, however, has lost connection with the original additive indications, e.g., there is no clear meaning of a hybrid expression such as [mm]n or mmm[nn]m. Although enclosing an expression in square brackets transforms an expression from its meaning as an additive indication to a multiplicative multiplier of indications, there is no means to transform back and thus integrate the additive and multiplicative levels. To combine both addition and multiplication in the same expression, we thus introduce a second distinction ( ) to transform bracketed multiplicative expressions back to additive expressions. In general, we thus have ([x])=x and [(x)]=x. These distinctions thus take us back and forth across the distinction dividing two levels. Thus, the multiplication of expressions of original additive indications is performed by enclosing them in square brackets, juxtaposing, then enclosing in parentheses, e.g., x multiplied by y is represented ([x][y]). For example, ααα times αα is ([ααα][αα]) = ([αα αα αα]) = αααααα.
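The repetition reading of ([x][y]) can be sketched on strings of indications ('a' for α, 'b' for β, our own encoding): each α in the multiplier contributes a copy of the multiplicand, and each β a copy of its inverse.

```python
def inv(s):
    # crossing < >: swap the two kinds of indication
    return s.translate(str.maketrans('ab', 'ba'))

def mul(x, y):
    # ([x][y]): repeat y once per indication in x, inverted for each β
    return ''.join(y if c == 'a' else inv(y) for c in x)

def val(s):
    # cancel complementary βα pairs
    return s.count('a') - s.count('b')

assert mul('aaa', 'aa') == 'aaaaaa'   # ([ααα][αα]) = αααααα
assert val(mul('bb', 'aaa')) == -6    # (−2) · 3 = −6
assert mul('', 'aaa') == ''           # [ ][x] = [ ]: zero repetitions
```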
Now, because ([x][y]) represents x times y, ([xα][y]) represents x+1 times y. Since this is an additional repetition of y, we obtain ([xα][y])=([x][y])y. Now assume that ([xr][y])=([x][y])([r][y]) for a given r. Substituting xα for x, we then have ([xαr][y])=([xα][y])([r][y])=([x][y])y([r][y])=([x][y])([r][y])([α][y])=([x][y])([rα][y]). Thus, by induction, we have shown that ([xr][y])=([x][y])([r][y]) for any r. This is the law of distributivity of multiplication over addition, i.e., (x+r)y=xy+ry. Exchanging variables, we can also write this as ([xy][r])=([x][r])([y][r]) or ([r][xy])=([r][x])([r][y]). Alternatively, if we substitute each of the variables with a variable in parentheses, we can write this as (r[(x)(y)])=(rx)(ry). Or if we just substitute for r alone, we have (r[xy])=(r[x])(r[y]).
Finally, consider a function F that is defined by x → (x), i.e., F{x}=(x). In particular, xy → (xy), i.e., F{xy}=(xy). Now, because x=[(x)] and y=[(y)], we have (xy)=([(x)][(y)]). Thus, by the definition of multiplication, F{x+y}=F{x}*F{y}. From this it follows that F{x}=F{0+x}=F{0}*F{x}, which implies that F{0}=1. In other words, ( )=α. It follows immediately that αα=( )( ), ααα=( )( )( ), etc. And it follows immediately from α=( ) that β=<( )>, ββ=<( )><( )>, βββ=<( )><( )><( )>, etc. Using the axiom that <xy>=<x><y>, we then obtain β=<( )>, ββ=<( )( )>, βββ=<( )( )( )>, etc. We thus see that the constants α and β may be eliminated entirely and replaced by combinations of distinctions: α=( ) and β=<( )>.
Now observe that x^{2} = x*x is expressed as ([x][x]) = (([[x]][αα])). Similarly, it can be shown that x^{3} = x*x*x is expressed as ([x][x][x]) = (([[x]][ααα])). And by induction, it can be shown that, in general, a^{b} = (([[a]][b])).
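The exponentiation form can also be spot-checked in the illustrative exp/ln model: (([[a]][b])) becomes exp(exp(ln(ln a) + ln b)) = exp(b·ln a) = a^b. (The check is restricted to a > 1 and b > 0 so that all logarithms are real; `power` is our own helper name.)

```python
import math

def power(a, b):
    # a^b = (([[a]][b])) under an exp/ln reading (illustrative; a > 1, b > 0)
    return math.exp(math.exp(math.log(math.log(a)) + math.log(b)))

assert math.isclose(power(3.0, 2.0), 9.0)
assert math.isclose(power(2.0, 3.0), 8.0)
```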
We have now provided heuristic motivation for all the axioms needed to reproduce the system of Jeff James, having developed them organically from the idea of distinction.
I. <p>p=
II. (rx)(ry)=([(x)(y)]r) or (r[xy])=(r[x])(r[y])
III. ([x])=[(x)]=x
IV. x[ ]=[ ]
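As a sanity check, all four axioms hold in the illustrative numeric model used above (juxtaposition = +, ( ) = exp, [ ] = ln, < > = negation, void = 0, and [ ] applied to the void = ln 0 = −∞). This is only a consistency check of one model, not a proof of the axioms.

```python
import math

exp, ln, neg = math.exp, math.log, lambda v: -v
VOID = 0.0
EMPTY_SQ = float('-inf')  # [ ]: the logarithm of the (empty) void

p, r, x, y = 2.3, 0.7, 1.9, 4.1

# I.  <p>p = void
assert neg(p) + p == VOID
# II. (rx)(ry) = ([(x)(y)]r)
assert math.isclose(exp(r + x) + exp(r + y),
                    exp(ln(exp(x) + exp(y)) + r))
# III. ([x]) = [(x)] = x
assert math.isclose(exp(ln(x)), x) and math.isclose(ln(exp(x)), x)
# IV. x[ ] = [ ]
assert x + EMPTY_SQ == EMPTY_SQ
```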
Compare I and II to the axioms of Laws of Form (whose single "cross" boundary is rendered here with < >):
Position: <<p>p> =
Transposition: <<pr><qr>> = <<p><q>>r
From axiom I it follows that < >= . So, by applying axiom I twice, <<p>p>=< >= , thus obtaining the form of the position axiom of Laws of Form. Note also that axiom I implies that <<x>>=x and that <xy>=<x><y>. Now observe that by axioms II and III, [(pr)(qr)]=[([(p)(q)]r)]=[(p)(q)]r, thus obtaining the form of the transposition axiom of Laws of Form. Axioms III and IV reflect the form of two consequences (which Spencer-Brown calls C1 and C3) that are derived from the position and transposition axioms. Thus, Laws of Form can be seen as an arithmetic system that arises by conflating the three different types of distinctions.
Finally, we can show how the < > distinction interacts with the [ ] and ( ) distinctions: <(a[b])> = <(a[b])>([ ]) (by III, ([ ]) is void) = <(a[b])>(a[ ]) (by IV) = <(a[b])>(a[b<b>]) (by I) = <(a[b])>(a[([b])([<b>])]) (by III) = <(a[b])>(a[b])(a[<b>]) (by II, then III) = (a[<b>]) (by I). Thus, < > can be moved outside or inside across the two complementary boundaries: <(a[b])> = (a[<b>]).
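This identity, too, can be spot-checked in the exp/ln model. Since (a[<b>]) takes the logarithm of a negated quantity, we use the principal branch of the complex logarithm (ln(−b) = ln b + iπ); the check below is illustrative only.

```python
import cmath

a, b = 1.2, 3.5

# <(a[b])>  ->  -(exp(a + ln b))
lhs = -cmath.exp(a + cmath.log(b))
# (a[<b>])  ->  exp(a + ln(-b)), ln taken as the principal complex log
rhs = cmath.exp(a + cmath.log(-b))

assert abs(lhs - rhs) < 1e-9
```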
We then have the following correspondences with standard arithmetic:
0, 1, 2 ↔ , ( ), ( )( )
−0, −1, −2 ↔ < >, <( )>, <( )( )>
−0=0 ↔ < > =
a+b ↔ ab
a−b ↔ a<b>
a*b ↔ ([a][b])
0*a=0 ↔ ([ ][a])=([ ])=
0*0=0 ↔ ([ ][ ])=([ ])=
−p+p=0 ↔ <p>p=
rx+ry=r(x+y) ↔ (rx)(ry)=([(x)(y)]r)
−(−(x))=x ↔ <<x>>=x
−(x+y)=−x+−y ↔ <xy>=<x><y>
−(ab)=a(−b) ↔ <(a[b])> = (a[<b>])
a^{b} ↔ (([[a]][b]))
x^{0}=1 ↔ (([[x]][ ])) = ( )
0^{0}=1 ↔ (([[ ]][ ])) = ( )
0^{1}=0 ↔ (([[ ]][()])) =
1/a = a^{−1} ↔ (<[a]>) = (([[a]][<( )>]))
a/b ↔ ([a]<[b]>)
(1/x)(1/y) = 1/xy ↔ (<[x]><[y]>)= (<[x][y]>) , for nonzero x, y
x/x = 1 ↔ ([x]<[x]>) = ( ) , for nonzero x
1/(1/x) = x ↔ (<[(<[x]>)]>) = x
a/b + c/d = (ad+cb)/bd ↔ ([a]<[b]>) ([c]<[d]>) = ([([a][d])([c][b])]<[b][d]>)
0/x=0 ↔ ([ ]<[x]>) = ([ ]) = , for nonzero x
x/0 ↔ ([x]<[ ]>), an irreducible, undefined value, for nonzero x
1/0 ↔ (<[ ]>), i.e., an irreducible, undefined value
0/0 ↔ ([ ]<[ ]>), reducible to both 0 (by IV) and 1 (by I); as in James, such forms must be restricted
x^{1/y} ↔ (([[x]]<[y]>))
√2 ↔ (([[( )( )]]<[( )( )]>))
i=(−1)^{1/2} ↔ (([[<( )>]]<[( )( )]>))
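Several of the correspondences above can be spot-checked in the same illustrative exp/ln model (restricted to positive operands so that all logarithms are real); `E` and `L` below are our own abbreviations for exp and ln.

```python
import math

E, L = math.exp, math.log
a, b, c, d = 3.0, 2.0, 5.0, 4.0

# a*b  <->  ([a][b])
assert math.isclose(E(L(a) + L(b)), a * b)
# a/b  <->  ([a]<[b]>)
assert math.isclose(E(L(a) - L(b)), a / b)
# a^b  <->  (([[a]][b]))
assert math.isclose(E(E(L(L(a)) + L(b))), a ** b)
# a/b + c/d = (ad + cb)/(bd)  <->  ([a]<[b]>)([c]<[d]>) = ([([a][d])([c][b])]<[b][d]>)
lhs = E(L(a) - L(b)) + E(L(c) - L(d))
rhs = E(L(E(L(a) + L(d)) + E(L(c) + L(b))) - (L(b) + L(d)))
assert math.isclose(lhs, rhs)
```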
James presents the following correspondences:
e^{x} ↔ (x)
ln x ↔ [x]
We can perhaps provide a natural justification for these interpretations. Consider again the function F that is defined by x → (x). In particular, xy → (xy), i.e., F{x+y}=(xy). Now, because x=[(x)] and y=[(y)], we have (xy)=([(x)][(y)]). Thus, by the definition of multiplication, F{x+y}=F{x}F{y}. From this it follows that F{x}=F{0+x}=F{0}F{x}, so F{0}=1; in other words, F{0}=( ), which is precisely the form representing 1. Moreover, 1=F{0}=F{x−x}=F{x}F{−x}, so F{−x}=1/F{x}. In other words, (<x>)=(<[(x)]>). In addition, F{nx}=F{x+...+x}=F{x}^{n}, and in particular, F{n}=F{1}^{n}. In other words, (([n][x]))=(([n][[(x)]])) and (n)=(([[(( ))]][n])). We can also see that F{nm}=F{1}^{nm}, i.e., (([n][m]))=(([[(( ))]][n][m])). So, the function F defined by x → (x) has the following properties:
F{0}=1
F{x+y}=F{x}F{y}
F{−x}=1/F{x}
F{n}=F{1}^{n}
F{nx}=F{x}^{n}
We can extend the function F to rational arguments by observing that F{m}=F{m/n}^{n} and in particular F{1}=F{1/n}^{n}. So F{1/n} is clearly the n^{th} root of F{1}. Note, however, that roots are not necessarily unique.
We see that the function F has essentially all the properties of exponentiation with base F{1}. Now F{1}=(( )). Thus if we use the correspondence a^{b} ↔ (([[a]][b])) to write F{1}^{x}, we get (([[(( ))]][x]))=(x). Thus (x) is an exponential to an implicit base (( )). The use of base (( )) is not necessary, but is natural because it provides an especially simple form of the exponential.
If we treat this base as an irreducible number, we do not necessarily need to know the value of F{1} in terms of integers. The distinctions ( ) and [ ] represent a switching between multiplicative and additive levels, and that is just the essence of what the exponential function and its logarithmic inverse do.
Using limits, however, we can extend the function F to the reals and determine a natural value for the base (( )). If we define the real number x as the limit of the (convergent) sequence of rationals x_{n} as n goes to infinity, then we can define F{x} as the limit of F{x_{n}}, thereby extending exponentiation to reals. Using a Taylor series and limits we can even extend F to the complex numbers, as follows. First we write F{x}=(x) ↔ a_{0} + a_{1}x + a_{2}x^{2} + ... From the property F{x+y}=F{x}F{y}, it can be shown that F{x} ↔ 1 + ax + (ax)^{2}/2! + (ax)^{3}/3! + ... Now this takes its simplest form when a=1, which also has the nice property that F′{x}=F{x}. Then we have F{x} ↔ 1 + x + x^{2}/2! + x^{3}/3! + ..., which is the power series for e^{x}. (The number e can be defined as equal to the limit of (1+d)^{1/d} as d approaches 0, or indirectly by saying that e is the number such that (e^{d}−1)/d approaches 1 as d approaches 0.) This series is defined for integers, rationals, reals, and complex numbers. One may factor it into real and imaginary parts, and obtain F{ix} = (ix) ↔ cos x + i sin x, where cos x and sin x are the real and imaginary parts. Of course, this extension to the real and complex numbers has simply assumed the notion of limit, when it has not yet been formally developed. It remains an interesting open question to develop real numbers and limits from the present system in an organic way.
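The series construction can be checked numerically. The sketch below (our own helper `F`, a partial sum of the derived series) reproduces both e and Euler's formula F{ix} = cos x + i sin x.

```python
import math

def F(z, terms=40):
    """Partial sum of 1 + z + z^2/2! + z^3/3! + ... (the series in the text)."""
    total, term = 0.0 + 0.0j, 1.0 + 0.0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)  # next factorial term
    return total

# F{1} recovers the base e of the implicit exponential
assert math.isclose(F(1.0).real, math.e, rel_tol=1e-12)

# F{ix} splits into cos x + i sin x
x = 0.7
w = F(1j * x)
assert math.isclose(w.real, math.cos(x), rel_tol=1e-12)
assert math.isclose(w.imag, math.sin(x), rel_tol=1e-12)
```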
Many of the ideas in this paper emerged as a result of numerous lengthy discussions with Joel Morwood. Without his enthusiasm and persistent prodding, this paper would not exist. Appreciation is also due to Jack Engstrom for several lengthy discussions and detailed feedback on an earlier draft of this paper. Thanks also to the following people for reviewing an earlier draft of this paper and providing valuable comments that have helped to improve it: Dick Shoup, William Bricken, Jeff James, and Dave Keenan. Any remaining errors are, of course, mine.