The Nature of Classical Theoretical Physics

George E. Hrabovsky

1.1 What Is Classical Theoretical Physics?

So what is classical theoretical physics? Before we can answer that question we need to take apart the statement and look at its component parts. Let’s begin with the component question, “What is physics?”

The first statement that we can make is that physics is the most fundamental science; the principles revealed by the process of doing physics inform all other sciences and all areas of technology.

So, we need to understand what we mean by science. I am not talking about the principles and facts revealed by science, I am talking about the process of science. To do this we need to take a few steps backwards to illustrate some ideas.

A statement that is either true or false is called a proposition. This is familiar to anyone who has studied logic. Examples of propositions are, “It will rain today,” “I am hungry,” “I will continue to pay attention in this lesson.”

A conception or belief is what we call a notion. This is not necessarily a definition; it is more of a technical undefined term. A notion is not an idea that carries a lot of weight, since it can be either a belief or a conception. In this case, a belief is an acceptance of an idea as if it were true. A conception is something that is either perceived or regarded. Both of these are somewhat nebulous ideas. In this way, the idea of a notion is also nebulous.

If a notion or proposition is directly the result of an observation, or a direct experience, we say it is empirical. A method of making such an observation—or arranging such an experience—is, reasonably enough, called an empirical method.

Popular belief has it that science is the result of empirical methods. In reality, empirical methods define the topics of interest in a science. This points out an important idea in science: Everything must begin—and end—with nature. In the absence of empirical methods you can never be sure that what you are doing has anything to do with reality.

Something that is not derived from anything else is called primitive. Thus, a primitive proposition is something that can be either true or false that is not based on other propositions or notions. Similarly, a primitive notion is a notion not based on other propositions or notions.

A collection of things is called a set. Often, we use these for arbitrary abstract sets, or sets of numbers. Here, we will also consider collections of propositions and notions to be sets.

Let’s say that we begin with a set of primitive notions and primitive propositions about some topic. By using logical and mathematical statements we can make inferences about the primitive objects that we started with. This process is called deduction, and we can term the set and its inferences as a deductive system.

Any forcefully made statement of belief—or fact—is what we call an assertion. Any process that allows us to do something is what we call an action. The list of all conditions that some object is experiencing at a particular time is called its state. The result of an action or state is what we call a consequence. A primitive notion, a primitive proposition, or a logical consequence of such is an assertion of the deductive system.

Should the primitive notions and primitive propositions apply to some real thing by some empirical method, then we can term the deductive system a physical theory. The primitive notions and propositions take on the title of hypotheses for a physical theory. Any physical theory is a physical science.

Empirical methods produce the data required to establish an accurate hypothesis. Empirical methods also produce the data needed to test any models that predict the outcome of experiments or observations. It is important to know that empirical methods are the only true test of a scientific hypothesis.

Such data are always provisional as they can always be corrected by further observation.

It is not possible to separate an empirical method from the theory that brought about its development. We can reword this to say that there are no “pure empirical methods.” Such an underlying theory need not have anything explicitly to do with a theory being formulated by empirical data, or tested with such data. If one empirical method has a more restrictive theoretical basis than another, we could term that method as being relatively purer.

So, when is a proposition suitable as a scientific hypothesis? There are three conditions that are absolutely essential. The first is that the proposition must conform to all facts that are known within some degree of approximation that is considered as acceptable. The second is that the proposition is self-consistent. The third is that the proposition is compatible with the other hypotheses within the theory.

There are two additional conditions that are not essential. The first of these is that the hypothesis is plausible. What does that mean? It means that the proposition seems reasonable. The second is that the hypothesis was arrived at methodically.

A theory is tested by confrontation with reality. Specifically, the hypotheses that form the theory are tested against existing data, or new data produced for the testing. In this way a theory is either disproved, or it survives. A theory is never proven.

Here is an important point. If a hypothesis is disproved, then the entire theory is also disproved. You can investigate which hypotheses are the cause of the failure, and then you can attempt to mitigate that disproof. This is a process that is sometimes called fine-tuning a theory.

Exercise 1.1: At this point it is tempting to go down the rabbit hole of logic. Logic is very important in science. This is really as far into logic as we are going to go. I recommend that you get a book on logic, or at least consider several Internet resources on logic. At least look up some encyclopedia articles. As we progress consider the logic of statements made. Look for logical fallacies.

So, we now have a pretty good idea of what a science is. What distinguishes physics from other sciences? We began by saying that physics is fundamental. What does that mean? It means that the principles illustrated by physics form the base or core for all other sciences. In that way, we may regard the subject matter of physics as being primitive, or the logical consequence of such primitive ideas. It can be difficult to separate the contents of physics from the contents of associated sciences.

The important thing to recall is that while we divide the subject matter into parts like physics, chemistry, and so on; nature makes no such distinction. This is why it can be difficult to make such distinctions at the boundaries.

So, what do we mean by classical physics? Traditionally, classical physics has been defined as physics that does not consider quantum theory. I think this is too stark and does not point out what makes quantum theory so weird. I think modern physics is physics that understands that the condition of the observer affects the results of any measurement made by the observer. So, the term classical physics means any physics that allows an absolutely objective observation.

What is theoretical physics? Theoretical physics attempts to formulate physical laws and physical theories.

Classical theoretical physics is then the attempt to formulate physical laws and physical theories that allow for an absolutely objective observer.

So, why start with classical physics? We understand—at some level—that the idea of an objective observer is wrong. It is ironic that the correct interpretation of a nonobjective observer only occurs under conditions that we never encounter on a day-to-day basis. In this sense we are all hard-wired as classical physicists. Even with this built-in wiring we need to rewire our brains to think about things correctly. It is like building a house. We erect a scaffolding around it to help in the construction. When the house is finished we remove the scaffolding, as we no longer need it. Classical physics gives us a scaffolding to learn to think about the methods of theoretical physics and its attendant mathematical and computational aspects.

Why start with classical mechanics? It is the foundation of concepts and methods used throughout physics. We could say that classical mechanics is primitive. The biggest desire from classical mechanics is the ability to predict the future. The hope is summed up in a quote from the great French mathematician and physicist Pierre-Simon Laplace, who wrote:

“We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.”

In classical mechanics, if you know everything about a system at some measured time, and you also know the theory that governs how the system changes, then you can predict the future. If this turns out to be true, then we say that the theory of the system is deterministic. If we can say the same thing, but with the past and future reversed, then the same theory will tell you everything about the past. Such a theory is called reversible.

1.2 What is a Physical Law?

Say that we have used empirical methods to collect data about some physical phenomena.  We then use mathematical or computational techniques to attempt to find patterns in that data. Such a perceived pattern is not considered a fact, only a potential fact. We examine other data to see if that pattern exists there, too. If the pattern exists in every case we examine, we then gain a level of confidence in it, and consider it to be a fact. That confident pattern is often called a law of nature.

Laws of nature are almost always expressed as a mathematical relationship between physical quantities.

Laws of nature are always provisional. It is possible that tomorrow someone will discover, in data, that a law of nature is wrong, or incomplete.

You might think that a list of laws is all you need to learn physics. Not so; we fully realize that we do not know all of the laws of physics, so no list is complete. Such a list also ignores the necessary methods of discovery that you require to do physics.

1.3 Simple Dynamical Systems and State Space

So, how do we use these philosophical principles to actually do some physics? One of the first methods of theoretical physics is to strip a situation of all of its non-essential aspects. This is called the method of abstraction. Since there is no hard criterion for the choice of what to remove, it is subject to individual interpretation. What is the most abstract way of looking at the world around us?

That went by a little fast. What is the method of abstraction? It is a form of construction. What are we constructing? We construct a framework that allows us to take the impossibly complicated natural world and systematically remove all complicating factors. We hope that such a framework allows us to identify rules of correspondence between primitive propositions, primitive notions, and their consequences and what we experience of Nature. The abstraction treats particulars of a situation as primitive propositions and primitive notions, or their consequence. The construction process also allows us to endow the collection of such with their own properties. It is a completely creative act constrained by the requirement to conform with experience.

We need to remove everything that exists and look at a region with nothing in it. That is by far the simplest abstraction we can make. It might be very restful, but it is also completely boring.

To make things more interesting, and still quite abstract, we can introduce an object. How do we introduce an object? What kind of object? What are its properties? At this point none of that matters. We are considering only an abstract physical object having no identity. We can call it Thing 1. Now, we state, “Thing 1 is in our region of consideration.” In this way we can introduce as many objects as we like, Thing 2, Thing 3, and so on. For each Thing we assign a natural number to keep count of the collection. Such a collection forms a set of physical objects.

Exercise 1.2: At this point it is tempting to go down yet another rabbit hole—this time for set theory. Check the end section of further reading for my suggestions of set theory books for independent study.

A set of physical objects is called a system. A system that is either the entire universe or is so isolated from everything else that it behaves as if nothing else exists is what we will call a closed system.

Exercise 1.3: Since the notion is so important to theoretical physics, think about what a closed system is and speculate on whether closed systems can actually exist. What assumptions are implicit in establishing a closed system? What is an open system?

The collection of all information required to specify the condition of a system is called its state. Note that this is a subtle distinction from what we already said about the state of an object in section 1.1.

Exercise 1.4: What is the difference between saying that the condition of a system is its state, and the information that allows you to specify the condition of a system is its state?

Let’s look at an extremely simple closed system. There is a temptation in theoretical physics to assume a system is closed, unless we say otherwise. This is a lazy attitude, and a bad habit to get into.

Why choose a simple system to start with? It is in accord with the method of abstraction.

In physics jargon, the collection of all states occupied by a system is its space of states, or, more simply, its state-space. The state-space is not ordinary space; it's a mathematical set whose elements label the possible states of the system.

Imagine an abstract system that has only one state. We could think of it as a coin glued to the table—forever showing Heads. Here the state-space consists of a single element—namely Heads (or just H)—because the system has only one state. Predicting the future of this system is extremely simple: Nothing ever happens and the outcome of any observation is always H.

The next simplest system has a state-space consisting of two points; in this case we have one abstract object and two possible states. Imagine a coin that can be either Heads or Tails (H or T).  See Figure 1.1.


Figure 1.1: The space of two states, H and T.

In classical mechanics, we assume that systems evolve smoothly—without any jumps or interruptions. Such behavior is said to be continuous. Obviously you cannot move between Heads and Tails smoothly. Moving, in this case, necessarily occurs in discrete jumps. So let’s assume that time comes in discrete steps labeled by whole numbers. A world whose evolution is discrete could be called stroboscopic.

A system that changes with respect to some variable is called a dynamical system. All dynamical systems have a variable that the system changes with respect to; we can call this the evolutionary variable. Time is usually the evolutionary variable, but it need not be. A dynamical system consists of more than a space of states. It also entails a law of evolution, or we could call it a dynamical law. The dynamical law is a rule that tells us the next state given the current state.

One very simple dynamical law is that whatever the state at some instant, the next state is the same. In the case of our example, there are two possible histories: H H H H H H . . . and T T T T T T . . . . See Figure 1.2. This first law specifies that the arrow from H goes to H and the arrow from T goes to T. Once again it is easy to predict the future: If you start with H, the system stays H; if you start with T, the system stays T.


Figure 1.2: A dynamical law for a two-state system.

Another possible dynamical law dictates that whatever the current state, the next state is the opposite.  A diagram for the second possible law is shown in Figure 1.3, where the arrows lead from H to T and from T to H. You can still predict the future. For example, if you start with H the history will be H T H T H T H T H T . . . .  If you start with T the history is T H T H T H T H . . . .


Figure 1.3: Another dynamical law for a two-state system.

We can even write these dynamical laws in mathematical form. The number of variables describing a system is called its degrees of freedom. Our coin has one degree of freedom, which we can denote by the Greek letter sigma, σ. Sigma has only two possible values: σ = +1 and σ = -1, respectively, for H and T. We also use a symbol to keep track of the time. When we are considering a continuous evolution in time, we can symbolize it with t. Here we have a discrete evolution and will use n. The state at time n is described by the symbol σ(n), which stands for σ at n.

Let’s write equations of evolution for the two laws. The first law says that no change takes place. In equation form we write,

σ(n+1) = σ(n).    (1.1)

In other words, whatever the value of σ at the nth step, it will have the same value at the next step.

The second equation of evolution has the form

σ(n+1) = -σ(n),    (1.2)

implying that the state flips during each successive step.

In each case the future behavior is completely determined by the initial state; such laws are deterministic. We can now assert, without proof, that all of the basic laws of classical mechanics are deterministic.
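These two laws are easy to simulate. Here is a minimal sketch in Python (an illustration of mine, not part of the original lesson), encoding H as +1 and T as -1 as above:

```python
# sigma is +1 for Heads and -1 for Tails.

def law1(sigma):
    # First law: the next state is the same as the current state.
    return sigma

def law2(sigma):
    # Second law: the next state is the opposite of the current state.
    return -sigma

def history(law, sigma0, steps):
    """Return the states [sigma(0), sigma(1), ..., sigma(steps)]."""
    states = [sigma0]
    for _ in range(steps):
        states.append(law(states[-1]))
    return states

print(history(law1, +1, 5))  # [1, 1, 1, 1, 1, 1]: the history H H H H H H
print(history(law2, +1, 5))  # [1, -1, 1, -1, 1, -1]: the history H T H T H T
```

Knowing the law and the starting state, the whole future follows, which is exactly what determinism means here.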

To make things more interesting, let’s generalize the system by increasing the number of states. Instead of a coin, we could use a six-sided die, where we have six possible states (see Figure 1.4).


Figure 1.4: A six-state system.

Now there are a great many possible laws, and they are not so easy to describe in words—or even in equations. The simplest way is to stick to diagrams such as Figure 1.5. This says that given the numerical state of the die at time n, we increase the state one unit at the next instant n+1. That works fine until we get to 6, at which point the diagram tells you to go back to 1 and repeat the pattern. Such a pattern that is repeated endlessly is called a cycle. For example, if we start with 3 then the history is 3, 4, 5, 6, 1, 2, 3, 4, 5, 6, 1, 2, . . . . We’ll call this pattern Dynamical Law 1.


Figure 1.5: Dynamical Law 1.

How do we write a mathematical expression for this?

s(n+1) = s(n) + 1    (1.3)

seems like a good attempt. But wait: what happens when s(n) = 6? We need to impose some rule that brings the state back to 1 when it would exceed 6. We can do this with a modulo operation, denoted mod.

s(n+1) = [s(n) mod 6] + 1.    (1.4)

The modulo operation tells us that once the state reaches 6 it returns back to 1 at the next step. So the evolution is exactly what we want.
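The wrap-around rule can be checked directly; a small Python sketch (one way to write the modulo so that the states stay in 1 through 6):

```python
def law(s):
    # Advance the die one state, wrapping 6 back around to 1.
    # Writing the wrap as (s % 6) + 1 keeps the state in {1, 2, 3, 4, 5, 6}.
    return (s % 6) + 1

s, states = 3, [3]
for _ in range(11):
    s = law(s)
    states.append(s)

print(states)  # [3, 4, 5, 6, 1, 2, 3, 4, 5, 6, 1, 2]: the cycle of Dynamical Law 1
```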

Figure 1.6 shows another law, Dynamical Law 2. It looks a little messier than the last case, but it’s logically identical—in each case the system endlessly cycles through the six possibilities. If we relabel the states, Dynamical Law 2 becomes identical to Dynamical Law 1.


Figure 1.6: Dynamical Law 2.

Exercise 1.5: Relabel the diagram of Dynamical Law 2 so that it looks the same as Dynamical Law 1. To make this formally true, write down the rules that transform the given states into the new states so that the two diagrams become the same; such diagrams are called isomorphic. Rules of this kind are examples of transformations.

Not all laws are logically the same. Consider, for example, the law shown in Figure 1.7. Dynamical Law 3 has two cycles. If you start on one of them, you can’t get to the other. Nevertheless, this law is completely deterministic. Wherever you start, the future is determined. For example, if you start at 2, the history will be 2, 6, 1, 2, 6, 1, . . .,  and you will never get to 5. If you start at 5 the history is 5, 3, 4, 5, 3, 4, . . .,  and you will never get to 6.


Figure 1.7: Dynamical Law 3.
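A law with several cycles is most naturally stored as a lookup table from each state to its successor. A Python sketch of Dynamical Law 3 as described above:

```python
# Dynamical Law 3: each state maps to the next state along its cycle.
law3 = {2: 6, 6: 1, 1: 2,   # one cycle: 2, 6, 1, 2, 6, 1, ...
        5: 3, 3: 4, 4: 5}   # the other: 5, 3, 4, 5, 3, 4, ...

def history(start, steps):
    states, s = [start], start
    for _ in range(steps):
        s = law3[s]
        states.append(s)
    return states

print(history(2, 5))  # [2, 6, 1, 2, 6, 1]: starting at 2, you never reach 5
print(history(5, 5))  # [5, 3, 4, 5, 3, 4]: starting at 5, you never reach 6
```

The two cycles never mix, yet every future is fully determined by the starting state.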

Figure 1.8 shows Dynamical Law 4 with three cycles.


Figure 1.8: Dynamical Law 4.

It would take a long time to write out all of the possible dynamical laws for a six-state system.

Exercise 1.6: Can you think of a general way to classify the laws that are possible for a six-state system?

1.4 Rules That Are Not Allowed—The Minus-First Law

Not all laws are allowable as prototypes of natural laws. It’s not enough for a dynamical law to be deterministic; it must also be reversible.

Exercise 1.7: Why do laws have to be reversible?

If you reverse all the arrows, and the resulting law is still deterministic, then we say that the law is reversible. Another way to say this is that the laws are deterministic into the past as well as the future. Recall Laplace’s remark, "...for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes."

The question then becomes, “Can one write laws that are deterministic into the future, but not into the past?” In other words, can we formulate irreversible laws? Indeed we can.  Consider Figure 1.9.


Figure 1.9: A system that is irreversible.

The law of Figure 1.9 does tell you, wherever you are, where to go next. If you are at 1, go to 2. If at 2, go to 3. If at 3, go to 2. There is no ambiguity about the future. But the past is a different matter. Suppose you are at 2. Where were you just before that? You could have come from 3 or from 1. The diagram just does not tell you. Even worse, in terms of reversibility, there is no state that leads to 1; state 1 has no past. The law of Figure 1.9 is irreversible. It illustrates just the kind of situation that is prohibited by the principles of classical physics.

Notice that if you reverse the arrows in Figure 1.9 to give Figure 1.10, the corresponding law fails to tell you where to go in the future.


Figure 1.10: A system that is not deterministic into the future.

There is a very simple rule to tell when a diagram represents a deterministic reversible law. If every state has a single unique arrow leading into it, and a single arrow leading out of it, then the governing law is a deterministic reversible law. Here is a statement of this overriding rule: There must be one arrow to tell you where you’re going and one to tell you where you came from.

The law that dynamical laws must be deterministic and reversible is so central to classical physics that we sometimes forget to mention it when teaching the subject. In fact, it doesn’t even have a name. We could call it the first law, but unfortunately there are already two first laws—Newton’s and the first law of thermodynamics. There is even a zeroth law of thermodynamics. So we have to go back to a minus-first law to gain priority for what is undoubtedly the most fundamental of all physical laws—the conservation of information. The conservation of information is simply the rule that every state has one arrow in and one arrow out. It ensures that you never lose track of where you started.
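The "one arrow in, one arrow out" rule is easy to mechanize. A Python sketch (my illustration, not from the text), with a dynamical law written as a lookup table from each state to its successor:

```python
def is_reversible(law):
    """Check the minus-first law for a dynamical law given as a dict
    mapping each state to its successor.  A dict guarantees exactly one
    arrow out of each state; we must also check one arrow in, i.e. that
    no successor repeats and every state appears as a successor."""
    targets = list(law.values())
    return len(set(targets)) == len(targets) and set(targets) == set(law)

# Figure 1.9: 1 -> 2, 2 -> 3, 3 -> 2.  Two arrows lead into 2, none into 1.
print(is_reversible({1: 2, 2: 3, 3: 2}))    # False: irreversible

# The flip law of Figure 1.3: H -> T, T -> H.  One arrow in and out everywhere.
print(is_reversible({'H': 'T', 'T': 'H'}))  # True: deterministic and reversible
```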

1.5 Dynamical Systems Using Number Systems

There is no reason why you can’t have a dynamical system with an infinite number of states. What is infinity? When a quantity can grow without ever stopping its growth, we call that a quantity having an infinite limit. A set where we can always add in another element is called an infinite set. A sequence where we can always place another value is an infinite sequence. So, what is an example of an infinite set? We shall choose the set of integers. Here we recall that the set of integers is a mathematical structure containing the natural numbers, the negative natural numbers, and 0. This set is denoted with the double-struck Z, ℤ; this is a tradition based on the German word for numbers, Zahlen. We could say that there is a state for every integer. If we label our states m, then we would write m ∈ ℤ, saying that any given state label is some integer (see Figure 1.11). The idea that we can always add another integer is given by the symbol ∞; it is important to note that ∞ does not represent any kind of number. The idea that we can always add another integer in the negative direction is denoted by the symbol -∞.


Figure 1.11: State space for an infinite system of integers.

A simple dynamical law for such a system is that we shift one state in the positive direction at each time interval (see Figure 1.12).


Figure 1.12: A dynamical rule for an infinite system.

This is allowable, as each state has one arrow in and one arrow out. (Infinity is not a number; it is a place-holder.) We can easily express this rule in the form of an equation:

s(n+1) = s(n) + 1.    (1.5)

Exercise 1.8: Determine which of the following four dynamical laws are allowable:
s(n+1)=s(n)-1 .
s(n+1)=s(n)+2 .
Lesson 1_19.png
Lesson 1_20.png
Draw a diagram for each dynamical law.

We can add more states and dynamical laws to our infinite dynamical system. We can say that we are separating the state space into regions (see Figure 1.13).


Figure 1.13: Separating an infinite system into regions.

If we start with a numbered state, then we just keep proceeding through the set of integers. On the other hand, if we start at A or B, then we cycle between them. So we can have a mixture of regions: in some we cycle around among a few states, while in others we move off to infinity.

1.6 Cycles and Conservation Laws

Let’s consider a system with three regions. States 1 and 2 each belong to a single and separate cycle, while 3 and 4 belong to the third (see Figure 1.14).


Figure 1.14: Separating the state space into regions.

Whenever a dynamical system breaks the state space into such separate regions, there is the possibility of keeping a memory of where we started. Such a memory is a conservation law, telling us that something is kept intact for all time. Let’s say that states 1 and 2 represent the values of a variable; we could relabel them +1 and -1. We also have a cycle between states 3 and 4, where both states have a value of 0 (see Figure 1.15).


Figure 1.15: Replacing the labels with specific values.

Because the dynamical law does not allow you to jump from cycle to cycle, the value is conserved: starting at one value means you have that value forever.
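This can be checked mechanically. A Python sketch of Figure 1.15, with the cycle structure as described above:

```python
# The law of Figure 1.15: states 1 and 2 are each their own cycle,
# while states 3 and 4 cycle into one another.
law = {1: 1, 2: 2, 3: 4, 4: 3}
# The relabeled values: +1, -1, and 0 for both members of the shared cycle.
value = {1: +1, 2: -1, 3: 0, 4: 0}

for start in law:
    s = start
    for _ in range(10):
        s = law[s]
        # The value never changes along any history: a conservation law.
        assert value[s] == value[start]

print("each starting value is conserved for all time")
```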

As stated above, this can be called information conservation. Information conservation is probably the most fundamental aspect of the laws of physics. Why is that so? We don't really know why the laws of physics have the properties that they have. All we have is the experimentally derived fact that the laws of physics are information conserving.

1.7 The Limits of Precision

Today we know that Laplace was mistaken about the predictability of the world. Predicting the behavior of systems as he appeared to envision it necessarily requires a perfect knowledge of the dynamical laws governing the system. There is also the explicit recognition that the “intellect vast enough to submit these data to analysis” requires vast computing power. One aspect that is implicit, rather than explicit, is the inability to know the initial conditions to perfect precision. Such perfection is necessary for perfect predictability. The further away from perfect initial conditions we are, the further away from predictability we get.

In the real world, it’s even worse; the space of states is not only huge in its number of points—it is continuously infinite. In other words, it is labeled by a collection of real numbers. Real numbers are so dense that every one of them is arbitrarily close in value to an infinite number of neighbors. The ability to distinguish the neighboring values of these numbers is the "resolving power" of any experiment, and for any real observer it is limited. In principle, we cannot know the initial conditions with infinite precision. In most cases the tiniest differences in the initial conditions—the starting state—lead to large eventual differences in outcomes. This phenomenon is called chaos. If a system is chaotic (and most are), then however good the resolving power may be, the time over which the system is predictable is limited. Perfect predictability is not achievable, simply because we are limited in our resolving power.
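A standard illustration of this sensitivity (my example, not from the text) is the logistic map x → 4x(1 - x) on the interval [0, 1]. A short Python sketch shows two histories that start a billionth apart tracking each other briefly and then diverging completely:

```python
def step(x):
    # The logistic map at full chaos: x -> 4 x (1 - x).
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-9   # initial states closer than any realistic resolving power
early, worst = 0.0, 0.0
for n in range(60):
    x, y = step(x), step(y)
    if n == 2:
        early = abs(x - y)           # after three steps: still of order 1e-9
    worst = max(worst, abs(x - y))   # eventually of order 1

print(early, worst)
```

However finely we resolve the starting state, the horizon of predictability grows only slowly, which is the practical content of chaos.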

1.8 What is a Physical Theory?

A physical theory can be thought of as the specific presentation of a physical situation, along with its consequences. What sets this apart? The fact that a theory has been accepted as at least partly true by the scientific community. So, if someone says, “It is only a theory,” it is important to understand that this means scientists accept it as a potential fact—always understanding the provisional nature of such facts. How are theories developed? There are many ways; I will present several, in no particular order.

Estimation is a collection of techniques that allow you to guess, usually based on solid physical principles, the value of important physical quantities to within a power of ten. These are often called Fermi problems, after the famous physicist Enrico Fermi, who was always making such estimates.

Abstraction is where we generalize from specific cases to a general principle. Here we begin by making the assumption that the abstraction can be accomplished. Once this is done, we then attempt to work out the consequences so that we can test the generalization against actual data. One particular method is interpolation, where we take empirical data points as ideal; we then produce a curve that connects them (the interpolation) as a hypothetical generalization. Another method is regression, where we assume that the data points have an associated error; the result is a regression curve that identifies a general law and eliminates the errors at the same time.
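A small Python sketch of both methods, using invented data points that scatter around the hypothetical law y = 2x + 1:

```python
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.05, 2.96, 5.02, 6.97, 9.01]   # invented noisy measurements of y = 2x + 1

def interpolate(x):
    """Interpolation: treat the points as ideal and connect neighbors exactly."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Regression: assume the points carry errors and fit one straight line
# through all of them by least squares.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

print(interpolate(1.5))   # close to 4, halfway between the second and third points
print(slope, intercept)   # close to 2 and 1, the law hiding behind the noise
```

Interpolation reproduces the data exactly, noise and all; regression trades that exactness for a single simple law.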

Specification is the opposite process to abstraction. Here you begin with a general principle or a mathematical formulation and you apply it to a specific situation. In this way you can derive the necessary mathematical or computational framework tailored to a specific situation. If you use this method to make specific predictions for your situation, then you are creating a mathematical or computational model.

One method of deriving a formulation is to propose a relationship among the variables where the structure is imposed by the requirements of the units of the physical quantities involved. This is called dimensional analysis. It is often used to derive required formulas accurate to within an arbitrary constant.
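As a concrete sketch (a standard textbook example, not from this text): suppose the period T of a pendulum depends only on its length L (a length) and the gravitational acceleration g (length over time squared), so that T = C L^a g^b for some dimensionless constant C. Matching the length and time dimensions fixes the exponents, which a few lines of Python can solve:

```python
# Dimensions written as (length exponent, time exponent):
L_dim = (1, 0)    # pendulum length L
g_dim = (1, -2)   # gravitational acceleration g
T_dim = (0, 1)    # the period T we want to build

# T = L^a g^b requires, dimension by dimension:
#   length:  a + b = 0
#   time:   -2 b   = 1
b = T_dim[1] / g_dim[1]         # b = -1/2
a = T_dim[0] - g_dim[0] * b     # a = +1/2

print(a, b)  # 0.5 -0.5, so T = C * sqrt(L / g)
```

Dimensional analysis alone cannot determine C (for the simple pendulum it happens to be 2π), which is exactly what "accurate to within an arbitrary constant" means.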

One way of doing theoretical physics is to imagine a situation and then work out its consequences using your physical intuition. Such an activity is called a thought experiment. The success of such experiments is dependent upon your depth of physical intuition. Understand that calculations will likely need to be made, but the physics is the important part.

Another way of doing theoretical physics is to frame your physical phenomena according to some mathematical or computational formulation of the theory. You then make predictions based on this analogy of the real situation. This is called a model, and this forms a large part of theoretical physics.

If you have a mathematical formulation that is too complicated, you might be able to simplify the situation by deriving a new quantity, and then reformulating your theory with respect to that quantity. This type of theory is called constructive, because you are constructing a new formulation.

Another way of simplifying a formulation is to identify if there are situations where the answer is the same no matter how you look at it. This property is called symmetry. As an example, if we can say that the answer looks the same no matter what direction you look at it, we call that spherical symmetry and it reduces the problem from three directions down to only one.

Sometimes a formulation will suggest some mathematical property of a function that can be exploited. Such properties, whose names may sound esoteric and scary (as we have not explored them yet), include integrability, differentiability, continuity, and the ability to be expanded in series.

There is another way of simplifying theories; it rests on the assumption that different phenomena are part of a single theory that encompasses them all. We call such a larger theory a unification. It was thus that Sir Isaac Newton unified the phenomena of gravity on the earth with the orbits of planets and moons.

Another way of doing theoretical physics is to just play around with the ideas. See if you can make something work without any formal structure.

A final aspect of choosing among competing theoretical systems is simplicity. I tend to discount simplicity arguments unless one idea is obviously simpler than the others. There is no real criterion for simplicity other than a vague sense of aesthetics.

For Further Reading

Leonard Susskind, George E. Hrabovsky, (2013), The Theoretical Minimum, Basic Books. This was my first attempt at this; you will recognize many similarities with chapter one.

Robert Bruce Lindsay, Henry Margenau, (1936), Foundations of Physics, John Wiley and Sons, reprinted with corrections in 1957 by Dover Publications. This is a wonderful book, whose first chapter forms a very nice short course in the nature of theoretical physics.

David Kelley, (2014), The Art of Reasoning. W. W. Norton & Company, 4th edition. This is a mostly non-mathematical presentation of basic logic and some fundamental ideas of critical thinking.

Steven Galovich, (1989), Introduction to Mathematical Structures, Harcourt Brace Jovanovich, Inc. This is a truly wonderful book on the foundations of mathematics. The first chapter is almost 100 pages and forms an introduction to logic, axiomatic systems, proofs, and mathematical discovery. Chapters two through four form a nearly 150 page course in axiomatic set theory.

Donald W. Hight, (1977), A Concept of Limits, Prentice-Hall Inc. (Reprinted in 1977 with corrections by Dover Publications). Chapter 1 is a thorough introduction to sequences and their limits (including a natural approach to the Weierstrass-Jordan notation, or as it is more colloquially known, the ε-δ notation).

Created with the Wolfram Language