Recall the method of exhaustion from Section 1.3.2 in Chapter 1. There, we encountered our first limiting process: we computed a sequence of approximations of the circumference as the number of segments became larger and larger, and the lengths of those same segments were made smaller and smaller.
More generally, a limit describes the behavior of functions, sequences, or any mathematical objects as some value is approached. In discussing limiting behavior, we aren’t interested in the value of the function or sequence at some particular value, but rather in the output of the function as that value is approached. Indeed, the segments in the method of exhaustion could not have length zero, nor could there be an infinite number of them. We approximate the circumference for some finite number of segments, and hope that the approximation gets closer and closer to the circumference as the number of segments grows larger (and the segments are made shorter). In the case of the method of exhaustion, we say that the sequence of approximations converges, since it comes ever closer to a finite value (namely, \(2\pi\), for a circle of radius one) as the number of segments grows.
All limits can be reduced to sequences. A sequence is a collection of objects of any type for which the order matters. This definition is intentionally vague. We will want to take limits of speeds, line segments, areas, accelerations, energies; indeed, of any quantity that might be considered in math and science. To encourage this sense of generality, we will discuss all manner of limiting behaviors in this text (indeed, limits are the foundation of Calculus), but they are all simply limits of sequences.
2.2 Definition of Limits
We will denote a sequence with the notation \((a_n)_{n\in\mathbb{N}}\), where \(n = 1, 2, \dots\) runs over the natural numbers \(\mathbb{N}\). The parentheses indicate that we are considering an ordered collection, and the index \(n\) records the position of each element in that order: \(a_1\) is the first element, \(a_2\) is the second element, and so on.
With these basics in mind, consider the illustration below.
In this case, \(a_n\) corresponds to the y-coordinate of each point, while \(n\) determines the x-coordinate. Notice that as \(n\) grows larger (that is, as the point changes from green to blue), \(a_n\) is closer and closer to the dotted line labeled by \(L\). In particular, once \(n = 114\) (that is, the point turns red), the point \(a_n\) falls within the bounds \(-\epsilon\) and \(+\epsilon\) surrounding \(L\) and remains within those bounds forever (and thus remains red). As simple as it sounds, this is one of the most general definitions of a limit in Calculus.
Of course, words aren’t enough. We need to express our thoughts in notation. The following will likely look very scary (it does to everyone when they first encounter it), but don’t worry: the idea is fundamentally simple, no more complicated than the illustration and description above.
Definition 2.1 (Definition of a Limit)
Let \(\epsilon > 0\) be arbitrary. Furthermore, let \((a_n)_{n\in\mathbb{N}}\) be a sequence of real numbers. We say that this sequence has a limit \(L\) if there exists a natural number \(N\) such that
\[|a_n - L| < \epsilon \]
for all \(n > N\).
Let’s break this down. The left side, \(|a_n - L|\), is the vertical distance between \(a_n\) and \(L\) in the illustration above. We use an absolute value because sometimes \(a_n\) falls below \(L\) (in which case \(a_n - L\) is negative), while sometimes it lies above \(L\) (in which case \(a_n - L\) is positive). However, we aren’t interested in the position of \(a_n\) relative to \(L\), we are only interested in the distance between \(a_n\) and \(L\), which can be achieved with the help of the absolute value.
The right side is trickier. When mathematicians use language like “let \(\epsilon > 0\) be arbitrary”, what they usually mean is “pick a positive number as small as you please”. We can even imagine making \(\epsilon\) something silly, like \(\epsilon = 10^{-10000000}\). We say that \((a_n)\) has limit \(L\) if there is a value of \(n\), call it \(N\), such that the distance between \(a_n\) and \(L\) is smaller than that positive number \(\epsilon\) whenever \(n > N\); this corresponds to the point turning red in the illustration above.
Now here’s the rub: I have allowed you to pick any positive number. Therefore, we are saying that the distance between \(a_n\) and \(L\) can be made smaller than any positive number, no matter how small, for large enough \(n\). Thus, there is a sense in which the distance between \(a_n\) and \(L\) is essentially zero for sufficiently large \(n\), so \(L\) is the limit of \(a_n\). When we have determined that some number \(L\) is a limit of the sequence \((a_n)_{n\in\mathbb{N}}\), we use the following notation:
Definition 2.2 (Useful Limit Notation)
Suppose the sequence \((a_n)_{n\in\mathbb{N}}\) has limit \(L\). We express this as
\[\lim_{n\rightarrow \infty} a_n = L\]
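Before moving on, the definition is easy to probe numerically. The sketch below (a check of our own, using the simple sequence \(a_n = \frac{n}{n+1}\), which has limit \(L = 1\)) hunts for an \(N\) that works for a given \(\epsilon\):

```python
# A quick numerical check of Definition 2.1, using the illustrative sequence
# a_n = n / (n + 1), which has limit L = 1.
def a(n):
    return n / (n + 1)

L = 1.0
epsilon = 1e-6

# Find an index N with |a_N - L| < epsilon; since |a_n - L| = 1/(n + 1)
# only shrinks as n grows, the bound also holds for every larger n.
N = 1
while abs(a(N) - L) >= epsilon:
    N += 1

print(f"For epsilon = {epsilon}, |a_n - L| < epsilon once n >= {N}")
for n in (N, 10 * N, 100 * N):
    print(n, abs(a(n) - L))
```

Picking a smaller \(\epsilon\) simply forces a larger \(N\); the definition only requires that *some* \(N\) exists for each \(\epsilon\).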
While it may not seem so, the limit described by the method of exhaustion is a limit exactly like the one described in the illustration and definitions above. In the section on the method of exhaustion (Section 1.3.2), we found that the total length of the \(n\) segments inscribed in the circle, \(s_n\), is given by
\[s_n = 2n\sin\left(\frac{\pi}{n}\right)\]
We illustrate that this sequence converges to \(2\pi\) (the circumference of a circle with radius one) below:
The height of \(s_n\) corresponds to the total length of all segments as the number of sides of the inscribed polygon increases. Notice that the distance \(|s_n - 2\pi|\) (the green segment) can be as small as desired by increasing the value of \(n\). Therefore, because the segments of the circle more and more closely follow the circumference \(C\) of the circle, and because the total length of all segments \(s_n\) converges to \(2\pi\), we have:
\[C = \lim_{n\rightarrow\infty} s_n = 2\pi\].
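A quick numerical sketch (our own check, using the chord-length formula \(s_n = 2n\sin(\pi/n)\) for the unit circle) makes the convergence concrete:

```python
import math

# Total length of n equal chords inscribed in a unit circle,
# using the method-of-exhaustion formula assumed here: s_n = 2 n sin(pi / n).
def s(n):
    return 2 * n * math.sin(math.pi / n)

for n in (6, 12, 96, 1000, 1000000):
    print(n, s(n), abs(s(n) - 2 * math.pi))
```

The gap \(|s_n - 2\pi|\) shrinks toward zero as \(n\) grows, exactly as the definition of a limit requires.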
2.3 Limits and Functions
2.3.1 Pointwise Limit
In this text, we will be particularly interested in the notion of a limit in the context of functions. We will first be interested in understanding the limit of a function at a single point. Consider the diagram below.
As before, we are allowed to select any \(\epsilon > 0\) in advance (this corresponds to the distance between the horizontal lines labeled \(L - \epsilon\) and \(L + \epsilon\)). Once again, it can be something seemingly absurd, like \(\epsilon = \frac{1}{9999^{999999}}\). If, for every possible \(\epsilon > 0\), we can find some other positive number \(\delta > 0\) so that the green box is totally contained between \(L-\epsilon\) and \(L + \epsilon\) for all \(x\) between \(c-\delta\) and \(c+\delta\), then \(f(x)\) is said to have a limit \(L\) as \(x\) approaches \(c\).
Mathematicians are not satisfied until a concept has been expressed in notation. Once again, this notation might seem intimidating, but don’t worry: the notation is no more complicated than the illustration above.
Definition 2.3 (Limit of Function at a Point)
We say that the limit of \(f(x)\) as \(x\) approaches \(c\) is \(L\), or
\[\lim_{x\rightarrow c} f(x) = L\]
if, for all \(\epsilon > 0\), we can find \(\delta > 0\) such that
\[0 < |x - c| < \delta\]
implies
\[|f(x) - L| < \epsilon\]
Once again, we can break this definition down piece by piece.
Consider \(0 < |x - c| < \delta\). Remember that \(|x - c|\) corresponds to a distance. The condition \(0 < |x - c|\) says the distance of \(x\) from \(c\) is greater than zero, so we are considering the \(x\) in a neighborhood of \(c\), but not \(c\) itself. If, for any possible choice of \(\epsilon > 0\), we can find a \(\delta > 0\) (the vertical green lines in the illustration above, which set the width of the neighborhood about \(c\)) such that the horizontal green lines (the minimum and maximum values of \(f(x)\) over that neighborhood) are forced to fall within the horizontal lines \(L-\epsilon\) and \(L + \epsilon\), then we say that \(f(x)\) has limit \(L\) as \(x\) approaches \(c\). This is equivalent to saying that we must find a \(\delta > 0\) such that \(0 < |x - c| < \delta\) implies \(|f(x) - L| < \epsilon\).
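To see how a \(\delta\) can actually be produced from a given \(\epsilon\), consider a simple illustrative function of our own choosing, \(f(x) = 3x + 1\) near \(c = 2\), where the limit should be \(L = 7\). Choosing \(\delta = \frac{\epsilon}{3}\) works for every \(\epsilon > 0\):
\[0 < |x - 2| < \delta = \frac{\epsilon}{3} \quad\Longrightarrow\quad |(3x + 1) - 7| = 3\,|x - 2| < 3\delta = \epsilon.\]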
As before, we will use the notation
\[\lim_{x\rightarrow c} f(x) = L\]
to denote this limit in this book.
2.3.2 Holes
There are other important nuances to the concept of a limit. Consider the diagram below.
In this diagram we have a straight line, \(f(x) = x + 1\), except that we force \(f(1) = 4\) instead of \(2\). We say we have placed a hole (sometimes fancifully called a removable discontinuity) at \(x=1\). We are interested in the notion of a limit at and around \(x = 1\).
Remember that, when trying to determine \(\lim\limits_{x\to c} f(x)\), we consider all \(x\) in the neighborhood \(0 < |x - c| < \delta\). Because of the first inequality, \(0 < |x - c|\), we do not care about the behavior of \(f(x)\) at \(x = c\) itself, but we do care about its values at every other \(x\) in the interval \((c - \delta, c + \delta)\). Therefore, when we drag the center of our box onto the hole at \(x=1\), we do not care whatsoever that the function assumes a value (\(4\)) far from the y-coordinate of the hole (\(2\)); again, we are only interested in the values of \(x\) around \(1\), not \(1\) itself. Furthermore, upon placing the center of our box on the hole at \(x=1\), we can make the green box as small as we please; upon doing so, the box zooms in on the point \((1, 2)\). In other words,
\[\lim_{x\to 1} f(x) = 2\]
This example captures the essence of the quote of this chapter. When considering limits, we do not care what the value of the function at the point is. Rather, we are interested in the behavior of the function as a certain value of \(x\) is approached. In this case, the function \(f(x)\) approaches \(2\) as \(\delta\) is made smaller, thus as \(x\rightarrow 1\), \(f(x)\rightarrow 2\).
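A quick numerical sketch of this example (our own check, using the function \(f(x) = x + 1\) with the forced value \(f(1) = 4\)) makes the same point: the outputs near \(x = 1\) crowd around \(2\), and the value at \(x = 1\) itself never enters the calculation.

```python
# f(x) = x + 1 everywhere except at the hole, where we force f(1) = 4.
def f(x):
    if x == 1:
        return 4.0
    return x + 1.0

# Approach x = 1 from both sides; the outputs crowd around 2.
for dx in (0.1, 0.01, 0.001, 1e-6):
    print(1 - dx, f(1 - dx), "|", 1 + dx, f(1 + dx))

print("f(1) itself:", f(1))  # 4.0, but irrelevant to the limit
```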
One more point to note. If we pick a point \(x = c\) near the hole and drag \(c-\delta\) or \(c + \delta\) over the hole, our box does include the point \((1, 4)\) (that is, the box becomes very tall). Once again, as far as the limit is concerned, this does not matter. For any given point, we must be able to compress the box down to infinitesimal size; in other words, we can just choose a smaller \(\delta\) to compress our box a little more and avoid the pathological point at \(x = 1\).
The following example describes how to compute the limit of a function at a hole.
This is an example of a limit with a hole. If we naïvely attempt to plug in \(x = 3\) into both the numerator and the denominator, we obtain \(\frac{0}{0}\), which is meaningless. However, note that we can factor both the numerator and the denominator. Upon doing so, terms will cancel:
Notice that the function is not defined at \(x = 3\). However, there still exists a limit at \(x = 3\). We could “complete” the function by filling in the hole. In particular, we would add the point \(\left(3, \frac{6}{13}\right)\) to our function to fill the hole.
2.3.3 Limit at Infinity and Asymptotes
Occasionally, we cannot remove a function’s discontinuities. Moreover, we are often interested in the behavior of functions, sequences, and sums as some input becomes arbitrarily large. Handling these cases will be the subject of this section. The following illustrations should give the reader an idea of what we will be thinking about:
There are now no holes in the function; instead, it is broken into two pieces. The horizontal line that separates the two pieces of the function is referred to as the horizontal asymptote. On the other hand, the vertical line that separates the function into two pieces is called the vertical asymptote. With this picture in mind, we can now explore the connection between limits of functions and the asymptotes which bound them.
Vertical Asymptotes
Consider the rational function \(f(x) = \frac{x^2 + 2x - 3}{x^2 - x - 20}\). When we considered holes, our plan of attack first involved factoring both the numerator and denominator. We find:
\[f(x) = \frac{(x + 3)(x - 1)}{(x - 5)(x + 4)}\]
In the case of holes, we found that a term in the numerator and denominator cancelled. In this case, we have no such cancellation. When this occurs, the function has vertical asymptotes. Perhaps unsurprisingly, the vertical asymptotes occur where the denominator is equal to zero: \(x = 5\) and \(x = -4\). This is illustrated below:
We want to express vertical asymptotes in the language of limits. In this case, we are interested in the behavior of the function as \(x\) approaches \(-4\) and as \(x\) approaches \(5\). Notice, however, that the behavior of the function is different depending on whether we approach \(x=5\) from the left or from the right. In particular, as \(x\) approaches \(5\) from the right, the function becomes larger and larger. Meanwhile, as \(x\) approaches \(5\) from the left, the function becomes more and more negative. We add to the previous diagram to illustrate this:
In particular, as the arrow in the top right hand corner moves toward \(x = 5\) from the right, the arrow goes upward forever. We use the following notation to express this:
\[\lim\limits_{x\to 5^{+}} f(x) = +\infty \]
In words: “As we approach \(x = 5\) from the right, the function \(f(x)\) approaches positive infinity.”
We also want to express function behavior as an asymptote is approached from the left. This is illustrated in the upper left hand corner of the illustration above. In that case, notice that as we approach \(x = -4\) from the left, the arrow goes forever upward. We can write this in our new notation as
\[\lim\limits_{x\to -4^{-}} f(x) = +\infty\]
In words: “As we approach \(x = -4\) from the left, the function \(f(x)\) approaches positive infinity.”
The function will not always go to positive infinity. The function can also become more and more negative as a vertical asymptote is approached. Indeed, this is the case in the bottom left-hand corner. As the arrow approaches the asymptote \(x = -4\) from the right, the arrow goes downward forever. We express this as
\[\lim\limits_{x\to -4^{+}} f(x) = -\infty\]
In words: “As we approach \(x = -4\) from the right, the function \(f(x)\) approaches negative infinity.”
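These one-sided blow-ups are easy to see numerically. The sketch below (our own check, using the same \(f(x)\) as above) evaluates the function just to either side of \(x = 5\):

```python
# f(x) = (x^2 + 2x - 3) / (x^2 - x - 20), with vertical asymptotes at x = 5 and x = -4.
def f(x):
    return (x**2 + 2*x - 3) / (x**2 - x - 20)

# Approach x = 5: huge positive values from the right, huge negative values from the left.
for dx in (0.1, 0.01, 0.001):
    print("x =", 5 + dx, "->", f(5 + dx), "   x =", 5 - dx, "->", f(5 - dx))
```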
Horizontal Asymptotes
Functions can also be squished along horizontal lines. Consider the illustration below.
We wish to pick a particular function to analyze. Let’s set \(a = 8\) in the illustration above. Making this substitution, the function is
\[f(x) = \frac{8x^2 + x - 10}{x^2 - x - 30}\]
We wish to determine where the horizontal asymptote will be based on the function itself. Notice that the horizontal asymptote is determined by the value of \(y\) that the function approaches as \(x\) becomes large. Therefore, we are interested in the limit
\[\lim\limits_{x\to+\infty} f(x) = \lim\limits_{x\to +\infty}\frac{8x^2 + x - 10}{x^2 - x - 30}\]
We can analyze the terms in the numerator and denominator and compare how each term grows. In particular, we will divide both the numerator and the denominator by \(x^2\):
\[\lim\limits_{x\to +\infty}\frac{8x^2 + x - 10}{x^2 - x - 30} = \lim\limits_{x\to +\infty}\frac{8 + \frac{1}{x} - \frac{10}{x^2}}{1 - \frac{1}{x} - \frac{30}{x^2}}\]
Notice that as \(x\) becomes larger and larger, the second and third terms in both the numerator and the denominator go to zero. Therefore, the limit becomes
\[\lim\limits_{x\to+\infty} f(x) = \frac{8 + 0 - 0}{1 - 0 - 0} = 8\]
Notice that we could change the value of \(8\) using the slider for \(a\) in the illustration above. The value of \(a\) will determine the horizontal asymptote using exactly the same argument that we just provided.
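As a rough numerical check of this argument (a sketch of our own), evaluating the function at ever larger \(x\) shows the outputs flattening toward \(8\):

```python
# f(x) = (8x^2 + x - 10) / (x^2 - x - 30) flattens toward its horizontal asymptote y = 8.
def f(x):
    return (8*x**2 + x - 10) / (x**2 - x - 30)

for x in (10, 100, 1000, 100000):
    print(x, f(x))
```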
It should be noted that a function can cross its horizontal asymptote(s). An example is provided in the function below. As \(x\) approaches positive and negative infinity, the function wraps itself more and more closely around the line \(y = 0\):
Slant Asymptotes
There is one more type of asymptote we may encounter, called the slant or oblique asymptote. In this case, the asymptote is neither horizontal nor vertical; instead, the function approaches a sloped line as \(x\) becomes large.
As usual, we are interested in a limit: in particular, we are interested in the behavior of a certain function as \(x\) becomes large. As before, we will select a specific value of \(a\) in the illustration above. In particular, we select \(a = 2\). In that case, we are interested in the following limit:
\[\lim\limits_{x\to+\infty} f(x) = \lim\limits_{x\to +\infty}\frac{2x^2 + 10x + 25}{x + 2}\]
In this case, we must use a technique from algebra called polynomial long division. We wish to divide the numerator, \(2x^2 + 10x + 25\) by the denominator, \(x + 2\). We provide the steps necessary to divide these polynomials in the aside below. The reader may skip these steps if they wish.
We begin by writing out the polynomials we will be dividing. We write out the division symbol exactly as we do as if we were dividing positive integers:
We want to cancel the \(2x^2\). To this end, we write \(2x\) on top:
Then, we multiply \(x + 2\) by \(2x\) and subtract from the first two terms. It looks like this:
Next, we bring down the \(25\), exactly as we bring down single digits when dividing integers:
Now we want to cancel the \(6x\) in \(6x + 25\). To do so, we add \(6\) to the top:
After doing so, we multiply \(6\) by \(x + 2\) and subtract to obtain:
\(13\) is our remainder because \(x + 2\) does not go into \(13\) evenly. Therefore, we write our answer as:
\[\frac{2x^2 + 10x + 25}{x + 2} = 2x + 6 + \frac{13}{x + 2}\]
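As a quick check of the division (a step not spelled out above), multiplying back recovers the original numerator:
\[(x + 2)(2x + 6) + 13 = 2x^2 + 6x + 4x + 12 + 13 = 2x^2 + 10x + 25.\]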
Following the steps in the drop-down above, we find that the limit we are seeking is given by
\[\lim\limits_{x\to+\infty} \frac{2x^2 + 10x + 25}{x + 2} = \lim\limits_{x\to+\infty}\left(2x + 6 + \frac{13}{x + 2}\right)\]
Notice that as \(x\) grows larger and larger, the term \(\frac{13}{x + 2}\) becomes smaller and smaller. Therefore, as \(x\rightarrow +\infty\), the function approaches the line \(y = 2x + 6\), as illustrated in the diagram.
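A small numerical sketch (our own check) confirms that the gap between the function and the line \(y = 2x + 6\) shrinks like \(\frac{13}{x+2}\):

```python
# The gap between f(x) = (2x^2 + 10x + 25)/(x + 2) and its slant asymptote y = 2x + 6.
def f(x):
    return (2*x**2 + 10*x + 25) / (x + 2)

for x in (10, 100, 1000, 100000):
    print(x, f(x) - (2*x + 6))  # equals 13 / (x + 2)
```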
2.3.4 Breaking Limits
Perhaps the most enlightening way to learn anything is to break it; indeed, once broken, it becomes obvious what the working thing was for. In this section, we explore what it might look like for a function not to have a limit.
Some functions have discontinuities, or “jumps”, at certain values of \(x\). Consider the illustration below.
Consider what happens when the center of our box is placed on the point \((4, 4)\). The first thing to notice is that the height of the box is at least equal to \(4\), since this is the difference between the bottom and top branches of the function.
Furthermore, as we let \(\delta\) shrink toward zero, the height of the box approaches four, until at \(\delta = 0\) the green dots line up with the blue dot (which is the center of the box). Remember that we are only interested in those values of \(x\) which are away from the value of \(c\) we are considering (in this case, \(4\)). Said another way, we are interested only in those values of \(x\) such that \(0 < |x - 4|\), not \(x = 4\) itself. Because of this restriction, we have a problem. We want \(|f(x) - L| < \epsilon\) for any \(\epsilon > 0\), no matter how small. But we cannot force our box to have a height less than \(4\) when \(c = 4\). Therefore, \(\lim\limits_{x\to 4} f(x)\) does NOT exist for this example.
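A bare-bones instance of the same phenomenon (not necessarily the exact function pictured) is the step function
\[g(x) = \begin{cases} 2, & x < 4 \\ 6, & x \geq 4. \end{cases}\]
Approaching \(x = 4\) from the left, \(g(x)\) stays at \(2\); approaching from the right, it stays at \(6\). Once \(\epsilon < 2\), no single number \(L\) can be within \(\epsilon\) of both \(2\) and \(6\), so \(\lim\limits_{x\to 4} g(x)\) does not exist.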
2.4 Limit Rules
We do not want to use the definitions of a limit when performing computations, as the reader will see shortly. Indeed, for essentially all applications, the formal definition is completely unnecessary. Instead, we want a body of rules that we can apply when we must compute a limit.
2.4.1 Basic Rules
Theorem 2.1 (Limit Rules) Let \(k\) be a constant, and suppose that the limits of \(f(x)\) and \(g(x)\) as \(x\) approaches \(c\) exist. In particular, suppose
\[\lim_{x\rightarrow c} f(x) = L \quad\text{and}\quad \lim_{x\rightarrow c} g(x) = M\]
Then the following rules hold:
\[\lim_{x\rightarrow c}\left[f(x) \pm g(x) \right] = \lim_{x\rightarrow c} f(x) \pm \lim_{x \rightarrow c} g(x) = L \pm M\]
In words: the limit of a sum (difference) of functions is equal to the sum (difference) of the limits, assuming the limits exist.
\[\lim_{x\rightarrow c} [f(x)g(x)] = \left[\lim_{x\rightarrow c} f(x) \right]\cdot\left[\lim_{x\rightarrow c} g(x) \right] = L\cdot M\]
In words: the limit of a product of functions is equal to the product of the limits, assuming the limits exist.
\[\lim_{x\rightarrow c}\left[\frac{f(x)}{g(x)}\right] = \frac{\lim\limits_{x\to c} f(x)}{\lim\limits_{x\to c} g(x)} = \frac{L}{M}\]
In words: the limit of a quotient of functions is equal to the quotient of the limits, assuming the limits exist and \(M \neq 0\).
\[\lim_{x\rightarrow c} \left[f(x) \right]^n = \left[\lim\limits_{x \to c} f(x)\right]^n = L^n\]
In words: the limit of a power of a function is equal to the power of the limit, assuming the limit exists.
\[\lim\limits_{x\to c} k = k\]
In words: the limit of a constant is that constant.
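As a quick illustration of how these rules combine (an example of our own), here is a short computation carried out purely with the rules above:
\[\lim_{x\rightarrow 2}\left[x^2 + 3x\right] = \left[\lim_{x\rightarrow 2} x\right]^2 + 3\left[\lim_{x\rightarrow 2} x\right] = 2^2 + 3\cdot 2 = 10.\]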
We will prove the limit sum rule, at which point the reader will appreciate why the limit definition is unusable for any practical purpose.
By assumption, \(\lim\limits_{x\to c} f(x) = L\) and \(\lim\limits_{x\to c} g(x) = M\). Let \(\epsilon > 0\) be arbitrary. This means we can find a \(\delta_{f(x)} > 0\) such that
\[|f(x) - L| < \epsilon \quad\text{whenever}\quad 0 < |x - c| < \delta_{f(x)}\]
and, likewise, a \(\delta_{g(x)} > 0\) such that
\[|g(x) - M| < \epsilon \quad\text{whenever}\quad 0 < |x - c| < \delta_{g(x)}\]
Both \(|f(x) - L|\) and \(|g(x) - M|\) will be less than \(\epsilon\) if we take the smaller of \(\delta_{f(x)}\) and \(\delta_{g(x)}\). In other words, take \(\delta = \min\left(\delta_{f(x)}, \delta_{g(x)} \right)\). In that case, we have
\[|f(x) - L| + |g(x) - M| < \epsilon + \epsilon = 2\epsilon\]
By the triangle inequality (1.5.8), we have
\[|(f(x) + g(x)) - (L + M)| = |(f(x) - L) + (g(x) - M)| \leq |f(x) - L| + |g(x) - M| < 2\epsilon\]
Recall that \(\epsilon > 0\) is an arbitrarily small positive number, so \(2\epsilon\) can likewise be made as small as we please. We have therefore found a \(\delta = \min(\delta_{f(x)}, \delta_{g(x)}) > 0\) such that \(|(f(x) + g(x)) - (L + M)| < 2\epsilon\) whenever \(0 < |x - c| < \delta\), and so
\[\lim_{x\rightarrow c}\left[f(x) + g(x)\right] = L + M\]
A proof like the one above is known as a \(\delta, \epsilon\) proof, which is the basis of proving Calculus from first principles. Proofs like these are the foundation of real analysis, which is a first step into rigorous, research-level mathematics. Thankfully for the reader (and the author), we will not prove much using \(\delta, \epsilon\) proofs in this textbook. Instead, we will rely primarily on pictures, intuitions, and rules like those above, taking their rigorous proofs for granted.
2.4.2 The Sandwich Theorem
Another incredibly useful theorem which will be used repeatedly in this text is the Sandwich Theorem. It is also often referred to as the Squeeze Theorem. In some languages, including German, Italian, and Russian, this theorem is fondly referred to as the Two officers and a drunk theorem. Consider the illustration below:
The concept of this theorem is very easily understood. Let’s suppose I have three functions, \(g(x)\), \(f(x)\), and \(h(x)\). Suppose further that \(f(x) \leq h(x) \leq g(x)\).
Then, if \(\lim\limits_{x\to c} f(x) = L\) and \(\lim\limits_{x\to c} g(x) = L\), then it must be that \(\lim\limits_{x\to c} h(x) = L\), also. This is fairly intuitive based on the name of the theorem and the illustration above. Since \(h(x)\) is between \(f(x)\) and \(g(x)\), if \(f(x)\) and \(g(x)\) have the same limit, \(h(x)\) must have that limit, too.
Thus, this result is intuitively obvious, and arguably does not need a proof. However, to be complete, we include a proof of the Sandwich Theorem below. It may be skipped without loss of continuity.
Because \(\lim\limits_{x\to c} f(x) = L\) and \(\lim\limits_{x\to c} g(x) = L\), for any \(\epsilon > 0\) we can find \(\delta_{f(x)}\) and \(\delta_{g(x)}\) such that
\[|f(x) - L| < \epsilon \quad\text{whenever}\quad 0 < |x - c| < \delta_{f(x)}\]
and
\[|g(x) - L| < \epsilon \quad\text{whenever}\quad 0 < |x - c| < \delta_{g(x)}\]
Once again, if we let \(\delta = \min(\delta_{f(x)}, \delta_{g(x)})\), then both \(|f(x) - L|\) and \(|g(x) - L|\) are less than \(\epsilon\) simultaneously. We again use the triangle inequality in the following:
\[|f(x) - g(x)| = |(f(x) - L) + (L - g(x))| \leq |f(x) - L| + |g(x) - L| < 2\epsilon\]
Furthermore, because \(f(x) \leq h(x) \leq g(x)\), the distance between \(f(x)\) and \(g(x)\), \(|f(x) - g(x)|\), is at least as large as the distance between \(f(x)\) and \(h(x)\), \(|f(x) - h(x)|\). Thus, it must be that \(|f(x) - h(x)| < 2\epsilon\), also.
Finally, we use the triangle inequality again to obtain
\[|h(x) - L| \leq |h(x) - f(x)| + |f(x) - L| < 2\epsilon + \epsilon = 3\epsilon\]
Recall that \(\epsilon > 0\) is an arbitrary positive number. Therefore, the difference \(|h(x) - L|\) can be made as small as desired with appropriate \(\delta\). Thus, \(\lim\limits_{x\to c} h(x) = L\), as was to be shown.
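As a quick application of the theorem (an example of our own, not pictured above), consider \(h(x) = x^2\sin\!\left(\frac{1}{x}\right)\) near \(x = 0\). Since \(-1 \leq \sin\!\left(\frac{1}{x}\right) \leq 1\) for all \(x \neq 0\), we have
\[-x^2 \;\leq\; x^2\sin\!\left(\frac{1}{x}\right) \;\leq\; x^2,\]
and both \(-x^2\) and \(x^2\) have limit \(0\) as \(x \to 0\). The Sandwich Theorem therefore forces \(\lim\limits_{x\to 0} x^2\sin\!\left(\frac{1}{x}\right) = 0\).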
2.5 Limit Examples and Problems
We first consider the limit of a function which is continuous on its entire domain.
Evaluate the limit \(\lim\limits_{x\to 3} 2x^2\)
In this case, the function is continuous over the entire real line. Therefore, evaluating this limit amounts to nothing more than plugging \(x = 3\) into the function:
\[\lim\limits_{x\to 3} 2x^2 = 2(3)^2 = 18\]
Next, we consider a function which has a hole. Function holes are sometimes also referred to as removable discontinuities.
Evaluate the limit \(\lim\limits_{x\to 2} \frac{x^2 - 4}{x^2 + x - 6}.\)
Notice that, in this case, if we attempt to plug \(x = 2\) into the denominator, we get \(2^2 + 2 - 6 = 6 - 6 = 0\), so the fraction is undefined there. The reader should note, however, that we can factor both the numerator and the denominator:
\[\frac{x^2 - 4}{x^2 + x - 6} = \frac{(x - 2)(x + 2)}{(x - 2)(x + 3)} = \frac{x + 2}{x + 3} \quad (x \neq 2)\]
In other words, the function is not defined at \(x=2\). However, \(\frac{x^2 - 4}{x^2 + x - 6}\) approaches \(\frac{4}{5}\) from both sides as \(x\rightarrow 2\).
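A brief numerical sketch of this example (our own check) shows the values closing in on \(\frac{4}{5} = 0.8\) from both sides:

```python
# (x^2 - 4) / (x^2 + x - 6) near the hole at x = 2.
def f(x):
    return (x**2 - 4) / (x**2 + x - 6)

for dx in (0.1, 0.01, 0.0001):
    print(2 - dx, f(2 - dx), "|", 2 + dx, f(2 + dx))
```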
2.6 Applications of Sandwich Theorem
2.6.1 Limit of sin(h)/h
Notice that as the slider for \(h\) is brought to 0, the lengths of the blue arc (which is equal to \(h\) if the units are in radians) and the orange segment (corresponding to the quantity \(\sin(h)\)) are roughly the same. Therefore, we might expect that the ratio \(\frac{\sin(h)}{h}\) approaches 1 as \(h\) approaches 0. Indeed, the reader should note that as the slider for \(h\) is brought to zero, the fraction approaches 1. Note that the function \(f(x) = \frac{\sin(x)}{x}\) is sometimes referred to as the \(\text{sinc(x)}\) function (it appears very often in physics, math, and engineering, so it was given its own name), hence the name of the proof.
Proof (Sinc Limit). We will be using the Sandwich Theorem to demonstrate that \(\lim_{h\rightarrow 0}\frac{\sin(h)}{h} = 1\). Therefore, our objective is to bound \(\frac{\sin(h)}{h}\) from above and below with functions that themselves approach 1 as \(h\) approaches 0. Notice that in the figure above, because the arc of length \(h\) is curved while the orange segment is straight, it appears that \(\sin(h) \leq h\). Supposing this is true, then we have:
\[\frac{\sin(h)}{h} \leq \frac{h}{h} = 1\]
for all \(h > 0\). Therefore, we will need to show that \(\sin(h) \leq h\). Now we need a lower bound for the fraction \(\frac{\sin(h)}{h}\).
In the figure above, notice that it appears as if \(\tan(h) \geq h\). Supposing this is true, then \(\cos(h) = \frac{\sin(h)}{\tan(h)} \leq \frac{\sin(h)}{h}\). Therefore, we also need to prove that \(\tan(h) \geq h\). The figure below illustrates how we will prove these inequalities:
Notice that the following are implied by the picture above:
\[\text{Area of Red Triangle} < \text{Area of Blue Sector} < \text{Area of Green Triangle}\]
For the red triangle, its height is \(\sin(h)\) and its base is 1. Therefore, \(A_{\text{red triangle}} = \frac{1}{2} \cdot 1 \cdot \sin(h) = \frac{\sin(h)}{2}\).
Recall that the area of a circular sector is given by \(A = \frac{1}{2}r^2\theta\) (See (1.4)). Therefore, the area of the blue sector is given by \(A_{\text{blue sector}} = \frac{1}{2}\cdot 1^2 \cdot h = \frac{h}{2}\).
Finally, we must compute the area of the green triangle. From the figure above, we have a right triangle with height of length 1 (the radius of the circle) and base \(\tan(h)\). Therefore, \(A_{\text{green triangle}} = \frac{1}{2}\cdot 1 \cdot \tan(h) = \frac{\tan(h)}{2}\).
Therefore, we find that
\[\text{Red Triangle} < \text{Blue Sector} < \text{Green Triangle} \quad\Longrightarrow\quad \frac{\sin(h)}{2} < \frac{h}{2} < \frac{\tan(h)}{2} \quad\Longrightarrow\quad \sin(h) < h < \tan(h)\]
Putting everything together, we have the following inequalities:
\[\frac{\sin(h)}{h} \leq \frac{h}{h} = 1 \hspace{2mm}\text{(since } \sin(h) < h\text{)}\]
and
\[\cos(h) = \frac{\sin(h)}{\tan(h)} \leq \frac{\sin(h)}{h} \hspace{2mm}\text{(since } h < \tan(h)\text{)} \quad\Longrightarrow\quad \cos(h) \leq \frac{\sin(h)}{h} \leq 1\]
Taking limits of all three of these quantities as \(h \rightarrow 0\) and noting that \(\lim_{h\rightarrow 0} \cos(h) = 1\), we have:
\[1 = \lim_{h\rightarrow 0}\cos(h) \leq \lim_{h\rightarrow 0}\frac{\sin(h)}{h} \leq 1 \quad\Longrightarrow\quad \lim_{h\rightarrow 0}\frac{\sin(h)}{h} = 1\]
by the Sandwich Theorem.
We have provided an illustration of \(\lim_{h\rightarrow 0}\frac{1 - \cos(h)}{h}\) above. Note that the segment whose length is \(1 - \cos(h)\) appears significantly shorter than the arc whose length is \(h\). Furthermore, the value of \(\frac{1 - \cos(h)}{h}\) appears to approach 0 as \(h\) approaches 0. Therefore, we might expect that \(\lim_{h\rightarrow 0}\frac{1 - \cos(h)}{h} = 0\).
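Both limits in this subsection are easy to probe numerically. A small sketch like the following (our own check, with values of \(h\) chosen arbitrarily) shows \(\frac{\sin(h)}{h}\) creeping toward \(1\) and \(\frac{1 - \cos(h)}{h}\) toward \(0\):

```python
import math

# sin(h)/h -> 1 and (1 - cos(h))/h -> 0 as h -> 0.
for h in (0.5, 0.1, 0.01, 0.001, 1e-6):
    print(h, math.sin(h) / h, (1 - math.cos(h)) / h)
```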