Concrete Mathematics: Chapter 2: Part 3

Topic: Maths, Date: 07/10/2016

Manipulation of Sums

As the book covers the distributive and associative laws very well, this section will only focus on the commutative law. The book states that a sum over a finite index set $K$ can be rearranged by any permutation $p(k)$ of that set. The law looks as follows

$$ \sum \limits_{k \in K} a_k = \sum \limits_{p(k) \in K} a_{p(k)}. $$

Put simply, this means that the sum stays the same as long as every element of the index set gets summed exactly once, just in a different order. A simple example of this is $p(k) = -k$, which runs through the set in reverse. The book gives an example with the set $K = \{ -1, 0, 1 \}$, where the permutation $p(k) = -k$ gives

$$ a_{-1} + a_0 + a_1 = a_1 + a_0 + a_{-1} $$
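The permuted sum can be checked numerically. This is a sketch of my own (not from the book), with arbitrary values chosen for the terms $a_k$:

```python
# Summing the terms a_k over K = {-1, 0, 1} in the permuted order
# p(k) = -k gives the same total, illustrating the commutative law.
a = {-1: 10, 0: 20, 1: 30}   # arbitrary example values for a_k
K = [-1, 0, 1]

forward = sum(a[k] for k in K)
permuted = sum(a[-k] for k in K)   # apply the permutation p(k) = -k

assert forward == permuted
print(forward, permuted)  # both 60
```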

While this example isn't too taxing, the following is a little harder to understand. The next example shows how Gauss's trick from Chapter 1 can be derived using the different manipulation laws. The first step is the most confusing, as it is the one that uses the commutative law. We start with

$$ S = \sum \limits_{0 \le k \le n}(a + bk). $$

The book says that we now replace $k$ with $n - k$, but it does little to explain why this works. We can show the permutation more clearly by introducing a new variable $k_1$ with $k_1 = n - k$; rearranging gives $k = n - k_1$, which we substitute back into the sum.

$$ \sum \limits_{0 \le k \le n}(a + bk) = \sum \limits_{0 \le n - k_1 \le n}(a + b(n - k_1)) $$

Now it pays to look at the bounds of the sum: $0 \le n - k_1 \le n$ describes the same range as $0 \le k_1 \le n$. (Subtracting $n$ throughout gives $-n \le -k_1 \le 0$, and multiplying by $-1$, which flips the inequalities, gives $0 \le k_1 \le n$.) So we can substitute this back into the summation.

$$ S = \sum \limits_{0 \le k_1 \le n}(a + b(n - k_1)) $$

Now as $k_1$ is just a simple variable, we can replace its letter with $k$. This gives us

$$ S = \sum \limits_{0 \le k \le n}(a + b(n - k)) $$

Just to prove that these are the same, we can expand out the sum to show the individual components for each summation.

$$ \begin{aligned} \sum \limits_{0 \le k \le n}(a + bk) &= (a + b \cdot 0) + (a + b \cdot 1) + \ldots + (a + b \cdot n) \\ \sum \limits_{0 \le k \le n}(a + b(n - k)) &= (a + b \cdot (n - 0)) + (a + b \cdot (n - 1)) + \ldots + (a + b \cdot (n - n)) \end{aligned} $$

You can see that these are essentially the same, just with the latter summing down. The book now walks us through the other laws with the same example, but those are not too hard to understand, so I won't be covering them here.
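The two forms of $S$ above can also be compared directly. This is my own quick check, not code from the book:

```python
# Both forms of S from the text produce the same value for any a, b, n:
# summing (a + bk) forwards equals summing (a + b(n - k)).
def S_forward(a, b, n):
    return sum(a + b * k for k in range(n + 1))

def S_reversed(a, b, n):
    return sum(a + b * (n - k) for k in range(n + 1))

for a, b, n in [(3, 5, 10), (0, 1, 4), (-2, 7, 100)]:
    assert S_forward(a, b, n) == S_reversed(a, b, n)
```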

Links with Integration

The book suggests we can use the concept of integrals to solve sums. This is first introduced with an example in method 4 of section 2.5. Unfortunately, it doesn't quite work out, as it leaves a small error term. Fortunately, the book's next section shows the real connection between integration and summation.

It is worth noting that the book uses different notation than some people might be used to, e.g. $D(f(x)) = \frac{d}{dx} f(x).$ The definition of the derivative that the book builds on is more commonly written as

$$ \frac{d}{dx} f(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}. $$

Now the book introduces finite calculus. This is the analogue of infinite calculus (differentiation/integration) restricted to the integers. Since $\Delta x$ can no longer approach 0, the smallest step we can take is $\Delta x = 1$. Therefore the finite-calculus analogue of the derivative is

$$ \Delta f(x) = f(x + 1) - f(x). $$

This isn't super useful on its own; the payoff comes later, when we invert the operator to evaluate sums. Take $f(x) = x^2$ for example

$$ \begin{aligned} \Delta f(x) &= (x + 1)^2 - x^2 \\ &= (x + 1)(x + 1) - x^2 \\ &= x^2 + 2x + 1 - x^2 \\ &= 2x + 1. \end{aligned} $$
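As a quick sanity check (a sketch of my own, not from the book), the $\Delta$ operator can be coded directly and compared against $2x + 1$:

```python
# Finite difference operator: (delta(f))(x) = f(x + 1) - f(x).
def delta(f):
    return lambda x: f(x + 1) - f(x)

d_square = delta(lambda x: x * x)

# Delta of x^2 should equal 2x + 1 for every integer x.
for x in range(-5, 6):
    assert d_square(x) == 2 * x + 1
```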

Equation 2.48 tells us that this means

$$ \sum \limits_{k = 0}^{x - 1} (2k + 1) = x^2 $$
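The telescoping identity is easy to confirm numerically (my own check, not the book's):

```python
# Summing the differences 2k + 1 for k = 0 .. x-1 telescopes back to x^2.
for x in range(0, 20):
    assert sum(2 * k + 1 for k in range(x)) == x * x
```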

Finite 'integration', which removes a summation, is the inverse of this delta operator the book has defined. Unfortunately, ordinary powers don't correspond 1:1 under $\Delta$. Instead we need to use falling powers to remove the summation cleanly.

Falling Powers

Falling powers (also called falling factorials) are a handy way to keep the familiar rules of infinite calculus working in finite calculus. A falling power is defined by equation 2.43, but I prefer to have examples:

$$ \begin{aligned} x^{\underline{3}} &= x(x - 1)(x - 2) \\ x^{\underline{2}} &= x(x - 1) \\ x^{\underline{1}} &= x \\ x^{\underline{0}} &= 1 \\ x^{\underline{-1}} &= \frac{1}{x + 1} \\ x^{\underline{-2}} &= \frac{1}{(x + 1)(x + 2)} \\ x^{\underline{-3}} &= \frac{1}{(x + 1)(x + 2)(x + 3)}. \end{aligned} $$
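The examples above can be captured in a small function. This is a sketch of my own (the name `falling` is not from the book), covering positive, zero, and negative exponents:

```python
from fractions import Fraction

def falling(x, m):
    """Falling power of x with integer exponent m, per the examples above."""
    prod = Fraction(1)
    if m >= 0:
        # x (x - 1) ... (x - m + 1)
        for i in range(m):
            prod *= (x - i)
    else:
        # 1 / ((x + 1)(x + 2) ... (x + |m|))
        for i in range(1, -m + 1):
            prod /= (x + i)
    return prod

print(falling(5, 3))    # 5 * 4 * 3 = 60
print(falling(5, -2))   # 1 / (6 * 7) = 1/42
```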

Falling powers can be used with the rule

$$ \Delta (x^{\underline{m}}) = mx^{\underline{m - 1}} $$

So we can solve sums like

$$ \begin{aligned} \sum \limits_{0 \le k < n} k^{\underline{m}} &= \frac{k^{\underline{m + 1}}}{m + 1} \bigg|^{n}_{0} = \frac{n^{\underline{m + 1}}}{m + 1} \quad (m \ge 0) \\ \sum \limits_{0 \le k < n} k &= \frac{n^{\underline{2}}}{2} = \frac{n(n - 1)}{2} \end{aligned} $$
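The summation rule can be verified numerically for small cases. This is my own check (with a helper `falling` mirroring the falling-power examples above, restricted here to $m \ge 0$):

```python
from fractions import Fraction

def falling(x, m):
    """Falling power x (x - 1) ... (x - m + 1) for integer m >= 0."""
    prod = Fraction(1)
    for i in range(m):
        prod *= (x - i)
    return prod

# Check: sum of k^(falling m) for 0 <= k < n equals n^(falling m+1) / (m+1).
for m in range(0, 4):
    for n in range(0, 10):
        lhs = sum(falling(k, m) for k in range(n))
        rhs = falling(n, m + 1) / (m + 1)
        assert lhs == rhs
```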

Great! That's the meat of this chapter, anything else isn't too hard to pick up after understanding this.

Understanding Limits

Here I'm just going to briefly mention the limits shown in the infinite-sums section of the chapter. The first issue I had was with a limit between two bounds (I forgot to subtract the term evaluated at 0). This came up with the summation

$$ \sum \limits_{k \ge 0} \frac{1}{(k + 1)(k + 2)}. $$

Next the summation is solved by the following steps

$$ \begin{aligned} \sum \limits_{k \ge 0} \frac{1}{(k + 1)(k + 2)} &= \sum \limits_{k \ge 0} k^{\underline{-2}} \\ &= \lim_{n \to \infty} \sum \limits^{n - 1}_{k = 0}k^{\underline{-2}} \\ &= \lim_{n \to \infty} \frac{k^{\underline{-1}}}{-1} \bigg|^n_0 \end{aligned} $$

Now we need to evaluate at $n$ and $0$ and take the limit (note that only the term evaluated at $n$ actually depends on $n$)

$$ \begin{aligned} \lim_{n \to \infty} \frac{n^{\underline{-1}}}{-1} - \frac{0^{\underline{-1}}}{-1} &= \lim_{n \to \infty} \left( -\frac{1}{n + 1} \right) - \left( -\frac{1}{0 + 1} \right) \\ &= 0 - (-1) \\ &= 1 \end{aligned} $$

Which is the same answer the book gives. So I'm happy.
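The partial sums can also be watched converging to 1. This is a numerical sketch of my own, using the closed form $1 - \frac{1}{n + 1}$ that the telescoping above produces:

```python
# Partial sums of 1/((k+1)(k+2)) approach 1, matching the limit above.
def partial_sum(n):
    return sum(1 / ((k + 1) * (k + 2)) for k in range(n))

# Closed form from the telescoping: 1 - 1/(n + 1).
for n in [1, 10, 100]:
    assert abs(partial_sum(n) - (1 - 1 / (n + 1))) < 1e-12

print(partial_sum(1000))  # close to 1
```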

Links:

Difference Quotient (Clip 5) | Commutative law