### Prerequisites – Big O Notation

The previous blog post was generally rather math-oriented and proof-focused. However, this blog post is going to focus more on coding. I will try to keep the material as accessible as possible, but one really important concept moving forward is Big O notation from Computer Science, which we use to denote how efficient a program is.

I highly recommend you watch the video if you don’t know what this is, to at least get a rough idea of what we’re talking about. It’s really important because moving forward we are going to be caring a lot more about efficiency, since Competitive Programming is not just about finding the answer, but finding it quickly.

For now, all you need to know is that if we plug $N$ into the program’s runtime and we’re somewhere around $10^8$ at most, then your program will probably pass the time limit. Computers run at roughly $10^8$ to $10^9$ primitive operations per second, and you usually only have 1 or 2 seconds for programming problems. Since we are performing higher-level operations and there are unseen constant factors, we play it safe and set $10^8$ as our safe zone. For example, an $O(N^2)$ algorithm will pass the time limit if $N \le 10^3$, but will definitely fail if $N \ge 10^5$.

With that said, let’s move on to this post’s topic!

Problems Discussed:

- Tetrahedron (Codeforces Round #113)
- Number of zero-xor subsets (HackerRank, 10th Ad Infinitum)
- Contemplation! Algebra (UVa Online Judge)

### Exponentiation

Previously, we needed to find $B^E \bmod M$ for a huge exponent $E$. Thankfully, due to modular arithmetic, we know that

(A + B) % M = ((A % M) + (B % M)) % M

(A * B) % M = ((A % M) * (B % M)) % M

where % is the modulo operator.

In other words, if I add or multiply two integers then take their mod, I will get the same result as if I took their mod **first**, *then* added or multiplied them, and then took their mod. For example, observe that $(7 + 8) \bmod 5 = 15 \bmod 5 = 0$, but if we took the mod at each step, we’d have $(7 \bmod 5 + 8 \bmod 5) \bmod 5 = (2 + 3) \bmod 5 = 0$, which is the same answer. Similarly, doing $7 \cdot 8 = 56$ gives $56 \bmod 5 = 1$, while $(7 \bmod 5 \cdot 8 \bmod 5) \bmod 5 = 6 \bmod 5 = 1$ as well.

This is nice because it means we never have to deal with numbers greater than $(M-1)^2$ when working modulo some $M$. We just need to apply modulo at every intermediate step, and so every step along the way, we curb the numbers before they can get unreasonably big. If we have $M = 10^9 + 7$, then our program never has to deal with integers larger than around $10^{18}$, which is still within the bounds of a 64-bit data type, which is good.
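
As a quick sanity check, here’s a tiny Python sketch of both identities, with made-up values for $A$, $B$, and $M$:

```python
# Hypothetical values, just for illustration.
A, B, M = 123456789, 987654321, 10**9 + 7

# Taking mods first, then adding/multiplying, then taking a final mod
# gives the same result as operating on the raw numbers and modding once.
assert (A + B) % M == ((A % M) + (B % M)) % M
assert (A * B) % M == ((A % M) * (B % M)) % M
```

Notice that the intermediate product `(A % M) * (B % M)` never exceeds $(M-1)^2$, which is exactly why 64-bit arithmetic suffices in C++ for $M = 10^9 + 7$.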

Therefore we have the following pseudo-code:

```python
# computes base to the power of exp, modulo MOD
def slow_mod_pow(base, exp, MOD):
    result = 1
    base = base % MOD
    for i in range(exp):  # do this exp times
        result = (result * base) % MOD
    return result
```

This seems like a reasonable function – compute $B^E$ with a simple for loop that runs $E$ times. Due to this for loop, the function runs in $O(E)$.

However, in the problem from the previous post, we needed to answer $T$ different test cases, where $T$ could be quite large, with the exponent also being astronomically large per test case. If we just called this $O(E)$ function for each test case, the program would definitely not pass the time limit with those constraints. Is there a faster way?

### Fast Exponentiation – Recursive

Again, we need to find $B^E \bmod M$. Suppose $E$ is even. With the Laws of Exponents, we can rewrite $B^E$ into $(B^2)^{E/2}$. Our base was squared, but honestly we don’t really care too much about the size of the base. What we *do* care about is the size of the exponent. Solving $(B^2)^{E/2} \bmod M$ is equivalent to the original problem, except the exponent we need to deal with is HALF the size. That’s great!

For instance, let’s try to find $5^{32} \bmod 11$. We have that

$$5^{32} \equiv (5^2)^{16} \equiv 3^{16} \equiv (3^2)^{8} \equiv 9^{8} \equiv (9^2)^{4} \equiv 4^{4} \equiv (4^2)^{2} \equiv 5^{2} \equiv 3 \pmod{11}$$

Therefore, the remainder when $5^{32}$ is divided by 11 should be 3. Indeed if you fully expand $5^{32}$ into a 23-digit number, then perform long division with 11 on that value, you would also get a remainder of 3. However, with our method, we never had to deal with any values greater than 121, and it only took us 5 steps, not 32, to compute!

Of course, that’s if we have a perfect power of 2. What if you have an odd exponent? Well, $B^E = B \cdot B^{E-1}$, so we just need to solve for $B^{E-1}$ and multiply it by $B$. We can just call the “if it’s even” part of our function again since $E - 1$ here would now be even. Lastly, we have our base case, which is that $B^0 = 1$ (if our base is 0, then we just immediately return 0 instead).

This gives us our great new algorithm,

```python
def mod_pow(base, exp, MOD):
    if exp == 0:
        return 1
    if exp % 2 == 1:
        return (base * mod_pow(base, exp - 1, MOD)) % MOD
    else:
        # note the integer division; a plain / would give a float in Python 3
        return mod_pow((base * base) % MOD, exp // 2, MOD)
```

And it’s only 7 lines! Amazing! That’s efficiency not just in runtime, but in coding time as well.

Now, to solve for its runtime, we ask the question: How many times can you divide $E$ by 2 until you hit 1? Suppose you can do so $k$ times. Well, if $2^k = E$, then $k = \log_2 E$, and so the runtime is $O(\log E)$.

What about the odd numbers? Well, you can only perform the $-1$ operation about as many times as you perform the $\div 2$ operation, so it should still always be around the order of $\log_2 E$.

Isn’t that cool? To answer $E \approx 10^9$, you would only need around 30 operations, and to answer $E \approx 10^{18}$, you would only need around 60 operations! That’s a MASSIVE upgrade from our previous $O(E)$ idea.
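
To convince yourself, here’s a quick cross-check of the recursive mod_pow against Python’s built-in pow (the test values are my own):

```python
def mod_pow(base, exp, MOD):
    # Recursive fast exponentiation: O(log exp) multiplications.
    if exp == 0:
        return 1
    if exp % 2 == 1:
        return (base * mod_pow(base, exp - 1, MOD)) % MOD
    return mod_pow((base * base) % MOD, exp // 2, MOD)

# The worked example: 5^32 mod 11 is 3.
assert mod_pow(5, 32, 11) == 3

# A huge exponent finishes instantly; cross-check with the built-in pow.
MOD = 10**9 + 7
assert mod_pow(3, 10**18, MOD) == pow(3, 10**18, MOD)
```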

### Fast Exponentiation – Iterative

This method also runs in $O(\log E)$, but I find the former recursive method to be a lot easier to memorize and code, so this one is not strictly necessary to know. However, it’s also rather neat and I wanted to include it anyway because why not.

One way we could answer the query without a recursive function is by first pregenerating $B^E \bmod M$ for the exponents $E$ which are powers of 2. In an array, we would store $B^1$, $B^2$, $B^4$, $B^8$, and so on (all mod $M$), until we hit the upper bound of our problem. Given $B^{2^k}$, we can easily generate the next term in the sequence by squaring the previous one: $B^{2^{k+1}} = (B^{2^k})^2$. Since making each element takes $O(1)$ time and there are going to be $O(\log E)$ elements in the array, it should be clear then that this entire array can be pregenerated in $O(\log E)$ time.

How do we use this to do fast exponentiation? Well, if we want to get $B^E$, what we could do is look at the **binary representation** of $E$. Then, we can decompose $B^E$ into a product of precomputed values from our table of powers of 2.

This really is best explained through an example, I feel. Say for instance you wanted to get $B^{150}$. Well, we would decompose 150 as a sum of powers of 2, to get,

$$B^{150} = B^{128 + 16 + 4 + 2} = B^{128} \cdot B^{16} \cdot B^{4} \cdot B^{2}$$

Each of the factors in the final product had been precomputed earlier and can be accessed from the array in $O(1)$, and a number $E$ only has about $\log_2 E$ bits in its binary representation, so the method still runs in $O(\log E)$ time.
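
Here’s a sketch of this table-based method in Python (the function and variable names are my own):

```python
def mod_pow_table(base, exp, MOD):
    """Fast exponentiation via a pregenerated table of base^(2^k) mod MOD,
    multiplying together the entries matching the 1-bits of exp."""
    # Pregenerate base^1, base^2, base^4, ... mod MOD.
    pow2_table = [base % MOD]
    while (1 << len(pow2_table)) <= exp:
        pow2_table.append((pow2_table[-1] * pow2_table[-1]) % MOD)

    ans = 1
    for k, p in enumerate(pow2_table):
        if (exp >> k) & 1:  # is the 2^k bit set in exp's binary representation?
            ans = (ans * p) % MOD
    return ans

# 150 = 128 + 16 + 4 + 2, so base^150 = base^128 * base^16 * base^4 * base^2.
assert mod_pow_table(7, 150, 10**9 + 7) == pow(7, 150, 10**9 + 7)
```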

This idea can also be written in another form, without the pregenerated array, which is the one presented in Wikipedia’s article on Modular Exponentiation,

```python
def mod_pow(base, exp, MOD):
    base %= MOD
    ans = 1
    while exp > 0:
        if exp % 2 == 1:
            ans = (ans * base) % MOD
        base = (base * base) % MOD
        exp //= 2  # or exp >>= 1
    return ans
```

I’ll leave it to you to parse how this function is the same as the other one I outlined in this iterative section 🙂

Note that in Python, the built-in pow function can take in three arguments, in which case the third one is the modulus $M$, and pow(A,B,M) would return $A^B \bmod M$. This pow function is very fast and also runs in $O(\log B)$ time. In other languages though, like C++, you’ll have to implement it on your own. This shouldn’t be too much of a problem, anyway, since it’s only around 7 lines in its recursive form.

Also note that the modulo part technically isn’t a necessary part of fast exponentiation; it’s just that oftentimes exponential growth scales so quickly that without the modulo, you wouldn’t even be able to store something like $2^{10^{18}}$ in your computer in the first place!

### Programming Problem 1 – Tetrahedron

Let’s look at a rather straightforward application of mod_pow with my favorite branch of mathematics—combinatorics! This problem comes from Codeforces Round #113, which was rated for Div 2 contestants. Suppose an ant is on vertex D of a tetrahedron whose vertices are labeled A, B, C, and D. A single step takes the ant from its current vertex to one of the adjacent ones; it CANNOT stay on the same vertex. How many ways can the ant take *exactly* $N$ steps so that on the $N$th step, it returns to D? Output the answer modulo $10^9 + 7$.

Combinatorics problems are probably where you can expect to see a lot of fast exponentiation, thanks to the good old sum and product rules. Notice that at every step, the ant has exactly 3 choices for which vertex to go to next. It makes $N - 1$ such choices, then its final $N$th choice is it picking vertex D. Therefore the answer should be $3^{N-1}$. Except… there’s a complication. The $(N-1)$th choice can’t have been a D; if it were, then we wouldn’t be able to make our final choice be D. So, we have to subtract all the ways for the $(N-1)$th choice to be D.

How many ways can we land on D on the $(N-1)$th step? Well, with similar logic to earlier, there are $N - 2$ choices of 3 vertices, then the final $(N-1)$th choice will be D. So, it’s $3^{N-2}$… except also just like earlier, we also have to exclude all the ways for the $(N-2)$th choice to be D. This would be $3^{N-3}$, excluding all the ways for the $(N-3)$th choice to be D; this is $3^{N-4}$, excluding all the ways for the $(N-4)$th choice to be D, and so on and so forth. It’s kind of like inclusion-exclusion.

Finally, we exclude all the ways to return to D in 2 steps, which is 3, excluding all the ways to return to D in 1 step, which is 0. We ultimately get this summation with alternating signs,

$$3^{N-1} - 3^{N-2} + 3^{N-3} - \cdots + (-1)^N \cdot 3$$

If we called mod_pow at every step of that summation, our runtime would be $O(N \log N)$, which is teetering towards the land of “unsafe” since $N$ could be as large as $10^7$. Almost definitely the “Time Limit Exceeded” side of unsafe. Thankfully, we should know an easy way to shave off that entire factor of $N$. We can use the formula for a geometric series!

Here, the first element is $3^{N-1}$, the common ratio is $-\frac{1}{3}$, and there are $N - 1$ terms in total. Which means we get for our formula,

$$3^{N-1} \cdot \frac{1 - \left(-\frac{1}{3}\right)^{N-1}}{1 + \frac{1}{3}} = \frac{3^N + 3 \cdot (-1)^N}{4}$$

Awesome! That’s a formula we can evaluate with a single mod_pow call. Implement that and you should be able to get an Accepted for this problem 🙂

#### But how do I divide by 4?

The next concern is how to divide by 4, and it should be clear why you can’t just naively divide when doing modulo at every step. We have that $12 / 4 = 3$, except if we applied mod (say, mod 10) at every step, we’d get $(12 \bmod 10) / 4 = 2/4$ which… isn’t even an integer. Okay then.

We don’t really want to *divide* per se, though. What we want is to find the multiplicative inverse of 4. The multiplicative inverse of $x$, denoted by $x^{-1}$, is the element such that $x \cdot x^{-1} = 1$, where 1 is the identity element. That’s what division really is, after all – multiplying by the multiplicative inverse.

Well, great, but we don’t have an idea of rational numbers in the modulo world; in a modulo ring, we only have integers. Well, that’s fine, it never had to be a fraction. To divide by $x$, we just need to find some integer $x^{-1}$ such that $x \cdot x^{-1} \equiv 1 \pmod{M}$. Note that this value will only exist if $\gcd(x, M) = 1$ (why?). Then, to do $y / x$, we do $y \cdot x^{-1} \bmod M$.

In tackling the next problem, we’ll talk about how to find the multiplicative inverse for an arbitrary input. Here, though, we only need to worry about 4, so we can hard-code its value. Since our modulus, $10^9 + 7$, is 1 less than a multiple of 4, we can easily see that $4^{-1} \equiv \frac{10^9 + 8}{4} = 250000002 \pmod{10^9 + 7}$. Multiply the right-hand side by 4 and you’ll see that we get $10^9 + 8 \equiv 1$.
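
Putting the closed-form formula and the hard-coded inverse together, here’s a sketch of the full solution in Python, cross-checked against a simple step-by-step walk count (the brute-force recurrence is my own addition):

```python
MOD = 10**9 + 7
INV4 = 250000002  # the multiplicative inverse of 4 mod 10^9 + 7

def tetrahedron(n):
    # (3^n + 3 * (-1)^n) / 4, evaluated mod 10^9 + 7.
    sign = 3 if n % 2 == 0 else MOD - 3  # +3 for even n, -3 for odd n
    return (pow(3, n, MOD) + sign) * INV4 % MOD

def brute(n):
    # d = walks of the current length from D ending at D;
    # other = walks ending at one specific non-D vertex (symmetry!).
    d, other = 1, 0
    for _ in range(n):
        d, other = 3 * other, d + 2 * other
    return d % MOD

for n in range(1, 15):
    assert tetrahedron(n) == brute(n)
```

The brute-force loop is $O(N)$ and only there to double-check the formula on small inputs; the real answer is the single pow call in tetrahedron.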

### Programming Problem 2 – Number of zero-xor subsets

This problem was taken from the 10th Ad Infinitum contest on Hackerrank.

Consider a set $S$ that contains all the integers from 0 to $2^N - 1$, where $N$ is a positive integer and can be astronomically large. How many subsets of $S$ satisfy the property that the XORsum of the subset is equal to 0? Define the XORsum of a set to be the result after XORing all the elements in the set. Note that the XORsum of the empty set is 0. Since the answer can be quite large, output the answer modulo $10^9 + 7$.

If you’re put off by the XOR operator and the ensuing combinatorics, don’t worry. I chose this problem because it shines a light on some other uses and techniques of fast exponentiation. If you just want to take the formula as a given, skip ahead to **Big Exponents** to see how we would code it.

The bitwise-XOR is a logic gate that accepts two bits as inputs. It returns 0 if the bits are the same, and 1 if the bits are different. We can extend this to deal with integers instead of just bits; express the integers in binary, then XOR the bits that are in the same “place value”. This operation is usually written as $\oplus$ in math, but in most programming languages is denoted by the ^ operator. For example, $6 \oplus 10 = 12$, since $0110_2 \oplus 1010_2 = 1100_2$.

Understanding the following properties about the XOR will probably help.

**Lemma 4.1** $A \oplus A = 0$.

*Proof.* Since $A$ is being XORed with itself, every pair of bits is always going to match. Therefore, the XOR of every pair of bits gives 0, so the answer is 0.

**Lemma 4.2** $A \oplus 0 = A$.

*Proof.* You can prove this directly by again observing the truth table: XORing a bit with 0 leaves it unchanged. Therefore, 0 is the XOR identity element. Also, combined with Lemma 4.1, it means the XOR is invertible, and each integer is in fact its own inverse.

**Lemma 4.3** XOR is associative.

**Lemma 4.4** XOR is commutative.

*Proof*. Look at the truth table of the bitwise-XOR and notice that it is associative and commutative. The XOR of two numbers is, again, just doing bitwise-XOR on each pair of their bits, and thus it’s also associative and commutative.
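
These lemmas are easy to spot-check in Python, where ^ is the bitwise-XOR operator:

```python
# Lemma 4.1: A ^ A = 0, and Lemma 4.2: A ^ 0 = A.
for a in range(16):
    assert a ^ a == 0
    assert a ^ 0 == a

# Lemmas 4.3 and 4.4: associativity and commutativity.
for a in range(8):
    for b in range(8):
        assert a ^ b == b ^ a
        for c in range(8):
            assert (a ^ b) ^ c == a ^ (b ^ c)
```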

Now, let $S_x$ be the set of all subsets of $S$ whose XORsum is $x$; we are looking for the size of $S_0$. If you try filtering the subsets of $S$ for some small value of $N$, say $N = 2$, by their XORsum, you’ll notice something peculiar. There are 4 elements in $S_0$, 4 elements in $S_1$, 4 elements in $S_2$, and 4 elements in $S_3$. Note that

$S_0$ = {{}, {00}, {01,10,11}, {00,01,10,11}}

$S_1$ = {{01}, {00,01}, {10,11}, {00,10,11}}

$S_2$ = {{10}, {00,10}, {01,11}, {00,01,11}}

$S_3$ = {{11}, {00,11}, {01,10}, {00,01,10}}

We now have a hypothesis that we can prove: for any $N$, the sizes of all $S_x$ for $0 \le x \le 2^N - 1$ will be equal!

**Theorem 4.1** For any $a$ and $b$ which are valid XORsums of a subset of $S$, $|S_a| = |S_b|$.

*Proof.* There exists a bijection between any two sets $S_a$ and $S_b$ for $a \ne b$. Let $c = a \oplus b$. For each element of $S_a$, apply the following transformation: if $c$ is not in the set, insert it; otherwise, delete it. This transforms each element of $S_a$ into an element of $S_b$. Why? "Toggling the presence of $c$" is actually the same as XORing $c$ with the XORsum. If $c$ is not in the set, then inserting it means the XORsum should be updated to include it; if $c$ is already in the set, then by Lemmas 4.1, 4.2, 4.3, and 4.4, removing it is the same as XORing $c$ to the current XORsum (since $c$ is its own inverse).

Therefore every set in $S_a$, which has an XORsum of $a$, will be transformed into a set whose XORsum is $a \oplus c = b$. Therefore, the unique elements of $S_a$ each correspond to a unique element of $S_b$. We can therefore conclude that $|S_a| \le |S_b|$.

However, we could just do the exact same thing to transform elements of $S_b$ into $S_a$. This gives us the conclusion that $|S_b| \le |S_a|$. In order for both to be true, we end up with the fact that $|S_a| = |S_b|$. But $a$ and $b$ are just some arbitrary integers from 0 to $2^N - 1$, therefore all $S_x$ should be equal in size! Since we have $2^{2^N}$ elements in the power set of $S$, divided amongst the $2^N$ different $S_x$, the result is that

$$|S_0| = \frac{2^{2^N}}{2^N} = 2^{2^N - N}$$
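
If you’d like, you can brute-force small cases in Python to confirm the count (the enumeration helper is my own):

```python
from itertools import combinations
from functools import reduce

def count_zero_xor_subsets(N):
    # Enumerate all subsets of {0, ..., 2^N - 1}, counting those with XORsum 0.
    S = list(range(2**N))
    count = 0
    for r in range(len(S) + 1):
        for subset in combinations(S, r):
            if reduce(lambda x, y: x ^ y, subset, 0) == 0:
                count += 1
    return count

# The theorem predicts 2^(2^N - N) such subsets.
for N in range(1, 4):
    assert count_zero_xor_subsets(N) == 2**(2**N - N)
```

The enumeration is exponential in $2^N$, so it's only feasible for tiny $N$ – which is exactly why we need the formula.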

#### Big Exponents

Now let’s actually go about figuring out how to calculate this modulo $10^9 + 7$. If we just use our mod_pow directly, we’re going to face a problem. Our mod_pow runs in $O(\log E)$, but here $E = 2^N$, so a direct call costs $O(N)$ operations, and with $N$ being astronomically large, that is most definitely not going to pass any reasonable time limit. We can’t even store $2^N$ as a 64-bit integer!

You might be tempted to just apply MOD to the exponent, but that’s not exactly going to work. You will find, for instance, that $2^{10} \bmod 7 = 1024 \bmod 7 = 2$, but $2^{10 \bmod 7} \bmod 7 = 2^3 \bmod 7 = 1$. They’re not equal. What now?

Thankfully, the modulus in our problem is $10^9 + 7$, which is a prime number. There is a well-known theorem in number theory called Fermat’s Little Theorem, which states that for some prime $p$ and some $A$ such that $p \nmid A$,

$$A^{p-1} \equiv 1 \pmod{p}$$

I won’t cover the proof here because there are a lot of great resources for it already online, or in any number theory text. What this means for us is, consider $A^E$ for some $E \ge p - 1$; then

$$A^E = A^{E-(p-1)} \cdot A^{p-1} \equiv A^{E-(p-1)} \pmod{p}$$

Which gives us

**Lemma 5.1** $A^E \equiv A^{E-(p-1)} \pmod{p}$ for $E \ge p - 1$.

And, invoking this many times, we get the fact that

**Theorem 5.1** $A^E \equiv A^{E \bmod (p-1)} \pmod{p}$.

Which is exactly what we need for this problem! Many problems in competitive programming will ask you to find the answer mod a prime like $10^9 + 7$ or $998244353$ exactly so that theorems like this can be applied. To compute $2^{2^N} \bmod p$, we just need the exponent $2^N \bmod (p-1)$, or a call of

```python
mod_pow(2, mod_pow(2, N, p - 1), p)
```

where for this problem, $p = 10^9 + 7$.
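
As a sanity check of Theorem 5.1, here’s a quick sketch using Python’s built-in pow in place of mod_pow:

```python
p = 10**9 + 7

# 2^(2^N) mod p should equal 2^(2^N mod (p-1)) mod p, by Theorem 5.1.
for N in range(1, 40):
    direct = pow(2, 2**N, p)            # feasible here only because N is small
    fast = pow(2, pow(2, N, p - 1), p)  # the nested-mod_pow trick
    assert direct == fast
```

The direct computation needs the full $2^N$ as an exponent, so it only works for small $N$; the nested version never handles a number bigger than $p$.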

Note that this trick only works if $p$ is a prime (and $p \nmid A$). For non-prime moduli, you would need to use Euler’s Theorem (the one with the Totient Function), which I’ll cover in a later blog post.

#### Dividing with Modulo

Earlier, we just hard-coded the multiplicative inverse of 4 mod $10^9 + 7$, but now we need to be able to find a way to do it for arbitrary $x$. Thankfully, since the modulus $p$ is prime in our problem, we can easily use Fermat’s Little Theorem again. We have

**Theorem 5.2** $x^{-1} \equiv x^{p-2} \pmod{p}$.

*Proof.* Easy to see where it comes from, right? Multiply both sides by $x$ and you just end up with Fermat’s Little Theorem again.

To convince yourself, try doing $10 \div 5$ modulo some prime $p$. See the result when you do 10*mod_pow(5,p-2,p)%p. If you pick something large like $p = 10^9 + 7$, you’ll get a super large number for mod_pow(5,p-2,p), but when you multiply it with 10 and take the mod, you’ll end up with 2! Finding the modular inverse with this method works in $O(\log p)$, since all we’re doing is mod_pow.
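
Here’s that experiment as a tiny sketch (the helper name mod_inverse is mine):

```python
def mod_inverse(x, p):
    # Fermat's Little Theorem: x^(p-2) is the inverse of x mod a prime p,
    # provided p does not divide x.
    return pow(x, p - 2, p)

p = 10**9 + 7
inv5 = mod_inverse(5, p)
assert 5 * inv5 % p == 1

# "10 divided by 5" in the modulo world really is 2.
assert 10 * inv5 % p == 2
```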

Again, $p$ has to be prime for this to work. For general non-prime moduli $M$, you have to fall back to Euler’s Theorem again, just like earlier. We could also use the Extended Euclidean Algorithm, finding a solution to $x \cdot x^{-1} + M \cdot k = 1$, but that’s a real hassle to code versus the ~7 lines of mod_pow.

We’ve solved the problem! The answer is just

```python
(mod_pow(2, mod_pow(2, N, MOD - 1), MOD) * mod_pow(mod_pow(2, N, MOD), MOD - 2, MOD)) % MOD
```

#### Dealing with Negatives and Modulo

You might notice for this problem that, hey, instead of dividing by $2^N$, what if we just compute $2^{2^N - N}$ directly, reducing the exponent $2^N - N$ mod $p - 1$? Well, why not! We could have just done that too! There’s only one big caveat, for C++ anyway, and that’s to be careful whenever subtraction and negatives in general are involved in your modulo code.

What is $-5 \bmod 3$? Well, that’s 1, since $-5 = (-2) \cdot 3 + 1$. But, in C++, computing -5 % 3 gives $-2$. Which is… technically correct, since $-2 \equiv 1 \pmod{3}$, but not really the answer we wanted. The modulo operator a % m, in C++ at least, returns an integer in the range $[0, m-1]$ if $a$ is positive, and one in the range $[-(m-1), 0]$ if $a$ is negative. You just need to account for that at the very end whenever negatives come into your code. I usually put an (x+MOD)%MOD at the end to make the result positive.

In Python 3, to my knowledge, the modulo operator % always returns an integer in $[0, m-1]$ whether the dividend was positive or negative. Just check your language's documentation, or test it out for yourself, before you get a Wrong Answer!
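
A quick Python check of both behaviors (the fix-up line mirrors what I’d write in C++):

```python
MOD = 3

# Python's % already returns a value in [0, MOD-1], even for negative inputs.
assert -5 % MOD == 1

# The portable fix-up works in any language and never hurts:
# in C++, -5 % 3 would be -2, and (-2 + 3) % 3 recovers 1.
x = -5
assert (x % MOD + MOD) % MOD == 1
```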

### Programming Problem 3 – Contemplation! Algebra – Attempt 1

I’m going to spoil right now – this attempt at the problem isn’t going to contain the right answer. I’m saving that discussion for the next blog post. However, I still think there’s merit to exploring this line of reasoning, as there are a lot of techniques we can glean from it which, although not fit for this problem, are generally quite good to keep in mind. What’s more important is the journey, not just the final destination!

This item comes from UVa Online Judge. For each test case, we will be given the values $p = a + b$, $q = ab$, and $n$, and we must output the value of $a^n + b^n$. It is guaranteed that the inputs are nonnegative and fit in a 32-bit integer, while the output will always fit in a 64-bit signed integer.

Let’s try the simplest approach that might come to our minds. Let’s just explicitly compute $a$ and $b$, raise them to the $n$th power, and then add them together. Let’s do some algebra!

If $a + b = p$, then $b = p - a$. We can sub this into $ab = q$ to get $a(p - a) = q$, and thus we know $a^2 - pa + q = 0$. Use the quadratic formula here to get

$$a = \frac{p \pm \sqrt{p^2 - 4q}}{2}$$

And so we can take the values

$$a = \frac{p + \sqrt{p^2 - 4q}}{2}, \qquad b = \frac{p - \sqrt{p^2 - 4q}}{2}$$

Convince yourself that $a + b = p$ and $ab = q$ with this choice of values.

**Problem**. Well, we seem to have stumbled into a problem here, and it’s that $a$ and $b$ might not end up being rational numbers… or real numbers, for that matter. For fun, try proving that if $a$ and $b$ are rational numbers, then they must be integers.

*Response*. Well, what now? Are we supposed to do math with complex numbers? I mean… sure, why not? Let $D = p^2 - 4q$, and suppose we decided to ignore the dividing by 2 for now and saved that til the end. So essentially, we let $a = p + \sqrt{D}$ and $b = p - \sqrt{D}$, and our problem is now to compute $\frac{a^n + b^n}{2^n}$. Even if $D$ is negative, that’s fine.

Now, we only need to deal with a specific subset of complex numbers, though: those of the form $x + y\sqrt{D}$ for integers $x$ and $y$. You can prove to yourself that these numbers form an algebraic field. Prove that they’re closed over addition and multiplication, and that each number in this subset has an additive and multiplicative identity and inverse (except for the additive identity, which has no multiplicative inverse).

We can create a custom object whose properties are $x$, $y$, and $D$, to represent numbers of the form $x + y\sqrt{D}$. We then define what addition and multiplication over numbers of this type means. Let $u = a + b\sqrt{D}$ and $v = c + d\sqrt{D}$. Then, $u + v = (a + c) + (b + d)\sqrt{D}$, and $u \cdot v = (ac + bdD) + (ad + bc)\sqrt{D}$. Do some algebra to convince yourself these are true.

Take a look at our fast exponentiation code from earlier. Did any of our logic explicitly SAY that it had to deal with integers? We can do this over any algebraic ring! If we wanted to do fast exponentiation on something like, say, numbers of the form $x + y\sqrt{D}$, then nothing’s stopping us, except for having to modify the base case of when the exponent is 0 to return whatever the multiplicative identity in that ring is (which, in this case, is still 1).
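
To make this concrete, here’s a rough sketch of such a ring element together with the unchanged fast-exponentiation logic (the class and function names are my own inventions):

```python
class Ext:
    """A number of the form x + y*sqrt(D), with arithmetic done mod MOD."""
    def __init__(self, x, y, D, MOD):
        self.x, self.y, self.D, self.MOD = x % MOD, y % MOD, D, MOD

    def __add__(self, other):
        # (a + b sqrt(D)) + (c + d sqrt(D)) = (a+c) + (b+d) sqrt(D)
        return Ext(self.x + other.x, self.y + other.y, self.D, self.MOD)

    def __mul__(self, other):
        # (a + b sqrt(D))(c + d sqrt(D)) = (ac + bdD) + (ad + bc) sqrt(D)
        return Ext(self.x * other.x + self.y * other.y * self.D,
                   self.x * other.y + self.y * other.x,
                   self.D, self.MOD)

def ext_pow(base, exp):
    # The exact same recursive fast exponentiation; only the base case
    # changes, returning the ring's multiplicative identity 1 + 0*sqrt(D).
    if exp == 0:
        return Ext(1, 0, base.D, base.MOD)
    if exp % 2 == 1:
        return base * ext_pow(base, exp - 1)
    return ext_pow(base * base, exp // 2)

MOD = 10**9 + 7
# (1 + sqrt(5))^2 = 6 + 2*sqrt(5)
r = ext_pow(Ext(1, 1, 5, MOD), 2)
assert (r.x, r.y) == (6, 2)
# With y = 0 this degenerates to ordinary fast exponentiation: 3^5 = 243.
s = ext_pow(Ext(3, 0, 7, MOD), 5)
assert (s.x, s.y) == (243, 0)
```

Note that nothing here requires $D$ to be positive, which is exactly what lets us sneak in a negative discriminant.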

**Problem**. Here’s the next problem we have, though. The numbers get outrageously large! Even if $a^n + b^n$ will fit in a 64-bit integer, that doesn’t mean $a^n$ and $b^n$ individually will. In fact, even if you tried to toss it into a language like Python, which has no limits on integer size, it’s still going to be far too slow to finish in time.

*Response*. We can address that problem as we always do, by performing all our operations modulo some prime so that at no point do they ever get outrageously large. Is that allowed? Well, so long as the final answer is less than the modulus, then it will be as if the modulo wasn’t there at all. Take, for example, $13 - 10$. The answer is, of course, 3. If we take each element mod 10, we get $(3 - 0) \bmod 10 = 3$; since $3 < 10$, the answer is still the same even with the modulo.

We are guaranteed that the final answer will fit in a signed 64-bit integer. So, in Python, I would just choose my prime modulus to be 9223372036854775783, the largest prime that fits in a signed 64-bit integer (thank you Wolfram Alpha). For C++ and other languages where you'd have to deal with overflow, I would perform the operations modulo **two** smaller primes, such as perhaps $10^9 + 7$ and $10^9 + 9$, then combine them using the Chinese Remainder Theorem to find the final answer mod their product.

We can still divide by $2^n$ in this method, because we get to choose the modulus. If we pick a prime like I suggested earlier, then we can apply Fermat’s Little Theorem to find the modular inverse of $2^n$ easily.

**Problem.** Try out the test case $p = 0$, $q = 1$, and $n = 2$. Solving this gives us $a = i$ and $b = -i$ (or vice versa), and $a^2 + b^2 = -2$. If we run our code so far, we get… 9223372036854775781. Yikes, why? Well, for how I chose to implement this, I let my prime modulus be 9223372036854775783. So, note that

$$-2 \equiv 9223372036854775781 \pmod{9223372036854775783}$$

So our answer was… technically correct, modulo the prime. But the problem is not asking for the answer modulo a prime; it’s asking for the answer itself. This is one of the limitations of only considering the modulo ring – we don’t really “have” negative numbers per se, so this solution has no way of knowing whether to output the positive or the negative remainder of the answer.

*Response.* Unfortunately, I’ve hit a wall here. I don’t know if there’s a way to fix this solution to account for the negative answers. If any of you find a way to fix it, please email me!

I still think this solution was worth going through even though it eventually ended up being wrong. We got to introduce the techniques of extending a modulo ring, which is definitely useful. Unfortunately, we came up short in the end, but that’s all part of the learning process.

The actual solution is actually rather clever and doesn’t need any of this! It just might not exactly be what you would expect!

### Final Thoughts

Fast exponentiation is an incredibly powerful technique which I definitely consider one of the fundamentals for anyone specializing in math for competitive programming.

Next blog post is a Part 2 which fills in the gaps from these past two posts – how to efficiently find Fibonacci numbers, an alternative solution to the Tetrahedron problem, and the real solution to Contemplation! Algebra. ‘Til next time, enjoy doing more math and programming!
