Guessing numbers

Time for a new puzzle. I found this one recently on the forums over at xkcd:
Alice comes up to Bob one day and says "I have selected a random number between 1 and 2000, and I will let you hand me a list of 1000 numbers; if my chosen number appears on your list, I will give you a prize."

Bob wants to win the prize, but would like a better than 50% chance at it, so he asks "Alright, but may I ask you some yes/no questions about your number?"

Alice doesn't want to make this easy for Bob, so she replies "Ask as many questions as you like, but I am allowed to lie in giving my answers."

Bob says "Well, that isn't very helpful." He then thinks about it and realizes "Actually, as long as you promise never to lie more than nine consecutive times, I think I can do this."

Alice doesn't believe this is possible, and is willing to hand over the prize if Bob can actually come up with such a strategy, so she agrees "Very well, ask your questions, I will not tell more than nine consecutive lies."

What is a possible strategy for Bob that will guarantee that he can narrow down Alice's number to no more than 1000 candidates?

So you may ask as many yes/no questions as you want, and in any string of ten consecutive questions, at least one of them will get a true answer.

There is one common false solution I will refute right here: asking the same question 10 times in a row won't work. Alice will just alternate "yes" and "no", and will never have told ten consecutive lies.
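As a quick sanity check of that refutation, here is a minimal sketch (the helper name is my own): if Bob's repeated question has true answer "yes", Alice's alternating replies never contain even two consecutive lies, let alone ten.

```python
def longest_lie_run(true_answer, replies):
    # Length of the longest run of consecutive lies in Alice's replies.
    longest = run = 0
    for reply in replies:
        run = run + 1 if reply != true_answer else 0
        longest = max(longest, run)
    return longest

# Alice alternates "yes" and "no" against a repeated question
# whose true answer is "yes": every lie is isolated.
replies = ["yes" if i % 2 == 0 else "no" for i in range(100)]
assert longest_lie_run("yes", replies) == 1
```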

Xor Not

Hmm....funny how time goes by. Also, I had written down my solution to the boolean problem from last time, but then I lost it, and I kept on putting off working it out again.

Anyway, let's work it out here. First, let us assume that we have managed to construct a series of boolean objects T0, T1, T01, T23, and so on, where T0 is true if exactly 0 of (A,B,C) are true, T12 is true if exactly 1 or 2 of (A,B,C) are true, and so on for the other T's. There are in principle 16 such objects; we won't be needing them all, but I just wanted to define the general notation.

If we had these objects then we could easily express NOT A as follows:
NOT A = T0 OR (T1 AND (B OR C)) OR (T2 AND B AND C)

We can see this because exactly 1 of T0, T1, T2, T3 will be true. If T0 is true, NOT A must be true. If T1 is true, NOT A will be true if B or C is true, and false otherwise. If T2 is true, NOT A will only be true if both B and C are true. Finally if T3 is true, then NOT A is just false. Note also that we can make similar expressions for NOT B and NOT C just by permutation.

Alright, how do we go about constructing as many of these T's as possible without using more than 2 NOT gates? Well, we have some of them for free, namely:
T3 = A AND B AND C
T23 = (A AND B) OR (A AND C) OR (B AND C)
T123 = A OR B OR C

Next, we can make T01, costing us 1 NOT gate:
T01 = NOT T23

T1 and T013 are now available:
T1 = T123 AND T01
T013 = T01 OR T3

and T13:
T13 = T013 AND T123

Finally we can bring out our other NOT gate:
T02 = NOT T13

At last we construct T0 and T2:
T0 = T02 AND T013
T2 = T02 AND T23

This completes the construction of T0, T1, and T2, finishing the problem.
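The whole construction is easy to verify by brute force over all eight inputs. Here is a sketch in Python (the function names are my own): it builds the T's exactly as above, using only two NOT gates, and checks the permuted expression for each negation.

```python
from itertools import product

def negate_all(A, B, C):
    # Build the T's using only AND, OR, and exactly two NOTs.
    T3   = A and B and C
    T23  = (A and B) or (A and C) or (B and C)
    T123 = A or B or C
    T01  = not T23            # first NOT gate
    T1   = T123 and T01
    T013 = T01 or T3
    T13  = T013 and T123
    T02  = not T13            # second NOT gate
    T0   = T02 and T013
    T2   = T02 and T23

    def neg(x, y, z):
        # NOT x = T0 OR (T1 AND (y OR z)) OR (T2 AND y AND z)
        return T0 or (T1 and (y or z)) or (T2 and y and z)

    return neg(A, B, C), neg(B, A, C), neg(C, A, B)

# Check all eight input combinations.
for A, B, C in product([False, True], repeat=3):
    assert negate_all(A, B, C) == (not A, not B, not C)
```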

Not Not

Blogging sure slows down when my life is consumed with marking.

Anyway, here's a new one that was told to me by some people in the math room on KGS:
You have 3 boolean inputs, A, B, and C. You are to set up a circuit that has 3 outputs, D, E, and F. You may use as many AND and OR gates as you like, but only 2 NOT gates. Arrange it so that D has the value of NOT A, E has the value of NOT B, and F has the value of NOT C, for arbitrary inputs A, B, and C.

To put that into a bit more "math" and less "circuitry": give me expressions for NOT A, NOT B, and NOT C, if you can only use NOT twice. You may store variables and use them freely, but you may only apply the NOT operator a total of 2 times. For example, you could define P = NOT (A OR B) and then use P as much as you like, and that would only count as 1 use of a NOT gate.

For a more general version of the problem, you have 2n+1 inputs to negate, and only n+1 uses of NOT. The general version is fairly tough though, and not very fun.

One At A Time

Alright, time for the solution to the blind maze puzzle from last time.

The solution is pretty simple. The main point to notice is that, for a particular N, there are only finitely many mazes that can exist, so you essentially solve them one at a time. Specifically, number the mazes 1 through M, then begin by writing instructions for the solution to maze 1, which is easy to find explicitly. Next, assume you are in maze 2, see where the instructions written so far would leave you, and then continue with instructions to solve maze 2 from there. Step by step, solve all the mazes that you could possibly be in. This creates only a finite list of instructions.
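The construction above can be sketched in code. This is a minimal illustration under assumed conventions (a maze as a set of floor cells, the robot starting at (0,0) with the exit at (n-1,n-1)); all names are my own.

```python
from collections import deque

MOVES = {'N': (-1, 0), 'S': (1, 0), 'W': (0, -1), 'E': (0, 1)}

def run(program, floors, pos):
    # Follow the program; moves into walls are simply ignored.
    for step in program:
        dr, dc = MOVES[step]
        nxt = (pos[0] + dr, pos[1] + dc)
        if nxt in floors:
            pos = nxt
    return pos

def path(floors, start, goal):
    # Breadth-first search for a move sequence from start to goal
    # (a path is guaranteed by the problem statement).
    queue, seen = deque([(start, [])]), {start}
    while queue:
        pos, moves = queue.popleft()
        if pos == goal:
            return moves
        for step, (dr, dc) in MOVES.items():
            nxt = (pos[0] + dr, pos[1] + dc)
            if nxt in floors and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, moves + [step]))

def master_program(mazes, n):
    # Solve the mazes one at a time: for each maze, see where the
    # instructions so far would leave the robot, then extend them
    # with a path from there to that maze's exit.
    program = []
    for floors in mazes:
        pos = run(program, floors, (0, 0))
        program += path(floors, pos, (n - 1, n - 1))
    return program
```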

As for the more difficult problem of finding a solution that guarantees ending the program on the final square, imagine running all of those M mazes in parallel, each using the same set of instructions. The goal is to make sure that we are on the final square of every maze simultaneously, then we stop the program.

To prove that there exists a program to do this, let us consider a randomly generated program that is S steps long. The randomly generated program will move north with probability e, west with probability e, and south and east each with probability 1/2-e. I claim that for a sufficiently small e and sufficiently large S, the robot following this program will, at some point, be on the final square of every maze simultaneously. Even if the claim is true, it does little to help us find a program, since we don't know how long we should run our randomly generated program for. That objection doesn't really matter, though: we are only trying to prove the existence of a program, we don't need to actually find it. If our random program has any non-zero chance of reaching all the endings at the same time, then there exists a program that terminates on the ending of any maze we could be in.

Next, for a particular maze m, the robot is wandering around the maze at random, with probabilities as above. By the Perron-Frobenius theorem, there is a unique steady-state distribution of probabilities p(i,j) for the robot to be at location (i,j), and after a large enough number of steps any starting distribution will converge to it. The main idea will be to show that when e is sufficiently small, p(N,N) becomes close to 1 (in other words, over a very large timescale the wandering robot spends almost all of its time at the exit).

The main point of the proof is that p(i,j) is proportional to ((1/2-e)/e)^(i+j) for any reachable (i,j) in the maze m (this is the point that I am not sure how to prove; I'm sure it's something simple from random walk theory, but my education is deficient there). Since p(N,N) ≤ 1, we must have p(i,j) ≤ e/(1/2-e) ≤ 4e for (i,j) not equal to (N,N) (the last inequality holds provided we choose e ≤ 1/4). Thus p(N,N) is at least 1-((N^2-1)*4e). It will take some number of steps for the distribution to converge in each separate maze, but you can just take the maximum over all the mazes.

Now, note that if we have two events A and B, with probabilities 1-x and 1-y of happening, then the probability of both A and B happening is at least 1-x-y (you can prove this from P(A and B) = P(A) + P(B) - P(A or B) and the fact that P(A or B) is at most 1). In our case, we have M events (being at the exit square in each maze at the chosen time), and the chance of each happening is at least 1-(N^2-1)*4e, so the chance of all of them happening is at least 1-M*(N^2-1)*4e. So, with a suitable choice of e, such as 1/(8M(N^2-1)), the probability of all of the events happening is at least 1/2.
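As a quick numeric check of that union bound (N and M here are just illustrative values I made up): with a per-maze failure probability of (N^2-1)*4e and M mazes, choosing e = 1/(8M(N^2-1)) makes the combined failure probability exactly 1/2.

```python
from fractions import Fraction

# Illustrative numbers only: N is the maze size, M the number
# of possible mazes (both made up for this check).
N, M = 10, 1000

# Choose e so the union bound works out exactly.
e = Fraction(1, 8 * M * (N**2 - 1))

per_maze_failure = (N**2 - 1) * 4 * e   # failure bound for one maze
total_failure = M * per_maze_failure    # union bound over all M mazes
assert total_failure == Fraction(1, 2)
```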

So there is some nonzero chance of being at the end of every maze at once, and so there must exist some set of instructions that ends at the exit of whichever maze we are in. This proof gets us no closer to finding it, I suppose, but who cares about that.

Blind Maze

Puzzles? OK. So, this is one I found a little while ago on the xkcd forums:
There is a featureless maze that you must program a robot to navigate. The maze is an NxN square grid, and each tile in the grid is either a floor or a wall. The robot may enter tiles that are floors but not ones that are walls. The robot can only be given a program: a sequence of moves, each of which is north, west, south, or east. The robot starts in the northwest corner of the maze and the exit tile is at the southeast corner of the maze. It is guaranteed that there is a path through the maze that the robot can navigate.

Upon entering the maze, the robot will follow your program. If a command would tell it to hit a wall, it will ignore that command and move on to the next command. If it lands on the exit tile, you win. If it hits the end of your commands without ever finding the exit, you lose.

Can you find a finite program that will guarantee that the robot will get to the exit at some point?

So, for clarity, your program can only read something like (East, East, South, South, West, South, South, East, East.....), just a chain of {North, South, East, West}, and must be finite in length.

For bonus points, try to make it so that the robot is guaranteed to end on the exit square rather than just getting there at some point. I haven't solved this harder version, and on the xkcd forums the best they had done (last I checked) was prove the existence of such a solution, and the proof was not something I 100% understood.

One At A Time

Alright, time to finish off the solution to the balls in jars problem from before.

The essence of the proof is that it is possible to find an algorithm that will move from a position (a,b,c), with a ≥ b ≥ c ≥ 1, to a position whose smallest entry is strictly less than c. Thus by repeating this algorithm no more than c times, you must reach a zero (given that we can only have integer values).

OK, suppose we start in some position. Call the jar with the most marbles A, the one with the fewest marbles C, and the other one B. The names of the jars will not change throughout the algorithm; in particular, even though C has the fewest marbles at the start, it might not at some later point. Let a, b, and c be the number of marbles in jars A, B, and C respectively. Note that even though A, B, and C will not change throughout the algorithm, the values of a, b, and c will.

The algorithm is: check the value of floor(b/c). If it is even, move marbles from A to C; if it is odd, move marbles from B to C (recall that a legal move doubles the receiving jar, so C receives c marbles either way). If C now has more marbles than B, stop; otherwise go back to the start.

Simple enough. Now, to prove it works, we must show that at the end B will have fewer marbles in it than C started with.

At the start, we know that b is greater than c; let us write b = qc + t with t less than c, so q is the floor of b/c. For this proof we will not change the values of a, b, and c; I will assume those values are locked at the start.

First, suppose b is not at least twice as big as c; then q = 1, and when we move marbles from B to C (as per the algorithm), B will be left with t marbles, and t is less than c.

Now, if q is bigger than 1, I will show that each step of the algorithm halves the quotient of B's contents by C's contents (rounding down) without changing the remainder t, so eventually that quotient will be 1.

So, suppose q is even; then we move marbles from A to C. Now C contains 2c and B still contains b. Since b = qc + t = (q/2)(2c) + t, the remainder is still t and the new quotient is q/2 (note that q being even was needed here).

If q is odd, we move marbles from B to C. Now C contains 2c and B contains b-c. Since b = qc + t, we get b-c = ((q-1)/2)(2c) + t, so the remainder is still t and the new quotient is (q-1)/2 (in this case q being odd is needed).

Each step, the quotient shrinks, so eventually it will be 1, and on the next move B will be left with t marbles, which is less than the starting value of C. This finishes the proof: one simply repeats the algorithm (renaming the jars A, B, and C after each iteration) enough times to win the game.
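Here is a sketch of the whole procedure in Python (the function names are my own; recall that a legal move transfers to the receiving jar as many marbles as it already holds):

```python
def one_round(a, b, c):
    # One pass of the algorithm, starting from a >= b >= c >= 1.
    # Each move doubles C by taking c marbles from A or from B.
    while True:
        q = b // c
        if q % 2 == 0:
            a, c = a - c, 2 * c   # move from A to C
        else:
            b, c = b - c, 2 * c   # move from B to C
        if c > b:
            return a, b, c        # B now holds fewer than the old c

def reach_empty_jar(jars):
    # Repeat the round, renaming the jars each time, until some
    # jar is empty; the minimum strictly decreases each round.
    while min(jars) > 0:
        c, b, a = sorted(jars)
        jars = list(one_round(a, b, c))
    return jars
```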

Two In One

Alright, time for the partial solution to the balls in jars problem from last time.

I guess since the two-jar problem is a decent amount easier, I'll solve that out first.

First of all, it is absolutely trivial to see that if the total number of balls in the jars is not an even number, there will be no way to win, since to win you must first arrive at a position of the form (p,p). Next, it is easy to see that a common divisor has no effect on the ability to win from a position; that is to say, the position (p,q) is winnable iff the position (np,nq) is winnable.

At this point, we might be inclined to work out a few winning positions. OK, so what are the winning positions of the form (1,q)? Well, clearly (1,1) is a winning position, and (1,q) can only be a winning position if q is odd. From the position (1,q) we move to the position (2,q-1), but since q is odd for any position of interest, this is equivalent to (1,(q-1)/2). Calling this position (1,k), we see that q = 2k+1, so if (1,k) is a winning position then so is (1,2k+1). From here we can list off the positions as (1,3), (1,7), (1,15)...; clearly (1,q) seems to be a winning position if q is 1 less than a power of two. You can prove that by induction if you like, and with a bit more work we can also be sure that there aren't other winning positions of the form (1,q), but I won't bother with that yet.

Next, what about positions of the form (3,q)? Well, first of all, q=1 would be a winning position. Next, from (3,q) we can move to (6,q-3) (this assumes q>3, but the smaller positions are easy to check by hand). Since we might as well assume q is odd, we can halve this position to (3,(q-3)/2); calling this (3,k) again, we see that if (3,k) is a winning position then so is (3,2k+3). Listing off positions, the winning ones seem to be (3,1), (3,5), (3,13), (3,29)..., again summing to a power of two. This makes it seem that a position (p,q) is a winning position iff the sum p+q is a power of two. Well, that's easy enough to disprove: (3,9) is a winning position, but it is merely a multiple of 3 away from (1,3).

The main thing that will finish this off is the realization that once you have divided off any common factors in your position, no new ones will show up, besides 2, and that one shows up every move. Let's prove that. Suppose we are in some position (p,q) with p and q both odd (so that their sum is even, but we do not need to divide off an overall 2), and assume that p and q are not both multiples of some prime number n (n>2). Now we move to (p-q, 2q) (taking p as the larger one); I claim that p-q and 2q are also not both multiples of n. If they were, then q would be a multiple of n (since 2q is and n is odd), and then p would be too (since p-q and q are). Naturally this argument won't work for n=2; an overall multiple of 2 shows up every single move.

Now I will prove the main result: from position (p,q) with p and q having no common factors, we are in a winning position iff p+q is a power of 2.

You can prove one direction by induction on n, where p+q = 2^n. It is trivial for n=1. If it is true for n=k and you are in position (p,q) with p+q = 2^(k+1), you can make one move and then halve both entries, so that the new sum is 2^k.

For the other direction, we must show that if p+q is a multiple of a prime other than 2, and p and q have no common factors, then we are in a losing position. Suppose p+q is z*k where z is a prime other than 2. k must be even, for if p+q were odd we would trivially be in a losing position. In order to win, we must first reach (z*k/2, z*k/2), but we have already proven that once common factors have been divided off, no new odd ones can show up. Both entries of (z*k/2, z*k/2) are multiples of z (with z>2), so that position cannot be reached, and the game cannot be won.

This finishes the proof. While working on the problem I also noticed that the legal move from (p,q) is to (2p,2q) mod p+q, which I thought was sort of cool. Perhaps people who are better at number theory than I can make use of that. Anyway, feel free to try to use this to solve the three jar problem, I was never able to.
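As a sanity check, the two-jar characterization can also be verified by brute force. Here is a sketch (names are my own); in the two-jar game the only legal move is forced, so play is deterministic and we just watch for a repeated position.

```python
from math import gcd

def winnable(p, q):
    # Two-jar game: the only legal move doubles the smaller jar
    # by transferring from the larger; you win when a jar is empty.
    seen = set()
    while (p, q) not in seen:
        seen.add((p, q))
        if p == 0 or q == 0:
            return True
        lo, hi = min(p, q), max(p, q)
        p, q = sorted((2 * lo, hi - lo))
    return False  # play cycles without ever emptying a jar

def power_of_two(n):
    return n & (n - 1) == 0

# Check the characterization: after dividing out the common
# factor, (p, q) is winnable iff p + q is a power of two.
for p in range(1, 25):
    for q in range(p, 25):
        g = gcd(p, q)
        assert winnable(p, q) == power_of_two((p + q) // g)
```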