Birds On A Wire

Alright, time to post the solution to the dots on a line puzzle from last time. First I suppose I'll link directly to the cut-the-knot page with the puzzle. He has a cool little applet thing there you can use to simulate the problem and try to guess the solution yourself, and he also has links to where people have written up derivations, one of which I will regurgitate on here later because I found it to be so interesting.

For today though, I am simply going to post the solution to the problem as Matt and I initially solved it. First, I must solve a very simple problem that gives us a technique that will be needed later. Consider the following problem:
N numbers between 0 and 1 are selected uniformly at random. On average what is the value of the smallest one?

First, let us try to solve this for N=2. We can see that we simply must solve the integral
∫ min(x,y) dxdy

integrated over [0,1]x[0,1], where min(·,·) is the minimum function. Clearly min(x,y)=min(y,x), so we can instead integrate over the "lower half triangle" and double our answer. So, instead integrate x in [0,1] and y in [0,x] in
2∫ y dxdy

the 2 is because we are only covering half the area we should be covering, and min(x,y) has been replaced by y, because x>y now. The integral is pretty simple to do, and gives the answer 1/3.

Now in the general N case we must consider we have a collection of N points {x[1],x[2],x[3]...,x[N]}, and integrate
∫ min(x[1],x[2],x[3]...,x[N]) dx[1]dx[2]dx[3]...dx[N]

As before, we order them x[1]>x[2]>x[3]...>x[N], so the min function always returns x[N], and we must integrate x[1] in [0,1], x[2] in [0,x[1]], x[3] in [0,x[2]] and so on. We must also find the prefactor, which is one over the volume of the triangular region we are integrating over. Integrating the function 1 over the region of integration, we find the prefactor must be N!. So we must integrate
N!∫ x[N]dx[1]dx[2]dx[3]...dx[N]

over our triangular region. The final answer comes out to be 1/(N+1), sort of neat, but anyway I just wanted to demonstrate the technique so it is less confusing later.
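
As a sanity check on that 1/(N+1) claim, here is a quick Monte Carlo sketch (a throwaway Python snippet of my own, not part of the derivation):

```python
import random

def mean_min(n, trials=200_000, seed=1):
    """Estimate the expected minimum of n uniform(0,1) samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(rng.random() for _ in range(n))
    return total / trials
```

For N=2 this hovers around 1/3 and for N=4 around 1/5, matching 1/(N+1).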

Back to the problem at hand. We have N points {x[1],x[2],x[3]...,x[N]}, and we will order them as x[1]>x[2]>x[3]...>x[N]. Let us define
d[i]=x[i]-x[i+1]

so d[i] is the size of the ith region, and this i runs from 1 to N-1.

Next, we will define S[i] to be equal to d[i] if the ith region is shaded, and S[i] will be zero if it is not. The answer we seek is
S=N!∫ Σ S[j] dx[i]

Integrated over all the x[i] positions, and j summed from 1 to N-1 to add up all the shaded regions.

S[i] will be nonzero (and equal to d[i]) in one of two conditions:
1) x[i+1]-x[i+2] > x[i]-x[i+1]
2) x[i-1]-x[i] > x[i]-x[i+1]

That is, if d[i+1] is greater than d[i] or if d[i-1] is greater than d[i], d[i] will get shaded in.

When calculating probabilities, whenever you have to find the chance that A or B happens, it is often easier to find the chance that neither happened and take one minus that. Similarly here, rather than integrate whenever 1) or 2) happens, it is easier to handle the situation where neither of them happens. Let us define q[i] as d[i]-S[i], so q[i] represents the unshaded regions; it is equal to d[i] when S[i]=0 and it is zero when S[i]=d[i]. Clearly then
S=N!∫ Σ S[j] dx[i]
=1 - N!∫ Σ q[j] dx[i]

(strictly, the total gap length N!∫ Σ d[j] dx[i] comes out to (N-1)/(N+1) rather than 1, but that tends to 1 in the N→∞ limit we take at the end)

We can of course move the sum outside of the integral, so we simply must calculate
∫ q[j] dx[i]

for an arbitrary j and then we can sum it up and multiply by N!.

The integration has x[1] running from 0 to 1 and x[i] running from 0 to x[i-1]. q[j] is zero in some of this region and nonzero in other parts, so let us identify exactly where q[j] is nonzero. It is exactly the negation of conditions 1) and 2) earlier, that is
1) x[j+2] > 2x[j+1]-x[j]
2) x[j+1] < 2x[j]-x[j-1]

This actually needs to break into a few cases: the first inequality is trivially satisfied if 2x[j+1] < x[j], since x[j+2] is always positive. The second case depends on whether x[j-1] is greater than or less than 2x[j] (because if x[j-1] is more than twice x[j], the right-hand side 2x[j]-x[j-1] is negative, so the second inequality can never hold, x[j+1] being positive, and q[j] vanishes there).

So the integration region over dx[j-1] through dx[j+2] breaks up into many smaller pieces that one can cleanly write out with this information, but it's fairly lengthy, so I'm not going to bother here. In this region, q[j]=d[j], so you simply integrate x[j]-x[j+1] over this region. This integral is most easily done in Maple (or whatever other math program you like) by first doing the integrals from dx[N] to dx[j+3] (the function is constant there and the integration region is simple), then doing the next four integrals carefully, then finishing it off integrating to x[1].

In the end, you get a somewhat awful function of N and j. You then sum j going from 1 to N-1 and multiply by N!. You will get something somewhat less awful, but a still terrible function of N. Finally, you must take the limit of N going to infinity (this step will be easy enough to do by hand, but by this point you already have Maple open anyway, and your spirit will have been far too crushed to do such a limit) and in the end you get 11/18. This is the unshaded fraction, so the shaded fraction is 1-11/18, which is 7/18.
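
If you'd rather not trust the Maple grind, the 7/18 can be checked numerically. The sketch below (my own Python, with the shading rule from conditions 1) and 2) translated directly) colors a gap whenever it is the nearest-neighbour gap of either endpoint:

```python
import random

def colored_fraction(n, rng):
    """One trial: drop n uniform points, return the total length of shaded gaps."""
    p = sorted(rng.random() for _ in range(n))
    gaps = [p[i + 1] - p[i] for i in range(n - 1)]
    shaded = 0.0
    for i, g in enumerate(gaps):
        # gap i is shaded if either endpoint's nearest neighbour lies across it
        left_uses = (i == 0) or (g < gaps[i - 1])
        right_uses = (i == len(gaps) - 1) or (g < gaps[i + 1])
        if left_uses or right_uses:
            shaded += g
    return shaded

rng = random.Random(7)
estimate = sum(colored_fraction(1000, rng) for _ in range(500)) / 500
```

With those parameters the estimate lands near 7/18 ≈ 0.3889.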

Next time I'll go into a bit of the analysis by the other solutions found at the cut the knot page. One of them I found to be particularly interesting so I wanted to show it in detail, but I wanted to demonstrate my own horrible method first.

Also, I expect the date on this post will be confusing, I started writing it some time ago, but it took me more than a month to get around to finishing it.

Dots On A Line

Alright, I think I've finally run out of puzzles if I'm going to post this one. It is a rather crazy one that I solved with Matt many years ago and I absolutely love the answer to it. Sadly there isn't really a clean way to solve the puzzle, you have to do it the hard way, and it's really not very illustrative of why the final answer is what it is, but that's just life. I initially found this puzzle at cut the knot:
Consider the unit interval between 0 and 1. Select N points randomly on that interval (random with uniform distribution). From each selected point "color in" the part of the interval between it and the closest neighboring point. In the limit as N goes to infinity, what fraction of the interval is colored in on average?

Terrible, I know. To clarify, suppose we had N=5 and selected the points 1/10, 2/10, 4/10, 7/10 and 8/10. Then from 1/10 we would color to 2/10, from 2/10 we would color back to 1/10 (not that that does anything), from 4/10 we color back to 2/10, from 7/10 we color to 8/10 and from 8/10 we color back to 7/10 (again does nothing). Then the colored region would be [1/10,4/10] ∪ [7/10,8/10] so in total 4/10 of the interval was colored in, in this case.
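
To make the worked example concrete, here is a little Python helper (mine, not from cut-the-knot) that computes the colored length for any point set:

```python
def colored_length(points):
    """Length colored when each point colors the gap to its nearest neighbour."""
    p = sorted(points)
    gaps = [p[i + 1] - p[i] for i in range(len(p) - 1)]
    total = 0.0
    for i, g in enumerate(gaps):
        left_uses = (i == 0) or (g < gaps[i - 1])            # p[i] colors gap i
        right_uses = (i == len(gaps) - 1) or (g < gaps[i + 1])  # p[i+1] colors gap i
        if left_uses or right_uses:
            total += g
    return total
```

Running colored_length([0.1, 0.2, 0.4, 0.7, 0.8]) gives 4/10, as in the example above.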

Balancing It Out

Time for the solution to the recent balance scale puzzle.

First, notice that there is a symmetry in the problem that needs to be broken: there is no real distinction between "fake" and "genuine", since if you swapped those labels the problem would be identical. Let us break that symmetry by selecting a single coin; we are then going to show that all the other coins weigh the same as that special one. In effect, I will declare any coin with the same mass as that one to be "genuine" and a coin of any other mass to be "fake".

Now that we have one genuine coin, take the other nine coins and divide them up into three disjoint sets, one of two coins, one of three coins, and one of four coins. We will denote the single genuine coin as 1 and the three sets as 2, 3, and 4 according to their sizes, so if I say we weigh 3 vs 1+2 I mean to take the set of three coins and weigh them against the set of two coins plus the single genuine coin.

Naturally, our three weighings are going to be some coins against some other coins, equal in number, such that if a weighing is ever imbalanced we are done immediately (we would have proven there is at least one fake and one genuine coin), so we are free to assume that all weighings are balanced (as opposed to other balance scale problems where you have to divide up the tree into "if left heavy", "if right heavy", and "if balanced").

First weighing, 3 vs 1+2. When this is balanced, we know that the number of fakes in 2 (call that x) equals the number of fakes in 3. Second weighing, 4 vs 1+3. When this is balanced, we know the number of fakes in 4 is also x. Finally, we weigh 1+4 vs 2+3. When this is balanced we know that x=x+x, guaranteeing that x is 0 and there are no fake coins.

It is sort of neat that you can do those weighings in any order you want, since you don't gain information after a weighing (or rather if you did, you were done right away). But if you do the weighings in another order the information is a bit less natural to analyze.
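
Since the argument only ever tracks how many fakes sit on each pan, the whole thing can be brute-forced. A small sketch (names are my own) checking every possible distribution of fakes over the piles:

```python
def weighings_balance(r, a, b, c):
    """r, a, b, c = fakes among: the reference coin, the 2-set, 3-set, 4-set.
    With equal coin counts on each pan, a weighing balances exactly when
    both pans hold the same number of fakes."""
    return (b == r + a           # 3 vs 1+2
            and c == r + b       # 4 vs 1+3
            and r + c == a + b)  # 1+4 vs 2+3

balanced_totals = sorted({r + a + b + c
                          for r in range(2) for a in range(3)
                          for b in range(4) for c in range(5)
                          if weighings_balance(r, a, b, c)})
```

balanced_totals comes out to [0, 10]: if all three weighings balance, either no coin is "fake" or every coin is, i.e. all ten have the same mass.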

More Masses?

Alright, new puzzle I found today at Tanya Khovanova’s Math Blog:
Consider you have a collection of 10 coins. The coins can either be genuine or fake; a fake coin weighs a slightly different amount than a genuine coin. All fake coins weigh the same and all genuine coins weigh the same. Using three measurements on a balance scale, prove either that all the coins have the same mass or that they do not.

For the balance scale rules, see the old counterfeit masses puzzle. These constant balance scale problems and total lack of hat problems are making me want to rename the blog "Standard Balance Scale Rules Apply".

All Drunk

So, what's the solution to running out of interesting puzzles? Stop posting. I guess the other solution is to post non-interesting puzzles, or possibly interesting non-puzzles, but that seems like so much more work than not posting at all. Anyway, I guess one should post the solution to the drinking game puzzle from last time.

First of all, it is obvious that Alice can get a 1/2-1/2 split with Bob if she wants, so the only question is if she can do any better than that. Bob being forced to drink from the bottle at least once is a disadvantage for him; if not for that rule it is clear that the optimal strategy is 1/2-1/2 in general. Since Bob being forced to drink from the bottle is a disadvantage, we can assume he will try to get rid of it: specifically, if the bottle ever has less poison than the cup, Bob will immediately drink from the bottle and the remaining bottles will get split 1/2-1/2.

As soon as Bob has drunk from the bottle, the remaining bottles get split 1/2-1/2. If we get to the last bottle and Bob has not yet drunk from the bottle, Alice will do a 1-0 split, forcing Bob to drink all of the poison in the bottle. Thus, if there are two bottles left, let us assume Alice does an x-y split (x>y and x is the amount in the bottle). If Bob chooses x, he will have free choice on the next one and in the end Bob will have drunk x+1/2 and Alice will have drunk y+1/2. If Bob chooses y, Alice will be able to force him to drink all of the last bottle and so Bob will drink y+1 and Alice will drink x. Bob will see this coming and choose whichever one is lower, so Alice does best to make them equal.

We have x+1/2=y+1 and x+y=1, so we get x=3/4 y=1/4, so if there are two bottles left and Bob must still drink from a bottle once, Alice splits 3/4-1/4 and forces Bob to drink 5/4. Obviously if there are two bottles left and Bob does not have to drink from a bottle, then Bob drinks half of each of the remaining bottles for a total of 1.

Now, if there are three bottles left and Bob must drink from the bottle once (the original problem), then Alice splits the first bottle x-y (x>y and x the bottle) and Bob can select either x+1 or y+5/4. Setting those equal and using x+y=1 we get x=5/8 y=3/8. So Alice divides up the first bottle 5/8-3/8 and forces Bob to drink 13/8 in total, whereas Alice only drinks 11/8.
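
The backward induction above is easy to mechanize. A sketch in Python (the function name and recurrence are just my own restatement of the indifference argument):

```python
from fractions import Fraction

def bob_total(n):
    """Bob's total drink with n bottles left and his bottle-obligation outstanding."""
    if n == 1:
        return Fraction(1)     # Alice pours nothing into the cup; Bob drinks the bottle
    later = bob_total(n - 1)   # Bob's total if he keeps the obligation
    half = Fraction(n - 1, 2)  # even split of the rest once Bob is free
    # Alice picks x to make Bob indifferent: x + half == (1 - x) + later
    x = (1 + later - half) / 2
    return x + half
```

Here bob_total(2) gives 5/4 and bob_total(3) gives 13/8, matching the splits derived above.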

Drinking Game

New puzzle time? It seems like that can only cause disaster when I finally run out of puzzles. My usual sources seem to be drying up, but I guess I have a few left. This one is from the xkcd forums (though, reworded):
Alice and Bob are going to be playing a game where they must drink poison. There are three bottles of poison to be divided among the players and the goal is to drink as little as possible (as some amount of the poison is lethal, but the players do not know how much). The game begins with Alice pouring some amount of the first bottle into a cup, then Bob selects if he would like to drink everything in the cup or everything in the bottle. Whichever one he does not choose Alice must drink. Then Alice pours some of the second bottle into a cup and Bob chooses who will drink from the cup and who will drink from the bottle, finally the third bottle is divided the same way. A special rule is in place however, Bob is not allowed to select from the cup all three times, he must select the bottle at least once. How should Alice divide up the bottles to minimize the amount she must drink?

Poker Time

Guess it's time to blog again. The double draw poker problem has been up for ages, and my readership (yes, you) is probably bored of it.

The solution isn't hard to get at; the first thing is to notice that if Bob is able to grab an ace-high straight flush with his first move then the game is over right away (one might call an ace-high straight flush a "royal flush", but I absolutely hate that notation). So we see that Alice must block all four of the ace-high straight flushes right away, perhaps by taking all four aces and some other random card.

If Bob responds to this by taking any straight flush, Alice can get an ace-high straight flush and win (Bob cannot match that hand because he cannot access any of the aces). However, if Bob responds by grabbing all the kings, then Alice cannot do anything. She can get a queen-high straight flush, but Bob can beat that easily. Apparently just grabbing all the aces does not work.

One can try all the kings or whatever, but it becomes apparent pretty fast that taking all the tens is the solution. After Bob's next move, Alice grabs an ace-high straight flush if she can, or a ten-high straight flush if she cannot. Without any tens, the best Bob can do is a nine-high straight flush.

Pretty simple, I know, but for some reason I like it.

Double Draw Poker

Alright, a bit of an unusual puzzle this time, but I thought it was sort of neat. I first found this on the xkcd forums:
Alice and Bob are going to play a poker variant. A standard 52 card deck is laid out face up in front of them. Alice goes first, and gets to select any 5 cards from the deck. Then Bob gets to select any 5 cards that are still in the deck (not being allowed to select any cards Alice selected). Then Alice may discard any of her cards and replace them with cards still in the deck (discarded cards are removed from the game and do not return to the deck). Then Bob may discard any of his cards and replace them with cards from the deck. At this point, the game ends and whoever has the better poker hand of 5 cards wins. If both players have equal ranked hands, then Bob wins the game (that being the compensation for going second). Who wins this game when played optimally?

If you don't know the rankings of poker hands, you are beyond help. I guess you could look them up, but I actually would say you won't find this puzzle even slightly interesting so don't even bother.

Genuine Coins

Research is hard, blogging is easy.... In conclusion, time for the solution to that mass puzzle from last time.

I guess I will go through the solution as I found it. First of all, note that if your first measurement is imbalanced, you are probably home free. If the left is heavier than the right, then the right cannot have more than 1 fake mass, and therefore you can divide the right into two piles (throwing out an odd mass if you need to) and use those for your second weighing. Whichever side is lighter cannot have any fakes, and if they are both equal then neither has fakes. Thus if the first measurement has N masses to a side, then if it is unbalanced you can find N/2 genuine masses (or (N-1)/2 if N is odd).

This means that we only need to solve the case where the first measurement is balanced, and this is much more difficult to deal with (as you almost certainly found if you tried to solve it). My first solution found at least 10 genuine masses as follows. Begin by dividing up the masses into 4 groups: A has 10 masses, B has 20, C has 30 and D has 40. Next, weigh A+B against C; if this is imbalanced, use the method above to find at least 15 genuine masses. If it is balanced, this means that A+B and C each have the same number of fakes, either 0, 1, or 2 each, and so D must have 0, 2, or 4 fakes. This restriction is very important. Next, weigh A+C against D. Suppose this is balanced as well. Then D must have 2 fakes (it can't have 0, for C would have 2 then), but if D has 2 fakes and this measurement is balanced, then B is free of fakes. Alright, suppose instead this measurement has D heavy. If D has 2 fakes, C must have 1 and A+B has 1, but it's not in A, so A is genuine; if D instead had 4 fakes, A is still genuine. Finally, if A+C is heavy, then D has 0 fakes.

So, each result in our second measurement points to one coin pile, D heavy implies A, balanced implies B and A+C heavy implies D. Thus we are guaranteed to find at least 10 genuine masses.

We can generalize this, have 4 yet unspecified piles, A,B,C,D and use the method above. This means we must obey the equations:
A+B=C
A+C=D
A+B+C+D=100

Three equations for 4 unknowns, so there is a one-dimensional degree of freedom. We use that freedom to maximize min{A,B,D}. To be strict we also want to make sure that (A+B)/2 isn't too small, just in case the first measurement is imbalanced, but that isn't really an issue.

One can solve those equations to get
A=3D/2-50
B=100-2D
C=50-D/2

Just looking at integer values of D and maximizing min{A,B,D}, the solution is when D=42:
A=13, B=16, C=29

So this guarantees getting 13 genuine masses. One funny point: there is a "better" solution at D=300/7=42+6/7 with
A=100/7=14+2/7
B=100/7=14+2/7
C=200/7=28+4/7

So, if you can use fractional masses, this is better.

Anyway, on Tanya Khovanova’s Math Blog they discuss a solution with 14 masses, but never explain it fully. This might be related to my fractional solution, but that is slightly crazy. Anyway, this solution can be constructed using the following structure for piles: A=14, B=14, C=29, D=42, E=1

First weigh A+B+E vs C, if it is imbalanced, then you can find 14 genuine ones. If it is balanced, then D again has 0,2, or 4 fakes in it, corresponding to C having 2,1, and 0 fakes in it.

Next weigh A+C vs D+E. If this is balanced, then D must have 2 fakes, C has 1 and A has 1, so B+E are all genuine (had D no fakes, C would have 2 and E cannot make up for it). If this measurement has D+E heavy, then D cannot have 0 fakes (C would have 2) so if D has 2 fakes, C has 1 and A cannot have any, so A is genuine (if D has 4 fakes, A is also genuine). Finally, if our measurement has A+C heavy, then D cannot have 2 fakes in it, so D is genuine.

So we have A+C heavy implies D is genuine, D+E heavy implies A genuine, and balanced implies B+E genuine. This guarantees at least 14 genuine masses are found. The word genuine has lost all meaning to me at this point.
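
One can check this strategy exhaustively, since only the fake counts per pile matter (the heavier pan carries more fakes, both weighings having equal coin counts: 29 vs 29, then 43 vs 43). A brute-force sketch with my own labels:

```python
def violations():
    """Fake distributions (a, b, c, d, e over piles A=14, B=14, C=29, D=42, E=1)
    that would break the claimed conclusions."""
    bad = []
    for a in range(5):
        for b in range(5):
            for c in range(5):
                for e in (0, 1):
                    d = 4 - a - b - c - e
                    if d < 0:
                        continue
                    if a + b + e != c:  # first weighing imbalanced:
                        continue        # that branch is handled by the halving trick
                    left, right = a + c, d + e  # second weighing: A+C vs D+E
                    if left == right and (b or e):
                        bad.append((a, b, c, d, e))  # B and E should be genuine
                    if right > left and a:
                        bad.append((a, b, c, d, e))  # A should be genuine
                    if left > right and d:
                        bad.append((a, b, c, d, e))  # D should be genuine
    return bad
```

violations() returns an empty list, so each branch of the decision table really does point at a fake-free pile of at least 14 masses.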

Masses Again

Alright, another puzzle from Tanya Khovanova’s Math Blog. This one is another counterfeit mass thing.
There is a pile of 100 masses, and 4 of them are slightly heavier than the others. The 96 'true' masses are identical to each other in weight, and the 4 'fakes' are slightly heavier, but also identical to each other in weight. Using 2 measurements on a balance scale, identify at least one 'true' mass.

As a follow-up, you can try to identify a collection of true masses, how many can you find in those two measurements? For rules on the balance scale, it is the same as the old one from the counterfeit masses puzzle.

I know that typically people make the counterfeits lighter than the genuine ones, but then it screws me up when I write the solution because I want the heavy ones to be the odd ones out.

Axiom Of Hats

Man, I meant to blog one more time before the go congress, but blogging is so hard. Anyway, solution time. Puzzle from last time was another hat one. The solution is rather similar to puzzles I have posted here before, but it still took me a little while to get it.

As always, it is easier to start with the two person, two colour case. Suppose the two colours are white and black, and we have two rooms. Label the two rooms white and black and go into the room that corresponds to the hat colour of the other person. If you both have the same colour, you will go to the same room, but if you have different hat colours you will go to different rooms.

Naturally, a strategy for the more general case must just be a function that maps from the hats you see to the room you go to. Let's try two hat colours and N people next.

We have two rooms again, call one room even and one odd. If you see an odd number of black hats, go to the odd room, if you see an even number of black hats, go to the even room. Then, if the total number of black hats is even, white-hatted people will go to the even room and black-hatted people will go to the odd room. Naturally it works the other way if the total number of black hats is odd. One could also arrive at this strategy by assigning the rooms numbers 0 and 1 and call white 0 and black 1. Then add the hat colours you see mod 2 and go to that room.

This generalizes to the strategy for N people and K colours. First number the hat colours 0 through K-1 and number the rooms 0 through K-1. Then add the hat colours you see together mod K and go to that room. We can see this strategy works by considering the sum of all the hats (call that S); a person with hat colour x will go to the room S-x mod K. If two people with hat colours x and y go to the same room, then S-x=S-y mod K, so x=y mod K and thus x=y. Two people going to the same room means they have the same hat colour, so this solution works.
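
The mod-K strategy is short enough to simulate. A quick Python sketch (function names are mine):

```python
import random

def assign_rooms(hats, k):
    """Each logician adds the hats they can see mod k and goes to that room.
    Returns a map from room number to the set of hat colours inside it."""
    total = sum(hats)
    rooms = {}
    for hat in hats:
        room = (total - hat) % k  # sum of everyone else's hats, mod k
        rooms.setdefault(room, set()).add(hat)
    return rooms

rng = random.Random(0)
ok = True
for _ in range(100):
    k = rng.randint(2, 6)
    hats = [rng.randrange(k) for _ in range(1000)]
    ok = ok and all(len(colours) == 1 for colours in assign_rooms(hats, k).values())
```

ok stays True: every occupied room ends up with exactly one hat colour.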

Generalizing this to the case of countably infinite hat colours is trivial: you can just use the same solution and forget the mod K stuff. Each hat is a natural number, so add together the hats you see and go to that room.

With infinite logicians it is a bit more tricky. In this case there are infinitely many of them to add together, so you can't just go to the 'infinity room' and be done with it. Since we have been handed the Axiom of Choice, we might as well make use of it.

What can one do with the Axiom of Choice? In most cases, the most efficient thing to do is the same old equivalence class thing. Let us construct an equivalence class of hat sequences. Two sequences are said to be the same if they differ in only finitely many places. Now we can use the axiom of choice to construct a set S which contains exactly one element from each equivalence class. Since each logician can see all but their own hat, they can all determine what equivalence class we are in, and thus can select an element x in S that differs from the real sequence of hats in finitely many places. We will add together those finitely many hats to construct our solution.

Alright, so everybody sees most of the full sequence of hats y[i], and they all have agreed on a sequence x[i] that differs from the sequence y[i] in finitely many places; they must now choose a room to go to. Let us first suppose that all the hats are "correct", that is x[i]=y[i] everywhere. Then it is simple: you just go to the room that x[i] says you go to and it will all work (the hats and the rooms have both been numbered as naturals). Now, let us suppose that person j is wearing the wrong hat colour, that is x[j] ≠ y[j], but all the other x[i]=y[i]. Person j is going to go to the room suggested by the hat x[j], which is x[j]-y[j] rooms too many, so everybody else must also go x[j]-y[j] rooms too many. It is possible for this number to be negative, so first we must number the rooms as integers instead of naturals. So, the strategy is that when you are person n and you see hat j wrong, you go to room x[n]+x[j]-y[j]. Naturally, if you see more than one wrong hat, just add up x[i]-y[i] over all the hats that are wrong (there are only finitely many), add that to your own x[n], and go to that room.
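
Nothing about the room rule itself is infinite, so the bookkeeping can be sketched with finite lists standing in for the sequences (agreeing on x via the Axiom of Choice is of course not simulable). My own toy version:

```python
def room_for(n, x, y):
    """Room for logician n, who knows the agreed representative x in full
    and sees every actual hat y[i] except their own y[n]."""
    correction = sum(x[i] - y[i] for i in range(len(y)) if i != n)
    return x[n] + correction  # rooms are indexed by integers, so this may be negative

x = [0, 0, 0, 0, 0, 0]  # agreed representative of the equivalence class
y = [0, 3, 0, 1, 0, 3]  # actual hats; differs from x in only finitely many places
rooms = [room_for(n, x, y) for n in range(len(y))]
```

Here rooms comes out as [-7, -4, -7, -6, -7, -4]: equal hats land in equal rooms and distinct hats in distinct rooms, exactly the property required.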

Tanya Khovanova has a complaint with this solution: even though the Axiom of Choice allows you to select the set S that we will use for the solution, it does not guarantee that there is a method of distributing the set S to a collection of people. If they cannot distribute the set S, they cannot guarantee that they all have the same one (and I would suppose they almost certainly do not, but that is a bit of an odd statement to make). I am not sure if I agree with her on this point, but I generally am against the Axiom of Choice in the first place, so it is a bit of a non-issue for me. At any rate, I suppose one could distinguish another axiom where you not only have the Axiom of Choice but may actually describe the set it gives you; this puzzle then simply needs that more powerful axiom to solve.

Hats Again

Time for new puzzles that Mark told me but he actually got from Tanya Khovanova's math blog:
1000 logicians are going to play a game against The Adversary. The Adversary has a collection of hats that come in 6 different colours. There are also 6 rooms. The Adversary is going to gather all the logicians together and put a hat on each of their heads, and then the logicians must simultaneously decide what room they are going to enter. After the logicians have each gone to their chosen rooms, each room must have logicians with only one colour of hat. The logicians may strategize before the hats are assigned, find a strategy that guarantees that each room will have a unique hat colour in it. Standard hat rules apply, of course.

You can assume that the logicians know what colours are available. If you like, you can also generalize to N logicians and K hat colours, it's not much harder. What is harder is allowing N and K to be infinite, you will need the axiom of choice in this case, but it's also an interesting puzzle.

Flip A Coin

About time for the solution to the lowest number game from last time. I tried solving out the harder version myself, but I didn't manage to get anywhere, more about that later.

Before I construct the strategy, I want to point out a somewhat neat theorem about the game and discuss an incorrect strategy. One idea is just to pick 1 with probability 1/2 and 2 with probability 1/2; since this is a zero sum game, everyone wins zero on average if they all do the same thing. Seems like a reasonable idea, but you can beat this by always picking 3. There is a 50% chance that the other players will pick the same number as each other and you just win; the rest of the time, they pick different numbers and you lose. Thus your average winnings are 0.5·$2 - 0.5·$1 = $0.50, which is more than zero.

This hints at an interesting idea: there is no number that is the largest number players can pick in a symmetric equilibrium. Suppose each player has a strategy S which is a series of probabilities P(n) for picking number n, and suppose there is a largest number K that S suggests you will ever pick (that is, P(n)=0 for n>K). Now, you generate your random number, and it says you should play K. I say you should play K+1. If K was going to win, so will K+1 (it would only win if the opponents were about to pick the same number). But it is possible K+1 will win and K will not (specifically if both other opponents pick K). Thus it is advantageous for you to change strategy S into a slightly different strategy, and so it is not an equilibrium.
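
The "always pick 3" claim is easy to verify exactly; a tiny enumeration (mine) over the opponents' coin flips:

```python
from fractions import Fraction

# Two opponents each play 1 or 2 with probability 1/2; we always write 3.
ev = Fraction(0)
for a in (1, 2):
    for b in (1, 2):
        if a == b:
            ev += Fraction(1, 4) * 2     # opponents tie; our 3 is the unique lowest: +$2
        else:
            ev += Fraction(1, 4) * (-1)  # one opponent is the unique lowest: -$1
```

ev works out to exactly $1/2 per round.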

Alright, to find the actual solution, let us assume that we are going to search for a symmetric Nash equilibrium. I think there is a theorem that symmetric games always have at least one symmetric equilibrium, but I cannot find any such theorem online. Anyway, we must find a list of P(n) that is the chance a player picks n, and we know there is no K such that P(n)=0 for n>K (that doesn't actually help us, but I think it's neat).

OK, suppose we had P(n), what properties would it have? First of all, no player would do any better by deviating from the strategy. Let us also assume that they will do no worse (so we get equalities instead of inequalities). Second, we know that when everybody plays P(n) they have expected winnings of zero (just by symmetry and it being a zero sum game).

Suppose 2 of the players are playing the strategy suggested by P(n) and the third player decides to write down 1. What are his expected winnings? They are:
2(1-P(1))² - P(1)(1-P(1)) - P(1)(1-P(1))

That is, he wins $2 if both opponents do not pick 1, and he loses $1 if one picks 1 while the other does not and this can happen two ways. In other cases he wins nothing and loses nothing. Setting this to zero (player 3 does no worse to unilaterally change his strategy), we see that P(1)=1/2.

What happens if he instead changes his strategy to just write down 2? His expected winnings are now given by
2P(1)² + 2(1-P(1)-P(2))² - 2P(1)(1-P(1)) - 2P(2)(1-P(1)-P(2))

That is, he wins $2 if both opponents pick 1 or if they both pick numbers larger than 2. He loses $1 if one opponent picks 1 and the other does not (can happen in two ways) and he loses $1 if one opponent picks 2 and the other picks larger than 2 (can happen 2 ways). Setting this to zero and solving for P(2) we get P(2)=1/4.

This suggests P(n)=1/2^n is the solution we are looking for. This can be confirmed by looking at what happens if the third player writes down A. His winnings are
2Σn to A-1 P(n)² + 2(1-Σn to A P(n))² - 2Σn to A P(n)(1-Σj to n P(j))

Σn to A means sum n from 1 to A. These terms can be explained the same as in the A=2 case before. Subbing in P(n)=1/2^n we find that this equals zero for all A, and so this strategy is a Nash equilibrium.

Actually, this strategy is quite easy to perform in real life, you simply flip a coin until it comes up heads and write down the number of flips it took, this gives the correct P(n).
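
The "equals zero for all A" check doesn't need Maple either: with P(n)=1/2^n all the tail sums are geometric, so exact rational arithmetic does it. A sketch (function names mine):

```python
from fractions import Fraction

def P(n):
    return Fraction(1, 2 ** n)

def cum(m):
    return 1 - Fraction(1, 2 ** m)  # P(1) + ... + P(m), a geometric sum

def deviation_payoff(A):
    """Expected winnings for writing down A against two P(n)=1/2^n opponents."""
    win_tie_below = 2 * sum(P(n) ** 2 for n in range(1, A))  # opponents tie on some n < A
    win_both_above = 2 * (1 - cum(A)) ** 2                   # both opponents pick above A
    lose = 2 * sum(P(n) * (1 - cum(n)) for n in range(1, A + 1))
    return win_tie_below + win_both_above - lose
```

deviation_payoff(A) comes out to exactly 0 for every A, confirming the equilibrium.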

For the case where ties result in everybody losing their dollar, the game is more difficult: since it is no longer zero sum, there is a chance of money exiting the system. You can try something similar, where your expected winnings each round are minus the sum of P(n)³ (a third of the total amount of money lost each round, on average), but then you cannot iteratively solve for each P(n) as we did; you just have infinitely many coupled equations.

The 4 player game (with friendly ties) also has this problem, since a player playing 1 wins if nobody else picked 1, but also does not lose if the other two players pick the same number, so the sum over P(n)² shows up in your first equation. The 4 player game also has annoying Nash equilibria, like two players picking 1 and the other two picking 2.

Anyway, enough about that game.

Lowest Number Game

So, most of the puzzles Mark has been feeding me lately have actually come from a single website, rather than from his crazy new hypothetical officemates. For 'credit where credit is due' purposes, I will direct your attention to Tanya Khovanova’s Math Blog, which still has a few more puzzles on it that I am going to put up here, but that can wait. First, a game that I learned about some time ago, but never knew before that it had such an interesting analysis:
Three people each put $1 on the table. They then each simultaneously write down a positive integer. The integers chosen are then revealed and whoever wrote down the lowest unique one wins the $3. If everybody wrote down the same integer, then they each get their $1 back. Find the equilibrium strategy for this game.

This is a fairly standard game, and some people actually play it with large groups, it works out pretty well. The 3 player game is quite solvable though, and has a fairly cool solution. I suppose you could also change ties to "everybody loses their $1" and see what change it makes, I haven't done that yet. I guess there might be multiple equilibria in the strategy space, so the puzzle is to find any of them.

Two At Most

Time to post the solution to the follow-up coin problem from last time. The solution isn't hard to come by, and is basically an extension of the first move of the solution from last time. Last time, the first measurement was the 1,2,3 against the 6, and since they balanced, you knew for a fact that the 6 was in fact the 6 (it is the only single coin that can weigh as much as three others). This time you weigh the 1,2,3,4,5 against the 7,8. They will come out balanced, but there is no way the left side weighs less than 15 and no way the right side weighs more than 15, so they must each weigh exactly 15. This leaves the 6 uniquely identified.
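This claim is small enough to verify exhaustively; a quick sketch (the encoding is mine) checking every possible assignment of the masses to the two pans and the leftover coin:

```python
from itertools import permutations

# If five of the coins (masses 1..8 in some unknown order) balance
# against two coins, what can the coin left off the scale be?
leftovers = set()
for masses in permutations(range(1, 9)):
    left, right, off = masses[:5], masses[5:7], masses[7]
    if sum(left) == sum(right):
        leftovers.add(off)

print(leftovers)  # {6}
```

The only way five coins can sum to as little as two coins can sum to is 1+2+3+4+5 = 7+8 = 15, so the leftover is always the 6.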

Actually, this question has an interesting generalization: if we have N coins with masses 1 through N and we know their masses, how many measurements of a balance scale does it take to prove the mass of one of the coins? It is easy to prove that you can always do it in three or fewer measurements, as follows: N can be written as a sum of 3 triangular numbers (this is a result of the Fermat polygonal number theorem, though for triangular numbers it was initially shown by Gauss). Call these numbers A, B, and C, and assume that A ≥ B ≥ C. Weigh 1,2,3,...,k against A such that they balance (we know such a k exists, since A is triangular); the balancing proves that the coin on the right has mass at least A. Next weigh 1,2,3,...,m and A against (B+A), where (B+A) denotes the single coin of mass B+A, and m is defined by 1+2+3+...+m=B. When they balance, it proves that (B+A) has mass at least B+A. Finally weigh 1,2,3,...,j and (B+A) against N, where j is defined by 1+2+3+...+j=C. When they balance, it proves that N has mass at least N. But N has mass at most N, so N is identified.
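The first step of the argument can be sketched in code. This is just an illustration of the Gauss fact that every N is a sum of three triangular numbers (allowing 0); the function name and search strategy are my own:

```python
def three_triangulars(n):
    """Return (a, b, c) with a >= b >= c >= 0, all triangular numbers
    (0, 1, 3, 6, 10, ...), and a + b + c == n. Gauss proved such a
    decomposition always exists."""
    tris = [k * (k + 1) // 2 for k in range(n + 1) if k * (k + 1) // 2 <= n]
    tset = set(tris)
    for a in reversed(tris):                 # try the largest part first
        for b in (t for t in tris if t <= a):
            c = n - a - b
            if 0 <= c <= b and c in tset:
                return a, b, c
    return None  # unreachable, by Gauss's theorem

print(three_triangulars(8))   # e.g. (6, 1, 1)
print(three_triangulars(64))  # a decomposition for the 64-coin case
```

From a decomposition (A, B, C) the three weighings above follow directly; if B or C comes out 0, you even get away with fewer weighings.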

Actually, two weighings are always sufficient to prove the mass of one coin; this was proven in a paper entitled Baron Munchhausen’s Sequence, which also contains the proof from the previous paragraph. The proof is rather long, though, and there is little sense in reproducing it here.

Probably Correct

Alright, Canada Day seems like a good time to post the solution for the labeled masses problem. Working on it yourself, you probably found it is best to assume that the coins are labeled correctly and attempt to prove that, rather than something else. Any method that would actually prove the coins are labeled correctly would also show that they are labeled incorrectly if they are not (if it didn't do that, one would hardly accept that it proves they are labeled correctly). Actually, this gives us something of a rewording of the problem, which I will later transform into the follow-up problem: assume the coins are unlabeled, but you know which coin is which; you must prove to an audience (of mathematical puzzle solvers) that you know which coin is which using only 2 measurings.

Anyway, moving on to the solution (to the problem as initially worded). It's tricky to do anything involving weighing 1 coin against 2 right away, since if you weigh 1+2 against 3, even if it balances you cannot be sure that it was not the 1+3 against the 4 or the 2+4 against the 6 or something. You could also try having 4 coins on one side, but that cannot work, since on the second measurement you would have to split up those 4 coins or you could not tell them apart. Perhaps something involving 2 vs. 2 works, but I didn't find anything. The thing that does work is to weigh the 1+2+3 against the 6. If it is imbalanced, we are done; if it is balanced, then you have identified the 6 (it is the only coin that can have as much mass as 3 others combined) and you have identified a group that is the {1,2,3}, since they are the only 3 coins light enough that a single coin could balance them.

So, in one measurement we have identified the {1,2,3}, the {4,5} and the {6}. The next measurement is to do the 6 and the 1 against the 5 and the 3.

If they are labeled correctly, the 5+3 should be heavier than the 6+1 (so, if that doesn't happen, we are done). Since the 6 is identified, and the 1 must weigh at least 1, the 6+1 must weigh at least 7, but since the 5+3 came out heavier, the 5 could not be any lighter and neither could the 3, so the 5 and the 3 are correct. By similar logic, the 5+3 cannot together weigh more than 8, so the 1 cannot afford to be any heavier if the 6+1 is to be less than 8; thus the 1 must be the 1. That leaves the 2 and 4 both uniquely identified.

This completes the solution. I cannot prove that there aren't any other solutions, but I greatly doubt their existence.
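For the skeptical, the two measurements can be checked by brute force over all 6! = 720 possible labelings; the encoding below is mine.

```python
from itertools import permutations

# m[i] is the true mass of the coin labeled i+1. The correct labeling
# produces the outcomes "1+2+3 balances 6" and "5+3 outweighs 6+1";
# check that no incorrect labeling produces both outcomes too.
def passes(m):
    first_balances = m[0] + m[1] + m[2] == m[5]   # labels 1,2,3 vs label 6
    second_tips = m[4] + m[2] > m[5] + m[0]        # labels 5,3 vs labels 6,1
    return first_balances and second_tips

survivors = [m for m in permutations(range(1, 7)) if passes(m)]
print(survivors)  # [(1, 2, 3, 4, 5, 6)]
```

Only the correct labeling survives both outcomes, which is exactly what the argument above establishes.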

Next, the follow-up:
A mathemagician has 8 coins, with masses 1 through 8, he knows which coin is which. He intends to prove to an audience that he knows the correct mass of at least one of the coins using a single measurement of a balance scale. What measurement can he perform to prove the mass of any one of the coins?

You may assume that the audience knows that the coins have masses 1 through 8, but they do not know which one is which. After one measurement, the mathemagician must be able to select a single coin and say "I have now proven this one is the x" for some value of x. You may also assume the audience is a bunch of mathematicians, they will not be tricked by some sort of lies.

Labeled Masses

So, Mark seems to be quite the source of new puzzles recently; I guess it's those crazy people in his office or something. Anyway, here is the latest one from him:
There are 6 coins on a table, having masses of 1,2,3,4,5,6 grams. They are also labeled 1 through 6. Using two weighings of a balance scale, you must prove either that they are correctly labeled or that they are not.

It is the same balance scale as the counterfeit masses puzzle, that is to say, using the balance scale to make a measurement can be visualized as follows: The scale has two places to put coins, you put as many coins as you like in each of those two places, then you push a button and the scale tells you which of the two places has more mass on it, or that they are balanced. The button will work 2 times and then the scale will break.

Board Of Hamming

OK, I think the checkerboard puzzle has been up for long enough now. Time for the solution.

Mark managed to find the solution much faster than I did, but that's because I managed to confuse myself for quite some time with Hamming codes, having gotten far too used to that stuff doing these other puzzles. To construct the solution, let us assume that the geometry of the board plays no part, and that there simply are N squares that have stones (or not) on them. Alice must remove or add a stone to be able to communicate a single square to Bob. A solution seems possible at least, since Alice has N possible moves and must communicate one of N possible choices, so there is theoretically enough room, but no room can be wasted from any position.

First let us try to solve the N=2 case. The board configuration is represented by an element of {00,01,10,11}, where "1" means that square has a stone and "0" means it does not. Bob must have a function that maps from that set to {A,B}, where "A" is the square on the right and "B" is the square on the left. From any given configuration, Alice must perform exactly 1 bitflip and thereby specify "A" or "B" to Bob. Alice needing this ability puts restrictions on Bob's possible functions.

A function that works for Bob is f(00)=f(10)=A, f(01)=f(11)=B. The first bit is ignored and the second bit specifies which square is being pointed at. Alice simply looks at the board: if it is already pointing at the correct answer, she flips the first bit; if instead it is pointing at the incorrect answer, she flips the second bit. This will always give Alice a move to make. Note that Alice needed to reserve a move as the "nothing move" just in case The Adversary chose a position that she didn't want to move from.

Let us try the N=3 case. Now we need a function from {000,001,010,011,100,101,110,111} on to {A,B,C}. Let us assume that The Adversary has set up the board in position 000 and has pointed at one of the squares A, B, or C. Alice must change the board to one of {001, 010, 100}, so without loss of generality, we might as well assume f(001)=A, f(010)=B, and f(100)=C. Next, assume The Adversary has set up 110, meaning Alice must choose from {010,100,111}; since f(010)=B and f(100)=C, we must have f(111)=A or The Adversary will defeat us. Next, assume The Adversary has set up 011, so Alice must choose from {111,001,010}; since f(111)=A, f(001)=A, and f(010)=B, The Adversary can select square C in this case and win. This proves there cannot exist a function that solves the puzzle on a board with 3 squares.
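The N=2 and N=3 cases are small enough to settle by exhaustive search over all of Bob's possible functions; here is a sketch (the representation details are my own):

```python
from itertools import product

def works(n, f):
    """f maps each n-bit board (a tuple of 0/1) to a target square 0..n-1.
    The strategy works if, from every board, Alice's n possible single
    bitflips can reach every possible target."""
    for board in product((0, 1), repeat=n):
        reachable = set()
        for i in range(n):
            flipped = list(board)
            flipped[i] ^= 1
            reachable.add(f[tuple(flipped)])
        if len(reachable) < n:  # some target square is unreachable
            return False
    return True

def any_strategy(n):
    """Does any function from boards to squares work for Bob?"""
    boards = list(product((0, 1), repeat=n))
    for values in product(range(n), repeat=len(boards)):
        if works(n, dict(zip(boards, values))):
            return True
    return False

print(any_strategy(2), any_strategy(3))  # True False
```

This confirms both the explicit N=2 strategy above and the N=3 impossibility argument.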

The failure in the N=3 case is not surprising if you realize that f has to map from a set with 8 elements on to a set with 3 elements, and this cannot be done symmetrically to the set with 3 elements (3 does not divide 8, that is). In general, we will have to find a function from a set with 2^N elements to a set with N elements, so this will only divide evenly if N is a power of 2. For the full problem N=64, so there is still hope.

You can try out the N=4 case next if you like, just by explicitly constructing the function, it all works out if you do it right, but its not especially illuminating as it is difficult to see how to generalize it just from one particular function (at least it was difficult for me).

For general N, one can get a hint of the solution by remembering that from any position there must be a "dummy move". That is, suppose you have a strategy f (mapping from board positions to special squares). If The Adversary sets up a position x that has the feature that f(x) is already the special square, Alice must be able to make a "nothing move", a move that does not change the value of f. Let us assume that there is one square on the board that f is not sensitive to, so Alice can always just toggle that one if she needs to. Then there really are only N-1 squares on the board that f looks at, and Alice has the choice of whether she wants to toggle one of the N-1 squares or not. The Adversary still has N choices of special square, though. We suspect that N should be a power of 2 for a solution to exist, so N-1 is 1 less than a power of two, of course. This immediately made me think of Hamming codes, as those only exist at length M when M is 1 less than a power of 2.

Let us go over what Hamming codes are again. Consider the set of all binary strings of length M (call this set S). We want to construct a set H that is a subset of S with the following feature. For every element x in S there exists an element y in H such that y is no more than 1 bitflip away from x. In the Sam and Ray problem, this was used so that Sam could specify a member of S accurate to 1 bitflip by only specifying a member of H (and it took fewer bits of data to specify an element of H than to specify an element of S).

For our game, let us try the following strategy for Bob. Bob will look at the N-1 squares on the board (ignoring the agreed-upon "dummy square"); this specifies a binary string of length N-1, which will either be exactly a Hamming code or be 1 bitflip away from one. If it is 1 bitflip away from one, then the location of the bit that needs to be flipped is the special square. If it is 0 bitflips away from a Hamming code, then the dummy square is the special square.

Can Alice always manage to communicate any square to Bob? If The Adversary sets up a Hamming code exactly, then it is trivial: Alice simply flips the bit on the special square. If The Adversary chooses the dummy square as the special square, then Alice can always do 1 bitflip to put the board into a Hamming code position (unless it is already in one, in which case she just flips the dummy square). The remaining cases (which are, admittedly, the majority of the cases) still need to be checked. First let us reexamine how the Hamming codes actually look.

Consider the set S, binary strings of length M, where M is one less than a power of two, so M+1=2^K. Number the bits 1 through M, and divide them into "parity bits" and "data bits" as follows: a parity bit is one whose number is a power of 2; all others are data bits. We can see that there are K parity bits and M-K data bits. We will construct the set of Hamming codes, H, by letting the data bits run over all possible values, while the parity bits check the parities of the data bits. The parity bits are labeled with the powers of two, so they are bits 1,2,4,8,16,... The parity bit labeled 1 will check the parities of data bits 3,5,7,9,... (all the numbers with a 1 in the ones place of their binary expansion). The parity bit labeled 2 will check the parities of data bits 3,6,7,10,11,... (all the numbers with a 1 in the twos place of their binary expansion). The parity bit labeled 4 will check the parities of data bits 5,6,7,12,13,14,15,... (all the numbers with a 1 in the fours place of their binary expansion). To check the parities of a bunch of bits means to XOR them together.

To reuse a section of an earlier post, with some slightly changed notation..."for the case M=7, the set H looks like
0000000
1101001
0101010
1000011
1001100
0100101
1100110
0001111
1110000
0011001
1011010
0110011
0111100
1010101
0010110
1111111

Note that bits 3,5,6,7 run over all possible values. Bit 1 checks the parity of bits 3,5,7. Bit 2 checks the parity of bits 3,6,7. Bit 4 checks the parity of bits 5,6,7.

Further note that every bit is being checked by a unique combination of at least two parity bits. Bit 5 is being checked by 4 and 1 (because 5 in binary is represented as 4+1) and only bit 5 is being checked by bits 4 and 1 alone. This idea will hold in general."

"if we are given a binary string of length M, we can find an element of H that is different in no more than 1 spot through the following method: Find the element of H with the same data bits; it will differ in zero or more parity bits. If it differs in at most one parity bit, we are done. If it differs in more than one parity bit, add up the values of those parity bits to find a value j. You will find that if you flip data bit j, all of the parity bits will now be correct and you will have exactly an element of H. Thus you either differ in only one parity bit, or the incorrect parity bits specify which data bit is off from an element of H."

Anyway, you can try this explicitly in the case N=4 and you will find it works out, great, Alice always has a bit she can flip to inform Bob of the special square, how do we prove it works in general? Well, first we need to look at the Hamming codes a bit differently.

Consider a typical Hamming string, like 0110011; this has ones at 2, 3, 6, and 7. The data bits {3,6,7} look like {011,110,111} in binary: their 1's place XORs to 0, their 2's place XORs to 1, and their 4's place XORs to 0, or more simply, they together XOR to 010. Actually, the full set {2,3,6,7} is {010,011,110,111}, which XORs to 000. In general an element of H will do that, so that whole construction was a bit over the top.
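The parity-bit construction and this simpler XOR-to-zero characterization can be cross-checked for M=7; the code below just re-derives the table quoted above.

```python
from itertools import product

# Build H for M = 7: data bits at positions 3, 5, 6, 7 run over all
# values; parity bits 1, 2, 4 check the data bits whose position has
# the corresponding binary digit set.
codewords = []
for d3, d5, d6, d7 in product((0, 1), repeat=4):
    bits = [0] * 8                 # bits[1..7]; bits[0] unused
    bits[3], bits[5], bits[6], bits[7] = d3, d5, d6, d7
    bits[1] = d3 ^ d5 ^ d7         # positions with a 1 in the ones place
    bits[2] = d3 ^ d6 ^ d7         # positions with a 1 in the twos place
    bits[4] = d5 ^ d6 ^ d7         # positions with a 1 in the fours place
    codewords.append(tuple(bits[1:]))

# The simpler characterization: the 1-positions of a codeword XOR to 0.
for word in codewords:
    xor = 0
    for pos, bit in enumerate(word, start=1):
        if bit:
            xor ^= pos
    assert xor == 0

print(len(codewords))  # 16
```

All 16 strings in the quoted table, and no others, come out of this construction.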

Suppose you are given an element of S; to find the corresponding element of H you just XOR the values of all the 1's together, and flip that bit. For example, 1101110 is {1,2,4,5,6} is {001,010,100,101,110}, which XORs to 100, thus you must flip bit 4 to get a Hamming code; we can see that 1100110 does XOR to zero, so it is a Hamming code.

This gives us a more succinct strategy for Alice and Bob (and a proof that it works). Alice and Bob number the squares 0 through N-1 (0 will serve the job of the "dummy square"). Alice then looks at the squares where The Adversary placed stones and XORs their values together, getting a number y. She then XORs the value of the special square with y, and performs her bitflip on the result. Now the values of the squares with stones on them will XOR together to the special square; this works just by the symmetry of XOR (that is, if A XOR B = C then A XOR C = B, even if A, B, C are binary strings). It is easy to see that the solution only works if N is a power of two: suppose N were 45 or something. Then when Alice XORs everything together, she might get a result of 59 (any number as high as 63 is possible), and she cannot flip that square; it does not exist.

This more succinct wording of the strategy is actually the strategy that Mark found initially. I found it rather amusing that we both found the same solution, but had each had a rather different formulation of that solution.
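The succinct strategy is short enough to state in code and verify exhaustively; here is a sketch, checked on an N=8 board rather than the full 64 purely to keep the check fast (any power of two works the same way):

```python
from itertools import product

def alice(board, special):
    """Flip the one square that makes the occupied squares XOR to special."""
    y = 0
    for sq, stone in enumerate(board):
        if stone:
            y ^= sq
    new = list(board)
    new[y ^ special] ^= 1      # add or remove a stone on square y XOR special
    return tuple(new)

def bob(board):
    """XOR the occupied square numbers together to recover the special square."""
    y = 0
    for sq, stone in enumerate(board):
        if stone:
            y ^= sq
    return y

n = 8  # the real puzzle has n = 64
for board in product((0, 1), repeat=n):
    for special in range(n):
        assert bob(alice(board, special)) == special
print("ok")
```

Every board position and every choice of special square checks out, exactly as the symmetry-of-XOR argument says it must.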

Game On A Board

It's a bit quick for me to post a new puzzle, but Mark actually gave me a pretty cool one recently. Also, I have a bit of a backlog of puzzles to put up, so I might as well fire this one out:
Alice and Bob are going to play a game against The Adversary. The game is played on an 8x8 checkerboard, and there is a collection of featureless stones alongside the board. The game begins with Bob being removed from the room, and The Adversary selects one of the squares on the board and declares it the "special square", showing his choice to Alice. The Adversary then places any number of stones on the checkerboard, such that each square has either zero stones or one stone on that square. Alice is then required to either remove exactly one stone from the board or add one stone to any empty square. Bob is then brought into the room and is shown the final configuration of the board. Bob must then successfully identify the special square. Before the game, Alice and Bob may strategize. Find a strategy that guarantees Bob can find the square that was chosen by The Adversary.

In case it wasn't obvious, you may assume The Adversary is an adversary.

Alone At The Party

Time for the solution of the follow up to the shaking hands problem.

I came up with the puzzle itself upon realizing that the people in the initial problem naturally pair up. If one person shook everybody's hand and one person shook nobody's hand, those two people are naturally partners, and everybody else shook exactly one hand of that partnership.

To see how this puzzle plays out, let's consider a simpler case. With N=1, there are 3 people at the party, including yourself. You asked the other two people how many hands they shook, and they each gave you a different answer. The possible answers are in the set {0,1,2}, so you must have heard the answers {0,1}, {0,2}, or {1,2}. It is easy to see that {0,2} is impossible: 2 shakes hands with everybody and 0 shakes hands with nobody, so they couldn't have both gone to the same party. In the case {0,1}, it must have been you who shook hands with the 1, since it wasn't the 0 who did; thus, you shook one hand. In the case {2,1}, 2 must have shaken hands with 1 and with yourself, and so again, you shook hands with exactly 1 person. Therefore we see that for N=1, the answer to the puzzle is 1.

One can arrive at this answer much faster by simply seeing that the two other people shook hands with a different number of people, thus exactly one of them must have shaken hands with you. You don't know if they shook hands with each other, and don't care; you get the answer of 1.

Now, generalizing to N, the 2N people we asked must have given answers from the set {0,1,2,...,2N}, and so exactly one of those must not have appeared. There is no way both 0 and 2N appeared, though, so the answers were either {0,1,2,...,2N-1} or {1,2,3,...,2N}. In the former case, 0 shook hands with nobody, and 2N-1 shook hands with everybody but 0, so let's remove that pair and reduce everybody's number by 1, after which the answers we have heard are {0,1,2,...,2N-3}. N has been reduced by exactly 1, and the answer has been as well, so if F(N) is the answer we seek, F(N)=F(N-1)+1. In the other case, {1,2,3,...,2N}, 2N shook hands with everybody, and 1 only shook hands with 2N, so let's remove that pair and reduce everybody's number by 1 again, after which the answers we have heard are {1,2,3,...,2N-2}; again we see that F(N)=F(N-1)+1. Since F(1)=1, we can see that F(N)=N again, just like the last puzzle.
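For small N, the whole argument can be double-checked by enumerating every possible party; vertex 0 stands in for you, and the graph encoding is my own:

```python
from itertools import combinations, product

def check(n):
    """Enumerate all handshake graphs on 2n+1 people (vertex 0 is you).
    Keep those where the other 2n guests report pairwise distinct
    handshake counts, and collect your possible counts."""
    people = 2 * n + 1
    pairs = list(combinations(range(people), 2))
    answers = set()
    for edges in product((0, 1), repeat=len(pairs)):
        deg = [0] * people
        for present, (a, b) in zip(edges, pairs):
            if present:
                deg[a] += 1
                deg[b] += 1
        others = deg[1:]
        if len(set(others)) == len(others):  # all answers distinct
            answers.add(deg[0])
    return answers

print(check(1), check(2))  # {1} {2}
```

In every valid configuration your handshake count is exactly N, matching F(N)=N.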

Note that if there are an even total number of people at the party, the puzzle cannot be solved. The same reduction down to a simpler case still holds, but when you get down to yourself and 1 other person, you have no way to know if you shook hands with that other person or not. Basically, the two cases split the puzzle into whether or not partners shook hands. When you are a member of a partnership, you need some additional information to finish things off, but if you are the odd man out, then you do not care whether partners shake hands with each other.

More Is Less

So, I find it rather unfortunate that the night after posting my puzzle about shaking hands, I managed to come up with a similar problem that I consider to be more interesting. This leaves me with the issue of either solving the puzzle from last time and then posting the new puzzle as a follow-up, or just posting the new puzzle and solving them both at once next time. The problem with the former is that after the first puzzle is solved, the second puzzle is very easy to do (it's the exact same logic), but it does have one extra thing one needs to notice in order to solve it, so it still is not trivial. The problem with the latter is that I find it unappealing for aesthetic reasons that don't really make sense.

Going with aesthetics, I guess I might as well solve the puzzle from last time now. The logic is very simple. First of all, notice that each person has 2N people available to shake hands with (of the 2N+2 people, they cannot shake their own hand or their partner's), and so the number of hands each person shakes is between 0 and 2N. You asked 2N+1 people how many hands they shook, and you received 2N+1 unique answers, thus you must have received every answer between 0 and 2N, pigeonhole principle and all that. We will identify each person by the number of hands they shook; that is, person k shook k hands, and this assignment gives each person a unique number.

Person 2N must have shaken hands with every person they could, and person 0 must have done the opposite. These two people must be partners, or else those two people did not go to the same party. Thus, (0,2N) is a pair. Now, each person at the party outside of that pair must have shaken hands with exactly one person in that pair (2N, to be specific), so let's just remove the pair (0,2N) and reduce everybody's number by 1. This sets up the exact same situation as the original problem, but we have reduced N to N-1. Everybody's number is 1 lower, but they are all still unique. This means that whatever the solution is to the problem with N (call that F(N)), it will be 1 lower when N is instead N-1. That is to say
F(N) = F(N-1)+1

Which means
F(N)=F(1)+N-1

so we just have to solve the N=1 case.

When N=1, the answers you were told were 0, 1, and 2. 2 and 0 must be partners by the same logic as before, and 1 must be your own partner. Since 2 shook hands with 2 people, the other hand must have been yours, and 0 did not shake your hand. This means that F(1) is 1, and therefore F(N) is N, proving that you shook hands with N people, exactly 1 person from each couple.
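The same sort of exhaustive check as in the follow-up works for this couples version; again the encoding is mine, with partners paired off as (0,1), (2,3), and so on:

```python
from itertools import combinations, product

def check_couples(n):
    """Enumerate all handshake graphs for n+1 couples (2n+2 people);
    you are 0 and your partner is 1, and partners never shake hands.
    Keep graphs where the other 2n+1 guests give distinct answers."""
    people = 2 * n + 2
    partners = {(2 * i, 2 * i + 1) for i in range(n + 1)}
    pairs = [p for p in combinations(range(people), 2) if p not in partners]
    answers = set()
    for edges in product((0, 1), repeat=len(pairs)):
        deg = [0] * people
        for present, (a, b) in zip(edges, pairs):
            if present:
                deg[a] += 1
                deg[b] += 1
        others = deg[1:]
        if len(set(others)) == len(others):
            answers.add(deg[0])
    return answers

print(check_couples(1), check_couples(2))  # {1} {2}
```

Again your count is forced to be exactly N in every valid party.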

Now for the follow-up puzzle:
You go to a party with 2N other people (so there are 2N+1 people in total at this party). At the party people shake hands with other people. No person shakes hands with themselves. After the party, you ask each person that was at the party how many different people they shook hands with, and each person gives you a unique answer. How many different people did you shake hands with?

Very much the same setup, but the "couples" concept has been stripped away.

Shaking Hands

New puzzle time. Mark told me this one a week or two ago, and since he is approximately 90% of my readership I'm not sure why I'm posting it, but here we go:
You and your partner go to a party with N other couples (so there are 2N+2 people in total at this party). At the party people shake hands with other people. No person shakes hands with themselves or with their partner. After the party, you ask each person that was at the party how many different people they shook hands with, and each person gives you a unique answer. How many different people did you shake hands with?

Mark initially asked me this with N=5, and then found it wasn't any more interesting to generalize it to N. I will initially ask it with N and find it isn't any more interesting to specify it to 5.

Less Gladiators

Time for the solution to the second gladiator problem. As before, I guess the thing to do is start with a simpler case.

Suppose Alice has a single gladiator of strength A, and Bob has two gladiators of strengths B1 and B2. Again, Bob needs only decide which of the two gladiators to send first and the whole match is set.

If Bob sends gladiator 1 first, he wins with chance B1/(A+B1) and loses with chance A/(A+B1). If he loses the first game, he then has a B2/(A+B2) chance to win the second game.

This gives Bob's chance of winning as:
B1/(A+B1) + AB2/((A+B1)(A+B2))
= 1 - A²/((A+B1)(A+B2))

Unsurprising: it's one minus Alice's chance of winning both matches. This is symmetric in B1 and B2 again, so it is independent of Bob's strategy. It's true that the full game is also strategy-independent, but it is actually a bit weirder to prove this one. I was not able to come up with a proof myself; this one was in the paper where I found this problem.

Let us consider replacing the gladiators with lightbulbs that have a fixed burn-out time. The burn-out time of one of these idealized lightbulbs is memoryless: if the lightbulb's expected lifetime is x, then the probability that it is still burning after time t is given by e^(-t/x).

If we have a lightbulb with lifetime x and a lightbulb with lifetime y both burning at the same time, the chance that y burns out first is given by x/(x+y). So two gladiators fighting can be appropriately replaced by a "lightbulb match" where both Alice and Bob must turn on one of their bulbs and whichever goes out first is defeated. Since the burnout time is memoryless, the chance that a given lightbulb will burn out in the next five minutes is the same as the chance it had to burn out in the past five minutes, so it is enough that the winner simply leave their light on and the loser must turn on a new bulb. In the end whoever runs out of lightbulbs is the loser of the match.
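The x/(x+y) claim about two competing memoryless bulbs is easy to spot-check by simulation; the sample size and test lifetimes below are arbitrary choices of mine.

```python
import random

def y_dies_first_rate(x, y, trials=200_000, seed=7):
    """Estimate the probability that a memoryless bulb with mean
    lifetime y burns out before one with mean lifetime x, when both
    burn simultaneously. Theory says this is x / (x + y)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # expovariate takes the rate (1 / mean lifetime)
        if rng.expovariate(1 / y) < rng.expovariate(1 / x):
            hits += 1
    return hits / trials

print(y_dies_first_rate(2.0, 3.0))  # near 2 / (2 + 3) = 0.4
```

This is just the standard fact that for competing exponentials, the faster rate wins in proportion to the rates.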

We can see this game could just be replaced by Alice and Bob showing up on separate days. Alice turns on one of her bulbs until it burns out, then she turns on another one, and so on until she runs out of bulbs; when she is out of bulbs, she records the total lifetime of her team. Bob then comes by and does the same thing. Whoever has more total lifetime wins the gladiator match. It's pretty clear there is no strategy in this game, but it is the same as the gladiator match game.

This game is a bit less fun than the other one, since human intuition arrives at the "all strategies are the same" result much faster than in the other game. I guess that's because in the simpler case it's very obvious: if our opponent has only one gladiator, then it does not matter what order we send ours in, since our opponent must defeat them all anyway. In spite of the result being more obvious, though, the proof is actually trickier to think of, because the setup does not have as obvious a "zero expectation value" minigame.

More Gladiators

Alright, this one is the final problem from "Games People Don't Play":
Alice and Bob are competing team managers in a gladiator competition. Alice's gladiators have strength a1, a2, a3...an and Bob's have strength b1, b2, b3...bm. Each round, Alice will select a gladiator from her team and then Bob will select one from his team and the chosen gladiators will fight.

If a gladiator of strength x fights a gladiator of strength y, the chance the one with strength x wins is given by x/(x+y). The losing gladiator is eliminated and then a new round begins with Alice and Bob selecting gladiators to fight. The competition ends when one of the teams has run out of gladiators.

What is the optimal play? In particular, suppose Alice always chooses to send in her strongest gladiator, how should Bob respond?

It's the exact same as last time, but this time the gladiators' strengths are constant, rather than gaining confidence from the fights.

Bob Wins?

People still read my blog, right? Whatever, I'm sure history will look back and thank me for all this, or something like that. Anyway, the gladiator battle problem has been up for long enough; let's take a look at the solution. First, let's consider a simple case, where Alice has one gladiator of strength A, and Bob has two gladiators of strengths B1 and B2. Bob needs only decide which of the two gladiators to send first and the whole match is set. If Bob sends gladiator 1 first, he wins with chance B1/(A+B1) and loses with chance A/(A+B1). If he loses the first game, he then has a B2/(A+B1+B2) chance to win the second game (recall the winner absorbs the loser's strength). In total, the chance Bob has to win is given by:
B1/(A+B1) + AB2/((A+B1)(A+B1+B2))
= (B1(A+B1+B2) + AB2)/((A+B1)(A+B1+B2))
= (A+B1)(B1+B2)/((A+B1)(A+B1+B2))
= (B1+B2)/(A+B1+B2)
This is symmetric under exchanging B1 and B2, meaning that it does not matter which gladiator Bob chooses first. Actually, it is also equivalent to what Bob's chance of winning would be if he could merge his two gladiators into a single one of strength B1+B2 and just have that one fight. This suggests that the whole "gladiator battle" thing is just a complication of a much simpler problem.

Let's replace the gladiators' strengths with amounts of money. Now Alice and Bob each have total sums of money A and B, and each round they each place some money on the table, Alice placing x and Bob placing y. Alice is then given an x/(x+y) chance to win and Bob a y/(x+y) chance to win. The winner of the round takes the money sitting on the table and then a new round begins. The game ends when one person has all the money. In the "gladiator battle" version of the game, the players have restrictions on what amounts they are allowed to bet, but if we can prove that all betting strategies are equally good, then we will have proven that the gladiator game restrictions do not matter.

Alright, when Alice places x and Bob is choosing y, what y maximizes his potential gain? Bob gains x with probability y/(x+y) and loses y with probability x/(x+y), so Bob's expected gain is:
x*y/(x+y)+(-y)*x/(x+y)
Clearly zero. This means that all strategies for Bob are equally good; Bob can play the gladiator battle randomly and his probability of winning is just given by
Σ bi/(Σ aj + Σ bi)
where aj are the strengths of Alice's gladiators and bi are the strengths of Bob's gladiators.
There is another, slightly more elegant, proof of this result. I like it a bit more because the first one felt very unsatisfactory; this other one is very cool, though rather weird.

Consider each gladiator to have an integer strength (this can be accomplished by approximating irrational strengths with rational ones, and then multiplying by denominators until everything is an integer). Each gladiator will have k beads assigned to them, where k is their strength. The beads are then all shuffled and placed on a string (you can imagine this as a bunch of coloured beads, where each gladiator has their own colour). When two gladiators fight, whichever one has the furthest right bead wins the fight and then gets to claim all the beads owned by the defeated gladiator. It is not hard to see that this gives the correct probabilities of the battles and is a suitable representation of the entire tournament. In the end, only the rightmost bead matters, and the strategies are irrelevant.

Human intuition on this problem is a bit funny. Most anybody will tell you that it is obvious that you must always send your strongest first, and the occasional person will actually be convinced that you must use your weakest first, but the idea that it is irrelevant which you do is very hard for people to grasp intuitively.
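The bead argument predicts Bob wins with probability Σbi/(Σaj + Σbi) no matter what order either side sends their gladiators in; here is an exact check on a small made-up example, with the confidence rule (the winner absorbs the loser's strength) written as a recursion:

```python
from fractions import Fraction
from itertools import permutations

def bob_wins(a, b):
    """Bob's exact winning chance when teams fight in the given fixed
    orders, under the confidence rule: the winner of each bout gains
    the loser's strength."""
    if not a:
        return Fraction(1)   # Alice is out of gladiators
    if not b:
        return Fraction(0)   # Bob is out of gladiators
    x, y = a[0], b[0]
    p = Fraction(y, x + y)
    return (p * bob_wins(a[1:], (x + y,) + b[1:])        # Bob's fighter absorbs x
            + (1 - p) * bob_wins((x + y,) + a[1:], b[1:]))  # Alice's absorbs y

alice_team, bob_team = (3, 1, 4), (2, 6)
expected = Fraction(sum(bob_team), sum(alice_team) + sum(bob_team))
results = {bob_wins(a, b)
           for a in permutations(alice_team)
           for b in permutations(bob_team)}
print(results == {expected})  # True
```

Every ordering gives exactly the total-strength ratio, which is just the money-betting argument in miniature.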

Gladiator Battle

Been some time since I had a new puzzle, and there are still two left in the paper "Games People Don't Play", though they are basically the same with a slight variation. Here is the first one:
Alice and Bob are competing team managers in a gladiator competition. Alice's gladiators have strengths a1, a2, a3, ..., an and Bob's have strengths b1, b2, b3, ..., bm. Each round, Alice will select a gladiator from her team and then Bob will select one from his team, and the chosen gladiators will fight.

If a gladiator of strength x fights a gladiator of strength y, the chance the one with strength x wins is given by x/(x+y). The losing gladiator is eliminated and the winning gladiator gains confidence from the fight, increasing the winning gladiator's total strength to x+y. Then a new round begins with Alice and Bob selecting gladiators to fight. The competition ends when one of the teams has run out of gladiators.

What is the optimal play? In particular, suppose Alice always chooses to send in her strongest gladiator, how should Bob respond?

It seems needlessly elaborate, especially given that in theory you have to solve for all possible values of a1 through an and b1 through bm. It actually has a fairly elegant solution, but requires something of a mathematical transformation of the puzzle to reduce it to an easier setup.

XOR Solution

OK, anybody who cares has certainly solved the number puzzle from last time. Let's work through the solution now.

For convenience, I'll state the five facts here
1. N has 2 digits XOR N is even
2. N contains the digit "7" XOR N is prime
3. N is the product of two consecutive odd integers XOR N is one more than a perfect square
4. N is divisible by 11 XOR N is one more than a perfect cube
5. N is a perfect square XOR N has 3 digits

Notationally, I'm going to use each statement with a letter 'a' or 'b' to refer to the first part or the second part of the statement. Specifically, when I say "statement 4a" I mean "N is divisible by 11", and "statement 2b" means "N is prime" and so on.
Alright, most people latch onto 1 and 5 right away, to figure out how many digits the number has, and that is a sensible thing to do. Also note that 5a and 3b are exclusive, since a number cannot be both a perfect square and one more than a perfect square (except for 1, I suppose, but it is easy to verify that 1 is not a solution to the puzzle). Looking at 3a, being the product of two consecutive odd integers means N=m(m+2) for m odd. That is, N=m^2+2m=(m+1)^2-1 for m odd. So 3a says that N is exactly one less than an even perfect square. Thus, by 3, N is next to a perfect square, and so 5a is false.

This gives us that 5b is true, and so 1a is false and 1b is true. But since N is even, and 3a would make it odd, 3b must be true. So we have that N is even, has 3 digits, and is one more than a perfect square. Letting K=N-1, K=m^2 for m odd. Consider 4b now: if it is true then K is both a perfect cube and a perfect square, and so it must be a perfect sixth power. We know 2^6=64, 3^6=729, and 4^6=4096, so the only consistent possibility is K=729. This gives N=730, which satisfies 2a and not 2b, while satisfying 4b and not 4a. (To rule out the branch where 4a is true instead: the three-digit even numbers that are one more than an odd square are 122, 170, 226, 290, 362, 442, 530, 626, 730, 842 and 962, and none of them is divisible by 11, so 4b must indeed be the true half.) This is the answer.
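The whole deduction can also be double-checked by brute force (the helper names here are mine; digits are base ten as stated, and searching below 10000 suffices, since any larger solution would have to be a perfect square by statement 5, which statement 3 then forbids):

```python
def xor(p, q):
    return p != q

def is_square(n):
    r = int(n ** 0.5)
    return r * r == n

def is_cube(n):
    r = round(n ** (1 / 3))
    return any((r + d) ** 3 == n for d in (-1, 0, 1))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def product_of_consecutive_odds(n):
    # n = m(m+2) with m odd means n+1 is an even perfect square
    return is_square(n + 1) and int((n + 1) ** 0.5) % 2 == 0

def satisfies_all(n):
    return (xor(10 <= n <= 99, n % 2 == 0)
            and xor('7' in str(n), is_prime(n))
            and xor(product_of_consecutive_odds(n), is_square(n - 1))
            and xor(n % 11 == 0, is_cube(n - 1))
            and xor(is_square(n), 100 <= n <= 999))

print([n for n in range(1, 10000) if satisfies_all(n)])   # → [730]
```

Only 730 survives, confirming the derivation.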

I recall some more elegant method of eliminating the false statements in 2 and 4, but I cannot remember it now. As usual, feel free to comment with any alternate methods you have.

Number Puzzle

Time for another puzzle. This is a somewhat simple one I found somewhere random on the internets:
There exists a natural number, N, for which all of the following statements are true:

1. N has 2 digits XOR N is even
2. N contains the digit "7" XOR N is prime
3. N is the product of two consecutive odd integers XOR N is one more than a perfect square
4. N is divisible by 11 XOR N is one more than a perfect cube
5. N is a perfect square XOR N has 3 digits

What is N?

For the statements where the puzzle refers to digits, it is meant to be digits in a base ten representation of course.

When I found this puzzle, I dismissed it as being stupid and probably not worth the effort. I'm glad I gave it another shot though; the answer is somewhat pleasing to find. I'll admit it's not amazing, but it shouldn't take more than 10 minutes and it is nice to work through the details.

Cards and Birthdays

Time for an aside from puzzles that I got from "Games People Don't Play", and an aside from puzzles in general. This post is going to be about the birthday paradox and human intuition on probability.

You are probably familiar with the birthday paradox, but I will state it anyway, just to be clear. The paradox (as with most "paradoxes" it isn't really a paradox, but just something unexpected) is the answer to the question:
How many people do you need to have in a room before there is a greater than 50% chance that two of them have the same birthday?

If you ask this question of an average person, they will probably say something like 100 or so. If they have some basic math behind them, they might guess 365/2, so about 180. The actual answer is around 20 (I'll derive it later). The basic misunderstanding comes from the difference between the question posed and the following question:
How many people do you need to have in a room before there is a greater than 50% chance that one of them shares your birthday?

Note the difference: in the first question you just needed any two people to share a birthday, while in the second question you needed somebody to land on a specific birthday. This makes a big difference, since in the second case each person has a 1/365 chance of having your birthday, but in the first case each person has a k/365 chance of landing on somebody's birthday, where k is the number of people in the room before them.

I always say that humans have no good sense of probability, evolution never gave us a need to understand things like the Monty Hall scenario on an intuitive level. We also have a bad habit of noticing patterns where none exist, another byproduct of our evolutionary history. It is fortunate that we have developed the mathematical capabilities to handle probability for us, or science would never have gone anywhere.

The birthday paradox came up in a conversation I was having with my officemate the other day. He found a site that asked the question
What are the odds that you will ever see a shuffled deck of cards in the same order twice in your life?

Ignoring the fact that this question isn't really well-posed, it is something of an interesting question. The site basically gave an argument involving 52! (the number of ways to arrange the cards in the deck) and concluded the answer was "very small". Actually, I guess most of the post was on deriving 52! rather than on what to do with that number. My instinct was to agree with this conclusion, but then a part of my brain said "birthday paradox" and I had to wonder. The other part of me saying back "52! is like 10^68" was a persuasive counterargument though. It basically depended on what the 20 was to the 365 in the birthday paradox... was it the logarithm? the root? just plain old linear with a coefficient? Seems it is time to do the calculation.

Suppose we have a list of N outcomes of some experiment, all equally likely (N=365 for the birthday paradox, or N=52! for this card problem) and we check the experiment k times. What are the odds that we see the same result twice? Alternately, how large does k have to be for the probability we have seen the same result twice to be greater than P?

For the card situation let's assume that k is about 30000, an approximation of checking once a day for 80 years. So for approximation purposes, N>>k>>1. Now, what is the probability that none of our experiments ever get the same result? The chance that the first result is new is 1 (or N/N), the chance the second result is new is (N-1)/N, the chance the third one is new is (N-2)/N, and so on, up to the chance that the kth one is new, which is (N-k+1)/N. So the probability that we never see the same one twice is given by
p = N!/((N-k)! N^k)

We can use Stirling's approximation on this (since Stirling's approximation for n! is embarrassingly good by the time n is 10) to get
p = N^N e^(-N) √(2πN) / ((N-k)^(N-k) e^(-N+k) √(2π(N-k)) N^k)
= (1-k/N)^(k-N-1/2) e^(-k)

This is basically in a form that is ready to solve the birthday paradox. Setting N=365, you find that for k=20, p=0.589 (remember p is the chance that none of them share a birthday), for k=21, p=0.556, for k=22, p=0.524, and for k=23, p=0.492. So we see that with 23 people there is a better than 50% chance that two of them share a birthday.
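These Stirling values can be checked against the exact product (a quick sketch; `p_all_distinct` is my name for it). The exact values agree to about three decimals, with k=23 coming out at 0.4927:

```python
from math import prod

def p_all_distinct(N, k):
    """Exact probability that k uniform draws from N outcomes are all distinct."""
    return prod((N - i) / N for i in range(k))

for k in (20, 21, 22, 23):
    print(k, round(p_all_distinct(365, k), 3))
```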

In the case of N=52! we need a few more approximations. I initially tried to use the fact that (1+x)^m ≈ 1+mx for small x (since k/N is certainly small) but there is a problem with that. Writing out the next term, it is actually (1+x)^m ≈ 1+mx+m(m-1)x^2/2, so if m is as big as x is small (specifically, if m^2x^2 is not negligible compared to mx) then this is not going to be valid. For our problem, x=k/N and m=k-N-1/2, so m^2x^2 is like k^2, which is not small at all.

We can approximate the term (1-k/N)^(k-1/2) as 1-k(k-1/2)/N as long as the next term is small. The terms look like 1, k^2/N and k^4/N^2, so the third term is small compared to the second as long as k^2 is small compared to N. This is true for our case, but I'll come back to that later.

Given the approximation (1-k/N)^(k-1/2) ≈ 1-k^2/N, we have
p = (1-k/N)^(-N) e^(-k) (1-k^2/N)

We need to figure out what to do with (1-k/N)^(-N), which is a number very slightly smaller than 1 raised to a very large negative power, so it is a bit unpredictable. If you remember your first year calculus course, you will see quickly what to do: the limit as m goes to infinity of (1+x/m)^m is e^x (this is sometimes taken as the definition of the exponential function). So we may approximate (1-k/N)^(-N) as e^k, giving us
p = 1-k^2/N

Thus, the probability that you see the same event twice is approximately k^2/N. For k=30000 and N=52! this is about 10^(-59), so we can confidently say it won't happen. Turns out human intuition is correct in this case; the birthday paradox won't overcome numbers as crazy as 52!.

Asking the other question, "how large does k have to be to achieve some fixed P", like the birthday paradox's 50%, gives a slightly funny answer. You find that to achieve a 50% chance of seeing the same result twice you need k=√(N/2), so it would appear that the birthday paradox goes as the square root of N, rather than the logarithm or something else. However, this is technically mistaken: we needed k^2 to be much smaller than N in our approximations, so this answer is invalid at the 50% level. It is still correct to say that the birthday paradox asymptotes to something like √N, since if the true growth were much weaker than that (a logarithm or a cube root, say) our approximations would become good again and return the square root answer, a contradiction.
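The √N claim can also be checked numerically (a sketch; the exact constant is not derived above). Computing the exact 50% threshold for growing N shows the ratio k/√N settling near √(2 ln 2) ≈ 1.18, so the square-root scaling is right even though the √(1/2) constant from the invalid approximation is not:

```python
from math import sqrt

def threshold(N):
    """Smallest k for which k draws from N equally likely outcomes collide
    with probability greater than 1/2."""
    p, k = 1.0, 0
    while p > 0.5:
        p *= (N - k) / N   # p is now the probability that k+1 draws are distinct
        k += 1
    return k

for N in (365, 10**4, 10**6, 10**8):
    k = threshold(N)
    print(N, k, round(k / sqrt(N), 3))   # the ratio settles near 1.18
```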

More Frungy

Time to slack from work, and instead post the solution to the card problem from last time. As usual, it is easiest to solve the simpler cases first. For general notation, we will assume the deck has N red cards and N black cards; the final goal is to solve N=26.

First, the N=1 case. There is little point in betting on the first card, the adversary will just make you wrong anyway, so after the first card comes by, you know what the second card is. You bet everything on the second card, are correct, and end with $2.

Now for N=2. There is actually still no point in betting on the first card (this is a pretty general result), so let's say the first card is red. Now there is 1 red and 2 black in the deck. One might be inclined to say there is no point in betting here either, but then the adversary will surely show us a black card the second time and we are in the N=1 case. Really, the fact is we would be very happy if the adversary showed us the second red card on the second round, so happy that we might be willing to pay him a small amount of money to make it happen.

Suppose you bet $0.1 on the second card being black, then if it is black, you have $1.1 and are in the N=1 case, so you get to double that to $2.2. If the second card is red, you have $0.9, but now there are only black cards in the deck, so you get to quadruple that and end with $3.6. Clearly the adversary will let you end with $2.2. Now suppose we instead bet $0.5 on the second card being black. If it is black we have $1.5 and get to double that to $3. If it is red we have $0.5 and get to quadruple that to $2. Clearly the adversary will let us have $2.

Alright, we need a middle ground. Bet x on the second card being black. If it is, we have 1+x and we get to double that to 2+2x. If it is not black we have 1-x and we get to quadruple that to 4-4x. The adversary will give us whichever is lower, and since one is an increasing function and the other is a decreasing function, the lowest one is maximized when they are equal. Solving 2+2x=4-4x we have x=1/3. So we should bet 1/3 on the second card being different than the first one and we are guaranteed to end with $8/3.
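The minimax step can also be checked numerically; here is a tiny grid search over the bet x (the grid resolution is arbitrary):

```python
# For N=2 after one red card: bet x on the second card being black.
# Black -> double 1+x to 2+2x; red -> quadruple 1-x to 4-4x.
# The adversary gives us the smaller, so maximize min(2+2x, 4-4x).
best_x, best_val = max(
    ((x, min(2 + 2 * x, 4 - 4 * x)) for x in (i / 1000 for i in range(1001))),
    key=lambda t: t[1],
)
print(best_x, best_val)   # x ≈ 1/3 with a guaranteed ≈ $8/3
```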

Note that in the solution to the N=2 case the answer comes out path independent, in that after two black cards and one red card have come by, you have $4/3 no matter if it was BRB or BBR (I'm sort of assuming that the adversary chooses to show black cards rather than red when the situation is symmetric). Naturally it has to be path independent, if the different paths gave different amounts of money, the adversary would send you on the worst path he can find. Due to this, it is a general feature that the solution for general N be path independent.

Now let us try to solve with general N. As we have seen, since the solution is path independent on how the adversary shows us cards, there must be a function f(r,b) that represents how much money we have when r red cards have gone by and b black cards have gone by. Clearly f(r,b) could depend on the specific value of N we use, so it really should be f(r,b,N), but I'm going to suppress that last variable. We also know that f(0,0)=1, and f(r,b) must equal f(b,r) by a similar argument to the one that shows path independence, and f(N,N) is the answer we seek.

Suppose we have seen r red cards and b black cards go by, so we have f(r,b) money according to our strategy, and we now bet x on the next card being red (if we are instead betting on black, we just let x run negative). If the next card is red, we have f(r,b)+x money and if it is black we have f(r,b)-x money. This means that:
f(r+1,b)=f(r,b)+x
f(r,b+1)=f(r,b)-x

For some value of x. Eliminating x, we have
2f(r,b)=f(r+1,b)+f(r,b+1)

Similar to the relation for Pascal's triangle. To make it look a bit more familiar, let's define g(r,b) by f(r,b)=2^(r+b) g(r,b); then g obeys the relation
g(r,b)=g(r+1,b)+g(r,b+1)

Now that looks like the Pascal relation, but it's backwards. Next let's define h(r,b)=g(N-r,N-b), so h counts how many cards are left in the deck rather than how many cards we have seen go by. Then h obeys
h(r,b)=h(r-1,b)+h(r,b-1)

This is now exactly the Pascal recurrence. But since we knew f(0,0)=1, we know h(N,N)=1, so h is normalized at the wrong corner. We simply fix this by defining p(r,b)=h(r,b)/h(0,0). Now p obeys
p(r,b)=p(r-1,b)+p(r,b-1)

and p(0,0)=1, so p is the Pascal function, that is
p(r,b)=(r+b)!/(r!b!)

We can find h(0,0) by noting that p(N,N)=h(N,N)/h(0,0)=1/h(0,0), so h(0,0)=1/p(N,N), giving
h(0,0)=N!N!/(2N)!

so that we have h
h(r,b)=p(r,b)h(0,0)
=(r+b)!N!N!/(r!b!(2N)!)

This gives us g
g(r,b)=h(N-r,N-b)
=(2N-r-b)!N!N!/((N-r)!(N-b)!(2N)!)

Finally, we get f
f(r,b) = 2^(r+b) g(r,b)
= 2^(r+b) (2N-r-b)!N!N!/((N-r)!(N-b)!(2N)!)

ugh.

At least we really only care about one thing, f(N,N), and it's simple enough:
f(N,N) = 2^(2N) N!N!/(2N)!

Which is just 2^(2N) divided by C(2N,N). For N big enough to use Stirling's approximation (which 26 is), we have
f(N,N) = 2^(2N) N^(2N) e^(-2N) (2πN) / ((2N)^(2N) e^(-2N) √(2π·2N))
= √(πN)

Actually comes out quite simple. For N=26 this is about 9.
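The closed form is easy to evaluate directly and compare against Stirling (a quick sketch; `final_money` is my name for f(N,N)):

```python
from math import comb, pi, sqrt

def final_money(N):
    """f(N,N) = 2^(2N) N! N! / (2N)!, i.e. 4^N / C(2N, N)."""
    return 4 ** N / comb(2 * N, N)

print(final_money(2))                   # 8/3, matching the N=2 answer above
print(final_money(26), sqrt(pi * 26))   # about 9.08 exactly vs 9.04 from Stirling
```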

So this actually is the complete construction of the strategy. At any given time you have seen r red and b black cards come by, you have f(r,b) money left, and you should bet f(r+1,b)-f(r,b) on red (or, equivalently, f(r,b+1)-f(r,b) on black) and you will end with f(N,N) money. Actually performing the strategy can be done in a slightly different way though.

Suppose you have K assistants, where K is the number of possible configurations of the cards in the deck (that is, K=C(2N,N)). Each assistant is given 1/K money to start and is told a list of R's and B's that could correspond to the cards in the deck (for N=2, the assistants are told RRBB, RBRB, RBBR, BRRB, BRBR, BBRR), and you let them all bet simply assuming that their own list is correct. Precisely one of them will be correct and will double his money 2N times (once for each card). The rest all lose their money. The correct assistant ends with 2^(2N) divided by C(2N,N) money, exactly f(N,N).

Lets try this for N=2, we have our 6 assistants which will be labeled as follows:
1.RRBB
2.RBRB
3.RBBR
4.BRRB
5.BRBR
6.BBRR

Each assistant is given $1/6.

This game starts, and assistants 1,2,3 bet their $1/6 on red while assistants 4,5,6 bet their $1/6 on black. We (the real holder of the $1) add this up and bet a net zero. Suppose the first card is red. This eliminates assistants 4,5,6 and assistants 1,2,3 have now all doubled their money to $1/3. We have a total of $1.

Next card, assistant 1 bets his $1/3 on red and assistants 2 and 3 bet their $1/3 on black. We add this up and bet $1/3 on black. Suppose the next card is black. This eliminates assistant 1 and doubles assistants 2 and 3's money to $2/3. We have a total money of $4/3.

Next card, assistant 2 bets his $2/3 on red and assistant 3 bets his $2/3 on black. We add this up and bet nothing. Suppose the card is black. Assistant 2 is eliminated and assistant 3 doubles his money to $4/3. We have a total money $4/3.

Final card, assistant 3 bets his $4/3 on red. We add this up and bet $4/3 on red. We double our money to $8/3.
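The same bookkeeping works for any N and any deck order; here is a sketch of the assistant strategy (function names are mine). The adversary's ordering is irrelevant: exactly one assistant survives all the doublings, so the total always comes out to 2^(2N)/C(2N,N).

```python
from itertools import permutations

def play(deck):
    """Run the assistant strategy against a deck given as 'R'/'B' symbols.
    Each distinct arrangement gets one assistant with 1/K money who bets
    everything on his own list; our net bet is the sum of theirs."""
    lists = set(permutations(deck))          # all C(2N, N) arrangements
    money = {l: 1 / len(lists) for l in lists}
    for i, card in enumerate(deck):
        for l in list(money):
            if l[i] == card:
                money[l] *= 2                # correct assistants double up
            else:
                del money[l]                 # wrong assistants are out
    return sum(money.values())

for deck in sorted(set(permutations('RRBB'))):
    print(''.join(deck), play(deck))         # always $8/3, whatever the order
```

A 6-card deck comes out to 2^6/C(6,3) = 64/20 = $3.20 in the same way.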

This game also has one other neat feature if you actually play it: all betting strategies are fair, as long as you never bet on a colour that does not remain in the deck. That is to say, in any given position of red and black cards left, no matter what you bet on the next card, your expectation of final money (against a shuffled deck, not an adversary) is the same, provided that when the deck is out of one colour you bet maximally on the colour that is left.

Card Betting

Continuing on my plan of posting the puzzles I found in the paper "Games People Don't Play", I have a card flipping puzzle this time. Back when I started this blog I posted a card flipping problem that Bart had told me, and the puzzle this time is something of a continuation of that one:
There is a deck of 52 cards, 26 red and 26 black. You will be playing a game where you bet on what is the next card in the deck. You begin with $1 and can make a bet of any real number between zero and the amount of money you have on whether the next card in the deck is red or black. The top card of the deck is then revealed and if you were correct you gain the amount you bet, if you were incorrect you lose the amount you bet. Then the game continues with betting on the next card in the deck. The game ends when there are no cards remaining in the deck to bet on. What is the optimal betting strategy assuming that the deck is being arranged by an adversary?

I really like this sort of puzzle, the kind that seems flat out impossible to do anything with at first; putting the player up against an adversary seems like a pretty insurmountable obstacle, but you still have some wiggle room.

Alice Wins?

I guess I should put up the solution to the second two numbers problem. The fact that I ask for an equilibrium strategy gives something away, I suppose, because in the first one there is no equilibrium strategy (Alice can make Bob's winning chance 1/2+ε for any positive ε, so there is no optimal one). To begin, what is the strategy space for Bob? Bob sees a number, x, and must decide if that number is the larger or the smaller based simply on that number. All Bob can do is select a function f(x) to give the probability that he says x is the larger number. The solution to the last puzzle strongly suggests that f(x) should be a strictly increasing function, but it is worth noting that if f(x) is simply 1/2 everywhere, then Bob is certain to get 50% in this game, so there is no way Alice can push him lower than that.

Suppose Alice is given f(x) (Bob's strategy) and is given the two numbers y and z from (0,1) (also, we may as well assume z>y). Now, if Alice chooses to show Bob y, Bob has a 1-f(y) chance to win, and if Alice chooses to show z, Bob has an f(z) chance to win. Thus Bob's chance to win (if Alice knows f(x)) is given as:
min(1-f(y),f(z))

Now we must integrate this over all values of y and z. The natural integral would be ∫ ∫ dy dz with both going over (0,1), but this does not work because we have already assumed z is the larger of the two numbers. We should simply restrict the integral to z ∈ (0,1) and y ∈ (0,z) and multiply by 2 to take care of the other half of the region where y happens to be larger. Bob's chance to win is given as
2 ∫ ∫ min(1-f(y),f(z)) dy dz

with y from 0 to z and z from 0 to 1.

The next thing to know is that Bob is trying to maximize this integral. We must select an f(x) to make this as large as possible. Remember we already know that with f(x)=1/2, we trivially get 1/2 for the answer, so really we need only ask if we can make it larger than 1/2. You can try some functions out for a while, but in the end you will see that nothing will integrate to more than 1/2.

We can prove that Bob cannot get better than 1/2 by using 2 features about the min and max functions:
1.min(a,b)+max(a,b)=a+b
2.min(a,b) ≤ max(a,b)

Now we define
P=2 ∫ ∫ min(1-f(y),f(z)) dy dz
Q=2 ∫ ∫ max(1-f(y),f(z)) dy dz

integrated over y from 0 to z and z from 0 to 1 as before. We seek to prove that P ≤ 1/2 for all choices of f.

Fact 2 from before shows us right away that P ≤ Q, and fact 1 gives us
P+Q=2 ∫ ∫ (1-f(y)+f(z)) dy dz
=2 (∫ ∫ 1 dy dz - ∫ ∫ f(y) dy dz + ∫ ∫ f(z) dy dz)

The first integral is simply 1/2 (the area of integration) and in the third integral you can do the y integral separately to get ∫ zf(z) dz integrated from 0 to 1. For the second integral we simply rewrite the integration region from y ∈ (0,z) and z ∈ (0,1) to z ∈ (y,1) and y ∈ (0,1) so we can do the z integral first to get ∫ (1-y)f(y) dy integrated from 0 to 1. We currently have
P+Q=1 - ∫ (1-y)f(y)dy + ∫ zf(z)dz

But using the substitution z=1-y the two integrals are trivially identical for any f(x). Therefore,
P+Q=1

Now since P ≤ Q we have (by adding P to both sides) 2P ≤ P+Q so 2P ≤ 1 therefore P ≤ 1/2 as we sought.

OK great, so if Alice knows Bob's f(x), there is no way Bob can choose an f(x) that guarantees anything better than 1/2. So if there is an equilibrium strategy, it must give 50/50. But is there one? Can we construct a strategy for Alice that makes sure Bob cannot get better than 1/2 even if we have no idea what f(x) he chose? We can, and it's actually what one would instinctively do if one were playing this game.

Consider you are Alice and you are handed the numbers 0.1 and 0.5, which one would you show Bob? The obvious choice is 0.5, 0.1 looks too obviously the low one. How about 0.8 and 0.4? Less obvious, but it would seem the rational guess is 0.4. How about 0.7 and 0.4? again 0.4, 0.7 has less space above it than 0.4 has below it. How about 0.8 and 0.6? 0.6 is pretty clearly better. How about 0.7 and 0.3? now it seems they are symmetric, no good way to choose. How about 0.6 and 0.4? again symmetric.

See what the strategy is? Just show Bob whichever number happens to be closest to 0.5. Now, if we obey this strategy, what is Bob's chance of winning? First let us shift the numbers down by 0.5 so that the two numbers, y and z, range from -1/2 to 1/2 and we show Bob whichever one is closer to zero.

Consider the y-z plane in the box y ∈ (-1/2,1/2) z ∈ (-1/2,1/2). The numbers we get select a point in this region. Which number is y and which is z doesn't matter here, select which is which at random.

Divide up the region into 4 pieces using the line y=z (from bottom left to top right) and the line y=-z (from top left to bottom right). The 4 regions are "top" (where z>0 and y is between -z and z), "bottom" (where z<0 and y is between z and -z), "left" (where y<0 and z is between y and -y), and "right" (where y>0 and z is between -y and y). If our two numbers put the point in either "top" or "bottom" we show Bob y. If the point is in "left" or "right", we show Bob z.

Suppose we have a point in "top". We show Bob y and he wins if he guesses that y is the smaller number, which he does with probability 1-f(y). Integrating this over all points in "top" gives the total probability of Bob both landing in "top" and winning: ∫ (1-f(y)) over "top". In "bottom" we show Bob y and he must guess it is the larger number, which he does with probability f(y), so that contribution is ∫ f(y) over "bottom". Note that ∫ f(y) over "top" is the same as ∫ f(y) over "bottom", since the two regions cover the same y values.

Using the same argument for "left" and "right", Bob's total chance of winning is (∫ (1-f(y)) over "top") + (∫ f(y) over "bottom") + (∫ (1-f(z)) over "right") + (∫ f(z) over "left"). The integrals of the unknown function all cancel, leaving ∫ 1 over "top" + ∫ 1 over "right", giving 1/4 each (their areas) for a total of 1/2.

Simply put, if Alice shows Bob 0.3 (say), and Bob knows this number is the one that is closer to 1/2, then the other number is equally likely to be in (0,0.3) or (0.7,1) and Bob cannot tell those apart.

Thus if Alice follows the strategy "show Bob whichever number is closer to 1/2", then Bob cannot do better than 1/2 no matter what f(x) he chooses. If Bob uses the strategy f(x)=1/2 (so, "flip a coin") Bob cannot do worse than 1/2 no matter how Alice chooses the number to show. So this gives the equilibrium strategy (or, at least, an equilibrium strategy).
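This equilibrium is easy to check by simulation (a sketch with made-up names; Bob's rule f(x)=x is just one arbitrary increasing choice). Against a naive Alice who always shows the larger number, Bob does much better than 1/2, but against the closest-to-1/2 strategy he is pinned at 1/2:

```python
import random

def bob_wins(f, alice_shows_mid, rng):
    """One round: y < z are uniform on (0,1); Alice shows one number and Bob
    says "larger" with probability f(shown). Returns True if Bob is right."""
    y, z = sorted((rng.random(), rng.random()))
    if alice_shows_mid:   # equilibrium Alice: show whichever is closer to 1/2
        shown, is_larger = (y, False) if abs(y - 0.5) < abs(z - 0.5) else (z, True)
    else:                 # naive Alice: always show the larger number
        shown, is_larger = z, True
    return (rng.random() < f(shown)) == is_larger

rng = random.Random(1)
f = lambda x: x           # one arbitrary increasing rule Bob might try
for mid in (False, True):
    wins = sum(bob_wins(f, mid, rng) for _ in range(200_000))
    print('closest-to-1/2 Alice:' if mid else 'naive Alice:', wins / 200_000)
```

Against the naive Alice, Bob's f(x)=x wins about two-thirds of the time (the mean of the larger of two uniforms); against the closest-to-1/2 Alice he gets one half, whatever f he uses.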

I find it funny that if you play this game with real people, the Alice player will typically hit on the correct strategy naively, even though they will then likely try to follow it up by "out-thinking" the other player, because that's just what humans do.

Two More Numbers

Alright, now that the two numbers puzzle is out of the way, time for the follow-up. I first learned this puzzle in a paper called "Games People Don't Play", which I found in the comments at the xkcd blag:
Two numbers are selected uniformly at random from the interval (0,1) and shown to Alice. Alice must select one of these two numbers and show it to Bob. Bob must then guess if Alice has shown him the larger of the two or the smaller of the two. What is the optimal (equilibrium) strategy for the players?

This time Alice does not get control of the two numbers, and Bob is aware of the distribution, but Alice gets to choose what number Bob sees, so who gains the advantage from those changes?

Only A Little

Time for the solution to the Two Numbers problem. I suppose I could just state the solution right away, but I like to do something of a derivation first.

First of all, consider the results of the following strategy for Bob:
If the number you see is greater than zero, guess it is larger, if the number is less than zero, guess it is smaller

Easy enough to implement: if both numbers that Alice wrote down are positive, then you are still 50/50. If they are both negative then you are again 50/50. But if one number was positive and one was negative, then you are guaranteed to win.

Alright, that's all well and good, but naturally if Alice knows you are doing this, then she won't ever write down a positive and a negative number together, so this is not actually guaranteed to give you a better than 50% chance of winning the game. Now, instead of comparing against zero, why not compare against a random number K? Perform the following strategy:
Select a nonzero probability distribution over the reals f(z). After Alice writes down the numbers, select a number K using f(z). If the number you see is greater than K guess it is the larger one, if the number you see is smaller than K then guess it is the smaller one.

By nonzero probability distribution I mean that f(z)>0 everywhere and ∫f(z)dz = 1 when integrated over all the reals. Basically, the strategy is to assume that the unseen number is near K.

Suppose Alice writes down x and y with y>x, then if x>K or K>y you are simply 50/50, but if y>K>x you win the game 100%. The chance of you winning is therefore
∫f(z)dz+(1-∫f(z)dz)1/2
=1/2+1/2∫f(z)dz

where the integral is taken between x and y (giving the chance that K lands between x and y). Since f(z)>0 everywhere, this is greater than 1/2 in all cases, and so this is a valid solution. Naturally, if Alice knows f(z), she can make that integral as small as she likes, but she cannot make it nonpositive.
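Here is a quick simulation sketch of the strategy (names are mine; a standard normal is one convenient choice of nonzero f(z), and the pair 0.2/0.6 is an arbitrary stand-in for Alice's numbers — she can shrink Bob's edge by moving them closer together, but never to zero):

```python
import random

def bob_win_prob(trials, rng):
    """Bob draws K from a standard normal (positive density everywhere) and
    guesses 'larger' exactly when the shown number exceeds K."""
    x, y = 0.2, 0.6                  # stand-in numbers Alice wrote down
    wins = 0
    for _ in range(trials):
        shown = rng.choice((x, y))   # Bob is shown one of them at random
        K = rng.gauss(0.0, 1.0)
        says_larger = shown > K
        wins += says_larger == (shown == y)   # y is the larger number
    return wins / trials

print(bob_win_prob(200_000, random.Random(0)))   # comes out near 0.57, above 1/2
```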

One can make a slightly more general (but not actually any better) solution as well. Consider that Bob looks at the number before choosing K. Then the chance that K will be less than x (and thus guess x is larger) is ∫f(z)dz integrated from -∞ to x. Let p(x) be that number (the probability that we say x is larger). Clearly we have p(x) positive, increasing, and 0 at -∞ and 1 at +∞.

Let Alice choose numbers x and y with y>x. Now the chance Bob wins, given some function p(x), is 1/2 (the chance he is given y) times p(y) (the chance he says y is larger) plus 1/2 (the chance he is given x) times 1-p(x) (the chance he says x is smaller), giving
1/2 p(y)+1/2(1-p(x))
=1/2+1/2(p(y)-p(x))

Not surprising given that p is the integral of f from before.

Anyway, the only thing that we need to demand of p(x) is that it be strictly increasing and bounded between 0 and 1. It does not matter whether it goes to 0 at -∞ or 1 at +∞ (though, I guess it might as well). So the strategy of saying the number you see, x, is larger with probability p(x) is more general than the strategy of picking K from f(z) (though not any better, even if it is more general). Again, given p(x), Alice has the power to make p(y)-p(x) as small as she wants, but she can never stop it from being positive.