More Is Less

So, I find it rather unfortunate that the night after posting my handshake puzzle from last time, I managed to come up with a similar problem that I consider more interesting. This leaves me with the choice of either solving the puzzle from last time and then posting the new one as a follow-up, or just posting the new puzzle and solving them both at once next time. The problem with the former is that once the first puzzle is solved, the second one is very easy (it's the exact same logic), though there is one extra thing you need to notice, so it is not quite trivial. The problem with the latter is that I find it unappealing for aesthetic reasons that don't really make sense.

Going with aesthetics, I guess I might as well solve the puzzle from last time now. The logic is fairly simple. First, notice that each person has 2N people available to shake hands with (of the 2N+2 people at the party, they cannot shake hands with themselves or their partner), so the number of hands each person shook is between 0 and 2N. You asked 2N+1 people how many hands they shook and received 2N+1 unique answers, so you must have received every answer between 0 and 2N, pigeonhole principle and all that. We will identify each person by the number of hands they shook; that is, person k shook k hands, and this assignment gives each person a unique label.

Person 2N must have shaken hands with every person they could, and person 0 must have done the opposite. These two people must be partners; otherwise person 2N would have had to shake person 0's hand, contradicting person 0's answer of zero. Thus, (0, 2N) is a pair. Now, every person at the party outside of that pair must have shaken hands with exactly one person in that pair (2N, to be specific), so let's just remove the pair (0, 2N) and reduce everybody's number by 1. This sets up the exact same situation as the original problem, but with N reduced to N-1: everybody's number is 1 lower, but they are all still unique. This means that whatever the solution to the problem with N is (call it F(N)), it will be 1 lower when N is replaced by N-1. That is to say
F(N) = F(N-1)+1

Which means
F(N)=F(1)+N-1

so we just have to solve the N=1 case.

When N=1, the answers you were told were 0, 1, and 2. Persons 2 and 0 must be partners by the same logic as before, so person 1 must be your partner. Person 2 shook hands with 2 people, and the only people available to them were you and person 1, so one of those hands was yours; person 0, of course, did not shake your hand. This means that F(1) is 1 and therefore F(N) is N: you shook hands with exactly N people, one from each of the other couples.
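By the way, if you would rather not trust the induction, the claim is small enough to check by brute force. Here is a quick Python sketch (the labelling of couples as (0,1), (2,3), ... and the verify helper are just my own bookkeeping, not part of the puzzle): it enumerates every allowed handshake pattern for a small N and, whenever the other 2N+1 answers are all distinct, confirms that your own count is N.

from itertools import combinations

def verify(n):
    # People 0..2n+1, couples (0,1), (2,3), ...; person 0 is "you".
    # A handshake pattern is any set of edges avoiding self- and partner-pairs.
    people = range(2 * n + 2)
    partner = lambda p: p ^ 1
    allowed = [e for e in combinations(people, 2) if e[1] != partner(e[0])]
    for bits in range(2 ** len(allowed)):
        edges = [e for i, e in enumerate(allowed) if bits >> i & 1]
        degree = [sum(p in e for e in edges) for p in people]
        others = degree[1:]                    # the 2n+1 answers you collected
        if len(set(others)) == len(others):    # all answers distinct
            assert degree[0] == n              # you shook exactly n hands

verify(1)
verify(2)
print("checked: you always shake exactly N hands")

For N=2 this is only 2^12 = 4096 handshake patterns, so it runs instantly; the exponential blowup after that is what the induction is for.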

Now for the follow-up puzzle:
You go to a party with 2N other people (so there are 2N+1 people in total at this party). At the party people shake hands with other people. No person shakes hands with themselves. After the party, you ask each person that was at the party how many different people they shook hands with, and each person gives you a unique answer. How many different people did you shake hands with?

Very much the same setup, but the "couples" concept has been stripped away.

Shaking Hands

New puzzle time. Mark told me this one a week or two ago, and since he is approximately 90% of my readership I'm not sure why I'm posting it, but here we go:
You and your partner go to a party with N other couples (so there are 2N+2 people in total at this party). At the party people shake hands with other people. No person shakes hands with themselves or with their partner. After the party, you ask each person that was at the party how many different people they shook hands with, and each person gives you a unique answer. How many different people did you shake hands with?

Mark initially asked me this with N=5, and then found it wasn't any more interesting generalized to N. I will initially ask it with N and find it isn't any more interesting specified to 5.

Less Gladiators

Time for the solution to the second gladiator problem. As before, I guess the thing to do is start with a simpler case.

Suppose Alice has a single gladiator of strength A, and Bob has two gladiators of strengths B1 and B2. Again, Bob need only decide which of the two gladiators to send first, and the whole match is set.

If Bob sends gladiator 1 first, he wins with chance B1/(A+B1) and loses with chance A/(A+B1). If he loses the first game, he then has a B2/(A+B2) chance to win the second game, since this time Alice's gladiator still has strength A.

This gives Bob's chance of winning as:
B1/(A+B1) + A*B2/((A+B1)(A+B2))
= 1 - A^2/((A+B1)(A+B2))

Unsurprisingly, it's one minus Alice's chance of winning both matches. This is again symmetric in B1 and B2, so it is independent of Bob's strategy. It's true that the full game is also strategy-independent, but that is actually a bit weirder to prove. I was not able to come up with a proof myself; this one is from the paper where I found the problem.
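If you don't feel like doing the algebra by hand, here is a tiny sympy check of both claims (the symbols are just placeholders, nothing problem-specific is assumed beyond the formula above):

from sympy import symbols, simplify

A, B1, B2 = symbols("A B1 B2", positive=True)

def bob_wins(A, B1, B2):
    # Bob sends B1 first, then B2 if needed, against Alice's single gladiator A
    return B1 / (A + B1) + A * B2 / ((A + B1) * (A + B2))

# equals one minus Alice's chance of winning both fights...
print(simplify(bob_wins(A, B1, B2) - (1 - A**2 / ((A + B1) * (A + B2)))))  # 0
# ...and is unchanged if Bob swaps the order of his gladiators
print(simplify(bob_wins(A, B1, B2) - bob_wins(A, B2, B1)))                 # 0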

Let us consider replacing the gladiators with idealized lightbulbs. The burnout time of one of these lightbulbs is memoryless: if the lightbulb's expected lifetime is x, then the probability that it is still burning after time t is e^(-t/x).

If we have a lightbulb with expected lifetime x and a lightbulb with expected lifetime y both burning at the same time, the chance that the y bulb burns out first is x/(x+y). So two gladiators fighting can be appropriately replaced by a "lightbulb match": Alice and Bob each turn on one of their bulbs, and whichever goes out first is defeated. Since the burnout time is memoryless, the chance that a still-burning bulb goes out in the next five minutes does not depend on how long it has already been on, so it is enough that the winner simply leave their light on while the loser turns on a new bulb. In the end, whoever runs out of lightbulbs loses the match.
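If you want to convince yourself of that x/(x+y) fact, a quick Monte Carlo does it (the lifetimes 3 and 5 below are arbitrary):

import random

random.seed(1)
x, y = 3.0, 5.0                        # expected lifetimes of the two bulbs
trials = 500_000
y_first = sum(random.expovariate(1.0 / y) < random.expovariate(1.0 / x)
              for _ in range(trials))
print(y_first / trials, x / (x + y))   # both should be about 0.375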

We can see that this game could just as well be played by Alice and Bob showing up on separate days: Alice turns on one of her bulbs and waits for it to burn out, then turns on another, and so on until she runs out of bulbs, at which point she records the total lifetime of her team. Bob then comes by and does the same thing. Whoever has the greater total lifetime wins the gladiator match. It's pretty clear there is no strategy in this game, but it is the same game as the gladiator match.
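Here is a sketch of that equivalence as a simulation, with made-up strengths: it plays the constant-strength gladiator match with Bob sending strongest-first and then weakest-first, and then plays the total-lifetime lightbulb game, and all three estimates should land on the same number.

import random

def gladiator_match(alice, bob):
    # Constant-strength match; each team sends gladiators in list order.
    # Returns True if Bob's team wins.
    a, b = list(alice), list(bob)
    while a and b:
        x, y = a[0], b[0]
        if random.random() < y / (x + y):   # Bob's gladiator wins this fight
            a.pop(0)
        else:
            b.pop(0)
    return not a

def lightbulb_match(alice, bob):
    # The separate-days version: compare total exponential lifetimes.
    total = lambda team: sum(random.expovariate(1.0 / s) for s in team)
    return total(bob) > total(alice)

random.seed(0)
A, B = [5.0, 2.0, 1.0], [4.0, 3.0]          # arbitrary strengths
trials = 200_000
for name, order in [("strongest first", sorted(B, reverse=True)),
                    ("weakest first", sorted(B))]:
    wins = sum(gladiator_match(A, order) for _ in range(trials))
    print(f"Bob, {name}: {wins / trials:.4f}")
bulb_wins = sum(lightbulb_match(A, B) for _ in range(trials))
print(f"lightbulb version: {bulb_wins / trials:.4f}")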

This game is a bit less fun than the other one, since human intuition arrives at the "all strategies are the same" result much faster here. I guess that's because in the simplest case it's very obvious: if our opponent has only one gladiator, then it does not matter what order we send ours in, since our opponent must defeat them all anyway. In spite of the result being more obvious, though, the proof is actually trickier to think of, because the setup does not have as obvious a "zero expectation value" minigame.

More Gladiators

Alright, this one is the final problem from "Games People Don't Play":
Alice and Bob are competing team managers in a gladiator competition. Alice's gladiators have strengths a1, a2, ..., an and Bob's have strengths b1, b2, ..., bm. Each round, Alice will select a gladiator from her team, then Bob will select one from his team, and the chosen gladiators will fight.

If a gladiator of strength x fights a gladiator of strength y, the chance the one with strength x wins is given by x/(x+y). The losing gladiator is eliminated and then a new round begins with Alice and Bob selecting gladiators to fight. The competition ends when one of the teams has run out of gladiators.

What is the optimal play? In particular, suppose Alice always chooses to send in her strongest gladiator, how should Bob respond?

It's the exact same as last time, but this time the gladiators' strengths are constant, rather than the winners gaining confidence from their fights.

Bob Wins?

People still read my blog, right? Whatever, I'm sure history will look back and thank me for all this, or something like that. Anyway, the gladiator battle problem has been up for long enough, so let's take a look at the solution.

First, let's consider a simple case, where Alice has one gladiator of strength A, and Bob has two gladiators of strengths B1 and B2. Bob need only decide which of the two gladiators to send first and the whole match is set. If Bob sends gladiator 1 first, he wins with chance B1/(A+B1) and loses with chance A/(A+B1). If he loses the first game, he then has a B2/(A+B1+B2) chance to win the second game, since Alice's gladiator has absorbed gladiator 1's strength. In total, the chance Bob has to win is given by:
B1/(A+B1) + A*B2/((A+B1)(A+B1+B2))
= (B1(A+B1+B2) + A*B2)/((A+B1)(A+B1+B2))
= ((A+B1)(B1+B2))/((A+B1)(A+B1+B2))
= (B1+B2)/(A+B1+B2)
This is symmetric under exchanging B1 and B2, meaning that it does not matter which gladiator Bob chooses first. Actually, it is also exactly what Bob's chance of winning would be if he could merge his two gladiators into a single one of strength B1+B2 and just have that one fight. This suggests that the whole "gladiator battle" thing is just a complication of a much simpler problem.

Let's replace the gladiators' strengths with amounts of money. Now Alice and Bob each have total sums of money A and B, and each round they each place some money on the table, Alice placing x and Bob placing y. Alice is then given an x/(x+y) chance to win and Bob a y/(x+y) chance. The winner of the round takes the money sitting on the table and then a new round begins. The game ends when one person has all the money. In the "gladiator battle" version of the game, the players have restrictions on what amounts they are allowed to bet, but if we can prove that all betting strategies are equally good, then we will have proven that the gladiator game restrictions do not matter.

Alright, when Alice places x and Bob is choosing y, what y maximizes his potential gain? Bob gains x with probability y/(x+y) and gains -y with probability x/(x+y), so Bob's expected gain is:
x*y/(x+y)+(-y)*x/(x+y)
Clearly zero. This means that all strategies for Bob are equally good: Bob can play the gladiator battle randomly, and his probability of winning is just given by
(Σ b_i)/(Σ a_j + Σ b_i)
where the a_j are the strengths of Alice's gladiators and the b_i are the strengths of Bob's gladiators.
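As a sanity check of that formula, here is a short simulation of the original absorbing game, with made-up strengths, and with both managers picking their gladiators at random (which, by the result above, is as good as any other strategy):

import random

def absorbing_match(alice, bob):
    # One match of the original game: the winner absorbs the loser's strength.
    # Returns True if Bob's team wins.
    a, b = list(alice), list(bob)
    while a and b:
        i, j = random.randrange(len(a)), random.randrange(len(b))
        if random.random() < b[j] / (a[i] + b[j]):
            b[j] += a.pop(i)        # Bob's gladiator wins and absorbs the loser
        else:
            a[i] += b.pop(j)
    return not a

random.seed(3)
A, B = [3.0, 1.0, 4.0], [2.0, 5.0]
trials = 200_000
estimate = sum(absorbing_match(A, B) for _ in range(trials)) / trials
print(estimate, sum(B) / (sum(A) + sum(B)))   # both should be about 0.467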
There is another, slightly more elegant, proof of this result. I like it a bit more because the first one felt very unsatisfying, but this other one is very cool, though rather weird.

Consider each gladiator to have an integer strength (this can be accomplished by approximating irrational strengths with rational ones, and then multiplying by denominators until everything is an integer). Each gladiator is assigned k beads, where k is their strength. The beads are then all shuffled and placed on a string (you can imagine a bunch of coloured beads, with each gladiator having their own colour). When two gladiators fight, whichever one owns the rightmost bead of the two wins the fight and then gets to claim all the beads owned by the defeated gladiator. It is not hard to see that this gives the correct probabilities for the battles (each of the fighters' beads is equally likely to be the rightmost one, so strength x beats strength y with probability x/(x+y)) and is a suitable representation of the entire tournament. In the end, only the globally rightmost bead matters: whoever holds it can never lose a fight, so their team wins, no matter what strategies were used.

Human intuition on this problem is a bit funny. Most anybody will tell you that it is obvious that you must always send your strongest first, and the occasional person will actually be convinced that you must use your weakest first, but the idea that it is irrelevant which you do is very hard for people to grasp intuitively.
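To make the bead argument concrete, here is a little simulation of it, with arbitrarily chosen integer strengths: it runs the whole tournament using the bead rule, with the matchups chosen at random each round, and checks that the team holding the globally rightmost bead wins every single time.

import random

def bead_tournament(alice, bob):
    # alice, bob: lists of integer strengths; each gladiator owns that many beads.
    # All beads are shuffled onto a line; in a fight, the gladiator whose
    # rightmost bead is further right wins and claims the loser's beads.
    # Returns (winning team, team owning the globally rightmost bead).
    beads = ([("A", i) for i, s in enumerate(alice) for _ in range(s)]
             + [("B", j) for j, s in enumerate(bob) for _ in range(s)])
    random.shuffle(beads)
    position = {}                              # gladiator -> positions of their beads
    for pos, owner in enumerate(beads):
        position.setdefault(owner, []).append(pos)
    rightmost_team = beads[-1][0]

    a = [("A", i) for i in range(len(alice))]
    b = [("B", j) for j in range(len(bob))]
    while a and b:
        x = a[random.randrange(len(a))]        # matchups chosen at random:
        y = b[random.randrange(len(b))]        # the strategy should not matter
        if max(position[x]) > max(position[y]):
            position[x] += position.pop(y)     # x wins and takes y's beads
            b.remove(y)
        else:
            position[y] += position.pop(x)
            a.remove(x)
    return ("A" if a else "B"), rightmost_team

random.seed(2)
for _ in range(10_000):
    winner, rightmost = bead_tournament([3, 1, 4], [2, 5])
    assert winner == rightmost
print("the team holding the rightmost bead always won")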