Here are the calculations for the story.
On Monday, Joe needs to bet epsilon of his total wealth. On Tuesday, he bets 10 epsilon; on Wednesday, 100 epsilon; and on Thursday, 1000 epsilon. At this point it is OK if he has nothing left, since he is guaranteed to win this last bet. Thus, he needs at most 1111 epsilon on hand before he collects: at worst he loses 111 epsilon over Monday through Wednesday and then stakes another 1000 epsilon on Thursday. (Note that his winnings on the Thursday bet will be 111.1 epsilon, so his net profit for the week in this case is (111.1 - 111) epsilon, i.e. epsilon/9. The same arithmetic shows that a win on any earlier day also nets exactly epsilon/9 for the week.)
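For concreteness, here is a small Python sketch of one week of betting. The payout rule is an assumption read off from the 111.1 figure above, namely that a winning bet pays one ninth of its stake as profit, and it assumes the week's betting stops once Joe wins; the loop just checks that every possible quiz day leaves him exactly epsilon/9 ahead.

    from fractions import Fraction

    # One week of betting, in units of epsilon (the fraction of Joe's wealth
    # staked on Monday).  Assumption: a winning bet pays 1/9 of its stake.
    stakes = [Fraction(10) ** day for day in range(4)]   # Mon, Tue, Wed, Thu: 1, 10, 100, 1000

    for winning_day in range(4):                 # the quiz could fall on any of the four days
        lost = sum(stakes[:winning_day])         # stakes forfeited before the win
        needed = lost + stakes[winning_day]      # wealth Joe must have on hand that week
        profit = stakes[winning_day] / 9 - lost  # winnings minus earlier losses
        print(f"win on day {winning_day}: needs {needed} epsilon, profit {profit} epsilon")

    # Every case shows a profit of 1/9 epsilon, and the worst case (a Thursday
    # quiz) requires 1111 epsilon on hand, matching the figures above.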
So if epsilon = 1/1111 of his wealth, his four stakes add up to exactly what he has, and he cannot go broke over the week. He is guaranteed to make epsilon/9 each week, which translates into a profit of roughly 1/10000 of his total wealth. Since (1 + 1/10000)^10000 > 2, every 10000 bets more than doubles his wealth. So after some 10000 bets he will have more than doubled his wealth in points, to over 6 points total, and after another 10000 bets he will finally have made up for not studying for the first quiz, with a total of more than 12 points. After 100K bets he will have more than 3000 points, and at 200K bets more than 3 million points. Still not enough on average to pass, but at least he is now making 30 points per week, enough to break even on the quiz and the 10-point homework. At 300K bets he will be bringing home at least 30,000 points per week, so it will take only a few thousand more bets before he has more than 100% on average.
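A few lines of Python confirm the doubling estimate; the only input is the figure already derived above, a growth factor of 1 + 1/9999 (hence more than 1 + 1/10000) per winning bet, one per week.

    # Sanity check of the doubling estimate.  Joe's wealth grows by a factor of
    # 1 + 1/9999 per winning bet, which is more than 1 + 1/10000, so every
    # block of 10000 bets more than doubles it.
    assert (1 + 1/10000) ** 10000 > 2

    starting_points = 3
    for blocks_of_10000 in (1, 2, 10, 20, 30):            # 10K, 20K, 100K, 200K, 300K bets
        lower_bound = starting_points * 2 ** blocks_of_10000
        print(f"after {10000 * blocks_of_10000:,} bets: more than {lower_bound:,} points")

These lower bounds reproduce the figures quoted above (6, 12, more than 3000, and more than 3 million points); the last one, over 3 billion points at 300K bets, is what puts his weekly take above 30,000 points.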
He might as well wait the extra two weeks and generate the A+.
Alternatively, one can consider what other people have said about this problem. For example, Doris Olin [1] says, "The flaw in the surprise examination paradox is traced to the 'prima facie unexceptionable but in fact faulty epistemic principle' $(P_5)$: If $A$ is justified in believing $p_1,\cdots,p_n$, $p_1,\cdots,p_n$ strongly confirm $q$, $A$ sees this and has no other evidence relevant to $q$, then $A$ is justified in believing $q$."