In the easy subproblem the required precision was quite low, so you should be able to solve it simply by simulating a lot of games. We will now show how to solve the hard subproblem.
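For instance, here is a minimal Monte Carlo sketch in that spirit, assuming the rules implied by the recurrence derived below (with i opponents left the gun hits with probability i/c, a miss passes it k seats onward, and after a death it moves to the next surviving player); all names are illustrative:

```cpp
#include <random>

// Estimate prob[i0][j0]: you sit at seat 0, i0 opponents remain,
// and the gun starts at seat j0 (hypothetical function and parameters).
double estimate(int i0, int j0, int k, int c, int trials, std::mt19937 &rng) {
    std::uniform_int_distribution<int> chamber(0, c - 1);
    int wins = 0;
    for (int t = 0; t < trials; ++t) {
        int i = i0, j = j0;
        bool alive = true;
        while (i > 0) {
            if (chamber(rng) < i) {                    // the gun hits, probability i/c
                if (j == 0) { alive = false; break; }  // you were holding it: you lose
                --i;                                   // the holder dies
                if (i > 0) j %= i;                     // the gun moves to the next player
            } else {
                j = (j + k) % (i + 1);                 // miss: pass the gun k seats onward
            }
        }
        if (alive) ++wins;
    }
    return double(wins) / trials;
}
```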
Note that in order to describe the state of the game you only need to know the number of people remaining, your position among them, and the person currently holding the gun. In fact, it is sufficient to assume you’re always at position 0 with i other people in the circle and the gun at position j. Thus, there are only $O(n^2)$ states.
The obvious choice now is to use dynamic programming to compute the probability of your victory for each state:
$$\text{prob}[i][j] = \text{p}\_\text{suc}[i][j] + (1-i/c) \cdot \text{prob}[i][(j+k) \bmod (i+1)]$$
Here “p_suc”[i][j] is the probability that the current shot hits (eliminating the player holding the gun) and that you then go on to win. If j = 0, then p_suc[i][j] = 0 (if you shoot yourself, you lose); otherwise $\text{p}\_\text{suc}[i][j] = i/c \cdot \text{prob}[i-1][j \bmod i]$. Note that i/c is the probability that the gun hits.
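Unlike prob itself, p_suc only depends on the previous row, so it can be filled in directly. A minimal sketch (doubles for readability; names are illustrative):

```cpp
#include <vector>

// prev holds prob[i-1][0..i-1]; returns p_suc[i][0..i].
std::vector<double> makePsuc(const std::vector<double> &prev, int i, int c) {
    std::vector<double> psuc(i + 1);
    psuc[0] = 0.0;                               // you shoot yourself: you lose
    for (int j = 1; j <= i; ++j)
        psuc[j] = double(i) / c * prev[j % i];   // holder j dies; i-1 opponents remain
    return psuc;
}
```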
Computing the values isn’t as easy as it seems, because each probability in this formula depends on itself – it’s possible for the gun to come back to the person who now holds it without anybody losing. Generally, such problems can be solved using Gaussian elimination, but we won’t need it this time.
Suppose we’ve already computed prob[i − 1][…] (and therefore also p_suc[i][…]) and we want to compute prob[i][…]. The values for the various j depend on each other along one or more cycles of the map j → (j + k) mod (i + 1), which can be found in O(n) time. For any such cycle C of l elements, we can start with j = C[0] and apply the formula for prob[i][j] repeatedly l times to obtain
$$\text{prob}[i][C[0]] = \sum_{m=0}^{l-1} \text{p}\_\text{suc}[i][C[m]] \cdot (1-i/c)^m + \text{prob}[i][C[0]] \cdot (1-i/c)^l\,.$$
For i > 0, we always have 1 − i/c < 1. So we can compute the sum in O(l) time and divide it by $1 - (1 - i/c)^l$ to get prob[i][C[0]]. Once we know that, the other values on the cycle can be computed easily by applying the formula once more around the cycle, in reverse order, so that each value follows from its successor.
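In code, one row update might look as follows (again in doubles for readability; the modular version is identical except that the final division becomes a multiplication by a modular inverse, as described below):

```cpp
#include <vector>

// Given p_suc[i][*] (psuc) and q = 1 - i/c, resolve the cyclic dependencies
// j -> (j + k) mod (i + 1) one cycle at a time and return prob[i][*].
std::vector<double> solveRow(const std::vector<double> &psuc, int i, int k, double q) {
    int sz = i + 1;                     // number of gun positions in this row
    std::vector<double> prob(sz, 0.0);
    std::vector<bool> done(sz, false);
    for (int start = 0; start < sz; ++start) {
        if (done[start]) continue;
        std::vector<int> cyc;           // the cycle of j -> (j+k) mod (i+1) through start
        for (int j = start; !done[j]; j = (j + k) % sz) {
            done[j] = true;
            cyc.push_back(j);
        }
        int len = (int)cyc.size();
        // prob[C[0]] = (sum over the cycle of psuc[C[m]] * q^m) / (1 - q^len)
        double sum = 0.0, qpow = 1.0;
        for (int j : cyc) { sum += psuc[j] * qpow; qpow *= q; }  // qpow ends as q^len
        prob[cyc[0]] = sum / (1.0 - qpow);
        // walk the cycle backwards: each value follows from its successor
        for (int t = len - 1; t >= 1; --t)
            prob[cyc[t]] = psuc[cyc[t]] + q * prob[cyc[(t + 1) % len]];
    }
    return prob;
}
```

Starting from the trivial row (with no opponents left you have already won, so prob[0][0] = 1), calling makePsuc and solveRow for i = 1, …, n − 1 fills the whole table in $O(n^2)$ time.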
The last remaining question is: how do we compute the probabilities modulo $10^9 + 9$ and pick the largest one? One option is to work with fractions, but their denominators grow quickly. It’s better to compute all probabilities in the same form as the output, replacing division with multiplication by the multiplicative inverse – Fermat’s little theorem says $q^{-1} \equiv q^{\text{mod}-2} \pmod{\text{mod}}$, which can be computed in $O(\log \text{mod})$ time. We only need one inverse per cycle, so this doesn’t affect the complexity.
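A standard sketch of the exponentiation (and hence the inverse):

```cpp
const long long MOD = 1000000009;   // the prime 10^9 + 9

// Exponentiation by squaring: O(log exp) multiplications mod MOD.
long long power(long long base, long long exp) {
    long long result = 1;
    base %= MOD;
    for (; exp > 0; exp >>= 1) {
        if (exp & 1) result = result * base % MOD;
        base = base * base % MOD;
    }
    return result;
}

// Fermat's little theorem: q^{-1} = q^{MOD-2} mod MOD, for q not divisible by MOD.
long long inverse(long long q) { return power(q, MOD - 2); }
```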
These numbers are hard to compare, though. How do we pick the largest one? Well, nothing stops us from also computing the probabilities as decimals and picking the largest of those. The only situation where this trick would fail is when multiple candidates are too close to tell apart (or equal), but checking the data shows there is always a unique maximum, greater than all other probabilities by at least $10^{-10}$.
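Put together, the final step might look like this sketch, where probDouble and probMod stand for the two parallel DP tables:

```cpp
#include <vector>

// The same DP is run twice, once modulo 10^9 + 9 (probMod) and once in plain
// doubles (probDouble); the doubles serve only to locate the maximum, which
// is then reported in the required modular form.
long long pickAnswer(const std::vector<double> &probDouble,
                     const std::vector<long long> &probMod) {
    int best = 0;
    for (int j = 1; j < (int)probDouble.size(); ++j)
        if (probDouble[j] > probDouble[best])   // the maximum is unique, margin >= 1e-10
            best = j;
    return probMod[best];
}
```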