This talk addresses a risk probability minimization problem for finite-horizon continuous-time Markov decision processes. Under the assumptions that the controlled state process is non-explosive and that the set of actions available at each state is finite, we establish the existence of an optimal policy and show how it can be computed. Finally, we give two examples to illustrate our results: one shows that our value iteration algorithm can be used to compute both the value function and an optimal policy, and the other illustrates how the conditions in this talk differ from those in the previous literature.
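To give a concrete feel for this kind of value iteration, the following is a minimal sketch, not the talk's algorithm: it approximates the risk probability V(t, x, l) = inf over policies of P(accumulated cost on [t, T] exceeds l | state x at time t) by backward induction on a crude Euler time discretization of a toy continuous-time Markov decision process. All model data (the rate matrix `rates`, cost rates `cost`, horizon `T`, step `dt`, and budget grid `lam_grid`) are made-up illustrative assumptions; the talk works with the continuous-time model directly.

```python
import numpy as np

# Toy model: finite state and action sets with made-up transition rates
# q(y | x, a) and cost rates c(x, a).
S, A = 3, 2
rng = np.random.default_rng(0)
rates = rng.uniform(0.2, 1.0, size=(A, S, S))   # q(y | x, a), off-diagonal
for a in range(A):
    np.fill_diagonal(rates[a], 0.0)
cost = rng.uniform(0.5, 1.5, size=(S, A))       # cost rates c(x, a)

T = 1.0                               # finite horizon
dt = 0.01                             # time step; dt * (max total rate) << 1
lam_grid = np.linspace(0.0, 2.0, 81)  # grid for the remaining cost threshold

# Terminal condition: at t = T the accumulated cost exceeds the threshold
# iff the remaining budget is negative, so V(T, x, l) = 0 for all l >= 0.
V = np.zeros((S, lam_grid.size))

n_steps = int(round(T / dt))
policy = np.zeros((n_steps, S, lam_grid.size), dtype=int)

for n in reversed(range(n_steps)):
    V_all = np.empty((A, S, lam_grid.size))
    for a in range(A):
        for x in range(S):
            # Remaining budget after paying cost at rate c(x, a) for dt;
            # a budget below the grid means the threshold was exceeded (V = 1).
            shifted = lam_grid - cost[x, a] * dt
            stay = (1.0 - rates[a, x].sum() * dt) * \
                np.interp(shifted, lam_grid, V[x], left=1.0)
            jump = sum(rates[a, x, y] * dt *
                       np.interp(shifted, lam_grid, V[y], left=1.0)
                       for y in range(S) if y != x)
            V_all[a, x] = stay + jump
    policy[n] = V_all.argmin(axis=0)  # minimizing action at each (x, l)
    V = V_all.min(axis=0)             # one backward value-iteration step

print("approximate value function V(0, x, l) on the budget grid:")
print(np.round(V, 3))
```

Each backward step applies the discrete analogue of the optimality equation: with probability roughly q(y|x,a) dt the process jumps to y, otherwise it stays at x, and in either case the remaining threshold shrinks by c(x,a) dt; minimizing over the finitely many actions at each state yields both the value function and an optimal (Markov) policy on the grid.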