An optimal control problem for a regime-switching stochastic differential equation under continuous-time, switching, and impulse controls, with a recursive cost functional, is considered. Using the dynamic programming approach, the corresponding Hamilton-Jacobi-Bellman quasi-variational inequality is derived, and the value function is proved to be its unique viscosity solution. Due to the simultaneous presence of all these types of controls, together with the regime switching (governed by a Markov chain) and the recursive cost functional (determined by a backward stochastic Volterra integral equation), a number of technical difficulties have to be overcome, under suitable delicate conditions, to establish the continuity of the value function.
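For orientation, a generic Hamilton-Jacobi-Bellman quasi-variational inequality of the kind arising in impulse control with regime switching is sketched below; the operators $\mathcal{L}^{u,i}$ and $\mathcal{M}$, the running cost $f$, the impulse cost $c$, and the chain generator $(q_{ij})$ are illustrative notation only, not the paper's exact formulation (which in addition accounts for the switching controls and for the recursive cost defined through the backward stochastic Volterra integral equation):
\[
  \min\Big\{ -\partial_t V(t,x,i)
             - \sup_{u\in U}\big[\mathcal{L}^{u,i} V(t,x,i) + f(t,x,i,u)\big]
             - \sum_{j\neq i} q_{ij}\big[V(t,x,j)-V(t,x,i)\big],\;
             V(t,x,i) - \mathcal{M}V(t,x,i) \Big\} = 0,
\]
where, in this illustrative setting, the intervention operator acts as
\[
  \mathcal{M}V(t,x,i) = \inf_{\xi\in K}\big[V(t,x+\xi,i) + c(\xi)\big].
\]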