The opposite argument is that it is "all recruiting."
It has to be a combination of the players, the system, and the second-half adjustments. We are just too easy to come back against. The odds of us losing all of those games (not including Longwood) are .000000000026975. Those should have been the odds of Mooney retaining his job.
That's not how probability works.
You can ask: what is the probability of losing 8 games last season in which we had an 85% chance to win at some point? (Longwood, Wyoming, Old Dominion, Duquesne, St. Joe's, La Salle, Davidson, St. Louis.) Choosing a probability threshold such as 85% is necessary to make the type of argument you are making, and 85% happens to give your argument its most favorable footing.
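For reference, here is a minimal sketch of that "multiply the losses together" calculation, assuming a flat 85% peak win probability in every loss (your actual figure presumably used each game's true peak probability, so the numbers won't match exactly):

```python
from math import prod

# Hypothetical flat 85% peak win probability in each of the 8 losses;
# the real calculation would use each game's actual peak value.
losses = ["Longwood", "Wyoming", "Old Dominion", "Duquesne",
          "St. Joe's", "La Salle", "Davidson", "St. Louis"]
p_win = 0.85

# Probability of losing all 8 of these specific games. Note that this
# ignores the 13 games we WON from the same 85%-threshold pool, which
# is exactly why the number is misleading on its own.
naive = prod(1 - p_win for _ in losses)
print(f"'Multiply the losses' probability: {naive:.3e}")  # ~2.563e-07
```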
The total number of games in which we had an 85% win probability at some point last season was 21: the 13 wins plus the 8 losses listed above. We won 13 instead of the expected 21*0.85 ≈ 18. You have asserted that this difference between expected and observed wins is due to a systematic error in the win probability model (it does not account for Mooney's bad coaching). How can we tell whether the difference is due to randomness (sometimes you get 5 heads in a row when flipping coins) or to some systematic effect (Mooney's poor coaching)?
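One quick way to answer that is to simulate the null hypothesis. Here is a minimal sketch that treats each of the 21 games as an independent coin flip landing "win" 85% of the time, and counts how often a season ends with 13 or fewer wins:

```python
import random

N_GAMES, P_WIN, OBSERVED_WINS = 21, 0.85, 13
TRIALS = 200_000

# Simulate seasons in which the model is exactly right and every
# game is an independent 85% coin flip.
shortfalls = sum(
    sum(random.random() < P_WIN for _ in range(N_GAMES)) <= OBSERVED_WINS
    for _ in range(TRIALS)
)
print(f"P(wins <= {OBSERVED_WINS}) ~= {shortfalls / TRIALS:.3%}")  # ~0.8%
```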
The variance of a binomial random variable with success probability p over n independent trials (games) is np(1-p), so the standard deviation is sqrt(np(1-p)). For us that is sqrt(21*0.85*(1-0.85)) ≈ 1.64. The observed number of wins in games where Richmond reached an 85% win probability at some point was therefore about 3 standard deviations below expectation: (13 - 17.85)/1.64 ≈ -3. Under a normal approximation, a shortfall that large happens about 0.13% of the time by pure chance (the exact binomial tail, P(wins ≤ 13), comes out closer to 0.8%, as the simulation above shows). That is roughly 340,000 times more often than your method of multiplying the losses' probabilities together yields (3.83E-7% if you give us a 98% max win probability for Longwood), but still well below the conventional 5% significance threshold. So at the 85% threshold, there is evidence of a systematic effect.
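The same numbers can be computed exactly with the standard library; the 0.13% figure is the normal-approximation tail at 3 standard deviations, while the exact binomial tail is a bit larger:

```python
from math import comb, sqrt

n, p, wins = 21, 0.85, 13

mean = n * p                # 17.85 expected wins
sd = sqrt(n * p * (1 - p))  # ~1.64
z = (wins - mean) / sd      # ~-2.96, i.e. about 3 SD below expectation
print(f"expected={mean:.2f}, sd={sd:.2f}, z={z:.2f}")

# Exact probability of 13 or fewer wins out of 21 at p = 0.85.
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(wins + 1))
print(f"exact P(wins <= {wins}) = {tail:.4%}")  # ~0.83%
```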
Using other probability thresholds produces different results, though. For example, if you instead look at games where we reached a 90% win probability at some point, we performed less than one standard deviation worse than expected, which is far from statistically significant evidence of a systematic error in the model (whether from Mooney's poor coaching or anything else). To be confident there is a systematic error in the ESPN win probability model, I would want to look at many more games than a single season.
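To see how sensitive the conclusion is to the threshold choice, the same calculation can be wrapped in a function. The 90%-threshold counts below are hypothetical placeholders (I don't have the exact game counts for that cut in front of me), so substitute the real ones:

```python
from math import comb, sqrt

def shortfall_report(n_games: int, wins: int, p: float) -> None:
    """Print the z-score and exact binomial tail for winning `wins`
    of `n_games` when each game carries win probability `p`."""
    mean, sd = n_games * p, sqrt(n_games * p * (1 - p))
    tail = sum(comb(n_games, k) * p**k * (1 - p)**(n_games - k)
               for k in range(wins + 1))
    print(f"p={p:.0%}: z={(wins - mean) / sd:+.2f}, "
          f"exact P(wins <= {wins}) = {tail:.2%}")

shortfall_report(21, 13, 0.85)  # the 85% threshold above: ~3 SD, ~0.8%
shortfall_report(12, 10, 0.90)  # HYPOTHETICAL 90% counts: under 1 SD
```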