Street Statistics:

The Case Against "One-Shot Stops"


For the uninitiated, "one-shot stop" is a term popularized originally by police officer Evan Marshall, who, in subsequent collaboration with sheriff's deputy Ed Sanow, compiled official reports of shootings from law enforcement agencies around the country and attempted to derive the incidence of instantaneous incapacitation brought about by wounds inflicted by firearms. On the face of it, this seems a very worthy investigation with a wealth of useful data to support the analysis. However, to separate the effects of multiple gunshot injuries and of different ammunition types (which also seems like sound reasoning), Marshall used only those incidents involving a single injury to the torso, hence "one shot." Marshall further defined incapacitation, or "stopping," as a complete cessation of attack or flight within one or two seconds. Single-shot incapacitation within three to five seconds, or ten, or thirty, was reportedly excluded from the study sample.

Immediately, I have a problem with this stopping criterion. Apart from hits disabling the central nervous system, directly or indirectly, there is no known physiological mechanism for incapacitation other than loss of blood pressure leading to collapse, and that process always takes more than two seconds (I am discounting psychological factors entirely). I will come back to why this is important later.

Moreover, no one involved, neither the shooter nor the subject shot, can objectively assess the outcome of the hit within this time frame (this is why I exclude any psychological basis for "stopping"; no one who has been shot has time to register the hit and react to it within two seconds). To the extent that the criterion has been applied faithfully, the data are highly biased; to the extent that more than two seconds transpired before anyone realized it was over (which is most of the time, I would bet), the data are inconsistent with the argument advanced.

For this criterion to be faithfully adhered to, it would be necessary for the subject to be in full view of the shooter for the entire event. Yet, once shooting commences, the police typically continue to shoot until the subject is on the ground. The police are not trained to take a shot and wait to see what happens; they are often trained to fire "double taps". This study asks us to believe that most of the data are drawn from instances in which police officers are confronted by very close range threatening situations in full view, in which they only shot once or in any event only hit once. I don't think Marshall or Sanow have argued the scenario that way, but if not, then I question the integrity of the reported data.

Marshall's and Sanow's (now defunct) Stopping Power site was the principal advocate in cyberspace for the argument in favor of the "one-shot stop." I am willing to think that Marshall, Sanow, and their adherents are honest, intelligent, well-intentioned, and probably decent human beings to boot. I say that in preface because I want the reader to understand that my criticisms are not knee-jerk reactions fueled by some kind of quasi-religious fervor over "big bore vs. 9 mm / .357", "light and fast vs. slow and heavy", "Facklerite vs. Marshallite", or any such nonsense.

One of the adherents of Marshall and Sanow's analysis, Dale Towert, made the best-reasoned argument I have seen in favor of limiting the study to incidents of single wounds (the article is no longer posted in its original entirety). It is a compelling argument. Unfortunately, it fails to address the question of experimental integrity. By this I am not referring to the honesty of the researchers but to the strength of any conclusion drawn without also testing the counter-arguments to one's study. In the scientific world, serious researchers always test the arguments against their experiments, hypotheses, and conclusions; after all, their peers will not fail to ask these questions! Unless you also know what happens in the instances you have excluded from your analysis, and have a solid explanation of the behavior observed in those cases, you cannot rest upon the conclusions drawn from the limited dataset.

To date, Marshall and Sanow have not published an analysis (that I have seen) of the behavior of subjects receiving multiple injuries. Because medical data strongly support the view that handgun injuries are rarely fatal (about 10% of the time) and because, as I believe, instantaneous incapacitation is attributable to direct hits, or near misses, on the central nervous system, one-shot stops almost certainly include a high proportion of such center hits.

In other words, we have a deterministic conclusion: the instances in which only one shot is fired are, of necessity, the instances in which the opponent dropped in his tracks, since otherwise more shots would have followed.

Consequently, the conclusions drawn about specific calibers and ammunition natures are pretty weak indeed. As I have argued, such instant collapse is not predictable or dependable even with extremely powerful rifles and it depends almost entirely upon bullet placement, though the wound cavity does play a role, biasing the probability slightly in favor of bigger splashes than smaller ones (but not enough to get excited).

So, fundamentally, I have a serious problem with the very question posed by this analysis. I think it necessarily leads to erroneous conclusions because it answers itself. I believe that this is the chief reason that all the compiled figures have tended to crowd closer and closer to 100% as time goes on (rather than that the authors are guilty of fabricating the results). Does this also tend to show which loads are better than others? Probably so, as long as one doesn't start splitting hairs, but it doesn't tell you how much better they are than one another in any practical sense.

What it definitely should not communicate is the idea that if I am called upon to defend my life with Brand X ammunition and have time for only one pull of the trigger, there is a 91% probability that my assailant will collapse instantly at my feet. The real odds are nowhere near that high.

This is subtle, so note the distinction: what it says is that if I am in an engagement and only one shot is fired, which results in the termination of the engagement, then there is an xx% probability that this one shot will drop the assailant within two seconds. That conditional clause is crucial, because it excludes the overwhelming number of situations in which many shots are required and the assailant remains on his feet and conscious for many seconds or minutes, or never loses consciousness.
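The distinction can be made concrete with a few lines of arithmetic. All of the numbers below are invented purely for illustration; the point is only the gap between the conditional figure that gets published and the unconditional figure a defender would actually care about.

```python
# Invented numbers for illustration only: suppose 1000 documented
# engagements with a given load, of which 100 ended after a single torso
# hit, and 91 of those were "stops" within two seconds.
engagements = 1000
single_hit_endings = 100
instant_stops = 91

# What the published "one-shot stop" figure actually reports:
# P(instant stop | engagement ended with a single hit)
p_conditional = instant_stops / single_hit_endings

# What a defender with one trigger pull would like to know is closer to:
# P(instant stop) over all engagements -- a very different number.
p_unconditional = instant_stops / engagements

print(f"{p_conditional:.1%} conditional vs. {p_unconditional:.1%} overall")
# prints "91.0% conditional vs. 9.1% overall"
```

The same count of instant stops yields a 91% figure or a 9% figure depending entirely on which denominator the conditioning admits.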

A far more meaningful statistic would include all documented instances of that particular load being used, indicating the frequency that the outcome was a "stop". You could still use Marshall's criterion for a stop if you wished, but the important thing is that you would not be discarding all the data for the instances in which no stop occurred at all.
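A minimal sketch of that bookkeeping, using a hypothetical set of incident records (every tuple and count below is invented), shows how keeping the non-stop incidents in the denominator changes the figure:

```python
# Hypothetical incident records for one load: each entry is
# (hits_scored, stopped_within_2s). All values are invented.
incidents = [
    (1, True), (1, True), (1, False),   # single-hit incidents
    (3, False), (2, True), (4, False),  # multiple-hit incidents
    (1, True), (2, False), (5, False),
]

# Marshall-style figure: only single-hit incidents are counted.
single = [stopped for hits, stopped in incidents if hits == 1]
one_shot_stop_rate = sum(single) / len(single)

# More meaningful figure: every documented use of the load counts, and a
# "stop" is still judged by the same two-second criterion.
overall_stop_rate = sum(stopped for _, stopped in incidents) / len(incidents)

print(f"one-shot-stop rate: {one_shot_stop_rate:.0%}")   # 3 of 4 = 75%
print(f"overall stop rate:  {overall_stop_rate:.0%}")    # 4 of 9 = 44%
```

Discarding the six incidents that required more than one hit nearly doubles the apparent effectiveness of the load, without a single shooting having gone any differently.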

Towert attempts to defend this exercise in mathematics by saying that it isn't statistics just because it is expressed as a percentage. I beg to differ. Any fractional representation of a particular outcome in a sample of events is a probability statistic; perhaps a poorly computed probability statistic, but a statistic nonetheless. And it should mean that xx% of the time the outcome is realized. Naturally, in a single instance any outcome is possible, but the probability of the desired outcome is xx%. When that isn't the case, your data or your methods are flawed.

And we have both here. Towert, in his essay on a "New and More Comprehensive Look" at the Marshall / Sanow data (no longer posted), cites a particular load for which only 76% of the originally reported 50 shootings becomes nearly 97% of the later reported 450+ shootings. Notwithstanding the argument that I have just made about deterministic outcomes, this situation only works out if nearly all of the 400+ subsequent shootings were one-shot stops (fewer than five could fail to stop). Is that believable? Why was a trend of that strength not evident in the first 50 incidents reported? This kind of data is what inspires the reaction of the most belligerent critics. It does look very fishy. My conclusion is that one sees what one expects, and with a definition as subjective as the one employed as a pass-fail criterion for this study, the opportunity for such bias is abundant.
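The arithmetic behind that objection is easy to check. The later total is approximate (Towert reported "450+"), so the counts below are rounded from the two published percentages:

```python
# Approximate counts implied by the two reported figures: 76% of 50
# early shootings vs. roughly 97% of about 450 total shootings.
early_n, early_rate = 50, 0.76
total_n, total_rate = 450, 0.97

early_stops = round(early_n * early_rate)      # 38 stops, 12 failures
total_stops = round(total_n * total_rate)      # ~436 stops, ~14 failures

added_n = total_n - early_n                    # ~400 additional incidents
added_failures = (total_n - total_stops) - (early_n - early_stops)

# Roughly 400 new shootings can have contributed only a couple of
# new failures for the combined figure to reach ~97%.
print(added_n, added_failures)
```

A failure rate of 24% in the first 50 incidents followed by well under 1% in the next 400 is the pattern the combined figures require.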

More embarrassing, however, is the apparently highly sophisticated statistical analysis that a correspondent of Towert's applies to this sadly subjective pile of shooting incidents, resulting in a number of parameters which he has reported to six significant digits! At best, we have two, and I don't care how many incidents you have in the sample. The old joke about statistical analysis holds here: "garbage in, garbage out."
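The binomial standard error puts a floor under how many digits a proportion like this can honestly carry. A short check, assuming a stop rate near 90% purely for illustration:

```python
import math

# Standard error of an estimated proportion: sqrt(p * (1 - p) / n).
# Even hundreds or thousands of incidents leave the uncertainty at or
# near the whole-percent level, i.e., about two significant digits.
# p = 0.90 is an assumed value for illustration only.
p = 0.90
for n in (50, 450, 5000):
    se = math.sqrt(p * (1 - p) / n)
    print(f"n = {n:4d}: {p:.2f} +/- {se:.3f}")
```

With 450 incidents the one-sigma uncertainty is about 1.4 percentage points; six significant digits would require a sample many orders of magnitude larger, and a far more objective pass-fail criterion besides.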

As a final word on this subject: after three decades of intensive scientific investigation of the terminal performance of handgun bullets in the aftermath of the notorious 1986 Miami Shootout, the FBI reversed itself and in 2015 returned to the 9 x 19 mm Parabellum cartridge, with an improved 147 grain bullet, as its official duty load. The Bureau argued that no appreciable difference can be discerned between calibers in the simulated wound tracks made by full-size service cartridges using modern ammunition, nor in the statistical results from shooting incidents; that FBI training demonstrates that personnel shoot more accurately with the 9 mm; and that the increased magazine capacity of the 9 mm is a distinct advantage, an important consideration.

This web page was designed by HTL
Mail to: Ulfhere at Rathcoombe.net

Copyright 2000 - 2023 -- All Rights Reserved