Another blog post I must caveat with ‘I don’t know a huge amount about this, but…’. I might have to invent an acronym that asserts this for all future posts, a sort of defensive skin to deflect the more obvious criticisms.
Well, let’s imagine a scenario. You build a product that can be used in an almost infinite number of ways by your customers: a car that can be driven fast or slow, in the city or on the open road, for long journeys or short trips, for example. Now imagine that a well-meaning person decides that your car mustn’t be damaging to the world we live in, that it should be low impact. They design a test that will prove whether your car is low impact by picking one particular example of how it can be used and using that as the benchmark: a set speed or sequence of speeds and a set duration. This protocol is widely publicised.
You know that your car has to undergo this test, so you work night and day to make sure that when it’s driven to those parameters it will pass. What this means is that you focus in on your objective: my car must pass this test.
Which is not the same as ‘my car must have a low impact on the environment’, because the test is not representative of how the car would ever actually be used. It’s a formula designed to be repeatable and comparable with the cars your competitors make. It’s a scientific assessment, pure and simple.
Now, you’re no longer making cars; you’re making elite athletes. Once again you need to make sure your product is clean, that it compares favourably with the rest of the competition. You do this by submitting your athlete to tests: scientific, repeatable tests performed under conditions you know will be consistent. You focus your efforts on ensuring your athlete always passes those tests.
But your athlete doesn’t need to be clean all the time. When the test isn’t being performed the athlete can be as dirty as you like; you just need to ensure that when they are tested they avoid a positive result.
In both cases it’s easy to see that the burden has shifted. By making the test the thing you need to pass, you dilute the purpose of the test in the first place; you lose sight of the desire that cars and athletes run clean, that regardless of when and how we assess them they will always be ethically sound.
VW, sports federations and coaches should clearly have the moral fortitude to see that the test is not ‘the thing’; the aspiration for a universally clean product is the objective. However, the testers and test setters have a far more significant role to play than many of us have so far assumed. Testers and regulators must design, facilitate and communicate assessment regimes that reflect a wider range of behaviours, regimes that communicate less about simple pass and fail and more about a universal, undeniable commitment to provable fairness, any time, anywhere.
More cars and athletes will be shown to have done just enough to pass the test, and we’ll admonish them for not being clean outside of those tests. We must at that point look hard at the people who let this scenario develop. Right now I don’t really think badly of VW for what they did, and by virtue of the fact that it did happen, neither did quite a lot of people at VW. The fact is they worked damn hard to build an engine algorithm that produced a fantastically efficient output under the laboratory test conditions. That the parameters didn’t represent real-world usage was not their fight. So when a coach and an athlete conspire to beat a test, can we empathise and understand that it’s the test setters who have brought this situation about, albeit for very noble and ethically sound reasons?
I don’t have the answer, of course, but I hope the question itself is worth considering.
With thanks to Edward Borrini for inspiring the original thought.