Tuesday, February 23, 2016

More on journal quality control and the forward march of science

There is a view that whereas scientists are human, with all the cognitive and moral foibles that entails, Science as an institution is able to rise above these and, in a reasonable amount of time, find the truth. The idea is that though human scientists are partial to their own views, shade things to their advantage, are motivated by ambition, hubris, lucre and worse, and might therefore veer from truth in the direction of self-advancement and promotion, the institution of Science is self-correcting and the best system we have for finding out what reality is like (think of the old Upton Sinclair quip: "it's difficult to get a man to understand something when his salary depends upon his not understanding it"). Or, to put it another way, the moral and cognitive virtues of the ideal scientist are beyond human scientists but not beyond the institution as a whole.

This is a nice story. But, I am skeptical. It relies on an assumption that is quite debatable (and has been debated); that there is a system to scientific inquiry, a scientific method (SM). If there is such, it has resisted adequate description and, from what I can tell, the belief in SM is now considered rather quaint within philosophy, and even among practicing scientists. This is not to deny that there are better and worse arguments in favor of doing things one way or another, and better and worse experiments and better and worse theories and better and worse data. My skepticism does not extend to an endorsement of the view that we cannot rationally compare alternatives. It is to deny that there is a method for deciding this independent of the issues being discussed. There is no global method for evaluating scientific alternatives, though locally the debate can be rationally adjudicated. It is in this sense that I doubt that there is SM. There are many SMs that are at best loosely related to one another and that are tied very closely to the insights a particular field has generated.

If this is so, then one can ask for the source of the conviction that Science must overcome the failures of the individuals that comprise it. One reason is that conversation generally improves things and Science is organized conversation. But, usually the idea goes beyond this. After all, Science is supposed to be more than one big book group. And what is generally pointed to is the quality control that goes into scientific discourse and the self correcting nature of the enterprise. Data gets refined, theories get corrected using experiment. The data dependence and primacy of experiment is often schlepped out when the virtues of SM are being displayed.

There is, of course, some truth to this. However, it seems the structures promoting self-correction might be quite a bit weaker than is often supposed. Andrew Gelman discusses one of these in a recent post (here). He notes that there is a very high cost to the process of correction that is necessary for the nice little story above to be operative. There are large institutional forces arrayed against it, and individual scientists must bear large costs if they wish to make corrections. On the assumption that the virtues of Science supervene on the efforts of scientists, this suggests that the failings of the latter are not so easily filtered out in the former. Or, at the very least, there is little reason to think that they are filtered out in a reasonable time.

There is a fear among scientists that if it is ever discovered how haphazard the search for truth actually is, this will discredit the enterprise. The old "demarcation problem" looms as a large PR problem. So, we tell the story of self-correcting science, and this story might not be as well scientifically grounded as we might like. Certainly the Gelman post highlights problems, and these are not the only ones we know of (think Ioannidis!). So, is there a problem?

Here's my view: there is no methodological justification of scientific findings. It's not that Science finds truth in virtue of having a good method for doing so; rather, some science finds some truth, and this supports local methods that allow for the finding of more truths of that kind. Success breeds methodology that promotes more success (i.e., it is not successful method that leads to greater truth). And if this is right, then all of the methodological fussing is beside the point. Interestingly, IMO, you tend to find this kind of methodological navel-gazing in precisely those domains that seem least insightful. As Pat Suppes once put it:
It's a paradox of scientific method that the branches of empirical science that have the least theoretical development have the most sophisticated methods of evaluating evidence.
This may not be that much of a paradox. In areas where we know something, the something we know speaks for itself, and does so eloquently. In areas where we know little, we look to method to cover our ignorance. But method can't do this, and insight tolerates sloppiness. Why have we made some scientific progress? I suspect that luck has played a big part. That, and some local hygiene to support the small insights. The rest is largely PR we use to make ourselves feel good, and, of course, to beat those whose views we do not like.


  1. Your link to the Gelman post does not really lead to the Gelman post, but to your Gmail inbox, which is inaccessible to us (fortunately).

    1. Sorry. I really am bad at this editorial stuff. Anyway, I believe that I fixed it.

  2. "It's a paradox of scientific method that the branches of empirical science that have the least theoretical development have the most sophisticated methods of evaluating evidence."

    I can see how this might be true within cognitive science (I assume you are comparing acceptability judgments with typical, complicated psychological experiments), but I am more skeptical of this idea across all of science. Are the Large Hadron Collider and the Kepler telescope not sophisticated methods for evaluating evidence, and don't we have quite elaborate theories in particle physics and astrophysics?

  3. >>”It's a paradox of scientific method that the branches of empirical science that have the least theoretical development have the most sophisticated methods of evaluating evidence.…In areas we know something, the something we know speaks for itself, and does so eloquently. In areas where we know little, then we look to method to cover our ignorance.”

    This seems a bit cynical for my tastes :). I’m guessing there is a less cynical view. We know more in areas where the data is typically more consistent/robust, therefore those areas require less experimental control/sophistication. And we know less in areas where the stuff is messy/complicated/difficult to control, so we need fancier methodology. So, there is perhaps some truth to the correlation Suppes observes, but it seems to me that there is a more reasonable explanation than “cover our ignorance” for that correlation.

    Of course, restating it my way suggests we should continue trying to develop fancy methodology - we really don’t have a choice, when the data is messy, I think.

    1. I think that Suppes's paradox says something interesting. When you know little, then the only thing you have to go on is the "data." I put this in scare quotes because in many cases one doesn't even know what the "data" is, or, more accurately, what the relevant data is. So as this is all we have, we fuss over it, caress it, massage it, and try to get it to do something. We are of the opinion that if only we get the data right, then the insights will flow. I think that this is largely incorrect, and that it reflects a wrongheaded Empiricist view of knowledge and how it develops. The hard part is finding some kind of cognitive traction, and this comes when some bit of theory makes contact with some bit of non-obvious "reality." When that happens, then things take off. Of course, data is always important, but once you know something the fussing is directed and has a purpose and is easier to manage. The problem with lots of "science" is that we don't know anything deep, and so the Empiricist temptation is very strong: the data is all that we have, and fussing makes you look busy and serious. Plus the current tools are impressive looking. So, I agree with Suppes' observation that fussiness is inversely proportional to insight, and that this is no accident. In fact, one might take fussiness wrt method as a mark of ignorance.

      Last point: is developing methodology a good idea? Locally yes, generally no. Refining and elaborating those local methods that have worked can be useful. Some forms of argument have proven their worth in a particular domain. Some kinds of data have proven to be insightful. Finding these and understanding their general features is a great thing to do. But concern about methods IN GENERAL is, IMO, futile and a waste of time. If this be cynicism, so be it.