The conceptual foundations of decision-making in a democracy
(2003) – Peter Pappenheim – Copyright protected
[page 273]
J&A to Part Three:
1) Truth and falsity are essentially (regarded as) properties or classes of unambiguously formulated statements of a language, the ‘object-language’. (To be unambiguously formulated, the relations between objects in that statement must conform to a formal language such as logic or mathematics.) To discuss classes of this object-language, for instance the classes of true and false statements, we must be able to speak freely about this object-language in another language which is about languages, a ‘meta-language’.
2) Reality does not conform to language, it just ‘is’. Reality therefore cannot be part of the object-language, it can only be described by it. In fact, that is the purpose of the object-language.
3) To confront statements of the object-language with reality, we need a (meta-)language which contains both the object-language and all existing facts, all reality. (We do not need to actually describe these facts in this meta-language; it is sufficient that the meta-language can refer to, point to, these facts in whatever form is suitable.) To be able to decide on truth and falsity, the meta-language must be a formal language.
4) If in the meta-language we can agree on the meaning of both the statements of the object-language and the facts to which these statements refer:
5) A statement in an object-language will then correctly be qualified as true if, after translating all facts referred to in that statement into the meta-language, we indeed find that it corresponds to a fact. If in the meta-language we find no fact corresponding to the statement, then we will qualify the statement as not true, i.e. as false. (We presume that care has been taken that the meta-language possesses the operation of negation.)
A statement thus is qualified as false not because it would correspond to some odd entity like ‘a false fact’ which would be impossible to define, but because it does not stand in the particular relation of ‘corresponding to a fact’ with any of the expressions in the meta-language which represent reality. Such a statement does not correspond to anything real, for we have taken care that all reality has an expression in the meta-language, for instance in the form of a variable. The statement itself is part of reality for, if put into writing, we can point at it. Because - per definition - the meta-language contains all of reality and all legitimate (conforming to the formal language) statements about that reality, whatever the statement is intended to refer to exists only if it is part of the meta-language. Using Popper's example: if we can agree on the facts about the material the moon is made of (rocks etc.) then we can agree on the truth (or falsity) of the statement: ‘the moon is made of green cheese’. With hindsight, such an agreement is of little consequence. But if the statement had been made before we could sample material from the moon, then - having recovered such material and agreed on its nature - we would be able to decide that the statement was false, that it does not correspond to any reality about the material of the moon.
6) Once we possess a meta-language in which we can speak both about the statements of the object-language and about the facts to which they refer:
If we can state in this way the condition under which each statement of the object-language corresponds (or not) to the facts, we can define, purely verbally, yet in keeping with common sense: ‘a statement is true if and only if it corresponds to facts’.
7) This truth is an objectivist, absolute, notion of truth, but it gives us no certainty that our decision is correct, no assurance that if we in this way qualify a statement as true or false, it will really be true or false. For it is subject to the correctness of our agreement about the facts referred to in the meta-language. We may agree that the animal in front of us is a unicorn, but that statement is true only if it really is a unicorn; the agreement only provides a conventional criterion of truth.
On the contrary, Tarski showed that if the object-language is rich enough to be of any real, practical use, for instance if it contains arithmetic, there cannot exist any general criterion for truth and falseness. It applies only to the truth of singular statements such as ‘this is a black swan’. As we cannot be sure that our meta-language contains, and correctly points to, all facts of which reality is made up, any conclusion that a statement is false always implies ‘to the best of our knowledge’. Whenever we agree about the facts to which our language refers, our decision about the truth of a statement in the sense of correspondence to these facts does not require any further input from us and is in that sense totally objective. If we agree on what is black and what is a swan, and if we agree that the creature in front of us is a swan and that its colour is black, we must accept as ‘really’ true the statement ‘black swans do exist’ and as false the statement ‘all swans are white’. But such truth or falsity remains subject to the condition that no-one is committing an error or is lying, and of that we can never be sure. It is also conventional because formal languages are conventions.
SUMMING UP: Tarski has shown that truth and falsity can be part of an unambiguous language, that they are legitimate concepts of it. But the acceptance of the truth of a statement will always remain conditional: ‘we are seekers for truth but do not possess it’. As evident as that conclusion may seem to those who have no knowledge of the philosophical discussions about truth, Tarski's work was necessary to justify the use of the common-sense concept of truth without which there could be no rational and effective social decision-making; it also establishes the conventional character of any decision about truth and the conditions under which such a decision can be justified.
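To fix ideas, the definition quoted in 6) can be given the shape of Tarski's well-known schema; the lettering below is mine and serves only as an illustration:

$$ X \text{ is true} \iff p $$

where $p$ stands for any sentence of the object-language and $X$ for a name of that sentence in the meta-language, as in: ‘snow is white’ is true if and only if snow is white. The left-hand side speaks about a statement, the right-hand side states the corresponding fact; only a meta-language that can do both can express the equivalence.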
B) TARSKI'S TRUTH AND POPPER ON SCIENTIFIC THEORIES. Popper (1980) argues that Tarski's notion of truth can apply to scientific theories. If - in accordance with logic - we deduce from a theory (all swans are white) that a basic statement (there is a black swan) must be false, and if we agree on the truth of the colour and kind of a bird to which we are pointing (‘it is a black swan’), we can conclude from that statement that the ‘theory’ is false and that this conclusion is objective in the above sense. We can never conclude that it is true, because ‘all’ does not refer to a fact: we cannot observe or point to all past, present and future swans. However, the swans, their colour and the incompatibility of a black swan with the theory are legitimate facts in Tarski's scheme.
To be true in the Tarski meaning, the second law of thermodynamics must correspond to a fact which has a name in the meta-language containing all of reality. As a scientific theory held by physicists, the second law of course is part of that language. But if understood in that way, the statement would be a tautology: ‘the second law is the second law’. So whatever fact the theory is supposed to assert must be something else, must exist outside the mind of physicists. What is that something else? It must be some kind of agent ensuring the increase of entropy in any thermodynamic process. If so, we could - by producing that agent - prove the truth of our theory.
Obviously, Popper does not see scientific theories as pointing to some agent. To ‘objectivate’ scientific theories, he introduces the notion of a factual ‘regularity’ of nature. But any regularity either asserts just a repetition of facts, or it implies the existence of an agent producing that regularity, something ensuring that the regularity will continue into the future. Popper emphatically rejects the first and states that a statement which only describes past repetition cannot qualify as a scientific theory. To save the objectivity and the relevance of both his demarcation criterion and the refutation through a falsifying test result, Popper states that:
a) Scientific theories should be universal; they must apply to all specific cases of the phenomenon which they intend to explain.
b) Such regularities follow from a ‘law’ of nature expressed by the theory, which thus corresponds to a fact.
ad a) Popper states, as I would, that the unconditional use of the universal quantifier ‘all’ in a statement makes it doubtful that the statement is a statement of fact to which Tarski's concept of truth can be applied. Hence Popper correctly concludes that we can prove the truth only of singular statements and that we can prove the falseness of universal statements (but never their truth) by proving the existence of a falsifying singular statement. Of all other kinds of statements we can prove neither truth nor falseness.
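That asymmetry can be written out in the notation of predicate logic; the symbols are added here merely as an illustration:

$$ \mathrm{Swan}(a) \wedge \neg\mathrm{White}(a) \;\vdash\; \neg\,\forall x\,\bigl(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\bigr) $$

Accepting the singular statement about the individual bird $a$ suffices, by modus tollens, to refute the universal statement; conversely, no finite number of white swans entails the universal statement over an open domain.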
We then can logically prove the falseness of such a universal statement as ‘all swans are white’ by finding a coloured swan. But we could also have proved its truth by looking at all swans and finding them all white. The only problem is practical, namely to make sure that we have indeed looked at all swans. What we really need - but cannot do - is to extend that qualification to future swans, and that applies to both the falsification and the corroboration. For it may very well be that we found a black swan, but that it was the last one of its kind, so that the statement ‘all swans are white’ will be true in the future... until evolution introduces a swan of a different colour. Given the vagaries of reproduction we can expect to regularly find individual off-colour swans, some of which might benefit from that new colour and become a new subspecies. The falseness of the assertion then is a foregone conclusion. Referring to Tarski then does not solve the problem of conclusive decidability, even in a negative sense, of scientific theories.
ad b) I hold that a theory, which always contains abstract concepts, does not fall under the scope of Tarski's concept of truth, and have refuted (in J&A, 3b.3, p. 296) Popper's assertion of the objective, observer-independent nature of these concepts. Various people have objected to Popper's application of Tarski's concept of truth to scientific theories because these are far more complex than the simple examples which Tarski used (snow is white), for instance Bronowski (Schilpp, p. 626). Popper has given his answer, arguing that although scientific theories are indeed far more complex, they do not essentially differ from Tarski's simple examples.
This is where I disagree. Scientific theories are abstract constructions. They are (largely hypothetical) explanations of how and why the events we experience come about. Tarski nowhere gave any indication that his concept of truth might apply to such abstract hypothetical entities. The burden of proof therefore rests with those who propose such an extension, and to my mind Popper has failed to provide it.
I have found just one such attempt: ‘in addition to explaining facts’, says Popper (Schilpp, p. 1094), ‘theories might also describe facts; strange facts, highly abstract regularities of nature’. He seems to confuse the actual regularity (in the sense of recurrence) with the ‘law’ which is supposed to generate this recurrence. Scientific theories intend to describe this ‘law’, not the recurrence. They are a mental construction by which we attempt to explain observed recurrence and, more important, justify the expectation of future recurrence. We assume that recurrence is the consequence of regularity in the sense of ‘obedience to a law of nature’.
The expectation of recurrence can be tested against facts: that is what experiments are all about. But by that we do not test the explanation itself, only its claim to universality, its extension to events not part of the history used to build the theory (in determining its parameters). It tests its worth as a basis for deciding about facts unknown to us; it does not engage the explanation itself. The theory could be ‘Tarski-true’ only if it postulates the existence of some real and autonomous object, a force (or a Godly command) which forces reality to behave the way it does. Our theory must be considered as a description of such a fact which enables us to point at it in the meta-language; the theory itself cannot be that fact.
Popper constantly treads a fine line between holding on to his concept of the objective truth of scientific theories and my instrumental view. In ‘Objective Knowledge’, p. 318, he writes:
‘Although we have no criterion of truth, and no means of being even quite sure of the falsity of a theory, it is easier to find out that a theory is false than to find out that it is true (as I have explained elsewhere). We have even good reason to think that most of our theories - even our best ones - are, strictly speaking, false; for they oversimplify or idealize facts. Yet a false conjecture may be nearer or less near to the truth. Thus we arrive at the idea of nearness to truth, or of a better or less good approximation to the truth; that is, at the idea of verisimilitude.’
Obviously I agree.
Popper did not address the question: what exactly does this method of determining verisimilitude (empirical content) really measure, what can any method using tests measure? All measuring is determined by its reference, by a basic yardstick. Verisimilitude in Popper's concept measures a distance to a ‘true’ theory, which presumes that such a true theory exists but is not known to us. If such a theory is to be true, it cannot contain any ‘oversimplifications’ nor ‘idealisations’ of facts. The only ‘theory’ of which we can with confidence state that it will meet this requirement is reality itself, i.e. the actual facts and events. Scientific theories are interesting precisely because they are oversimplifications and idealisations, and thus per definition untrue in the Tarskian sense. What we test when confronting a theory with reality is not its correspondence to some hypothetical ‘true’ theory, but its use as a simplification of reality for our decision-making. What we determine, at least theoretically, by tests, what verisimilitude really refers to, is Popper's ‘empirical content’ of a theory, not the explanation itself. The ‘objective’ nature of our scientific knowledge lies not in some theoretical correspondence with reality, but in the very real consequences of taking the wrong decision when applying it: there is nothing subjective in the collapse of a bridge constructed on the basis of a theory which has led to wrong conclusions about the strength of a specific construction.
My main contention in PART THREE about Truth is that scientific theories, certainly the most interesting ones, are not (intended as) facts in the sense of Tarski's theory of truth but are a means to organise and control our conception of reality, for instance for decision-making. All such theories contain abstract concepts. If we reject - as I argue - the autonomous existence of world three objects, then we must also reject Popper's claim to total objectivity of the application of Tarski's concept of truth (as correspondence of statement to fact) to scientific theories.
Freeman on Objective Knowledge. 3a.2)
The notion of factual objectivity versus procedural objectivity can be found in an article about Peirce by Eugene Freeman (Schilpp, p 464-465). He writes:
‘I make a distinction between what I call factual objectivity and rule objectivity. The former is ontological and involves conformity to reality or facts. The latter is epistemological and involves conformity to rules established by fiat or social agreement. Factual objectivity is closely related to the ordinary language sense of objectivity, which presupposes the realistic (but uncritical) distinction between mutually exclusive ‘subjects’ (or selves) and ‘objects’ (or not-selves). If we disregard for the moment the practical difficulties of reaching ontological and epistemological agreement as to where the demarcation line between the self and the not-self is to be drawn, we find the ordinary language meaning of objectivity, as given for example in Webster's unabridged dictionary, quite instructive. ‘Object’ is defined in one context as ‘The totality of external phenomena constituting the not self’; ‘objectivity’ is defined derivatively as ‘the quality, state or relation of being objective’; and objective, in turn, is defined as ‘something that is external to the mind’.
Further on, he writes:
‘In order for objectivity to be factual objectivity, i.e. to have factual content, the demarcation between self and not-self must be ontologically correct - the line must be drawn between them in what as a matter of fact is the right place. It must also meet whatever epistemological requirements are necessary to give us the requisite assurance that we know that it is correct. But, whereas philosophers are able to agree that what is ‘inside’ the self is subjective and what is ‘outside’ is objective, they are not always able to agree on where the line of demarcation between self and not-self is to be drawn, because of the enormous variation in their ontological and epistemological assumptions. Thus what is objective for one philosopher may be a mere figment for another.’
Freeman goes on to draw the conclusion that factual objectivity is a chimera, and that all we can aim at is rule objectivity. His conclusion is logically correct, but follows only from the requirement that factual objectivity of a statement be ontological. As explained in the chapter about the subjectivity of any information process (p. 55), Freeman's conclusion applies to any statement and thus robs the term ‘objectivity’ of any discriminatory power.
If we accept the nature of the information process (PART TWO B) and its inevitable functional character, we will not expect a qualification like ‘objective’ to have any ontological basis or to define an unequivocal, absolutely precise demarcation line between elements of the population; such distinctions are a means for apprehending reality by our conscious thinking, not a part of that reality. As long as it adequately fulfils the function we have in mind, the fact that a distinction such as ‘self versus not-self’ and ‘subjective versus objective’ cannot be ontological does not provide a sufficient reason for rejecting it. In the chapters concerned, it has been shown that these concepts can provide useful and philosophically legitimate tools if applied to an information process. ‘Subject’ and ‘object’ then refer to entities which can (but need not) have the distinct and autonomous existence required by an ontological basis.
Against Feyerabend's Anarchy. 3b.1)
SUMMARY: Because no method can claim to be totally objective and conclusive and because scientists do not generally and consistently follow a method, Feyerabend rejects any method for evaluating scientific theories and advocates total anarchy in this field.
I agree with the two premises, but they do not lead to the conclusion that we must reject any method. They only justify the conclusion that a decision for or against a method like Popper's evaluation of scientific theories is not dictated by any absolute necessity nor any authority above man. But the same can be said about Feyerabend's rejection of any method: he surreptitiously and cleverly extends his refutation (of the claim of objectivity and of conclusiveness of any method) to a refutation of the method itself; his radical anarchism is based on a gross breach of the rules of argumentation.
Man is, and should be, just as free to use method if that suits his purpose as he is free to forego it. No appeal to freedom can justify a ban of any method for the evaluation of scientific theories to which we have agreed in a democratic procedure because we expect it to improve our decision-making. Below, I will first refute his arguments for total anarchy and then present some valuable concepts which he introduces.
A) FEYERABEND'S ARGUMENTS DO NOT JUSTIFY RADICAL ANARCHISM. Feyerabend's book ‘Against Method’ I found very stimulating, and his assertions about the practice of science look sound enough to me. There is just one thing wrong with them: as said, they do not lead to radical anarchism in the field of evaluation of scientific theories. In short, his arguments are:
- The actual behaviour of scientists refutes any claim that they follow a common and well-defined method.
- No method for the evaluation of scientific theories can be totally grounded in logic or fact, no method is absolutely objective.
- Acceptance of a unique and immutable method would ipso facto prevent any scientific progress except for marginal improvements of accepted theories.
- A pluralistic science is necessary both for progress, and for the unhindered development of individuals.
Quoting the last sentence of chapter 15 of his book, his major conclusion is:
‘And Reason, at last, joins all those other abstract monsters such as Obligation, Duty, Morality, Truth and their more concrete predecessors, the Gods, which were once used to intimidate man and restrict his free and happy development: it withers away.....’
Readers of Feyerabend's rich book will understand that one cannot do justice to its conclusions in a few sentences. The above quotations will do because I do not take issue with these statements; on the contrary, they are totally congruent with what has been said in my book.
The reader must by now know that I heartily concur with Feyerabend that abstract concepts should not be the masters of man, but his servants. So let us join Feyerabend in his celebration of the demise of Reason with a capital R, of a Reason above human authority, of Reason as an instrument of repression. My chapter on Popper's world three objects was written precisely to do away with capitals for abstract human concepts in general, and reason and truth in particular.
But I emphatically object to the position which Popper, Feyerabend and so many other philosophers take, namely that it is impossible to find a passage between the Charybdis of absolutely objective methods and the Scylla of relativistic anarchism, a position which has never been substantiated by any argument worthy of that name. Such a passage exists. It is provided by a qualified instrumental view of knowledge, namely the functional one. The rejection of any kind of functionalism does not find its origin in the theories of these philosophers, but is due to an acute allergy to either relativism or method.
The abstract concept of instrumentalism covers a wide range of views. Before taking a stand on it, we should therefore ask: what kind of instrumentalism are we talking about? A blanket rejection of instrumentalism à la Popper and Feyerabend is as unjustifiable as a blanket acceptance would be.
The concept that must be rejected is the ontological instrumentalism involved in what Nicolai Hartmann calls ‘Teleologisches Denken’, and which he adequately refutes in a book of that title. It is Instrumentalism with a capital I, the view that everything can be explained, and exists only, in function of some purpose, goal, end-use. It should join all other ‘monstrous’ abstract concepts written with a capital. A scientific theory is not ‘just a tool’, nor ‘just’ anything else: it is a scientific theory, period.
But rejecting instrumentalism as an explanation of the existence of an item in no way precludes the view that once it has come into being, it can have a function which may be an important property of that item and that its persistence can often be explained by the function it fulfils. As explained in PART TWO, chapter ‘Some remarks about functionality’, p. 44, that is the rule in the living world. If so, function must be taken into account. If the phenomenon we study is an information process of a living creature, then function will even be its prime aspect and the most obvious, ‘natural’, point of view from which to evaluate it.
The functional view of thought is so ingrained in our common sense and common language, we are so used to assuming that somebody who thinks does so with a purpose, that we call any purposeless action or speech ‘thoughtless’. What we really mean with this qualification is not that the person in question did not think at all, nor that we must always act or speak with a purpose in mind. When we say ‘thoughtless’, we mean that the person in question did not adequately consider the consequences of his act or talk. We clearly consider the expected result of one's action or communication an adequate, ‘natural’ basis for evaluation. The criterion for evaluating these consequences is the expected result as seen from the point of view of the actor, in the light of his objectives, whatever they may be, which implies a function of the information process in reaching these objectives.
A proper starting point for discussing methods for the evaluation of scientific theories might be the description of the kind of regularities which we can detect in the conduct of scientists. We probably will find that many of them consider it appropriate to confront their theory with facts whenever that is not too difficult. These same scientists also might not always consider such tests to be absolutely conclusive, especially if they refute their pet theory. The conclusion from such observations will be that most scientists do not act like obedient disciples of either Popper or Feyerabend, while a functional view of method can explain what scientists actually do, or at least profess to do. Why does Popper reject such a qualified instrumentalism, and why does Feyerabend not even consider it?
Popper motivates his efforts to counter instrumentalism by the fact that from an instrumental argument we cannot deduce a conclusion that is generally valid: such conclusions will always be conditional upon the (subjective) acceptance of a common objective. A conditional method cannot provide the weapon he wants for preventing the perversion of science by demagogues and other assorted scoundrels or misguided idealists.
Feyerabend does not reject method if used by an individual for his own purpose. His arguments aim at the situation where an individual (or a group) endowed with the required power appropriates for itself the authority to prescribe what method another individual must use. He also objects to any method which would definitely and absolutely disqualify a scientific theory. I share both objections. But such arguments aim not at method, but at the authority which would impose the application of a method and at considering its verdicts to be final. The presentation of Feyerabend's arguments suggests that this confusion between method and authority may have been intentional. For the correct way to put the problem is quite simple and does not lead to his anarchism if we accept the functionality of method and the conditionality, conventionality, that goes with it.
Clearly this conditionality, conventionality, is the main cause for both to reject any kind of instrumentalism. For that conditionality invalidates their purpose of making mandatory, because rationally inevitable, their own basic political philosophy (Popper's liberalism of the enlightenment versus Feyerabend's Hegelian Marxism). Both freedom-loving philosophers want to impose their view on us, not as a personal and thus subjective choice, but as a totally objective consequence of reality. They achieve this result by putting one concept above human authority, namely Truth in the case of Popper and Freedom in the case of Feyerabend, both with a capital. Popper's absolute Truth is refuted elsewhere. Here I will deal with Feyerabend's concept of Freedom.
Surely we cannot deny that we achieve the maximum freedom possible if we consider and use all abstract concepts, including freedom, as tools that should serve us at our discretion and should never become our masters? Freedom cannot then claim any authority which we would deny to reason, duty, morality and truth. For as soon as we do, as soon as we start writing any of these with a capital, and serving it through the intermediary of priests who claim for themselves the right to impose upon all the meaning they attach to it, it grows into one of the monsters so dreaded by Feyerabend.
If the real purpose is to preserve the autonomy, the freedom, of individuals, then it pertains to these individuals to choose for or against method and - if they choose for method - to decide for which method. If a freedom-loving individual proposes a social venture requiring a decision on the acceptance of a method for deciding about scientific theories, the first question to ask is ‘who should have the authority to make such a decision?’ Freedom mandates that each of us should be allowed to answer that question according to his own autonomous views, free of coercion. The answers to the question of authority may range from ‘nobody’ (anarchists), ‘all of us’ (democrats) to ‘the church’ (theocrats) etc. That subject has been discussed in Part One, chapter ‘Norms in a democracy’ (p. 23). As explained in Part Five, the use of deceitful argumentation is an exercise of power to manipulate the decisions of individuals against their own objectives, a form of coercion. It is a sign of disrespect of the autonomy of individuals, and of democracy as an institution. Specifically, it is a cause for the disqualification of the argumentation in democratic decision-making.
Back to ‘Against Method’. What Feyerabend rightly exposes as barren and oppressive is not method and reason in general, but their hegemony. On page 179/8 he writes:
‘Without “chaos”, no knowledge. Without a frequent dismissal of reason, no progress. We must conclude then, that even within science reason cannot and should not be allowed to be comprehensive and that it must often be overruled, or eliminated, in favour of other agencies. There is not a single rule that remains valid under all circumstances and not a single agency to which appeal always can be made.’
Quite so. But can one, from these and similar considerations, conclude, as he does, that anarchism is necessary? To answer that question we must clarify what we mean by anarchism. For all the arguments that Feyerabend brings to bear on the subject only lead to the conclusion that reason, rules and method should not be permitted to enslave us, that they are not absolute. If that is to be called anarchism, it will be evident from my criticism of Popper that I also am an anarchist and that my brand of instrumentalism is totally consistent with such a view. It admits rules, methods, norms etc. only as long as we accept them as free men. Such ‘tolerant’ anarchism is equivalent to the rejection of any a priori authority above the human individual as expressed by the democratic principle. It is quite different from radical anarchism which dismisses all limitation of individual freedom by rules, norms or method, however arrived at.
To which one does Feyerabend subscribe? He is a self-confessed ‘out-and-out anarchist’ who will use - as he writes on page 214 - ‘an irrational theory falsely interpreted’ to further his cause. He argues that there can be no totally conclusive and objective method for the evaluation of scientific theories and that a certain measure of anarchism is inevitable. His arguments only invalidate any claim of universality and total objectivity for both a method and its application and only justify the position of the tolerant anarchist. He then claims to have proved the position of the anarchist, but now gives that word the meaning of the radical anarchist who rejects the use of all method. He never shows how from the rejection of total universality and objectivity we can deduce the rejection of method in general. Indeed, one cannot. The implication that the first leads to the second is a breach of the basic rules of deduction and smacks of deceit when perpetrated by a philosopher of science.
And he is consistent in this breach of rules of deduction. From the statement that science is not the only source of knowledge, he correctly concludes that we cannot justify a universal preference for scientific knowledge above other types of knowledge such as myths. But he implies - without arguments - that this conclusion also applies in case of a well-defined problem such as the choice of an economic theory in social decision-making when dealing with unemployment.
Because other methods of gaining knowledge play a role in scientific discovery we must - according to Feyerabend - abstain from discriminating between scientific method and other methods. Again a form of argumentation which is at odds with elementary logic. It only invalidates a claim to absolute priority for the scientific method, irrespective of circumstances. It does not preclude the possibility that scientific statements may allow a more objective evaluation than the others, and the same freedom advocated by Feyerabend allows us to follow a preference for objectivity if that suits our purpose.
Some kind of ambiguity seems to creep into Feyerabend's work where he uses the expression ‘sound knowledge’. First, he says, about a certain view of the relation of man to his universe: ‘Today... we can say that this was a correct guess’. Can we? What means of evaluation, what criteria, does he leave us to qualify knowledge as ‘sound’, a guess as ‘correct’? We have to page forward a good deal before we find that the closest he comes to an evaluation method is voting. A sound theory, a correct guess, is the one that manages to collect the most votes! Scientific theories in his view require no method of evaluation different from myths or religions. Test results to Feyerabend are just a means for gathering votes. As a fellow democrat, I have just one question for him: if we feel that we need them, why are we not allowed to decide by democratic means such as voting on the methods by which we want scientific theories to be evaluated?
As said, Feyerabend also introduces some valuable concepts. We will very briefly review three of them, and show that they too need not lead to radical anarchism.
B) NEW THEORIES OFTEN LACK TESTING METHODS. In chapter 12 Feyerabend states that really new theories will usually fail tests, if those tests were devised for the established theories. To develop meaningful tests for such a new theory requires new skills and new auxiliary (testing) theories whose development requires time. Until these are available, we must save the new theory. This cannot be done on the basis of available factual evidence.
The history of science points to the pervasiveness of that problem. It is compounded by the one exposed by T. Kuhn (1976), namely that new theories are difficult to fit into the ongoing practice of science, especially if they involve a new cosmology (paradigm). But acknowledging the problem does not lead to the conclusion that - if the present practice of evaluation is inadequate to deal with a new theory - only ‘irrational means remain such as propaganda, emotion, ad hoc hypotheses, and appeal to prejudices of all kind.’ Why should blind faith in a new theory be - as Feyerabend writes on page 154 - the only motive for upholding it until we have found the auxiliary sciences, the facts, the arguments that turn the faith into sound knowledge? What is ‘sound’ if there are no acknowledged methods for evaluation? Feyerabend's voting is precisely that method of social decision-making that promotes propaganda, appeal to emotions etc. Is
knowledge sound as soon as a majority of all voters, whatever their understanding of the theory, has decided in its favour?
If our objective is not to fight any method, but to save new theories which may hold some promise of improving our understanding and decision-making from unwarranted rejection or neglect, then we should declare that to be our objective, and keep an open mind as to the best means to achieve it. We might then - as I propose - establish methods and institutions whose task it is to integrate new theories into the scientific world and see to it that they get a fair chance to compete with the established theories. That seems a more promising road to success than methodological anarchy, as borne out by history. As is to be expected in anarchy, new theories not fitting in easily with existing research programmes are at the mercy of well-entrenched and highly undemocratic academic authorities and of the purse strings of profit-oriented and risk-avoiding business or ‘commissars of the people’.
Obviously, whatever road we choose, at least some individuals must have sufficient faith in the new theory to make the effort necessary to help it gain entrance into the working program of the corresponding institutions or at least be put through a vote. Method, such as my limited demarcation criterion, is not a substitute for such personal faith, but a means to make acceptance of a theory more dependent on its merit and less on the guile and power of its proponents as pitted against the defenders of its well-established rivals.
I agree with Feyerabend that our present institutions of science often discriminate against new and revolutionary theories. But it is precisely the unrealistic choice between Feyerabend's radical anarchism and Popper's conclusive but usually impracticable recipe which provides institutions with an excuse to perpetuate the status quo and to ignore the problem instead of looking for methods which could alleviate it.
C) NO RULES ARE ABSOLUTE. We should not, says Feyerabend (p. 154), extend Popper's solutions for methodological and epistemological problems to politics and even one's own life. The search for truth (I presume he aims at the concept of absolute truth) might create a monster, might harm man, turn him into a miserable, self-righteous mechanism without charm and humour. He says all this and much more in order to answer in the negative the question: ‘Is it desirable to live in strict accordance with the rules of a critical rationalism?’ Of course it is not desirable to live in strict accordance with any such rule! But it is just as undesirable to live without any rules. In fact we cannot even conceive of any man who lives without at least implicit rules: he just would not live at all, let alone live in a society worthy of that name.
As he is wont to, Feyerabend starts by pretending that the enemy to fight is the claim of absolute and universal validity of a method, in casu the rules of critical rationalism. Any democrat should reject any claim to universal validity. But then Feyerabend implies that, by refuting that claim, he has refuted the admissibility of method itself; again, that is a switch of conclusions which he does not justify and which cannot be justified on the basis of the arguments presented in his book.
In the practice of decision-making we have to make choices for or against a scientific theory. These choices will determine our ability to survive, and thus are of interest to all of us. Feyerabend fails to address the question: if we have rejected Popper's method, what then? And for a good reason. For doing so would lead to either:
- the conclusion of this book, to the (provisional) acceptance of democratically established methods for the evaluation of scientific theories, or
- exposing his work as an attempt to impose on us his own ideology.
D) THE INCOMMENSURABILITY OF SCIENTIFIC THEORIES. To give, in a few words, a definition which does justice to the concept of incommensurability is not easy. Feyerabend presented this concept on page 225 by confronting the reader with a great variety of examples. I will deal with it in the chapter of CS dealing with Collingwood (p. 407). In short, theories that are really different from each other involve different ‘cosmologies’, develop a different set of words and symbols, and change the meaning of many words which are retained. We have then no ready-made, common and stable basis for comparison. That statement is congruent with my view about information, especially its holistic character. Ergo, concludes Feyerabend, anarchy.
Wrong. The above condition leads to incommensurability only if we were to require that theories be comparable by themselves. If we take the functional view and compare theories with a purpose in mind, say their adequacy for establishing facts in social decision-making, then we can construct a language and devise methods which are adequate for the specific purpose of comparing theories. Ideally, we should do so before, not during, the process of developing a new theory and present it in a form in which it can be related to competitive theories. That is precisely what honest and democratic scientists like Einstein do. Feyerabend acknowledges this, but dismisses the argument on the ground that such an attempt will never succeed 100%. Probably true, but irrelevant. For once we have rejected both the necessity and the possibility of a conclusive and totally objective evaluation of theories, we have ipso facto been relieved of the need for absolute commensurability and will have abandoned the quest for any revealed, absolute, totally objective method and criterion for judging scientific theories. We will be satisfied with that level of commensurability which we can attain provided it gives us a better than even chance of choosing the theory which best meets our purpose, in which case the comparison is justified. There is nothing in the whole argumentation of Feyerabend that even suggests the impossibility of an ordinal evaluation of theories on certain specific characteristics of special interest to us, such as their explicative or predictive power. In most decision-making situations we are faced with only a few competing theories. An ordinal ranking according to a limited number of specific criteria will then be sufficient for making a rational and justifiable choice. Again, Feyerabend ducks the explanation of why, if 100% commensurability is impossible, we must forego the search for whatever commensurability can be achieved.
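The ordinal comparison mentioned above can be made concrete schematically; the symbols are mine and purely illustrative. Given two competing theories $T_1$ and $T_2$ and two criteria agreed upon for the purpose at hand, say explanatory scope $c_1$ and predictive accuracy $c_2$, it suffices to establish ordinal judgements such as

$$ T_1 \succ_{c_1} T_2 \qquad \text{and} \qquad T_2 \succ_{c_2} T_1 . $$

A justified choice then requires only a conventional, purpose-dependent priority between $c_1$ and $c_2$; no cardinal measure of distance to a ‘true’ theory is needed.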
The claim of the radical anarchist that, if we cannot devise perfect methods, we should refrain from developing any has curious consequences. Feyerabend professes great concern with freedom and democracy. He pleads for an educational system that gives priority to teaching the pupil to think for himself. I applaud. But if this is to be more than an empty phrase, the pupil must also have something to think about freely. If he must choose, the alternatives should be
presented in a form which is as commensurable and free from subjective biases as we can make them. New scientific theories should be presented - as far as that is possible - in a form (and, if necessary, with some kind of dictionary) that enables evaluation also by those who do not (yet) understand or adhere to their cosmology. In fact, that is precisely what Galileo attempts with his example of the people in the boat (as referred to by Feyerabend). In Part Five, chapter ‘The rhetoric of democratic argumentation’ (p. 223), we have dealt with the legitimacy of metaphors and its conditions.
Democracy provides a criterion for the legitimacy of arguments: we will accept only those arguments which respect the audience as being its own judge. Arguments that attempt to mislead the audience do not respect this criterion, they put the authority of the orator above that of the audience which thus loses control over its destiny. Such a loss of control is not due to any force beyond human control. It results from human action (misleading propaganda) which could to a large extent have been prevented by... agreeing on and enforcing adequate methods of argumentation.
The accent is on ‘adequate’. Not just any method will do. As rightly argued by anarchists, any method to which we adhere reduces our freedom. But as shown, the absence of any method also endangers our freedom. The choice is not between method or no method. The solution is to develop methods which preserve more freedom than they cost. We must see methods as tools which have nothing eternal or absolute, but must be continuously evaluated as to their effectiveness in relation to the purpose which led us to establish them in the first place.
Popper's Non-Solution of the Problem of Induction and Rationality. 3b.2)
Popper addressed the problem, first posed by Hume, in the following terms:
- Are we ever logically justified in drawing, from events we experienced, conclusions about events which we did not experience?
- Can we - from experience alone - develop knowledge that exceeds the knowledge we have of these experiences?
His answer to both questions is an emphatic ‘no’. Let us rephrase these questions in terms applicable to scientific knowledge, and start with the second question.
A) HUME'S PROBLEM OF INDUCTION. Can we deduce a scientific theory from any evidence by simply looking at this evidence? To answer yes would be to subscribe to the Baconian argument that our scientific theories are already contained in the reality we observe, and that thus any scientific discovery is nothing but a recognition of what has been there all along. Popper calls this the ‘bucket theory of the mind’ which he rejects as both empirically and logically false. The reader who remembers our view of information will have understood that I concur with Popper. Certain events which we have observed may suggest a theory, but they will do so only if we want to build one. The experience that the sun rises every morning will suggest a theory only because we expect that if it does, it is not by pure chance; we surmise that behind that occurrence there must be a regularity (conformity to a rule). We then create a conjecture about such a regularity.
We do not ‘observe’ or experience the first or second law of thermodynamics. We can observe the fact that if we throw a stone, it continues its trajectory; we can observe that two clouds of gas tend to average out their temperatures. But we do not observe the theory, we construct it because we want to find a cause, a reason, why it must be so. To build a theory we do not need repetition. We may construct a theory about a phenomenon the first time we become aware of it. Events at most trigger or catalyse our efforts to construct theories.
The facts just are. Whatever we devise to explain why they are the way they are is something which we add to the facts. Any time we end up with more information than we have registered as a direct consequence of an experience, that ‘more’ cannot logically have been contained in what we had experienced. So any theory, scientific or otherwise, contains something which we added over and above the experience we have registered about a fact. It is this something which leads us to deduce, from facts which we have experienced, statements, thoughts about ‘facts’ of which we have as yet no experience. The theory thus can never have been contained in the experience.
Applied to scientific theories, the first question can be rephrased as: ‘Once we have developed a theory, can we deduce from experience alone anything about the validity of that theory?’
Popper's well-known answer is: ‘Only in one case, and only negatively’. The one case is: if we have a universal theory that satisfies certain conditions, then it is logically possible to deduce from certain events that it must be false. Any theory satisfying the demarcation criterion has the
logical form that we can deduce from it a class of events that are prohibited by it, that would conflict with the theory. Popper calls the statement of such an event a ‘basic statement’.
If a basic statement belongs to the class prohibited by the theory, accepting it as true ipso facto implies a contradiction between the theory and the reality it pretends to represent. Stating that such a theory is false then is a totally objective evaluation, once the truth of the basic statement is not questioned. That evaluation involves no induction, yet we did learn from experience, namely that the theory is false, at least if we subscribe to Tarski's concept of truth as explained in the chapter ‘Truth and falsity’, p. 273.
What does it take for a theory to be thus decidable? It must be universal in the sense that certain (combinations of) facts are universally excluded by it, that such facts can never and nowhere exist in reality, in the universe. The following will sum up (in Tarski's terms) Popper's solution of the problem of induction.
- If we talk in a formal language, such as logic, in which we can refer both to our theory and to reality,
- If we are able to agree on facts, i.e. if our common language is adequate,
- If we have a universal theory from which we are able to deduce a class of singular statements (called hereafter basic statements) referring to facts whose existence is excluded by the theory,
- Then we can test the truth of the theory by looking for facts corresponding to these basic statements.
- Finding such a fact leads us to reject the theory as false.
- Not finding any does not mean that the theory is true; we can only conclude that we have (not yet) been able to falsify it.
If the first two conditions are met, the decision - based on such a test - to reject the theory as false is totally objective. For then the outcome is decided only by facts and is totally independent of any subjective motivation.
From Tarski's concept of truth we can directly conclude that it only is possible to:
- prove the truth of singular statements (there exists a...)
- prove the falsity of universal statements (all... are...)
Both are basic tenets of the logic of predicates.
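Stated schematically, again in notation added only for illustration: an existential statement $\exists x\,P(x)$ is proved true by exhibiting a single witness $a$ for which $P(a)$ holds, while a universal statement $\forall x\,P(x)$ is proved false by a single counterexample $b$ for which $\neg P(b)$ holds. Over an open domain the converse moves are unavailable: no finite list of instances proves $\forall x\,P(x)$, and no finite failure to find a witness proves $\neg\exists x\,P(x)$.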
But what do we really refute by accepting a basic statement as true? We have not proved the falsity of any particular element of the theory, nor even of the theory as a whole, but of just one of its attributes, namely its claim to universality. Given the limitations inherent in being human, we should expect any theory to prove false somewhere outside the space and time frame presently accessible to us. The presumption that a theory is not universally valid therefore is the natural assumption to make, whatever the result of any test. If I still use Popper's concepts and theories, it is because they are useful even if we do not share his belief that they permit conclusive and totally objective decisions about the falsity of theories.
In his article ‘The Corroboration Of Hypotheses’ (Schilpp, pp. 221-240) Hilary Putnam suggested that Popper's theory was not free from contamination by inductivist tendencies, thereby incurring the wrath of Popper and suffering the reproach of not having read or understood Popper's work. It seems to me that the same reproach can be made to Popper, as this was not the major point Putnam wanted to make. For his statement is not without merit.
The arguments put forward in Volume One, in the chapter about the testability of scientific hypotheses (p. 108), and in the present volume in the chapter called ‘The role of initial conditions’, (p. 293), lead us to conclude that:
a) Popper's negative solution of the problem of induction is logically correct, but does not solve Hume's problem. To deduce that a theory which has been falsified today will also be falsified tomorrow involves induction. Popper does not make that claim: with him, a theory that has been falsified today has been falsified, period. It has been exposed as not universally valid in space and time and this demonstration claims nothing about its validity in the future. After any number of tests we still remain totally ignorant about the events of which we have no knowledge. This pleonasm is actually the core of Hume's problem of induction, and there can be no solution to it within any purely analytical system, either positive or negative. The - correct - conclusion Popper draws from a negative test is that the theory cannot correspond to some real and universal entity, some kind of totally autonomous fact. Popper's conclusion that if a universal theory has been proved false, that is sufficient cause for totally rejecting it, presumes that such a theory could ‘objectively’ be true in the Tarski meaning, that it could correspond to a ‘fact’. That view is refuted further on, in the chapter dealing with Popper's world three objects. Note that a theory which is objectively true can never be improved. Should we ever find one, then the corresponding field of science would have reached its final stage. It seems to me that only a person totally devoid of any imagination and sense of history will expect any scientific theory ever to be definitive. The ‘natural’ assumption is that all empirical scientific theories are false and will sooner or later be falsified as our ability to observe expands.
b) As argued in volume one, chapter ‘The testability of scientific theories’, actual testing of a hypothesis almost always involves conventions beyond agreeing on basic facts and a formal language. The results of such tests will not by themselves be logically compelling and falsification cannot be objectively conclusive.
B) ONLY A FUNCTIONAL CONCEPT OF RATIONALITY CAN SOLVE HUME'S PROBLEM. Popper's view of truth and rationality and his consequent requirement of conclusive and objective testability derive mainly from the problem he tried to solve. That problem (abbreviated by Popper as ‘HL’) is Hume's scepticism about an objective rationality of induction, of finding a logical and conclusive argument for the rationality of deriving statements (conclusions) concerning instances of which we have no experience from other, repeated, instances of which we do have experience. To explain this scepticism, Popper (1979, p. 5) quotes Russell as follows:
#Russell says about Hume's treatment of induction: ‘Hume's philosophy... represents the bankruptcy of eighteenth-century reasonableness’ and, ‘It is therefore important to discover
whether there is an answer to Hume within a philosophy that is wholly or mainly empirical. If not, there is no intellectual difference between sanity and insanity. The lunatic who believes that he is a poached egg is to be condemned solely on the ground that he is in a minority...’
Russell then goes on to assert that if induction (or the principle of induction) is rejected, ‘every attempt to arrive at general scientific laws from particular observations is fallacious, and Hume's scepticism is inescapable for an empiricist.’
Thus Russell stresses the clash between Hume's answer to ‘HL’ and (a) rationality, (b) empiricism, and (c) scientific procedures.
I hope that my discussions in sections 4 and 10 to 12 will show that all these clashes disappear if my solution of the problem of induction is accepted. If so there is no clash between my theory of non-induction and either rationality, or empiricism, or the procedures of science.#
As argued in PART THREE about scientific theories, drawing conclusions from tests as to the future validity of a scientific theory according to Popper's principles usually contains a conventional element and it either involves induction, or falsifies a claim which we would never expect a scientific theory to meet anyway. It has also been shown that his demarcation criterion is not applicable to many fields of empirical science because of the very nature of the material involved (living systems). Are we then condemned to Hume's scepticism?
Indeed, we are, as long as we persist in according an autonomous existence to world three objects like truth, as long as we cling to the illusion that an intellectual difference between sanity and insanity corresponds ipso facto to an existential difference. To escape from this type of paradox, we must face the obvious, which is that all these concepts have been created by men with a purpose in mind, and that it is futile to discuss them without taking that purpose into account. No (purely analytical) discussion about rationality that relies totally on abstract concepts will lead to (synthetical) factual conclusions. Some input from the ‘real’ world of experience is always necessary.
It may be perfectly rational to profess the belief of being a poached egg, for instance as a means to avoid the draft. If a man has just killed his wife and children, the subjective belief of being a poached egg may be the best means available to enable him to live with that reality. This example illustrates a fact which seems to escape many philosophers, namely that there are two faces to a great many concepts, depending on whether we apply them as a society or as an individual, a distinction which is forbidden by the nefarious belief in the autonomous existence of such concepts.
As shown in the chapter about induction (p. 99), induction is rational in decision-making, also about scientific theories, any time we can justify the expectation that we will be better off when applying induction in that decision-making than we would be by forgoing it.
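The criterion just stated can be put schematically; the expected-benefit notation is my own gloss, not a formula from the text:

$$ \text{apply induction in a given decision} \iff \mathbb{E}[\text{benefit} \mid \text{applying it}] \;>\; \mathbb{E}[\text{benefit} \mid \text{forgoing it}], $$

where the benefit is judged against the objectives of the decision-makers. The comparison is relative to a purpose, not to an absolute standard of truth.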
Popper on Scientific Theories. 3b.3)
1) WHY POPPER'S DEMARCATION CRITERION IS NOT ADEQUATE. Popper wanted to determine a boundary between empirical scientific theories and all others. To that end he developed his famous demarcation criterion: to merit the qualification of ‘empirical and scientific’, a theory must be conclusively decidable by confrontation with facts.
I have argued in the chapter ‘The testability of scientific theories’ (p. 108) that two fields of investigation defy such decidability by the very nature of the reality to be investigated: living systems and all systems which involve stochastic theories whose probabilities diverge by a relevant margin from zero or from one. Both kinds of systems are testable and decidable, but only if we agree on certain assumptions. In the case of living systems, the assumption concerns the extent to which the initial conditions of a test have been met. In the case of stochastic systems, we have to agree on an acceptable risk of taking the wrong decision.
Many theories defy the actual construction of tests whose outcome cannot be questioned. For instance it may be impossible in practice to ensure that the tests meet the ceteris paribus clause which lies at the basis of the conclusivity of the verdict of any test. With complex living systems the set of possible factors may be so large that we can never be sure to have identified all relevant factors, let alone that they have remained paribus. We will often have to assume, on the basis of common sense, logic or intuition, that a certain factor we cannot control in an experiment will have only a negligible effect on the outcome. The decision to accept the outcome of the experiment as conclusive then is purely conventional and based on a shared speculation about the effect of that factor or on a shared acceptance of a certain risk of taking a wrong decision. Such acceptance is not objective in Popper's sense. Yet it can be absolutely rational to accept the test result even if it is not totally conclusive, for the following reason.
Popper (1980, p.86) considers a theory to be empirical if it divides the class of all possible basic statements unambiguously into the following non-empty subclasses. First, the class of all basic statements with which it is inconsistent and, secondly, the class of those statements which it does not contradict. I hold that if a theory permits us to divide reality into a class of events more likely to happen in a given situation and another of events less likely to happen, then it can tell us something about our world of experience and can help us in our decision-making. If we can show that the failure to meet the condition of conclusive and practical falsification follows from the nature of the phenomenon investigated and that it is not due to reluctance of the proponent to submit his theory to the judgement of reality, then the holders of an instrumental view of knowledge will admit such a theory as a potential candidate for empirical scientific knowledge, because such a theory may be the best and most objective available. They will brand the a priori rejection à la Popper as undemocratic dogmatism.
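For reference, the condition Popper states at the opening of the preceding paragraph can be put schematically as follows (the notation is mine, offered only as an illustration, not Popper's or the author's):

```latex
% B = the class of all possible basic statements; T = a theory.
% F(T): the basic statements T is inconsistent with (its potential falsifiers);
% P(T): the basic statements T permits.
\[
F(T) = \{\, b \in B \mid T \text{ is inconsistent with } b \,\}, \qquad
P(T) = B \setminus F(T)
\]
\[
T \text{ is empirical in Popper's sense} \iff F(T) \neq \emptyset \ \text{ and } \ P(T) \neq \emptyset .
\]
```

On the reading defended above, the limited criterion weakens this only in that the two classes are separated by likelihood rather than by strict prohibition.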
The converse objection has also been made, namely that a demarcation criterion also rests on conventions such as logic and on the acceptance as true of certain singular statements (such as the reading of a thermometer); its verdict therefore also cannot be objective. These objections are unfounded, for these conventions and statements of fact only stipulate the will and capability to communicate, plus the good faith of scientists. These are minimum requirements for the
success of any social venture. Accepting the conventions of language, formal and common, and agreeing to the truth of certain evident facts is quite different in nature from agreeing that in a certain experiment the ceteris paribus clause has been respected while in fact we have no assurance that it was. If we question a theory holding that political tensions have no influence on the propensity to consume, that is quite different from rejecting the reading of a well tested thermometer on the grounds that no one can prove that it has been measuring correctly during the experiment.
2) POPPER ON STOCHASTIC THEORIES. Popper might challenge the statement that he does not admit stochastic theories, and would refer me to his chapter on probability (1980), especially section 69. Yet the condition under which he admits a stochastic theory confirms my view that, in terms of Imre Lakatos, he is a ‘naive’ falsificationist and therefore would reject - as an unwarranted dilution of his own criterion - the limited demarcation criterion which I propose.
Popper did show that his demarcation criterion can accommodate certain stochastic theories by introducing a mathematically correct concept which I will try to state without the use of mathematics. Suppose we throw a die and are interested in the number of times a six turns up in a series of throws, and express it as a percentage of the number of throws, i.e. we record the average percentage of sixes in the series of throws.
If the die is true, we can expect this percentage to fluctuate around one-sixth of 100 (= 16.666...%). For any given length of the series of throws, we can calculate the probability that the actual percentage found will lie within a certain range around this 16.666..., say between 16 and 17.3. The larger the sample, the smaller the probability that we make an error if - finding that the result falls outside these limits - we assume that the die is not true, that we have falsified the theory that it is true. Yet however large the sample, and thus however small the probability that the average will fall outside - say - 16 and 17.3, that occurrence can never be excluded, and thus there will always remain a risk - however small - of making a wrong judgement when rejecting the die as not true.
Suppose that we take a sample of a trillion throws. If the die is true, the chance that we find an average falling outside the above limits is negligible, almost zero. If the distribution is - as in this case - of the usual Bernoulli, Normal, Poisson etc. type, i.e. is tapering off at least at one end, increasing the size of the sample above this extreme size will not decrease this extremely small chance by any appreciable degree. It will never be zero, but can hardly get much less than the almost zero which we have already achieved in the sample of one trillion.
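To make the diminishing returns concrete, here is a minimal sketch (my own illustration, not the author's or Popper's) using the normal approximation to the binomial. It estimates, for the 16%-17.3% band used above, the probability that a true die nevertheless produces an average outside the band, i.e. the residual risk of a mistaken ‘falsification’.

```python
# Residual risk of wrongly rejecting a true die, for the 16%-17.3% band above.
# Normal approximation to the binomial; standard library only.
import math

def prob_outside(n, p=1/6, lo=0.16, hi=0.173):
    """Approximate P(observed proportion of sixes < lo or > hi) in n throws of a true die."""
    sd = math.sqrt(p * (1 - p) / n)                        # standard deviation of the proportion
    phi = lambda z: 0.5 * math.erfc(-z / math.sqrt(2))     # standard normal CDF
    return phi((lo - p) / sd) + (1 - phi((hi - p) / sd))

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9} throws: risk of a chance 'falsification' ~ {prob_outside(n):.2e}")
```

The risk shrinks rapidly with the sample size but, as stated in the text, it never reaches zero.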
Even though we can never exclude the occurrence of even the most unlikely but not impossible event, we can with confidence state that if it did occur, it cannot be reproduced at will. So Popper added a condition for a test to be accepted as logically falsifying a theory: it must be reproducible at will. Obviously a falsification resulting from such an improbable event as the one mentioned above does not meet that condition. Popper thus admits stochastic theories as scientific as long as we can specify test conditions such that a falsification by chance - while not
impossible by itself - is improbable enough to be considered ‘irreproducible at will’. Only repeated confirmation by tests of a falsifying basic statement would then in Popper's world logically force any reasonable man to consider the theory refuted. He can thus accommodate in his realm of empirical scientific theories such stochastic theories as quantum mechanics. But he will reject the kind of stochastic theories we inevitably have to work with in social theory, where any reasonable and useful confidence limits we establish for deciding on acceptance or rejection of a theory may be of the order of one in ten, or at best one in a hundred. Such a decision is not conclusive. The theory is only inter-subjectively (and in that sense objectively) testable by agreeing on a subjective convention such as accepting certain confidence limits.
Either we must - as Popper implies - deny such theories the qualification of scientific and empirical, or we must use in our evaluation of scientific theories a demarcation criterion which accepts stochastic theories as scientific by agreeing to abide by an a priori but conventional level of probability of taking the wrong decision. The last road is also the democratic one if we evaluate a theory which we need for social decision-making, for it is more objective than any other we may choose.
Taking it means that we sacrifice totally objective and conclusive decidability. Did we lose much? As argued in various places, the notion of an absolute truth is far from compelling, certainly if applied to scientific theories. Also, most verdicts based on test results imply a (subjective) decision as to the status of the ceteris paribus clause. By sacrificing absolute objectivity, we have sacrificed only a chimera.
3) THE ROLE OF INITIAL CONDITIONS IN EVALUATING TESTS. Actual testing of a theory always involves conventions beyond agreement on the factual results of the test. We must, for instance, agree on some kinds of auxiliary statements such as initial conditions which specify the circumstances under which the test is made. Testing the second law of thermodynamics would require at least agreeing on:
- | the existence of two clouds of gas in non-deformable receptacles |
- | the total isolation of the clouds from their environment |
- | their connection by a removable separation |
- | the means of measuring their pressure and temperature |
- | the influence which that measuring could have on them |
- | their state at the start of the test |
- | the exclusion of any other influence on the test |
That point was made by Hilary Putnam in his article ‘The “Corroboration” of Theories’ (Schilpp, pp. 221-240). Incidentally, that article also points out the inevitable instrumental character of scientific hypotheses. We can never deduce, says Putnam, any fact (positive or negative) from a theory, and thus test it, without specifying a set of initial conditions which must be met. If a theory fails to meet a test, we can then never be sure whether we must impute that failure to the theory or to the fact that in this test the initial conditions were not met. He also points out that such initial conditions and whatever other auxiliary statements we have used to define the test situation are ‘given’ neither by nature nor by the theory, but follow from human decisions
which ipso facto may be wrong. He thus rejects Popper's contention that failing a test proves a theory to be false, and faults Popper for not having adequately addressed the problem of initial conditions. (The ceteris paribus clause is such an initial condition.)
Popper rejected that criticism by replying that he did consider initial conditions. Without such initial conditions we cannot deduce any basic facts from a theory, he states in ‘The Logic of Scientific Discovery’. Any prediction which is deduced from a theory presupposes the definition of some conditions. To deduce - from Newton's theory of gravitation - the position of a planet requires at least the initial condition defining the position of the reference point (sun or earth) from which the position of the planet is measured.
But then, Popper says, to falsify a theory we do not need to predict from it the occurrence of a basic statement. On the contrary, we need only to deduce from it that certain (classes of) basic statements cannot occur at all, that they are prohibited by the theory. We can then do without the assumption of singular existential statements of facts. Correct. If we assert the theory that all swans are white, the simple occurrence of a black swan will falsify that theory. We need not use any specific basic statements to deduce that a swan of any other colour than white will falsify the theory.
Yet the simple statement of the class of falsifying basic statements does not constitute a test; only the actual search for events corresponding to such a falsifying statement does. In the case of swans, it means looking everywhere for... what? In the first instance, we can say we will look for birds of the size and form of a swan. Suppose we find one which is not white. How do we decide that it is a swan and not some other species of bird? If we admit any other difference but colour, we have simply transformed the test of the theory into a discussion about the definition of a swan; the test implies that all, and really all, features of the off-colour swan be exactly equal to those of a white swan except its colour: ceteris paribus as to features. Secondly, it must be established that no other factor except its genetic endowment is responsible for the colour of that swan, again the ceteris paribus clause. How can we be sure of that? We will abstract from the joker who catches a swan and paints it black. But it may very well be that some agent in the environment, for instance in its food, will in Burenia (not on your atlas) colour pink the feathers of a swan which would be white in Holland. (Feeding on shrimp will turn pink the otherwise off-white flesh of a trout). Or bacteria could colour them, if not black, at least grey. I tried hard, but I could not think of any concrete test situation of any existing theory meriting the name of scientific and empirical which does not imply the ceteris paribus clause: it is a real auxiliary statement, a real initial condition, for any test.
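The role of the ceteris paribus clause in this example can be put schematically (my own formalisation, offered only as an illustration):

```latex
% S(x): x is a swan;  W(x): x is white;
% C(x): the ceteris paribus clause holds for x (no paint, no dietary pigment,
%       no bacteria, ... - an open-ended, synthetic condition).
\[
T:\ \forall x\,(S(x) \rightarrow W(x)),
\qquad
\text{potential falsifier:}\ \exists x\,(S(x) \wedge \neg W(x)).
\]
\[
\text{what an actual test verdict asserts:}\ \exists x\,(S(x) \wedge \neg W(x) \wedge C(x)),
\]
% so a failed test leaves open whether T is false or whether C was not satisfied.
```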
A statement of the kind ‘all swans are white’ is not really a scientific theory. As Popper writes, a scientific theory must be a ‘bold conjecture’, it must tell us something which would be highly improbable if we just apply common sense to our everyday experience. It must explain, it must contain more, much more, than just some universal statement of pure facts such as ‘the sun will rise tomorrow’. It must tell us what Popper calls ‘highly abstract facts’. Someone who rejects the notion of the autonomous existence of world three objects would consider the last two words a contradictio in terminis. But I agree that a scientific theory will usually contain highly abstract elements.
It is - to my knowledge - a common tenet of philosophy that one cannot deduce any proposition - positive or negative - about real facts from a purely abstract deductive (analytical) system. Any such proposition must be synthetic, must contain a statement about facts; specifically, it will include the class of negative factual statements expressing the ceteris paribus clause, which is itself synthetic.
Popper was well aware of this, as shown for instance in Schilpp, p. 1005, where he says ‘... if the apples of Newton's tree were to rise up from the ground (without there being a whirlwind about)...’. This shows that he did not consider the ceteris paribus clause as an auxiliary assumption, but as implicit in any theory. This is confirmed by his statement: ‘If we do make auxiliary assumptions, then of course we can never be certain whether one of them, rather than the theory under test, is responsible for any refutation’ (Schilpp, p. 998). But implying the ceteris paribus clause in the theory does not alter its nature as an auxiliary assumption in a test: it is a real initial condition which must be met for any test to be conclusive. The very simple and logical conclusion from all this is that any failure of a theory to meet a test always leaves us with the dilemma of whether:
- | the theory has been falsified |
- | or the initial conditions of the test have not been met. |
Scientists are quite familiar with this situation and one wonders why Popper objects to this conclusion until one realises that - by accepting it - no theory involving abstract constructions can ever be conclusively and objectively falsified by confrontation with accepted facts. He pays lip service to this notion by stating that a theory can only be falsified by the acceptance of a falsifying theory, but then sweeps it under the rug by equating the statement of the falsifying theory with a singular statement of fact, which it is not if it contains the ceteris paribus clause. Saying that ‘the work of the experimenter consists in eliminating all possible sources of error’ (1980, p. 107) and suggesting that agreement on such success is tantamount to an agreement on a simple matter of fact, such as the reading of a thermometer, simply defines the problem away.
If we accept that falsification of a theory is conventional, we find ourselves in a situation very similar to the one encountered when dealing with the possibility of repairing hypotheses, and we can deal with it in a similar way, namely by a methodological rule. The rule we have established to determine which changes of a theory are admissible can be extended to the initial conditions. We can agree that ‘we will not accept ad hoc imputation of the failure of a test to the initial conditions.’ In fact this is more or less what Popper achieved by incorporating implicitly the ceteris paribus clause in the theory instead of considering it an initial condition.
CONCLUSION: Methodological rules can solve the problem of decidability, but doing so irrevocably divests any decision about an empirical scientific theory of any claim to absolute objectivity, even if we agree on the truth of basic statements.
J&A 2b.3.3C)
Against the Autonomous Existence of Popper's World 3 Objects.
1) INTRODUCTION. In the chapters about knowledge it has been explained why knowledge in general and scientific knowledge in particular can be understood only if we acknowledge its function as a tool which man uses to manipulate and communicate reality. That frankly instrumentalistic view is rejected by Popper and that issue has to be settled.
The specific instrumentalistic view of knowledge which Popper rejects sees scientific knowledge exclusively as a means of prediction. Popper shows that while knowledge might indeed fulfil that function, there must be more to it. Specifically, he asserts that the basic aim of the scientist is to find explanations, models, laws etc. whose common property is that they represent in some way our real world of experience. That we can deduce from them predictions is a bonus which is gratefully accepted by the users of scientific theories. But for the scientist these predictions are mainly tests of the validity of his explanation (see f.i. The Logic of Scientific Discovery=LSD, 1934, p. 61, note 1*). I agree. In this first work he only investigates the logical foundations of scientific discovery and evaluation, and it seems to me that this work has withstood the test of time and the attacks of fellow epistemologists.
In a later work (Conjectures and Refutations=CF, 1963, p. 223 etc.) he justifies his use of the concept of truth in the formal, logical analysis by referring to Tarski's exposition explained in J&A 3a.1.2. Popper notes that Tarski has rehabilitated the correspondence theory of absolute or objective truth and that this theory is applicable not only to a formalized language, but also to any consistent, even natural, language. ‘He (Tarski) vindicated the free use of the intuitive idea of truth as correspondence to fact’. While I agree with the extension, that still only justifies the absoluteness and objectivity of such a statement about truth within language, as explained in the above addendum. It cannot and does not go beyond saying that if we accept the object and meta languages, we must accept the truth verdict as absolute and objective in the sense of ‘not dictated by any subjective considerations except those which led us to accept the languages’. In that sense the acceptance of the verdict remains conventional and therefore accommodates a functional view of truth.
That leaves ajar the door to the relativism which is so repulsive to many, including Popper, who would like to have their views on morals or truth backed by the power of total objectivity. In this case Popper wanted to extend the absoluteness and objectivity of Tarski's truth to cover not only the language, but also the reality about which we talk. To that effect he introduced (Objective Knowledge, 1972) the notion of a thought content which is independent of the thinker, and the assertion of an autonomous existence of world three objects. Alas, there is no shortcut: we must defend our views on their own merits and by our own efforts and strength; relying on an autonomous existence of thought contents is a chimera.
If we do accept the autonomous existence of scientific theories, then anyone with concern for objectivity must accept Popper's theory in preference to any instrumentalistic or conventional concept of knowledge and truth; such concepts will always contain subjective elements. Popper was attacked precisely on that assumption; he expounded his view on this subject in 1972 in ‘Objective Knowledge’ (hereafter abbreviated as OK), the work - published almost forty years after his Logic of Scientific Discovery (LSD) - in which he introduced the world 3 objects to which we will revert further on.
His idealistic view implicit in the autonomous existence of world three objects has been countered before, notably by Feyerabend. But this has always been in a context different from Popper's. I wish to preserve his rational philosophy at least for certain aspects of the scientific activity. Indeed, the preservation of such rationality seems to me a prerequisite for scientific activity to be fruitful. My task is thus to refute the autonomous existence of world three objects within the context and rationality of Popper's work, something which to my knowledge has not been attempted before; if it has been, it got little publicity. I hope to convince my readers that my moderate instrumentalism, my functionalism, coupled to a (subjective) choice for democracy provides a more realistic and plausible concept and justification of (striving for) objectivity than Popper's autonomous existence of world three objects.
To avoid any misunderstanding: I do not intend to prove that there can be no abstract concept which is independent of any human being. Such proof is logically impossible. All I hope to achieve is to show that Popper's arguments in favour of such autonomous existence are far from conclusive, and that another - less idealistic but more convincing - alternative is both available and sufficient to explain the phenomena which Popper presented as substantiation for this autonomous existence.
POPPER'S THEORY OF WORLD THREE OBJECTS. Popper distinguishes between knowledge in the sense of knowing, which is a subjective state in the mind of an individual, and the content of that knowledge, the thing that is known, which Popper considers objective, i.e. to have an existence independent from the knowing individual. This (content of) knowledge thus is an object that can be adequately studied and explained without any reference to any specific human individual. Such ‘objective’ knowledge may consist of problems, theories, arguments.
Popper distinguished three worlds:
-world one: all physical objects and physical states. |
-world two: man's states of consciousness, mental states, behavioural dispositions to act etc. |
-world three: the objective content of thought, especially scientific and poetic thoughts and works of art. |
I have no argument with this classification, nor with Popper's statement that the content of a thought becomes an object as soon as any individual has become conscious of it. On the contrary, it seems to me a valuable concept for the study of knowledge. The main disagreement concerns the status of world three objects. Not only does Popper claim for them an objective existence, but he also states that this existence can be independent of human individuals. Once a thought has been committed to any kind of memory its content has, according to Popper, achieved the status of an independent object that subsists as long as the medium in which it has been registered subsists (the memory of an individual, but also paper plus ink, grooves on a record etc.). This independent existence of world three objects is illustrated by Popper in ‘Objective Knowledge’ by the following examples:
A) | (p. 113/114) ‘I consider two thought experiments:
Experiment (1). All our machines and tools are destroyed, and all our subjective learning, including our subjective knowledge of machines and tools, and how to use them. But libraries and our capacity to learn from them survive. Clearly, after much suffering, our world may get going again.
Experiment (2). As before, machines and tools are destroyed, and our subjective learning, including our subjective knowledge of machines and tools, and how to use them. But this time all libraries are destroyed also, so that our capacity to learn from books becomes useless. ... In the second case there will be no re-emergence of our civilization for many millennia.’ |
B) | (p 115/116) ‘One of the main reasons for the mistaken subjective approach to knowledge is the feeling that a book is nothing without a reader: only if it is understood does it really become a book; otherwise it is just paper with black spots on it. This view is mistaken in many ways. A wasps' nest is a wasps' nest even if it is deserted; even though it is never again used by wasps as a nest. A bird's nest is a bird's nest even if it was never lived in. Similarly a book remains a book - a certain type of product - even if it is never read (as may easily happen nowadays).
Moreover, a book, or even a library, need not even have been written by anybody: a series of books of logarithms, for example, may be produced and printed by a computer. It may be the best series of books of logarithms - it may contain logarithms up to say fifty decimal places. It may be sent out to other libraries, but it may be found too cumbersome for use; at any rate, years may elapse before anybody uses it; and many figures in it (which represent mathematical theorems) may never be looked at as long as men live on earth. Yet each of these figures contains what I call ‘objective knowledge’; and the question of whether or not I am entitled to call it by this name is of no interest. The example of these books of logarithms may seem far-fetched. But it is not. I should say that almost every book is like this: it contains objective knowledge, true or false, useful or useless; and whether anybody ever reads it and really grasps its content is almost accidental. A man who reads a book with understanding is a rare creature. But even if he were more common, there would always be plenty of misunderstandings and misinterpretations; and it is not the actual and somewhat accidental avoidance of such misunderstandings which turn the black spots on white paper into a book, or an instance of knowledge in the objective sense. Rather, it is something more abstract. It is the possibility or potentiality of being understood or interpreted, or misunderstood or misinterpreted, which makes a thing a book. And this potentiality or disposition may exist without ever being actualized or realized.’ |
Compelling as these examples seem to be, books versus wasps' nests provide good material for exposing the catch. The function of the wasps' nest follows directly from its physical properties. Nothing needs to be added to the structure of the nest except wasps who use it. That is not so with information contained in a book. Popper points out that it is necessary that the reader understands the language in which the book is written if the information contained in the book is to emerge. That does not invalidate his point: he only says that there is some information in it and that, once committed to paper, this information is independent from either writer or reader.
It is this independence which I contest. Consider the processes of committing a thought to paper and of reading about such a thought. The thought originates in the brain of an individual in a process which begins before he has chosen the words which he will use to communicate it. At that stage an outside observer could observe only a series of electro-chemical states in the brain of the thinker, even though, to the individual who wants to put it into writing, that thought now is an object. The electro-chemical states just prior to the translation into words are neither spurious nor arbitrary. They are intended to represent something. This something is what Popper calls the content of the thought. At this stage, the representation is totally dependent on, and exists only in, the mind of the thinking individual and will disappear with it. The object which he intends to represent may or may not exist independently of the individual.
To discuss whether a world three object has an autonomous existence or not requires that it be communicated. In the case of a written text, it must be translated into words and written down. Now we all know that verbal communication is never perfect. Language consists of a finite, discrete set of symbols. The world we want to talk about is infinite and often continuous. In addition, nobody is a perfect writer or reader, for two reasons. First, no human is perfect. Secondly, the meaning we attach to words is not something absolute which has been revealed to us. The meaning we associate with a word is developed through our experience and through our contact with other people. All words that are intended to describe reality are ambiguous. Their thought content, the representation they generate in our brains, is the result of the particular conformation of our brain cells coupled to our experience, most of it formed during the few years in which we learn to speak. To the extent that, on reading a word, the representations we make overlap, that overlap is the result of our common experience and of the constitution of our brains. And the main factors in achieving that overlap are the competence of those who teach us our language and our own ability and willingness to learn from them. An abstract concept, a purely formal language, is precise, is unambiguous, because it does not correspond to any reality until we add a reference to objects represented by the variables of that language, at which moment it loses its unambiguity. The precision and unambiguity of abstract concepts and formal languages are totally conventional and have meaning, content, only for those who are capable of understanding these conventions and are prepared to accept them.
Following Frege (see Vol. Two, ‘Sense and Referent’), we can distinguish between three potential thought contents in any message:
- | The subjective thought content of the individual who wrote the message, i.e. the representation he wanted to communicate. |
- | The Popperian, ‘objective’ content of the message, related to Frege's ‘sense’ |
- | The subjective thought content of the reader, i.e. what he thinks the writer wanted to communicate, the representation which he forms as a consequence of reading the message. |
Ideally, all three contents should be equal, but unless a message contains exclusively expressions of a formal language, that will seldom happen. This imperfection raises the question: if the representation which the reader makes differs from the one which the writer had in mind,
which one of these two - if any - corresponds to Popper's objective content?
As explained in the chapter about Frege, the answer is that such an objective content does not exist. Given the importance of that assertion, it will be substantiated again in the context of Popper's work.
Obviously, if a word is intended to point to an existing object, that referent is objective and independent from both writer and reader. An object is not however a thought content. A thought content about an object can never be more than a representation we make of that object. If we persist in using thought content to refer to whatever the representation is supposed to represent, then it is just another word for referent. Per definition all abstract concepts, and thus most world three objects, do not refer to any particular object and thus can have no objective content.
The only interpretation of any content other than the two individual ones seems to be the overlap between the individual contents. But that overlap is no more ‘objective’ than the individual ones. It is however relevant for social decision-making because it refers to what I call ‘social content’, which is what culture is all about. Unless interpreted as social content, the second content of the list above - the Popperian, ‘objective’ content - is a fiction, an illusion.
For example, take the sentence: ‘Before him lay a small forest, and he hesitated between crossing or skirting it.’ Let us look at the word ‘forest’. The writer thought about a hazel grove, the reader imagines a pine forest. What then is the objective content of the message?
That depends on the representation we attach to the word ‘forest’. Webster says: A forest is a large tract of land covered with trees and underbrush, a woodland. Such a definition is independent of the reader only if all readers make the same representation when using it. But what is large, when does a patch become a large tract? Is hazel a tree or a shrub? What is the density of trees required to distinguish a forest from a number of isolated trees, from a savanna? Obviously, the answer is a matter of convention. The perceived objectivity of any content of a message rests on conventions and requires acceptance by individuals of such conventions. Conventions need not be explicit. They may evolve in the use society makes of a certain word. In fact, that is how we learn the meaning of most words like forest: by experiencing how other people use it.
What Popper calls the objective content of a book corresponds to what I would call the social content. This illustrates the appropriateness of distinguishing between individual and social knowledge. Social content is almost independent of any single individual: it is dependent on a great many of them, on society and its history. The proof is that a word often changes its meaning over time, in which case the Popperian, intersubjective, content has also changed, which is incompatible with an objective nature. For example, new developments in biology may induce us to change the classification of hazel from shrub to tree or vice versa, ipso facto changing the social content of the above message.
The view of information and its role in life provides a starting point for a definition of this social content; unfortunately, an adequate treatment of that subject transcends both the scope of this book and the competence of its writer. Such a definition would be something like ‘that part of the content of a word which is common to the overwhelming majority of the members of that society’. The two factors - overlap and majority - are inversely correlated. The overlap and majority required are determined by the level at which we consider our education to have been successful. Except for formal languages, the social content is usually fuzzy at the edges; it contains a stochastic element. In theory the social content might be defined - in the spirit of Frege's ‘Begriffsschrift’ - as: ‘forest’ means a group of at least ‘X’ trees of any kind with an average distance of not more than ‘Y’ metres from each other. It is purged of all connotations which an individual writer or reader may attach to the message in excess of what - by the above convention accepted in society - is to be included in the message. This social content is not however autonomous. It is conventional and exists only in relation to a specific society whose individuals have given and accepted a common value for X and Y. Imposing such a precise definition of all terms is totally impracticable and also unnecessary, as for all practical purposes the time-honoured learning process has proved totally adequate for decision-making... provided we really want to eliminate misunderstandings by making the necessary effort whenever we disagree about a subject. Most misunderstandings persist because people are unwilling to make the necessary effort or to run the risk of having to change their position.
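As a toy illustration of the conventional character of such a definition (my own sketch; the parameters stand for the ‘X’ and ‘Y’ of the text and are of course arbitrary), one could write the predicate as follows; two societies that fix the parameters differently will classify the same patch of trees differently.

```python
# 'Forest' as a predicate whose parameters are fixed by social convention,
# not by the word itself. Toy sketch only.
from itertools import combinations
import math

def is_forest(tree_positions, min_trees=50, max_avg_distance=10.0):
    """tree_positions: (x, y) coordinates in metres.
    min_trees and max_avg_distance play the role of the conventional 'X' and 'Y'."""
    if len(tree_positions) < min_trees:
        return False
    pairs = list(combinations(tree_positions, 2))
    avg = sum(math.dist(a, b) for a, b in pairs) / len(pairs)
    return avg <= max_avg_distance

patch = [(i * 2.0, j * 2.0) for i in range(8) for j in range(8)]  # 64 trees on a 2 m grid
print(is_forest(patch))                    # True under one convention
print(is_forest(patch, min_trees=100))     # False under a stricter convention
```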
The notion of ‘factual objectivity’ introduced in ‘Can Knowledge be Objective’ (3a.2, page 86) explains why any objectivity of knowledge resides not in the thought, but in the object about which we think, in the referent of a representation, namely whenever that referent, that object, has an existence which does not depend on the individual holding the thought or making the representation.
If in our example the text reads: ‘the forest lying between Sherham and Plimpington’, then such a forest is assumed to have an existence independent from any reader or society. But is this forest a thought content? No, it is a specific forest. The only thought contents of the message are still the representations which writer and reader have formed about that forest. Its semblance of objectivity resides in the fact that there is now a totally objective referee to settle any difference of opinion - given the will to do so - about the content of the message: that referee is the forest itself. By including in the message an ever more detailed description of the forest and by continuously checking that description against the forest itself, we can reduce the variation of individual representations, thus increasing the overlap, the precision and unambiguity of the social content of the message, up to a point where we are confident that any disagreement about the decision involving the forest is not due to a misunderstanding and will not lead to a different decision.
Concerning the books of Popper's example, the notion that what is encoded in the books represents something implies that the readers can attach a meaning to it. But man can attach meaning to words only if they can be integrated into his learning experience. One of the major problems in getting new ideas accepted is precisely that they do not fit into the subjective, ‘holistic’ representation-of-reality of their contemporaries. Primitive men stumbling on Popper's saved libraries would see them as some kind of magic. Modern books about science require a whole culture, a ‘modern’ experience, to be understood. If we assume sufficient knowledge in the minds of the readers to understand these books, these readers could rewrite them if the books were destroyed. So if all educated individuals survived, the reconstruction of our modern world would be very rapid, precisely because the subjective knowledge has been saved. Without that individual knowledge, it would require a whole process of civilisation to regain our present level of knowledge.
In other writings Popper wields other arguments for the autonomous existence of world three objects. The gist of these arguments is that a problem or solution pertaining to world three exists - in world three - before being discovered. I will quote his example (Schilpp, p. 1050):
‘When I speak, probably not clearly enough, about the autonomy of world 3, I am trying to convey the idea that world 3 essentially transcends that part of world 1 in which it is, as it were, materialized, the stored-up part of world 3, ‘world 3.1’. Libraries belong to it, and probably certain memory-carrying parts of the human brain. I then assert the essential and fundamental equation: world 3 > world 3.1, that is, world 3 transcends essentially its own encoded section.
There are lots of examples, but I will take a simple one. There can be no more than a finite number of numbers in world 3.1. Neither a library nor a human brain incorporates an infinite series of numbers. But world three possesses the lot, because every number has a successor. This theory must have belonged to world 3 almost from the beginning. In the remote past however it was nobody's world 3.1 (or 3.2 - that is, the part of the world 3 which has been grasped or understood by some people); but one day the procedure of adding to an integer was invented, and then the theory became a theorem, and so achieved world 3.1 status. In a similar way, world 3.1 may not at a certain time contain the notion of a prime number. But prime numbers (and of course also this notion) may be discovered as implicit in the series of numbers, and Euclid's theorem - ‘There is no greatest prime number’ - may be proved. Once discovered, Euclid's new theorem may start to belong to world 3.1. But before its discovery, it existed in world 3. Not only that; it was open to anybody to discover it, but nobody (or so it seems), however much he disbelieved it, could add its negation to world 3 (that is to world 3.1), except as a falsehood.
But I wish to go further. There are world 3 objects which possess no world 3.1 materialisation at all. They are yet to be discovered problems, and the theorems which are already implied by the materialized world 3.1, but which have never been thought of. (Had they been thought of, they would have belonged to the brain part of world 3.1).’
Just as before, I will use Popper's own example to make my point. He states that world 3 possesses an infinite series of natural numbers. Not so. World 3 (and 3.1) possesses the finite series of all numbers ever thought of, plus the abstract concept that every number has a successor. That much we know and agree about. But we have no justification to ascribe any existence to a number never thought about until it has been thought about. Numbers themselves have no autonomous existence. They are part of the arsenal of abstract concepts we have created to deal with reality. The theory that every number has a successor can, more correctly, be formulated by saying that our logic holds that, given the algorithm used, there is no limit to the series of numbers. And logic is conventional. That there is no limit to our ability to create numbers is a theory about our capacity to generate numbers.
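The point that ‘infinity’ here describes a rule for generating numbers, rather than a completed collection existing somewhere, can be illustrated with a trivial sketch (mine, purely illustrative):

```python
# The conventional successor rule, and the fact that only finitely many
# numbers are ever actually produced (the 'world 3.1' part).
def successor(n: int) -> int:
    """The rule: every number we have formed yields one more."""
    return n + 1

def numbers_actually_generated(limit_of_effort: int):
    """Some resource bound always cuts the generation off; the rule itself
    only guarantees that we could always have gone one step further."""
    n = 0
    while n < limit_of_effort:
        yield n
        n = successor(n)

print(list(numbers_actually_generated(10)))   # a finite series, as always
```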
A potential existence cannot be considered to be autonomous if it requires another phenomenon to turn this potentiality into reality; it has no existence until the conception has taken place. Following Popper's reasoning, we would have to assign a world 1 existence to all animals that could - conceivably - be generated as a consequence of mutations or combinations of genes. That would stretch the meaning of the word ‘existence’ beyond any as yet accepted limits, to render it so general as to rob it of any discriminatory power and thus make it useless. Such reasoning is tantamount to the assertion that anything exists whose existence cannot be proved to be impossible.
We can agree that there are (potential) world 3 objects which are implied by actual world 3.1 objects but which have not yet been realised. The mechanism which generates this implication is not however autonomous. In the above example of numbers it is the result of shared conventions of logic and mathematics. Or it follows from (the limitations of) our present knowledge. For the potential existence of as yet unrealised world 3 objects can also find its cause in the real and autonomous existence of world 1 objects coupled to the limitations in our present knowledge. The rejection of Aristotle's claim that eels are generated by mud (he claims to have seen it himself!) and the emergence of the theory that they spawn in the Sargasso Sea always had a potential existence independent of human activity: it was implied in what eels actually do. World three objects which are intended to describe world 1 objects should be implied by world 1! But do they ipso facto have an autonomous existence? What about Aristotle's theory about the origin of eels? A false theory cannot have any objective existence and the most likely prediction about any scientific theory is that sooner or later it will be proved false, at least in its explication, as new discoveries in other fields of science render the current concepts obsolete.
The last argument of Popper may seem to be the most convincing: world 3 objects can generate specific other world 3 objects in a way that is uniquely determined before any individual has discovered them: given our laws of logic, the conclusion that the series of natural numbers is infinite is forced on us once we accept the statement that every number has a successor. Again, that only seems to argue for an autonomous existence. The above infinity is inescapable only for those individuals who adhere to the convincing theory or axiom (it is no more than that) that every natural number has a successor, and who subscribe to the conventions of logic. We accept the infinity of natural numbers because we submit to the discipline we impose on our thinking by accepting the two above conventions and/or because the verdict of experience has never contradicted it. Both the notion of infinity and its acceptance as applied to possible numbers are conventional and subjective: there is nothing ‘out there’ corresponding to it.
Concluding, all appearance of the autonomous existence of world three objects can be explained either by shared conventions or as a reflection of the autonomous existence of world 1 objects. To prevent any misunderstanding, note that:
- | As stated, the above does not preclude that any thought of an individual can become the object of an information process (either through introspection or when communicated to other individuals) when attempting to understand, analyse or evaluate it. But the result of that information process remains a ‘representation’ of that ‘object-thought’ by the individuals inspecting it. Remember that any division between subject and object refers only to their position within an information process; it is functional, not ontological, it is not a property of the elements thus qualified, it says nothing about the independence of the object from the subject. |
- | I do not here assert that there cannot be any autonomous world 3 objects. To prove that is logically impossible. I only argue that neither Popper nor - to my knowledge - anyone else has made a compelling case for accepting such an existence and that a contemporary view of the information process cannot accommodate such an assertion. Also, it is not necessary for dealing with abstract concepts and therefore redundant, as the functional approach provides a simple and ‘natural’ explanation of all phenomena put forward by the proponents of autonomous existence of abstract concepts in support of their assertion and satisfies all necessary and sufficient requirements of objectivity. |
Why make such a fuss about it? Because accepting the autonomous existence of world 3 objects has a consequence that is relevant for the factual component of social decision-making, especially in science. The truth or falsity of theories meeting certain conditions (for instance Popper's demarcation criterion) would be a fact, would exist, and it would be above any human authority. It would justify putting scientific theories meeting Popper's conclusive demarcation criterion in a separate class because their truth, or at least their falsity, could be conclusively decided upon by confrontation with facts, a property imputed to physics. These would differ not by degrees, but in their very essence, from a theory which cannot be thus evaluated. All theories except physics would be pseudo-science, what Rutherford called ‘stamp-collecting’, which would exempt them from any methodological discipline and allow only fashion as a selection criterion.
If on the other hand Popper's objective content does not exist, but actually is what I understand by social content, then no theory is objectively and conclusively decidable as being ‘really’ false by confrontation with facts on the ground that it does not correspond to any ‘fact’. Without the appeal to objective content, theories which do meet his conclusive demarcation criterion differ only by degrees from those that do not. They do have an advantage: they allow a more objective way of deciding about them and thus lead to conclusions which will be more readily accepted by (scientific) society, although the history of science shows that this is not a foregone conclusion. But they do not differ essentially from theories whose evaluation requires additional conventional elements.
Lakatos: Research as a Program. 3b-3-2)
Had Lakatos been less modest and lived longer, I probably could have dealt with the evaluation of scientific theories simply by referring to and quoting from his work. His premature death did not grant him sufficient time to fully develop the implications and applications of his theory. And his modesty and kindness prevented him both from squarely taking on and refuting Feyerabend's nihilism and from acknowledging his fundamental differences with Popper. He took great pains to present himself as a disciple of Popper who has developed some important ‘extensions’ but who remains within Popper's ‘research program’ and leaves intact its ‘hard core’. That is incorrect. Lakatos rejects the claim of conclusivity and total objectivity of decisions (about scientific theories) which are based on his method, while Popper himself explicitly considers that claim to be part of the hard core of his theory.
Lakatos subscribes to the falsificationist element of Popper's theory, namely that one can never prove the truth of a scientific theory worthy of that name and that at best one can prove that it is refuted by a fact, and thus ‘falsified’. In his book ‘The Methodology of Scientific Research Programs’, p. 33, Lakatos distinguishes two types of falsificationists: the ‘naive’ and the ‘sophisticated’.
- | The naive falsificationist considers a theory falsified if it is refuted by finding true a ‘basic statement’. |
- | The sophisticated falsificationist will consider a theory falsified only if a better theory has been proposed and corroborated. A theory is better than its predecessor if it has excess empirical content over that predecessor (it predicts facts which the predecessor does not), if it explains the previous success of that predecessor, and if some of this excess content has been corroborated. |
While Lakatos never explicitly says so, Popper falls - by his own admission - into the first category. It is true that Popper developed the concepts which Lakatos uses to define the sophisticated falsificationist, and for that he richly deserves our gratitude. But in addition Popper requires the decision about the falsity of a theory on the basis of tests to take precedence over any comparison between theories. A theory falsified according to his method is false, period, and that verdict he holds to be totally objective and conclusive. That clearly is the position of the ‘naive’ falsificationist. The comparison of theories can - according to both Popper and Lakatos - only yield provisional, qualified conclusions.
Popper's methods are derived from his demarcation criterion and are mainly prescriptions as to how to apply it under various circumstances. Their justification relies on the concept of an absolute truth and on the objective and conclusive nature of a falsification. The methods proposed by Lakatos are justified by the explicit objective which we pursue when deciding about scientific theories. His acknowledged objective is to learn something about the nature of the world of our experience, of our universe. The demarcation criterion is for Lakatos - as it is for me - a means to arrive at a first and tentative decision about which theories to compare and to make that decision as objective as we can.
Lakatos has also brought to the foreground the interdependency of scientific theories, noting that one theory will serve as an axiom for other theories. One of the purposes of Part Two on life and information was to show that such interdependency does not follow from a deficient way of constructing theories, but from the nature of the information process and the ‘wholeness’ of our experience of reality.
Lakatos has also put forward the conclusion presented in VOL. ONE that such interdependency governs the testing of theories as well, because the decision about the truth of the singular statement by which we decide to consider a theory falsified or corroborated usually involves the acceptance of theories from other fields, for instance one explaining the working of a thermometer. He writes on p. 23:
‘The methodological (i.e. sophisticated) falsificationist realizes that fallible theories are involved in the experimental techniques of the scientist, in the light of which he interprets
the facts. In spite of this, he ‘applies’ these theories, he regards them in the given context not as theories under test, but as unproblematic background knowledge which we accept (tentatively) as unproblematic while we are testing the theory.’
We thus grant ‘observational’ status to what is a theory, but only tentatively, and only as a matter of convention. The conventions involved are usually unproblematic or at least - says Lakatos - are ‘institutionalized and endorsed by the scientific community’. That however does not make them less conventional. If we use fallible theories as axioms in the theory to be evaluated, if we base this evaluation on tests which obtain their discriminatory power from the acceptance of fallible theories as observational statements, we rob the decisions thus arrived at of any pretension to reflect an actual state of affairs as to the truth or falsity of the theory thus evaluated. A falsification which depends on accepting as true other fallible theories cannot but be conventional. Conventions are governed by instrumental considerations, in casu the need to arrive at a common decision on a scientific theory.
We justify the use of such conventions not by any appeal to an absolute objectivity, but by the fact that they are unproblematic, i.e. have not been legitimately challenged. What we mean by a ‘legitimate challenge’ will depend on the kind of society we consider and the view held in that society about justice. To accept a convention because it is unproblematic, unchallenged, is justifiable in a democratic society as defined in this book.
The point that any method for deciding about scientific theories is dependent on the kind of society we have has also been made by others. In the collected papers dedicated to the memory of Imre Lakatos (Method and Appraisal in Economics, pp. 181-205), T.W. Hutchison points to various such authors and remarks (p. 203) that any prescription of a method for the evaluation of scientific theories ‘must inevitably be based on certain ethical and political choices’. None of these authors however did what I do, namely start from such an explicit choice and then deduce what consequences that choice has for the evaluation of scientific theories. Nor did any of them place such decision-making in the context of a view on life and information in general, a context which is just as relevant as the political one.
Elsewhere, Lakatos states (p. 35): ‘... no experiment, experimental report, observation statement or well corroborated low level falsifying hypothesis alone can lead to falsification. There is no falsification before the emergence of a better theory.’ Facts which contradict a theory then are just anomalies until a theory emerges which is not ad hoc and which - in addition to the facts explained by the previous theory - also explains some or all of the facts which contradict the original one.
As said, Lakatos did not want to draw the logical conclusion from his theory. Like Popper, he wanted a purely objective demarcation criterion which would shield scientific knowledge from the vagaries of the current moral and political authority. On page 7 he writes:
‘The Catholic church repudiated Copernicus, The Soviet communist party rejected Mendelian genetics and killed its proponents, and the new liberal establishment exercises the right to deny freedom of speech to theories concerning race and intelligence, all by arguing that these theories are pseudo-scientific. This is why the problem of demarcation
between science and pseudo-science is not a problem of armchair philosophers: it has grave ethical and political implications.’
Like Popper, Lakatos failed in this quest for a purely objective demarcation criterion which would shield scientific knowledge from the current moral authority. At best, his theory presents an objective method of evaluation... provided that instead of absolute truth we accept an instrumental (I would say functional) criterion of evaluation, namely that the theory has a positive heuristic. In plain language, it requires the theory to make a positive contribution to the improvement of our knowledge. By itself, the instrumental character of a method already implies its conventional character. But the interdependency of scientific theories which Lakatos emphasises precludes any conclusive and totally objective evaluation even if we accept the criterion of positive heuristic.
The idea that we can immunise knowledge from human authority is a chimera, is simply unthinkable, because knowledge is a social undertaking. Some authority will have to decide which theories to teach at school, to use as a basis for decision-making by government, to subsidise in research. It is equally unthinkable that any authority which has that power will abide by rules established by people who do not have the power to force them to do so.
The problem of demarcation will never be solved as long as the problem of authority is not solved. Once we have solved the problem of authority by choosing for democracy, that choice dictates that we must find the most objective criterion or method for distinguishing between science and pseudo-science, and for choosing between scientific theories. But while it remains desirable that the objectivity be absolute, that is no longer vital. As shown in Part One, the choice for democracy justifies the establishment of norms (such as rules for evaluating scientific knowledge) and states the conditions which that process must meet. These rules, proposed in Parts Three and Five, come very close to the ones developed by Popper and Lakatos and borrow heavily from them. That is not surprising, as both were totally committed democrats.
The main but crucial difference is that my approach highlights that there is no shortcut to obtaining - by getting together - the power required for imposing and defending democratic rules for decision-making, including the choice of scientific theories. The rules for democratic decision-making require maximal objectivity when establishing facts, but they also entail that no claim of absolute validity or of total objectivity (of an evaluation of a theory) can suffice as a legitimation for rejecting dissenting opinions or new theories without further justification. We can use my single-purpose demarcation criterion for a first selection of theories which seem worth further consideration, and evaluate those by Lakatos' method for comparing theories. If only one theory is available, we can always compare it to the theory ‘we cannot say anything about the phenomenon considered’.
3b.8) The Context of the Evaluation of Scientific Theories
The experience which motivated my investigation into the foundations of social theories was the apparent chaos in the evaluation of social theories in general and in economics in particular.
It was therefore appropriate to use as a starting point ‘Method and Appraisal in Economics’, a collection of papers presented at or resulting from the Nafplion Colloquium on Research Programmes in Economics. The major theories discussed there are those of Popper, Lakatos, Polanyi, Kuhn, Feyerabend and Toulmin. In the course of my investigations some other names appeared, notably Ravetz.
The subject of this chapter is usually approached from two distinct angles:
- the explanation of how scientific theories are created, either in terms of the logic of that process (Popper cum suis) or, like Polanyi, from the point of view of the individual scientist, accentuating the inevitable subjective element of any knowledge;
- science as a social venture, seen from the point of view of the scientific community (e.g. Toulmin, Kuhn and Ravetz). In addition to doing justice to the fact that science is the product of a social venture, these theories, especially the ‘Weltanschauung’ (cosmology) type of Kuhn and Toulmin, are relevant in that they accentuate the interdependency of knowledge.
I have already dealt with Polanyi and will return to Kuhn and Toulmin in the capita selecta. How scientific theories come about and gain acknowledgement is relevant to this book only to the extent that it is a consideration in the decision to choose or reject a scientific theory, and in the choice of norms which should govern such a decision in social decision-making.
If we see scientific theories as a product of human social activity, just like an aircraft, then attempts to formalise scientific theories on some a priori basis - like the Vienna school - look somewhat weird. No one would evaluate the design of an aircraft on the basis of some a priori structural characteristics, nor of some ideal and unattainable objective. We would use its flight performance in terms of load-capacity, speed, safety, economy and similar functionalist criteria, even its looks.
None of the above theories of scientific knowledge explicitly acknowledges - as an explanatory and normative factor - the purpose of this knowledge. The most functionalist is Toulmin, who describes and evaluates scientific theories in terms of their function as explanations for recognised regularities. Hilary Putnam, and in my country De Groot/Hofstee, emphasise prediction. But that raises the question: why do we recognise regularities and, if we do so, why should we want explanations for them? Popper did point out that we recognise regularities because we look for them; but he gave no reason for our doing so. In Part Two, about life and information, I have proposed such a reason: we look for regularities and for their explanation as a means to improve our decision-making.1)
Attributing an autonomous existence of their own to scientific theories has obfuscated their obvious functional character. Only by endowing theories, once they have been conceived, with the status of autonomous objects could they be studied like any other autonomous object, without taking account of the human activity necessary for their generation and communication. If we reject this autonomous existence and see scientific theories as the product of social human activity, having an existence only in the minds of individuals, the natural question to ask in attempting to evaluate them is: what do the various individuals and groups have in mind when creating and evaluating them?
Scientific theories first have to be produced; then they are used to solve problems. The obvious classification of the individuals involved is into producers of theories and consumers who use them for solving their problems. That distinction - as in economics - does not refer to the nature of the individuals thus classified, but to their function in the social activity of science. Scientists are both: producers of their own theory and - in their auxiliary statements - consumers of the theories of other scientists. The ‘subjectivistic’ theories of knowledge (as Popper calls them) use as explanatory variables the motives of the scientist, sometimes both as a producer and as a consumer, but fail to make the distinction. By and large, these subjectivistic theories are producer-oriented, dealing mainly with the explanation of how scientific theories come about. Both Kuhn and Toulmin place the production of scientific theories and their evaluation in the same hands, or rather heads. They do not see the scientists who judge theories as consumers, but rather as competing producers judging the products of their peers. That indeed is often the case, not because of any ‘natural law’, but because the consumer of science is conspicuously absent from the literature of the philosophy of science.
This book is written from the point of view of the consumer who needs scientific theories to establish the quid facti in his decision-making, including the scientist who uses theories from other fields for his own theory; all must evaluate the product ‘science’. The production of science is relevant to the consumer only in so far as understanding the production helps us to evaluate the product. Knowledge about the subjective motivations of a scientist can provide only circumstantial evidence. The consumer needs criteria and methods for evaluating the product itself: scientific theories. He will derive such a criterion from the use to which he intends to put the theories, usually to improve his decision-making. The contribution of a theory may be very specific, such as that of aerodynamics to the design of an aircraft. Or it could be as general and diffuse as a more realistic cosmology, a better background for looking at life, at man and at the theories which men produce. Such a cosmology may serve as a focus, as a means to integrate, to make a whole of, the various bits of more detailed, specific and precise scientific knowledge (something like Kuhn's paradigm).
Looking at science from the point of view of the consumer is not classic utilitarianism! It does not specify any particular use, like the creation of wealth, only its function in decision-making, whatever its purpose. The holistic nature of knowledge implies that any theory in whatever field which gives us a better grip on some part of reality can improve decision-making and is useful in that respect. The demarcation criterion and the requirement of maximal objectivity in the evaluation of scientific statements proposed in this book were dictated by this objective.
In our western scientific community the most generally acknowledged objective of science is the development of mental tools for getting a grip on reality by designing theories that give the ‘best’ representation of it in terms of correspondence to facts. To be suitable as an axiom for other theories (for instance as an initial condition) or for application in technology, a scientific theory must be at least provisionally accepted as (the most) true, on the basis of an evaluation which, in a democracy, should be as objective as we can make it. To the extent that the scientific community also strives for objectivity, there is congruence in a democracy between the acknowledged objective - and thus a criterion for the evaluation - of the scientific community and that of society.
The emphasis I had to put on the point of view of the consumer is the consequence of its general neglect in the academic, especially the philosophical, community. It does not express the view that the producer is irrelevant: an adequate and efficient production of science requires congruence, harmony, between the objectives of producers and consumers. The social theories of science come into their own when investigating how the production and distribution of science should be organised to further that purpose. We cannot rely on the market alone. First, much of science becomes a public good once it has been produced; the use of its findings cannot then be allocated on the basis of price. Secondly, most scientists, and certainly the most creative ones, probably did not choose their career primarily on the basis of its financial rewards. The democratic procedure of ‘voting’ (with money or in any other way) by the consumer at large is out of the question, because he does not actually buy theories, but only suffers the consequences of the application of such theories by others, without being able to decide to which factor he should impute these consequences. Also, he is not competent to evaluate statements by scientists. The research and engineering departments of producers provide an intermediary market with the necessary expertise and testing facilities, but only for fields of science directly connected to the production of specific and marketable goods.
To the extent that a method for the evaluation of scientific theories is implemented at all, it will mostly be applied by scientists. As the social theories point out, individual scientists evaluate a scientific theory on the basis of many different and subjective criteria, such as its contribution to their own status or that of their faculty. Objectivity ranks low in the set of criteria they actually use. If questioned, they point to the pressures under which they have to operate, to the bureaucracy which is inevitable in organisations like universities, etc.: in short, a list of all the problems encountered in the pursuit of an objective evaluation of theories. But lacking an acknowledged common basis for an ultimate and operational criterion, they do not really address the solution of the problem. I have tried to provide a workable starting point for consumers to defend their interest. In a democracy which acknowledges the responsibility of each member for his objectives, the consumers cannot rely on producers or on some nameless institution. They must do the job themselves, either by making a living out of it in a department of philosophy (methodology) or as an independent group of scientists who have a stake in it as ‘consumers’ of theories from other fields.
1) In an article whose reference I have lost, Putnam points out that the factor responsible for the survival of theories is not some theoretical standard but success in practice. And in practice an important feature of a theory is its explanatory power. Popper does not forget it: formally it is included in his concept of empirical content. That formal concept defines the border between events which are included and those which are excluded by the theory. But I hold that scientific theories do much more: they help us understand the world around us, make a representation of it, of the whole of it. Our cosmology is not an assembly of parts which are conceived as separate entities with a clear scheme of how to assemble them into a whole. We conceive it more as what psychology calls a ‘Gestalt’; it is holistic. Putnam points out that any test or application of a theory consists of three elements:
- the theory (for instance the Universal theory of Gravity, UG)
- auxiliary statements (AS), such as initial conditions
- the (expected) outcome which has to be explained or predicted (EXPL)
If the perceived outcome does not match the prediction or is not adequately explained, then any of the three elements may be at fault.
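Schematically, and using only the abbreviations introduced above (with T standing for any theory in place of UG; a minimal sketch of the underlying logic, not a formula taken from Putnam), the test situation can be rendered as:

\[
T \wedge AS \vdash EXPL
\]
\[
\neg EXPL \;\Rightarrow\; \neg (T \wedge AS) \;\equiv\; \neg T \vee \neg AS
\]

A failed test thus tells us, by modus tollens, only that the conjunction of theory and auxiliary statements cannot stand, not which of the two is to blame; and since the report of the outcome EXPL is itself a fallible observation statement, the blame may in principle be imputed to the theory, to the auxiliary statements, or to the observation of the outcome.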