propositional connectives, which include accounts of *material implication, *strict implication, and relevant implication. The Megarians and the Stoics also investigated various logical *antinomies, including the *liar paradox. The leading logician of this school was Chrysippus, credited with over 100 works in logic.
There were few developments in logic in succeeding periods, other than a number of handbooks, summaries, translations, and commentaries, usually in a simplified and combined form. The more influential authors include Cicero, Porphyry, and Boethius in the later Roman Empire; the Byzantine scholiast Philoponus; and al-Fārābī, Avicenna, and Averroës in the Arab world.
The next major logician known to us is an innovator of the first rank: Peter Abelard, who worked in the early twelfth century. He composed an independent treatise on logic, the Dialectica, and wrote extensive commentaries. There are discussions of conversion, opposition, quantity, quality, tense logic, a reduction of de dicto to *de re modality, and much else. Abelard also clearly formulates several semantic principles, including the Tarski biconditional for the theory of truth, which he rejects. Perhaps most important, Abelard is responsible for the clear formulation of a pair of relevance criteria for logical consequences. (*Relevance logic.) The failure of his criteria led later logicians to reject relevance implication and to endorse material implication.
Spurred by Abelard's teachings and problems he proposed, and by further translations, other logicians began to grasp the details of Aristotle's texts. The result, coming to fruition in the middle of the thirteenth century, was the first phase of *supposition theory, an elaborate doctrine about the reference of terms in various propositional contexts. Its development is preserved in handbooks by Peter of Spain, Lambert of Auxerre, and William of Sherwood. The theory of *obligationes, a part of non-formal logic, was also invented at this time. Other topics, such as the relation between time and modality, the conventionality of semantics, and the theory of *truth, were investigated.
The fourteenth century is the apex of medieval logical theory, containing an explosion of creative work. Supposition theory is developed extensively in its second phase by logicians such as William of Ockham, Jean Buridan, Gregory of Rimini, and Albert of Saxony. Buridan also elaborates a full theory of consequences, a cross between entailments and inference rules. From explicit semantic principles, Buridan constructs a detailed and extensive investigation of syllogistic, and offers completeness proofs. Nor is Buridan an isolated figure. Three new literary genres emerged: treatises on syncategoremata (logical particles), which attempted to codify their behaviour and the inferences they license; treatises on sentences, called 'sophisms', that are puzzling or challenging given background assumptions about logic and language; and treatises on insolubles, such as the liar paradox.
The creative energy that drove the logical inquiries of the fourteenth century was not sustained. By the middle of the fifteenth century little if any new work was being done. There were instead many simplified handbooks and manuals of logic. The descendants of these textbooks came to be used in the universities, and the great innovations of medieval logicians were forgotten. Probably the best of these works is the *Port Royal Logic, by Antoine Arnauld and Pierre Nicole, which was published in 1662. When writers refer to 'traditional logic', they usually have this degenerate textbook tradition in mind. (*Logic, traditional.)
Since the beginning of the modern era most of the contributions to logic have been made by mathematicians. Leibniz envisioned the development of a universal language to be specified with mathematical precision. The syntax of the words is to correspond to the metaphysical make-up of the designated entities. The goal, in effect, was to reduce scientific and philosophical speculation to computation. Although this grandiose project was not developed very far, and it did not enjoy much direct influence, the Universal Characteristic is a precursor to much of the subsequent work in mathematical logic.
In the early nineteenth century Bolzano developed a number of notions central to logic. Some of these, like analyticity and logical consequence, are seen to be relative to a collection of 'variable' concepts. For example, a proposition C is a consequence of a collection P of propositions relative to a group G of variable items, if every appropriate uniform substitution for the members of G that makes every member of P true also makes C true. This may be the first attempt to characterize consequence in non-modal terms, and it is the start of a long tradition of characterizing logical notions in semantic terms, using a distinction between logical and non-logical terminology.

Toward the end of the nineteenth century one can distinguish three overlapping traditions in the development
of logic. One of them originates with Boole and includes, among others, Peirce, Jevons, Schröder, and Venn. This 'algebraic school' focused on the relationship between regularities in correct reasoning and operations like addition and multiplication. A primary aim was to develop calculi common to the reasoning in different areas, such as propositions, classes, and probabilities. The orientation is that of abstract algebra. One begins with one or more systems of related operations and articulates a common, abstract structure. A set of axioms is then formulated which is satisfied by each of the systems. The system that Boole developed is quite similar to what is now called Boolean algebra. Other members of the school developed rudimentary *quantifiers, which were sometimes taken to be extended, even infinitary, conjunctions and disjunctions.

The aim of the second tradition, the 'logicist school', was to codify the underlying logic of all rational, scientific discourse into a single system. For them, logic is not the result of abstractions from the reasoning in particular disciplines and contexts. Rather, logic concerns the most general features of actual precise discourse, features independent of subject-matter.
The major logicists were Russell, the early Wittgenstein perhaps, and the greatest logician since Aristotle,
530 logic, history of
Gottlob Frege. In his Begriffsschrift (translated in van Heijenoort (ed.), From Frege to Gödel), Frege developed a rich
formal language with full mathematical rigour. Despite the two-dimensional notation, it is easily recognized as a contemporary *higher-order logic. Quantifiers are understood as they are in current logic textbooks, not as extended conjunctions and disjunctions. Unlike the algebraists, Frege did not envision various domains of discourse, each of which can serve as an interpretation of the language. Rather, each (first-order) variable is to range over all objects whatsoever. Moreover, in contemporary terms, the systems of the logicists had no non-logical terminology.
Frege made brilliant use of his logical insights when developing his philosophical programmes concerning mathematics and language. He held that arithmetic and analysis are parts of logic (*logicism; mathematics, history of the philosophy of), and made great strides in casting number theory within the system of the Begriffsschrift. To capture mathematical induction, minimal closures, and a host of other mathematical notions, he developed and exploited the *ancestral relation, in purely logical terms.
Unfortunately, the system Frege eventually developed was shown to be inconsistent. It entails the existence of a concept R which holds of all and only those extensions that do not contain themselves. A contradiction, known as *Russell's paradox, follows.
A major response was the multi-volume Principia Mathematica, by Russell and Whitehead, which attempts to recapture the logicist programme by developing an elaborate theory of *types. (*Higher-order logic.) Antinomies are avoided by enforcing a *'vicious-circle principle' that no item may be defined by reference to a totality that contains the item to be defined. Despite its complexity, Principia Mathematica enjoyed a wide influence among logicians and philosophers. An elegant version of the theory, called simple type theory, was introduced by Ramsey. It violates the vicious-circle principle, but still avoids formal paradox.
The third tradition dates back to at least Euclid and, in this period, includes Dedekind, Peano, Hilbert, Pasch, Veblen, Huntington, Heyting, and Zermelo. The aim of this 'mathematical school' is the axiomatization of particular branches of mathematics, like geometry, arithmetic, analysis, and set theory. Zermelo, for example, produced an axiomatization of set theory in 1908, drawing on insights of Cantor and others. The theory now known as Zermelo–Fraenkel set theory is the result of some modifications and clarifications, due to Skolem, Fraenkel, and von Neumann, among others.
Unlike Euclid, some members of the mathematical school thought it important to include an explicit formulation of the rules of inference—the logic—in the axiomatic development. In some cases, such as Hilbert and his followers, this was part of a formalist philosophical agenda, sometimes called the Hilbert programme. (*Formalism.) Others, like Heyting, produced axiomatic versions of the logic of *intuitionism and intuitionistic mathematics, in order to contrast and highlight their revisionist programmes (see Brouwer).
A variation on the mathematical theme took place in Poland under Łukasiewicz and others. Logic itself became the branch of mathematics to be brought within axiomatic methodology. Systems of propositional logic, modal logic, tense logic, Boolean algebra, and *mereology were designed and analysed.
A crucial development occurred when attention was focused on the languages and the axiomatizations themselves as objects for direct mathematical study. Drawing on the advent of non-Euclidean geometry, mathematicians in this school considered alternative interpretations of their languages and, at the same time, began to consider metalogical questions about their systems, including issues of *independence, *consistency, *categoricity, and *completeness. Both the Polish school and those pursuing the Hilbert programme developed an extensive programme for such 'metamathematical' investigation. (*Metalanguage; *metalogic.) Eventually, notions about syntax and proof, such as consistency and derivability, were carefully distinguished from semantic, or model-theoretic, counterparts, such as satisfiability and logical consequence.
This metamathematical perspective is foreign to the logicist school. For them, the relevant languages were already fully interpreted, and were not to be limited to any particular subject-matter. Because the languages are completely general, there is no interesting perspective 'outside' the system from which to study it. The orientation of the logicists has been called 'logic as language', and that of the mathematicians and algebraists 'logic as calculus'. Despite problems of communication, there was significant interaction between the schools. Contemporary logic is a blending of them.
In 1915 Löwenheim carefully delineated what would later be recognized as the first-order part of a logical system, and showed that if a first-order formula is satisfiable at all, then it is satisfiable in a countable (or finite) domain. He was firmly rooted in the algebraic school, using techniques developed there. Skolem went on to generalize that result in several ways, and to produce more enlightening proofs of them. The results are known as the Löwenheim–Skolem theorems. (*Skolem's paradox.)

The intensive work on metamathematical problems culminated in the achievements of Kurt Gödel, a logician whose significance ranks with Aristotle and Frege. In his 1929 doctoral dissertation, Gödel showed that a given first-order sentence is deducible in common deductive systems for logic if and only if it is logically true in the sense that it is satisfied by all interpretations. This is known as Gödel's completeness theorem. A year later, he proved that for common axiomatizations of a sufficiently rich version of arithmetic, there is a sentence which is neither provable nor refutable therein. This is called Gödel's incompleteness theorem, or simply *Gödel's theorem. The techniques of Gödel's theorem appear to be general, applying to any reasonable axiomatization that
includes a sufficient amount of arithmetic. But what is 'reasonable'? Intuitively, an axiomatization should be effective: there should be an *algorithm to determine whether a given string is a formula, an axiom, etc. But what is an 'algorithm'? Questions like this were part of the motivation for logicians to turn their attention to the notions of computability and effectiveness in the middle of the 1930s.
There were a number of characterizations of computability, developed more or less independently, by logicians like Gödel (recursiveness), Post, Church (lambda-definability), Kleene, Turing (the *Turing machine), and Markov (the Markov algorithm). Many of these were by-products of other research in mathematical logic. It was shown that all of the characterizations are coextensive, indicating that an important class had been identified. Today, it is widely held that an arithmetic function is computable if and only if it is recursive, Turing-machine computable, etc. This is known as *Church's thesis.
Later in the decade Gödel developed the notion of set-theoretic constructibility, as part of his proof that the axiom of *choice and Cantor's *continuum hypothesis are consistent with Zermelo–Fraenkel set theory (formulated without the axiom of choice). In 1963 Paul Cohen showed that these statements are independent of Zermelo–Fraenkel set theory, introducing the powerful technique of forcing. (*Independence.) There was (and is) a spirited inquiry among set theorists, logicians, and philosophers, including Gödel himself, into whether assertions like the continuum hypothesis have determinate truth-values. (*Continuum problem; *mathematics, problems of the philosophy of.)
Alfred Tarski, a pupil of Łukasiewicz, was one of the most creative and productive logicians of this, or any other, period. His influence spreads among a wide range of philosophical and mathematical schools and locations. Among philosophers, he is best known for his definitions of *truth and logical consequence, which introduce the fruitful semantic notion of *satisfaction. This, however, is but a small fraction of his work, which illuminates the methodology of deductive systems, and such central notions as completeness, decidability, consistency, satisfiability, and definability. His results are the foundation of several ongoing research programmes.
Alonzo Church was another major influence in both mathematical and philosophical logic. He and students such as Kleene and Henkin have developed a wide range of areas in philosophical and mathematical logic, including completeness, definability, computability, and a number of Fregean themes, such as second-order logic and sense and reference. Church's theorem is that the collection of first-order logical truths is not recursive. It follows from this and Church's thesis that there is no algorithm for determining whether a given first-order formula is a logical truth. Church was a founder of the Association for Symbolic Logic and long-time guiding editor of the Journal of Symbolic Logic, which began publication in 1936. Volumes 1 and 3 contain an extensive bibliography of work in symbolic logic since antiquity.
The development of logic in the first few decades of this century is one of the most remarkable events in intellectual history, bringing together many brilliant minds working on closely related concepts.
Mathematical logic has come to be a central tool of contemporary analytic philosophy, forming the backbone of the work of major figures like Quine, Kripke, Davidson, and Dummett. Since about the 1950s special topics of interest to contemporary philosophers, such as modal logic, tense logic, *many-valued logic (used in the study of *vagueness), *deontic logic, relevance logic, and nonstandard logic, have been vigorously studied. The field still attracts talented mathematicians and philosophers, and
s.s.
*logic, traditional; logical laws
I. M. Bocheński, A History of Formal Logic, tr. and ed. Ivo Thomas (New York, 1956).
Alonzo Church, Introduction to Mathematical Logic (Princeton, NJ, 1956).
Martin Davis (ed.), The Undecidable (New York, 1965).
Jean van Heijenoort (ed.), From Frege to Gödel (Cambridge, Mass., 1967).
William Kneale and Martha Kneale, The Development of Logic (Oxford, 1962).
Alfred Tarski, Logic, Semantics and Metamathematics, 2nd edn., tr. J. H. Woodger, ed. John Corcoran (Indianapolis, 1983).
logic, informal Informal logic examines the nature and function of arguments in natural language, stressing the craft rather than the formal theory of reasoning. It supplements the account of simple and compound statements offered by *formal logic and, reflecting the character of arguments in natural language, widens the scope to include inductive as well as deductive patterns of inference.

Informal logic's own account of arguments begins with assertions—the premisses and conclusions—whose rich meaning in natural language is largely ignored by formal logic. Assertions have meaning as statements as well as actions and often reveal something about the person who makes them. Not least, they are the main ingredient in patterns of inference. Apart from the crucial action of claiming statements to be true, the assertions found in an argument may play other performative roles, such as warranting a statement's truth (on one's own authority or that of another), conceding its truth, contesting it, or—instead of asserting it at all—assuming the statement as a hypothesis. Assertions also have an epistemic dimension. It is a convention of natural language (though hardly a universal truth) that speakers believe what they assert. Appraising the full meaning of a premiss or conclusion therefore involves gauging whether the statement was asserted merely as a belief or, in addition, as an objective fact or even as an item of knowledge. Finally, assertions have an emotive side. Few arguments of natural language are utterly impersonal. Attitudes and feelings seep from the language of argument and can easily influence what direction a sequence of reasoning may take. Because
informal logic sees assertions and arguments as woven into the fabric of discourse, the threads it traces are extremely varied: embedded but possibly incomplete patterns of deductive and non-deductive inference, hidden assumptions, conversational implications, vagueness, rhetorical techniques of persuasion, and, of course, fallacies. Such topics, though important for understanding arguments in natural language, lead it far from the concerns of formal logic. That informal logic lacks the precision and elegance of a formal theory is hardly surprising, therefore, but it probably comes as close as any enterprise ever will to being a science of argumentation.
r.e.t.
I. Copi, Informal Logic (New York, 1986).
F. W. Dauer, Critical Thinking: An Introduction to Reasoning (Oxford, 1989).
logic, intuitionist: see intuitionist logic.
logic, many-valued: see many-valued logic.
logic, modal: see modal logic.
logic, modern Logic, whether modern or traditional, is about sound reasoning and the rules which govern it. In the mid-nineteenth century (say from 1847, the date of Boole's book The Mathematical Analysis of Logic), logic began to be developed as a rigorous mathematical system. Its development was soon speeded along by controversies about the foundations of mathematics. The resulting discoveries are now used constantly by mathematicians, philosophers, linguists, computer scientists, electronic engineers, and less regularly by many others (for example, music composers and psychologists). Gödel's incompleteness theorem of 1931 was a high point not only for logic but also for twentieth-century culture. Gödel's argument showed that there are absolute limits to what we can achieve by reasoning within a formal system; but it also showed how powerful mechanical calculation can be, and so it led almost directly to the invention of digital computers.
Many arguments are valid because of their form; any other argument of the same form would be valid too. For example:

Fifty-pence pieces are large seven-sided coins.
This machine won't take large coins.
Therefore this machine won't take fifty-pence pieces.

An auk is a short-necked diving bird.
What Smith saw was not a short-necked bird.
Therefore what Smith saw was not an auk.
Both of these arguments can be paraphrased into the form:

(1) Every X is a Y and a Z.
No Y is a W.
Therefore no X is a W.

(Thus for the first, X = fifty-pence piece, Y = large coin, Z = seven-sided object, W = thing that this machine will take.) This form (1) is an argument schema; it has schematic letters in it, and it becomes an argument when we translate the letters into phrases. Moreover, every argument got from the schema in this way is valid: the conclusion (after 'Therefore') does follow from the premisses (the sentences before 'Therefore'). So we call (1) a valid argument schema.
Likewise some statements are true purely by virtue of their form and hence are logically valid. We can write down a statement schema to show the form, for example:

If p and q then p.

Here the schematic letters p, q have to be translated into clauses; but whatever clauses we use, the resulting sentence must be true. Such a schema is logically valid; we can regard it as a valid argument schema with no premisses.
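The brute-force test behind this claim (try every assignment of truth-values to the schematic letters) can be sketched in a few lines of Python; the function names are invented for illustration:

```python
from itertools import product

def is_valid(schema, num_letters):
    """A schema is valid if it comes out true under every assignment
    of truth-values to its schematic letters."""
    return all(schema(*values)
               for values in product([True, False], repeat=num_letters))

def if_p_and_q_then_p(p, q):
    # 'if A then B' read as material implication: (not A) or B
    return (not (p and q)) or p

print(is_valid(if_p_and_q_then_p, 2))                  # True
print(is_valid(lambda p, q: (not p) or (p and q), 2))  # False: fails when p is true, q false
```

Searching all 2^n assignments is just the truth-table method for propositional schemas; it does not carry over to quantified schemas, where the range of interpretations is infinite.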
What does it mean to say that a particular argument, expressed in English, has a particular argument schema as its form? Unfortunately this question has no exact answer. As we saw in the examples above, the words in an argument can be rearranged or paraphrased to bring out the form. Words can be replaced by synonyms too; an argument doesn't become invalid because it says 'gramophone' at one point and 'record-player' at another. For the last 100 years or more, it has been usual to split logic into an exact part which deals with precisely defined argument schemas, and a looser part which has to do with translating arguments into their logical *form.

This looser part has been very influential in philosophy. One doctrine—we may call it the logical form doctrine—states that every proposition or sentence has a logical form, and the logical forms of arguments consist of the logical forms of the sentences occurring in them. In the early years of the century Russell and Wittgenstein put forward this doctrine in a way which led to the programme of *analytic philosophy: analysing a proposition was regarded as uncovering its logical form. Chomsky has argued that each sentence of a natural language has a structure which can be analysed at several levels, and one of these levels is called LF for logical form—roughly speaking, this level carries the meaning of the sentence. However, Chomsky's reasons for this linguistic analysis have nothing to do with the forms of valid arguments, though his analysis does use devices from logic, such as quantifiers and variables. One can hope for a general linguistic theory which gives each natural-language sentence a logical form that explains its meaning and also satisfies the logical form doctrine; logicians such as Montague and his student Kamp have made important suggestions in this direction, but the goal is still a long way off.
Let us turn to the more exact part of logic. Experience shows that in valid argument schemas we constantly meet words such as 'and', 'or', 'if'; moreover, the sentences can be paraphrased so that these words are used to connect clauses, not single words. For example, the sentence
Fifty-pence pieces are large seven-sided coins

can be paraphrased as

Fifty-pence pieces are large coins and fifty-pence pieces are seven-sided.
We can introduce symbols to replace these words, for example ∧ for 'and', ∨ for 'or', ¬ for 'it is not true that . . .' and → for 'if . . . then'. Unlike the schematic letters, these new symbols have a fixed meaning and they can be translated into English. They are known as *logical constants.
Round about 1880 Frege and Peirce independently suggested another kind of expression for use in argument schemas. We write

∀x . . . x . . .

to mean that '. . . x . . .' is true however x is interpreted. The expression ∀x can be read as 'For all x'. For example, the sentence

Fifty-pence pieces are large seven-sided coins

can be rewritten as

∀x (if x is a fifty-pence piece then x is a large seven-sided coin),

or, using the logical constants,

(2) ∀x (x is a fifty-pence piece → (x is a large coin ∧ x is seven-sided)).

This last sentence says that whatever thing we consider (as an interpretation for x), if it's a fifty-pence piece then it's a large coin and it's seven-sided. The symbol x is not a schematic letter in (2), because the expression ∀x becomes nonsense if we give x an interpretation. Instead it is a new kind of symbol which we call a bound variable. The expression ∀x has a twin, ∃x, which is read as 'For some x'. These two expressions are the main examples of logical *quantifiers.
Quantifiers are somewhere between logical constants and schematic letters. Like logical constants, they do have a fixed meaning. But this meaning needs to be filled out by the context, because we need to know what range of interpretations of the bound variable is allowed. This range is called the domain of quantification. (Frege assumed that the domain of quantification is always the class of all objects. But in practice when we say 'everybody' we usually mean everybody in the room, or all adults of sound mind, or some other restricted class of people.)
With the help of the symbols described above, we can translate English sentences into a *formal language. For example we can translate (2) into

∀x (A(x) → (B(x) ∧ C(x))).

Here A, B, and C are schematic letters which need to be interpreted as clauses containing x, such as 'x is a fifty-pence piece'; this is what the (x) in A(x) indicates. The grammar of this formal language can be written down in a mathematical form. By choosing a particular set of symbols and saying exactly what range of interpretations is allowed for the schematic letters and the quantifiers, we single out a precise formal language, and we can start to ask mathematical questions about the valid argument schemas which are expressible in that language.
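Over a finite domain, interpreting such a formula is a mechanical matter of checking each element in turn. Here is a minimal sketch in Python; the coin data and predicate names are invented for illustration:

```python
def forall(domain, formula):
    """Evaluate a universally quantified formula over a finite domain:
    the bound variable ranges over every element in turn."""
    return all(formula(x) for x in domain)

# A toy interpretation: the domain is two coins, and the schematic
# letters A, B, C are interpreted as clauses about them.
domain = [
    {"name": "50p", "fifty_pence": True,  "large": True,  "seven_sided": True},
    {"name": "10p", "fifty_pence": False, "large": False, "seven_sided": False},
]

def A(x): return x["fifty_pence"]   # 'x is a fifty-pence piece'
def B(x): return x["large"]         # 'x is a large coin'
def C(x): return x["seven_sided"]   # 'x is seven-sided'

# (2): for all x, A(x) -> (B(x) and C(x)), with -> as material implication.
print(forall(domain, lambda x: (not A(x)) or (B(x) and C(x))))  # True
```

Choosing a different list for `domain` changes the truth-value of the formula without changing the formula itself, which is exactly the sense in which the domain of quantification is supplied by context.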
For example a first-order language is a formal language built up from the symbols described above, where all quantifiers are interpreted as having the same domain of quantification but this domain can be any non-empty set. First-order logic is logic based on argument schemas written in a first-order language.
What is the dividing-line between valid and invalid argument schemas? There are two main approaches to this question. In the first approach, which we may call the rule-based or syntactic one, we suppose that we can intuitively tell when a simple argument is valid, just by looking at it; we count a complicated argument as valid if it can be broken down into simple steps which we immediately recognize as valid. This approach naturally leads us to write down a set of simple valid argument schemas and some rules for fitting them together. The result will be a logical *calculus, i.e. a mathematical device for generating valid argument schemas. The array of symbols written down in the course of generating an argument schema by the rules is called a formal proof of the schema.
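The flavour of such a calculus can be sketched with a single rule, *modus ponens ('from X and X → Y, infer Y'), applied repeatedly to a stock of premisses. The representation below is an invented toy, not a full calculus:

```python
# Formulas as nested tuples: ('->', X, Y) stands for 'if X then Y';
# atomic sentences are plain strings.
def modus_ponens_round(known):
    """Apply 'from X and X -> Y, infer Y' once to everything known."""
    derived = set(known)
    for f in known:
        if isinstance(f, tuple) and f[0] == '->' and f[1] in known:
            derived.add(f[2])
    return derived

premisses = {'p', ('->', 'p', 'q'), ('->', 'q', 'r')}
state = premisses
while True:                 # iterate until no new formulas appear
    new_state = modus_ponens_round(state)
    if new_state == state:
        break
    state = new_state

print('r' in state)  # True: the two-step derivation of q, then r, is found
```

The record of which rule produced which formula at each round is, in miniature, a formal proof in the sense just defined.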
Once we have a logical calculus up and running, the mathematicians may suggest ways of revamping it to make it easier to teach to undergraduates, or faster to run on a computer. There is a great variety of logical calculuses for first-order logic, all of them giving the same class of valid argument schemas. Two well-known examples are the *natural deduction calculus (Gentzen, 1934), which breaks down complex arguments into intuitively 'natural' pieces, and the tableau or truth-tree calculus (Beth, 1955), which is very easy to learn and can be thought of as a systematic search for counter-examples (see the next paragraph).
There is another approach to defining validity, the semantic approach. In this approach we count an argument schema as valid precisely if every interpretation which makes the premisses true makes the conclusion true too. To phrase this a little differently, a counter-example to an argument schema is an interpretation which turns the premisses into true sentences and the conclusion into a false sentence; the semantic definition says that an argument schema is valid if and only if it has no counter-examples.
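For schema (1) above, a counter-example search can be run mechanically over small finite domains, letting X, Y, Z, W range over all subsets. (A search like this can only fail to find counter-examples in the domains tried; it does not by itself establish validity over every domain.) A sketch, with invented names:

```python
from itertools import combinations, product

def subsets(domain):
    return [set(c) for r in range(len(domain) + 1)
            for c in combinations(domain, r)]

def find_counterexample(domain):
    """Schema (1): Every X is a Y and a Z; No Y is a W; therefore no X is a W.
    A counter-example makes both premisses true and the conclusion false."""
    for X, Y, Z, W in product(subsets(domain), repeat=4):
        every_X_is_Y_and_Z = X <= (Y & Z)
        no_Y_is_W = not (Y & W)
        no_X_is_W = not (X & W)
        if every_X_is_Y_and_Z and no_Y_is_W and not no_X_is_W:
            return (X, Y, Z, W)
    return None

print(find_counterexample({0, 1, 2}))  # None: no counter-example in this domain
```

Editing the conclusion line to test 'no Z is a W' instead turns up counter-examples at once, showing that variant schema to be invalid.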
At first sight this is a very paradoxical definition; it makes the following highly implausible argument schema valid just because the conclusion is true whatever we put for X:

The Emperor Caligula's favourite colour was X.
Therefore Omsk today is a town in Siberia with a population of over a million and a large petroleum industry, and X = X.

Nevertheless, one can argue that the semantic approach works if the language of our logic doesn't contain any words (such as 'Omsk' or 'today') that tie us down to specific features of our world. This is an untidy view, because the notion of a specific feature of our world is not sharp;
534 logic, modern
should it include the physical laws of the universe, or the mathematical properties of sets? One has to answer questions like these in order to draw a line between logical necessity and other kinds of necessity (physical or mathematical), and probably there will always be philosophical debate about how best to do this.
For first-order logic the problem happily doesn't arise. One can prove that every first-order argument schema which is justified by any of the standard logical calculuses is valid in the semantic sense. This is a mathematical theorem, the soundness theorem for first-order logic. Conversely if an argument schema is not proved valid by the logical calculuses, then we can show that there is an interpretation of the schema which makes the premisses true and the conclusion false. This again is a mathematical theorem, the *completeness theorem for first-order logic (Gödel, 1930; this is quite different from his incompleteness theorem of 1931). The completeness theorem justifies both the rule-based approach and the semantic one, in the following way. The chief danger with the rule-based approach was that we might have overlooked some rule that was needed. The completeness theorem assures us that any schema not justified by our logical calculus would have a counter-example, so it certainly wouldn't be valid. And conversely the chief danger with the semantic approach was that it might make some argument schema valid for spurious reasons (like the example with Omsk above). The completeness theorem shows that if an argument has no counter-example, then it is justified by the logical calculus. In this way the valid first-order argument schemas are trapped securely on both sides, so we can be very confident that we have the dividing-line in the right place.
For other logics the position is less clear. For example, in monadic second-order logic we have some quantifiers whose domain of quantification is required to be the family of subsets of a particular set. Because of this restriction, some truths of set theory can be expressed as valid schemas in this logic, and one consequence is that the logic doesn't admit a completeness theorem. In temporal logics there are logical constants such as 'until' or 'it will sometime be true that . . .'; to define validity in these logics, we need to decide what background assumptions we can make about time, for example whether it is continuous or discrete. For these and other logics, the normal practice today is to give a precise mathematical definition of the allowed interpretations, and then use the semantic definition of validity. The result is an exact notion, even if some people are unhappy to call it logical validity.
This is the place to mention a muddle in some recent psychological literature. The question at issue is how human beings carry out logical reasoning. One often reads that there are two possible answers: (1) by rules as in a logical calculus, or (2) by models (which are interpretations stripped down to the relevant essentials) as in the semantic approach. This is a confusion. There is no distinction between rule-based and semantic ways of reasoning. The rule-based and semantic approaches are different explanations of what we achieve when we do perform a proof: on the rule-based view, we correctly follow the rules, whereas on the semantic view we eliminate counter-examples.
Can we mechanically test whether a given argument schema is logically valid, and if so, how? For first-order logic, half of the answer is positive. Given any standard logical calculus, we can use it to list in a mechanical way all possible valid argument schemas; so if an argument schema is valid, we can prove this by waiting until it appears in the list. In fact most logical calculi do much better than this; we can use them to test the schema systematically, and if it is valid they will eventually say ‘Yes’. The bad news is that there is no possible computer program which will tell us when a given first-order argument schema is invalid. This was proved by Church in 1936, adapting Gödel’s incompleteness theorem. (Strictly it also needs Turing’s 1936 analysis of what can be done in principle by a computer.) This does not mean that there are some first-order argument schemas which are undecidable, in the sense that it’s impossible for us to tell whether they are valid or not—that might be true, but it would need further arguments about the nature of human creativity. Church’s theorem does mean that there is no purely mechanical test which will give the right answer in all cases.
A similar argument, again based on Gödel’s incompleteness theorem, shows that for many other logics, including monadic second-order logic, it is not even possible to list mechanically the valid argument schemas. On the other hand there are many less adventurous logics—for example, the logic of Aristotle’s *syllogisms—for which we have a decision procedure, meaning that we can mechanically test any argument schema for validity.
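For the syllogistic case such a decision procedure is easy to sketch. The Python below is an illustrative encoding, not drawn from the entry: since a model of three monadic terms is settled entirely by which of the eight membership-patterns (‘regions’) are inhabited, scanning all 256 possibilities genuinely decides any syllogistic schema.

```python
from itertools import product

# A model for three terms a, b, c is fixed by which of the 2**3 = 8
# "regions" (membership patterns) are inhabited, so scanning all 2**8
# region-sets is a genuine decision procedure for syllogistic schemas.
REGIONS = list(product([0, 1], repeat=3))  # (in_a, in_b, in_c)

def holds(form, x, y, model):
    """Evaluate Axy / Exy / Ixy / Oxy in a model (a list of inhabited regions)."""
    xs = [r for r in model if r[x]]
    if form == 'A':            # All x are y
        return all(r[y] for r in xs)
    if form == 'E':            # No x is y
        return not any(r[y] for r in xs)
    if form == 'I':            # Some x is y
        return any(r[y] for r in xs)
    if form == 'O':            # Some x is not y
        return any(not r[y] for r in xs)

def valid(premisses, conclusion, existential_import=True):
    """Test a syllogistic schema; each proposition is e.g. ('A', 0, 1).
    With existential_import=True every term is assumed non-empty,
    as traditional logic requires."""
    for bits in product([0, 1], repeat=8):
        model = [r for r, bit in zip(REGIONS, bits) if bit]
        if existential_import and not all(any(r[t] for r in model) for t in range(3)):
            continue
        if all(holds(f, x, y, model) for f, x, y in premisses) and \
           not holds(*conclusion, model=model):
            return False       # counter-example found
    return True

# Barbara is valid; its E-conclusion variant is not.
print(valid([('A', 0, 1), ('A', 1, 2)], ('A', 0, 2)))  # True
print(valid([('A', 0, 1), ('A', 1, 2)], ('E', 0, 2)))  # False
```

Note that dropping existential import invalidates the ‘weakened’ moods: with the empty term allowed, `valid([('A', 0, 1), ('A', 1, 2)], ('I', 0, 2), existential_import=False)` finds a counter-model.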
A final question: Is there a particular logical calculus which can be used to justify all valid reasoning (say, in science or mathematics)? For the intuitionist school of Brouwer, it is an article of faith that the answer is ‘No’. On the other side, Frege believed that he had given a logical calculus which was adequate at least for arithmetic; but *Russell’s paradox showed that Frege’s system was inconsistent.
For the moment, the heat has gone out of this question. In modern mathematics we assume that every argument can be translated into the first-order language appropriate for set theory, and that the steps in the argument can all be justified using a first-order logical calculus together with the axioms of Zermelo–Fraenkel *set theory. This has become a criterion of sound mathematical reasoning, though nobody ever carries out the translation in practice (it would be horrendously tedious). Versions of this translation are used to check the correctness of computer software, for example where lives may depend on it.

There is a more radical reading of our question. In many situations we carry out reasoning along quite different lines from the logical calculuses mentioned above. For example, when someone pays us money, we normally take for granted that it is legal tender and not a forgery, and so when it adds up correctly we infer that we have
been given the correct change. Strictly this is not logical reasoning, because even when the premisses are true, the conclusion could be false (and occasionally is). But it is reasoning of a kind, and it does follow some rules. Logicians generally disregarded this kind of reasoning until they found they needed it to guide intelligent databases. For this purpose a number of non-monotonic logics have been proposed; the name refers to the fact that in this kind of reasoning a valid conclusion may cease to be valid when a new premiss is added (for example, that the five pound note has no metal strip).
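The five-pound-note example can be caricatured in a few lines of Python. This is a toy sketch with invented rule names, not any particular non-monotonic logic; it only shows the defining behaviour, that an added premiss can defeat an earlier conclusion.

```python
# A naive default-reasoning sketch: a conclusion drawn from a default
# rule is withdrawn when a new premiss defeats it (non-monotonicity).
def conclusions(facts):
    concluded = set(facts)
    # Default: a five-pound note counts as legal tender
    # unless we learn that it is a forgery.
    if 'received five-pound note' in facts and 'forgery' not in facts:
        concluded.add('legal tender')
    if 'legal tender' in concluded and 'sum adds up' in facts:
        concluded.add('correct change')
    return concluded

print('correct change' in conclusions(
    {'received five-pound note', 'sum adds up'}))             # True
print('correct change' in conclusions(
    {'received five-pound note', 'sum adds up', 'forgery'}))  # False
```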
Several other alternative logics have been suggested, each for its own purposes. Linear logic tries to formalize the idea that there is a cost incurred each time we use a premiss, and perhaps we can only afford to use it once. An older example is intuitionist logic (Heyting, 1930), which incorporates a *verifiability principle: we can’t claim to have proved that there is an A until we can show how to produce an example of an A. Each of these logics must be justified on its own terms. There is no reason to think that the list of useful logics is complete yet. w.a.h.
*logic, traditional; quantification
J. C. Beall and Bas C. van Fraassen, Possibilities and Paradox: An Introduction to Modal and Many-Valued Logic (Oxford, 2003).
H. D. Ebbinghaus, J. Flum, and W. Thomas, Mathematical Logic, 2nd edn. (New York, 1996).
D. Gabbay, Elementary Logics: A Procedural Perspective (London, 1998).
—— and F. Guenthner (eds.), Handbook of Philosophical Logic, 2nd edn. in 18 vols. (Dordrecht, 2001– ).
Wilfrid Hodges, Logic, 2nd edn. (London, 2001).
W. H. Newton-Smith, Logic: An Introductory Course (London, 1985).
W. V. Quine, Philosophy of Logic, 2nd edn. (Cambridge, Mass., 1986).
A. Tarski, Introduction to Logic and to the Methodology of Deductive Sciences, 4th edn. (New York, 1994).
logic, paraconsistent. A logical system is paraconsistent if it does not sanction the principle that anything follows from a contradiction. The rejected inference is called ex falso quodlibet, and is expressed in symbols thus: p, ¬p ⊢ q.
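One standard way to see the failure of ex falso quodlibet is Priest’s three-valued logic LP, in which a sentence may be both true and false. The sketch below is an illustration using the standard LP truth-tables, not drawn from this entry; it searches the finitely many valuations for a counter-model.

```python
from itertools import product

# Priest's LP: truth-values are 't' (true), 'f' (false) and 'b' (both);
# a premiss or conclusion counts as accepted when its value is t or b.
VALUES = ['t', 'b', 'f']
DESIGNATED = {'t', 'b'}
NEG = {'t': 'f', 'b': 'b', 'f': 't'}   # negation fixes 'b'

def ex_falso_valid():
    """Check p, not-p |= q over all LP valuations."""
    for p, q in product(VALUES, repeat=2):
        if p in DESIGNATED and NEG[p] in DESIGNATED and q not in DESIGNATED:
            return False   # counter-model: p = 'b', q = 'f'
    return True

print(ex_falso_valid())  # False: LP is paraconsistent
```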
Paraconsistent logics have application to the logic of belief, and other propositional attitudes, especially if one wants to develop something analogous to *possible worlds semantics. A person who has contradictory beliefs is not thereby committed to every proposition whatsoever. A ‘world’ that is ‘compatible’ with one’s beliefs need not be consistent, but it should not trivially make every proposition true. Other applications of paraconsistent logic concern reasoning with faulty data and *dialetheism, the view that some contradictions are true. Dialetheism is one attempt to deal with paradoxes like the Liar. Most systems of *relevance logic are paraconsistent. s.s.
Graham Priest, ‘Paraconsistent Logic’, in Dov M. Gabbay and F. Guenthner (eds.), Handbook of Philosophical Logic, vi, 2nd edn. (Dordrecht, 2002).
logic, philosophical: see philosophical logic.
logic, relevance: see relevance logic.
logic, second-order. Consider ‘Socrates is wise’. In a first-order logic the name ‘Socrates’ may be replaced by a bound variable to yield ‘something is wise’. It is a further question whether the predicate in this sentence may also be replaced by a bound variable. A formal logic that permits this replacement is called ‘second-order’. In the standard semantics for second-order logic, first-order variables range over a domain of individuals, whereas second-order variables range over sets, properties, relations, or functions on the range of the first-order variables. So understood, second-order logic is extremely powerful. It is *incomplete (there can be no finite deductive system in which every second-order logical truth is deducible), it permits *categorical theories (a set S of sentences may determine its models up to isomorphism), and it is not compact (even if every finite subset of S has a model, S itself may lack a model). In a non-standard (Henkin) semantics the second-order variables range over a separate domain of individuals. So understood, second-order logic is complete and compact.
*higher-order logic; categoricity; incompleteness
Stewart Shapiro, Foundations without Foundationalism: A Case for Second-Order Logic (Oxford, 1991).
logic, traditional. The rough-and-ready title given by later logicians to the methods and doctrines which once dominated the universities, but which were supplanted in the twentieth century by the ‘modern’ or ‘mathematical’ logic with which the names of Frege and Russell are especially associated. Sometimes called ‘Aristotelian’—or ‘syllogistic’, or the ‘logic of terms’—it originated with Aristotle in the fourth century bc, though it acquired a great many accretions in the intervening 2,000 years. The older logic was limited, it is customary to say, by the uncritical assumption that propositions are of the subject–predicate form. This contention, however, is misleading; not least because the subject–predicate distinction is actually quite at odds with the formal system which is supposed to be based on it.

Most traditional logicians certainly accepted that non-compound propositions invariably contain *subjects and predicates. At its vaguest, the idea was perhaps that to make any judgement at all is to say something about something. It is easy to drift from this to the more specific doctrine that every proposition contains two distinct elements: an element which names or refers to something (a ‘subject-term’), and an element (the ‘predicate-term’) which expresses what is said about it. Thus, in ‘Socrates is bald’, the name ‘Socrates’ refers to a person, and the expression ‘is bald’ says something about this person. The subject of a proposition in this sense—what it is about—is not part of the proposition but something to which part of it refers, not the name ‘Socrates’ but the
person who bears it. If some traditional logicians failed to stress the difference, this may have reflected uncertainty about the status of the predicate. The difference between ‘Socrates’ and Socrates is perfectly clear; not quite so clear is the difference between ‘is bald’ and is bald.
This asymmetry is one aspect of what is really a very considerable difference: subjects and predicates belong to quite distinct categories. Granted that an expression like ‘… is bald’ plays a predicative role, a subject is anything of which this may be said. A subject-term is therefore a word or expression which fulfils two conditions: it constitutes a grammatical answer to a question like ‘You said that something (someone) is bald: of what (whom) did you say this?’ and it must produce good English when it is substituted for x in ‘x is bald’. Proper names, referring expressions like ‘Plato’s teacher’, and a variety of other items, satisfy these conditions; but it is obvious that predicative expressions cannot themselves be subject-terms, because ‘is bald is bald’ makes no sense at all.
The subject–predicate distinction, then, revolves around the difference between naming or referring to something and saying something about it. But no such distinction can sensibly be applied to the traditional system. The crowning glory of that system, it is agreed on all sides, is the doctrine of the syllogism. But this doctrine, as we shall see, requires—as indeed does the rest of the system—that what is the predicate of one proposition can be the subject of another.
Traditional logic was for the most part concerned with the logical properties of four forms of proposition. More often than not these were said to be:
All S is P.
No S is P.
Some S is P.
Some S is not P.
‘All S is P’ was called the ‘universal affirmative’ or ‘A’ form, ‘No S is P’ the ‘universal negative’ or ‘E’ form, ‘Some S is P’ the ‘particular affirmative’ or ‘I’ form, and ‘Some S is not P’ the ‘particular negative’ or ‘O’ form. That a proposition is universal or particular was called its quantity, and that it is affirmative or negative was called its quality.
A moment’s reflection shows that ‘All S is P’ cannot properly belong in the same list as the rest, because ‘No Greek is bald’ is good English, while ‘All Greek is bald’ is merely good gibberish. This drawback, though, could be remedied simply by taking ‘Every S is P’ to be the correct form. A more serious problem concerns the innuendo in the symbolism, which is in any case frankly espoused by those who use it, that S and P stand for subjects and predicates. If ‘is bald’ is a predicative expression, P clearly cannot be a predicate in ‘No S is P’, since ‘No Greek is is bald’ looks like a mere typing error.
The stuttering ‘is’ could be removed in one of at least two ways. One would be to give up the idea that the predicate is ‘is bald’ in favour of saying that it is merely ‘bald’. This is no doubt the ulterior motive behind the half-baked suggestion that propositions contain a third element, over and above the subject and the predicate, namely the copula (i.e. ‘is’). Another way would be to give up the practice of writing, for example, ‘No S is P’ in favour of ‘No S P’.
But the difficulties do not end there. We have seen that a subject-term is anything that takes the place of x in an expression like ‘x is bald’. According to this criterion, ‘Every man’, ‘No man’, and ‘Some man’ are perfectly good subject-terms. But substituting them in the standard forms again produces meaningless repetition: ‘Every every man is bald’, and so on. Again there are two ways of coping: one is to say that not ‘Every S is P’ but the simple ‘S P’ is the correct form, the other that not ‘Every man’ but merely ‘man’ is the subject-term.

These different ways of coping led our symbolism in quite different directions. One leaves us with only two elements (subject and predicate); the other first with three elements (subject, predicate, copula), then with four (subject, predicate, copula, and a sign of quantity). All these distinct, and mutually inconsistent, ways of analysing propositions are at least hinted at in the traditional textbooks.
As we saw at the outset, the subject–predicate distinction arises in the context of singular propositions like ‘Socrates is bald’. In the traditional textbooks, singulars are treated as universals, on the feeble pretext that in ‘Socrates is bald’ the name ‘Socrates’ refers to everything it can. This notion was generally expressed in technical terminology: the name was said to be ‘distributed’ or to ‘refer to its whole extension’. These obscurities presumably reflect a disinclination to say something that is obviously absurd (that one is talking about the whole of Socrates), something that is obviously false (that only one person can be called Socrates), or something that is obviously vacuous (that the name is here meant to name everyone it is here meant to name). Be that as it may, it is worth noticing that the singular propositions which are paradigmatic in the exposition of the subject–predicate distinction become quite peripheral in the exposition of the syllogism. What this indicates is that the subject–predicate distinction is merely a nuisance so far as the formal system of traditional logic is concerned.
How then should the propositions discussed in traditional logic be symbolized? The only analysis which is truly consistent with the traditional system is one in which propositions are treated as containing two distinct sorts of elements, but these are not subjects and predicates; they are logical *constants and *terms. The constants, four in number, are:
‘All … are …’ (A)
‘No … are …’ (E)
‘Some … are …’ (I)
‘Some … are not …’ (O)
These are two-place term-operators, which is to say, expressions which operate on any two terms to generate propositions.
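The four operators can be glossed set-theoretically (an illustrative reading in Python, not something the traditional textbooks offer) as relations between the extensions of their two terms:

```python
# The four term-operators, read as relations between the extensions
# (sets) of their two terms.
def A(a, b): return a <= b          # All a are b
def E(a, b): return not (a & b)     # No a are b
def I(a, b): return bool(a & b)     # Some a are b
def O(a, b): return not (a <= b)    # Some a are not b

# Hypothetical extensions for the entry's example 'Aab'.
dissenters = {'bob'}                # persons who disagree with me
fools = {'alice', 'bob'}            # complete fools
print(A(dissenters, fools))         # True: Aab holds
print(O(fools, dissenters))         # True: some fools are not dissenters
```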
What are terms? Given our operators and the requirement that a term must be capable of filling either place in them, this question answers itself. A term is typically a
plural noun—like ‘baldies’—or any expression—like ‘persons challenged in the hair department’—that does the work of an actual or possible plural noun (‘possible’ because any particular language may or may not have a single word with the same meaning as a complex expression). Small letters from the beginning of the alphabet will be used to stand for terms, i.e. as term-variables, and these will be written after the term-operator. Thus ‘Anyone who disagrees with me is a complete fool’ is of the form Aab, where a = ‘persons who disagree with me’ and b = ‘complete fools’.
The traditional system relied upon two kinds of *negation. The distinction between ‘Not everything which glisters is gold’ (negating a proposition) and ‘Everything which glisters is not gold’ (negating a term) is worth fighting for, despite the common practice of using the second to mean the first. Propositional-negation will be represented by N (meaning ‘It is not that …’); term-negation by n (meaning ‘non-’). Term-negation may preface either or both terms. Thus ‘Everything which doesn’t glister is gold’ is Anab, ‘Everything which glisters isn’t gold’ is Aanb, and ‘Everything which doesn’t glister isn’t gold’ is Ananb.
We need in our symbolism also ways of representing connections between propositions. Aab & Abc will signify the conjunction of these two propositions. Aab → Anbna will signify the (in this case true) assertion that the second proposition follows from the first, and Aab ≡ Aba the (in this case false) assertion that these two propositions are equivalent, i.e. that each follows from the other.
The laws of the traditional system may be classified under two headings: those which apply to only two propositions, and those which apply to three or more. The square of opposition and immediate inference fall under the first heading, syllogisms and polysyllogisms under the second.
The *square of opposition depicts various kinds of ‘opposition’ between the four propositional forms. A and E are contraries, meaning that, if a and b stand for the same terms in Aab and Eab, these two propositions cannot both be true but may both be false; hence Aab → NEab and Eab → NAab. I and O are subcontraries, meaning that they cannot both be false but may both be true; hence NIab → Oab and NOab → Iab. A and O are contradictories, as are E and I, meaning that one of each pair must be true, the other false; hence Aab ≡ NOab and Eab ≡ NIab. I is subaltern to A, as O is to E, meaning that in each instance the second implies the first; hence Aab → Iab and Eab → Oab.
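These laws can be checked by brute force over a small universe. The sketch below is illustrative: it reads the operators as set relations and builds in existential import (every term non-empty), which the traditional square needs for subalternation to hold.

```python
from itertools import chain, combinations

# Brute-force check of the square of opposition over a small universe.
# Traditional logic assumes every term is non-empty (existential
# import); without that assumption, subalternation Aab -> Iab fails.
U = {1, 2, 3}
def N(p): return not p
def A(a, b): return a <= b
def E(a, b): return not (a & b)
def I(a, b): return bool(a & b)
def O(a, b): return not (a <= b)

subsets = [set(s) for s in chain.from_iterable(
    combinations(U, k) for k in range(1, len(U) + 1))]  # non-empty only

for a in subsets:
    for b in subsets:
        assert A(a, b) == N(O(a, b)) and E(a, b) == N(I(a, b))  # contradictories
        assert not (A(a, b) and E(a, b))                        # contraries
        assert I(a, b) or O(a, b)                               # subcontraries
        assert (not A(a, b) or I(a, b)) and (not E(a, b) or O(a, b))  # subalternation
print('square of opposition verified')
```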
*Immediate inference, which consists in drawing a conclusion from a single premiss, encompasses conversion, obversion, contraposition, and inversion. Conversion consists in reversing the order of terms. It is valid for E and I, invalid for A and O; hence Eab → Eba and Iab → Iba. The valid inferences Eab → Oba and Aab → Iba are called conversion per accidens. Obversion consists in negating the second term of a proposition and changing its quality. It is valid for all four forms; hence Eab → Aanb, Aab → Eanb, Oab → Ianb, and Iab → Oanb. Contraposition consists in negating both terms and reversing their order. It is valid for A and O; hence Aab → Anbna and Oab → Onbna. Inversion consists in inferring from a given proposition another having for its subject the negation of the original subject. It is valid in the following cases: Eab → Inab, Eab → Onanb, Aab → Onab, and Aab → Inanb.
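All of these immediate inferences can likewise be verified exhaustively. The sketch below is an illustrative encoding: inversion and conversion per accidens need existential import, secured here by requiring a term and its negation both to be non-empty (proper subsets of the universe).

```python
from itertools import chain, combinations

# Check the immediate inferences over a small universe.  Inversion and
# conversion per accidens need existential import, here secured by
# letting a term and its negation both be non-empty (proper subsets).
U = {1, 2, 3, 4}
def n(a): return U - a                 # term-negation
def A(a, b): return a <= b
def E(a, b): return not (a & b)
def I(a, b): return bool(a & b)
def O(a, b): return not (a <= b)

proper = [set(s) for s in chain.from_iterable(
    combinations(U, k) for k in range(1, len(U)))]

for a in proper:
    for b in proper:
        assert not E(a, b) or E(b, a)            # conversion (E)
        assert not I(a, b) or I(b, a)            # conversion (I)
        assert not A(a, b) or I(b, a)            # conversion per accidens
        assert not E(a, b) or A(a, n(b))         # obversion (E)
        assert not A(a, b) or E(a, n(b))         # obversion (A)
        assert not O(a, b) or I(a, n(b))         # obversion (O)
        assert not I(a, b) or O(a, n(b))         # obversion (I)
        assert not A(a, b) or A(n(b), n(a))      # contraposition (A)
        assert not O(a, b) or O(n(b), n(a))      # contraposition (O)
        assert not E(a, b) or I(n(a), b)         # inversion (E)
        assert not E(a, b) or O(n(a), n(b))      # inversion (E, second form)
        assert not A(a, b) or O(n(a), b)         # inversion (A)
        assert not A(a, b) or I(n(a), n(b))      # inversion (A, second form)
print('immediate inferences verified')
```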
*Syllogisms draw a conclusion from two premisses. They contain three terms: one (the middle term) is common to the premisses, another is common to the conclusion and one of the premisses, and the third is common to the conclusion and the other premiss. We will use b to signify the middle term, a and c to signify what are called the extreme terms. Perhaps the best-known syllogism (it was called Barbara) may be illustrated by the following simple example:
Any workers who voted for that party were voting for their own unemployment.
Those who vote for their own unemployment are fools to themselves.
Any workers who voted for that party are fools to themselves.
Traditionally syllogisms were set out this way, with the conclusion under the premisses like the lines of a sum. In our symbolism, this example is of the form (Aab & Abc) → Aac.
Polysyllogisms have more than two premisses but may be reduced to a series of conventional syllogisms:
Some university teachers profess to believe in academic freedom but do nothing to defend it.
Those who profess such a thing but do nothing about it are not practising what they preach.
Teachers who fail to practise what they preach are a disgrace to their profession.
Some university teachers are a disgrace to their profession.
This has the form (Iab & Abc & Acd) → Iad, but it may be regarded as the summation of two conventional syllogisms, namely (Iab & Abc) → Iac and (Iac & Acd) → Iad.
It is customary to say that there are 256 forms of syllogism. This number results from a convention concerning how syllogisms are depicted: the order of terms in the conclusion is fixed, but that in the premisses is reversible. The conclusion is thus restricted to taking one of four forms: Eac, Aac, Oac, or Iac. Each premiss, however, may take any one of eight forms: one is Eab, Eba, Aab, Aba, Iab, Iba, Oab, or Oba, and the other is Ebc, Ecb, Abc, Acb, Ibc, Icb, Obc, or Ocb. The number 256 is simply 4 × 8 × 8.
Syllogisms were classified in the traditional textbooks according to their mood and figure. The mood of a syllogism is essentially the sequence of term-operators it contains. The mood of Barbara, for example, is AAA (hence the name). The various moods, 64 in all, are therefore EEE, EEA, EEO, EEI, and so on. The figure of a syllogism is determined by the arrangement of terms in its premisses. Aristotle distinguished three figures; later
logicians, whose conception of figure differed significantly from his, decreed that there are four:
(1) ab, bc.
(2) ab, cb.
(3) ba, bc.
(4) ba, cb.
The identity of a syllogism is completely specified by its mood and figure, so the number 256 is the product of 4 (figures) and 64 (moods). Of these 256, 24 are said to be valid (some authors, for reasons that will be indicated in a moment, say 19, or even 15). Omitting brackets, the 24, arranged in their figures, are:
(1)
Aab & Abc→Aac Aab & Abc→Iac Iab & Abc→Iac
Aab & Ebc→Eac Aab & Ebc→Oac Iab & Ebc→Oac
(2)
Aab & Ecb→Eac Aab & Ecb→Oac Eab & Acb→Eac
Eab & Acb→Oac Iab & Ecb→Oac Oab & Acb→Oac
(3)
Aba & Abc→Iac Aba & Ibc→Iac Iba & Abc→Iac
Aba & Ebc→Oac Aba & Obc→Oac Iba & Ebc→Oac
(4)
Aba & Acb→Iac Eba & Acb→Eac Eba & Acb→Oac
Aba & Ecb→Oac Aba & Icb→Iac Iba & Ecb→Oac
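Under the same set-theoretic reading, with existential import assumed, the validity of all 24 forms can be confirmed mechanically. The encoding below is an illustrative sketch; the region-based enumeration is decisive because a model of three terms is settled by which membership-patterns are inhabited.

```python
from itertools import product

# Verify the 24 traditionally valid syllogisms.  A model of three terms
# is fixed by which of the 2**3 membership-patterns ("regions") are
# inhabited, so all 2**8 region-sets give a decisive check.  Terms are
# assumed non-empty (existential import), as the weakened moods require.
REGIONS = list(product([0, 1], repeat=3))   # (in_a, in_b, in_c)
A, B, C = 0, 1, 2

def holds(f, x, y, m):
    xs = [r for r in m if r[x]]
    return {'A': all(r[y] for r in xs),
            'E': not any(r[y] for r in xs),
            'I': any(r[y] for r in xs),
            'O': any(not r[y] for r in xs)}[f]

SYLLOGISMS = [  # ((premiss, premiss), conclusion), figure by figure
    ((('A',A,B), ('A',B,C)), ('A',A,C)), ((('A',A,B), ('A',B,C)), ('I',A,C)),
    ((('I',A,B), ('A',B,C)), ('I',A,C)), ((('A',A,B), ('E',B,C)), ('E',A,C)),
    ((('A',A,B), ('E',B,C)), ('O',A,C)), ((('I',A,B), ('E',B,C)), ('O',A,C)),
    ((('A',A,B), ('E',C,B)), ('E',A,C)), ((('A',A,B), ('E',C,B)), ('O',A,C)),
    ((('E',A,B), ('A',C,B)), ('E',A,C)), ((('E',A,B), ('A',C,B)), ('O',A,C)),
    ((('I',A,B), ('E',C,B)), ('O',A,C)), ((('O',A,B), ('A',C,B)), ('O',A,C)),
    ((('A',B,A), ('A',B,C)), ('I',A,C)), ((('A',B,A), ('I',B,C)), ('I',A,C)),
    ((('I',B,A), ('A',B,C)), ('I',A,C)), ((('A',B,A), ('E',B,C)), ('O',A,C)),
    ((('A',B,A), ('O',B,C)), ('O',A,C)), ((('I',B,A), ('E',B,C)), ('O',A,C)),
    ((('A',B,A), ('A',C,B)), ('I',A,C)), ((('E',B,A), ('A',C,B)), ('E',A,C)),
    ((('E',B,A), ('A',C,B)), ('O',A,C)), ((('A',B,A), ('E',C,B)), ('O',A,C)),
    ((('A',B,A), ('I',C,B)), ('I',A,C)), ((('I',B,A), ('E',C,B)), ('O',A,C)),
]

for premisses, conclusion in SYLLOGISMS:
    for bits in product([0, 1], repeat=8):
        m = [r for r, bit in zip(REGIONS, bits) if bit]
        if not all(any(r[t] for r in m) for t in (A, B, C)):
            continue                     # existential import
        assert not all(holds(*p, m) for p in premisses) or holds(*conclusion, m)
print('all 24 syllogisms valid')
```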
Of these, five are ‘weakened’, meaning that they draw particular conclusions from premisses that merit a universal one. If these are omitted, the number of valid forms is 19. Among these 19, 15 either draw a universal conclusion from universal premisses or a particular conclusion from one universal and one particular premiss: these were sometimes called ‘fundamental’.
But the convention behind the numbers given in the traditional textbooks is wholly improper. The effect of reversing the order of terms in E and I propositions is to produce mere equivalents, while in A and O non-equivalents are produced. The textbook account therefore includes duplication. It excludes from the syllogism, moreover, the varieties of negation that are permitted in immediate inferences, and is as a consequence incomplete.
The traditional system encompassed what were really eight logically distinct propositional forms:
Eab (Eba, etc.).
Enab (Aba, etc.).
Eanb (Aab, etc.).
Enanb (Anab, etc.).
NEab (Iab, etc.).
NEnab (Oba, etc.).
NEanb (Oab, etc.).
NEnanb (Onab, etc.).
Any one of these eight forms is expressible in eight ways. Eab, for example, is equivalent to Eba, Aanb, Abna, NIab, NIba, NOanb, and NObna. A proper account of the syllogism, then, would cover 64 forms of proposition: the correct number of syllogisms is therefore 262,144.
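The arithmetic here can be checked mechanically. The sketch below is an illustrative encoding (existential import is secured by using proper non-empty terms): it generates the 64 expressions, confirms that they fall into eight classes of eight mutual equivalents, and recomputes 64³.

```python
from itertools import chain, combinations, product

# Count the traditional system's propositional forms.  An expression is
# an optional propositional negation N, one of the four operators, and
# two (possibly term-negated) terms in either order: 2 * 4 * 8 = 64
# expressions, which fall into 8 classes of mutually equivalent forms.
U = {1, 2, 3}
def n(t): return U - t
OPS = {'A': lambda a, b: a <= b, 'E': lambda a, b: not (a & b),
       'I': lambda a, b: bool(a & b), 'O': lambda a, b: not (a <= b)}

proper = [set(s) for s in chain.from_iterable(
    combinations(U, k) for k in range(1, len(U)))]

def meaning(neg, op, flip, na, nb):
    """Truth-profile of the expression over all proper term-pairs."""
    profile = []
    for a, b in product(proper, repeat=2):
        x, y = (n(a) if na else a), (n(b) if nb else b)
        if flip:
            x, y = y, x
        v = OPS[op](x, y)
        profile.append(not v if neg else v)
    return tuple(profile)

forms = [(neg, op, flip, na, nb)
         for neg in (0, 1) for op in OPS for flip in (0, 1)
         for na in (0, 1) for nb in (0, 1)]
classes = {}
for f in forms:
    classes.setdefault(meaning(*f), []).append(f)

print(len(forms), len(classes))            # 64 expressions, 8 classes
print({len(c) for c in classes.values()})  # each class has 8 members
print(64 ** 3)                             # 262144 syllogistic forms
```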
P. T. Geach, ‘History of the Corruptions of Logic’, in Logic Matters (Oxford, 1972).
J. N. Keynes, Formal Logic, 4th edn. (London, 1906).
J. Łukasiewicz, Aristotle’s Syllogistic, 2nd edn. (Oxford, 1957).
A. N. Prior, Formal Logic, 2nd edn. (Oxford, 1962), pt. 2, ch. 6.
C. Williamson, ‘Traditional Logic as a Logic of Distribution-Values’, Logique et analyse (1971).
—— ‘How Many Syllogisms Are There?’, History and Philosophy of Logic (1988).
logic of discovery. *Deduction in the testing of scientific theories. For example, the exhibiting of logical relations between the sentences of a theory (such as equivalence, derivability, consistency, inconsistency) or between a theory and established theories; the logical inferring of predictions from a theory. *Popper argues against the view that scientific theories are conclusively inductively verifiable, but argues for their deductive and empirical falsifiability. A claim of the form (∀a)(Fa), ‘Every a is F’, cannot be confirmed by any finite number of observations of a’s that are F, because there could always in principle exist an undiscovered a that is not F, but (∀a)(Fa) can be refuted by the discovery of just one a that is not F.

Popper has an evolutionary epistemology of scientific discovery. The formulation of theories is analogous to genetic mutation in evolutionary theory. Theories and mutations arise randomly as putative solutions to environmental problems, and only those conducive to the survival of the species in that environment themselves survive through trial and error. Popper adopts a Platonist view of logic, on the grounds that proofs are (sometimes surprising) discoveries, not unsurprising inventions. s.p.
Karl R. Popper, The Logic of Scientific Discovery (London, 1980), ch. 1, sect. 3.
logical atomism: see atomism, logical.
logical constants. An argument’s logical form is shown by analysing its constituent propositions into constant and variable parts, constants representing what is common to propositions, variables their differing content. The constants peculiar to syllogistic logic are ‘All … are …’, ‘No … are …’, ‘Some … are …’, and ‘Some … are not …’; those of propositional calculus are truth-functional connectives like implication, conjunction, and disjunction; those of predicate calculus add the quantifiers ‘For all x …’ and ‘There is an x such that …’. Constants concerning identity, tense, modality, etc. may be introduced in more complex logics.
logical determinism: see determinism, logical.
logical empiricism: see empiricism, logical.
logical form: see form, logical.