An Introduction to Formal Logic


Arguments

1.1 Arguments

🧭 Overview

🧠 One-sentence thesis

Logic evaluates arguments by distinguishing good ones from bad ones, where an argument is a structured series of sentences (premises) intended to give someone a reason to believe a conclusion.

📌 Key points (3–5)

  • What a logical argument is: a structured series of sentences with premises supporting a conclusion, not a shouting match.
  • What counts as a sentence in logic: only statements that can be true or false; questions, commands, and exclamations do not count.
  • Two ways arguments fail: premises can be false, or premises can fail to support the conclusion even when true.
  • Common confusion: grammatical sentences vs logical sentences—grammar includes questions and imperatives, but logic only considers statements that have truth values.
  • How to identify argument structure: look for premise indicators (since, because) and conclusion indicators (therefore, hence, thus).

🎯 What logical arguments are

🎯 Definition and purpose

A logical argument: a series of sentences where the sentences at the beginning are premises and the final sentence is the conclusion.

  • Logic focuses on arguments as structured reasoning, not emotional disputes.
  • The purpose: to give someone a reason to believe the conclusion.
  • If the premises are true and the argument is good, you have a reason to accept the conclusion.

📝 Structure of arguments

  • Premises: the supporting sentences at the beginning.
  • Conclusion: the final sentence the argument aims to establish.
  • Indicators: words that signal structure.

| Type | Words | Purpose |
| --- | --- | --- |
| Premise indicators | since, because, given that | Show which sentences are premises |
| Conclusion indicators | therefore, hence, thus, then, so | Show which sentence is the conclusion |

⚠️ Even bad arguments count

  • The definition is very general: any series of sentences with premises and a conclusion counts as an argument.
  • Example: "There is coffee in the pot. There is a dragon playing bassoon on the armoire. ∴ Salvador Dali was a poker player."
    • This is still an argument by definition, just a terrible one.
    • The premises have nothing to do with the conclusion, making it a bad argument.
  • Don't confuse: "argument" in logic means any premise-conclusion structure, not just good or persuasive ones.

📐 What counts as a sentence in logic

📐 Core requirement

A sentence (in logic): something that can be true or false.

  • Logic only considers sentences that can figure as premises or conclusions.
  • The key test: does it have a truth value?
  • This is different from grammatical definitions of "sentence."

✅ What qualifies as logical sentences

  • Statements of fact: "Kierkegaard was a hunchback" or "Kierkegaard liked almonds" (can be true or false).
  • Statements of opinion: "Almonds are yummy" (can be true or false, even if subjective).
  • Answers to questions: "I am not sleepy" (true or false).
  • Declarative statements that look like commands: "You will respect my authority" (either you will or won't—true or false).

❌ What does NOT count as logical sentences

❓ Questions

  • "Are you sleepy yet?" is an interrogative sentence in grammar but not a logical sentence.
  • The question itself is neither true nor false.
  • Don't confuse: questions don't count, but answers do.
  • Example: "What is this course about?" (not a sentence) vs "No one knows what this course is about" (is a sentence).

🗣️ Imperatives (commands)

  • "Wake up!" or "Sit up straight" are imperative sentences in grammar.
  • Commands are neither true nor false—they might be good or bad advice, but they lack truth values.
  • Exception: some commands are phrased as declarative statements (see above).

🎭 Exclamations

  • "Ouch!" is neither true nor false.
  • "Ouch, I hurt my toe!" means the same as "I hurt my toe"—the exclamation adds no truth-evaluable content.

🚫 Two ways arguments can fail

🚫 Failure mode overview

The excerpt introduces two distinct ways arguments can go wrong, using the umbrella argument as an example:

  1. It is raining heavily.
  2. If you do not take an umbrella, you will get soaked.
  ∴ You should take an umbrella.

🔴 False premises

  • If premise (1) is false—if it is sunny outside—the argument gives you no reason to carry an umbrella.
  • Even if the structure is good, false premises undermine the argument.
  • Example: the umbrella argument fails if it's not actually raining.

🔴 Premises fail to support conclusion

  • Even if it is raining (premise 1 is true), you might not need an umbrella.
  • You might wear a rain poncho or keep to covered walkways.
  • In these cases, premise (2) would be false: you could go out without an umbrella and still avoid getting soaked.
  • Don't confuse: true premises are necessary but not always sufficient—the connection between premises and conclusion also matters.

Sentences

1.2 Sentences

🧭 Overview

🧠 One-sentence thesis

In logic, a sentence is defined as something that can be true or false, which excludes questions, imperatives, and exclamations that cannot carry truth values.

📌 Key points (3–5)

  • What counts as a sentence in logic: only statements that can be true or false, not questions, commands, or exclamations.
  • Fact vs. opinion doesn't matter: both factual claims and opinions can be logical sentences if they can be true or false.
  • Common confusion: grammar sentences vs. logical sentences—grammar includes interrogatives, imperatives, and exclamations; logic does not.
  • Questions vs. answers: questions are not sentences in logic, but answers to questions typically are.
  • Why it matters: only logical sentences can serve as premises or conclusions in arguments.

🎯 What qualifies as a logical sentence

🎯 The core definition

A sentence is something that can be true or false.

  • This is the only criterion for being a sentence in logic.
  • The definition is narrower than the grammatical definition of "sentence."
  • A logical sentence must have a truth value—it must be either true or false, not neither.

🔍 Fact vs. opinion is irrelevant

  • The excerpt emphasizes: do not confuse "can be true or false" with "fact vs. opinion."
  • Both of these count as logical sentences:
    • Factual claims: "Kierkegaard was a hunchback" or "Kierkegaard liked almonds."
    • Opinions: "Almonds are yummy."
  • What matters is whether the statement has a truth value, not whether everyone agrees on what that value is.
  • Example: "Almonds are yummy" is a sentence because it is either true or false (even if people disagree about which).

🚫 What does NOT count as a logical sentence

❓ Questions (interrogatives)

  • In grammar class, "Are you sleepy yet?" is an interrogative sentence.
  • But the question itself is neither true nor false—it asks for information rather than stating something.
  • Therefore, questions do not count as sentences in logic.
  • Don't confuse: answers to questions usually are sentences.
    • Question: "What is this course about?" → not a sentence.
    • Answer: "No one knows what this course is about" → is a sentence (can be true or false).
  • Example: "Are you sleepy yet?" is not a sentence, but "I am not sleepy" is a sentence.

📢 Imperatives (commands)

  • Commands like "Wake up!" or "Sit up straight" are imperative sentences in grammar.
  • They are neither true nor false—they tell someone to do something.
  • Therefore, imperatives do not count as logical sentences.
  • Exception: some commands are phrased as statements.
    • "You will respect my authority" is either true or false (you will or you won't), so it is a logical sentence.
    • The form matters: if it can be evaluated as true or false, it counts.

😲 Exclamations

  • Exclamations like "Ouch!" are neither true nor false.
  • The excerpt treats "Ouch, I hurt my toe!" as meaning the same as "I hurt my toe."
  • The "ouch" part adds emotional expression but nothing that could be true or false.
  • Only the part that can be true or false ("I hurt my toe") counts as a logical sentence.

📊 Summary comparison

| Type | Grammar class | Logic class | Reason |
| --- | --- | --- | --- |
| Questions | Interrogative sentence | Not a sentence | Cannot be true or false |
| Commands | Imperative sentence | Usually not a sentence | Cannot be true or false (unless phrased as a statement) |
| Exclamations | Exclamatory sentence | Not a sentence | Cannot be true or false |
| Statements (fact) | Declarative sentence | Sentence | Can be true or false |
| Statements (opinion) | Declarative sentence | Sentence | Can be true or false |

🔗 Why this definition matters

🔗 Connection to arguments

  • The excerpt states: "In logic, we are only interested in sentences that can figure as a premise or conclusion of an argument."
  • Arguments are built from premises and conclusions, and both must be evaluable as true or false.
  • If something cannot be true or false, it cannot serve as a premise or conclusion.
  • Example: you cannot use "Are you sleepy yet?" as a premise, but you can use "I am not sleepy."

Two ways that arguments can go wrong

1.3 Two ways that arguments can go wrong

🧭 Overview

🧠 One-sentence thesis

Arguments can fail either because one or more premises are false or because the premises, even if true, do not guarantee the conclusion—and logic focuses primarily on this second kind of weakness, the logical form.

📌 Key points (3–5)

  • Two distinct weaknesses: (1) false premises, or (2) premises that fail to support the conclusion even when true.
  • Logical form matters: even true premises and a true conclusion do not guarantee validity; the structure of reasoning is what counts.
  • Deductive validity defined: an argument is valid if and only if it is impossible for the premises to be true and the conclusion false at the same time.
  • Common confusion: validity vs. truth—validity is about the form (whether premises guarantee the conclusion), not about whether premises or conclusion are actually true.
  • Inductive arguments exist: some good arguments generalize from cases but are not deductively valid.

🛠️ The two ways arguments fail

🛠️ False premises

  • If one or more premises are false, the argument gives you no reason to believe the conclusion.
  • Example: "It is raining outside, so take an umbrella"—if it is actually sunny, premise (1) is false and the argument collapses.
  • Even if some premises are true, a single false premise can undermine the whole argument.

🛠️ Premises that fail to support the conclusion

  • Even when all premises are true, the conclusion might still be false.
  • This is a weakness in the logical form of the argument: the kind of premises given do not necessarily lead to the kind of conclusion given.
  • Example: the umbrella argument—even if it is raining and you need to go out, you might enjoy getting soaked, so the conclusion "you should take an umbrella" does not follow.
  • Don't confuse: an argument can have all true premises and a true conclusion yet still be weak in form.

🔍 What is deductive validity?

🔍 The core definition

Deductive validity: An argument is valid if and only if it is impossible for the premises to be true and the conclusion false at the same time.

  • The key phrase is "at the same time": if the premises are true, the conclusion must be true.
  • Validity is about the form of the argument, not the actual truth or falsity of the sentences.

🔍 Validity does not require true premises or conclusion

  • A valid argument can have ridiculous or false premises and a ridiculous or false conclusion.
  • Example from the excerpt:
    • Premise: Oranges are either fruits or musical instruments.
    • Premise: Oranges are not fruits.
    • Conclusion: Oranges are musical instruments.
    • This is valid because if both premises were true, the conclusion would necessarily be true—even though the conclusion is absurd.
  • What matters: the structure guarantees that true premises would force a true conclusion.

🔍 True premises and conclusion do not guarantee validity

  • An argument can have all true premises and a true conclusion yet still be invalid.
  • Example from the excerpt:
    • Premise: London is in England.
    • Premise: Beijing is in China.
    • Conclusion: Paris is in France.
    • All three sentences are true, but the premises have nothing to do with the conclusion.
    • It is logically possible for the premises to remain true while the conclusion becomes false (e.g., if Paris declared independence).
    • Therefore, the argument is invalid.

🧩 Logical form vs. actual truth

🧩 What logical form means

  • Logical form is the structure of reasoning: does the type of premises given necessarily lead to the type of conclusion given?
  • Logic is primarily interested in this form, not in whether premises happen to be true in the real world.

🧩 Incompatibility test

  • Validity means the truth of the premises is incompatible with the falsity of the conclusion.
  • To test: imagine the premises are true—can you also imagine the conclusion being false? If yes, the argument is invalid.
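
The incompatibility test can be made mechanical once an argument has been symbolized. Below is a minimal Python sketch (not from the excerpt) applied to the oranges argument; the letters F and M and the helper name valid are illustrative assumptions.

```python
from itertools import product

# Illustrative key: F = "Oranges are fruits", M = "Oranges are musical instruments".
def valid(premises, conclusion, letters):
    """Valid iff no assignment makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(letters)):
        row = dict(zip(letters, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # counter-row found: premises true, conclusion false
    return True

premises = [lambda r: r["F"] or r["M"],  # Oranges are either fruits or musical instruments.
            lambda r: not r["F"]]        # Oranges are not fruits.
conclusion = lambda r: r["M"]            # Oranges are musical instruments.

print(valid(premises, conclusion, ["F", "M"]))  # True: the form is valid, however absurd the content
```

Symbolizing the London/Beijing/Paris argument with three unrelated letters and running the same check would return False, since the row where the premise letters are true and the conclusion letter is false is a counter-row.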

🧩 Don't confuse validity with actual truth

| Concept | What it checks | Example |
| --- | --- | --- |
| Validity | Form: do true premises guarantee a true conclusion? | Oranges argument (valid but absurd) |
| Actual truth | Are premises and conclusion factually true? | London/Beijing/Paris argument (all true but invalid) |

  • The excerpt emphasizes: validity is not about actual truth; it is about the logical relationship between premises and conclusion.

🌀 Inductive arguments

🌀 A different kind of good argument

  • Not all good arguments are deductively valid.
  • Inductive arguments generalize from many specific cases to a broader conclusion.
  • Example from the excerpt:
    • Premise: In January 1997, it rained in San Diego.
    • Premise: In January 1998, it rained in San Diego.
    • Premise: In January 1999, it rained in San Diego.
    • Conclusion: It rains every January in San Diego.
  • This argument can be made stronger by adding more cases, but it is not deductively valid: the premises do not guarantee the conclusion (there could be a January without rain).

🌀 How inductive differs from deductive

  • Inductive: premises make the conclusion more probable, but do not guarantee it.
  • Deductive: premises, if true, make the conclusion necessarily true.
  • Don't confuse: a strong inductive argument is still not deductively valid, because it is logically possible for all premises to be true and the conclusion false.

Deductive Validity

1.4 Deductive validity

🧭 Overview

🧠 One-sentence thesis

Deductive validity means it is impossible for an argument's premises to be true while its conclusion is false, regardless of whether the premises or conclusion are actually true.

📌 Key points (3–5)

  • What validity is: an argument is valid if and only if the truth of the premises is incompatible with the falsity of the conclusion.
  • Validity ≠ actual truth: a valid argument can have false premises and a false conclusion; an invalid argument can have all true sentences.
  • Common confusion: validity is about logical form, not about whether premises or conclusions are actually true in the real world.
  • Inductive arguments are not valid: even strong inductive generalizations (e.g., from many cases to all cases) are not deductively valid because counter-examples remain possible.
  • Why it matters: one important task of logic is to sort valid arguments from invalid ones by examining their form.

🔍 What deductive validity means

🔍 The core definition

An argument is deductively valid if and only if it is impossible for the premises to be true and the conclusion false at the same time.

  • The key phrase is "at the same time"—the premises being true must be incompatible with the conclusion being false.
  • Validity is about logical form, not about what is actually the case in the world.
  • Example: "Oranges are either fruits or musical instruments. Oranges are not fruits. ∴ Oranges are musical instruments." The conclusion is ridiculous, but the argument is valid—if both premises were true, the conclusion would necessarily be true.

🚫 What validity is NOT

  • Not about actual truth: a valid argument does not need true premises or a true conclusion.
  • Not guaranteed by true premises + true conclusion: "London is in England. Beijing is in China. ∴ Paris is in France." All sentences are actually true, but the argument is terrible—the premises have nothing to do with the conclusion. If Paris declared independence, the conclusion would be false while the premises remained true, so the argument is invalid.
  • Don't confuse: validity asks "could the premises be true and conclusion false?" not "are the premises and conclusion true?"

🧩 Valid vs invalid arguments

✅ Valid argument characteristics

  • The truth of the premises is incompatible with the falsity of the conclusion.
  • Even if the content is absurd, the form can be perfect.
  • Example: the oranges argument above—ridiculous content, but valid form.

❌ Invalid argument characteristics

  • It is logically possible for the premises to be true and the conclusion false.
  • Example: "London is in England. Beijing is in China. ∴ Paris is in France." Imagine Paris becomes independent—premises still true, conclusion now false, so invalid.
  • Example: "You are reading this book. This is a logic book. ∴ You are a logic student." Most readers are logic students, but a roommate could pick up the book without becoming a logic student—premises true, conclusion false is possible, so invalid.

🔄 Two ways an argument can be weak

| Weakness type | What it means | Relation to validity |
| --- | --- | --- |
| False premises | One or more premises might be false | An argument gives reason to believe its conclusion only if you believe its premises |
| Poor logical form | Premises fail to support the conclusion | Even if the premises were true, the form might be weak—this is what validity addresses |

  • The excerpt focuses primarily on the second kind of weakness: logical form.
  • Don't confuse: an argument can have true premises and still be weak in form (invalid).

🔁 Inductive arguments are not valid

🔁 What inductive arguments do

  • Generalize from many cases to all cases.
  • Example: "In January 1997, it rained in San Diego. In January 1998, it rained in San Diego. In January 1999, it rained in San Diego. ∴ It rains every January in San Diego."
  • Adding more premises (January 2000, 2001, etc.) makes the argument stronger but still not deductively valid.

🔁 Why inductive arguments are not valid

  • It is possible for the premises to be true and the conclusion false.
  • Example: weather is fickle—a single freakish year with no January rain in San Diego would make the conclusion false, even if all the premises (past years) remain true.
  • No amount of evidence can guarantee the conclusion is necessarily true.
  • Don't confuse: "good inductive argument" does not mean "deductively valid"—even good inductive arguments are invalid.

🔁 Scope of this book

  • The excerpt states: "We will not be interested in inductive arguments in this book."
  • The focus is on deductive validity and sorting valid from invalid arguments.

Other logical notions

1.5 Other logical notions

🧭 Overview

🧠 One-sentence thesis

Beyond deductive validity, logic studies several other fundamental concepts—truth-values, logical truth (tautologies and contradictions), logical equivalence, and consistency—that describe properties of individual sentences and relationships between them.

📌 Key points (3–5)

  • Truth-values: every sentence is either true or false; this property is called its truth-value.
  • Three types of sentences by logical status: contingent (might be true or false), tautology (logically true), and contradiction (logically false).
  • Logical equivalence: two sentences are logically equivalent when they necessarily have the same truth-value.
  • Consistency vs inconsistency: a set of sentences is consistent if they could all be true at the same time; inconsistent if they cannot.
  • Common confusion: a sentence that is always true is not necessarily a tautology—it must be true as a matter of logic, not just as a matter of fact.

🎯 Truth-values and logical status

🎯 What truth-values are

True or false is said to be the truth-value of a sentence.

  • Sentences are defined as things that can be true or false.
  • Equivalently: sentences are things that can have truth-values.
  • This is a basic property—every sentence has exactly one truth-value.

🔍 Three types of sentences

The excerpt distinguishes sentences by whether their truth-value depends on how the world actually is:

| Type | Definition | Example from excerpt | Must check the world? |
| --- | --- | --- | --- |
| Contingent | Neither a tautology nor a contradiction; might be true or false | "It is raining" | Yes—need to look outside |
| Tautology | Logically true; true as a matter of logic | "Either it is raining, or it is not" | No—true regardless of weather |
| Contradiction | Logically false; false as a matter of logic | "It is both raining and not raining" | No—false regardless of weather |
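
A small Python sketch (not from the excerpt), with r standing in for the truth-value of "It is raining," shows how the three statuses come apart: a contingent sentence is true on some rows and false on others, a tautology is true on every row, a contradiction on none.

```python
# Illustrative sketch: classify a sentence by its truth-value on every possible row.
def classify(sentence):
    values = [sentence(r) for r in (True, False)]  # r: truth-value of "It is raining"
    if all(values):
        return "tautology"      # true no matter what
    if not any(values):
        return "contradiction"  # false no matter what
    return "contingent"         # depends on how the world is

print(classify(lambda r: r))            # contingent: "It is raining"
print(classify(lambda r: r or not r))   # tautology: "Either it is raining, or it is not"
print(classify(lambda r: r and not r))  # contradiction: "It is both raining and not raining"
```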

⚠️ Always true vs logically true

A sentence might always be true and still be contingent.

  • Example from excerpt: "At least seven things exist" might always be true (if the universe never contained fewer than seven things).
  • But it is still contingent because there is no contradiction in imagining a world with fewer than seven things.
  • Don't confuse: "always true" (a fact about the actual world) with "must be true as a matter of logic" (true in every possible world).
  • The key question: must the sentence be true just on account of logic?

🔗 Logical equivalence

🔗 What logical equivalence means

When two sentences necessarily have the same truth value, we say that they are logically equivalent.

  • Two sentences are logically equivalent if: whenever one is true, the other is true; whenever one is false, the other is false.
  • Both sentences can be contingent and still be logically equivalent.

📝 Example from the excerpt

The excerpt gives:

  • "John went to the store after he washed the dishes."
  • "John washed the dishes before he went to the store."

Analysis:

  • Both sentences are contingent (John might not have done either activity).
  • Yet they must have the same truth-value—if either is true, both are; if either is false, both are.
  • Therefore, they are logically equivalent.

🧩 Consistency and inconsistency

🧩 What consistency means

If a set of sentences could not all be true at the same time, they are said to be inconsistent. Otherwise, they are consistent.

  • Consistency is a property of sets of sentences (one or more sentences).
  • A set is inconsistent when it is logically impossible for all the sentences to be true together.
  • A set is consistent when there is no such logical impossibility.

🦒 The brother example

The excerpt gives:

  • B1: "My only brother is taller than I am."
  • B2: "My only brother is shorter than I am."

Analysis:

  • Logic cannot tell us which sentence is true.
  • But logic tells us: if B1 is true, then B2 must be false; if B2 is true, then B1 must be false.
  • Both cannot be true at the same time → the set {B1, B2} is inconsistent.

🦒 The giraffe example

The excerpt gives four sentences:

  • G1: "There are at least four giraffes at the wild animal park."
  • G2: "There are exactly seven gorillas at the wild animal park."
  • G3: "There are not more than two martians at the wild animal park."
  • G4: "Every giraffe at the wild animal park is a martian."

Analysis:

  • G1 and G4 together imply at least four martian giraffes.
  • G3 implies no more than two martian giraffes.
  • These conflict → the set {G1, G2, G3, G4} is inconsistent.
  • Notice: G2 has nothing to do with the inconsistency; it just happens to be part of an inconsistent set.

🔍 "Contains a contradiction"

  • People sometimes say an inconsistent set "contains a contradiction."
  • This means: it would be logically impossible for all the sentences to be true at once.
  • Don't confuse: a set can be inconsistent even if each individual sentence is contingent or tautologous (not itself a contradiction).
  • When a single sentence is a contradiction, that sentence alone cannot be true.

🔄 Relationship to deductive validity

🔄 Inductive arguments are not valid

The excerpt begins by contrasting inductive and deductive arguments:

  • Example inductive argument: "It rained in San Diego every January from 1950 to 2000, so it will rain next January."
  • Even with many premises, the argument is not deductively valid—it is possible (though unlikely) that it will not rain.
  • A single counter-example is enough to make the conclusion false.
  • Inductive arguments, even good ones, are not deductively valid.
  • The book will not be interested in inductive arguments.

🔄 Focus on logical concepts

  • In formal logic, we care about what would be true if the premises were true.
  • Generally, we are not concerned with the actual truth value of particular sentences.
  • Yet some sentences (tautologies) must be true, and some (contradictions) must be false, just as a matter of logic.
  • These logical notions—truth-values, logical truth, logical equivalence, consistency—are the tools for analyzing arguments formally.

Formal languages

1.6 Formal languages

🧭 Overview

🧠 One-sentence thesis

Formal languages reveal the logical structure of arguments by replacing natural-language words with symbols, making it easier to judge validity while balancing simplicity against capturing enough structure.

📌 Key points (3–5)

  • Why formalize: translating English arguments into formal languages removes distracting features and makes logical form explicit.
  • The trade-off: simpler formal languages are easier to work with but leave out more structure; no formal language is perfect for every argument.
  • Two languages in this book: SL (sentential logic) treats whole sentences as basic units; QL (quantified logic) breaks sentences into objects, properties, and relations.
  • Bivalent assumption: the book assumes every sentence is either true or false (two truth-values), though other logics allow for more values or both true-and-false.
  • Common confusion: formal validity depends on logical form, not whether premises are actually true—even absurd premises can produce valid arguments.

🔤 What formal languages do

🔤 Replacing words with symbols

  • Natural languages like English contain irrelevant or distracting features that can obscure logical structure.
  • A formal language uses letters and symbols to stand in for parts of sentences.
  • Example: "Socrates is a man. All men are mortal. ∴ Socrates is mortal" becomes "S is M. All Ms are Cs. ∴ S is C."
  • The goal is to make the formal structure of the argument perspicuous (clear and obvious).

✅ Validity depends on form, not content

  • Two arguments can share the same logical form even if one has true premises and the other has absurd premises.
  • Example comparison:
    • Argument 1: Socrates is a man. All men are mortal. ∴ Socrates is mortal.
    • Argument 2: Socrates is a man. All men are carrots. ∴ Socrates is a carrot.
  • Both have the form "S is M. All Ms are Cs. ∴ S is C."
  • Both are valid because every argument of that form is valid, regardless of whether "C" stands for "mortal" or "carrot."
  • Don't confuse: validity is about logical form, not about whether premises are true or interesting.

🏛️ Historical context: Aristotelean logic

🏛️ The first formal logic

  • Aristotle (4th century BC Greece, student of Plato, tutor of Alexander the Great) developed a formal logic that dominated the western world for over two millennia.
  • In Aristotelean logic, categories are replaced with capital letters.
  • Every sentence fits one of four forms (labeled by medieval logicians):
    • (A) All As are Bs.
    • (E) No As are Bs.
    • (I) Some A is B.
    • (O) Some A is not B.

🧩 Syllogisms and their names

  • A syllogism is a three-line argument (two premises and a conclusion).
  • Medieval logicians gave mnemonic names to valid argument forms.
  • Example: the form "All Ss are Ms. All Ms are Cs. ∴ All Ss are Cs" was called Barbara (the vowels A-A-A indicate all three sentences are (A) form).

⚠️ Limitations of Aristotelean logic

  • It makes no distinction between kinds and individuals (e.g., "All Socrateses are men" vs. "Socrates is a man").
  • Despite its historical importance, Aristotelean logic has been superseded by more expressive systems.

🧰 Two formal languages in this book

🧰 SL: Sentential Logic

SL (sentential logic): a formal language in which the smallest units are entire sentences.

  • Simple sentences are represented as letters.
  • Sentences are connected with logical connectives like "and" and "not" to make more complex sentences.

🧰 QL: Quantified Logic

QL (quantified logic): a formal language in which the basic units are objects, properties of objects, and relations between objects.

  • QL can represent every valid argument of Aristotelean logic and more.
  • It is more expressive than SL.

⚖️ The trade-off between simplicity and structure

  • Including every feature of English (subtlety, nuance) would offer no advantage over thinking in English.
  • Leaving out too much structure makes the formal language unable to represent important arguments.
  • There is inevitable tension between:
    • Capturing as much structure as possible.
    • Keeping the formal language simple.
  • No perfect formal language exists; some do a better job than others for particular arguments.

🔢 Bivalence and alternative logics

🔢 The bivalent assumption

Bivalent: a logical language that assumes true and false are the only possible truth-values (two-valued).

  • Aristotelean logic, SL, and QL are all bivalent.
  • This book makes the bivalent assumption throughout.

🌐 Limits of bivalent logic

  • Some philosophers claim the future is not yet determined, so sentences about "what will be the case" are not yet true or false.
  • Some formal languages allow for sentences that are neither true nor false (something in between).
  • Paraconsistent logics allow for sentences that are both true and false.

🧭 Why start with bivalent logic

  • The languages in this book are not the only possible formal languages.
  • Most nonstandard logics extend the basic formal structure of bivalent logics.
  • Bivalent logic is a good starting point.

📚 Summary of logical notions (from the excerpt)

| Term | Definition |
| --- | --- |
| Valid argument | Impossible for premises to be true and conclusion false; otherwise invalid. |
| Tautology | A sentence that must be true, as a matter of logic. |
| Contradiction | A sentence that must be false, as a matter of logic. |
| Contingent sentence | Neither a tautology nor a contradiction. |
| Logically equivalent | Two sentences that necessarily have the same truth value. |
| Consistent set | Logically possible for all members to be true at the same time; otherwise inconsistent. |

Sentence letters

2.1 Sentence letters

🧭 Overview

🧠 One-sentence thesis

Sentence letters in SL preserve the logical structure of arguments by representing atomic sentences as capital letters, allowing us to build complex sentences while maintaining the relationships that make arguments valid.

📌 Key points (3–5)

  • What sentence letters do: capital letters represent entire sentences in the logical language SL, and a symbolization key maps each letter to its English meaning.
  • Why structure matters: simply replacing every sentence with a different letter destroys logical structure; we must preserve how sentences relate to one another (e.g., when one sentence contains another as a part).
  • Atomic sentences: sentences symbolized by a single letter are "atomic"—they are basic building blocks with no internal logical structure visible in SL.
  • Common confusion: the same letter (e.g., A) can mean different things in different contexts, but within one symbolization key, each letter must keep the same meaning; subscripts (A₁, A₂, etc.) let us create unlimited distinct atomic sentences.
  • Negation basics: the symbol ¬ represents "It is not the case that…" and is the first logical connective introduced, allowing us to express denial without losing the connection to the original sentence.

🔤 What sentence letters are

🔤 Capital letters as sentence symbols

In SL, capital letters are used to represent basic sentences.

  • Considered purely as a symbol, the letter A could mean any sentence.
  • A symbolization key is essential: it provides the English sentence that each letter stands for in a given context.
  • Example: If we set B: "Mary is in Barcelona," then B always means that sentence as long as we are discussing Mary and Barcelona.

🔑 The symbolization key

  • The key is context-specific: B can mean one thing in one argument and something completely different in another argument.
  • It is vital to use the same meaning for a letter throughout a single argument or discussion.
  • Subscripts extend the alphabet: A₁, A₂, A₃, … allow us to create as many distinct atomic sentences as needed.

🏗️ Preserving logical structure

🏗️ Why not replace every sentence with a unique letter

The excerpt gives an example argument:

  • Premise 1: There is an apple on the desk.
  • Premise 2: If there is an apple on the desk, then Jenny made it to class.
  • Conclusion: Jenny made it to class.

Bad symbolization (loses structure):

  • A: There is an apple on the desk.
  • B: If there is an apple on the desk, then Jenny made it to class.
  • C: Jenny made it to class.
  • Symbolized as: A, B, ∴ C

Why it fails: There is no necessary connection between arbitrary sentences A, B, and C; the structure that makes the argument valid is completely lost.

🔗 Preserving relationships between sentences

Good symbolization (preserves structure):

  • A: There is an apple on the desk.
  • C: Jenny made it to class.
  • Premise 2 is built from A and C: "If A, then C."
  • Symbolized as: A, "If A, then C", ∴ C

Why it works: The second premise contains the first premise and the conclusion as parts; this preserves the logical relationships that make the argument valid.

  • The excerpt emphasizes: "The important thing about the argument is that the second premise is not merely any sentence, logically divorced from the other sentences in the argument. The second premise contains the first premise and the conclusion as parts."

⚛️ Atomic sentences

⚛️ What makes a sentence atomic

Atomic sentences: sentences that can be symbolized with a single sentence letter; they are the basic building blocks out of which more complex sentences can be built.

  • "Atomic" means indivisible in SL: whatever internal logical structure a sentence might have in English is lost when it is translated as an atomic sentence.
  • From the point of view of SL, an atomic sentence is just a letter—it can be used to build more complex sentences, but it cannot be taken apart.
  • Example: "Adam Ant is taking an airplane from Anchorage to Albany" is atomic in SL; any internal structure (proper nouns, locations, actions) is invisible to the logical language.

🔢 Using subscripts for unlimited atomic sentences

  • There are only 26 letters, but no logical limit to the number of atomic sentences.
  • Subscripts allow us to reuse letters: A₁, A₂, A₃, … A₂₉₄, etc.
  • Important: Each subscripted letter is a different sentence letter; A₁ and A₂ are as distinct as A and B.
  • Don't confuse: subscripts are not "versions" of the same sentence; they are entirely separate atomic sentences that happen to share a base letter.

¬ Negation (the first connective)

¬ What negation means

Negation (¬): represents "It is not the case that…"

  • Negation is the first of five logical connectives introduced in the excerpt.
  • A sentence can be symbolized as ¬A if it can be paraphrased in English as "It is not the case that A."
  • Example: If B: "Mary is in Barcelona," then "Mary is not in Barcelona" is symbolized as ¬B.

🔄 Logical equivalence and double negation

  • "Mary is somewhere besides Barcelona" is logically equivalent to "Mary is not in Barcelona," so both are symbolized as ¬B.
  • Double negation: "The widget is not irreplaceable" becomes ¬¬R (where R: "The widget is replaceable").
  • The excerpt notes that ¬¬R will be defined as logically equivalent to R in SL.

⚠️ When not to use negation

Common confusion: Not every English sentence with a negative word should be symbolized with ¬.

Example from the excerpt:

  • Sentence 7: "Elliott is happy" → H
  • Sentence 8: "Elliott is unhappy" → not ¬H

Why: "Elliott is unhappy" does not mean the same as "It is not the case that Elliott is happy." He could be neither happy nor unhappy (indifferent). To allow for this possibility, sentence 8 needs a new sentence letter, not negation.

Rule of thumb: Use ¬ only when the sentence can be paraphrased as "It is not the case that [original sentence]."

📊 Truth behavior of negation

The excerpt introduces a characteristic truth table for negation:

| A | ¬A |
| --- | --- |
| T | F |
| F | T |

  • If A is true, then ¬A is false.
  • If ¬A is true, then A is false.
  • (The excerpt notes that truth tables will be discussed at greater length in the next chapter.)

Connectives in Sentential Logic

2.2 Connectives

🧭 Overview

🧠 One-sentence thesis

Logical connectives are truth-functional operators that build complex sentences from atomic components in sentential logic, and each connective has precise rules that determine when compound sentences are true or false.

📌 Key points (3–5)

  • Five connectives: SL uses negation (¬), conjunction (&), disjunction (∨), conditional (→), and biconditional (↔) to combine atomic sentences.
  • Truth-functional nature: The truth-value of any compound sentence depends only on the truth-values of its atomic parts, not on meaning or causation.
  • Common confusion—'or' in English: English 'or' can be exclusive (soup or salad, not both) or inclusive (at least one); SL's ∨ is always inclusive.
  • Common confusion—'if...then' vs 'only if': 'If A then B' translates as A → B, but 'A only if B' also translates as A → B (not B → A).
  • Material conditional quirk: A → B is automatically true whenever A is false, regardless of B's truth-value—this differs from everyday causal reasoning.

🔤 Negation and basic structure

🔤 What negation does

Negation (¬): 'It is not the case that...'

  • Negation reverses truth-value: if A is true, ¬A is false; if A is false, ¬A is true.
  • Key test: a sentence can be symbolized as ¬A if it can be paraphrased as "It is not the case that A."
  • Example: "Mary is not in Barcelona" and "Mary is somewhere besides Barcelona" both translate as ¬B (where B = Mary is in Barcelona).

🔄 Double negation

  • ¬¬R means "It is not the case that...it is not the case that R."
  • Double negation is logically equivalent to the original sentence: ¬¬R ≡ R.
  • Example: "The widget is not irreplaceable" = ¬¬R = R (the widget is replaceable).

⚠️ When not to use negation

  • Don't confuse logical negation with opposite meanings.
  • "Elliott is unhappy" ≠ ¬H (where H = Elliott is happy), because someone can be neither happy nor unhappy (indifferent).
  • Use negation only when the sentence truly means "it is not the case that..."

🔗 Conjunction and disjunction

🔗 Conjunction (&)

Conjunction (&): 'Both...and...'

  • A & B is true if and only if both A and B are true; false in all other cases.
  • Each part (conjunct) must be a complete sentence.
  • Conjunction is symmetrical: A & B is logically equivalent to B & A.

| A | B | A & B |
| --- | --- | --- |
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | F |

🔀 Handling 'but' and 'although'

  • "Barbara is athletic, but Adam is more athletic" translates as conjunction: B & R.
  • "Although Barbara is energetic, she is not athletic" = E & ¬B.
  • Contrastive words like 'but' and 'although' signal conjunction; the contrast is rhetorical, not logical.

🔀 Disjunction (∨)

Disjunction (∨): 'Either...or...'

  • A ∨ B is true if at least one disjunct is true; false only when both are false.
  • SL's ∨ is inclusive: A ∨ B allows both A and B to be true.
  • Disjunction is symmetrical: A ∨ B ≡ B ∨ A.

| A | B | A ∨ B |
| --- | --- | --- |
| T | T | T |
| T | F | T |
| F | T | T |
| F | F | F |

🍲 Exclusive vs inclusive 'or'

  • Exclusive or: "soup or salad" (not both) in everyday contexts.
  • Inclusive or: "I'll play with Denison or Ellery" (possibly both).
  • SL's ∨ is inclusive, but you can symbolize exclusive or using multiple connectives: (S₁ ∨ S₂) & ¬(S₁ & S₂).
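
A short Python sketch (not from the excerpt) compares the two readings row by row; s1 and s2 stand for S₁ and S₂, and the exclusive construction differs from inclusive ∨ only when both disjuncts are true.

```python
from itertools import product

# Illustrative sketch: inclusive 'or' vs the exclusive-or construction.
for s1, s2 in product([True, False], repeat=2):
    inclusive = s1 or s2                        # S1 ∨ S2
    exclusive = (s1 or s2) and not (s1 and s2)  # (S1 ∨ S2) & ¬(S1 & S2)
    print(s1, s2, inclusive, exclusive)
# The two columns agree except on the row where S1 and S2 are both true.
```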

🔄 'Unless' translates as disjunction

  • "Unless you wear a jacket, you will catch cold" = J ∨ D (or equivalently ¬J → D).
  • General rule: "Unless A, B" = A ∨ B.

➡️ Conditional and biconditional

➡️ Conditional (→)

Conditional (→): 'If...then...'

  • A → B: if the antecedent (A) is true, then the consequent (B) is true.
  • The conditional is asymmetrical: A → B is not equivalent to B → A.
  • A → B is false only when A is true and B is false.

| A | B | A → B |
| --- | --- | --- |
| T | T | T |
| T | F | F |
| F | T | T |
| F | F | T |

🔑 'Only if' vs 'if'

  • "The bomb will explode only if you cut the red wire" = B → R (not R → B).
  • "A only if B" means "if A then B" (A → B).
  • Don't confuse: "If R then B" (R → B) vs "B only if R" (B → R) say different things about the same scenario.

🤖 Material conditional

  • The → is a material conditional: it does not capture causation or counterfactuals.
  • When the antecedent is false, the conditional is automatically true (regardless of the consequent).
  • This differs from English "if...then," which often implies what would happen if the antecedent were true.

⚡ Not all 'if...then' are conditionals

  • "If anyone wants to see me, then I will be on the porch" just means P (I will be on the porch), not a conditional.
  • Context determines whether 'if...then' structure is genuinely conditional.

↔️ Biconditional (↔)

Biconditional (↔): '...if and only if...'

  • A ↔ B is true when A and B have the same truth-value (both true or both false).
  • A ↔ B entails both A → B and B → A.
  • Could be written as (A → B) & (B → A), but ↔ is more convenient.

| A | B | A ↔ B |
| --- | --- | --- |
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | T |

📐 Formal structure of SL

📐 Well-formed formulas (wffs)

  • An expression is any string of SL symbols (sentence letters, connectives, parentheses).
  • A wff (well-formed formula) is a meaningful expression built by recursive rules:
    1. Every atomic sentence is a wff.
    2. If A is a wff, then ¬A is a wff.
    3. If A and B are wffs, then (A & B), (A ∨ B), (A → B), and (A ↔ B) are wffs.
  • The main logical operator is the connective you look to first when decomposing a sentence.

🔧 Notational conventions

  • Outer parentheses: can be omitted around the entire sentence (Q & R instead of (Q & R)).
  • Square brackets: can replace parentheses for readability: [(H → I) ∨ (I → H)] & (J ∨ K).
  • Multiple conjunctions/disjunctions: A & B & C is shorthand for (A & (B & C)) or ((A & B) & C)—both are equivalent.
  • Don't omit parentheses when mixing different connectives or using conditionals/biconditionals in series.

🗣️ Object language vs metalanguage

  • Object language: SL itself (the formal language being studied).
  • Metalanguage: English + logical vocabulary used to talk about SL.
  • Metavariables (A, B, etc.) stand for any wff; they are not part of SL.

🎯 Translation strategies

🎯 Key translation patterns

| English pattern | SL translation | Notes |
| --- | --- | --- |
| Not A | ¬A | Paraphrase as "it is not the case that A" |
| Both A and B | A & B | Each part must be a sentence |
| Either A or B | A ∨ B | Inclusive or (allows both) |
| If A then B | A → B | Antecedent → consequent |
| A only if B | A → B | Not B → A |
| A if and only if B | A ↔ B | Both directions |
| Unless A, B | A ∨ B | Or equivalently ¬A → B |
| Neither A nor B | ¬(A ∨ B) | Or ¬A & ¬B |

🎯 Common translation pitfalls

  • Replace pronouns with names to make each conjunct/disjunct a complete sentence.
  • Words like 'both' and 'also' are rhetorical; don't try to symbolize them separately.
  • Sentence letters are atomic—once you translate part of a sentence as B, no further structure remains.
  • Example: "Barbara is athletic and energetic" requires two sentence letters (B & E), not "B and energetic."

Other symbolization

2.3 Other symbolization

🧭 Overview

🧠 One-sentence thesis

The biconditional symbol and 'unless' constructions can be translated into SL using combinations of connectives, and understanding logical equivalence helps avoid common translation errors.

📌 Key points (3–5)

  • Biconditional is optional but useful: A ↔ B can always be written as (A → B) & (B → A), but the symbol makes translation easier.
  • 'Unless' has multiple equivalent translations: "Unless A, B" can be symbolized as ¬A → B, ¬B → A, or A ∨ B—all are logically equivalent.
  • Common confusion: The conditional is not symmetric, so getting the direction wrong (e.g., J → ¬D instead of ¬J → D) produces incorrect translations.
  • Logical equivalence matters: Multiple correct translations exist because different symbolic forms can express the same logical relationship.
  • Formal vs natural language: SL allows precise formal definitions of what counts as a sentence, unlike natural languages like English.

🔗 The biconditional connective

🔗 What the biconditional means

A ↔ B is true if and only if A and B have the same truth value.

  • The biconditional captures "if and only if" statements.
  • Example: "You will study if and only if there is a test" means S ↔ T.
  • It is true when both sides match (both true or both false); false when they differ.

🔄 Why we don't strictly need it

  • Any biconditional A ↔ B can be rewritten as (A → B) & (B → A).
  • The parentheses are necessary: without them, A → B & B → A would be ambiguous.
  • Nevertheless, logical languages usually include the symbol because it simplifies translation of "if and only if" phrases.

📋 Truth table

| A | B | A ↔ B |
| --- | --- | --- |
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | T |

  • The biconditional is true only when both sides have the same truth value.
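
A quick Python check (not from the excerpt) that the biconditional and its two-conditional paraphrase never come apart:

```python
from itertools import product

implies = lambda p, q: (not p) or q  # the material conditional

for a, b in product([True, False], repeat=2):
    iff = (a == b)                                      # A ↔ B: same truth-value
    two_conditionals = implies(a, b) and implies(b, a)  # (A → B) & (B → A)
    print(a, b, iff == two_conditionals)                # True on every row
```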

🔀 Translating 'unless'

🔀 Multiple equivalent forms

Consider:

  • "Unless you wear a jacket, you will catch cold."
  • "You will catch cold unless you wear a jacket."

Let J = "You will wear a jacket" and D = "You will catch a cold."

Both sentences can be paraphrased as "Unless J, D" and translated in three logically equivalent ways:

  1. ¬J → D: "If you do not wear a jacket, then you will catch cold."
  2. ¬D → J: "If you do not catch a cold, then you must have worn a jacket."
  3. J ∨ D: "You will wear a jacket or you will catch a cold."

All three are correct because they are logically equivalent in SL.

⚠️ Common error: wrong direction

  • The conditional is not symmetric.
  • It would be wrong to translate "Unless J, D" as J → ¬D.
  • Don't confuse: the direction of the conditional matters; flipping it changes the meaning.

🔍 Why the disjunction works

  • "Unless A, B" means "A or—if not A—then B."
  • This is logically equivalent to A ∨ B.
  • The 'or' here is not exclusive: the sentences do not exclude the possibility that you might both wear a jacket and catch a cold (jackets don't protect against all causes of colds).

📝 General rule

If a sentence can be paraphrased as "Unless A, B," then it can be symbolized as A ∨ B.
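
A Python sketch (not from the excerpt) confirms that the three translations above agree on every row, while the mistranslation J → ¬D does not:

```python
from itertools import product

implies = lambda a, b: (not a) or b  # the material conditional (→)

for j, d in product([True, False], repeat=2):
    t1 = implies(not j, d)     # ¬J → D
    t2 = implies(not d, j)     # ¬D → J
    t3 = j or d                # J ∨ D
    wrong = implies(j, not d)  # J → ¬D, the mistranslation warned against above
    print(j, d, t1 == t2 == t3, wrong)
# t1, t2 and t3 agree on all four rows; 'wrong' diverges when J and D are both true or both false.
```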

🏗️ Formal language structure

🏗️ Object language vs metalanguage

Object language: the language being talked about (in this chapter, SL). Metalanguage: the language used to talk about the object language (English supplemented with logical and mathematical vocabulary).

  • Example of object language: "(A ∨ B)" is a sentence of SL, using only SL symbols.
  • Example of metalanguage: "This expression is a sentence of SL" is a sentence in English about SL, not a sentence of SL itself.
  • The word "sentence" is not part of SL; it belongs to the metalanguage.

🎯 Precision of formal languages

  • In natural languages like English, we recognize sentences when we encounter them but lack a formal definition.
  • In SL, it is possible to formally define what counts as a sentence.
  • This is one respect in which formal languages are more precise than natural languages.

Sentences of SL

2.4 Sentences of SL

🧭 Overview

🧠 One-sentence thesis

Sentential Logic (SL) achieves formal precision by recursively defining which symbol strings count as well-formed sentences, distinguishing it from natural languages like English that lack such formal definitions.

📌 Key points (3–5)

  • Formal definition advantage: SL can formally define what counts as a sentence, unlike natural languages such as English.
  • Object language vs metalanguage: SL is the object language (what we study); English with logical vocabulary is the metalanguage (what we use to talk about SL).
  • Recursive construction: Well-formed formulas (wffs) are built from atomic sentences using connectives according to strict rules.
  • Common confusion: Not every expression is a wff—only those generated by the recursive rules count as meaningful sentences.
  • Notational conventions: Practical shortcuts (omitting outer parentheses, using brackets, chaining conjunctions/disjunctions) make SL easier to use without changing formal definitions.

🔤 Language structure fundamentals

🔤 Three symbol types in SL

SL uses exactly three kinds of symbols:

| Symbol type | Examples | Notes |
| --- | --- | --- |
| Sentence letters | A, B, C, ..., Z, A₁, B₁, J₃₇₅ | With subscripts as needed |
| Connectives | ¬, &, ∨, →, ↔ | Negation, conjunction, disjunction, conditional, biconditional |
| Parentheses | ( , ) | For grouping |

📝 Expression vs well-formed formula

Expression of SL: any string of symbols of SL, in any order.

  • Most expressions are meaningless "gobbledegook."
  • Only certain expressions follow the rules to be meaningful.

Well-formed formula (wff): a meaningful expression constructed according to the recursive rules.

  • Plural: wffs (pronounced "woofs").
  • Example: A, ¬A, (A & G₁₃) are all wffs; random symbol strings are not.

🏗️ Recursive definition of wffs

🏗️ The seven construction rules

The formal definition builds wffs step-by-step:

  1. Base case: Every atomic sentence (sentence letter) is a wff.
  2. Negation: If A is a wff, then ¬A is a wff.
  3. Conjunction: If A and B are wffs, then (A & B) is a wff.
  4. Disjunction: If A and B are wffs, then (A ∨ B) is a wff.
  5. Conditional: If A and B are wffs, then (A → B) is a wff.
  6. Biconditional: If A and B are wffs, then (A ↔ B) is a wff.
  7. Closure: All and only wffs can be generated by these rules.

🔍 How recursive definitions work

  • Start with base elements (atomic sentences).
  • Define ways to indefinitely compound them.
  • To verify an expression is a wff, decompose it step-by-step back to atomic parts.

Example: Is ¬¬¬D a wff?

  • ¬¬¬D is a wff if ¬¬D is a wff (by rule 2).
  • ¬¬D is a wff if ¬D is a wff (by rule 2).
  • ¬D is a wff if D is a wff (by rule 2).
  • D is an atomic sentence, so it is a wff (by rule 1).
  • Therefore, ¬¬¬D is a wff.
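
The recursive definition can be mirrored directly in code. The sketch below (not from the excerpt) uses nested Python tuples instead of SL's parenthesized strings, which is purely an illustrative representation choice; each branch of the function corresponds to one of the rules above.

```python
# Illustrative representation: atomic sentences are upper-case strings;
# compounds are ('¬', A) or (A, connective, B).
CONNECTIVES = {'&', '∨', '→', '↔'}

def is_wff(expr):
    if isinstance(expr, str):                      # rule 1: atomic sentences are wffs
        return expr.isalpha() and expr.isupper()
    if len(expr) == 2 and expr[0] == '¬':          # rule 2: if A is a wff, ¬A is a wff
        return is_wff(expr[1])
    if len(expr) == 3 and expr[1] in CONNECTIVES:  # rules 3-6: (A & B), (A ∨ B), (A → B), (A ↔ B)
        return is_wff(expr[0]) and is_wff(expr[2])
    return False                                   # rule 7: nothing else is a wff

print(is_wff(('¬', ('¬', ('¬', 'D')))))            # True: ¬¬¬D decomposes down to the atomic D
print(is_wff(('¬', ('E', '∨', ('F', '→', 'G')))))  # True: ¬(E ∨ (F → G))
print(is_wff(('E', '∨', '→')))                     # False: mere gobbledegook
```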

🎯 Main logical operator

Main logical operator: the connective you look to first when decomposing a sentence.

  • For ¬(E ∨ (F → G)), the main operator is ¬.
  • For (¬E ∨ (F → G)), the main operator is ∨.
  • The main operator determines the sentence's overall structure.

🗣️ Object language vs metalanguage

🗣️ Key distinction

Object language: the language being studied (here, SL).

Metalanguage: the language used to talk about the object language (here, English with logical/mathematical vocabulary).

📋 Metavariables

  • In the recursive definition, A and B are not symbols of SL.
  • They are metavariables—placeholders in the metalanguage that stand for any wff.
  • Example: "If A is a wff, then ¬A is a wff" talks about infinitely many SL expressions without listing them.
  • Don't confuse: The sentence letter A (in SL) vs. the metavariable A (in English about SL).

💬 Sentences in each language

  • '(A ∨ B)' is a sentence of SL (uses only SL symbols).
  • 'This expression is a sentence of SL' is a sentence in the metalanguage (uses English words not in SL).

🔧 Notational conventions

🔧 Four practical shortcuts

These are not changes to the formal definition—they are convenient shorthand:

| Convention | What it allows | Example |
| --- | --- | --- |
| 1. Outer parentheses | Omit parentheses around entire sentence | Write Q & R instead of (Q & R) |
| 2. Square brackets | Use [ ] instead of ( ) for readability | [(H → I) ∨ (I → H)] & (J ∨ K) |
| 3. Multiple conjunctions | Chain & without inner parentheses | A & B & C instead of (A & B) & C |
| 4. Multiple disjunctions | Chain ∨ without inner parentheses | A ∨ B ∨ C instead of (A ∨ B) ∨ C |

⚠️ When parentheses are required

  • Mixed connectives: (A & B) ∨ C is different from A & (B ∨ C)—parentheses are essential.
  • Conditionals/biconditionals: (A → B) → C is different from A → (B → C).
  • Conventions 3 and 4 only apply when the same connective repeats.

🎯 Why use conventions

  • Expressively simple: easier to translate from English.
  • Formally simple: keeps the recursive definition straightforward.
  • Conventions are a compromise between these two goals.

📐 Sentences in SL

📐 Sentences = wffs in SL

Sentence: a meaningful expression that can be true or false.

  • In SL, every wff is either true or false.
  • Therefore, the definition of "sentence of SL" is the same as the definition of "wff."
  • Don't confuse: In some formal languages (like QL, introduced later), there are wffs that are not sentences—but not in SL.

🧩 Recursive structure matters for truth

  • The truth-value of ¬¬¬D depends on the truth-value of ¬¬D.
  • Which depends on ¬D, which depends on D.
  • You work through the structure until you reach atomic components.
  • Example: ¬¬¬D is true if and only if the atomic sentence D is false.

Truth-Functional Connectives

3.1 Truth-Functional connectives

🧭 Overview

🧠 One-sentence thesis

All logical operators in SL are truth-functional—meaning the truth-value of any compound sentence depends only on the truth-values of its atomic parts—which makes it possible to use truth tables as a purely mechanical procedure for evaluating sentences and arguments.

📌 Key points (3–5)

  • What truth-functional means: a connective is truth-functional when the truth-value of the compound sentence depends only on the truth-values of the atomic sentences that comprise it.
  • Why SL is special: all connectives in SL (negation, conjunction, disjunction, conditional, biconditional) are truth-functional, so truth tables can be constructed mechanically.
  • Common confusion: not all languages are truth-functional—English expressions like "It is possible that X" and modal logic operators like ◇ are not truth-functional, because their truth does not depend directly on the truth-value of X.
  • Notation shift: truth tables use 1 and 0 instead of T and F to emphasize that these are just input-output values, not deep philosophical truths; computers can process them mechanically.
  • How to build truth tables: list all possible combinations of truth-values for atomic sentences, then apply the characteristic truth table for each connective step by step.

🔧 What makes a connective truth-functional

🔧 Definition and core idea

Truth-functional connective: a connective where the truth-value of the compound sentence depends only on the truth-value of the atomic sentences that comprise it.

  • Any non-atomic sentence in SL is built from atomic sentences plus sentential connectives.
  • To know the truth-value of a compound like (D ↔ E), you only need the truth-values of D and E—nothing else.
  • This property allows purely mechanical evaluation: no intuition or special insight required.

🔍 Example: conjunction

  • To evaluate (H & I) → H, you only need the truth-values of H and I.
  • The truth-value of the whole sentence is determined by applying the rules for & and → to those atomic values.
  • Example: if H is true and I is true, then (H & I) is true (by the rule for &), and then (H & I) → H is true (by the rule for →).

🚫 Limits of truth-functionality

🚫 Not all languages are truth-functional

  • English example: "It is possible that X" does not depend directly on whether X is true or false.
    • Even if X is false, "It is possible that X" might still be true (if X could have been true in some sense).
  • Modal logic: formal languages with operators for possibility (◇) or necessity are called modal logics.
    • The ◇ operator is not truth-functional.
    • Cost: modal logics are not amenable to truth tables.

🔍 Why this matters

  • SL's truth-functionality is what makes truth tables possible.
  • Don't confuse: SL can handle "and," "or," "if…then," etc., but it cannot directly represent modality (possibility, necessity) in a truth-functional way.

🧮 Truth tables in practice

🧮 Notation: 1 and 0 instead of T and F

  • The excerpt uses 1 for true and 0 for false.
  • Why the change?
    • Emphasizes that truth functions are just rules transforming input values into output values.
    • Not about "truth in any deep or cosmic sense."
    • Computers can be programmed to fill out truth tables mechanically: 1 might mean a register is on, 0 means off.
  • Mathematically, 1 and 0 are just the two possible values a sentence of SL can have.

📋 Characteristic truth tables for SL connectives

The excerpt provides the characteristic truth tables for all five connectives:

| A | B | A & B | A ∨ B | A → B | A ↔ B |
|---|---|-------|-------|-------|-------|
| 1 | 1 | 1 | 1 | 1 | 1 |
| 1 | 0 | 0 | 1 | 0 | 0 |
| 0 | 1 | 0 | 1 | 1 | 0 |
| 0 | 0 | 0 | 0 | 1 | 1 |

| A | ¬A |
|---|----|
| 1 | 0 |
| 0 | 1 |

  • Conjunction (A & B): true if and only if both A and B are true.
  • Disjunction (A ∨ B): false only when both A and B are false.
  • Conditional (A → B): false only when A is true and B is false.
  • Biconditional (A ↔ B): true when A and B have the same truth-value.
  • Negation (¬A): flips the truth-value.

🛠️ Step-by-step procedure

  1. List all possible combinations of truth-values for the atomic sentences (e.g., H and I give four rows: 1-1, 1-0, 0-1, 0-0).
  2. Copy the truth-values for each atomic sentence underneath the corresponding letters in the compound sentence.
  3. Evaluate subsentences using the characteristic truth tables:
    • Example: for (H & I) → H, first evaluate H & I on each row.
    • If H is 1 and I is 1, then H & I is 1 (by the conjunction rule).
    • Then evaluate the whole conditional using the result.
  4. Work outward from the smallest subsentences to the entire sentence.

🔍 Example walkthrough: (H & I) → H

  • Row 1: H = 1, I = 1 → H & I = 1 → (1 → 1) = 1.
  • Row 2: H = 1, I = 0 → H & I = 0 → (0 → 1) = 1.
  • Row 3: H = 0, I = 1 → H & I = 0 → (0 → 0) = 1.
  • Row 4: H = 0, I = 0 → H & I = 0 → (0 → 0) = 1.
  • The compound sentence is true in all four rows (a tautology, though the excerpt does not use that term here).
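
The same mechanical procedure can be sketched in a few lines of Python (not part of the excerpt; the helper functions `conj` and `cond` are invented names encoding the characteristic truth tables, with 1 for true and 0 for false):

```python
from itertools import product

# Characteristic truth tables as input-output rules on 1/0 values.
def conj(a, b):  return 1 if a == 1 and b == 1 else 0       # A & B
def cond(a, b):  return 0 if a == 1 and b == 0 else 1       # A → B: 0 only when A is 1 and B is 0

# Evaluate (H & I) → H on every combination of truth values for H and I.
for H, I in product([1, 0], repeat=2):
    print(f"H={H} I={I}  (H & I) → H = {cond(conj(H, I), H)}")
# Every row prints 1: the sentence is true on all four rows.
```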

🎯 Why truth-functionality matters

🎯 Mechanical evaluation

  • Because SL connectives are truth-functional, truth tables are a "purely mechanical procedure."
  • No intuition or insight needed—just follow the rules.
  • This makes SL suitable for computer implementation and rigorous logical analysis.

🎯 Scope and limits

  • Scope: SL can represent many logical structures in natural language (conjunctions, disjunctions, conditionals, etc.).
  • Limits: SL cannot represent non-truth-functional aspects of language (modality, belief, knowledge, etc.) without losing the ability to use truth tables.
  • Don't confuse: the power of truth tables comes at the cost of expressive limitations—SL is not a universal translation tool for all of natural language.
12

Complete Truth Tables

3.2 Complete truth tables

🧭 Overview

🧠 One-sentence thesis

A complete truth table systematically evaluates all possible truth-value combinations for sentence letters to determine whether a sentence is always true, always false, or sometimes true and sometimes false.

📌 Key points (3–5)

  • What a complete truth table does: lists every possible combination of truth-values (1 for true, 0 for false) for all sentence letters in a sentence.
  • How size is determined: a sentence with n different sentence letters requires 2ⁿ rows.
  • Three outcomes: a sentence can be a tautology (true on every row), a contradiction (false on every row), or contingent (true on some rows, false on others).
  • Common confusion: the number of rows depends on the number of different sentence letters, not the total number of times letters appear—repeating the same letter does not add rows.
  • Logical equivalence: two sentences are logically equivalent in SL if they have the same truth-value on every row of a complete truth table.

🔢 Truth values as 1s and 0s

🔢 Why use 1 and 0 instead of T and F

  • The excerpt writes '1' for true and '0' for false to emphasize that truth functions are just rules transforming input values into output values.
  • Computers can fill out truth tables mechanically: in a machine, '1' might mean a register is on, '0' that it is off.
  • Mathematically, 1 and 0 are simply the two possible values a sentence of SL can have.

📋 Characteristic truth tables for connectives

The excerpt provides the truth tables for negation, conjunction, disjunction, conditional, and biconditional:

| A | ¬A |
|---|----|
| 1 | 0 |
| 0 | 1 |

| A | B | A & B | A ∨ B | A → B | A ↔ B |
|---|---|-------|-------|-------|-------|
| 1 | 1 | 1 | 1 | 1 | 1 |
| 1 | 0 | 0 | 1 | 0 | 0 |
| 0 | 1 | 0 | 1 | 1 | 0 |
| 0 | 0 | 0 | 0 | 1 | 1 |

  • These tables give the truth conditions for any sentence of the corresponding form.
  • Example: a conjunction A & B is true if and only if both A and B are true.

🛠️ Building a complete truth table step by step

🛠️ How to construct the table for a compound sentence

The excerpt walks through the sentence (H & I) → H:

  1. List all combinations: with two sentence letters H and I, there are four rows (2² = 4).
  2. Copy truth-values: write the truth-values for H and I underneath the letters in the sentence.
  3. Evaluate subsentences first: find the truth-value of H & I (a conjunction) on each row using the characteristic table for &.
  4. Evaluate the main connective: the whole sentence is a conditional (H & I) → H; use the characteristic table for → to find the truth-value on each row.
  5. Read the final column: the column under the main connective (the conditional symbol) shows the truth-value of the entire sentence on each row.
  • In the example, (H & I) → H is true on all four rows, so it is a tautology.

📝 Practical tips for writing truth tables

  • When writing on paper, it is impractical to rewrite the whole table for every step; instead, write all intermediate columns in one table (it will be more crowded).
  • The truth-value of the sentence on each row is the column underneath the main logical operator.
  • As you become more adept, you may no longer need to copy over the columns for each sentence letter—only the intermediate steps and the main connective matter.

📏 Size and structure of complete truth tables

📏 How many rows are needed

A complete truth table has a row for all the possible combinations of 1 and 0 for all of the sentence letters.

  • The size depends on the number of different sentence letters, not the total number of occurrences.
  • Formula: if a sentence has n different sentence letters, the complete truth table must have 2ⁿ rows.

| Number of different sentence letters | Number of rows |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 8 |
| 4 | 16 |
| 5 | 32 |
| 6 | 64 |

  • Example: the sentence [(C ↔ C) → C] & ¬(C → C) contains only one sentence letter (C), so it requires only two rows, even though C is repeated many times.
  • Don't confuse: a single sentence letter can never be marked both 1 and 0 on the same row.

🔄 How to fill in the rows systematically

The excerpt gives a mechanical procedure to ensure all combinations are covered:

  1. Rightmost sentence letter: alternate 1s and 0s (1, 0, 1, 0, ...).
  2. Next column to the left: write two 1s, then two 0s, and repeat (1, 1, 0, 0, 1, 1, 0, 0, ...).
  3. Third column: write four 1s, then four 0s (1, 1, 1, 1, 0, 0, 0, 0, ...).
  4. Continue: for each additional column to the left, double the number of consecutive 1s and 0s.
  • This yields a systematic eight-line table for three letters, a 16-line table for four letters, and so on.
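
As a rough check on the doubling pattern, the following Python sketch (an illustration, not from the excerpt) generates the rows in exactly the order described above:

```python
from itertools import product

# For n sentence letters, product([1, 0], repeat=n) yields the 2**n rows in the
# order described above: the rightmost column alternates 1, 0; the next column
# alternates in pairs; the leftmost column is half 1s followed by half 0s.
for row in product([1, 0], repeat=3):
    print(row)
# (1, 1, 1), (1, 1, 0), (1, 0, 1), (1, 0, 0), (0, 1, 1), (0, 1, 0), (0, 0, 1), (0, 0, 0)
```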

🏷️ Classifying sentences with truth tables

🏷️ Tautology

A sentence is a tautology in SL if the column under its main connective is 1 on every row of a complete truth table.

  • A tautology must be true as a matter of logic, regardless of what the world is like.
  • Example: (H & I) → H is a tautology because it is true on all four rows.

🏷️ Contradiction

A sentence is a contradiction in SL if the column under its main connective is 0 on every row of a complete truth table.

  • A contradiction is false on every possible combination of truth-values.
  • Example: [(C ↔ C) → C] & ¬(C → C) is a contradiction because it is false on both rows.

🏷️ Contingent sentence

A sentence is contingent in SL if it is neither a tautology nor a contradiction; i.e., if it is 1 on at least one row and 0 on at least one row.

  • A contingent sentence might be true or false, depending on the truth-values of its sentence letters.
  • Example: M & (N ∨ P) is contingent because it is true on some rows and false on others.

🔗 Logical equivalence

🔗 What logical equivalence means

Two sentences are logically equivalent in SL if they have the same truth-value on every row of a complete truth table.

  • This is the SL analogue of logical equivalence in English (having the same truth value as a matter of logic).
  • To check equivalence, construct a complete truth table for both sentences and compare the columns under their main connectives row by row.
  • Example: the excerpt begins to compare ¬(A ∨ B) and ¬A & ¬B to see if they are logically equivalent (the excerpt cuts off before completing the comparison).

🔗 Why it matters

  • Logical equivalence allows us to replace one sentence with another without changing the logical meaning.
  • It is a formal way to verify that two different-looking sentences express the same logical content.
13

Using Truth Tables

3.3 Using truth tables

🧭 Overview

🧠 One-sentence thesis

Truth tables systematically evaluate all possible truth-value combinations to determine whether sentences are tautologies, contradictions, or contingent, whether arguments are valid, and whether sets of sentences are consistent.

📌 Key points (3–5)

  • What truth tables reveal: They show whether a sentence must be true (tautology), must be false (contradiction), or depends on circumstances (contingent).
  • Logical equivalence: Two sentences are logically equivalent in SL if they have the same truth value on every row of a complete truth table.
  • Validity testing: An argument is valid in SL if there is no row where all premises are true and the conclusion is false.
  • Common confusion: Complete vs. partial truth tables—proving something is a tautology/contradiction/valid requires checking all rows; proving something is not requires only one counterexample row.
  • Consistency: A set of sentences is consistent if at least one row makes them all true simultaneously.

🔍 Core logical properties

🔍 Tautologies

A sentence is a tautology in SL if the column under its main connective is 1 on every row of a complete truth table.

  • A tautology must be true as a matter of logic, regardless of what the world is like.
  • Because we consider all possible ways the world might be (all rows), if the sentence is true on every line, it is logically necessary.
  • Example from the excerpt: (H & I) → H is a tautology.

🔍 Contradictions

A sentence is a contradiction in SL if the column under its main connective is 0 on every row of a complete truth table.

  • A contradiction is false on every possible assignment of truth values.
  • Example from the excerpt: [(C ↔ C) → C] & ¬(C → C) is a contradiction.

🔍 Contingent sentences

A sentence is contingent in SL if it is neither a tautology nor a contradiction; i.e., if it is 1 on at least one row and 0 on at least one row.

  • Contingent sentences depend on how the world actually is—they can be true or false depending on circumstances.
  • Example from the excerpt: M & (N ∨ P) is contingent.

🔗 Relationships between sentences

🔗 Logical equivalence

Two sentences are logically equivalent in SL if they have the same truth value on every row of a complete truth table.

  • This mirrors the English notion: two sentences are logically equivalent if they have the same truth value as a matter of logic.
  • The excerpt demonstrates ¬(A ∨ B) and ¬A & ¬B are logically equivalent because their main connective columns match on all four rows (0, 0, 0, 1).
  • How to check: Construct a complete truth table and compare the columns under each sentence's main connective—if they match on every row, the sentences are equivalent.
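
A minimal Python sketch of this row-by-row comparison, assuming the 1/0 encoding used above (the helper functions are invented for illustration):

```python
from itertools import product

def neg(a):      return 1 - a                               # ¬A
def conj(a, b):  return 1 if a == 1 and b == 1 else 0       # A & B
def disj(a, b):  return max(a, b)                           # A ∨ B

# Compare the main-connective columns of ¬(A ∨ B) and ¬A & ¬B on every row.
print(all(
    neg(disj(A, B)) == conj(neg(A), neg(B))
    for A, B in product([1, 0], repeat=2)
))  # True: the columns match on all four rows, so the sentences are equivalent in SL
```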

🔗 Consistency

A set of sentences is logically consistent in SL if there is at least one line of a complete truth table on which all of the sentences are true. It is inconsistent otherwise.

  • Consistency means it is logically possible for all the sentences to be true at once.
  • You only need to find one row where all sentences are true to prove consistency.
  • Inconsistency requires showing that on every row, at least one sentence is false.

✅ Argument validity

✅ What validity means

An argument is valid in SL if there is no row of a complete truth table on which the premises are all 1 and the conclusion is 0; an argument is invalid in SL if there is such a row.

  • Validity means it is logically impossible for the premises to be true and the conclusion false at the same time.
  • The excerpt's example argument (¬L → (J ∨ L), ¬L, therefore J) is valid because the only row where both premises are true (row 2) also has a true conclusion.

✅ How to test validity

  • Construct a complete truth table with columns for all premises and the conclusion.
  • Look for any row where all premises are 1 and the conclusion is 0.
  • If no such row exists, the argument is valid; if such a row exists, the argument is invalid.
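
The validity test can be run mechanically; the following Python sketch (not from the excerpt) searches for a row with all premises 1 and conclusion 0 for the example argument ¬L → (J ∨ L), ¬L, ∴ J:

```python
from itertools import product

def neg(a):      return 1 - a                               # ¬A
def disj(a, b):  return max(a, b)                           # A ∨ B
def cond(a, b):  return 0 if a == 1 and b == 0 else 1       # A → B

# Look for a row where both premises are 1 and the conclusion is 0.
counterexamples = [
    (J, L)
    for J, L in product([1, 0], repeat=2)
    if cond(neg(L), disj(J, L)) == 1 and neg(L) == 1 and J == 0
]
print("valid" if not counterexamples else f"invalid, counterexample: {counterexamples[0]}")
# Prints "valid": the only row on which both premises are 1 also has J = 1.
```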

⚡ Partial truth tables

⚡ When partial tables suffice

The excerpt explains that you don't always need a complete truth table—sometimes one or two carefully chosen rows are enough:

| Task | Complete table needed? | Partial table sufficient? |
|---|---|---|
| Show a sentence is a tautology | Yes (must check all rows) | No |
| Show a sentence is not a tautology | No | Yes (one row where it is 0) |
| Show a sentence is a contradiction | Yes (must check all rows) | No |
| Show a sentence is not a contradiction | No | Yes (one row where it is 1) |
| Show a sentence is contingent | No | Yes (two rows: one true, one false) |
| Show a sentence is not contingent | Yes (must prove tautology or contradiction) | No |
| Show sentences are logically equivalent | Yes (must match on all rows) | No |
| Show sentences are not logically equivalent | No | Yes (one row where they differ) |
| Show a set is consistent | No | Yes (one row where all are true) |
| Show a set is inconsistent | Yes (must check all rows) | No |
| Show an argument is valid | Yes (must check all rows) | No |
| Show an argument is invalid | No | Yes (one row: premises true, conclusion false) |

⚡ Building a partial truth table

The excerpt demonstrates showing (U & T) → (S & W) is not a tautology:

  • Start by making the whole sentence false (0).
  • For a conditional to be false, the antecedent must be true and the consequent false.
  • Fill in U = 1, T = 1 (to make U & T true), and make S & W false by choosing S = 0, W = 0.
  • This single row proves the sentence is not a tautology.
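
A short Python sketch of the counterexample search (illustrative only; it simply stops at the first falsifying row it finds, which need not be the row chosen in the excerpt):

```python
from itertools import product

def conj(a, b):  return 1 if a == 1 and b == 1 else 0       # A & B
def cond(a, b):  return 0 if a == 1 and b == 0 else 1       # A → B

# One falsifying row is enough to show (U & T) → (S & W) is not a tautology.
for U, T, S, W in product([1, 0], repeat=4):
    if cond(conj(U, T), conj(S, W)) == 0:
        print(f"Not a tautology: U={U}, T={T}, S={S}, W={W}")
        break
# Any row with the antecedent 1 and the consequent 0 works; the excerpt's choice
# (U=1, T=1, S=0, W=0) is one of several such rows.
```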

Don't confuse: To show contingency, you need two rows (one true, one false), not just one. The excerpt shows this by adding a second row where the sentence is true.

⚡ Strategy tip

  • If you don't know whether a sentence is contingent, start building a complete truth table.
  • If you complete rows that show the sentence is both true and false, you can stop—you've proven contingency.
  • There is nothing wrong with filling in more rows than strictly necessary; extra rows don't invalidate your proof.
14

Partial truth tables

3.4 Partial truth tables

🧭 Overview

🧠 One-sentence thesis

Partial truth tables can efficiently prove certain logical properties (contingency, invalidity, consistency, non-equivalence) by showing just one or two carefully chosen rows, whereas other properties (tautology, validity, inconsistency, equivalence) require complete truth tables.

📌 Key points (3–5)

  • When partial tables suffice: proving a sentence is contingent, an argument is invalid, sentences are not equivalent, or a set is consistent requires only 1–2 rows.
  • When complete tables are required: proving tautology, contradiction, validity, inconsistency, or logical equivalence demands examining every possible truth-value combination.
  • The strategic difference: to show something has a property (e.g., "is contingent"), you need only one counterexample row; to show it always has a property (e.g., "is always true"), you must check all rows.
  • Common confusion: don't confuse "contingent" with "not a tautology"—showing contingency needs two rows (one true, one false), but disproving tautology needs only one row where the sentence is false.
  • Practical advice: if you're unsure whether a sentence is contingent, start a complete table and stop early if you find rows proving contingency.

🔍 When partial tables are enough

🔍 Proving contingency

  • A sentence is contingent if it can be both true and false under different truth-value assignments.
  • What you need: a two-line partial truth table showing the sentence true on one row and false on another.
  • The excerpt notes: "there are many combinations of truth values that would have made the sentence true, so there are many ways we could have written the second line."
  • Example: if you find one assignment making sentence A true and another making it false, you've proven A is contingent—no need to check other rows.

❌ Proving invalidity

  • An argument is invalid if there exists at least one case where all premises are true but the conclusion is false.
  • What you need: a one-line partial truth table with all premises true and the conclusion false.
  • The excerpt states: "If you can produce a line on which the premises are all true and the conclusion is false, then the argument is invalid."
  • Don't confuse: invalidity is not "the argument is never valid"; it's "the argument fails in at least one case."

✅ Proving consistency

  • A set of sentences is consistent if there is at least one truth-value assignment making all of them true simultaneously.
  • What you need: a one-line partial truth table where every sentence in the set is true.
  • The excerpt emphasizes: "The rest of the table is irrelevant, so a one-line partial truth table will do."
  • Example: if you can assign truth values so that sentences A, B, and C are all true together, the set {A, B, C} is consistent.
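
As an illustration, the following Python sketch checks a small hypothetical set of sentences—{A ∨ B, ¬A, A → B}, not taken from the excerpt—by searching for a single row on which all of them are true:

```python
from itertools import product

def neg(a):      return 1 - a                               # ¬A
def disj(a, b):  return max(a, b)                           # A ∨ B
def cond(a, b):  return 0 if a == 1 and b == 0 else 1       # A → B

# Search for a single row on which A ∨ B, ¬A, and A → B are all 1.
for A, B in product([1, 0], repeat=2):
    if disj(A, B) == 1 and neg(A) == 1 and cond(A, B) == 1:
        print(f"Consistent: A={A}, B={B}")
        break
# Prints "Consistent: A=0, B=1"; the rest of the table is irrelevant.
```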

≠ Proving non-equivalence

  • Two sentences are not logically equivalent if there is at least one case where they have different truth values.
  • What you need: a one-line partial truth table making one sentence true and the other false.
  • The excerpt specifies: "Make the table so that one sentence is true and the other false."

📋 When complete tables are required

📋 Proving tautology or contradiction

  • Tautology: a sentence that is true on every row of its truth table.
  • Contradiction: a sentence that is false on every row.
  • Both require examining all possible truth-value combinations to verify the sentence never deviates from its pattern.
  • The excerpt states: "Showing that a sentence is not contingent requires providing a complete truth table, because it requires showing that the sentence is a tautology or that it is a contradiction."

📋 Proving validity

  • An argument is valid if there is no case where all premises are true and the conclusion is false.
  • What you need: a complete truth table showing that on every row, either at least one premise is false or the conclusion is true.
  • The excerpt: "Showing that an argument is valid requires a complete truth table."
  • Don't confuse: validity is a universal property (must hold in all cases), so you cannot prove it with a single example.

📋 Proving logical equivalence

  • Two sentences are logically equivalent if they have the same truth value on every row.
  • What you need: a complete truth table demonstrating identical truth values across all assignments.
  • The excerpt: "Showing that two sentences are logically equivalent requires providing a complete truth table."

📋 Proving inconsistency

  • A set of sentences is inconsistent if on every possible truth-value assignment, at least one sentence in the set is false.
  • What you need: a complete truth table showing no row makes all sentences true simultaneously.
  • The excerpt: "Showing that a set of sentences is inconsistent, on the other hand, requires a complete truth table: You must show that on every row of the table at least one of the sentences is false."

🗺️ Summary table and strategy

🗺️ Quick reference

The excerpt provides this summary:

| Property | To prove YES | To prove NO |
|---|---|---|
| Tautology? | Complete truth table | One-line partial (show false) |
| Contradiction? | Complete truth table | One-line partial (show true) |
| Contingent? | Two-line partial (one true, one false) | Complete truth table |
| Equivalent? | Complete truth table | One-line partial (different values) |
| Consistent? | One-line partial (all true) | Complete truth table |
| Valid? | Complete truth table | One-line partial (premises true, conclusion false) |

🧭 Practical workflow

  • If you don't know the property in advance: start constructing a complete truth table.
  • Stop early if possible: "If you complete rows that show the sentence is contingent, then you can stop. If not, then complete the truth table."
  • Extra rows are harmless: "Even though two carefully selected rows will show that a contingent sentence is contingent, there is nothing wrong with filling in more rows."
  • The key is recognizing when you've gathered enough information to conclude.

⚠️ Don't confuse proving vs disproving

  • Proving a universal property (tautology, validity, equivalence, inconsistency) requires checking all cases.
  • Disproving a universal property requires only one counterexample.
  • Proving an existential property (contingency, consistency, invalidity, non-equivalence) requires only one or two examples.
  • Disproving an existential property requires checking all cases to show no example exists.
15

From sentences to predicates

4.1 From sentences to predicates

🧭 Overview

🧠 One-sentence thesis

Sentential logic (SL) cannot capture arguments that depend on quantifier structure (like "all," "some," "no one"), so we need a new formal language—quantified logic (QL)—that uses predicates, singular terms, and quantifiers to represent this structure.

📌 Key points (3–5)

  • Why SL fails: SL treats sentences like "No one is confused" and "Everyone is confused" as independent atomic sentences, erasing the logical connection between quantifier expressions.
  • When SL works anyway: Some arguments with quantifiers remain valid in SL even though SL ignores quantifier structure, but others are "completely botched" and appear invalid in SL despite being valid in English.
  • Common confusion: If an argument with quantifiers is invalid in SL, we cannot conclude the English argument is invalid—the validity may depend on quantifier structure that SL cannot represent.
  • What QL adds: QL introduces predicates (properties like "is a dog"), singular terms (names of specific things), and quantifier symbols (∀ for "all," ∃ for "some") to capture the internal structure of sentences.
  • Universe of discourse: Quantifiers range over a specified set (the UD), so "everyone" means "everyone in the UD," not literally all people everywhere.

🚫 The limits of sentential logic

🚫 What SL cannot capture

  • SL treats entire sentences as atomic units (single letters like N or E).
  • When we translate "No one is confused" as N and "Everyone is confused" as E, we lose the fact that these two sentences are logically connected.
  • In English, both cannot be true at the same time; in SL, there is a truth-value assignment where both N and E are true.

Quantifiers: expressions like "no one," "everyone," "anyone."

Quantifier structure: the internal logical structure of sentences involving quantifiers.

✅ When ignoring quantifiers is safe

  • Example argument (valid in both English and SL):
    • If the lecture was confusing, then either no one or everyone was confused.
    • If everyone was confused, then the boardwork was confusing.
    • The lecture was confusing.
    • Therefore, if the boardwork was not confusing, then no one was confused.
  • Translating to SL: L → (N ∨ E), E → B, L, ∴ ¬B → N.
  • This is valid in SL (checkable by truth table).
  • Why it works: The validity does not depend on the quantifier structure; SL's truth-functional structure is enough.

❌ When ignoring quantifiers breaks validity

  • Example argument (valid in English, invalid in SL):
    • Willard is a logician.
    • All logicians wear funny hats.
    • Therefore, Willard wears a funny hat.
  • Translating to SL: let L be "Willard is a logician," A be "All logicians wear funny hats," and F be "Willard wears a funny hat"; the argument becomes L, A, ∴ F.
  • This is invalid in SL (checkable by truth table).
  • Why it fails: The sentence "All logicians wear funny hats" is about both logicians and hat-wearing; SL treats it as a single atomic sentence A, losing the connection between being a logician and wearing a hat.
  • Don't confuse: This is not a mistake in symbolization—these are the best symbolizations possible in SL; the problem is that SL lacks the expressive power.

🧭 Rules of thumb for SL and quantifiers

| Outcome in SL | What we can conclude about the English original |
|---|---|
| Valid in SL | The English argument is valid |
| Invalid in SL | We cannot conclude the English argument is invalid (it may be valid due to quantifier structure) |
| Tautology in SL | The English sentence is logically true |
| Contingent in SL | We cannot conclude the English sentence is contingent (it may be logically true due to quantifier structure) |

🧱 Building blocks of QL

🧱 Predicates as the basic unit

Predicate: an expression like "is a dog"—not a sentence on its own, neither true nor false until we specify what or who it applies to.

  • In QL, predicates are symbolized with capital letters A through Z (with optional subscripts).
  • Example: Let D stand for "is a dog."
  • To make a sentence, combine a predicate with a name: if b stands for Bertie, then Db means "Bertie is a dog."

🏷️ Singular terms (constants)

Singular term: a word or phrase that refers to a specific person, place, or thing.

  • In QL, singular terms are symbolized with lowercase letters a through w (with optional subscripts): a, b, c, …, w, a₁, f₃₂, j₃₉₀, m₁₂.
  • These are called constants because they pick out specific individuals.
  • Note: x, y, z are reserved as variables (placeholders, not names of specific things).

🏷️ Proper names

Proper name: a singular term that picks out an individual without describing it.

  • Example: "Emerson," "Jack Hathaway."
  • The name alone does not tell you anything about the individual (Jack could even be a giraffe).

🏷️ Definite descriptions

Definite description: picks out an individual by means of a unique description, often of the form "the such-and-so."

  • Examples: "the tallest member of Monty Python," "the first emperor of China."
  • Don't confuse: "A member of Monty Python" is not a definite description (it does not pick out a specific individual).
  • The same thing can be named or described: "Mount Rainier" (proper name) and "the highest peak in Washington state" (definite description) refer to the same place.

🏷️ Context and specificity

  • In English, context matters: "Willard" means a specific person, not just anyone named Willard.
  • In QL, singular terms must refer to just one specific thing.

🔧 Predicates: one-place, two-place, and beyond

🔧 One-place (monadic) predicates

  • A predicate with one blank to fill in.
  • Examples: "__ is a dog," "__ is a member of Monty Python," "A piano fell on __."
  • Combine with one singular term to make a sentence.

🔧 Two-place (dyadic) predicates

  • A predicate with two blanks, expressing a relation between two things.
  • Examples: "__ is bigger than __," "__ is to the left of __," "__ owes money to __."
  • Need two terms to make a sentence.

🔧 Three-place (triadic) and n-place predicates

  • Example sentence: "Vinnie borrowed the family car from Nunzio."
  • By removing terms, we can recognize different predicates:
    • One-place: "__ borrowed the family car from Nunzio," "Vinnie borrowed __ from Nunzio," "Vinnie borrowed the family car from __."
    • Two-place: "Vinnie borrowed __ from __," "__ borrowed the family car from __," "__ borrowed __ from Nunzio."
    • Three-place: "__ borrowed __ from __."
  • Polyadic predicates: predicates with more than one place.
  • n-place (n-adic) predicates: predicates with n places.

🔧 Choosing the right predicate

  • It depends on what you need to express.
  • If you only discuss one car, a one-place predicate may suffice.
  • If you need to talk about different borrowers, cars, and lenders, use a three-place predicate.

📝 Symbolization key for QL

  • Predicates are written with variables (by convention).
  • Constants are listed at the end.
  • Example key:
    • Ax: x is angry.
    • Hx: x is happy.
    • T₁xy: x is as tall or taller than y.
    • T₂xy: x is as tough or tougher than y.
    • Bxyz: y is between x and z.
    • d: Donald
    • g: Gregor
    • m: Marybeth

📝 Example translations

| English sentence | Translation | Notes |
|---|---|---|
| Donald is angry. | Ad | Replace the variable x in Ax with the constant d. |
| If Donald is angry, then so are Gregor and Marybeth. | Ad → (Ag & Am) | QL has all the truth-functional connectives from SL. |
| Marybeth is at least as tall and as tough as Gregor. | T₁mg & T₂mg | Two dyadic predicates combined. |
| Donald is shorter than Gregor. | ¬T₁dg | Paraphrase as "It is not the case that Donald is as tall or taller than Gregor" to use existing predicates. |
| Gregor is between Donald and Marybeth. | Bdgm | Pay attention to the order of terms in the key. |

Don't confuse: "Donald is shorter than Gregor" does not require a new predicate Sxy; we can paraphrase using negation and the existing T₁ predicate to preserve the logical connection between "shorter" and "taller."

🔢 Quantifiers in QL

🔢 Universal quantifier (∀)

Universal quantifier (∀): symbolizes "for all" or "every."

  • Written as ∀x (an "x-quantifier"), followed by a formula.
  • Example: "Everyone is happy" → ∀xHx (paraphrased: "For all x, x is happy").
  • The variable x is a placeholder; ∀xHx means the same as ∀yHy, ∀zHz, etc.

🔢 Scope of a quantifier

Scope: the part of the sentence that the quantifier quantifies over (the formula that follows the quantifier).

  • In ∀xHx, the scope of ∀x is Hx.

🔢 Existential quantifier (∃)

Existential quantifier (∃): symbolizes "there exists" or "some" (at least one).

  • Written as ∃x, followed by a formula.
  • Example: "Someone is angry" → ∃xAx (paraphrased: "There is some x which is angry").
  • Means "at least one," not "exactly one."

🔢 Negation and quantifiers

| English sentence | Natural paraphrase | Translation | Equivalent translation |
|---|---|---|---|
| No one is angry. | It is not the case that someone is angry. | ¬∃xAx | ∀x¬Ax |
| No one is angry. | Everyone is not angry. | ∀x¬Ax | ¬∃xAx |
| There is someone who is not happy. | There is some x such that x is not happy. | ∃x¬Hx | ¬∀xHx |
| Not everyone is happy. | It is not the case that everyone is happy. | ¬∀xHx | ∃x¬Hx |

Key equivalence: ∀xA is logically equivalent to ¬∃x¬A.

  • Any sentence with a universal quantifier can be rewritten with an existential quantifier (and vice versa).
  • The critical distinction is whether negation comes before or after the quantifier.
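
A small Python sketch (with an invented three-person UD and happiness facts, not from the excerpt) showing the two forms agreeing on a particular interpretation:

```python
# Hypothetical three-person UD with invented happiness facts.
UD = {"donald": False, "gregor": True, "marybeth": True}

def H(x):                 # Hx: x is happy
    return UD[x]

not_everyone_happy = not all(H(x) for x in UD)    # ¬∀xHx
someone_not_happy  = any(not H(x) for x in UD)    # ∃x¬Hx
print(not_everyone_happy, someone_not_happy)      # True True: the two forms agree here,
                                                  # as the equivalence predicts
```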

🔢 Why two quantifiers?

  • We could have an equivalent language with only one quantifier (e.g., treat ∃x as shorthand for ¬∀x¬).
  • QL opts for expressive simplicity: both ∀ and ∃ are official symbols, making translations more natural.

🌍 Universe of discourse

🌍 What the UD specifies

Universe of discourse (UD): the set of things we are talking about; quantifiers range over the UD.

  • In English, "everyone" is context-dependent (everyone in the room, in the class, etc.).
  • In QL, we eliminate ambiguity by explicitly defining the UD at the beginning of the symbolization key.
  • Example: UD: people in Chicago.

🌍 How quantifiers use the UD

  • Given UD: people in Chicago:
    • ∀x means "Everyone in Chicago."
    • ∃x means "Someone in Chicago."
  • Each constant must name a member of the UD.
  • Example: If d, g, m stand for Donald, Gregor, Marybeth, they must all be people in Chicago (given the UD above).

🌍 Changing the UD

  • If we want to talk about people in places besides Chicago, we must expand the UD to include those people.
  • The UD determines the scope of "all" and "some."
16

Building blocks of QL

4.2 Building blocks of QL

🧭 Overview

🧠 One-sentence thesis

Quantified Logic (QL) uses two quantifiers (universal and existential), a universe of discourse to specify what we're talking about, and constants that must refer to actual members of that universe, making it possible to express general claims while avoiding ambiguity and logical problems.

📌 Key points (3–5)

  • Two quantifiers with equivalent power: QL includes both universal (∀) and existential (∃) quantifiers, though technically one could be defined in terms of the other; we keep both for expressive simplicity.
  • Universe of discourse (UD) eliminates ambiguity: the UD specifies exactly what set of things the quantifiers range over, so "everyone" or "something" has a precise meaning.
  • Constants must refer to real members: every constant in QL must pick out exactly one thing in the UD; this avoids the problem of non-referring terms (like mythological creatures).
  • Common confusion—scope of quantifiers: "everyone" in English is vague (everyone alive? in the room?), but in QL the UD makes the scope explicit.
  • UD must be non-empty: the universe of discourse must contain at least one thing, though allowing single-member UDs can produce strange-sounding results.

🔢 The two quantifiers

🔢 Universal and existential quantifiers

Universal quantifier (∀): expresses "for all" or "everything."
Existential quantifier (∃): expresses "there exists" or "something."

  • The excerpt shows that these are logically interdefinable:
    • "Not everyone is happy" can be written as ¬∀xHx or equivalently as ∃x¬Hx ("there is someone who is not happy").
    • We could treat ∃ as shorthand for ¬∀¬, using only one quantifier formally.
  • Why QL keeps both: the choice is between formal simplicity (one quantifier) and expressive simplicity (two quantifiers that match natural language better).
  • QL opts for expressive simplicity, so both ∀ and ∃ are official symbols.

🔄 Logical equivalence example

| English sentence | Natural translation | Equivalent form |
|---|---|---|
| "Not everyone is happy" | ¬∀xHx | ∃x¬Hx |
| "Someone is not happy" | ∃x¬Hx | ¬∀xHx |

  • Don't confuse: the two forms are logically equivalent, but one may be more natural depending on how the English sentence is phrased.

🌍 Universe of discourse

🌍 What the UD specifies

Universe of discourse (UD): the set of things that we are talking about.

  • In English, "everyone" is ambiguous—it could mean everyone alive, everyone in a room, everyone in history, etc.
  • The UD eliminates this ambiguity by explicitly defining the scope.
  • Example: if UD is "people in Chicago," then ∀x means "everyone in Chicago" and ∃x means "someone in Chicago."

📝 How the UD appears in a symbolization key

  • The UD is written at the beginning of the key:
    UD: people in Chicago
    
  • The quantifiers "range over" the UD—they apply only to members of that set.
  • Constants must name members of the UD: if the UD is people in Chicago, then Donald, Gregor, and Marybeth must all be in Chicago.

⚠️ UD must be non-empty

  • The UD must include at least one thing.
  • Allowing an empty UD would introduce complications, so QL requires at least one member.
  • Strange case—single-member UD: if the UD contains only the Eiffel Tower, then ∀xPx ("everything is in Paris") just means "the Eiffel Tower is in Paris."
    • Don't confuse: ∀x does not mean "everything in the world"; it means "everything in the UD."

🏷️ Constants and non-referring terms

🏷️ Constants must refer

Constant: a singular term that picks out exactly one member of the UD.

  • Each constant must refer to something—it cannot refer to more than one thing, and it cannot refer to nothing.
  • This requirement connects to the problem of non-referring terms.

🐉 The problem of non-referring terms

  • The problem: what do we do with terms like "chimera" (a mythological creature that does not exist)?
  • Consider:
    • Sentence 12: "Chimera is angry."
    • Sentence 13: "Chimera is not angry."
  • If we define a constant c for chimera and translate these as Ac and ¬Ac, we face a dilemma:
    • Option 1—both false: if sentence 12 is false because chimera doesn't exist, then sentence 13 is also false for the same reason. But Ac and ¬Ac cannot both be false (violates truth conditions for negation).
    • Option 2—meaningless: if Ac is meaningless when chimera doesn't exist, then the expression is sometimes meaningful and sometimes not, depending on interpretation. This makes the formal language hostage to particular interpretations, which undermines the study of logical form.

✅ The solution in QL

  • Each constant must refer to something in the UD, but we can choose any UD we like.
  • If we want to talk about mythological creatures or fictional characters, we simply include them in the UD.
  • Example: to translate "Sherlock Holmes lived at 221B Baker Street," define a UD that includes fictional characters.
  • This allows us to study the logic of stories and hypothetical scenarios without running into the non-referring-term problem.

🔤 Translating sentences to QL

🔤 Setting up a symbolization key

  • The excerpt gives an example with coins:
    • Sentence 14: "Every coin in my pocket is a quarter."
    • Sentence 15: "Some coin on the table is a dime."
    • Sentence 16: "Not all the coins on the table are dimes."
    • Sentence 17: "None of the coins in my pocket are dimes."
  • Symbolization key:
    UD: all coins
    Px: x is in my pocket.
    Tx: x is on the table.
    Qx: x is a quarter.
    Dx: x is a dime.
    
  • Since we are talking about coins in general (not specific named coins), we do not need to define any constants.

🔍 Choosing the right quantifier

  • Sentence 14 ("Every coin in my pocket is a quarter") is naturally translated with a universal quantifier.
  • The universal quantifier says something about everything in the UD (in this case, all coins), not just some subset.
  • The excerpt notes that the universal quantifier applies to the entire UD, so the translation must specify the relevant subset (coins in my pocket) using a conditional or other structure.
17

Quantifiers

4.3 Quantifiers

🧭 Overview

🧠 One-sentence thesis

Quantifiers in QL allow us to express statements about all or some members of a specified universe of discourse, and proper translation requires careful attention to scope, domain restrictions, and the requirement that every constant must refer to something real within that universe.

📌 Key points (3–5)

  • Two quantifiers with equivalent power: QL uses both universal (∀) and existential (∃) quantifiers for expressive simplicity, though technically one could be defined in terms of the other.
  • Universe of discourse (UD) defines scope: quantifiers range over a specified UD, not "everything in the world"; the UD must be non-empty and explicitly stated.
  • Constants must refer: every constant in QL must pick out exactly one member of the UD; non-referring terms (like mythological creatures) create problems unless included in the UD.
  • Common confusion: "everyone/everything" in natural language is ambiguous, but in QL it always means "everyone/everything in the UD"—a UD with only one member makes universal claims trivially true or false.
  • Logical equivalence: negating a universal quantifier (¬∀x) is logically equivalent to an existential quantifier with negated predicate (∃x¬), and vice versa.

🔄 The two quantifiers and their relationship

🔄 Universal and existential quantifiers

| Quantifier | Symbol | Natural language | Example |
|---|---|---|---|
| Universal | ∀ | "all," "every," "everyone" | ∀xHx = "Everyone is happy" |
| Existential | ∃ | "some," "there exists" | ∃xHx = "Someone is happy" |

  • Both quantifiers are included in QL as basic symbols for expressive simplicity.
  • The excerpt notes we could technically use only one quantifier and define the other as shorthand.

🔗 Logical equivalence between quantifiers

The existential quantifier can be understood as shorthand: '∃x' is equivalent to '¬∀x¬'.

  • "Someone is not happy" can be written as either:
    • ∃x¬Hx (existential with negated predicate)
    • ¬∀xHx (negated universal)
  • These are logically equivalent translations of the same natural-language statement.
  • Example: "Not everyone is happy" (¬∀xHx) means the same as "There is some x such that x is not happy" (∃x¬Hx).

⚖️ Formal vs expressive simplicity

  • Formal simplicity: use only one quantifier, treat the other as notation (like square brackets are just parentheses).
  • Expressive simplicity: include both quantifiers as basic symbols so translations are more natural.
  • QL chooses expressive simplicity—both ∀ and ∃ are official symbols.

🌍 Universe of Discourse (UD)

🌍 What the UD specifies

Universe of discourse (UD): the set of things that we are talking about.

  • Natural language is ambiguous: "everyone" could mean everyone alive, everyone in a room, everyone in history, etc.
  • In QL, we must specify a UD at the beginning of the symbolization key to eliminate this ambiguity.
  • Example: if UD is "people in Chicago," then ∀x means "everyone in Chicago" and ∃x means "someone in Chicago."

📏 Quantifiers range over the UD

  • The quantifiers range over the universe of discourse—they talk only about members of the UD.
  • Each constant must name some member of the UD.
  • Example: if the symbolization key uses constants for Donald, Gregor, and Marybeth, and the UD is "people in Chicago," then all three must be in Chicago.
  • Don't confuse: ∀xPx does not mean "everything in the world has property P"; it means "everything in the UD has property P."

🚫 UD must be non-empty

  • The UD must include at least one thing; empty UDs are not allowed in QL.
  • Allowing empty UDs would introduce complications (the excerpt does not detail them).

🗼 Strange results with a single-member UD

  • Even a UD with just one member can produce odd-sounding translations.
  • Example from the excerpt:
    • UD: the Eiffel Tower
    • Px: x is in Paris
    • ∀xPx translates as "Everything is in Paris."
    • But this is misleading: it only means "everything in the UD is in Paris," i.e., the Eiffel Tower is in Paris.
  • The universal quantifier makes a claim that sounds grand in English but is trivial when the UD is tiny.

🔗 Constants and the problem of non-referring terms

🔗 Constants must refer to exactly one thing

In QL, each constant must pick out exactly one member of the UD: a constant is a singular term, so it cannot refer to more than one thing, and it cannot refer to nothing.

  • A constant is a singular term: it names one and only one object.
  • Every constant must refer to something that exists in the UD.

🐉 The problem of non-referring terms

  • Medieval philosophers used sentences about the chimera (a mythological creature) to illustrate this problem.
  • Consider:
    • Sentence 12: "Chimera is angry."
    • Sentence 13: "Chimera is not angry."
  • If we define a constant c for "chimera" and translate these as Ac and ¬Ac, we face a dilemma.

❌ Why both sentences cannot be false

  • Option 1: Say sentence 12 is false because chimera does not exist.
    • Then sentence 13 would also be false for the same reason.
    • But Ac and ¬Ac cannot both be false (contradicts the truth conditions for negation).
  • Option 2: Say sentence 12 is meaningless because it talks about something non-existent.
    • Then Ac would be meaningful for some interpretations but not others.
    • This makes the formal language "hostage to particular interpretations."
    • We want to consider logical form apart from any particular interpretation, so this is unacceptable.

🛠️ The solution: include non-existent things in the UD

  • To avoid the problem, each constant must refer to something in the UD, but the UD can be any set we like.
  • If we want to symbolize arguments about mythological creatures or fictional characters, we must define a UD that includes them.
  • Example: to translate "Sherlock Holmes lived at 221B Baker Street," include fictional characters in the UD.
  • Don't confuse: the UD is not restricted to physically real objects; it can include abstract entities, fictional characters, or anything else we want to reason about.

🔤 Translating sentences to QL

🔤 Setting up the symbolization key

  • The excerpt introduces sentences about coins:
    • 14. Every coin in my pocket is a quarter.
    • 15. Some coin on the table is a dime.
    • 16. Not all the coins on the table are dimes.
    • 17. None of the coins in my pocket are dimes.
  • Since we are talking about coins in pockets and on tables, the UD must contain at least those coins.
  • Since we are not talking about anything besides coins, we let the UD be all coins.
  • Since we are not talking about any specific coins, we do not need to define any constants.

🗝️ Example symbolization key

UD: all coins
Px: x is in my pocket.
Tx: x is on the table.
Qx: x is a quarter.
Dx: x is a dime.

🔍 Universal quantifier for "every"

  • Sentence 14 ("Every coin in my pocket is a quarter") is most naturally translated with a universal quantifier.
  • The universal quantifier says something about everything in the UD, not just some subset.
  • (The excerpt cuts off here, but the implication is that sentence 14 would be translated using ∀x with appropriate predicates.)
18

Translating to QL

4.4 Translating to QL

🧭 Overview

🧠 One-sentence thesis

Translating English sentences into QL requires carefully choosing predicates, constants, and quantifiers while paying attention to scope, the universe of discourse, and the logical structure hidden beneath pronouns and ambiguous terms.

📌 Key points (3–5)

  • Universal vs. existential patterns: universal quantifiers typically pair with conditionals (∀x(Px → Qx)), while existential quantifiers pair with conjunctions (∃x(Px & Qx)).
  • Universe of discourse matters: the same English sentence translates differently depending on whether the UD is restricted (e.g., only roses) or broad (e.g., all things).
  • Empty predicates are allowed: a predicate can apply to nothing in the UD, making universal statements trivially true when the subject class is empty.
  • Common confusion—pronouns and "any": pronouns like "he" and words like "anyone" can require either universal or existential quantifiers depending on context; paraphrasing helps reveal the correct structure.
  • Ambiguous predicates: terms like "skilled" or "big" may need separate predicates for different contexts (skilled surgeon vs. skilled tennis player) to avoid invalid translations.

🔤 Basic translation patterns

🔤 Universal quantifier with conditional

Universal statements typically translate as: ∀x(Px → Qx), meaning "for any x, if x is P, then x is Q."

  • Why conditional, not conjunction? If you wrote ∀x(Px & Qx), it would claim that everything in the UD is both P and Q—a much stronger (and usually false) claim.
  • Example: "Every coin in my pocket is a quarter" becomes ∀x(Px → Qx), not ∀x(Px & Qx), because the latter would mean all coins everywhere are in your pocket and are quarters.
  • The conditional ensures the claim applies only to things that satisfy the antecedent.

🔤 Existential quantifier with conjunction

Existential statements typically translate as: ∃x(Px & Qx), meaning "there exists an x such that x is both P and Q."

  • Why conjunction, not conditional? Writing ∃x(Px → Qx) is far too weak: it only requires some x for which "if Px then Qx" holds, and any x where Px is false satisfies that (a conditional with a false antecedent is true), so the formula says almost nothing.
  • Example: "Some coin on the table is a dime" becomes ∃x(Tx & Dx).
  • General rule: avoid putting conditionals inside the scope of existential quantifiers unless you are certain you need one.

🔤 Negations of quantified statements

Two equivalent ways to express negation:

| English | Translation 1 | Translation 2 | Why equivalent |
|---|---|---|---|
| Not all coins on the table are dimes | ¬∀x(Tx → Dx) | ∃x(Tx & ¬Dx) | ¬∀xA ≡ ∃x¬A and ¬(A → B) ≡ A & ¬B |
| None of the coins in my pocket are dimes | ¬∃x(Px & Dx) | ∀x(Px → ¬Dx) | Both express "no x satisfies both P and D" |

  • Both translations are correct; choose whichever feels more natural to the English phrasing.
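
To see the two patterns at work, here is a Python sketch over a small invented UD of coins (the specific coins are hypothetical, not from the excerpt); `all` plays the role of ∀ and `any` the role of ∃:

```python
# Hypothetical UD of three coins, each recorded with the predicates from the key:
# P (in my pocket), T (on the table), Q (quarter), D (dime).
coins = [
    {"P": True,  "T": False, "Q": True,  "D": False},   # a quarter in my pocket
    {"P": False, "T": True,  "Q": False, "D": True},    # a dime on the table
    {"P": False, "T": True,  "Q": True,  "D": False},   # a quarter on the table
]

# ∀x(Px → Qx): every coin in my pocket is a quarter (conditional inside the universal).
print(all((not c["P"]) or c["Q"] for c in coins))        # True

# ∃x(Tx & Dx): some coin on the table is a dime (conjunction inside the existential).
print(any(c["T"] and c["D"] for c in coins))             # True

# ∃x(Tx & ¬Dx), equivalently ¬∀x(Tx → Dx): not all coins on the table are dimes.
print(any(c["T"] and not c["D"] for c in coins))         # True

# ¬∃x(Px & Dx), equivalently ∀x(Px → ¬Dx): none of the coins in my pocket are dimes.
print(not any(c["P"] and c["D"] for c in coins))         # True
```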

🌍 Universe of discourse and empty predicates

🌍 Choosing the UD carefully

The universe of discourse (UD) is the set of all things the quantifiers range over; it must contain at least one member.

  • UD determines meaning: ∀x(Rx → Tx) means "every rose has a thorn" only if the UD includes all roses.
  • If the UD is "things on my kitchen table," the same formula only claims roses on the table have thorns—trivially true if there are no roses there.
  • Two strategies:
    • Restrict the UD to exactly the relevant objects (e.g., UD: all coins).
    • Use a broad UD and add predicates to filter (e.g., UD: all things; Cx: x is a coin).

🌍 Handling "everyone" and "everything"

  • "Everyone" means "every person," not "every member of the UD."
  • If the UD includes people and plants, translate "Everyone is cross with Esmerelda" as ∀x(Px → Cxe), not ∀xCxe (which would include plants).
  • Don't confuse: the universal quantifier always ranges over the entire UD; you must use a conditional to restrict it to a subset.

🌍 Empty predicates

An empty predicate is one that applies to nothing in the UD.

  • Why allow them? To express statements like "I don't know if there are any monkeys, but any monkeys that exist know sign language."
  • Consequence: ∀x(Mx → Sx) can be true even when ∃x(Mx & Sx) is false—if there are no monkeys, the universal is trivially true (no counterexamples exist), but the existential is false (nothing satisfies both M and S).
  • Example: If Rx means "x is a refrigerator" and the UD is animals, then ∀x(Rx → Mx) is trivially true because there are no refrigerators in the UD to serve as counterexamples.
  • Don't confuse: this does not mean refrigerators are monkeys; it means "any member of the UD that is a refrigerator is a monkey," and since there are none, the claim is vacuously true.

🔁 Pronouns and scope

🔁 Translating pronouns

Pronouns like "he," "she," "it," and "that one" must be replaced with explicit references (constants or variables) before translation.

  • Same pronoun, different structures:
    • "If Lemmy can play guitar, then he is a rock star" → paraphrase as "If Lemmy can play guitar, then Lemmy is a rock star" → Gl → Rl.
    • "If a person can play guitar, then he is a rock star" → paraphrase as "For any person x, if x can play guitar, then x is a rock star" → ∀x(Gx → Rx).
  • The first is about a specific individual (constant); the second is a universal claim (variable).

🔁 "Any" and "anyone"

The words "any" and "anyone" can require either universal or existential quantifiers depending on context.

| Sentence | Paraphrase | Translation | Quantifier type |
|---|---|---|---|
| If anyone can play guitar, then Lemmy can | If someone can play guitar, then Lemmy can | ∃xGx → Gl | Existential (in the antecedent) |
| If anyone can play guitar, then he/she is a rock star | For any person, if that person can play guitar, then that person is a rock star | ∀x(Gx → Rx) | Universal |

  • Strategy: if "any" confuses you, paraphrase using "someone," "everyone," or "for any person" to clarify the logical structure.

🔁 Quantifier scope and conditionals

Scope matters, especially with conditionals.

  • ∃xGx → Gl means "if there is some guitarist, then Lemmy is a guitarist."
  • ∃x(Gx → Gl) means "there is some person such that if that person is a guitarist, then Lemmy is a guitarist"—trivially true if any non-guitarist exists (because the conditional is true when the antecedent is false).
  • Oddity with material conditional: changing the scope of a quantifier around a conditional can flip the quantifier type:
    • ∃xGx → Gl ≡ ∀x(Gx → Gl)
    • ∃x(Gx → Gl) ≡ ∀xGx → Gl
  • This oddity does not occur with other connectives (e.g., conjunction) or when the variable is in the consequent.
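
A quick Python check of the first equivalence over a small invented UD (the people and their guitar-playing abilities are hypothetical, not from the excerpt):

```python
# Hypothetical UD of people with invented guitar-playing facts.
UD = {"lemmy": False, "ringo": True, "yoko": False}

def G(x):                 # Gx: x can play guitar
    return UD[x]

l = "lemmy"               # the constant l

lhs = (not any(G(x) for x in UD)) or G(l)          # ∃xGx → Gl (quantifier outside the conditional)
rhs = all((not G(x)) or G(l) for x in UD)          # ∀x(Gx → Gl) (quantifier over the whole conditional)
print(lhs == rhs)  # True here; the excerpt notes the two forms are logically equivalent in general
```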

🎯 Ambiguous predicates and multiple quantifiers

🎯 Avoiding ambiguous predicates

Some English predicates are context-dependent and require separate QL predicates.

  • "Skilled": being a skilled surgeon vs. a skilled tennis player involves different skills.
    • Bad translation: "Carol is a skilled surgeon and a tennis player, therefore Carol is a skilled tennis player" as (Rc & Kc) & Tc ∴ Tc & Kc (valid in QL but invalid in English).
    • Good translation: use K₁x for "skilled as a surgeon" and K₂x for "skilled as a tennis player" → (Rc & K₁c) & Tc ∴ Tc & K₂c (correctly invalid).
  • Other ambiguous terms: "good," "bad," "big," "small" (big dogs vs. big mice are big in different ways).
  • When to split predicates? If the argument or sentences distinguish between different senses of the term, use separate predicates; if not, a single predicate suffices.

🎯 Multiple quantifiers step-by-step

Translate complex sentences with multiple quantifiers by paraphrasing incrementally.

  • Example: "All of Gerald's friends are dog owners."
    • Step 1: ∀x(Fxg → "x is a dog owner")
    • Step 2: recognize "x is a dog owner" means ∃z(Dz & Oxz)
    • Final: ∀x[Fxg → ∃z(Dz & Oxz)]
  • Example: "Every dog owner is the friend of a dog owner."
    • Step 1: ∀x["x is a dog owner" → ∃y("y is a dog owner" & Fxy)]
    • Step 2: replace each "is a dog owner" with ∃z(Dz & Oxz) or ∃z(Dz & Oyz), using whichever variable is already in play for the owner slot.
    • Final: ∀x[∃z(Dz & Oxz) → ∃y(∃z(Dz & Oyz) & Fxy)]
  • Strategy: break the sentence into smaller parts, translate each part, then combine; use different variables for nested quantifiers to avoid confusion.
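
A Python sketch of the first translation evaluated over a small invented domain (the people, dogs, friendships, and ownership facts are all hypothetical, not from the excerpt):

```python
# Hypothetical UD of people and dogs, with invented friendship and ownership facts.
UD = ["gerald", "hannah", "ivan", "rex", "fido"]

dogs    = {"rex", "fido"}                                 # Dx: x is a dog
friends = {("hannah", "gerald"), ("ivan", "gerald")}      # Fxy: x is a friend of y
owns    = {("hannah", "rex"), ("ivan", "fido")}           # Oxy: x owns y
g = "gerald"

def D(x):     return x in dogs
def F(x, y):  return (x, y) in friends
def O(x, y):  return (x, y) in owns

# ∀x[Fxg → ∃z(Dz & Oxz)]: all of Gerald's friends are dog owners.
print(all((not F(x, g)) or any(D(z) and O(x, z) for z in UD) for x in UD))   # True
```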

🎯 Variable choice

  • When you already have an x-quantifier, use a different variable (y, z, etc.) for additional quantifiers.
  • Any variable will do; the choice is arbitrary as long as you avoid conflicts.

🧩 Formal requirements

🧩 Constants, predicates, and the UD

Three key constraints:

| Element | Requirement | Why |
|---|---|---|
| UD | Must have at least one member | Quantifiers need something to range over |
| Predicate | May apply to some, all, or no members of the UD | Allows empty predicates for flexibility |
| Constant | Must pick out exactly one member of the UD | A member may be named by zero, one, or many constants |

  • Non-referring terms problem: if a constant didn't have to refer to something, expressions like "Ac" would sometimes be meaningful and sometimes meaningless depending on interpretation, making logical form hostage to particular interpretations.
  • Solution: require every constant to refer, but allow the UD to include fictional characters, mythological creatures, etc., if needed (e.g., to analyze the logic of stories like "Sherlock Holmes lived at 221B Baker Street").

🧩 Symbols of QL

Six kinds of symbols (formal definition begins but is cut off in the excerpt):

  • Predicates: A, B, C, ..., Z (with subscripts A₁, B₁, Z₁, A₂, A₂₅, J₃₇₅, ...)
  • Constants: a, b, c, ..., w (with subscripts a₁, w₄, h, ...)
  • (The excerpt ends before listing the remaining four symbol types: variables, quantifiers, connectives, and parentheses are implied by earlier sections but not enumerated here.)
19

Sentences of QL

4.5 Sentences of QL

🧭 Overview

🧠 One-sentence thesis

A sentence of QL is a well-formed formula that contains no free variables, meaning every variable is bound by a quantifier that tells us how to interpret it.

📌 Key points (3–5)

  • What makes a formula well-formed: atomic formulas plus recursive rules for connectives and quantifiers, with restrictions to prevent malformed expressions like ∀x∃xDx.
  • Bound vs free variables: a variable is bound when it falls within the scope of its quantifier; otherwise it is free.
  • Why not all wffs are sentences: a wff like Lzz contains a free variable z with no quantifier to tell us whether it means "everyone," "someone," or something else.
  • Common confusion: the variable x in the definition rules is a meta-variable standing for any variable (x, y, z, x₁, etc.), not the specific variable x.
  • Scope determines binding: the scope of a quantifier is the subformula where that quantifier controls how to interpret its variable.

📐 Building blocks of QL

🔤 Six kinds of symbols

QL uses exactly six categories of symbols:

| Symbol type | Examples | Notes |
|---|---|---|
| Predicates | A, B, C, …, Z (with subscripts A₁, B₁, Z₁, A₂, …) | Represent properties or relations |
| Constants | a, b, c, …, w (with subscripts a₁, w₄, h₇, …) | Name specific individuals |
| Variables | x, y, z (with subscripts x₁, y₁, z₁, x₂, …) | Stand in for any member of the UD |
| Connectives | ¬, &, ∨, →, ↔ | Same as in sentential logic |
| Parentheses | ( , ) | Group subformulae |
| Quantifiers | ∀, ∃ | Universal and existential |

🧱 Expressions and terms

Expression of QL: any string of symbols of QL, in any order.

  • Not every expression is meaningful; most are gibberish.
  • Term of QL: either a constant or a variable.
  • Terms are the building blocks that fill predicate slots.

⚛️ Atomic formulae

Atomic formula of QL: an n-place predicate followed by n terms.

  • Example: if D is a one-place predicate and x is a variable, then Dx is atomic.
  • Example: if L is a two-place predicate, then Lxy, Lab, and Lzz are all atomic.
  • Every atomic formula automatically counts as a wff.

🏗️ Recursive definition of well-formed formulae

🏗️ The nine rules

The excerpt provides a recursive definition with nine rules:

  1. Every atomic formula is a wff.
  2. If A is a wff, then ¬A is a wff.
  3. If A and B are wffs, then (A & B) is a wff.
  4. If A and B are wffs, then (A ∨ B) is a wff.
  5. If A and B are wffs, then (A → B) is a wff.
  6. If A and B are wffs, then (A ↔ B) is a wff.
  7. If A is a wff, x is a variable, A contains at least one occurrence of x, and A contains no x-quantifiers, then ∀xA is a wff.
  8. If A is a wff, x is a variable, A contains at least one occurrence of x, and A contains no x-quantifiers, then ∃xA is a wff.
  9. All and only wffs of QL can be generated by applications of these rules.

🚫 Why the quantifier rules have restrictions

The excerpt explains that without restrictions, bizarre expressions like ∀x∃xDx and ∀xDw would count as wffs.

  • ∀xDw is blocked: the variable x does not occur in Dw, so rule 7 does not apply.
  • ∀x∃xDx is blocked: ∃xDx already contains an x-quantifier, so rule 7 does not apply.
  • The restrictions ensure that quantifiers only bind variables that actually appear and are not already bound.

🔁 The 'x' in the definition is a meta-variable

  • The 'x' that appears in rules 7 and 8 is not the specific variable x of QL.
  • It is a meta-variable standing in for any variable.
  • Example: ∀xAx, ∀yAy, ∀zAz, ∀x₄Ax₄, and ∀z₉Az₉ are all wffs by the same rule.

🎯 Scope, bound variables, and free variables

🎯 Scope of a quantifier

Scope: the subformula for which the quantifier is the main logical operator.

  • The scope tells us which part of the formula the quantifier controls.
  • Example: in ∀x(Ex ∨ Dy), the scope of ∀x is (Ex ∨ Dy).

🔗 Bound vs free variables

Bound variable: an occurrence of a variable x that is within the scope of an x-quantifier.

Free variable: an occurrence of a variable that is not bound.

  • A variable can have multiple occurrences; some may be bound and others free.
  • Example: ∀x(Ex ∨ Dy) → ∃z(Ex → Lzx)
    • The first x is bound by ∀x (within scope (Ex ∨ Dy)).
    • The second and third x are free (outside the scope of ∀x).
    • The y is free (no y-quantifier exists).
    • Both z occurrences are bound by ∃z (within scope (Ex → Lzx)).

🤔 Why free variables are a problem

The excerpt uses Lzz (with UD: people, Lxy: x loves y) to illustrate:

  • Lzz is an atomic formula, hence a wff.
  • But what does it mean? z is a variable with no quantifier.
  • Does it mean "everyone loves themselves"? "Someone loves themselves"? "Anyone loves themselves"?
  • Without a quantifier, we cannot interpret z.
  • Don't confuse: some formal languages treat free variables as implicitly universally quantified; QL does not.
  • If you mean "everyone loves themselves," you must write ∀zLzz.

✅ Sentences of QL

✅ Definition of a sentence

Sentence of QL: a wff of QL that contains no free variables.

  • Only sentences can be true or false.
  • In SL, every wff was a sentence; in QL, this is not the case.
  • A wff with free variables is not a sentence because we cannot determine its truth value without knowing how to interpret the free variables.

🔍 Examples

  • Lzz: a wff but not a sentence (z is free).
  • ∃zLzz: a wff and a sentence (z is bound by ∃z; means "someone loves themselves").
  • ∀zLzz: a wff and a sentence (z is bound by ∀z; means "everyone loves themselves").
  • ∀x(Ex ∨ Dy): a wff but not a sentence (y is free).
  • ∀x∀y(Ex ∨ Dy): a wff and a sentence (both x and y are bound).
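
To make the bound/free distinction concrete, here is a minimal Python sketch (an illustration only, not part of QL or the text): formulas are encoded as nested tuples, free_variables walks the structure and removes a variable whenever a quantifier binds it, and is_sentence just checks that nothing is left free. The tuple encoding and the helper names are assumptions made for this example.

```python
# A minimal sketch (not from the text): formulas as nested tuples, e.g.
#   ("atom", "L", ("z", "z"))                    for Lzz
#   ("exists", "z", ("atom", "L", ("z", "z")))   for ∃zLzz
VARIABLES = {"x", "y", "z"}  # toy set; real QL also allows subscripted variables

def free_variables(formula):
    """Return the set of variables occurring free in a formula."""
    kind = formula[0]
    if kind == "atom":                      # predicate applied to terms
        _, _pred, terms = formula
        return {t for t in terms if t in VARIABLES}
    if kind == "not":
        return free_variables(formula[1])
    if kind in ("and", "or", "if", "iff"):  # binary connectives
        return free_variables(formula[1]) | free_variables(formula[2])
    if kind in ("all", "exists"):           # a quantifier binds its variable
        _, var, body = formula
        return free_variables(body) - {var}
    raise ValueError(f"unknown formula: {formula}")

def is_sentence(formula):
    """A sentence is a wff with no free variables."""
    return not free_variables(formula)

Lzz = ("atom", "L", ("z", "z"))
print(free_variables(Lzz), is_sentence(Lzz))   # {'z'} False
print(is_sentence(("exists", "z", Lzz)))       # True
```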

📝 Notational conventions

📝 Four simplifications

The excerpt adopts the same conventions as for SL:

  1. Outermost parentheses may be omitted: write A & B instead of (A & B).
  2. Square brackets for readability: use [ ] in place of some parentheses to make nested formulae easier to read.
  3. Long conjunctions: omit parentheses between each pair of conjuncts (e.g., A & B & C instead of ((A & B) & C)).
  4. Long disjunctions: omit parentheses between each pair of disjuncts (e.g., A ∨ B ∨ C instead of ((A ∨ B) ∨ C)).
  • These are notational shortcuts; the underlying logical structure remains the same.
  • Example: ∀x[Ex ∨ Dy] → ∃z[Ex → Lzx] is easier to read than ∀x(Ex ∨ Dy) → ∃z(Ex → Lzx).
20

Identity in Quantified Logic

4.6 Identity

🧭 Overview

🧠 One-sentence thesis

The identity predicate '=' allows us to express uniqueness, numerical quantity, and definite descriptions in quantified logic, solving problems that arise when translating sentences involving "else," "only," "exactly," and non-referring terms.

📌 Key points (3–5)

  • Why identity is needed: Without '=', we cannot distinguish "everyone" from "everyone else" or express that two things are distinct.
  • What identity means: x = y means x and y are the very same thing (not merely indistinguishable), and x ≠ y abbreviates ¬(x = y).
  • Expressing quantity: Identity lets us translate "at least n," "at most n," and "exactly n" by ensuring variables pick out distinct objects.
  • Definite descriptions: Russell's theory analyzes "the X" as three claims—existence, uniqueness, and predication—requiring identity for the uniqueness component.
  • Common confusion: Negating definite descriptions is ambiguous between wide-scope (denying the whole claim) and narrow-scope (denying only the predicate).

🔧 Why we need identity

🔧 The "everyone else" problem

  • Without identity, translating "Pavel owes money to everyone else" as ∀xOpx incorrectly implies Pavel owes money to himself.
  • The word "else" excludes the subject from the quantified set.
  • Solution: Use identity to exclude Pavel: ∀x(x ≠ p → Opx) means "for all x, if x is not Pavel, then Pavel owes money to x."

🔧 "Besides" and "only"

  • "No one besides Pavel owes money to Hikaru" = ¬∃x(x ≠ p & Oxh).
  • "Only Pavel owes Hikaru money" = Oph & ¬∃x(x ≠ p & Oxh) (Pavel does, and no one else does).
  • Both require identity to express "not Pavel."

🔢 Expressing numerical quantity

🔢 "At least n"

| Sentence | Translation | Why identity is needed |
| --- | --- | --- |
| At least one apple | ∃xAx | No identity needed |
| At least two apples | ∃x∃y(Ax & Ay & x ≠ y) | Without x ≠ y, both variables could pick the same apple |
| At least three apples | ∃x∃y∃z(Ax & Ay & Az & x ≠ y & y ≠ z & x ≠ z) | All three must be distinct |
  • Example: ∃x∃y(Ax & Ay) alone would be true even with only one apple, because x and y could refer to the same object.

🔢 "At most n"

  • "At most one apple" can be translated two equivalent ways:
    • As negation: ¬∃x∃y(Ax & Ay & x ≠ y) (not the case that there are at least two).
    • With universal quantifiers: ∀x∀y[(Ax & Ay) → x = y] (any apples must be the same apple).
  • "At most two apples": ∀x∀y∀z[(Ax & Ay & Az) → (x = y ∨ x = z ∨ y = z)] (any three apples must include duplicates).

🔢 "Exactly n"

  • "Exactly one apple" = at least one AND at most one.
  • More direct paraphrase: ∃x[Ax & ¬∃y(Ay & x ≠ y)] means "there is an apple and no other apple."
  • "Exactly two apples": ∃x∃y[Ax & Ay & x ≠ y & ¬∃z(Az & x ≠ z & y ≠ z)] (two distinct apples and no third).

🔢 Quantifying over the entire domain

  • "At most two things on the table" (where UD = things on the table): ∀x∀y∀z(x = y ∨ x = z ∨ y = z).
  • No predicate needed—the quantifier already ranges over things on the table.
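
To sanity-check the numerical translations above, here is a small Python sketch (ours, not the text's) that evaluates the "at least two," "at most two," and "exactly two" paraphrases by brute force over a three-member UD in which exactly two things are apples. The UD, the extension of A, and the helper names are all assumptions made for the example.

```python
from itertools import product

UD = ["granny_smith", "fuji", "pear"]
apples = {"granny_smith", "fuji"}          # extension of A ("x is an apple")

def A(x):
    return x in apples

def at_least_two():
    # ∃x∃y(Ax & Ay & x ≠ y)
    return any(A(x) and A(y) and x != y for x, y in product(UD, repeat=2))

def at_most_two():
    # ∀x∀y∀z[(Ax & Ay & Az) → (x = y ∨ x = z ∨ y = z)]
    return all((not (A(x) and A(y) and A(z))) or (x == y or x == z or y == z)
               for x, y, z in product(UD, repeat=3))

def exactly_two():
    return at_least_two() and at_most_two()

print(at_least_two(), at_most_two(), exactly_two())   # True True True
```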

📖 Definite descriptions (Russell's theory)

📖 The problem of non-referring terms

Definite description: A phrase like "the present king of France" that is supposed to pick out a unique individual.

  • Constants in QL must refer to something in the universe of discourse.
  • "The present king of France is bald" poses a problem: there is no such king.
  • We cannot define a constant for a non-existent object.

📖 Russell's three-part analysis

Russell argued that "the X is Y" has hidden logical structure with three components:

  1. Existence: There is at least one X.
  2. Uniqueness: There is at most one X (this X is the only X).
  3. Predication: This X has property Y.
  • Example: "The present king of France is bald" translates as ∃x[Fx & ¬∃y(Fy & x ≠ y) & Bx].
    • Fx = x is the present king of France; Bx = x is bald.
    • This says: there exists someone who is king, he is the only king, and he is bald.
  • Result: The sentence is meaningful but false (the existence claim fails).

📖 Wide-scope vs narrow-scope negation

"The present king of France is not bald" is ambiguous:

| Type | Meaning | Translation | Truth value |
| --- | --- | --- | --- |
| Wide-scope | It is not the case that [the king is bald] | ¬∃x[Fx & ¬∃y(Fy & x ≠ y) & Bx] | True (denies the whole claim) |
| Narrow-scope | The king is [not bald] | ∃x[Fx & ¬∃y(Fy & x ≠ y) & ¬Bx] | False (still asserts existence) |
  • Wide-scope: Negates the entire sentence; does not presuppose the king exists.
  • Narrow-scope: Asserts the king exists and is unique, but denies baldness.
  • Don't confuse: Wide-scope negation is true when the description fails to refer; narrow-scope negation is false in that case.

📖 Why this matters

  • Russell's theory resolves the paradox: sentences about non-existent things seemed both true and false.
  • By showing the ambiguity, Russell demonstrated they are true under one reading (wide-scope) and false under another (narrow-scope).
  • Identity is essential for expressing the uniqueness component of definite descriptions.

🧮 Technical details

🧮 Bound vs free variables (context from excerpt)

  • The excerpt defines bound and free variables to clarify scope.
  • A sentence in QL contains no free variables.
  • This matters for identity because variables in identity statements must be properly quantified.

🧮 Identity as a special predicate

Identity predicate: For terms t₁ and t₂, the formula t₁ = t₂ is atomic and means "t₁ is identical to t₂."

  • Written differently from other predicates (infix notation: x = y, not =xy).
  • Not just "indistinguishable"—means the very same object.
  • x ≠ y is shorthand for ¬(x = y), not a separate primitive.

🧮 Using constants with identity

  • "Pavel is Mister Checkov" translates as p = c (both constants refer to the same person).
  • Identity between constants expresses that two names pick out one individual.
21

5.1 Semantics for SL

🧭 Overview

🧠 One-sentence thesis

Truth in SL is formally defined by a recursive function that assigns 1 (true) or 0 (false) to every sentence, starting from atomic sentences and building up through connectives, allowing us to rigorously characterize tautologies, contradictions, validity, and entailment.

📌 Key points (3–5)

  • What truth in SL means: assigning 1 or 0 to sentences based on truth value assignments to atomic sentences and recursive rules for connectives.
  • Two-step structure: atomic sentences get values from a truth value assignment function; compound sentences get values from recursive rules that mirror the definition of well-formed formulas.
  • Interpretation vs. truth: an interpretation (symbolization key) gives meaning to symbols, but truth/falsity also depends on the state of the world; formally, we use a truth value assignment function to capture both.
  • Common confusion: the definition uses metalanguage words like 'and' to define object language symbols like '&'—this is not circular because they belong to different languages.
  • Why it matters: this formal definition allows precise definitions of tautology, contradiction, validity, entailment, and logical equivalence without relying on truth tables alone.

🔤 Object language vs. metalanguage

🔤 The distinction

  • Object language: the language we are talking about (SL or QL in this text).
  • Metalanguage: the language we use to talk about the object language (English plus mathematical notation).
  • This distinction is crucial for understanding the formal semantics—we use English and math to describe the formal properties of SL.

🔄 Why it matters for definitions

  • When we define truth for SL sentences containing '&', we use the English word 'and' in the metalanguage.
  • This is not circular: we are defining an object language symbol using a metalanguage word.
  • Example: "v(A) = 1 if v(B) = 1 and v(C) = 1" uses metalanguage 'and' to explain object language '&'.

🎯 Truth value assignments and interpretation

🎯 What determines truth or falsity

The excerpt emphasizes a key equation:

INTERPRETATION + STATE OF THE WORLD ⇒ TRUTH/FALSITY

  • Interpretation alone is not enough: knowing that M means "The moon is a giant turnip" does not tell you whether M is true.
  • State of the world matters: whether M is true depends on what the actual moon is like.
  • Example: A child might understand "The moon is a giant turnip" but mistakenly think it is true; someone might know M means "It is morning now" but not know the current time.

📋 The truth value assignment function a

The truth value assignment function a: for all sentence letters P, a(P) = 1 if P is true, 0 otherwise.

  • This function encodes which atomic sentences are true and which are false.
  • It is not part of SL itself; it is part of the mathematical machinery used to describe SL.
  • Think of a as being like a row of a truth table, but assigning values to every atomic sentence (infinitely many), not just the few we care about in a particular table.

🔍 Interpretation vs. formal difference

  • Different interpretations can make no formal difference if they assign the same truth values.
  • Example: "D means 'Today is Tuesday'" vs. "D means 'Today is the day after Monday'" are different interpretations but formally equivalent—all that matters is whether D is true or false.

🏗️ The recursive definition of truth

🏗️ Two-step structure

The definition of the truth function v mirrors the recursive definition of well-formed formulas:

  1. Base case: handle atomic sentences (sentence letters).
  2. Recursive case: handle compound sentences built from connectives.

⚛️ Step 1: Atomic sentences

  • If A is a sentence letter, then v(A) = a(A).
  • The truth value of an atomic sentence is simply whatever the truth value assignment function says.

🔗 Step 2: Compound sentences

The excerpt provides recursive clauses for each connective:

| Connective | Rule |
| --- | --- |
| Negation (¬B) | v(¬B) = 1 if v(B) = 0; otherwise v(¬B) = 0 |
| Conjunction (B & C) | v(B & C) = 1 if v(B) = 1 and v(C) = 1; otherwise 0 |
| Disjunction (B ∨ C) | v(B ∨ C) = 0 if v(B) = 0 and v(C) = 0; otherwise 1 |
| Conditional (B → C) | v(B → C) = 0 if v(B) = 1 and v(C) = 0; otherwise 1 |
| Biconditional (B ↔ C) | v(B ↔ C) = 1 if v(B) = v(C); otherwise 0 |
  • These rules match the characteristic truth tables for each connective.
  • Because the definition has the same structure as the definition of a wff, v assigns a value to every sentence of SL.
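
These clauses translate directly into a short recursive function. The sketch below (our encoding, not the text's notation) represents SL sentences as nested tuples and a truth value assignment as a dictionary; the function v mirrors the clauses in the table above.

```python
# Minimal sketch: sentences of SL as nested tuples, a truth value assignment
# as a dict from sentence letters to 0/1, and v as a recursive function.
def v(sentence, a):
    if isinstance(sentence, str):                 # atomic: v(P) = a(P)
        return a[sentence]
    op = sentence[0]
    if op == "not":
        return 1 if v(sentence[1], a) == 0 else 0
    left, right = v(sentence[1], a), v(sentence[2], a)
    if op == "and":
        return 1 if left == 1 and right == 1 else 0
    if op == "or":
        return 0 if left == 0 and right == 0 else 1
    if op == "if":
        return 0 if left == 1 and right == 0 else 1
    if op == "iff":
        return 1 if left == right else 0
    raise ValueError(f"unknown connective: {op}")

a = {"P": 1, "Q": 0}
print(v(("if", "P", "Q"), a))           # 0: true antecedent, false consequent
print(v(("or", ("not", "P"), "Q"), a))  # 0: both disjuncts false
```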

🔁 Relative truth

  • Truth in SL is always truth relative to some truth value assignment.
  • The definition does not say whether a given sentence is absolutely true or false; it says how the truth of that sentence relates to a truth value assignment.

📐 Formal definitions of key concepts

📐 Semantic entailment (⊨)

A ⊨ B means "A semantically entails B": there is no truth value assignment for which A is true and B is false.

  • Equivalently: B is true for any and all truth value assignments for which A is true.
  • Can be extended to sets: {A₁, A₂, A₃, …} ⊨ B means no assignment makes all of A₁, A₂, A₃, … true and B false.
  • Single sentence: ⊨ C means C is true for all truth value assignments (C is entailed by anything).

🎭 Tautology, contradiction, contingency

Using the double turnstile symbol:

| Concept | Definition |
| --- | --- |
| Tautology in SL | A sentence A such that ⊨ A (true for all truth value assignments) |
| Contradiction in SL | A sentence A such that ⊨ ¬A (false for all truth value assignments) |
| Contingent in SL | A sentence that is neither a tautology nor a contradiction |

✅ Validity and equivalence

  • Valid argument in SL: "P₁, P₂, …, ∴ C" is valid if and only if {P₁, P₂, …} ⊨ C.
  • Logically equivalent in SL: A and B are equivalent if and only if both A ⊨ B and B ⊨ A.

🔄 Consistency

A set {A₁, A₂, A₃, …} is consistent in SL if and only if there is at least one truth value assignment for which all of the sentences are true.

  • The set is inconsistent in SL if and only if there is no such assignment.
  • Don't confuse: consistency is defined differently from entailment—it requires at least one assignment making all sentences true, not all assignments.
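
To connect the double turnstile to something executable, here is a hedged Python sketch (our helper names, not the text's notation) that checks entailment by brute force over every truth value assignment to a given list of sentence letters; sentences are represented as Python functions from an assignment (a dict of 0/1 values) to 0 or 1.

```python
from itertools import product

def entails(premises, conclusion, letters):
    """{premises} ⊨ conclusion: no assignment makes all premises 1 and conclusion 0."""
    for values in product((0, 1), repeat=len(letters)):
        a = dict(zip(letters, values))
        if all(p(a) == 1 for p in premises) and conclusion(a) == 0:
            return False
    return True

def is_tautology(sentence, letters):
    return entails([], sentence, letters)          # ⊨ A: true on every assignment

p_or_not_p = lambda a: max(a["P"], 1 - a["P"])                    # P ∨ ¬P
p_implies_q = lambda a: 0 if a["P"] == 1 and a["Q"] == 0 else 1   # P → Q

print(is_tautology(p_or_not_p, ["P"]))                            # True
print(entails([p_implies_q, lambda a: a["P"]], lambda a: a["Q"],
              ["P", "Q"]))                                        # True (modus ponens)
```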

🔗 Connection to truth tables

🔗 Truth tables as informal semantics

  • Truth tables allowed us to test tautologies, equivalence, validity, etc., but they did not formally define these concepts.
  • Each line of a truth table corresponds to a way the world might be—a possible combination of truth values for sentence letters.

🔢 Formal meaning of 1 and 0

  • In truth tables, '1' and '0' are initially interpreted as 'true' and 'false'.
  • Once we construct a truth table, the symbols are divorced from their metalinguistic meaning.
  • The formal properties of 1 and 0 are defined entirely by the characteristic truth tables for connectives.
  • Example: if A is value 1, then ¬A is value 0—this is a formal rule, not an appeal to intuitive meaning.

🎯 Truth in SL just is assignment

Truth in SL just is the assignment of a 1 or a 0.

  • The formal definition of truth captures what truth tables were doing informally.
  • A truth table row is a partial truth value assignment; the formal definition covers all atomic sentences.
22

5.2 Interpretations and models in QL

🧭 Overview

🧠 One-sentence thesis

In quantified logic (QL), a model formalizes an interpretation by specifying a universe of discourse (UD), extensions for predicates, and referents for constants, allowing us to evaluate truth without needing background knowledge about the subject matter.

📌 Key points (3–5)

  • What an interpretation in QL requires: a universe of discourse (UD), schematic meanings for predicates, and objects picked out by constants—not just truth values for sentence letters as in SL.
  • Why we need models instead of truth-value assignments: predicates are neither true nor false on their own; they apply to objects, so we need extensions (sets of objects) rather than simple true/false assignments.
  • Extension vs referent: the extension of a predicate is the set of UD members to which it applies; the referent of a constant is the individual UD member it picks out.
  • Common confusion: interpretation vs model—an interpretation uses English descriptions and requires background knowledge to determine truth; a model is the formal structure (UD, extensions, referents) that makes truth evaluable without external knowledge.
  • Identity is special: the identity predicate always means "is identical to" and its extension is always the set of ordered pairs of each UD member with itself—you cannot reinterpret it.

🔍 Why QL needs more than truth-value assignments

🔍 The limitation of SL-style assignments

  • In sentential logic (SL), an interpretation assigns truth values (0 or 1) to atomic sentence letters.
  • Two interpretations are formally the same if they produce the same truth-value assignment.
  • Why this fails for QL: if we assigned truth values to atomic formulas like Fb and Fw, we would lose all logical structure of predicates and terms.
    • Example: translating Fb and Fw as separate sentence letters ignores that both share the predicate F and involve terms b and w.
    • Quantified sentences like ∀xFx cannot be adequately translated into SL at all.

🧩 What predicates and constants need

  • Predicates are not true or false: asking whether F (on its own) is true makes no sense, just as asking whether the English fragment "...fights crime" is true makes no sense.
  • Predicates apply to objects: an interpretation picks out which objects in the UD the predicate applies to.
  • Constants name individuals: a constant picks out a specific member of the UD, called its referent.
  • Don't confuse: a predicate's truth (which doesn't exist) with a predicate's extension (the set of objects it applies to).

🗂️ Core formal structures: extensions and referents

🗂️ Extension of a predicate

Extension: the set of members of the UD to which a predicate applies.

  • For a one-place predicate like Fx ("x fights crime"), the extension is the set of all UD members that fight crime.
  • Interpretation alone is not enough: using an English description like "x fights crime" does not tell you the extension without background knowledge (e.g., knowing that Batman fights crime).
  • Listing extensions explicitly: sometimes you can write the extension as a set.
    • Example: extension(M) = {Bruce Wayne, Alfred the butler, Dick Grayson} for Mx meaning "x lives in Wayne Manor."
    • With this explicit extension, you can evaluate Mw (true, since Bruce Wayne is in the set) and ∀xMx (false, since not all UD members are in the set) without knowing anything about comic books.

🏷️ Referent of a constant

Referent: the individual member of the UD that a constant picks out.

  • A constant is like a name; the referent is the thing named.
  • Multiple constants can have the same referent: both b and w can refer to the same comic book character (Batman/Bruce Wayne).
  • Example: in the Batman interpretation, referent(b) = referent(w) = Batman/Bruce Wayne.

🔢 Extensions for multi-place predicates

  • Two-place predicates: the extension is a set of ordered pairs ⟨a, b⟩.
    • Order matters: ⟨1, 8⟩ (1 is less than 8) is different from ⟨8, 1⟩ (8 is less than 1).
    • Example: for Lxy ("x is less than y") with UD = {1, 2, 3, ..., 9}, extension(L) includes ⟨1, 2⟩, ⟨1, 3⟩, ⟨2, 3⟩, etc., but not ⟨8, 1⟩.
  • Three-place predicates: the extension is a set of ordered triples ⟨a, b, c⟩.
    • Example: for Txyz ("x times y equals z"), extension(T) includes ⟨2, 4, 8⟩ because 2 × 4 = 8.
  • General rule: the extension of an n-place predicate is a set of ordered n-tuples ⟨a₁, a₂, ..., aₙ⟩ where the predicate is true of those members in that order.

🧱 What is a model?

🧱 Definition and purpose

Model: the formal structure consisting of a UD, an extension for each predicate, and a referent for each constant.

  • A model captures all the formal significance of an interpretation.
  • Key difference from interpretation: a model is fully specified and requires no background knowledge to evaluate truth.
    • Interpretation: "UD: People who played as part of the Three Stooges; Hx: x had head hair; f: Mister Fine" → you need to know who Mister Fine is and who had hair.
    • Model: UD = {Larry, Curly, Moe, Shemp, Joe, Curly Joe}; extension(H) = {Larry, Moe, Shemp}; referent(f) = Larry → you can evaluate Hf (true) without knowing anything about the Three Stooges.

📋 Example: Three Stooges model

  • UD: {Larry, Curly, Moe, Shemp, Joe, Curly Joe}
  • extension(H): {Larry, Moe, Shemp} (those with head hair)
  • referent(f): Larry
  • Evaluating sentences:
    • Hf is true (Larry is in the extension of H).
    • ∃xHx is true (at least one UD member is in the extension of H).
    • ∃x¬Hx is true (at least one UD member is not in the extension of H).
    • ∀xHx is false (not all UD members are in the extension of H).
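
Because a model is just a UD, extensions, and referents, the evaluation above can be replayed mechanically. Here is a minimal Python sketch (our encoding, not the text's) of the Three Stooges model, in which quantifiers are evaluated by looping over the UD.

```python
# Hand-rolled sketch of the Three Stooges model described above.
UD = {"Larry", "Curly", "Moe", "Shemp", "Joe", "Curly Joe"}
extension_H = {"Larry", "Moe", "Shemp"}
referent = {"f": "Larry"}

Hf = referent["f"] in extension_H                            # Hf
exists_x_Hx = any(obj in extension_H for obj in UD)          # ∃xHx
exists_x_not_Hx = any(obj not in extension_H for obj in UD)  # ∃x¬Hx
all_x_Hx = all(obj in extension_H for obj in UD)             # ∀xHx

print(Hf, exists_x_Hx, exists_x_not_Hx, all_x_Hx)            # True True True False
```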

🔢 Example: whole numbers model

  • UD: {1, 2, 3, 4, 5, 6, 7, 8, 9}
  • extension(E) (Ex: "x is even"): {2, 4, 6, 8}
  • extension(N) (Nx: "x is negative"): ∅ (empty set, no negative numbers in UD)
  • extension(L) (Lxy: "x is less than y"): {⟨1, 2⟩, ⟨1, 3⟩, ..., ⟨8, 9⟩} (all ordered pairs where first is less than second)
  • extension(T) (Txyz: "x times y equals z"): includes triples like ⟨2, 4, 8⟩

🆔 The special case of identity

🆔 Identity is not user-defined

  • The identity predicate (x = y) always means "x is identical to y."
  • You cannot reinterpret it: unlike other predicates, you do not specify its meaning in a symbolization key or its extension in a model.
  • The sentence ∀xIxx (with an ordinary predicate I) is contingent—its truth depends on the extension of I.
  • The sentence ∀x x = x is a tautology—it is always true because the extension of identity always makes it true.

🆔 Extension of identity depends on the UD

  • The extension of identity is always the set of ordered pairs of each UD member with itself.
    • If UD = {Doug}, then extension(=) = {⟨Doug, Doug⟩}.
    • If UD = {Doug, Omar}, then extension(=) = {⟨Doug, Doug⟩, ⟨Omar, Omar⟩}.
  • Same referent implies interchangeability: if referent(a) = referent(b), then anything true of a is true of b.
    • Example: Aa ↔ Ab, Rca ↔ Rcb, ∀xRxa ↔ ∀xRxb, etc.
  • The reverse is not true: it is possible for everything true of a to also be true of b, yet a and b have different referents.
    • Example model: UD = {Rosencrantz, Guildenstern}; referent(a) = Rosencrantz; referent(b) = Guildenstern; all predicates have empty extensions.
    • In this model, no predicate is true of either a or b, so anything true of a is (vacuously) true of b, but a and b still refer to different individuals.

📦 Sets and ordered structures

📦 Sets

  • Notation: curly brackets { } denote sets; members are listed in any order, separated by commas.
  • Order does not matter: {foo, bar} and {bar, foo} are the same set.
  • Empty set: a set with no members, written as {} or ∅.

📦 Ordered pairs and tuples

  • Ordered pair: written with angle brackets ⟨foo, bar⟩; order matters, so ⟨foo, bar⟩ ≠ ⟨bar, foo⟩.
  • Ordered triple: ⟨a, b, c⟩; used for three-place predicates.
  • Ordered n-tuple: ⟨a₁, a₂, ..., aₙ⟩; used for n-place predicates.
  • Don't confuse: sets (order irrelevant) with ordered pairs/tuples (order essential).
23

5.3 Semantics for identity

🧭 Overview

🧠 One-sentence thesis

Identity in quantified logic is a special predicate with a fixed interpretation—always meaning "is identical to"—whose extension automatically contains exactly the ordered pairs of each object in the universe of discourse with itself, making sentences like ∀x x = x tautologies.

📌 Key points (3–5)

  • Identity is special: unlike ordinary predicates, the identity predicate (=) always means "is identical to" and cannot be reinterpreted; you do not include it in a symbolization key.
  • Fixed extension rule: the extension of identity always contains just the ordered pair of each object in the UD with itself—you cannot choose which pairs to include.
  • Tautology vs contingency: ∀x x = x is a tautology (true in every model), whereas ∀xIxx with an ordinary predicate I is contingent (depends on interpretation).
  • Common confusion: if two constants share the same referent, anything true of one is true of the other—but the reverse is not guaranteed; two constants can have different referents yet still satisfy all the same predicates if all predicates are empty.
  • Extension varies with UD: although identity always has the same interpretation, its extension changes depending on which objects are in the universe of discourse.

🔧 How identity differs from ordinary predicates

🔧 Fixed interpretation

The sentence x = y always means 'x is identical to y,' and it cannot be interpreted to mean anything else.

  • Ordinary two-place predicates (e.g., Ixy) require a symbolization key; you decide what they mean.
  • Identity (=) does not appear in a symbolization key—it is built into the logic.
  • You write x = y instead of the standard predicate notation Ixy.

🔒 Non-negotiable extension

  • When you construct a model with an ordinary predicate, you pick which ordered pairs go into its extension.
  • For identity, you do not get to pick: the extension always contains exactly the ordered pair of each object in the UD with itself.
  • Example: if the UD is {Doug}, then extension(=) = {<Doug, Doug>}; if the UD is {Doug, Omar}, then extension(=) = {<Doug, Doug>, <Omar, Omar>}.

🎯 Tautology and contingency

🎯 Why ∀x x = x is a tautology

  • The sentence ∀x x = x says "everything is identical to itself."
  • Because the extension of identity always makes this true (every object is paired with itself), ∀x x = x is true in every model.
  • Don't confuse: ∀xIxx (with an ordinary predicate I) is contingent—it depends on how you interpret I and what you put in its extension.

📊 Comparison table

| Sentence | Predicate type | Status | Why |
| --- | --- | --- | --- |
| ∀xIxx | Ordinary two-place | Contingent | Truth depends on the extension of I |
| ∀x x = x | Identity | Tautology | The extension of identity always includes ⟨object, object⟩ for every object |

🔄 Same referent vs. same properties

🔄 If two constants refer to the same object

  • If referent(a) = referent(b), then anything true of a is true of b.
  • The excerpt lists examples:
    • Aa ↔ Ab
    • Ba ↔ Bb
    • Ca ↔ Cb
    • Rca ↔ Rcb
    • ∀xRxa ↔ ∀xRxb
  • This holds for any two sentences containing a and b.

⚠️ The reverse is not guaranteed

  • It is possible that anything true of a is also true of b, yet a and b still have different referents.
  • This may seem puzzling, but the excerpt provides a model to demonstrate it.

🧪 Example model showing the puzzle

Consider this model:

  • UD = {Rosencrantz, Guildenstern}
  • referent(a) = Rosencrantz
  • referent(b) = Guildenstern
  • For all predicates P, extension(P) = ∅ (empty)
  • extension(=) = {<Rosencrantz, Rosencrantz>, <Guildenstern, Guildenstern>}

What happens:

  • All of QL's infinitely many predicates have empty extensions.
  • Both Aa and Ab are false (and equivalent); both Ba and Bb are false; and so on for any sentences containing a and b.
  • Yet a and b refer to different things.
  • The ordered pair <referent(a), referent(b)> is not in the extension of identity.
  • In this model, a = b is false and a ≠ b is true.

Key insight: Two constants can satisfy all the same predicates yet still have different referents if all predicates happen to be empty.
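
The same point can be replayed in code. In this small sketch (our encoding, not the text's), every ordinary predicate has an empty extension, so Aa and Ab agree (both false), yet a and b still have different referents and a = b comes out false.

```python
# Sketch of the Rosencrantz/Guildenstern model described above.
UD = {"Rosencrantz", "Guildenstern"}
referent = {"a": "Rosencrantz", "b": "Guildenstern"}
extension_A = set()                          # every ordinary predicate is empty

Aa = referent["a"] in extension_A            # False
Ab = referent["b"] in extension_A            # False
a_equals_b = referent["a"] == referent["b"]  # False: distinct referents

print(Aa == Ab, a_equals_b)                  # True False
```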

🌐 Extension depends on the universe of discourse

🌐 Same interpretation, different extensions

  • Although identity always has the same interpretation (always means "is identical to"), it does not always have the same extension.
  • The extension of identity depends on what objects are in the UD.

🌐 Examples of varying extensions

  • If UD = {Doug}, then extension(=) = {<Doug, Doug>}.
  • If UD = {Doug, Omar}, then extension(=) = {<Doug, Doug>, <Omar, Omar>}.
  • And so on for any UD: the extension grows to include exactly one ordered pair for each object in the UD, pairing that object with itself.
24

Working with models

5.4 Working with models

🧭 Overview

🧠 One-sentence thesis

Models in QL allow us to determine whether sentences are tautologies, contradictions, or contingent, and whether arguments are valid, by examining truth conditions across all possible interpretations or by constructing specific counterexamples.

📌 Key points (3–5)

  • What models do: A model specifies a universe of discourse (UD), extensions for predicates, and referents for constants; sentences are true or false relative to models.
  • The double turnstile symbol: A |= B means A entails B (no model makes A true and B false); |= A means A is true in every model.
  • Two strategies for proofs: constructing one or two specific models can show a sentence is not a tautology or that an argument is invalid; proving something is a tautology or valid requires reasoning about all models.
  • Common confusion: You cannot show a sentence is a tautology by building many models where it's true—you must show it's true in every model, which requires a general argument, not enumeration.
  • Partial models suffice: You don't need to specify every predicate and constant in QL, only those that appear in the sentence you're evaluating.

🔧 Core definitions in QL

🔧 Tautology, contradiction, and contingency

Tautology in QL: a sentence A that is true in every model; i.e., |= A.

Contradiction in QL: a sentence A that is false in every model; i.e., |= ¬A.

Contingent in QL: a sentence that is neither a tautology nor a contradiction.

  • These definitions parallel SL but are in terms of models rather than truth-value assignments.
  • A contingent sentence is true in some models and false in others.

🔧 Validity and logical equivalence

Valid in QL: an argument "P₁, P₂, …, ∴ C" is valid if and only if there is no model in which all premises are true and the conclusion is false; i.e., {P₁, P₂, …} |= C.

Logically equivalent in QL: two sentences A and B are equivalent if and only if both A |= B and B |= A.

  • Validity means: whenever the premises are true (in any model), the conclusion must also be true.
  • Equivalence means the two sentences have the same truth value in every model.

🔧 Consistency

Consistent in QL: a set {A₁, A₂, A₃, …} is consistent if and only if there is at least one model in which all sentences are true.

Inconsistent in QL: a set is inconsistent if and only if there is no such model.

  • Consistency is about the possibility of all sentences being true together.
  • Example: To show a set is consistent, construct one model where all members are true.

🏗️ Constructing models to show what is not the case

🏗️ Showing a sentence is not a tautology

  • Goal: Prove the sentence is false in some model.
  • Method: Construct one model where the sentence is false.
  • Example from the excerpt: To show ∀xAxx → Bd is not a tautology:
    • UD = {Paris}
    • extension(A) = {⟨Paris, Paris⟩}
    • extension(B) = ∅
    • referent(d) = Paris
    • The antecedent ∀xAxx is true (Paris is paired with itself in A), but the consequent Bd is false (Paris is not in B), so the conditional is false.

🏗️ Showing a sentence is not a contradiction

  • Goal: Prove the sentence is true in some model.
  • Method: Construct one model where the sentence is true.
  • Example: For the same sentence ∀xAxx → Bd, construct:
    • UD = {Paris}
    • extension(A) = {⟨Paris, Paris⟩}
    • extension(B) = {Paris}
    • referent(d) = Paris
    • Now Bd is true, so the conditional is true.

🏗️ Showing a sentence is contingent

  • Goal: Prove the sentence is neither a tautology nor a contradiction.
  • Method: Construct two models—one where the sentence is true and another where it is false.
  • The two models above together show ∀xAxx → Bd is contingent.

🏗️ Showing two sentences are not logically equivalent

  • Goal: Prove the sentences can have different truth values.
  • Method: Construct one model where they differ.
  • Example: To show ∀xSx and ∃xSx are not equivalent:
    • UD = {Duke, Miles}
    • extension(S) = {Duke}
    • ∃xSx is true (Duke is in S), but ∀xSx is false (Miles is not in S).

🏗️ Showing an argument is invalid

  • Goal: Prove there exists a model where premises are true and conclusion is false.
  • Method: Construct one such model.
  • Example: For the argument (Rc & K₁c) & Tc ∴ Tc & K₂c:
    • UD = {Björk}
    • extension(T) = {Björk}, extension(K₁) = {Björk}, extension(K₂) = ∅, extension(R) = {Björk}
    • referent(c) = Björk
    • Premises are true, but K₂c is false, so the conclusion is false.

🏗️ Partial models

  • A complete model specifies extensions for every predicate and referents for every constant in QL—impossible to write down (infinitely many).
  • Partial model: specifies only the predicates and constants that appear in the sentence being evaluated.
  • The excerpt emphasizes: predicates like H or constants like f₁₃ that don't appear in the sentence make no difference to its truth or falsity.
  • Don't confuse: A partial model is not an incomplete proof; it's a legitimate shortcut because irrelevant symbols don't affect the sentence's truth value.

🌐 Reasoning about all models

🌐 When you must reason about infinity

  • To show a sentence is a tautology, contradiction, or that an argument is valid, you cannot just build examples—you must prove something about every model.
  • There are infinitely many models (and infinitely many partial models).
  • Example: The sentence Raa ↔ Raa has 2 distinct partial models with a 1-member UD, 32 with a 2-member UD, 1536 with a 3-member UD, etc. (the quick check below verifies these counts).
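
The counts follow from simple arithmetic: a partial model for Raa ↔ Raa fixes a referent for a (n choices) and an extension for R (any set of ordered pairs, 2^(n·n) choices). A quick check (ours, not the text's):

```python
from itertools import product

def count_partial_models(n):
    ud = list(range(n))
    pairs = list(product(ud, repeat=2))
    extensions = 2 ** len(pairs)   # each ordered pair is either in extension(R) or not
    referents = n                  # choices for referent(a)
    return referents * extensions

print([count_partial_models(n) for n in (1, 2, 3)])    # [2, 32, 1536]
```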

🌐 Proof strategy for tautologies

  • Example: To prove Raa ↔ Raa is a tautology:
    • Divide all models into two exhaustive kinds:
      1. Models where ⟨referent(a), referent(a)⟩ is in the extension of R → Raa is true → the biconditional is true.
      2. Models where it is not → Raa is false → the biconditional is still true.
    • Since every model is one of these two kinds, the sentence is true in every model.
  • This is an argument in English about QL (a metalanguage argument), not a formal QL argument.

🌐 Pitfalls in informal reasoning

  • Don't confuse: Rxx → Rxx is not a sentence (it has a free variable x), so it is neither true nor false in a model.
  • You cannot say "Rxx → Rxx is true in every model, so ∀x(Rxx → Rxx) is true"—the first part is meaningless.
  • The excerpt notes we need the concept of satisfaction (introduced later) to reason properly about formulas with free variables.

🌐 Summary table: what requires which strategy

| Question | To answer YES | To answer NO |
| --- | --- | --- |
| Is A a tautology? | Show A must be true in any model | Construct one model where A is false |
| Is A a contradiction? | Show A must be false in any model | Construct one model where A is true |
| Is A contingent? | Construct two models (one true, one false) | Show A is a tautology or a contradiction |
| Are A and B equivalent? | Show A and B must have the same truth value in any model | Construct one model where they differ |
| Is set A consistent? | Construct one model where all sentences are true | Show the sentences cannot all be true in any model |
| Is argument 'P ∴ C' valid? | Show any model where P is true must make C true | Construct one model where P is true and C is false |
  • Key takeaway: Constructing models is easy for showing something is not the case; reasoning about all models is hard and required for showing something is the case (tautology, validity, etc.).

🧩 What models specify and what they don't

🧩 The three components of a model

  1. Universe of discourse (UD): the set of objects the model talks about.
  2. Extensions of predicates: which tuples of objects satisfy each predicate.
  3. Referents of constants: which object each constant names.
  • Smaller UDs make it easier to specify extensions.
  • Example: UD = {Paris} is simpler than UD = {Duke, Miles}.

🧩 Models vs. English interpretations

  • A partial model tells you only the formal structure: which objects are in which extensions.
  • It does not tell you what the predicates "mean" in English.
  • Example: extension(A) = {⟨Paris, Paris⟩} could correspond to:
    • "x is in the same country as y"
    • "x is the same size as y"
    • "x and y are both cities"
  • All that matters for truth in QL is the information in the model—UD, extensions, referents—not the English translation.
  • Don't confuse: Specifying a model is not the same as translating QL into English; models are about truth conditions, not meanings.
25

Truth in QL

5.5 Truth in QL

🧭 Overview

🧠 One-sentence thesis

Truth in quantified logic (QL) is defined relative to models through the concept of satisfaction, which allows us to reason about validity and tautology by considering all possible models rather than constructing individual counterexamples.

📌 Key points (3–5)

  • Constructing models vs reasoning about all models: showing a sentence is a tautology or an argument is valid requires reasoning about all possible models, while showing contingency or invalidity only requires constructing one or two specific models.
  • Satisfaction extends truth to non-sentences: because free variables like x in Px have no truth value, we define satisfaction (which applies to all well-formed formulas) before defining truth (which applies only to sentences).
  • Variable assignments handle free variables: a variable assignment function matches each variable to a member of the universe of discourse (UD), allowing us to evaluate formulas with free variables.
  • Common confusion: truth in QL is not absolute but always truth in a model—sentences are true or false only relative to a specific interpretation (model).
  • Two strategies for reasoning about all models: divide cases exhaustively (e.g., either an object is in an extension or it isn't) and consider arbitrary objects to prove general claims.

🔍 When models are enough vs when you need all models

🔍 The YES column: one or two models suffice

The excerpt provides a table showing when constructing specific models is enough:

  • To show a sentence is NOT a tautology: construct one model in which it is false.
  • To show a sentence is NOT a contradiction: construct one model in which it is true.
  • To show a sentence is contingent: construct two models, one where it's true and one where it's false.
  • To show two sentences are NOT equivalent: construct a model where they have different truth values.
  • To show a set is consistent: construct a model in which all sentences in the set are true.
  • To show an argument is NOT valid: construct a model in which the premise is true and the conclusion is false.

🔍 The NO column: must reason about all models

For the following, you cannot rely on one or two examples:

  • To show a sentence IS a tautology: must show it is true in any model.
  • To show a sentence IS a contradiction: must show it is false in any model.
  • To show two sentences ARE equivalent: must show they have the same truth value in any model.
  • To show a set is inconsistent: must show the sentences could not all be true in any model.
  • To show an argument IS valid: must show that any model making the premise true also makes the conclusion true.

The excerpt emphasizes: "It is relatively easy to answer a question if you can do it by constructing a model or two. It is much harder if you need to reason about all possible models."

🧩 From truth assignments to satisfaction

🧩 Why SL's approach doesn't work for QL

In sentential logic (SL), truth was defined in two parts:

  • A truth value assignment (a) for sentence letters.
  • A truth function (v) for all sentences built from connectives.

This worked because every well-formed formula of SL has a truth value.

In QL, this approach fails:

  • The formula Px (with free variable x) is not a sentence—it has no truth value.
  • We cannot define truth for ∀x Px by asking whether Px is true, because Px is neither true nor false.
  • Therefore, we need a broader concept that applies to formulas with free variables.

🧩 Satisfaction: the solution

Satisfaction: a relation between a well-formed formula, a model, and a variable assignment; every wff is either satisfied or not satisfied in a model by a variable assignment, even if it has no truth value.

  • Satisfaction applies to all wffs, including those with free variables.
  • Truth for sentences is then defined in terms of satisfaction.

🎯 Variable assignments and how satisfaction works

🎯 What a variable assignment does

Variable assignment (a): a function that matches each variable with a member of the universe of discourse (UD).

  • Since x in Px is a variable, not a constant, it doesn't name a particular object.
  • A variable assignment a tells us which object x stands for when evaluating Px.
  • Example: if the UD is U.S. presidents, a might assign George Washington to x.

Don't confuse: this is not the same as the truth value assignment used in SL.

🎯 Modified assignments: a[Ω|x]

The excerpt introduces notation for "tweaking" an assignment:

a[Ω|x]: the variable assignment that assigns Ω to x but agrees with a for all other variables.

  • Ω is some member of the UD (not a symbol of QL).
  • Example: a[Grover Cleveland|x] assigns Grover Cleveland to x, regardless of what a originally assigned to x.

This notation is crucial for quantifiers:

  • ∀x Px is satisfied in model M by assignment a if and only if Px is satisfied in M by a[Ω|x] for every object Ω in the UD.
  • ∃x Px is satisfied in M by a if and only if Px is satisfied in M by a[Ω|x] for at least one object Ω in the UD.

🎯 The satisfaction function s

The excerpt defines a function s such that for any wff A and variable assignment a:

  • s(A, a) = 1 if A is satisfied in M by a.
  • s(A, a) = 0 otherwise.

The definition covers eight cases (atomic formulas, negation, conjunction, disjunction, conditional, biconditional, universal quantifier, existential quantifier), mirroring the structure of wffs in QL.

Key insight: this recursive definition ensures every wff is either satisfied or not—no wffs are left out or assigned conflicting values.
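
To show how variable assignments drive the quantifier clauses, here is a minimal Python sketch (our encoding, not the text's) implementing a few of the eight cases of the satisfaction function; modified(a, var, obj) plays the role of a[Ω|x]. The formula encoding and the model layout are assumptions made for the example.

```python
# Formulas as nested tuples, a model as (UD, extensions, referents),
# and a variable assignment as a dict from variables to UD members.
def modified(a, var, obj):
    """Return the assignment a[obj|var]: like a, except var is assigned obj."""
    new = dict(a)
    new[var] = obj
    return new

def satisfies(model, formula, a):
    ud, ext, ref = model
    kind = formula[0]
    if kind == "atom":                      # e.g. ("atom", "P", ("x",))
        _, pred, terms = formula
        values = tuple(a.get(t, ref.get(t)) for t in terms)
        return (values if len(values) > 1 else values[0]) in ext[pred]
    if kind == "not":
        return not satisfies(model, formula[1], a)
    if kind == "and":
        return satisfies(model, formula[1], a) and satisfies(model, formula[2], a)
    if kind == "all":                       # ∀x A: every Ω in the UD works
        _, var, body = formula
        return all(satisfies(model, body, modified(a, var, obj)) for obj in ud)
    if kind == "exists":                    # ∃x A: at least one Ω in the UD works
        _, var, body = formula
        return any(satisfies(model, body, modified(a, var, obj)) for obj in ud)
    raise ValueError(f"unhandled formula: {formula}")

model = ({"Washington", "Adams"}, {"P": {"Washington"}}, {})
Px = ("atom", "P", ("x",))
print(satisfies(model, ("exists", "x", Px), {}))   # True
print(satisfies(model, ("all", "x", Px), {}))      # False
```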

🔗 From satisfaction to truth

🔗 Why sentences don't depend on variable assignments

Consider ∀x Px:

  • By the definition of satisfaction, it is satisfied if a[Ω|x] satisfies Px for every Ω in the UD.
  • This happens if every Ω is in the extension of P.
  • Whether ∀x Px is satisfied does not depend on the particular variable assignment a—all variables are bound by quantifiers.

The same holds for any sentence of QL: because all variables are bound, satisfaction does not depend on the details of the variable assignment.

🔗 Definition of truth

A sentence A is true in M if and only if some variable assignment satisfies A in M; A is false in M otherwise.

  • Since satisfaction for sentences doesn't depend on which variable assignment we pick, we can define truth this way.
  • Truth in QL is truth in a model—sentences are not true or false as mere symbols, but only relative to a model that provides meaning.

🧠 Strategies for reasoning about all models

🧠 Example: showing ∀x(Rxx → Rxx) is a tautology

The excerpt walks through a proof:

  1. Consider an arbitrary model M.
  2. Consider an arbitrary member of the UD, call it Ω.
  3. Divide cases: either ⟨Ω, Ω⟩ is in the extension of R, or it is not.
    • If ⟨Ω, Ω⟩ is in extension(R), then Rxx is satisfied by the assignment that assigns Ω to x; since the consequent is satisfied, the conditional Rxx → Rxx is satisfied.
    • If ⟨Ω, Ω⟩ is not in extension(R), then Rxx is not satisfied; since the antecedent is not satisfied, the conditional is still satisfied.
  4. In either case, Rxx → Rxx is satisfied.
  5. This holds for any member of the UD, so ∀x(Rxx → Rxx) is satisfied by any variable assignment.
  6. Therefore, it is true in M.
  7. Since we assumed nothing special about M, it is true in any model—hence a tautology.

🧠 Two key strategies

The excerpt identifies two techniques for reasoning about all models:

| Strategy | How it works | Example from excerpt |
| --- | --- | --- |
| Divide cases | Split into exhaustive alternatives so every case must be one kind or the other | Either ⟨Ω, Ω⟩ is in extension(R) or it is not |
| Arbitrary objects | Consider an arbitrary object without assuming anything special about it; whatever holds for it must hold for all | Ω was arbitrary, so the result holds for every member of the UD; M was arbitrary, so the result holds for all models |

🧠 Example: showing an argument is valid

The excerpt shows that ∀x(Hx & Jx) ∴ ∀x Hx is valid:

  1. Consider an arbitrary model M in which the premise ∀x(Hx & Jx) is true.
  2. The conjunction Hx & Jx is satisfied regardless of what is assigned to x.
  3. Therefore, Hx must also be satisfied (by the definition of satisfaction for conjunction).
  4. So ∀x Hx is satisfied by any variable assignment and true in M.
  5. Since we assumed nothing about M except that the premise is true, the conclusion must be true in any model where the premise is true.
  6. Therefore, the argument is valid.

Don't confuse: even for simple arguments, reasoning about all models in English can be "insufferable" (the excerpt's word). This motivates formal proof systems (covered in the next chapter) and truth trees (semantic tableaux), which formalize the reasoning without explicitly talking about models.

26

Basic Rules for SL

6.1 Basic rules for SL

🧭 Overview

🧠 One-sentence thesis

A natural deduction proof system for sentential logic (SL) provides a structured way to demonstrate argument validity by combining introduction and elimination rules for each logical operator, making the reasoning transparent rather than relying on exhaustive truth tables.

📌 Key points (3–5)

  • What a proof is: a sequence of sentences starting with premises, where each later sentence follows from earlier ones by a rule, ending with the conclusion.
  • Why natural deduction: instead of a grab bag of rules, each logical operator gets exactly two rules—introduction (to prove a sentence with that operator) and elimination (to derive something from a sentence with that operator).
  • How subproofs work: you can assume anything temporarily in a subproof to explore "what if" scenarios, but you must discharge (close) all subproofs to complete the proof.
  • Common confusion: assuming something in a subproof does not mean you've proven it; you can only use what you derive inside a subproof to justify rules like conditional introduction when you close the subproof.
  • Why it matters: proofs reveal why an argument is valid by showing the logical steps, unlike truth tables which only confirm validity without explanation.

🔍 What proofs are and why we need them

🔍 The limitation of truth tables

  • Truth tables can verify validity but do not show why an argument works.
  • Example: a 10-letter argument requires a 1024-line truth table; even if you find no counterexample, you learn nothing about the reasoning structure.
  • Truth tables do not distinguish between different valid inference patterns (e.g., modus ponens vs. disjunctive syllogism).

📜 Definition of a proof

A proof is a sequence of sentences where the first sentences are assumptions (premises), every later sentence follows from earlier sentences by a rule of proof, and the final sentence is the conclusion.

  • Proofs make reasoning transparent by naming and combining basic inference forms.
  • Example: an argument may combine modus ponens and disjunctive syllogism to reach an intermediate conclusion, then use that to derive the final conclusion.

🏗️ Structure of natural deduction

  • Two rules per operator: introduction (prove a sentence with that operator as main connective) and elimination (use a sentence with that operator).
  • Reiteration rule (R): if you've already shown something, you can repeat it on a new line.
    • Example: Line 1 has A; line 2 can have A with justification "R 1."
    • This rule alone proves nothing new; it just copies existing lines.

🔗 Conjunction rules

➕ Conjunction introduction (&I)

  • What it does: if you have proven A and also proven B (on any two lines), you can derive A & B.
  • Rule schema:
    • Line m: A
    • Line n: B
    • New line: A & B, justified by "&I m, n"
  • The two conjuncts can appear in any order and be separated by many lines.
  • Example: if K is on line 8 and L is on line 15, you can later write (K & L) with justification "&I 8, 15."

➖ Conjunction elimination (&E)

  • What it does: from A & B, you can derive either A or B.
  • Rule schema:
    • Line m: A & B
    • New line: A, justified by "&E m"
    • Or new line: B, justified by "&E m"
  • Example proof (swapping conjuncts):
    1. [(A ∨ B) → (C ∨ D)] & [(E ∨ F) → (G ∨ H)]
    2. [(A ∨ B) → (C ∨ D)] — &E 1
    3. [(E ∨ F) → (G ∨ H)] — &E 1
    4. [(E ∨ F) → (G ∨ H)] & [(A ∨ B) → (C ∨ D)] — &I 3, 2
  • This trivial proof shows how rules combine; a truth table for this 8-letter argument would need 256 lines.

🔀 Disjunction rules

➕ Disjunction introduction (∨I)

  • What it does: if you have proven A, you can derive A ∨ B for any sentence B.
  • Rule schema:
    • Line m: A
    • New line: A ∨ B, justified by "∨I m"
    • Or new line: B ∨ A, justified by "∨I m"
  • B can be completely unrelated to A.
  • Example: from M, you can derive M ∨ [(A ↔ B) → (C & D)] ↔ [E & F].
  • Why this is valid: the truth conditions for disjunction mean if A is true, A ∨ B is true regardless of B.

➖ Disjunction elimination (∨E)

  • What it does: from A ∨ B and the negation of one disjunct, you can derive the other disjunct (disjunctive syllogism).
  • Rule schema:
    • Line m: A ∨ B
    • Line n: ¬B
    • New line: A, justified by "∨E m, n"
    • (Or with ¬A to derive B)
  • You cannot conclude anything from A ∨ B alone; you need additional information to eliminate one disjunct.

🔁 Conditional rules and subproofs

🏗️ How subproofs work

  • A subproof is a proof within the main proof, marked by an additional vertical line.
  • You can assume anything you want at the start of a subproof (no justification needed).
  • Think of it as asking: "What could we show if this assumption were true?"
  • Discharging assumptions: when you close a subproof (end the vertical line), you discharge its assumptions and cannot refer back to individual lines inside it.
  • Key rule: you cannot complete a proof until all subproofs are closed (all assumptions discharged except the original premises).

➕ Conditional introduction (→I)

  • What it does: to prove A → B, assume A in a subproof and derive B; then close the subproof.
  • Rule schema:
    • Line m: A (subproof assumption) — want B
    • ...
    • Line n: B
    • New line (main proof): A → B, justified by "→I m–n"
  • Example proof of R ∨ F ∴ ¬R → F:
    1. R ∨ F
    2. | ¬R (subproof assumption)
    3. | F — ∨E 1, 2
    4. ¬R → F — →I 2–3
  • Strategy tip: to derive a conditional, assume its antecedent and try to derive its consequent.
  • Don't confuse: assuming something in a subproof does not mean you've proven it; the assumption is only "in force" within that subproof.

➖ Conditional elimination (→E)

  • What it does: from A → B and A, you can derive B (modus ponens).
  • Rule schema:
    • Line m: A → B
    • Line n: A
    • New line: B, justified by "→E m, n"
  • Example proof of P → Q, Q → R ∴ P → R:
    1. P → Q
    2. Q → R
    3. | P (subproof assumption) — want R
    4. | Q — →E 1, 3
    5. | R — →E 2, 4
    6. P → R — →I 3–5

🔄 Biconditional rules

➕ Biconditional introduction (↔I)

  • What it does: to prove A ↔ B, you need two subproofs—one deriving B from A, the other deriving A from B.
  • Rule schema:
    • Line m: A (subproof 1 assumption) — want B
    • Line n: B
    • Line p: B (subproof 2 assumption) — want A
    • Line q: A
    • New line: A ↔ B, justified by "↔I m–n, p–q"
  • The two subproofs can appear in any order and need not be adjacent.
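
As an illustration (our example, not one from the excerpt), here is a short proof of P → Q, Q → P ∴ P ↔ Q that uses ↔I together with →E:

  1. P → Q
  2. Q → P
  3. | P (subproof assumption) — want Q
  4. | Q — →E 1, 3
  5. | Q (subproof assumption) — want P
  6. | P — →E 2, 5
  7. P ↔ Q — ↔I 3–4, 5–6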

➖ Biconditional elimination (↔E)

  • What it does: from A ↔ B, you can derive B if you have A, or derive A if you have B (works both directions).
  • Rule schema:
    • Line m: A ↔ B
    • Line n: A
    • New line: B, justified by "↔E m, n"
    • (Or with B to derive A)
  • This is a "double-barreled" version of conditional elimination.

❌ Negation rules and reductio

🔄 Reductio ad absurdum

  • What it means: "reduction to absurdity"—assume something and show it leads to a contradiction, thereby proving the assumption false.
  • Example (informal): assume there is a greatest natural number A; then A + 1 is also a natural number and A + 1 > A, contradicting the assumption; therefore there is no greatest natural number.

➕ Negation introduction (¬I)

  • What it does: assume A in a subproof; if you derive both B and ¬B (an explicit contradiction), you can conclude ¬A.
  • Rule schema:
    • Line m: A (subproof assumption) — for reductio
    • ...
    • Line n: B
    • Line n+1: ¬B
    • New line: ¬A, justified by "¬I m–(n+1)"
  • The last two lines of the subproof must be a sentence and its negation.
  • Example proof of ¬(G & ¬G) (law of non-contradiction):
    1. | G & ¬G (for reductio)
    2. | G — &E 1
    3. | ¬G — &E 1
    4. ¬(G & ¬G) — ¬I 1–3
  • Note: "for reductio" is a reminder note, not formally part of the proof.

➖ Negation elimination (¬E)

  • What it does: assume ¬A in a subproof; if you derive both B and ¬B, you can conclude A.
  • Rule schema:
    • Line m: ¬A (subproof assumption) — for reductio
    • ...
    • Line n: B
    • Line n+1: ¬B
    • New line: A, justified by "¬E m–(n+1)"
  • This is the "positive" version of reductio: proving something by showing its negation leads to contradiction.

🛠️ Derived rules

🔧 What derived rules are

A derived rule is a rule of proof that does not make any new proofs possible; anything provable with a derived rule can be proven without it using only basic rules.

  • Derived rules are shortcuts: they let you do in one line what would otherwise take many lines and nested subproofs.
  • Think of a proof using a derived rule as shorthand for a longer proof using only basic rules.

🔀 Dilemma (DIL)

  • What it does: from A ∨ B, A → C, and B → C, you can derive C.
  • Rule schema:
    • Line m: A ∨ B
    • Line n: A → C
    • Line o: B → C
    • New line: C, justified by "DIL m, n, o"
  • This can be proven with basic rules using nested reductio subproofs (the excerpt shows a 14-line proof).
  • Adding it as a derived rule is convenient but not necessary.

🔄 Modus tollens (MT)

  • What it does: from A → B and ¬B, you can derive ¬A.
  • Rule schema:
    • Line m: A → B
    • Line n: ¬B
    • New line: ¬A, justified by "MT m, n"
  • The excerpt leaves the proof of this rule as an exercise.
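
One way to do that exercise with the basic rules, sketched in the same style as the proofs above (P and Q are just example sentence letters), uses a reductio subproof:

  1. P → Q
  2. ¬Q
  3. | P (subproof assumption) — for reductio
  4. | Q — →E 1, 3
  5. | ¬Q — R 2
  6. ¬P — ¬I 3–5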

🔗 Hypothetical syllogism (HS)

  • What it does: from A → B and B → C, you can derive A → C.
  • Rule schema:
    • Line m: A → B
    • Line n: B → C
    • New line: A → C, justified by "HS m, n"
  • A proof of this rule was given earlier (page 109 of the source).
27

6.2 Derived rules

🧭 Overview

🧠 One-sentence thesis

Derived rules are proof shortcuts that do not enable any new proofs but allow us to compress multi-line derivations into single steps, making proofs more convenient without expanding what is provable.

📌 Key points (3–5)

  • What derived rules are: rules that can be proven using only the basic rules, so they add convenience but not new proving power.
  • How they work: any proof using a derived rule is shorthand for a longer proof using only basic rules.
  • Examples of derived rules: Dilemma (DIL), modus tollens (MT), and hypothetical syllogism (HS) are all derived rules added for convenience.
  • Common confusion: derived rules vs. basic rules—derived rules are provable from basic rules, whereas basic rules are the systematic foundation (one introduction and one elimination rule per logical operator).
  • Rules of replacement: a special category of derived rules that can be applied to part of a sentence (subformulas), not just whole sentences, because they swap logically equivalent expressions.

🔧 What derived rules are and why they matter

🔧 Definition and purpose

A derived rule is a rule of proof that does not make any new proofs possible; anything provable with a derived rule can be proven without it.

  • The basic rules are systematic: one introduction and one elimination rule for each logical operator.
  • Derived rules are added for convenience, not necessity.
  • They compress what would otherwise require many lines and nested subproofs into a single step.

🧩 How derived rules work as shorthand

  • A proof using a derived rule is "shorthand for a longer proof that uses only the basic rules."
  • Example: the Dilemma rule can do in one line what requires fourteen lines and several nested subproofs with basic rules alone.
  • You can think of derived rules as "recipes" or patterns that replicate longer basic-rule proofs.

🧪 Example: The Dilemma rule (DIL)

🧪 The rule itself

The Dilemma rule looks like this:

| Line | Content | Justification |
| --- | --- | --- |
| m | A ∨ B | (premise or derived) |
| n | A → C | (premise or derived) |
| o | B → C | (premise or derived) |
| result | C | DIL m, n, o |

  • It says: if you have a disjunction (A or B) and both disjuncts lead to the same conclusion C, then you can conclude C directly.

🔍 Proving DIL with basic rules

The excerpt shows that the Dilemma rule can be derived using only basic rules (negation introduction, negation elimination, conditional elimination, disjunction elimination, and reiteration):

  • The proof takes 14 lines and uses nested subproofs for reductio (proof by contradiction).
  • The key steps:
    • Assume ¬C for reductio.
    • Assume A for reductio, derive C from A → C, get a contradiction, conclude ¬A.
    • Assume B for reductio, derive C from B → C, get a contradiction, conclude ¬B (but this contradicts A ∨ B and ¬A).
    • Use ¬E to conclude C.
  • This demonstrates that DIL is not "really necessary"—it does not allow us to derive anything we could not derive without it.
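
Written out, the expansion might look like this (a reconstruction following the steps above, assuming a ∨E rule that infers one disjunct from a disjunction together with the negation of the other disjunct):

  1. A ∨ B (premise)
  2. A → C (premise)
  3. B → C (premise)
  4. | ¬C (for reductio)
  5. | | A (for reductio)
  6. | | C [→E 2, 5]
  7. | | ¬C [R 4]
  8. | ¬A [¬I 5–7]
  9. | | B (for reductio)
  10. | | C [→E 3, 9]
  11. | | ¬C [R 4]
  12. | ¬B [¬I 9–11]
  13. | B [∨E 1, 8]
  14. C [¬E 4–13]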

Don't confuse: The Dilemma rule is not a basic rule; it is a pattern that can always be expanded into a longer proof using only the basic rules.

🧮 Other derived rules

🧮 Modus tollens (MT)

| Line | Content | Justification |
| --- | --- | --- |
| m | A → B | (premise or derived) |
| n | ¬B | (premise or derived) |
| result | ¬A | MT m, n |

  • The excerpt notes that the proof of this rule is left as an exercise.
  • It also mentions that if MT had already been proven, the proof of the Dilemma rule could have been done in only five lines (instead of fourteen).

🧮 Hypothetical syllogism (HS)

| Line | Content | Justification |
| --- | --- | --- |
| m | A → B | (premise or derived) |
| n | B → C | (premise or derived) |
| result | A → C | HS m, n |

  • The excerpt states that a proof of HS was already given on page 109.
  • This rule chains two conditionals together.

🔄 Rules of replacement

🔄 What makes replacement rules special

Rules of replacement are derived rules that may be applied to part of a sentence (a subformula), not just whole sentences, because they swap logically equivalent expressions.

  • Basic rules of proof can only be applied to whole sentences.
  • Replacement rules work differently: they let you replace a subformula with a logically equivalent one anywhere in a sentence.
  • The double-headed arrow (⇐⇒) means the rule works in both directions.

🔄 Why replacement rules are needed

Example scenario from the excerpt:

  • Argument: F → (G & H), therefore F → G.
  • You cannot directly apply & E to (G & H) because it is not on a line by itself.
  • You must first use → E to get (G & H) on its own line, then apply & E.
  • Replacement rules will later allow more direct manipulation of subformulas.

🔄 Commutativity (Comm)

The Comm rule swaps the order of conjuncts, disjuncts, or biconditionals:

  • (A & B) ⇐⇒ (B & A)
  • (A ∨ B) ⇐⇒ (B ∨ A)
  • (A ↔ B) ⇐⇒ (B ↔ A)

Example: proving (M ∨ P) → (P & M), therefore (P ∨ M) → (M & P):

  1. (M ∨ P) → (P & M) (premise)
  2. (P ∨ M) → (P & M) (Comm 1, swapping M ∨ P)
  3. (P ∨ M) → (M & P) (Comm 2, swapping P & M)
  • Without Comm, this proof would be "long and inconvenient" using only basic rules.

🔄 Double negation (DN)

The DN rule removes or inserts a pair of negations anywhere in a sentence:

  • ¬¬A ⇐⇒ A

  • You can use this rule on any subformula, not just the whole sentence.

🔄 De Morgan's Laws

Named for 19th-century British logician Augustus De Morgan (though he was not the first to discover them):

  • These rules "capture useful relations between negation, conjunction, and disjunction."
  • The excerpt mentions them but does not provide the full formulation (the text cuts off).

Don't confuse: Replacement rules vs. basic rules—replacement rules can be applied to subformulas; basic rules apply only to whole sentences on their own lines.


6.3 Rules of replacement

🧭 Overview

🧠 One-sentence thesis

Rules of replacement are derived rules that allow you to replace part of a sentence with a logically equivalent expression, making proofs much shorter and more convenient than using only basic rules.

📌 Key points (3–5)

  • What replacement rules do: they let you apply logical equivalences to part of a sentence, not just whole sentences like basic rules require.
  • Why they are derived rules: anything provable with replacement rules can also be proven with basic rules alone, but replacement rules save many lines and nested subproofs.
  • Key difference from basic rules: basic rules apply only to whole sentences; replacement rules work on subformulas within a sentence.
  • Common confusion: the double-headed arrow (⇐⇒) means the rule works in both directions—you can replace either side with the other.
  • Main replacement rules introduced: Commutativity (Comm), Double Negation (DN), De Morgan's Laws (DeM), Material Conditional (MC), and Biconditional Exchange (↔ex).

🔧 What derived rules are and why they matter

🔧 Derived rules vs basic rules

A derived rule is a rule of proof that does not make any new proofs possible. Anything that can be proven with a derived rule can be proven without it.

  • Derived rules are shortcuts: they condense what would take many lines with basic rules into one or a few lines.
  • Example from the excerpt: the Dilemma rule (DIL) can be proven using basic rules in fourteen lines with nested subproofs, but as a derived rule it takes just one line.
  • You can think of a short proof using a derived rule as "shorthand for a longer proof that uses only the basic rules."

🛠️ Examples of derived rules

The excerpt mentions three derived rules before introducing replacement rules:

| Rule | Abbreviation | What it does |
| --- | --- | --- |
| Dilemma | DIL | Allows complex disjunctive reasoning in one step instead of a fourteen-line basic-rules proof |
| Modus tollens | MT | From A → B and ¬B, infer ¬A |
| Hypothetical syllogism | HS | From A → B and B → C, infer A → C |

  • The excerpt notes that if MT had already been proven, the proof of DIL could be done in only five lines instead of fourteen.

🔄 Why replacement rules are needed

🔄 The limitation of basic rules

  • Basic rules of proof can only be applied to whole sentences.
  • Example problem from the excerpt: to prove F → (G & H), ∴ F → G, you cannot directly apply &E to (G & H) because it is not on a line by itself—it is part of a larger sentence.
  • The excerpt shows you must first get (G & H) isolated on its own line (using →E with an assumption) before applying &E.
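
A minimal sketch of that detour with the basic rules:

  1. F → (G & H) (premise)
  2. | F (assumption for →I)
  3. | G & H [→E 1, 2]
  4. | G [&E 3]
  5. F → G [→I 2–4]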

🔄 What replacement rules allow

Rules of replacement can be used to replace part of a sentence with a logically equivalent expression.

  • They work on subformulas (parts of sentences), not just whole sentences.
  • The bold double-headed arrow (⇐⇒) means you can replace either side with the other, in either direction.
  • This makes proofs much simpler when you need to manipulate parts of complex sentences.

📐 The six replacement rules

📐 Commutativity (Comm)

Rule:

  • (A & B) ⇐⇒ (B & A)
  • (A ∨ B) ⇐⇒ (B ∨ A)
  • (A ↔ B) ⇐⇒ (B ↔ A)

What it does:

  • Swap the order of conjuncts in a conjunction, disjuncts in a disjunction, or the two sides of a biconditional.

Example from the excerpt:

  • Argument: (M ∨ P) → (P & M), ∴ (P ∨ M) → (M & P)
  • Proof:
    1. (M ∨ P) → (P & M) [premise]
    2. (P ∨ M) → (P & M) [Comm 1, swapping M ∨ P]
    3. (P ∨ M) → (M & P) [Comm 2, swapping P & M]

📐 Double Negation (DN)

Rule:

  • ¬¬A ⇐⇒ A

What it does:

  • Remove or insert a pair of negations anywhere in a sentence.
  • Works in both directions: you can eliminate double negation or add it.

📐 De Morgan's Laws (DeM)

Rule:

  • ¬(A ∨ B) ⇐⇒ (¬A & ¬B)
  • ¬(A & B) ⇐⇒ (¬A ∨ ¬B)

What they capture:

  • Useful relations between negation, conjunction, and disjunction.
  • Named for 19th-century British logician Augustus De Morgan (though he was not the first to discover them).

How to remember:

  • Negating a disjunction turns it into a conjunction of negations.
  • Negating a conjunction turns it into a disjunction of negations.
  • The connective "flips" (∨ ↔ &) and negations distribute to each part.
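
For example (an illustration, not from the excerpt), DeM can be applied to a subformula inside a larger sentence:

  1. ¬(P & Q) → R (premise)
  2. (¬P ∨ ¬Q) → R [DeM 1]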

📐 Material Conditional (MC)

Rule:

  • (A → B) ⇐⇒ (¬A ∨ B)
  • (A ∨ B) ⇐⇒ (¬A → B)

What it captures:

Because A → B is a material conditional, it is equivalent to ¬A ∨ B.

  • A conditional can be rewritten as a disjunction (and vice versa).
  • This equivalence is fundamental to understanding conditionals in propositional logic.
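
A quick illustration (not from the excerpt), combining MC with DN applied to a subformula:

  1. ¬P → Q (premise)
  2. ¬¬P ∨ Q [MC 1]
  3. P ∨ Q [DN 2]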

📐 Biconditional Exchange (↔ex)

Rule:

  • [(A → B) & (B → A)] ⇐⇒ (A ↔ B)

What it captures:

  • The relation between conditionals and biconditionals.
  • A biconditional is logically equivalent to the conjunction of two conditionals going in opposite directions.
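
A small illustration (not from the excerpt):

  1. P ↔ Q (premise)
  2. (P → Q) & (Q → P) [↔ex 1]
  3. P → Q [&E 2]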

🧪 Example proof using multiple replacement rules

Argument: ¬(P → Q), ∴ P & ¬Q

Proof:

  1. ¬(P → Q) [premise]
  2. ¬(¬P ∨ Q) [MC 1, replacing P → Q with ¬P ∨ Q]
  3. ¬¬P & ¬Q [DeM 2, distributing negation]
  4. P & ¬Q [DN 3, removing double negation]

Why this is simpler:

  • The excerpt notes: "As always, we could prove this argument using only the basic rules. With rules of replacement, though, the proof is much simpler."
  • Without replacement rules, this would require many more lines and subproofs.

🔢 Rules for quantifiers (preview)

🔢 Overview of quantifier rules

The excerpt transitions to quantified logic (QL) and introduces:

  • Four new basic rules: introduction and elimination rules for each quantifier (∀ and ∃).
  • All SL rules still apply: both basic and derived rules from sentential logic carry over to QL.
  • One new derived replacement rule: quantifier negation (not detailed in this excerpt).

🔢 Substitution instances

For a wff A, a constant c, and a variable x, a substitution instance of ∀xA or ∃xA is the wff that we get by replacing every occurrence of x in A with c. We call c the instantiating constant.

Notation:

  • Write the original quantified expression as ∀xAx or ∃xAx.
  • Write the substitution instance as Ac.
  • A, x, and c are meta-variables (stand-ins for any wff, variable, and constant).

Examples from the excerpt:

  • Aa → Ba, Af → Bf, and Ak → Bk are all substitution instances of ∀x(Ax → Bx); instantiating constants are a, f, and k.
  • Raj, Rdj, and Rjj are substitution instances of ∃zRzj; instantiating constants are a, d, and j.

🔢 Universal Elimination (∀E)

Rule:

  • From ∀xAx on line m, infer Ac for any constant c.

What it means:

  • If you have ∀xAx, you can infer that anything is an A.
  • You can infer any substitution instance: Aa, Ab, Az, Ad₃, etc.

Example from the excerpt:

  1. ∀x(Mx → Rxd) [premise]
  2. Ma → Rad [∀E 1, instantiating with a]
  3. Md → Rdd [∀E 1, instantiating with d]

🔢 Existential Introduction (∃I)

Rule:

  • From Ac on line m, infer ∃xAx.

What it means:

  • If you know that something is a P (e.g., Pa), then ∃xPx follows.
  • It is legitimate to infer ∃xPx if you have any particular instance.

Important flexibility:

  • The variable x does not need to replace all occurrences of the constant c.
  • You can decide which occurrences to replace and which to leave in place.

Examples from the excerpt:

  1. Ma → Rad [premise]
  2. ∃x(Ma → Rax) [∃I 1, replacing only the second a]
  3. ∃x(Mx → Rxd) [∃I 1, replacing both occurrences of a]
  4. ∃x(Mx → Rad) [∃I 1, replacing only the first a, leaving d]
  5. ∃y∃x(Mx → Ryd) [∃I 4, now replacing the remaining a with y]
  6. ∃z∃y∃x(Mx → Ryz) [∃I 5, now replacing d with z]

🔢 Universal Introduction (∀I)

The challenge:

  • A universal claim like ∀xPx would be proven if every substitution instance (Pa, Pb, ...) had been proven.
  • But there are infinitely many constants in QL, so proving every substitution instance is impossible.

Note: The excerpt cuts off before explaining how ∀I actually works, but it sets up the problem that the rule must solve.


6.4 Rules for quantifiers

🧭 Overview

🧠 One-sentence thesis

Quantified logic (QL) extends sentential logic (SL) by adding four basic rules—introduction and elimination for both universal and existential quantifiers—that govern how to move between general claims and their specific instances.

📌 Key points (3–5)

  • What substitution instances are: replacing every occurrence of a variable in a quantified formula with a constant produces a substitution instance.
  • Universal elimination (∀E): from a universal claim "for all x, Ax," you can infer any specific instance "Ac" for any constant c.
  • Existential introduction (∃I): from a specific instance "Ac," you can infer the existential claim "there exists an x such that Ax."
  • Common confusion—proxy constants: when eliminating an existential (∃E), you must use a fresh constant that doesn't appear elsewhere; when introducing a universal (∀I), the constant must not appear in any undischarged assumption.
  • Quantifier negation (QN): negating a quantifier is equivalent to switching the quantifier and moving the negation inward (e.g., "not for all x" ⇔ "there exists an x such that not").

🔤 Substitution instances

🔤 What a substitution instance is

A substitution instance of ∀xA or ∃xA is the formula obtained by replacing every occurrence of x in A with a constant c.

  • The constant c is called the instantiating constant.
  • Notation: write the quantified expression as ∀xAx or ∃xAx, and the substitution instance as Ac.
  • Example: "Aa → Ba," "Af → Bf," and "Ak → Bk" are all substitution instances of ∀x(Ax → Bx), with instantiating constants a, f, and k respectively.
  • Example: "Raj," "Rdj," and "Rjj" are substitution instances of ∃zRzj, with instantiating constants a, d, and j respectively.

🔍 Meta-variables vs object-language symbols

  • A, x, and c are meta-variables: stand-ins for any formula, variable, and constant.
  • When we write Ac, the constant c may occur multiple times in the formula A.
  • Don't confuse: the meta-variable notation is a shorthand for describing the rule; it is not itself part of QL.

🔽 Elimination rules (from general to specific)

🔽 Universal elimination (∀E)

If you have ∀xAx, you can infer any substitution instance Ac for any constant c.

The rule:

m  ∀xAx
   Ac     ∀E m
  • Intuition: if something is true for all x, then it is true for any particular thing.
  • You can infer Aa, Ab, Az, Ad₃, or any other substitution instance.
  • Example proof steps:
    • From "∀x(Mx → Rxd)" you can derive "Ma → Rad" (∀E, replacing x with a).
    • From the same premise you can also derive "Md → Rdd" (∀E, replacing x with d).

🔼 Existential elimination (∃E)

If you have ∃xAx, you can reason about "whatever satisfies Ax" by introducing a proxy constant c in a subproof; any conclusion B that does not mention c can then be inferred outside the subproof.

The rule:

m  ∃xAx
n    Ac*
⋮
p    B
   B     ∃E m, n–p

*The constant c must not appear in ∃xAx, in B, or in any undischarged assumption.

  • Intuition: "there exists an x such that Ax" means some thing satisfies A, but we don't know which; so we give it a temporary name (proxy) to reason about it.
  • The proxy constant is like a placeholder—think of it as "call this thing Ishmael" in the excerpt's analogy.
  • Restrictions on the proxy c:
    • Must be a fresh constant not used elsewhere.
    • Cannot appear in the original existential sentence ∃xAx.
    • Cannot appear in the conclusion B.
    • Cannot appear in any undischarged assumption.
  • Example: to prove ∃xTx from ∃xSx and ∀x(Sx → Tx):
    1. Assume ∃xSx.
    2. Open a subproof with proxy "Si."
    3. Derive "Ti" inside the subproof.
    4. Derive "∃xTx" inside the subproof (∃I).
    5. Close the subproof and conclude "∃xTx" (∃E).
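
Laid out in proof format, those steps look like this (a sketch, using i as the proxy constant as in the excerpt):

  1. ∃xSx (premise)
  2. ∀x(Sx → Tx) (premise)
  3. | Si (assumption for ∃E)
  4. | Si → Ti [∀E 2]
  5. | Ti [→E 4, 3]
  6. | ∃xTx [∃I 5]
  7. ∃xTx [∃E 1, 3–6]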

🔼 Introduction rules (from specific to general)

🔼 Existential introduction (∃I)

From a specific instance Ac, you can infer the existential claim ∃xAx.

The rule:

m  Ac
   ∃xAx     ∃I m
  • Intuition: if a particular thing satisfies A, then something satisfies A.
  • The variable x does not need to replace all occurrences of the constant c—you choose which occurrences to replace.
  • Example: from "Ma → Rad" you can derive:
    • "∃x(Ma → Rax)" (replacing only the second occurrence of a).
    • "∃x(Mx → Rxd)" (replacing both occurrences of a).
    • "∃x(Mx → Rad)" (replacing only the first occurrence of a).
    • "∃y∃x(Mx → Ryd)" (chaining multiple ∃I steps).

🔽 Universal introduction (∀I)

From a specific instance Ac, you can infer the universal claim ∀xAx, provided c does not occur in any undischarged assumption.

The rule:

m  Ac*
   ∀xAx     ∀I m

*The constant c must not occur in any undischarged assumption.

  • Intuition: if you can prove Ac for an arbitrary constant c (one you know nothing special about), then Ax holds for all x.
  • Critical restriction: c must not appear in any undischarged assumption—otherwise c is not arbitrary.
  • Example (valid):
    1. Premise: ∀xMx.
    2. Derive Ma (∀E).
    3. Conclude ∀yMy (∀I)—valid because a was arbitrary.
  • Example (invalid):
    1. Premise: ∀xRxa.
    2. Derive Raa (∀E).
    3. Cannot conclude ∀yRyy (∀I)—invalid because a appears in the premise, so it is not arbitrary.
  • Don't confuse: c may appear in a discharged (closed) assumption; the rule only forbids c in undischarged assumptions.

🔄 Quantifier negation (QN)

🔄 The quantifier negation rule

Negating a quantifier is equivalent to switching the quantifier type and moving the negation inside.

The rule (replacement):

¬∀xA  ⇔  ∃x¬A
¬∃xA  ⇔  ∀x¬A     QN
  • This is a replacement rule, so it can be applied to whole sentences or sub-formulas.
  • Intuition:
    • "Not everything is A" ⇔ "Something is not A."
    • "Nothing is A" ⇔ "Everything is not A."
  • The excerpt proves one direction (∀xAx ⊢ ¬∃x¬Ax) with a complex proof involving nested subproofs and reductio ad absurdum.
  • The other direction (¬∃x¬A ⊢ ∀xA) is left as an exercise.

🛠️ Why QN is useful

  • Often you need to translate between quantifiers when constructing proofs.
  • Example: to prove ¬∃x¬Ax from ∀xAx, you assume ∃x¬Ax for reductio, introduce a proxy ¬Ac, derive Ac from the universal, reach contradiction, and discharge the assumption.
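
One way that derivation might be laid out (a sketch, not necessarily the excerpt's exact proof; c is a fresh proxy constant):

  1. ∀xAx (premise)
  2. | ∃x¬Ax (for reductio)
  3. | | ¬Ac (assumption for ∃E)
  4. | | | ∀xAx (for reductio)
  5. | | | Ac [∀E 4]
  6. | | | ¬Ac [R 3]
  7. | | ¬∀xAx [¬I 4–6]
  8. | ¬∀xAx [∃E 2, 3–7]
  9. | ∀xAx [R 1]
  10. ¬∃x¬Ax [¬I 2–9]

The ∃E step is legitimate because c appears neither in ∃x¬Ax, nor in the conclusion ¬∀xAx, nor in any undischarged assumption.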

🆔 Rules for identity

🆔 Identity introduction (=I)

You can assert that any constant is identical to itself, without any premises.

The rule:

c = c     =I
  • No prior lines are required.
  • Intuition: self-identity is always true.
  • Limitation: you cannot use =I to conclude a = b (two different constants)—no non-identity premises can justify such a claim.

🔄 Identity elimination (=E)

If c = d, then any sentence true of c is also true of d (and vice versa).

The rule:

m  c = d
n  A
   A[c/d]     =E m, n

Notation: A[c/d] means a sentence produced by replacing some or all occurrences of c in A with d (or d with c).

  • Intuition: identical things share all properties.
  • You choose which occurrences to replace—you need not replace all of them.
  • Example: if you know "Raa" and "a = b," you can derive "Rab," "Rba," or "Rbb."
  • Don't confuse: this is not the same as substitution instances, because you are not replacing a variable with a constant; you are replacing one constant with another that is identical to it.

6.5 Rules for identity

🧭 Overview

🧠 One-sentence thesis

Identity rules in formal logic allow us to assert that any constant is identical to itself without premises, and to substitute identical terms for one another in any sentence.

📌 Key points (3–5)

  • Identity introduction (=I): you can always write "c = c" at any point in a proof without needing any premises or prior lines.
  • Identity elimination (=E): if you know "a = b", you can replace some or all occurrences of "a" with "b" (or vice versa) in any sentence.
  • Common confusion: knowing many shared properties between a and b (e.g., Aa & Ab, Ba & Bb, etc.) is not enough to conclude a = b—identity claims require the identity predicate itself.
  • Why it matters: identity rules enable reasoning about substitution and equivalence in formal proofs involving the identity predicate.

🔑 The two identity rules

🔑 Identity introduction (=I)

Identity introduction rule: for any constant c, you can write "c = c" on any line with only the =I rule as justification.

  • This rule requires no premises and refers to no prior lines of the proof.
  • It reflects the logical truth that anything is always identical to itself.
  • You can invoke this rule at any point in a proof for any constant you are working with.

Example: In any proof, you can write "a = a" or "f = f" on a new line and justify it simply with "=I".

🔄 Identity elimination (=E)

Identity elimination rule: if you have shown "c = d" on line m, and you have a sentence A on line n, you can derive a new sentence by replacing some or all occurrences of c with d (or d with c) in A.

  • The notation A[c/d] means: a sentence produced by replacing some or all instances of c in A with d, or instances of d with c.
  • This is not the same as a substitution instance—you do not have to replace every occurrence; you may replace only some.
  • The rule is written formally as:
    • Line m: c = d
    • Line n: A
    • Conclusion: A[c/d] (justified by =E m, n)

Example: If you know "Raa" and you have shown "a = b", you can derive "Rab", "Rba", or "Rbb" by replacing occurrences of a with b.
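
In the proof format used elsewhere in these notes (a minimal sketch, citing the identity line first as the rule schema does):

  1. Raa (premise)
  2. a = b (premise)
  3. Rab [=E 2, 1]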

🚫 What identity rules do not allow

🚫 Shared properties are not enough

  • Suppose you know many things true of a are also true of b:
    • Aa & Ab
    • Ba & Bb
    • ¬Ca & ¬Cb
    • Da & Db
    • ¬Ea & ¬Eb
    • and so on...
  • This pattern is not sufficient to conclude "a = b".
  • In general, no sentences that do not already contain the identity predicate can justify the conclusion "a = b".
  • Don't confuse: having all the same properties is not the same as being identical in formal logic—identity must be explicitly stated or derived using identity rules.

🚫 Identity introduction with two different constants

  • The =I rule will not justify "a = b" or any other identity claim containing two different constants.
  • It only allows you to assert "c = c" for a single constant.

📝 Identity rules in action

📝 Sample proof structure

The excerpt provides this proof to illustrate both rules:

| Line | Statement | Justification |
| --- | --- | --- |
| 1 | ∀x∀y x = y | (premise) |
| 2 | ∃xBx | (premise) |
| 3 | ∀x(Bx → ¬Cx) | (premise) |
| 4 | Be | (assumption for ∃E) |
| 5 | ∀y e = y | ∀E 1 |
| 6 | e = f | ∀E 5 |
| 7 | Bf | =E 6, 4 |
| 8 | Bf → ¬Cf | ∀E 3 |
| 9 | ¬Cf | →E 8, 7 |
| 10 | ¬Cf | ∃E 2, 4–9 |
| 11 | ∀x¬Cx | ∀I 10 |
| 12 | ¬∃xCx | QN 11 |

📝 How the rules work together

  • Line 6 uses universal elimination (∀E) on line 5 to get "e = f".
  • Line 7 uses identity elimination (=E) to substitute f for e in "Be", yielding "Bf".
  • This substitution allows the proof to continue and eventually derive the conclusion "¬∃xCx".
  • The identity rules enable term replacement that would otherwise be impossible with standard quantifier and propositional rules alone.

6.6 Proof strategy

🧭 Overview

🧠 One-sentence thesis

Effective proof construction requires working backwards from the conclusion, forwards from the premises, and flexibly applying replacement rules and indirect proof when direct approaches fail.

📌 Key points (3–5)

  • Work backwards from the goal: identify the introduction rule for the conclusion's main operator to plan the final steps.
  • Work forwards from premises: use elimination rules on what you already have to generate new sentences.
  • Use replacement rules strategically: transform difficult targets (e.g., disjunctions, negated existentials) into easier equivalent forms.
  • Common confusion: direct vs. indirect proof—both are formally legitimate; one may be easier depending on the problem.
  • Persistence matters: no simple recipe exists; try different approaches and link short proofs together for long ones.

🎯 Backward reasoning from the conclusion

🎯 Identify the introduction rule

  • Look at the conclusion's main logical operator and determine which introduction rule applies.
  • This tells you what should happen just before the last line of the proof.
  • Treat that penultimate step as your new goal and repeat the process.

🔀 Example: conditional conclusions

If your conclusion is a conditional A → B, plan to use the → I rule.

  • The → I rule requires:
    • Start a subproof assuming A.
    • Derive B within that subproof.
  • Example: To prove "If it rains, the ground is wet," assume "it rains" and work toward "the ground is wet."

🔍 Forward reasoning from premises

🔍 Apply elimination rules

  • Examine the premises (or sentences derived so far).
  • Think about the elimination rules for their main operators.
  • These rules reveal your available options.

🧩 Universal and existential quantifiers

| Quantifier | Strategy | Key constraint |
| --- | --- | --- |
| ∀x A | Instantiate it for any helpful constant | Choose constants strategically |
| ∃x A | Use the ∃E rule: assume a substitution instance with a fresh constant c, then derive a conclusion that does not contain c | c must not appear in the premises or in the final conclusion |

  • For a short proof, you may be able to eliminate premises and introduce the conclusion directly.
  • Long proofs are just multiple short proofs linked together; alternate between working backward and forward to fill gaps.

🔄 Transformation strategies

🔄 Use replacement rules

Replacement rules can often make your life easier.

  • If a proof seems impossible, try different substitutions.
  • Replacement rules transform sentences into equivalent forms that may be easier to work with.

🎭 Common transformations

  • Disjunctions: To prove A ∨ B, it is often easier to prove ¬A → B and apply the MC (Material Conditional) rule.
  • Negated existentials: To prove ¬∃x A, it is often easier to prove ∀x ¬A and apply the QN (Quantifier Negation) rule.
  • Negated disjunctions: Immediately think of DeMorgan's rule when you see a negated disjunction.
  • Some replacement rules should become second nature through practice.

🔁 Indirect proof and iteration

🔁 Indirect proof as a tool

If you cannot find a way to show something directly, try assuming its negation.

  • Most proofs can be done either directly or indirectly.
  • Both approaches are formally legitimate.
  • One way might be easier or spark your imagination more than the other.
  • Don't forget this option when direct approaches fail.

🔄 Iterative refinement

  • Repeat as necessary: Once you decide how to reach the conclusion, revisit the premises to see what you can do with them.
  • Then reconsider the target sentences and how to reach them.
  • Persist: Try different approaches; if one fails, try something else.
  • There is no simple recipe for proofs; practice is essential.

📐 Proof-theoretic concepts

📐 The turnstile symbol

The symbol '⊢' indicates that a proof is possible. This symbol is called the turnstile.

  • Sometimes called a single turnstile to distinguish it from the double turnstile (⊨) used for semantic entailment.
  • {A₁, A₂, ...} ⊢ B means: it is possible to give a proof of B with A₁, A₂, ... as premises.
  • A ⊢ B means: there is a proof of B with A as a premise (curly braces omitted for a single premise).
  • ⊢ C means: there is a proof of C with no premises.
  • Logical proofs are often called derivations, so A ⊢ B reads as "B is derivable from A."

🏆 Theorems

A theorem is a sentence that is derivable without any premises; i.e., T is a theorem if and only if ⊢ T.

  • Showing something is a theorem: provide a proof of it (relatively easy).
  • Showing something is NOT a theorem: much harder.
    • If its negation is a theorem, you can prove the negation (e.g., proving ¬(Pa & ¬Pa) shows (Pa & ¬Pa) cannot be a theorem).
    • For sentences that are neither theorems nor negations of theorems, you must demonstrate that no proof is possible—not just that certain strategies fail.
    • Even failing a thousand times doesn't prove impossibility; perhaps the proof is just too long or complex.

🔗 Provable equivalence

Two sentences A and B are provably equivalent if and only if each can be derived from the other; i.e., A ⊢ B and B ⊢ A.

  • Showing provable equivalence: provide a pair of proofs (relatively easy).
  • Showing NOT provably equivalent: much harder, as hard as showing a sentence is not a theorem.
  • These problems are interchangeable: a certain sentence would be a theorem if and only if A and B were provably equivalent.

⚠️ Provable inconsistency

The set of sentences {A₁, A₂, ...} is provably inconsistent if and only if a contradiction is derivable from it; i.e., for some sentence B, {A₁, A₂, ...} ⊢ B and {A₁, A₂, ...} ⊢ ¬B.

  • Showing a set is provably inconsistent: assume the sentences and prove a contradiction (easy).
  • Showing a set is NOT provably inconsistent: much harder; requires showing that proofs of a certain kind are impossible, not just providing a proof or two.

🔗 Proofs and models

🔗 Connection between theorems and tautologies

  • The excerpt notes there is a connection between theorems and tautologies.
  • Formal way to show a sentence is a theorem: prove it.
  • For each line of a proof, we can check if that line follows by the cited rule.
  • It may be hard to produce a twenty-line proof, but checking each line of an existing proof is not as hard.
  • Don't confuse: producing a proof (creative, potentially difficult) vs. verifying a proof (mechanical, easier).

6.7 Proof-theoretic concepts

🧭 Overview

🧠 One-sentence thesis

Proof-theoretic concepts provide a formal framework for deriving conclusions from premises using the turnstile symbol, and they correspond exactly to semantic concepts like tautology and validity, giving us complementary tools for analyzing arguments.

📌 Key points (3–5)

  • The turnstile symbol ⊢: represents that a proof is possible (derivability), distinct from semantic entailment ⊨.
  • Theorems: sentences derivable without any premises; proving something is a theorem is straightforward (give a proof), but showing something is not a theorem is very hard.
  • Provable equivalence and inconsistency: two sentences are provably equivalent if each derives the other; a set is provably inconsistent if it derives a contradiction.
  • Common confusion: single turnstile (proof/derivation) vs. double turnstile (semantic entailment/models)—they represent different approaches but yield the same results.
  • Why it matters: proofs and models are interchangeable—a sentence is a theorem if and only if it is a tautology—so you can choose whichever method is easier for a given task.

🔤 The turnstile and derivability

🔤 What the turnstile means

The symbol ⊢ (called the turnstile or single turnstile) indicates that a proof is possible.

  • {A₁, A₂, ...} ⊢ B means: there exists a proof of B with A₁, A₂, ... as premises.
  • A ⊢ B means: there is a proof of B with A as a premise (curly braces omitted for single premises).
  • ⊢ C means: there is a proof of C with no premises at all.

📖 Derivations

  • Logical proofs are often called derivations.
  • So A ⊢ B can be read as "B is derivable from A."
  • The turnstile is about formal proof structure, not about truth in models.

Don't confuse: ⊢ (single turnstile, proof-theoretic) with ⊨ (double turnstile, semantic entailment from chapter 5). They represent different concepts but turn out to be equivalent.

🏛️ Theorems and their properties

🏛️ What is a theorem

A theorem is a sentence that is derivable without any premises; i.e., T is a theorem if and only if ⊢ T.

  • It is not too hard to show something is a theorem—just give a proof of it.
  • Example: proving ⊢ T requires constructing a formal derivation with no premises that ends in T.

🚫 Showing something is not a theorem

  • Showing that something is not a theorem is much harder.
  • If the negation is a theorem, you can provide a proof of the negation.
    • Example: it is easy to prove ¬(Pa & ¬Pa), which shows that (Pa & ¬Pa) cannot be a theorem.
  • For a sentence that is neither a theorem nor the negation of a theorem, there is no easy way.
  • You would have to demonstrate that no proof is possible—not just that certain strategies fail.
  • Even failing in a thousand different ways doesn't prove impossibility; perhaps the proof is just too long or complex.

🔗 Provable equivalence and inconsistency

🔗 Provable equivalence

Two sentences A and B are provably equivalent if and only if each can be derived from the other; i.e., A ⊢ B and B ⊢ A.

  • Relatively easy to show: just provide a pair of proofs (one in each direction).
  • Showing sentences are not provably equivalent is much harder—as hard as showing a sentence is not a theorem.
  • The excerpt notes these problems are interchangeable: there exists a sentence (namely, the biconditional A ↔ B) that would be a theorem if and only if A and B were provably equivalent.

⚠️ Provable inconsistency

The set of sentences {A₁, A₂, ...} is provably inconsistent if and only if a contradiction is derivable from it; i.e., for some sentence B, {A₁, A₂, ...} ⊢ B and {A₁, A₂, ...} ⊢ ¬B.

  • Easy to show a set is provably inconsistent: assume the sentences in the set and prove a contradiction.
  • Showing a set is not provably inconsistent is much harder—requires showing that proofs of a certain kind are impossible, not just providing one or two proofs.

🔄 The connection between proofs and models

🔄 Theorems vs. tautologies

  • There is a formal way to show a sentence is a theorem: prove it line by line, checking each step.
  • Showing a sentence is a tautology requires reasoning in English about all possible models—no formal checking method.
  • Given a choice: it is easier to show something is a theorem than to show it is a tautology.

🔄 Non-theorems vs. non-tautologies

  • There is no formal way to show a sentence is not a theorem—you must reason about all possible proofs.
  • Yet there is a formal method for showing a sentence is not a tautology: construct a model in which it is false.
  • Given a choice: it is easier to show something is not a tautology than to show it is not a theorem.

🎯 The fundamental equivalence

Key result: A sentence is a theorem if and only if it is a tautology.

  • If you provide a proof ⊢ A (showing A is a theorem), it follows that ⊨ A (A is a tautology).
  • If you construct a model in which A is false (showing A is not a tautology), it follows that A is not a theorem.
  • In general: A ⊢ B if and only if A ⊨ B.

🧰 Practical implications

Because proofs and models are equivalent, the following hold:

| Semantic concept | Proof-theoretic equivalent |
| --- | --- |
| An argument is valid | The conclusion is derivable from the premises |
| Two sentences are logically equivalent | They are provably equivalent |
| A set of sentences is consistent | It is not provably inconsistent |

Strategy: You can pick and choose when to think in terms of proofs and when to think in terms of models, doing whichever is easier for a given task.

  • Proofs and models give a versatile toolkit for working with arguments.
  • Table 6.1 (referenced in the excerpt) summarizes when it is best to give proofs and when to use models.

Example: To show an argument is valid, you might find it easier to construct a formal proof (if the derivation is short) or to reason about models (if checking truth tables is simpler). Both approaches are legitimate and yield the same answer.


6.8 Proofs and models

🧭 Overview

🧠 One-sentence thesis

A sentence is a theorem if and only if it is a tautology, so we can choose whether to work with formal proofs or semantic models depending on which method is easier for the task at hand.

📌 Key points (3–5)

  • The core equivalence: theorems and tautologies are interchangeable; provability and semantic entailment agree (A ⊢ B if and only if A ⊨ B).
  • When proofs are easier: showing a sentence is a theorem, showing two sentences are provably equivalent, or showing a set is provably inconsistent—all require only constructing a proof.
  • When models are easier: showing a sentence is not a tautology, showing sentences are not equivalent, or showing a set is consistent—all can be done by constructing a single counterexample model.
  • Common confusion: proving vs. disproving—positive claims (is a theorem, is a tautology) may be easier in one system, while negative claims (is not a theorem, is not a tautology) may be easier in the other.
  • Why it matters: this toolkit lets us measure logical weight in a purely formal way—valid arguments get formal proofs, invalid arguments get formal counterexamples.

🔄 The central equivalence

🔄 Theorems and tautologies

A sentence is a theorem if and only if it is a tautology.

  • If we provide a proof of ⊢ A (showing it is a theorem), it follows that A is a tautology (⊨ A).
  • If we construct a model in which A is false (showing it is not a tautology), it follows that A is not a theorem.
  • More generally: A ⊢ B if and only if A ⊨ B.

🔗 Three key correspondences

The excerpt establishes three parallel relationships:

| Semantic concept | Proof-theoretic concept |
| --- | --- |
| Valid argument | Conclusion is derivable from premises |
| Logically equivalent sentences | Provably equivalent sentences |
| Consistent set of sentences | Not provably inconsistent set |

  • Valid argument: an argument is valid if and only if the conclusion is derivable from the premises.
  • Logical equivalence: two sentences are logically equivalent if and only if they are provably equivalent (A ⊢ B and B ⊢ A).
  • Consistency: a set of sentences is consistent if and only if it is not provably inconsistent.

🛠️ Choosing the right tool

🛠️ When proofs are easier

Positive existence claims are easier to establish with proofs:

  • Showing a sentence is a theorem: provide a formal proof (⊢ A). Each line can be checked against the cited rule, making verification straightforward.
  • Showing two sentences are provably equivalent: provide two proofs (A ⊢ B and B ⊢ A).
  • Showing a set is provably inconsistent: assume the sentences and prove a contradiction (derive both B and ¬B from the set).

Why easier? Because you only need to construct one proof and check each line follows the rules.

🔍 When models are easier

Negative claims are easier to establish with models:

  • Showing a sentence is not a tautology: construct a single model in which the sentence is false. This is a formal method.
  • Showing sentences are not equivalent: give a model in which the two sentences have different truth values.
  • Showing a set is consistent: give a model in which all sentences in the set are true.

Why easier? Showing something is not a theorem would require reasoning in English about all possible proofs (showing proofs of a certain kind are impossible), which is much harder than constructing one counterexample model.

Don't confuse: Showing a sentence is a tautology requires reasoning in English about all possible models (no formal checking method), but showing it is not a tautology only requires one model.

📋 Summary table

The excerpt provides a decision table for choosing methods:

| Question | YES (easier method) | NO (easier method) |
| --- | --- | --- |
| Is A a tautology? | prove ⊢ A | give a model where A is false |
| Is A a contradiction? | prove ⊢ ¬A | give a model where A is true |
| Is A contingent? | give two models (A true in one, false in the other) | prove ⊢ A or ⊢ ¬A |
| Are A and B equivalent? | prove A ⊢ B and B ⊢ A | give a model where A and B differ |
| Is the set A consistent? | give a model where all the sentences in A are true | from A, prove some B and ¬B |
| Is the argument 'P, ∴ C' valid? | prove P ⊢ C | give a model where P is true and C is false |

Example: To show an argument is valid, construct a formal proof; to show it is invalid, provide a formal counterexample model where premises are true but conclusion is false.

🏗️ Soundness and completeness

🏗️ Why the equivalence matters

The fact that provability (⊢) and semantic entailment (⊨) are interchangeable is not trivial—it requires proof.

The excerpt identifies two fundamental questions:

  • Soundness: Does A ⊢ B imply A ⊨ B? (Every provable argument is valid.)
  • Completeness: Does A ⊨ B imply A ⊢ B? (Every valid argument is provable.)

Don't confuse: The symbols '⊨' and '⊢' look similar, but proving they are truly interchangeable is not simple.

🔐 Soundness explained

A proof system is sound if there are no proofs of invalid arguments.

  • Soundness asks: why should an argument that can be proven necessarily be a valid argument?
  • Strategy: show that each individual inference rule cannot turn a valid argument into an invalid one.
  • If each rule individually preserves validity, then using rules in combination (i.e., a full proof) also preserves validity.

🧪 Example: the & I rule

The excerpt walks through the & I (conjunction introduction) rule:

  • Suppose we use & I to add A & B to a valid argument.
  • For the rule to apply, A and B must already be available in the proof.
  • Since the argument so far is valid, A and B are either premises or valid consequences of premises.
  • Any model making the premises true must make A and B true.
  • By the definition of truth in QL, A & B is also true in such a model.
  • Therefore, A & B validly follows from the premises.
  • Conclusion: using & I to extend a valid proof produces another valid proof.

To fully prove soundness, we would need similar arguments for all 16 other basic rules (the excerpt notes this is tedious and beyond scope). Derived rules don't need separate proof because they are consequences of basic rules.

✅ Completeness explained

A proof system is complete if there is a proof of every valid argument.

  • Completeness asks: why think that every valid argument is an argument that can be proven?
  • Even if we prove soundness (every theorem is a tautology), we can still ask whether every tautology is a theorem.
  • The excerpt introduces the problem but does not provide the proof (it falls beyond the scope).

🎯 The versatile toolkit

Because soundness and completeness hold:

  • We can pick and choose when to think in terms of proofs and when to think in terms of models.
  • We do whichever is easier for a given task.
  • If we can translate an argument into QL, we can measure its logical weight in a purely formal way:
    • If deductively valid → give a formal proof.
    • If invalid → provide a formal counterexample.

6.9 Soundness and completeness

🧭 Overview

🧠 One-sentence thesis

A proof system is both sound (every provable argument is valid) and complete (every valid argument is provable), allowing us to choose freely between giving proofs or constructing models depending on which is easier.

📌 Key points (3–5)

  • Soundness means there are no proofs of invalid arguments—if you can prove it, it must be valid.
  • Completeness means every valid argument has a proof—if it's valid, you can prove it.
  • Common confusion: the symbols '⊨' (semantic entailment) and '⊢' (provability) look similar, but proving they are interchangeable requires careful demonstration.
  • Why it matters: because QL is both sound and complete, we can use whichever method (proof or model) is more convenient for the task at hand.
  • How to show soundness: demonstrate that each individual inference rule preserves validity, so using them in combination cannot turn a valid argument invalid.

🔍 What soundness means

🔍 The soundness question

A proof system is sound if there are no proofs of invalid arguments.

  • The question: why should we think that an argument that can be proven is necessarily a valid argument?
  • In symbols: why does A ⊢ B imply A ⊨ B?
  • It's not enough to succeed at proving many valid arguments and fail at proving invalid ones—we need to show that any possible proof is a proof of a valid argument.

🧱 Strategy for proving soundness

  • The approach is step-wise: show that each inference rule individually cannot change a valid argument into an invalid one.
  • If each rule preserves validity on its own, then using them in combination cannot make an argument invalid.
  • Since a proof is just a series of lines each justified by a rule of inference, this shows every provable argument is valid.

📝 Example: the conjunction introduction rule

  • Suppose we use &I to add A & B to a valid argument.
  • For the rule to apply, A and B must already be available in the proof.
  • Since the argument so far is valid, A and B are either premises or valid consequences of the premises.
  • Any model making the premises true must make both A and B true.
  • According to the definition of truth in QL, this means A & B is also true in such a model.
  • Therefore A & B validly follows from the premises, so using &I extends a valid proof to another valid proof.

⚙️ What's required for full soundness

  • To show the proof system is sound, we would need similar arguments for all the other inference rules.
  • Since derived rules are consequences of basic rules, it suffices to provide arguments for the 16 other basic rules (beyond the scope of this book).
  • Once proven sound, it follows that every theorem is a tautology.

🎯 What completeness means

🎯 The completeness question

A proof system is complete if there is a proof of every valid argument.

  • Even after proving soundness, we can still ask: why think that every valid argument is an argument that can be proven?
  • In symbols: why does A ⊨ B imply A ⊢ B?
  • This is the reverse direction from soundness.

🏆 Gödel's completeness result

  • Completeness for a language like QL was first proven by Kurt Gödel in 1929.
  • The proof is beyond the scope of this book.
  • The important point: the proof system for QL is both sound and complete.
  • Don't confuse: this is not true for all proof systems and all formal languages—QL is special in this regard.

🛠️ Practical implications

🛠️ Freedom to choose methods

Because QL is both sound and complete, we can choose to give proofs or construct models—whichever is easier for the task at hand.

| Task | Proof approach | Model approach |
| --- | --- | --- |
| Show validity | Prove P ⊢ C | Would need to check all models (harder) |
| Show invalidity | Would need to show no proof exists (harder) | Give one model where P is true and C is false |
| Show tautology | Prove ⊢ A | Would need to check all models (harder) |
| Show contingency | Would need to show neither ⊢ A nor ⊢ ¬A (harder) | Give two models with different truth values |

🔄 The interchangeability of symbols

  • The symbols '⊨' (semantic entailment) and '⊢' (provability) are really interchangeable in QL.
  • This is not a simple or obvious fact—it requires proof.
  • Soundness and completeness together establish this interchangeability.
  • Example: if you can prove an argument, you know it's valid; if an argument is valid, you know a proof exists (even if you haven't found it yet).

📚 Key definitions

📚 Formal definitions from the excerpt

Theorem: A sentence A is a theorem if and only if ⊢ A.

Provably equivalent: Two sentences A and B are provably equivalent if and only if A ⊢ B and B ⊢ A.

Provably inconsistent: {A₁, A₂, ...} is provably inconsistent if and only if, for some sentence B, {A₁, A₂, ...} ⊢ (B & ¬B).

  • These definitions parallel semantic notions (tautology, logical equivalence, inconsistency) but are defined in terms of provability rather than truth in models.
  • Because of soundness and completeness, the proof-theoretic and semantic versions coincide in QL.