LING Meeting: Paul Pietroski (Rutgers)

Time: 
Friday, April 12, 2024 - 3:00 PM to 4:30 PM
Location: 
1108B Marie Mount Hall

SMPL Concepts, Conjunctive Meanings

This Linguistics General Meeting welcomes home our Emeritus comrade, Paul Pietroski, who will tell us about his recent work on a proof theory for natural language semantics, abstracted below.

Abstract: 

Here’s an old idea that I like: human linguistic meanings are instructions for how to build concepts of a special sort; and these concepts exhibit systematic patterns of intuitively impeccable inference that reflect psychologically simple forms of conjunction and negation. Consider, for example, (1) and (2).

1. A baker buttered a bun with a knife at dawn; so a baker buttered a bun.
2. No baker buttered a bun; so no baker buttered a bun at dawn.

But the old idea saddles us with questions about how the relevant concepts are generated. How are they combined, and what kinds of atomic concepts are permitted? Are the psychological forms of conjunction and negation computationally simpler than their logical counterparts in (3) and (4)?

3. ∃e∃x∃y[Rexy & Fx & Gy & He]
4. ∀e∀x∀y~[Rexy & Fx & Gy & He]
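As an illustrative sketch (my own, not from the talk), the Davidsonian point behind (1) and (2) can be modeled by treating a sentence meaning as a set of conjuncts predicated of an event. The predicate names below are hypothetical placeholders:

```python
# Sketch: a sentence meaning as a conjunction of event predicates.
# Predicate names ('baker', 'at_dawn', etc.) are illustrative placeholders.

sentence_full = {"baker(agent)", "buttered(e)", "bun(patient)",
                 "with_knife(e)", "at_dawn(e)"}
sentence_short = {"baker(agent)", "buttered(e)", "bun(patient)"}

# (1): under existential closure, dropping conjuncts preserves truth:
# any event satisfying every conjunct of the longer sentence satisfies
# every conjunct of the shorter one.
assert sentence_short <= sentence_full  # subset, so (1) is impeccable


def entails_under_negation(neg_premise, conclusion):
    """(2): under 'no', the direction reverses: if no event satisfies
    the smaller conjunct set, none satisfies any superset of it."""
    return neg_premise <= conclusion


assert entails_under_negation(sentence_short, sentence_full)  # (2) is impeccable
```

The subset check captures why lengthening an expression (adding "with a knife at dawn") narrows the class of verifying events, validating (1) downward and (2) upward.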

In the first part of the talk, I describe a possible Language of Thought (LoT) that is very simple computationally, but still strong enough to capture some important inferential patterns. From a formal perspective, this LoT is an elementary predicate calculus. Every generable concept is monadic; think of COW, BROWN, BROWN^COW, BROWN^HORSE, BLACK^HORSE, BLACK^COW, etc. This LoT provides no form of concept-negation: you can’t use COW to build a concept that applies to whatever COW doesn’t apply to. Nonetheless, there is a way to reconstruct propositional logic.

In the second part of the talk, I’ll show how to supplement this basic LoT in ways that increase expressive power dramatically, while leaving the core generative system unaffected. The net result at least approximates descriptive adequacy for natural language semantics. It delivers Aristotelian logic, but not the unattested “fourth corner quantifier” that corresponds to ∃x[Fx & ~Gx]. It also captures the inferential patterns that motivate Davidsonian event analyses, via examples like (1) and (2), and the broader idea that lengthening expressions often involves conjunction. I’ll be drawing on ideas from my book Conjoining Meanings, a recent paper by Thomas Icard and Larry Moss, and some joint work (in progress) with Icard.
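A toy model (my illustration, not Pietroski's formalism) of such a monadic, conjunctive LoT: concepts apply to individuals, the only combinator is conjunction (intersection), and nothing in the system builds a negated concept. The individual names are invented for the example:

```python
# Toy monadic LoT: each concept is the set of individuals it applies to.
# Individuals ('bessie', 'seabiscuit', ...) are invented placeholders.

COW = {"bessie", "clarabelle"}
HORSE = {"seabiscuit", "blackjack"}
BROWN = {"bessie", "seabiscuit"}
BLACK = {"clarabelle", "blackjack"}


def conj(c1, c2):
    """BROWN^COW applies to whatever both BROWN and COW apply to."""
    return c1 & c2


BROWN_COW = conj(BROWN, COW)
assert BROWN_COW == {"bessie"}

# Conjunction only narrows application, so BROWN^COW entails COW
# (compare the conjunct-dropping inference in (1)).
assert BROWN_COW <= COW

# Note what is absent: no operation here yields NOT-COW from COW;
# the generative system provides conjunction but no concept-negation.
```

The monotonicity of intersection is what makes the system's entailments "intuitively impeccable" while keeping it computationally weaker than a calculus with full logical negation.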