Will Morrisey Reviews

Book reviews and articles on political philosophy and literature.


    What Is Analytic Philosophy?

    November 16, 2019 by Will Morrisey

    Stephen Schwartz: A Brief History of Analytic Philosophy: From Russell to Rawls. West Sussex: John Wiley and Sons, 2012.


    “Personally and passionately involved in the enlightening and edifying enterprise of analytic philosophy,” Professor Schwartz calls it “the dominant Anglo-Saxon philosophical movement of the twentieth century” and also of this century, so far. He is unquestionably correct, especially with respect to academic philosophizing. With the compartmentalization of academic research, and with the division of labor institutional departmentalization reflects, academics ‘doing philosophy’ have needed a way to distinguish their enterprise from mathematics and from the sciences, ‘hard’ and ‘soft.’ At the same time, they could not ignore the intellectual authority of mathematics and science in the modern world, seen in the thought of Bacon and Descartes, two of modernity’s philosophic progenitors. This authority has increased in the centuries subsequent to their work, and it includes the application or attempted application of scientific method to the work of governments in the form of the ‘administrative state.’ Even its recent philosophic, often ideological rival, ‘post-modernism,’ which rejects rationalism in the name of a democratized Nietzscheanism, nonetheless eagerly uses the apparatus of the administrative state as its preferred instrument of ruling.

    “I am personally and passionately involved in the enlightening and edifying enterprise of analytic philosophy,” Schwartz writes. This engagement seldom injures his account of its history. Quite the contrary, for the most part: History written by a lover almost always outranks history written by a despiser, as a lover will attend to the features of his beloved, always wanting to know more about her.

    Modern philosophers have tended to group themselves into two major encampments, empiricists (‘Baconians’) and rationalists (‘Cartesians’). Although analytic philosophers began by distancing themselves sharply from Hegelianism, with its grand thesis-antithesis-synthesis historical dialectic, it’s hard to deny that they began by attempting to ‘synthesize’ modern empiricism and rationalism. Empiricists hold that all knowledge is based on experience; ‘experimental’ science tests ideas, initially demoted to the status of hypotheses, against concrete results obtained under the rationally controlled conditions of the laboratory—that is, a place of labor, of action, not of contemplation. Modern scientists, including Einstein himself, were quite relieved that the mathematical formula E=mc² found confirmation in the lab. Modern rationalists concentrate their attention on ideas thought to be innate in the human mind, ideas based on ‘pure’—that is to say, non-empirical, non-factual—reason. Can these two opposite approaches to philosophizing be combined? How?

    Bertrand Russell took the first, and very deep, stab at accomplishing this in his Principia Mathematica, co-written with Alfred North Whitehead and published between 1910 and 1913. Russell sought nothing less than to break with both Aristotelian and Hegelian logic. Taking over a project begun by the German mathematician Gottlob Frege, Russell treated logic like the calculus. As Jacob Klein has shown, the calculus itself was invented to register the interest of modern philosophers, beginning with Machiavelli, in kinetics. [1] Moderns are less interested in the stable figures of Pythagorean geometry than in a geometry of motion—of plotting points along a curve. Aristotelian, syllogistic logic seeks to understand things in accordance with their forms and ‘essences.’ Hence such standard syllogistic locutions as “All men are mortal; Socrates is a man; therefore, Socrates is mortal.” “Socrates” is the subject, “man” the predicate. To say “All zebras are animals” is logically the same as to say “Socrates is a man,” despite the fact that a zebra is a species, Socrates a person. Additionally, in syllogistic logic, “Socrates is married to Xanthippe” is logically identical to these other propositions because “married to Xanthippe” is the predicate—this, despite the fact that “married to” is a relation, whereas “is a man” is a statement about the intrinsic nature of Socrates.

    In mathematical logic, “predicates represent functions from objects to truth-values,” “functions” being a term from calculus. To say “Socrates is a man” is to say “Socrates satisfies the function ‘man.'” “‘All zebras are animals’ says that if any object satisfies the function ‘x is a zebra,’ then it satisfies the function ‘x is an animal.'” And to say that “Socrates is married to Xanthippe” is to say that the pair Socrates/Xanthippe satisfies the function “is married to.” Such sentences are structured like equations in calculus. They say nothing about the substance of the things equated. They do not posit essences, only the verbal equivalents of points on a line.
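The function-based reading of predicates can be sketched in code. The following Python fragment is my own illustration, not anything from Schwartz’s book: a one-place predicate is a function from objects to truth-values, a two-place relation a function from pairs of objects, and “All zebras are animals” becomes a conditional quantified over a toy domain (all names and the domain itself are invented for the example).

```python
# Predicates as functions from objects to truth-values (the Frege/Russell reading).
# The domain and the particular predicates below are illustrative assumptions.

def is_man(x):        # one-place predicate: object -> truth-value
    return x in {"Socrates", "Plato"}

def is_mortal(x):
    return x in {"Socrates", "Plato", "Bucephalus"}

def married_to(x, y):  # two-place relation: pair of objects -> truth-value
    return (x, y) in {("Socrates", "Xanthippe")}

domain = ["Socrates", "Plato", "Bucephalus", "Xanthippe"]

# "All men are mortal": any object satisfying 'man' satisfies 'mortal'.
all_men_mortal = all(is_mortal(x) for x in domain if is_man(x))

print(is_man("Socrates"))                   # Socrates satisfies the function 'man'
print(married_to("Socrates", "Xanthippe"))  # the pair satisfies 'is married to'
print(all_men_mortal)
```

Note that the machinery says nothing about what Socrates *is*; it only records which functions which objects satisfy—exactly the abandonment of ‘essences’ described above.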

    Russell’s logic differs from Hegel’s because it is analytic. It does not aim at producing a synthesis. When X meets not-X there is no necessary Y that comes out of the meeting. The logical principle of non-contradiction enables us to analyze but tells us nothing about synthesis. Analytic philosophy is as kinetic as Hegelianism, but it doesn’t try to tell us where we are going. No wonder Lenin hated it. [2]

    Because it is analytic and kinetic, not ‘essentialist’ or substantive and not synthetic, either, “The name ‘analytic philosophy’ refers more to the methods of analytic philosophy than to any particular doctrine that analytic philosophers have all shared.” For such philosophers, “insight comes from seeing how things are put together and how they can be prized apart; how they are constructed and how they can be reconstructed.” Schwartz notes the similarity between analytic philosophy and the more recent philosophies of ‘deconstructionism.’ The difference, it might be added, stems from the influence of Nietzsche on deconstructionists, which is entirely absent from the minds of the ‘analysts.’ Like the deconstructionists, and like Nietzsche, “analytic philosophers rejected the pretensions of the Enlightenment philosophers,” their grand schemes of rationally-controlled progress. Unlike the deconstructionists and Nietzsche, they did not question the Enlightenment philosophers’ “commitment to reason.” At any rate, analytic philosophy “is not a unified movement or school,” and indeed Russell himself changed his opinions on all manner of things throughout his long career.

    “The basic aspects of modernism—rejection of past traditions, experimentation with new methods and forms; fascination with and anxiety about technology and use of new technical methods; focusing on method, surface, expression, and language—all characterize analytic philosophy.” This method also comports with the ambition of modernism to master nature; by basing logic on a particular form of modern mathematics, the calculus, it intends to overcome reality even as it seeks to understand it. As Schwartz puts it, the analytic philosophers “saw their work as freeing philosophy and even society, from its past forms and obsessions.”

    “Mathematics is a priori and universal, so how can it be empirical?” Following Frege, Russell initially “treat[ed] logic mathematically, and then treat[ed] mathematics as a form of logic.” By abstracting from experience, mathematical equations give us certainty; for example, we can’t know empirically that there are infinitely many prime numbers because we can’t know if the one we’ve just mentioned is the last one. This appears to lead to a strict form of rationalism. Frege replies that “mathematical propositions are not based on experience or observation, but they are not the results of pure rational insight into the ultimate nature of reality, either.” They may not be ‘about’ anything other than themselves. Russell replies to Frege by counter-example: The famous Liar’s Paradox (a Cretan says, “All Cretans are liars”) doesn’t only yield an ’empty set,’ as mathematicians say; it isn’t a set at all. Originally a Fregean, Russell now could “no longer plausibly claim that mathematics was reducible to pure logic—that it was all analytic.” What he did claim (and here is where empiricism comes in) was that language can be analyzed. The Epicureans’ atomism can return because sentences can be rationally reduced to what Russell called “ultimate simples” or “logical atoms.” They are the objects perceived through the empeiria of this new empiricism; for logical purposes they have the same status as the natural atoms for Lucretius, the sense-data for Locke. And these empirical facts can be analyzed by means of symbolic logic, a logic that takes on the form of mathematical equations but without the need to translate the things analyzed into numbers. (In this, Russell sharply diverges from such systems as the Gematria in Judaism, which does indeed translate words into numbers. Russell would deny that such a move makes sense.)

    It was Russell’s sometime colleague, G. E. Moore, who attacked the then-dominant school of German Idealism, especially as seen in the thesis-antithesis-synthesis dialectic of Hegel. Hegel and his followers claimed that the dialectic had ontological content, that it registered the logical unfolding of the Absolute Spirit. Moore (with Russell) denied this. The negation of a ‘thesis’ by an ‘antithesis’ yields nothing more than a contradiction; there is no ‘synthesis’ at all. To the logical atomists or, as they also called themselves, logical positivists (as contrasted with the Hegelian ‘negationists’), the dialectic was nothing but airy “metaphysics.” Logical positivists insisted on limiting reason to matters of common sense, that is, sense data, with which we are all “directly acquainted,” as Schwartz puts it. But we are only acquainted with the sense data we perceive; such notions as whiteness, diversity, and brotherhood are not immediate perceptions. It is through a mental process that we become aware of them. I perceive the appearance of a table sensually, but I know it only indirectly, through words, “by description.” And, in Russell’s words, “Awareness of universals is called conceiving, and a universal of which we are aware is called a concept.” These are logical atoms because any attempt at forming a conception that is exposed as illogical, as self-contradictory, thereby falls apart in our mind, becomes inconceivable. In mathematical terms one might call it a failed function, a pseudo- or dys-functional function.

    “Analytic philosophers proudly contrast the clarity, technical proficiency, and respect for natural science of analytic philosophy versus the ultra-sophistication, contrived jargon, and mystification of Continental philosophy.” They are working very broadly within the British philosophic framework of empiricism, seen in Hobbes and Locke, but they add a linguistic layer to perception that the earlier empiricists did not emphasize. To the logical positivists, we truly know only what we make, and what we make first and foremost is concepts, out of sense perceptions mixed with words.

    Schwartz adds that there was a political-historical element to all of this. At the time, “Hegelianism was something like the official philosophy of Germany, and especially Prussia.” Germany generally and Prussia particularly had taken on a bad odor for Englishmen as it rose to challenge the British regime at the beginning of the twentieth century. This may or may not have had philosophic relevance (the positivists denied that there can be such a thing as political philosophy), but it aided in obtaining a respectful hearing for a philosophic method distinct from and indeed contradictory to that of the Germans.

    Russell and Moore were logicians who called themselves positivists, but ‘logical positivism’ as the term for a philosophic school came to be deployed in Vienna in the 1920s. Its most important proponent was Ludwig Wittgenstein. “Like Russell and Moore, the members of the Vienna Circle reacted against Hegelian German idealism,” very much including its political dimension; “many blamed the Prussian aristocratic traditions for starting the war and for not being able to pursue it successfully”—quite the failure of historicist dialectic, that. “If Frege is the pioneer and Bertrand Russell the father of analytic philosophy, then Wittgenstein’s writings provide the backbone.” Wittgenstein had read the Principia Mathematica before the war, studied with Russell at Cambridge, and then became a decorated artillery officer in the Austro-Hungarian Army. Presumably, his stint in an Italian P.O.W. camp provided the leisure to contemplate the defects of the German-Austrian misalliance, along with the philosophic reasons for doubting Hegelianism and for refining positivism. He published his Tractatus in 1921.

    Wittgenstein sees that language attempts not only to represent the world—what analytic philosophers would come to call the “actual” world we perceive with our senses—but also to represent “non-actual states of affairs,” the stuff of plans and fantasies. “In order for us to be able to think about the world and talk about it, there must be a fundamental similarity of structure or isomorphism between thought and language, and between language and the world. This structure is represented by formal logic.” He breaks with Russell, however, in denying that logic describes “very abstract or fundamental facts or truths about the world or thought or even language”; language provides only “the framework or scaffolding that makes statements of facts possible.” The limits of the linguistic framework are tautologies on the one hand, self-contradictions on the other; this is what keeps Wittgenstein within the realm of logic. “Russell was still yearning for some sort of intellectually satisfying certainty, whereas according to Wittgenstein the only certainty available is empty and formal.”

    An analytic proposition is self-evident only because it is a tautology. In this it is identical to a mathematical proposition, whether an axiom, postulate, or theorem. This returns mathematics to the apodictic certainty of pure abstraction, and it denies that any certainty can result from empirical investigation. As another member of the Vienna Circle put it, there are statements about facts and statements which “merely express the way in which the rules which govern the application of words to facts depend upon each other.” The latter “say nothing about objects and are for this very reason certain, universally valid, irrefutable by observation.” Therefore, as Wittgenstein writes, “Philosophy is not one of the natural sciences”; it aims only “at the logical clarification of thoughts” and not at “a body of doctrine.” It “does not result in ‘philosophical propositions,’ but rather in the clarification of propositions.” A decade later, A. J. Ayer concurred, asserting that “The traditional disputes of philosophers are, for the most part, as unwarranted as they are unfruitful.” Philosophers have been the dupes of language, goofed by grammar. Such a conception as ‘God’ is neither provable nor disprovable. ‘Moral philosophy’ is equally a contradiction in terms, as moral claims have no cognitive content but express nothing but attitudes and emotions. At best, logic will tell us if a moral judgment coheres logically with the moral standard asserted by the one making the judgment.
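Wittgenstein’s claim that the only available certainty is “empty and formal” can be made concrete with a brute-force truth table: a tautology is a formula that comes out true under every possible assignment of truth-values, and that exhaustiveness is the whole of its certainty. A minimal sketch in Python (my own illustration; the formulas below are standard textbook examples, not Schwartz’s):

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Evaluate a propositional formula (a Python callable on booleans)
    under every truth-value assignment; a tautology is true under all."""
    return all(formula(*values)
               for values in product([True, False], repeat=num_vars))

# 'p or not p' is true no matter what p is: certain, but it tells us nothing.
print(is_tautology(lambda p: p or not p, 1))       # True
# 'p and not p' fails under every assignment: a self-contradiction.
print(is_tautology(lambda p: p and not p, 1))      # False
# Modus ponens, ((p -> q) and p) -> q, encoded with 'not a or b' for 'a -> b'.
print(is_tautology(lambda p, q: not ((not p or q) and p) or q, 2))  # True
```

The check never consults the world; it merely cycles through the combinatorial possibilities of the notation, which is precisely why such truths are “irrefutable by observation.”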

    Logical positivism, popularized (well, at least among academics) by Ayer’s 1936 book, Language, Truth, and Logic, held sway among philosophy professors well into the 1950s. “Much of the development of philosophy and methodology in the sciences since the 1950s has been driven by the criticisms of the doctrines of the logical positivists.” One of the critics would be Wittgenstein himself. These criticisms, however, were undertaken in “the spirit of the logical positivists’ motivation,” deploying many of the same “methods, standards, and attitudes.”

    Wittgenstein, for example, came to reject “the use of symbolic logic to dissolve philosophical problems.” He now took “the meaning of a statement” to be not “a picture of reality or a fact” but a tool, a matter of how the statement was used in “practical life.” This begins to move toward a reconception of language as rhetoric: “Language is used to elicit a response in listeners, to coordinate our activities, and so on.” This is the theme of Wittgenstein’s philosophic notebooks, published posthumously (he died in 1951) as The Philosophical Investigations. This shift distinguishes what scholars have come to call the “early Wittgenstein” of the Tractatus from the “later Wittgenstein.”

    After Wittgenstein, W. V. Quine took up the mantle. To understand meaning as use is to recall the American pragmatist school, led by John Dewey and William James. Radicalized, the tendency of pragmatism is to reject all attempts to ‘verify’ the truth of a statement; indeed, ‘truth’ itself comes to be guarded by inverted commas, too. Accordingly, Quine rejects even Karl Popper’s more modest principle of falsification, which asks us not to verify anything in accordance with some standard but merely to eliminate those claims which contradict that standard. Quine regards all claims, whether they are assertions of the existence of the Homeric gods or of the existence of physical objects, as “cultural posits.” ‘We moderns’ believe in the existence of physical objects only because that belief (as Quine puts it) “has proved more efficacious than other myths as a device for working a manageable structure into the flux of experience.” That is, meaning is Machiavellian in intent; it seeks to master Fortuna. “There is no first or fundamental philosophy that discovers truth or rather TRUTH underlying or separate from science”—which is a matter of ‘grasping,’ of touch, not of seeing (noesis) or of hearing (revelation).

    If an ‘analytic’ statement has a meaning independent of facts and a ‘synthetic’ meaning is grounded in facts, can we really distinguish between the two? Quine doubts it. What is a ‘cigarette’? Tobacco rolled in cylindrically-shaped paper? What about marijuana rolled in paper? And does a ‘cigarette’ need to be rolled in paper at all? Why not a tobacco leaf? “We begin to see the difficulty of distinguishing pure elements of the linguistic meaning of ‘cigarette’ from empirical facts or generalizations about cigarettes. The notion that the term ‘cigarette’ has a pure linguistic meaning begins to dissolve,” and with it its logical ‘analyticity.’ Quine says that the only way to assign meaning to ‘cigarette’ is to consider it within “the whole of science,” “the totality of our so-called knowledge or beliefs,” which he deems “a man-made fabric which impinges on experience only along the edges,” a (rather disorderly) “web of belief.” What “positivists and other modern empiricists failed to recognize” was this “holistic character of knowledge.” Human knowledge or science is never comprehensive (as it is in such a great systematizer as Hegel); when a given experience contradicts it, we can and should adjust it accordingly. “Even the laws of logic and mathematics are not immune to revision.” But there is no standard ‘above’ the myth or story of science; we are simply adjusting the “paradigm” (as the historian of ideas Thomas Kuhn calls it). More, we rarely discard an old paradigm for a new one. Kuhn writes, “Typically the adherents of the old scientific paradigm are not defeated by the results of experiments or observations. 
They are defeated by the grim reaper,” as “older scientists are replaced in positions of scientific power by younger colleagues with fewer intellectual commitments.” (Notice that this last point strengthens the intellectual hands not only of analytic philosophers but of post-modernists, ever alert to the ‘will to power’ Nietzsche so vehemently asserted to be the pervasive principle of all life.)

    Why, then, do Quine and Kuhn persist in favoring scientific paradigms over others? It can’t simply be a matter of “cultural predilections.” Rather, scientific paradigms have proven more accurate than, say, astrology as tools “for making predictions.” We judge science the same way “we judge any tool. How useful is it? How well does it work? Does it do the job for which it is designed?” Philosophy and philosophizing thus can assist the modern scientific enterprise, so long as philosophers abandon their pretension to see nature and stick to the task of sharpening the tools by which we grasp and shape it.

    All of this is quite reminiscent of Dewey, as Quine’s successor, Richard Rorty, insists in Philosophy and the Mirror of Nature (1979). Rorty, along with his contemporary Hilary Putnam, “labored to dismantle the traditional view of science as an attempt, by rigid and formal methods, to get an ever more accurate picture or mirror of a fixed lawlike world.” This led them to question whether science has a monopoly on knowledge. Why can’t painting, sculpture, music, literature, moral codes, “and perhaps even religion” “contribute to the web of knowledge”? These areas of thought, too, may offer “cognitive content with pragmatic value,” even if modern science remains “the Queen” of the knowledges.

    Respecting moral codes, for example, Putnam rejects the fact/value distinction, “which was almost as dear to the positivists as the analytic/synthetic distinction.” Not only are such words as ‘cruel’ and ‘kind’ “value-laden”; so are “such factual sounding terms as ‘rational,’ ‘logical,’ ‘irrational.'” Putnam considers “his rescuing values from positivist exclusion to be his most important contribution to philosophy.”

    Putnam especially insists that this opening-up of knowledge, even combined with the rejection of a standard of truth ‘above the cave’ of our current myth, doesn’t entail relativism. “Denying that it makes sense to ask whether our concepts ‘match’ something totally uncontaminated by conceptualization is one thing; but to hold that every conceptual system is therefore just as good as every other would be something else. If anyone really believed that, and if they were foolish enough to pick a conceptual system that told them they could fly and to act upon it by jumping out of a window, they would, if they were lucky enough to survive, see the weakness of the latter view at once.” While “the very inputs upon which our knowledge is based are conceptually contaminated,” such contaminated inputs “are better than none.” And if “contaminated inputs are all we have, still all we have has proved to be quite a bit,” given the success of modern science in doing what its philosophic forebears promised, the conquest of nature for the relief of man’s estate.

    Meanwhile, in England, philosophy shifted its base of operations from Russell’s Cambridge to Oxford, where G. E. M. Anscombe, R. M. Hare, H. L. A. Hart, Charles Stevenson, and Gilbert Ryle worked. They too rejected Cambridge and Vienna Circle formalism, “tend[ing] to view symbolic logic as an attractive snare for the philosophical intellect.” They retained linguistic philosophy’s emphasis on language but turned (Socrates-like, it might be said) to the consideration of “ordinary language” and common sense. Again like their analytic-philosophy predecessors, they sought to elucidate the meaning of concepts, but focused their attention not so much on mathematics and science as on the more concrete realms of literature, the arts, and politics. Ryle held that “philosophy is messy and the messy problems it confronts cannot be resolved by mathematical formulas.” Even as Aristotle had observed that a cultivated man should not expect more precision in a field of knowledge than its subject-matter allows one to have, the “ordinary language philosophers” sought “precision and accuracy of thought and argument, not the precision of the physicist, chemist, or medical doctor.”

    Whether analytic philosophers have maintained that symbolic logic mirrors nature, or whether they have maintained that we find meaning only in uses, in pragmatic refinement of paradigms, they have committed what Ryle regards as the fallacy of dualism: belief in “the ghost in the machine” (his phrase, later borrowed by Arthur Koestler for a book title), the Geist or spirit/mind as distinguished from the body. Our minds are not separate from our bodies; they are only “organizations of behavior.” Ryle replaces logical atomism and logical positivism with logical behaviorism. We know what people are thinking and feeling primarily by their actions, which include their vocalizations, linguistic and otherwise. Although Ryle eventually questioned behaviorism, finding it insufficient to account for the experience of introspection, behaviorism has enjoyed a long if often pernicious life in the writings of social scientists seeking, as they do, observable and measurable phenomena to describe.

    Resistance to dualism entails a rejection of the superiority of mind over matter. This “reflected changes in society,” Schwartz remarks, and indeed it does look like a continuation of the trend toward what Tocqueville calls democracy, social egalitarianism—the pushing-down of aristocratic claims to rule in all endeavors of mind and heart. Would ‘aristocracy’ make a comeback? Would philosophers begin to call ordinary language philosophy ordinary-all-too-ordinary?

    Not entirely. “Since the decline of ordinary language philosophy in the 1960s, no single movement or school has dominated analytic philosophy.” In Schwartz’s estimation, the “most striking and impressive advances” by analytic philosophers came in the investigation of language. And this marked a return to nature, thanks to the work of the linguist Noam Chomsky, whose research into human beings’ innate propensity to language spurred a rethinking of that large portion of philosophic thought which took its cue from John Locke and his (now clearly mistaken) notion of the mind as a tabula rasa. Before Chomsky, the mathematician Kurt Gödel gave “the final deathblow” to the earliest, pre-Principia Mathematica form of analytic philosophy by showing that arithmetic “cannot be reduced to logic and set theory” because some axioms are un-analyzable, unprovable, and therefore “beyond the reach of human knowledge.” To put it in verbal terms (as the philosopher Alfred Tarski did), the sentence ‘snow is white’ is true if and only if snow is white. That doesn’t get you very far. More importantly, it can’t, so don’t waste any effort in trying. It is well worth noticing, as Stanley Rosen does, that analytic philosophy tends to deny cognitive status to intellectual intuition or noesis, and that this is one of the “limits to analysis.” [3]

    To deal with such a conundrum, Tarski distinguished between “object-language”—the sentences in which we talk about objects—and “meta-language”—the sentences in which we talk about the object-language, in which (as Tarski writes) we “construct the definition of truth for the first language.” Donald Davidson elaborated on Tarski’s proposal, linking meta-language to Quine’s anti-positivist holism or Kuhn’s paradigm theory. Or, reaching back still further, Davidson writes, “Frege said that only in the context of a sentence does a word have meaning; in the same vein he might have added that only in the context of the language does a sentence (and therefore a word) have meaning.” Therefore, “an argument must always be interpreted in the way that makes the most sense given the context and other information we have”; “if we cannot find a way to interpret the utterances and other behavior of a creature as revealing a set of beliefs largely consistent and true by our standards, we have no reason to count that creature as rational, as having beliefs, or as saying anything.” Taken by itself, this would amount to a highly sophisticated form of classical conventionalism; it took Chomsky to bring nature back in, after it had been driven out by philosophic pitchforks.

    Chomsky rejected behaviorism, which “cannot explain our ability to learn a language” because languages are too complex to be learned by an organism starting at zero. “All normal humans are born with a universal grammar already hard-wired in their brains.” This strikes a blow against Quine, a friend of B. F. Skinner, the Harvard psychologist who became the most prominent behaviorist of the postwar decades. [4] Schwartz comments, “Logical behaviorism never had any plausibility. No definitions in terms of behavior and dispositions to behave were ever formulated nor could they be,” inasmuch as “thoughts and day dreams are interior and private and need never be manifested in anything exterior.” They are unverifiable by outside observers.

    Further, the natural languages that derive from the universal grammar natural to human beings resemble “a formal logical system.” Whereas Wittgenstein, and Quine following him, denied that the purpose of language “was to express our thoughts,” Chomsky “embraces exactly this view.” Despite his esteem for Quine insofar as he authored the metaphor of the “web of belief,” Davidson limited such conventionalism by endorsing Chomsky’s rationalist naturalism: “The dependence of speaking on thinking is evident,” he wrote, “for to speak is to express thoughts.” Schwartz sees that “the next step is not far: [T]he structure of language is isomorphic to the structure of thought, and the world.” The deeper philosophers dove into language, the more they moved toward the classical claim that man is by nature animated by logos, by speech and reason.

    From the philosophy of language, then, to the philosophy of mind. Here, philosophers noticed that “the behaviorist cannot forgo appeal to mental states,” which are “functional states of an organism,” that is, states that cause it to behave. External stimuli may ‘push’ the organism to do something, but only as mediated through that mental state (e.g., pain, pleasure, revulsion, attraction). The “functionalist” “views a mental state as a function that takes an input stimulus, plus other mental states, and generates an output that depends on both the input and the other mental states,” indeed, “the entire mental state of the organism.” A computer, which is an artifact imitating the human mind, performing some of the same functions (albeit more efficiently) does much the same thing: its “output depends on the input plus the program the machine is running.”

    This is good as far as it goes, but it “leaves out of the account the subjective nature of our mental lives.” Computers have no consciousness, “as far as we know.” An organism that had no consciousness would feel no pain, even if it were subjected to abuse—a point well known to all of us who have experienced the benefits of anesthesia. Although Davidson and many other philosophers are reluctant to abandon materialism, the idea of consciousness obviously causes a problem for them, even if their web-of-belief organicism disposes of behaviorist simplisme. Schwartz notes, “The problem of mental causation and the problem of consciousness are today the central problems in the philosophy of mind.”
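The functionalist picture sketched in the last two paragraphs lends itself to a toy illustration in code (my own sketch, not anything in Schwartz): the output depends on the input stimulus plus the organism's current mental state, so the same poke produces different behavior in, say, an anesthetized state.

```python
# Toy sketch of the functionalist view of mental states (illustration only):
# behavior is a function of the input stimulus AND the current internal state,
# just as a machine's output depends on input plus the program/state it is in.
def respond(stimulus, state):
    """Return (behavior, new_state) as a function of input and internal state."""
    if stimulus == "sharp poke":
        # the same stimulus yields different outputs in different states
        behavior = "no reaction" if state == "anesthetized" else "withdraw"
        new_state = state if state == "anesthetized" else "pain"
    else:
        behavior, new_state = "ignore", state
    return behavior, new_state

print(respond("sharp poke", "awake"))         # ('withdraw', 'pain')
print(respond("sharp poke", "anesthetized"))  # ('no reaction', 'anesthetized')
```

Of course, as the paragraph above notes, nothing in such a function is conscious of anything; that is precisely the functionalist's problem.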

    And with all this, even much-denigrated metaphysics, the bugbear of analytic philosophers, has reappeared in the thought of today’s analytic philosophers. This “remarkable development” occurred thanks to “developments in formal modal logic in the 1960s.” Formulated by C. I. Lewis, “modal” logic “is the logic of necessity and possibility,” duly translated into symbolic-logic figures (a box symbolizing necessity, a diamond symbolizing possibility). As Quine immediately saw, and abominated, this suggests that symbolic logic might be made into a means of understanding things in nature, reviving the hated ‘essentialism’ of previous schools of philosophy, and with it metaphysics itself. Schwartz counts himself among those who find Aristotelian essentialism “intuitive and commonsensical,” very far from impossible to think about logically. “In embracing metaphysics,” he hastens to add, “we did not give up commitment to clarity, care, and careful sequential reasoning, nor to honoring science and mathematics.” That part of the analytic-philosophy mindset remains, well, conscious of itself.

    The centerpiece of contemporary metaphysical thought is the idea of “possible worlds,” that is, worlds that “could have been” but are not the “actual” world. The notion of possible worlds originated with G. W. Leibniz, the renowned seventeenth-century metaphysician and indeed theologian. The down-to-earth example of such thinking begins with a “counterfactual”: a world in which, for example, Ralph Nader didn’t run in the 2000 presidential election, resulting in victory for Senator Gore over Governor Bush. Such a possible, but not actual, turn of events likely would have led to turns in subsequent events (would President Gore have prosecuted Gulf War II?). In this line of thought, a necessary proposition is true in every possible world, whereas a possible proposition is true in at least one possible world; an impossible proposition is true in none, and a contingent proposition is true in some worlds, false in others. “This is not about language. Even though we speak of possible world semantics, it is metaphysics.” And it reopens philosophic minds to essentialism: “I have the property of being a human being in every world in which I exist,” Schwartz writes. “I have the property of living in Ithaca in some but not others. I am essentially a human but contingently an Ithacan.” “An essence is a property or conjunction of properties that is necessary and sufficient for being a particular individual.” As Alvin Plantinga puts it, “If Socrates”—not to be confused with Schwartz, but the principle is the same—“had not existed, his essence would have been unexemplified, but not nonexistent. In worlds where Socrates exists, Socrateity is his essence; exemplifying Socrateity is essential to him.” Or, as Schwartz puts it, “In some worlds, some essences are exemplified, and others are not.” Mathematicians have dealt with such an idea for years in the form of probability theory and statistics. As the political writer George F. Will noticed, a Chicago Cubs hitter whose batting average is .203 isn’t necessarily ‘overdue for a base hit,’ whatever some cheerleading baseball announcer may say. In the actual world, he may strike out, even if there are possible worlds in which he saves the day with an RBI triple.
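The truth conditions invoked above can be stated compactly in the standard possible-worlds notation (a reconstruction in textbook symbols, not Schwartz’s own formalism; W stands for the set of possible worlds):

```latex
% Standard possible-worlds truth conditions (reconstruction; W = the set of possible worlds)
\begin{align*}
\Box p\ \text{(necessary)} &\iff p \text{ is true at every } w \in W\\
\Diamond p\ \text{(possible)} &\iff p \text{ is true at some } w \in W\\
\neg\Diamond p\ \text{(impossible)} &\iff p \text{ is true at no } w \in W\\
\text{contingent } p &\iff p \text{ is true at some worlds and false at others}
\end{align*}
```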

    On a loftier level, modal logic revives the ontological argument for the existence of God. In Descartes’ version, since God has all perfections and existence is a perfection, God exists. The modal version of the ontological argument is: “If it is possible that God exists, then God exists”—that is, in essence if not in actuality; “if it is possible that a necessary being who is omnipotent” and possesses the other attributes of the Biblical God exists, “then such a being exists.” This doesn’t prove the existence of God, but, as Plantinga argues, “it establishes the rational acceptability of belief in God—the rational acceptability of theism,” because even if one disbelieves that there can be a possible world in which “maximal greatness is instantiated,” believing that there is “is not irrational.” Therefore, “theism is not irrational” but rather a logical stance taken on the basis of a premise that cannot be proven or disproven, rather as we understand that ‘snow is white’ is true only if snow is white. “To my knowledge,” Schwartz writes, “no one has yet succeeded in demonstrating that the concept of God is impossible, self-contradictory, or nonsensical.”
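Plantinga's inference from possibility to actuality trades on the characteristic axiom of the modal system S5, in which whatever is possibly necessary is necessary. Schematically (my reconstruction, with G standing for "a maximally great being exists"; neither Schwartz nor Plantinga uses this notation in the passages quoted):

```latex
% Modal ontological argument, schematic S5 reconstruction
\begin{align*}
1.&\ \Diamond\Box G && \text{premise: possibly, } G \text{ holds necessarily}\\
2.&\ \Diamond\Box G \rightarrow \Box G && \text{characteristic S5 axiom}\\
3.&\ \Box G && \text{from 1 and 2, modus ponens}\\
4.&\ G && \text{the necessary is actual}
\end{align*}
```

Everything turns on premise 1, which is just the premise Schwartz says cannot be proven or disproven.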

    If nature has returned to philosophy, precisely through the thinking-through of analytic philosophy, what is nature? Putnam defines natural kinds as “classes of things that we regard as of explanatory importance… held together [by] deep-lying mechanisms.” According to the theory of “reference”—the account of how our words refer to things—meaning has “intension” and “extension.” “Intension” is what we mean to say by using a given term; the word ‘lemon’ means a certain “conjunction of properties.” “Extension” is a reference to the things to which that meaning applies. The term ‘lemon’ refers to an object currently sitting in the fruit and vegetable section of Market House, among other objects, many of them not lemons. In logical terms, the concept corresponding to the term is its “intension,” and it “must always provide a necessary and sufficient condition for falling into the extension of the term.” “Analytic” truths “are based on the meanings of terms.” From Hume onwards, “all necessity was construed as analyticity or somehow based on linguistic conventions.” This is what’s behind Hume’s questioning of the theory of causality: on this view there is no “extra-linguistic necessity.” That’s what Wittgenstein has in mind when he claims that essence is expressed by grammar, by a linguistic convention, and need not apply to the physical world.
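The intension/extension distinction can be caricatured in a few lines of code (a toy sketch of my own, with invented properties and an invented bin of produce): the intension of 'lemon' is a predicate, a conjunction of properties, and its extension is whatever in the bin satisfies that predicate.

```python
# Toy sketch: intension as a predicate (conjunction of properties),
# extension as the set of things satisfying it. Illustration only.
def lemon_intension(thing):
    """The 'concept' of lemon: a conjunction of properties."""
    return (thing.get("color") == "yellow"
            and thing.get("taste") == "sour"
            and thing.get("kind") == "citrus")

# A hypothetical bin at Market House, "many of them not lemons"
market_house_bin = [
    {"name": "lemon #1", "color": "yellow", "taste": "sour",  "kind": "citrus"},
    {"name": "lime",     "color": "green",  "taste": "sour",  "kind": "citrus"},
    {"name": "banana",   "color": "yellow", "taste": "sweet", "kind": "berry"},
]

# The extension: everything in the bin falling under the intension
extension = [t["name"] for t in market_house_bin if lemon_intension(t)]
print(extension)  # ['lemon #1']
```

Kripke's point, taken up in the next paragraphs, is that no such list of superficial properties really fixes the kind; the underlying structure does.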

    Keith Donnellan argues otherwise. To describe something, he remarks, you may make an “attributive” description or a “referential” one. An attributive description is one in which I infer a characteristic of, say, a person without knowing who the person is. If someone explains E=mc² to me, I might think that the person who first formulated that must have been smarter than I am, but I might not know it was Albert Einstein. If I met Einstein and had a conversation with him, and then described him as being smarter than I am, I would be describing him referentially, now having a definite person in mind. Bringing in ‘possible worlds’ theory, Saul Kripke adds that when I say “Albert Einstein” I mean “the same person whether or not he… satisfies some list of commonly associated descriptions.” Those who knew Einstein as a child attributed no genius to him but nonetheless meant the same dude as those who later described him quite differently. Kripke distinguishes between necessity—a category in metaphysics—a prioricity—a category in epistemology—and analyticity—a category in linguistics.

    The same goes not only for persons but for natural kinds. Under the older theory, “the concept associated with a term functions like the set of identifying descriptions supposedly associated with an ordinary name”; “gold” is yellow, shiny, metallic, and so forth. It can be analyzed linguistically, broken down into these other words. Kripke observes, however, that such a description doesn’t truly define gold. Only “its atomic structure defines whether some stuff is gold.” Gold is gold metaphysically, that is, it is gold in all possible worlds. Anything that does not have that atomic structure isn’t gold, “even if it satisfies some list of superficial features that we think” characterize it. Our certainty in this classification derives not from “knowledge of a definition” but from “a well-established empirical theory.” It is not analytic, in the analytic-philosophy sense, but “if it is true, it is necessary” metaphysically.

    Schwartz provides another way into the question, distinguishing physical possibility and necessity from metaphysical possibility and necessity, and both from logical possibility and necessity. An alternate world that is physically possible must have the same natural laws as our world. An alternative world that is metaphysically possible might be physically impossible in our actual world. An alternative world that is logically possible must only meet the criterion of “logical consistency.” So, for example, it is physically possible for Schwartz to have lived in San Francisco, as this would violate “no natural laws”; what is more, this is also metaphysically and logically possible. It is physically impossible for Schwartz to swim across the Atlantic Ocean; it isn’t metaphysically or logically impossible, however. It is not physically possible for Schwartz to be an alligator, nor is it metaphysically possible “(assuming that I am essentially human)”; it is nonetheless logically possible, for example if I (no doubt unwarrantedly) use the term ‘alligator’ as a metaphor to describe Schwartz’s personality.

    Speaking of character, analytic philosophers have also begun to admit ethics into their purview, along with metaphysics and nature. G. E. Moore’s moral-philosophic equivalent of the Principia Mathematica was his Principia Ethica, published in 1903. Moore denied that there can be any such thing as moral philosophy; his book centers on what he calls “metaethics,” the “logical and analytical study of ethics” by means of epistemological, logical, and metaphysical categories. Such categories tell one nothing substantive about right and wrong, good and bad, because (according to Moore) such ethical topics have no cognitive content if one considers them epistemologically, logically, and metaphysically. Moore argues that ‘good’ is an indefinable term, rather like ‘white’ or ‘yellow.’ Like those terms, it cannot be analyzed. But unlike shades and colors, it can’t be “perceived by the senses,” either; it is “apprehended by moral intuition.” Because it can’t be perceived by the senses it can’t be natural; in claiming this, Moore affirms Hume’s denial that we can derive ‘ought’ from ‘is.’ Philosophers who try to derive ethics from nature commit the “naturalistic fallacy.” The moral intuition amounts to “personal affection” and “appreciation”—to ‘values’ as distinct from ‘facts.’ A later writer in Moore’s line, C. L. Stevenson, subsumed ethical discourse under rhetoric: “The point of ethical discourse is to influence not describe.” A. J. Ayer agreed.

    Analytic philosophers began to change their minds after World War II, which conflict must have imposed a fairly severe challenge to moral subjectivism. R. M. Hare hoped to stay within the Moore-Stevenson-Ayer orbit, but in the process put an end to their ethical emotivism by remarking (in an unwittingly Aristotelian way) that ‘good’ in morality means essentially the same thing as it means in other areas. We mean ‘good person’ in more or less the same way we mean ‘good dog,’ ‘good car,’ ‘good movie.’ We don’t mean only that we like that person, dog, car, or movie; we also mean that it has some intrinsic quality that fulfills the definition of the noun we use to classify it. And so, to use Schwartz’s example, if Conan the Barbarian understands the good life as the victorious life, he cannot be right, as victory in itself contributes nothing “to human flourishing,” does not fulfill the meaning of the noun ‘human.’ “It is mere self-interest.” But one might argue for more. Insofar as mere victory in battle might deform the person who achieves it, making him more inhuman than before, it is not even self-interest, rightly understood.

    This is the kind of thing G. E. M. Anscombe and Philippa Foot have in mind when they assert that moral terms have factual content, that a ‘value’ can partake of facticity. They founded “a school of ethics, based on Aristotle’s ethics, called virtue ethics,” which emphasizes “moral character rather than moral oughts and goodness” in the utilitarian and also in the Kantian sense. Schwartz somewhat puzzlingly goes on to laud John Rawls, a neo-Kantian, as a veritable “Philosopher King,” although this does at least bring political philosophy back into the ethical universe, as Aristotle had seen it to be.

    Looking at the trajectory of analytic philosophy as described by Schwartz, one finds it a remarkable enterprise indeed. What started out as a philosophic method that eschewed metaphysics and morality as sub-philosophical realms devoid of rational content has slowly uncovered doctrines with affinities to Aristotelianism. That is, ultra-‘modern’ analytic philosophy has begun to turn modernity away from itself and back toward the ‘ancients.’ Will postmodernists take a similar turn? As the old saying goes, ‘From your mouth to God’s ears’—God having now been reintroduced to polite philosophic conversation.

     

    Notes

    1. Jacob Klein: Greek Mathematical Thought and the Origins of Algebra. Eva Brann translation. New York: Dover Publications, 1992.
    2. V. I. Lenin: Collected Works. Volume 14, pp. 17-362. Moscow: Progress Publishers, 1972.
    3. Stanley Rosen: The Limits of Analysis. South Bend: St. Augustine’s Press, 2000.
    4. An acquaintance of mine once lived next door to a famous behavioral psychologist. The great man had gone so far as to place his infant son in what was called a ‘Skinner box’ for substantial periods of time. A Skinner box was a controlled environment in which an animal (very often a rat or a pigeon) would be rewarded for performing a certain action, not rewarded or even punished for failing to perform it. The last time my acquaintance saw him, the lad was chasing the family cat around the back yard, a syringe in hand. This suggests that errors in epistemological theory may have startling actual-world consequences, although admittedly it doesn’t rigorously prove that they do, or must do.

     

    Filed Under: Philosophers

    George Washington, Nation-Builder

    November 7, 2019 by Will Morrisey

    Edward J. Larson: George Washington, Nationalist. Charlottesville: University of Virginia Press, 2016.

     

    Americans understood themselves as “a people” by the 1770s, at least, as the Declaration of Independence most famously indicates. But until the Declaration they couldn’t think of themselves as a self-governing people, a nation in full. Securing that nationhood took years of war, constitutional architectonics, and commerce both economic and social. The merit of historian Edward J. Larson’s compact and incisive essay begins in selecting for consideration the ‘middle’ years of Washington’s career, those between the war and his inauguration as our first president. In them we see not Washington the general or Washington the commander in chief, but Washington the adroit and great-souled politician, the man who used the fame he won during the war to take his country from domestic unrest and geopolitical insecurity to what he called an empire, what his sometime colleague Thomas Jefferson called an empire of liberty. Jefferson wrote the Declaration; Madison, James Wilson, and their colleagues wrote the Constitution; but Washington took the indispensable steps that enabled independence fought in defense of natural rights to issue in the security of those rights within a framework of constitutional and commercial republicanism.

    This book’s “simple thesis,” Larson writes, holds that Washington was “the leading nationalist of the late Revolutionary era in American history.” By “nationalist,” he doesn’t mean blood-and-soil statism or even Burkean traditionalism but popular self-government. He commits an important misstep at the outset, saying that Washington “believed in the Lockean natural right of free men and the republican ideals of government by the consent of the governed”; obviously, if right is natural, it must belong to all men, as the Declaration affirms and as Washington recognized by emancipating his slaves in his will. Fortunately, this is just about the last mistake Larson makes, and it isn’t foundational to his argument, which centers primarily on practical policies not political theory. And he is exactly right to link Washington’s understanding of natural right to his commitment to the founding of a republican regime.

    Having fought major battles in five states and coordinated troop movements in all thirteen, Washington understood American politics from “a national perspective” well before he re-entered civilian life. After the war, the English continued to prey upon American shipping and to occupy New York City, Charleston, and Savannah—all major ports, vital to American commerce. The union of the states, first asserted in the 1774 Articles of Association, weakened without a battlefield enemy on the ground who daily reinforced the sentiment of hanging together, lest we hang separately. Disunion led to reluctance by states to pay debts incurred during the war to the federal government, and this led to a regime crisis. Unpaid soldiers will grumble. Officers in Newburgh, New York became restive. They received some encouragement from such nation-builders as Robert Morris and Gouverneur Morris, who hoped that fear of a coup would spur the states to pay up. Major General Alexander McDougall was the point man for the proto-rebellion, threatening General Henry Knox with refusal to disband the troops until payment was received.

    Washington understood that such a rebellion would threaten republicanism itself by challenging civil authority. He decided to employ a peaceful form of what military men call tactical surprise, the civil equivalent of the Battle of Trenton. He made an unannounced visit to the officers’ meeting in Newburgh on March 15, 1783, reading what one historian has called “the most impressive speech he ever wrote.” Taking himself as his example, he cited “the great duty I owe to my country” to obey civilian authority, a duty deriving from the principle of government by the consent of the governed, itself derived from the equal natural rights of all human beings. Appealing to honor, the military virtue par excellence, he exhorted the officers to “express your utmost horror and detestation of the Man who wishes, under any specious pretences, to overturn the liberties of our Country, and who wickedly attempts to open the flood Gates of Civil discord, and deluge our rising Empire in Blood.” Who will rule this rising empire? Military men? If so, was Washington himself not the highest-ranking and most-honored such man in America? And had he not fought with them as comrades throughout the early defeats and hardships, sharing with them the final triumph? Instead of calling them to lay down their arms, could he not have led them on a march to the capital, taking over the government by force? He had done the opposite of that. The officers backed down.

    “As word of the encounter first reached Congress and then spread across the land in newspaper accounts, Washington gained yet another laurel. Already first in war, he was now first in peace and clearly first in the hearts of his countrymen. He had no rivals.” Washington “use[d] his platform as America’s leading citizen to call for quickly and fairly compensating the troops, and ultimately for building a strong national union that could support those payments and some form of permanent military establishment”—an establishment which, going on 250 years, has yet to attempt a coup d’état against the people it is charged to protect or the civilian government those people have consented to be governed by. Working against any foolish potential backlash against the military as such, Washington advocated the maintenance of a small standing army, with a well-organized militia to supplement it, on the grounds that it could defend America’s northern border with British Canada and its northwest territories against Indian tribes and nations allied with the British.

    Washington’s call for national union went well beyond national defense. In his 1783 Circular Letter to the states, he associated a stronger central government with the “happiness” of those states as parts of that union. “It is only in our united Character as an Empire, that our Independence is acknowledged” by foreign powers, and it is only by thinking of ourselves as “citizens of America,” by establishing our “National Character” that we can become “a happy Nation,” one so situated as to secure our natural rights of life, liberty, and self-government. By resigning his military commission at the national Assembly Chamber in Annapolis near the end of the year, and by declaring his intention to retire to private life, he astonished the world (and most particularly George III). As the “second Cincinnatus,” he “became the first American,” no longer merely a Virginian of great distinction but “a world-renowned personification of republican virtue.” In one of his many well-chosen quotations, Larson cites Thomas Jefferson: “The moderation and virtue of a single character probably prevented this revolution from being closed, as most others have been by a subversion of that liberty it was intended to establish.”

    Returning to Mount Vernon, Washington put his long-neglected household in order, then turned his attention to his properties along rivers in southwestern Pennsylvania and today’s West Virginia. He discovered that a grist mill he owned had been mismanaged and that a Calvinist sect called the Seceders had claimed squatters’ rights on another of his tracts since 1773. For his pains, a group of Indians attempted to capture him at Great Kanawha, along the Ohio River. These unpleasant surprises galvanized his ambition to empower the federal government to permit orderly settlement of the West. “If Congress could open, sell, and settle these lands and thereby gain authority and revenue, it could bolster the union. If not, it risked losing them to a foreign power, and with them, much of the reason for a national government.” After all, why would the settlers in the West not turn to Spain, which ruled the West’s geo-economic linchpin, New Orleans, and to Great Britain, which ruled the Great Lakes and the St. Lawrence River, for both security and trade? “The touch of a feather, would turn [the Westerners] either way,” he wrote. To secure this portion of the Union, not only a well-funded military force but east-west transportation routes would be indispensable—the latter to be secured by linking the North Branch of the Potomac River to the headwaters of the Ohio River. To this end, he lobbied the Virginia and Maryland legislatures to establish a private toll route on the Potomac, while lining up investors. He played the role of what we would now call a ‘rainmaker’ with his usual skill, and by January 1785 “Washington had his company and soon would be elected its first president.” He proved a less successful entrepreneur, however, not because he lacked business acumen but because the Erie Canal soon became the main east-west corridor, due to its better positioning, closer to the commercial entrepôts of New York.

    Nonetheless, the project earned a substantial political profit. In obtaining the Mount Vernon Compact between Virginia and Maryland to cooperate on Potomac River commerce, he had partnered with the young Virginia state legislator James Madison, whom he enlisted in his broader intention to strengthen the Union. “We are either a United people, or we are not”; “if the former, let us, in all matters of general concern act as a nation,” with “national objects to promote, and a national character to support.” Madison concurred, proposing that the Virginia legislature “call a general meeting on interstate commercial regulations to be attended by delegates from all thirteen states.” Representatives of five states did attend the meeting, held in Annapolis in September 1786. This became the first step toward calling a national convention to revise the failing Articles of Confederation. But such a convention would need not only Washington’s support but his attendance, if it were to attract delegates from all the states. Madison and Washington’s former military aide Alexander Hamilton went to work on the general—who, in the end, needed little persuasion. Not only was the general well aware of the geopolitical dangers to Americans, he also worried about internecine conflicts, especially over borders and commerce, and, “perhaps most important,” the failure of states “to protect individual liberty and private property.” So were many of his fellow Virginians, who chose him to lead the state’s delegation at Philadelphia. For his part, Washington worried that the convention wouldn’t be serious—that is, genuinely constitutional.

    As he had done with his officers during the war, Washington consulted his most trusted advisers before going into battle. Madison, Knox, and Jay all advocated “a truly national government” with “separate legislative, judicial, and executive branches” and a bicameral legislature. Madison also argued for a fully articulated federal judicial system, which would “avoid local bias in expounding national laws and deciding cases involving citizens of different states.” All agreed that “in areas under its domain the national government must have the power to act directly on the people, not just through the states.” Washington “embraced their proposals and made them his own,” while wondering if, as he said to Jay, “the public mind [was] matured for such an important change.” He called the convention as “the last peaceable mode” of “saving the republic.” Virginia delegate Edmund Randolph was designated to present what was immediately labeled “The Virginia Plan,” which in most aspects carried the day, with some compromises at the insistence of the smaller states. Respecting the office which everyone expected Washington to occupy, the new constitution broke with parliamentarism, electing the president not by legislative vote but through the novel Electoral College, which, tellingly, would dissolve at the end of each presidential election cycle, making the chief executive entirely independent of any standing set of officeholders in the national or states’ governments. Governmental powers would thus be not only separated but balanced.

    At times bitter and hard-fought, the ratification contests in the several states saw determined opposition to the new constitution from advocates of the Articles of Confederation system. “Federalists would rely on the public’s trust in Washington to carry the day,” and it did. Further, once ratification was assured, it was crucial to ensure that anti-federalists didn’t control the first Congress. To this end, Washington set down three “main goals for the United States under the Constitution: respect abroad, prosperity at home, and development westward”—goals obtainable by policies of “effective tariffs, sound money, secure property rights, and a nonaligned foreign policy.” As Washington put it, “America under an efficient government, will be the most favorable Country of any in the world for persons of industry and frugality,” a country not “less advantageous to the happiness of the lowest class of people,” thanks to the vast tracts of land available in the West. “He saw it as a model for individual liberty and republican rule everywhere,” and candidates for the first Congress under the Constitution would see in that model what amounted to an exceptionally attractive political platform.

    After his election, Washington journeyed to New York, stopping in Philadelphia and Trenton. At a City Tavern banquet in his honor, the diners raised their glasses to the toast, “To Liberty without licentiousness,” a republican slogan if ever there was one. At Assunpink Creek, near Trenton, where Washington’s troops had rounded on British forces in January 1777, a banner unfurled to read “The Defender of the Mothers, will be the Protector of the Daughters.”

    This resembled a king’s progress across his realm, with one critical exception. The crowds who greeted the new president didn’t bow to him; he bowed to them. George Washington had become “the master of the correct gesture.” (Adams called him “the finest political actor he had ever seen.”) The regime he had been instrumental in founding lodged sovereignty in the people, not in the government, and not in some elected monarch.

    And the regime worked, far better than the Articles regime had done. Treasury Secretary Hamilton worked out a financial system capable of paying the war debt. Secretary of War Knox organized for war against the Western Confederacy, an alliance of Indians which had blocked American settlement in the rich lands of the Ohio Valley. John Jay negotiated a treaty with Britain that got the British out of their forts in the Northwest Territory. North Carolina and Rhode Island finally ratified the Constitution; Tennessee and Kentucky also joined the Union. Congressman Madison floor-managed the Bill of Rights through Congress, “with Washington’s support.” Secretary of State Jefferson “devis[ed] a broad regime of federally protected intellectual property rights,” which would secure the innovations on which manufacturing and commerce depend.

    Controversies over the national bank and Jay’s treaty caused tensions between Washington and his fellow Virginians Jefferson and Madison, who eventually began “a formal national political party with a states’-rights bent.” Thus what began as a controversy between big states and small states during the ratification contest morphed into a controversy between finance and agriculture by the turn of the century, a controversy that would eventually morph into the controversy between slavery abolition and slaveholding which nearly destroyed the Union. Far-seeing George Washington manumitted his slaves in his Last Will and Testament; had enough of his fellow slaveholders done that, there might have been no Civil War.

    Filed Under: American Politics