Will Morrisey Reviews

Book reviews and articles on political philosophy and literature.


    What Is Analytic Philosophy?

    November 16, 2019 by Will Morrisey

    Stephen Schwartz: A Brief History of Analytic Philosophy: From Russell to Rawls. West Sussex: John Wiley and Sons, 2012.

     

    Professor Schwartz, “personally and passionately involved in the enlightening and edifying enterprise of analytic philosophy,” calls it “the dominant Anglo-Saxon philosophical movement of the twentieth century” and also of this century, so far. He is unquestionably correct, especially with respect to academic philosophizing. With the compartmentalization of academic research, and with the division of labor institutional departmentalization reflects, academics ‘doing philosophy’ have needed a way to distinguish their enterprise from mathematics and from science, ‘hard’ and ‘soft.’ At the same time, they could not ignore the intellectual authority of mathematics and science in the modern world, seen in the thought of Bacon and Descartes, two of modernity’s philosophic progenitors. This authority has increased in the centuries subsequent to their work, and it includes the application or attempted application of scientific method to the work of governments in the form of the ‘administrative state.’ Even its recent philosophic, often ideological rival, ‘post-modernism,’ which rejects rationalism in the name of a democratized Nietzscheanism, nonetheless eagerly uses the apparatus of the administrative state as its preferred instrument of ruling.

    “I am personally and passionately involved in the enlightening and edifying enterprise of analytic philosophy,” Schwartz writes. This engagement seldom injures his account of its history. Quite the contrary, for the most part: History written by a lover almost always outranks history written by a despiser, as a lover will attend to the features of his beloved, always wanting to know more about her.

    Modern philosophers have tended to group themselves into two major encampments, empiricists (‘Baconians’) and rationalists (‘Cartesians’). Although analytic philosophers began by distancing themselves sharply from Hegelianism, with its grand thesis-antithesis-synthesis historical dialectic, it’s hard to deny that they began by attempting to ‘synthesize’ modern empiricism and rationalism. Empiricists hold that all knowledge is based on experience; ‘experimental’ science demotes ideas to the status of hypotheses, testing them against concrete results under the rationally controlled conditions of the laboratory—that is, a place of labor, of action, not of contemplation. Modern scientists, including Einstein himself, were quite relieved that the mathematical formula E=mc² found confirmation in the lab. Modern rationalists concentrate their attention on ideas thought to be innate in the human mind, ideas based on ‘pure’—that is to say, non-empirical, non-factual—reason. Can these two opposite approaches to philosophizing be combined? How?

    Bertrand Russell took the first, and very deep, stab at accomplishing this in his Principia Mathematica, co-written with Alfred North Whitehead and published in three volumes between 1910 and 1913. Russell sought nothing less than to break with both Aristotelian and Hegelian logic. Taking over a project begun by the German mathematician Gottlob Frege, Russell treated logic like the calculus. As Jacob Klein has shown, the calculus itself was invented to register the interest of modern philosophers, beginning with Machiavelli, in kinetics. [1] Moderns are less interested in the stable figures of Pythagorean geometry than in a geometry of motion—of plotting points along a curve. Aristotelian, syllogistic logic seeks to understand things in accordance with their forms and ‘essences.’ Hence such standard syllogistic locutions as “All men are mortal; Socrates is a man; therefore, Socrates is mortal.” “Socrates” is the subject, “man” the predicate. To say “All zebras are animals” is logically the same as to say “Socrates is a man,” despite the fact that a zebra is a species, Socrates a person. Additionally, in syllogistic logic, “Socrates is married to Xanthippe” is logically identical to these other propositions because “married to Xanthippe” is the predicate—this, despite the fact that “married to” is a relation, whereas “is a man” is a statement about the intrinsic nature of Socrates.

    In mathematical logic, “predicates represent functions from objects to truth-values,” “functions” being a term from calculus. To say “Socrates is a man” is to say “Socrates satisfies the function ‘man.'” “‘All zebras are animals’ says that if any object satisfies the function ‘x is a zebra,’ then it satisfies the function ‘x is an animal.'” And to say that “Socrates is married to Xanthippe” is to say that the pair Socrates/Xanthippe satisfies the function “is married to.” Such sentences are structured like equations in calculus. They say nothing about the substance of the things equated. They do not posit essences, only the verbal equivalents of points on a line.
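    The function-talk here can be made concrete. The short Python sketch below is my own illustration, not anything in Schwartz’s book: one-place predicates become functions from objects to truth-values, and “married to” becomes a function on ordered pairs; the tiny domain and the names of the functions are invented for the example.

```python
# Predicates as functions from objects to truth-values (Frege/Russell style).
# The domain and function names are illustrative inventions.

def is_man(x):          # one-place predicate: object -> truth-value
    return x in {"Socrates", "Plato"}

def is_zebra(x):
    return x in {"Marty"}

def is_animal(x):
    return x in {"Marty", "Socrates", "Plato"}

def married_to(x, y):   # two-place relation: pair of objects -> truth-value
    return (x, y) in {("Socrates", "Xanthippe")}

domain = {"Marty", "Socrates", "Plato", "Xanthippe"}

# "Socrates is a man": Socrates satisfies the function 'man'.
print(is_man("Socrates"))                                  # True

# "All zebras are animals": whatever satisfies 'zebra' satisfies 'animal'.
print(all(is_animal(x) for x in domain if is_zebra(x)))    # True

# "Socrates is married to Xanthippe": the pair satisfies 'married to'.
print(married_to("Socrates", "Xanthippe"))                 # True
```

    Notice that nothing in the code says what a man or a zebra *is*; each sentence merely records whether an object, or a pair of objects, satisfies a function—exactly the point about essences being replaced by truth-value assignments.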

    Russell’s logic differs from Hegel’s because it is analytic. It does not aim at producing a synthesis. When X meets not-X there is no necessary Y that comes out of the meeting. The logical principle of non-contradiction enables us to analyze but tells us nothing about synthesis. Analytic philosophy is as kinetic as Hegelianism, but it doesn’t try to tell us where we are going. No wonder Lenin hated it. [2]

    Because it is analytic and kinetic, not ‘essentialist’ or substantive and not synthetic, either, “The name ‘analytic philosophy’ refers more to the methods of analytic philosophy than to any particular doctrine that analytic philosophers have all shared.” For such philosophers, “insight comes from seeing how things are put together and how they can be prized apart; how they are constructed and how they can be reconstructed.” Schwartz notes the similarity between analytic philosophy and the more recent philosophies of ‘deconstructionism.’ The difference, it might be added, stems from the influence of Nietzsche on deconstructionists, which is entirely absent from the minds of the ‘analysts.’ Like the deconstructionists, and like Nietzsche, “analytic philosophers rejected the pretensions of the Enlightenment philosophers,” their grand schemes of rationally-controlled progress. Unlike the deconstructionists and Nietzsche, they did not question the Enlightenment philosophers’ “commitment to reason.” At any rate, analytic philosophy “is not a unified movement or school,” and indeed Russell himself changed his opinions on all manner of things throughout his long career.

    “The basic aspects of modernism—rejection of past traditions, experimentation with new methods and forms; fascination with and anxiety about technology and use of new technical methods; focusing on method, surface, expression, and language—all characterize analytic philosophy.” This method also comports with the ambition of modernism to master nature; by basing logic on a particular form of modern mathematics, the calculus, it intends to overcome reality even as it seeks to understand it. As Schwartz puts it, the analytic philosophers “saw their work as freeing philosophy and even society, from its past forms and obsessions.”

    “Mathematics is a priori and universal, so how can it be empirical?” Following Frege, Russell initially “treat[ed] logic mathematically, and then treat[ed] mathematics as a form of logic.” By abstracting from experience, mathematical equations give us certainty; for example, we can’t know empirically that there are infinitely many prime numbers because we can’t know if the one we’ve just mentioned is the last one. This appears to lead to a strict form of rationalism. Frege replies that “mathematical propositions are not based on experience or observation, but they are not the results of pure rational insight into the ultimate nature of reality, either.” They may not be ‘about’ anything other than themselves. Russell replies to Frege by counter-example: The famous Liar’s Paradox (a Cretan says, “All Cretans are liars”) doesn’t only yield an ’empty set,’ as mathematicians say; it isn’t a set at all. Originally a Fregean, Russell now could “no longer plausibly claim that mathematics was reducible to pure logic—that it was all analytic.” What he did claim (and here is where empiricism comes in) was that language can be analyzed. The Epicureans’ atomism can return because sentences can be rationally reduced to what Russell called “ultimate simples” or “logical atoms.” They are the objects perceived through the empeiria of this new empiricism; for logical purposes they have the same status as the natural atoms for Lucretius, the sense-data for Locke. And these empirical facts can be analyzed by means of symbolic logic, a logic that takes on the form of mathematical equations but without the need to translate the things analyzed into numbers. (In this, Russell sharply diverges from such systems as the Gematria in Judaism, which does indeed translate words into numbers. Russell would deny that such a move makes sense.)

    It was Russell’s sometime colleague, G. E. Moore, who attacked the then-dominant school of German Idealism, especially as seen in the thesis-antithesis-synthesis dialectic of Hegel. Hegel and his followers claimed that the dialectic had ontological content, that it registered the logical unfolding of the Absolute Spirit. Moore (with Russell) denied this. The negation of a ‘thesis’ by an ‘antithesis’ yields nothing more than a contradiction; there is no ‘synthesis’ at all. To the logical atomists or, as they also called themselves, the logical positivists (contrasted with Hegelian ‘negationists’), the dialectic was nothing but airy “metaphysics.” Logical positivists insisted on limiting reason to matters of common sense, that is, sense data, with which we are all “directly acquainted,” as Schwartz puts it. But we are only acquainted with the sense data we perceive; such notions as whiteness, diversity, and brotherhood are not immediate perceptions. It is through a mental process that we become aware of them. I perceive the appearance of a table sensually, but I know it only indirectly, through words, “by description.” And, in Russell’s words, “Awareness of universals is called conceiving, and a universal of which we are aware is called a concept.” These are logical atoms because any attempt at forming a conception that is exposed as illogical, as self-contradictory, thereby falls apart in our mind, becomes inconceivable. In mathematical terms one might call it a failed function, a pseudo- or dys-functional function.

    “Analytic philosophers proudly contrast the clarity, technical proficiency, and respect for natural science of analytic philosophy versus the ultra-sophistication, contrived jargon, and mystification of Continental philosophy.” They are working very broadly within the British philosophic framework of empiricism, seen in Hobbes and Locke, but they add a linguistic layer to perception that the earlier empiricists did not emphasize. To the logical positivists, we truly know only what we make, and what we make first and foremost is concepts, out of sense perceptions mixed with words.

    Schwartz adds that there was a political-historical element to all of this. At the time, “Hegelianism was something like the official philosophy of Germany, and especially Prussia.” Germany generally and Prussia particularly had taken on a bad odor for Englishmen as Germany rose to challenge the British regime at the beginning of the twentieth century. This may or may not have had philosophic relevance (the positivists denied that there can be such a thing as political philosophy), but it aided in obtaining a respectful hearing for a philosophic method distinct from and indeed contradictory to that of the Germans.

    Russell and Moore were logicians who called themselves positivists, but ‘logical positivism’ as the term for a philosophic school came to be deployed in Vienna in the 1920s. Its most important proponent was Ludwig Wittgenstein. “Like Russell and Moore, the members of the Vienna Circle reacted against Hegelian German idealism,” very much including its political dimension; “many blamed the Prussian aristocratic traditions for starting the war and for not being able to pursue it successfully”—quite the failure of historicist dialectic, that. “If Frege is the pioneer and Bertrand Russell the father of analytic philosophy, then Wittgenstein’s writings provide the backbone.” Wittgenstein had read the Principia Mathematica before the war, studied with Russell at Cambridge, and then became a decorated artillery officer in the Austro-Hungarian Army. Presumably, his stint in an Italian P.O.W. camp provided the leisure to contemplate the defects of the German-Austrian misalliance, along with the philosophic reasons for doubting Hegelianism and for refining positivism. He published his Tractatus in 1921.

    Wittgenstein sees that language attempts not only to represent the world—what analytic philosophers would come to call the “actual” world we perceive with our senses—but also to represent “non-actual states of affairs,” the stuff of plans and fantasies. “In order for us to be able to think about the world and talk about it, there must be a fundamental similarity of structure or isomorphism between thought and language, and between language and the world. This structure is represented by formal logic.” He breaks with Russell, however, in denying that logic describes “very abstract or fundamental facts or truths about the world or thought or even language”; language provides only “the framework or scaffolding that makes statements of facts possible.” The limits of the linguistic framework are tautologies on the one hand, self-contradictions on the other; this is what keeps Wittgenstein within the realm of logic. “Russell was still yearning for some sort of intellectually satisfying certainty, whereas according to Wittgenstein the only certainty available is empty and formal.”

    An analytic proposition is self-evident only because it is a tautology. In this it is identical to a mathematical proposition, whether an axiom, postulate, or theorem. This returns mathematics to the apodictic certainty of pure abstraction, and it denies that any certainty can result from empirical investigation. As another member of the Vienna Circle put it, there are statements about facts and statements which “merely express the way in which the rules which govern the application of words to facts depend upon each other.” The latter “say nothing about objects and are for this very reason certain, universally valid, irrefutable by observation.” Therefore, as Wittgenstein writes, “Philosophy is not one of the natural sciences”; it aims only “at the logical clarification of thoughts” and not at “a body of doctrine.” It “does not result in ‘philosophical propositions,’ but rather in the clarification of propositions.” A decade later, A. J. Ayer concurred, asserting that “The traditional disputes of philosophers are, for the most part, as unwarranted as they are unfruitful.” Philosophers have been the dupes of language, goofed by grammar. Such a conception as ‘God’ is neither provable nor disprovable. ‘Moral philosophy’ is equally a contradiction in terms, as moral claims have no cognitive content but express nothing but attitudes and emotions. At best, logic will tell us if a moral judgment coheres logically with the moral standard asserted by the one making the judgment.
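    The claim that a tautology is “certain because empty” can be checked mechanically. The Python sketch below is my own illustration rather than anything from Schwartz or the positivists: it classifies a propositional formula by evaluating it under every assignment of truth-values, the procedure behind an ordinary truth table. A tautology is true in every row, a contradiction in none, and only the “contingent” remainder says anything about the world.

```python
from itertools import product

def classify(formula, num_vars):
    """Evaluate `formula` (a function of boolean arguments) under every
    truth-value assignment and classify it."""
    rows = [formula(*vals) for vals in product([True, False], repeat=num_vars)]
    if all(rows):
        return "tautology"       # true no matter the facts: certain, empty
    if not any(rows):
        return "contradiction"   # ruled out by the framework itself
    return "contingent"          # truth depends on the facts

# "p or not p" is true in every row of its truth table.
print(classify(lambda p: p or not p, 1))            # tautology
# "p and not p" is false in every row.
print(classify(lambda p: p and not p, 1))           # contradiction
# "p implies q" (written as "not p or q") varies with the facts.
print(classify(lambda p, q: (not p) or q, 2))       # contingent
```

    On the positivist view sketched in the paragraph above, mathematics and logic live entirely in the first two categories, which is precisely why they are “irrefutable by observation.”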

    Logical positivism, popularized (well, at least among academics) by Ayer’s 1936 book, Language, Truth, and Logic, held sway among philosophy professors well into the 1950s. “Much of the development of philosophy and methodology in the sciences since the 1950s has been driven by the criticisms of the doctrines of the logical positivists.” One of the critics would be Wittgenstein himself. These criticisms, however, were undertaken in “the spirit of the logical positivists’ motivation,” deploying many of the same “methods, standards, and attitudes.”

    Wittgenstein, for example, came to reject “the use of symbolic logic to dissolve philosophical problems.” He now took “the meaning of a statement” to be not “a picture of reality or a fact” but a tool, a matter of how the statement was used in “practical life.” This begins to move toward a reconception of language as rhetoric: “Language is used to elicit a response in listeners, to coordinate our activities, and so on.” This is the theme of Wittgenstein’s philosophic notebooks, published posthumously (he died in 1951) as The Philosophical Investigations. This shift distinguishes what scholars have come to call the “early Wittgenstein” of the Tractatus from the “later Wittgenstein.”

    After Wittgenstein, W. V. Quine took up the mantle. To understand meaning as use is to recall the American pragmatist school, led by John Dewey and William James. Radicalized, the tendency of pragmatism is to reject all attempts to ‘verify’ the truth of a statement; indeed, ‘truth’ itself comes to be guarded by inverted commas, too. Accordingly, Quine rejects even Karl Popper’s more modest principle of falsification, which asks us not to verify anything in accordance with some standard but more modestly to eliminate those claims which contradict that standard. Quine regards all claims, whether they are assertions of the existence of the Homeric gods or of the existence of physical objects, as “cultural posits.” ‘We moderns’ believe in the existence of physical objects only because that belief (as Quine puts it) “has proved more efficacious than other myths as a device for working a manageable structure into the flux of experience.” That is, meaning is Machiavellian in intent; it seeks to master Fortuna. “There is no first or fundamental philosophy that discovers truth or rather TRUTH underlying or separate from science”—which is a matter of ‘grasping,’ of touch, not of seeing (noesis) or of hearing (revelation).

    If an ‘analytic’ statement has a meaning independent of facts and a ‘synthetic’ meaning is grounded in facts, can we really distinguish between the two? Quine doubts it. What is a ‘cigarette’? Tobacco rolled in cylindrically-shaped paper? What about marijuana rolled in paper? And does a ‘cigarette’ need to be rolled in paper at all? Why not a tobacco leaf? “We begin to see the difficulty of distinguishing pure elements of the linguistic meaning of ‘cigarette’ from empirical facts or generalizations about cigarettes. The notion that the term ‘cigarette’ has a pure linguistic meaning begins to dissolve,” and with it its logical ‘analyticity.’ Quine says that the only way to assign meaning to ‘cigarette’ is to consider it within “the whole of science,” “the totality of our so-called knowledge or beliefs,” which he deems “a man-made fabric which impinges on experience only along the edges,” a (rather disorderly) “web of belief.” What “positivists and other modern empiricists failed to recognize” was this “holistic character of knowledge.” Human knowledge or science is never comprehensive (as it is in such a great systematizer as Hegel); when a given experience contradicts it, we can and should adjust it accordingly. “Even the laws of logic and mathematics are not immune to revision.” But there is no standard ‘above’ the myth or story of science; we are simply adjusting the “paradigm” (as the historian of ideas Thomas Kuhn calls it). More, we rarely discard an old paradigm for a new one. Kuhn writes, “Typically the adherents of the old scientific paradigm are not defeated by the results of experiments or observations. They are defeated by the grim reaper,” as “older scientists are replaced in positions of scientific power by younger colleagues with fewer intellectual commitments.” (Notice that this last point strengthens the intellectual hands not only of analytic philosophers but of post-modernists, ever alert to the ‘will to power’ Nietzsche so vehemently asserted to be the pervasive principle of all life.)

    Why, then, do Quine and Kuhn persist in favoring scientific paradigms over others? It can’t simply be a matter of “cultural predilections.” Rather, scientific paradigms have proven more accurate than, say, astrology as tools “for making predictions.” We judge science the same way “we judge any tool. How useful is it? How well does it work? Does it do the job for which it is designed?” Philosophy and philosophizing thus can assist the modern scientific enterprise, so long as philosophers abandon their pretension to see nature and stick to the task of sharpening the tools by which we grasp and shape it.

    All of this is quite reminiscent of Dewey, as Quine’s successor, Richard Rorty, insists in Philosophy and the Mirror of Nature (1979). Rorty, along with his contemporary Hilary Putnam, “labored to dismantle the traditional view of science as an attempt, by rigid and formal methods, to get an ever more accurate picture or mirror of a fixed lawlike world.” This led them to question whether science has a monopoly on knowledge. Why can’t painting, sculpture, music, literature, moral codes, “and perhaps even religion” “contribute to the web of knowledge”? These areas of thought, too, may offer “cognitive content with pragmatic value,” even if modern science remains “the Queen” of the knowledges.

    Respecting moral codes, for example, Putnam rejects the fact/value distinction, “which was almost as dear to the positivists as the analytic/synthetic distinction.” Not only are such words as ‘cruel’ and ‘kind’ “value-laden,” but so are “such factual sounding terms as ‘rational,’ ‘logical,’ ‘irrational.'” Putnam considers “his rescuing values from positivist exclusion to be his most important contribution to philosophy.”

    Putnam especially insists that this opening-up of knowledge, even combined with the rejection of a standard of truth ‘above the cave’ of our current myth, doesn’t entail relativism. “Denying that it makes sense to ask whether our concepts ‘match’ something totally uncontaminated by conceptualization is one thing; but to hold that every conceptual system is therefore just as good as every other would be something else. If anyone really believed that, and if they were foolish enough to pick a conceptual system that told them they could fly and to act upon it by jumping out of a window, they would, if they were lucky enough to survive, see the weakness of the latter view at once.” While “the very inputs upon which our knowledge is based are conceptually contaminated,” such contaminated inputs “are better than none.” And if “contaminated inputs are all we have, still all we have has proved to be quite a bit,” given the success of modern science in doing what its philosophic forebears promised, the conquest of nature for the relief of man’s estate.

    Meanwhile, in England, philosophy shifted its base of operations from Russell’s Cambridge to Oxford, where G. E. M. Anscombe, R. M. Hare, H. L. A. Hart, Charles Stevenson, and Gilbert Ryle worked. They too rejected Cambridge and Vienna Circle formalism, “tend[ing] to view symbolic logic as an attractive snare for the philosophical intellect.” They retained linguistic philosophy’s emphasis on language but turned (Socrates-like, it might be said) to the consideration of “ordinary language” and common sense. Again like their analytic-philosophy predecessors, they sought to elucidate the meaning of concepts, but focused their attention not so much on mathematics and science as on the more concrete realms of literature, the arts, and politics. Ryle held that “philosophy is messy and the messy problems it confronts cannot be resolved by mathematical formulas.” Even as Aristotle had observed that a cultivated man should not expect more precision in a field of knowledge than its subject-matter allows one to have, the “ordinary language philosophers” sought “precision and accuracy of thought and argument, not the precision of the physicist, chemist, or medical doctor.”

    Whether analytic philosophers have maintained that symbolic logic mirrors nature, or whether they have maintained that we find meaning only in uses, in pragmatic refinement of paradigms, they have committed what Ryle regards as the fallacy of dualism, what he derided as “the ghost in the machine” (a phrase Arthur Koestler would later borrow for a book title), the Geist or spirit/mind as distinguished from the body. Our minds are not separate from our bodies; they are only “organizations of behavior.” Ryle replaces logical atomism and logical positivism with logical behaviorism. We know what people are thinking and feeling primarily by their actions, which include their vocalizations, linguistic and otherwise. Although Ryle eventually questioned behaviorism, finding it insufficient to account for the experience of introspection, behaviorism has enjoyed a long if often pernicious life in the writings of social scientists seeking, as they do, observable and measurable phenomena to describe.

    Resistance to dualism entails a rejection of the superiority of mind over matter. This “reflected changes in society,” Schwartz remarks, and indeed it does look like a continuation of the trend toward what Tocqueville calls democracy, social egalitarianism—the pushing-down of aristocratic claims to rule in all endeavors of mind and heart. Would ‘aristocracy’ make a comeback? Would philosophers begin to call ordinary language philosophy ordinary-all-too-ordinary?

    Not entirely. “Since the decline of ordinary language philosophy in the 1960s, no single movement or school has dominated analytic philosophy.” In Schwartz’s estimation, the “most striking and impressive advances” by analytic philosophers came in the investigation of language. And this marked a return to nature, thanks to the work of the linguist Noam Chomsky, whose research into human beings’ innate propensity to language spurred a rethinking of that large portion of philosophic thought which took its cue from John Locke and his (now clearly mistaken) notion of the mind as a tabula rasa. Before Chomsky, the mathematician Kurt Gödel gave “the final deathblow” to the earliest, pre-Principia Mathematica form of analytic philosophy by showing that arithmetic “cannot be reduced to logic and set theory” because some true statements of arithmetic are unprovable within any such system and therefore “beyond the reach of human knowledge.” To put it in verbal terms (as the philosopher Alfred Tarski did), the sentence ‘Snow is white’ is true if and only if snow is white. That doesn’t get you very far. More importantly, it can’t, so don’t waste any effort in trying. It is well worth noticing, as Stanley Rosen does, that analytic philosophy tends to deny cognitive status to intellectual intuition or noesis, and that this is one of the “limits to analysis.” [3]

    To deal with such a conundrum, Tarski distinguished between “object-language”—the sentences in which we talk about objects—and “meta-language”—the sentences in which we talk about the object-language, in which (as Tarski writes) we “construct the definition of truth for the first language.” Donald Davidson elaborated on Tarski’s proposal, linking meta-language to Quine’s anti-positivist holism or Kuhn’s paradigm theory. Or, reaching back still further, Davidson writes, “Frege said that only in the context of a sentence does a word have meaning; in the same vein he might have added that only in the context of the language does a sentence (and therefore a word) have meaning.” Therefore, “an argument must always be interpreted in the way that makes the most sense given the context and other information we have”; “if we cannot find a way to interpret the utterances and other behavior of a creature as revealing a set of beliefs largely consistent and true by our standards, we have no reason to count that creature as rational, as having beliefs, or as saying anything.” Taken by itself, this would amount to a highly sophisticated form of classical conventionalism; it took Chomsky to bring nature back in, after it had been driven out by philosophic pitchforks.

    Chomsky rejected behaviorism, which “cannot explain our ability to learn a language” because languages are too complex to be learned by an organism starting at zero. “All normal humans are born with a universal grammar already hard-wired in their brains.” This strikes a blow against Quine, a friend of B. F. Skinner, the Harvard psychologist who became the most prominent behaviorist of the postwar decades. [4] Schwartz comments, “Logical behaviorism never had any plausibility. No definitions in terms of behavior and dispositions to behave were ever formulated nor could they be,” inasmuch as “thoughts and day dreams are interior and private and need never be manifested in anything exterior.” They are unverifiable by outside observers.

    Further, the natural languages that derive from the universal grammar natural to human beings resemble “a formal logical system.” Whereas Wittgenstein, and Quine following him, denied that the purpose of language “was to express our thoughts,” Chomsky “embraces exactly this view.” Despite his esteem for Quine as author of the metaphor of the “web of belief,” Davidson limited such conventionalism by endorsing Chomsky’s rationalist naturalism: “The dependence of speaking on thinking is evident,” he wrote, “for to speak is to express thoughts.” Schwartz sees that “the next step is not far: [T]he structure of language is isomorphic to the structure of thought, and the world.” The deeper philosophers dove into language, the more they moved toward the classical claim that man is by nature animated by logos, by speech and reason.

    From the philosophy of language, then, to the philosophy of mind. Here, philosophers noticed that “the behaviorist cannot forgo appeal to mental states,” which are “functional states of an organism,” that is, states that cause it to behave. External stimuli may ‘push’ the organism to do something, but only as mediated through that mental state (e.g., pain, pleasure, revulsion, attraction). The “functionalist” “views a mental state as a function that takes an input stimulus, plus other mental states, and generates an output that depends on both the input and the other mental states,” indeed, “the entire mental state of the organism.” A computer, which is an artifact imitating the human mind, performing some of the same functions (albeit more efficiently) does much the same thing: its “output depends on the input plus the program the machine is running.”

    This is good as far as it goes, but it “leaves out of the account the subjective nature of our mental lives.” Computers have no consciousness, “as far as we know.” An organism that had no consciousness would feel no pain, even if it were subjected to abuse—a point well known to all of us who have experienced the benefits of anesthesia. Although Davidson and many other philosophers are reluctant to abandon materialism, the idea of consciousness obviously causes a problem for them, even if their web-of-belief organicism disposes of behaviorist simplisme. Schwartz notes, “The problem of mental causation and the problem of consciousness are today the central problems in the philosophy of mind.”

    And with all this, even much-denigrated metaphysics, the bugbear of the early analytic philosophers, has reappeared in the thought of their successors. This “remarkable development” occurred thanks to “developments in formal modal logic in the 1960s.” Formulated by C. I. Lewis, “modal” logic “is the logic of necessity and possibility,” duly translated into symbolic-logic figures (a box symbolizing necessity, a diamond symbolizing possibility). As Quine immediately saw, and abominated, this suggests that symbolic logic might be made into a means of understanding things in nature, reviving the hated ‘essentialism’ of previous schools of philosophy, and with it metaphysics itself. Schwartz counts himself among those who find Aristotelian essentialism “intuitive and commonsensical,” very far from impossible to think about logically. “In embracing metaphysics,” he hastens to add, “we did not give up commitment to clarity, care, and careful sequential reasoning, nor to honoring science and mathematics.” That part of the analytic-philosophy mindset remains, well, conscious of itself.
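    The box-and-diamond notation mentioned above has a standard interpretation in the possible-worlds semantics developed in that period. The following is a conventional sketch of those truth conditions, not a formulation drawn from Schwartz’s book:

```latex
% Standard possible-worlds truth conditions for the modal operators
\begin{align*}
w \models \Box p \quad &\Longleftrightarrow \quad p \text{ holds at every world } w' \text{ accessible from } w \\
w \models \Diamond p \quad &\Longleftrightarrow \quad p \text{ holds at some world } w' \text{ accessible from } w \\
\Diamond p \quad &\equiv \quad \neg \Box \neg p
\end{align*}
```

    The duality in the last line is what licenses moving back and forth between claims of possibility and claims of necessity.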

    The centerpiece of contemporary metaphysical thought is the idea of “possible worlds,” that is, worlds that “could have been” but are not the “actual” world. The notion of possible worlds was originated by G. W. Leibniz, the renowned seventeenth-century metaphysician and indeed theologian. The down-to-earth example of such thinking begins with a “counterfactual”: a world in which, for example, Ralph Nader didn’t run in the 2000 presidential election, resulting in victory for Senator Gore over Governor Bush. Such a possible, but not actual, turn of events likely would have led to turns in subsequent events (would President Gore have prosecuted Gulf War II?). In this line of thought, a necessary proposition is true in every possible world, whereas a possible proposition is true in at least one possible world; an impossible proposition is true in none, and a contingent proposition is true in some worlds, false in others. “This is not about language. Even though we speak of possible world semantics, it is metaphysics.” And it reopens philosophic minds to essentialism: “I have the property of being a human being in every world in which I exist,” Schwartz writes. “I have the property of living in Ithaca in some but not others. I am essentially a human but contingently an Ithacan.” “An essence is a property or conjunction of properties that is necessary and sufficient for being a particular individual.” As Alvin Plantinga puts it, “If Socrates”—not to be confused with Schwartz, but the principle is the same—”had not existed, his essence would have been unexemplified, but not nonexistent. In worlds where Socrates exists, Socrateity is his essence; exemplifying Socrateity is essential to him.” Or, as Schwartz puts it, “In some worlds, some essences are exemplified, and others are not.” Mathematicians have dealt with such an idea for years in the form of probability theory in statistics. As the political writer George F. Will noticed, a Chicago Cubs hitter whose batting average is .203 isn’t necessarily ‘overdue for a base hit,’ whatever some cheerleading baseball announcer may say. In the actual world, he may strike out, even if there are possible worlds in which he saves the day with an RBI triple.

    On a loftier level, modal logic revives the ontological argument for the existence of God. In Descartes’ version, since God has all perfections and existence is a perfection, God exists. The modal version of the ontological argument is: “If it is possible that God exists, then God exists”—that is, in essence if not in actuality; “if it is possible that a necessary being who is omnipotent” and possesses the other attributes of the Biblical God exists, “then such a being exists.” This doesn’t prove the existence of God, but, as Plantinga argues, “it establishes the rational acceptability of belief in God—the rational acceptability of theism,” because even if one disbelieves that there can be a possible world in which “maximal greatness is instantiated,” believing that there is “is not irrational.” Therefore, “theism is not irrational” but rather a logical stance taken on the basis of a premise that cannot be proven or disproven, rather as we understand that ‘snow is white’ is true only if snow is white. “To my knowledge,” Schwartz writes, “no one has yet succeeded in demonstrating that the concept of God is impossible, self-contradictory, or nonsensical.”
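    Plantinga’s modal version of the argument, as summarized here, trades on the characteristic S5 principle that whatever is possibly necessary is necessary. Schematically, with G standing for ‘a maximally great being exists’ (a conventional reconstruction, not Plantinga’s own wording), the argument runs:

```latex
\begin{align*}
&1.\ \Diamond \Box G && \text{premise: possibly, a maximally great being exists necessarily} \\
&2.\ \Diamond \Box G \rightarrow \Box G && \text{characteristic S5 principle} \\
&3.\ \Box G && \text{from 1 and 2} \\
&4.\ \Box G \rightarrow G && \text{axiom T: what is necessary is actual} \\
&5.\ G && \text{from 3 and 4}
\end{align*}
```

    Everything turns on premise 1, which is precisely the premise Schwartz says can be neither proven nor disproven.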

    If nature has returned to philosophy, precisely through the thinking-through of analytic philosophy, what is nature? Putnam defines natural kinds as “classes of things that we regard as of explanatory importance… held together [by] deep-lying mechanisms.” According to the theory of “reference”—the account of how our words refer to things—meaning has “intension” and “extension.” “Intension” is what we mean to say by using a given term; the word ‘lemon’ means a certain “conjunction of properties.” “Extension” is a reference to the things to which that meaning applies. The term ‘lemon’ refers to an object currently sitting in the fruit and vegetable section of Market House, among other objects, many of them not lemons. In logical terms, the concept corresponding to the term is its “intension,” and it “must always provide a necessary and sufficient condition for falling into the extension of the term.” “Analytic” truths “are based on the meanings of terms.” From Hume onwards, “all necessity was construed as analyticity or somehow based on linguistic conventions.” This is what’s behind Hume’s questioning of the theory of causality; there is no sort of “extra-linguistic necessity.” That’s what Wittgenstein has in mind when he claims that essence is expressed by grammar, by a linguistic convention, and need not apply to the physical world.

    Keith Donnellan argues otherwise. To describe something, he remarks, you may make an “attributive” description or a “referential” one. An attributive description is one in which I infer a characteristic of, say, a person without knowing who the person is. If someone explains E=mc² to me, I might think that the person who first formulated that must have been smarter than I am, but I might not know it was Albert Einstein. If I met Einstein and had a conversation with him, and then described him as being smarter than I am, I would be defining him referentially, now having a definite person in mind. Bringing in ‘possible worlds’ theory, Saul Kripke adds that when I say “Albert Einstein” I mean “the same person whether or not he… satisfies some list of commonly associated descriptions.” Those who knew Einstein as a child attributed no genius to him but nonetheless meant the same dude as those who later described him quite differently. Kripke distinguishes between necessity (a category in metaphysics), aprioricity (a category in epistemology), and analyticity (a category in linguistics).

    The same goes not only for persons but for natural kinds. Under the older theory, “the concept associated with a term functions like the set of identifying descriptions supposedly associated with an ordinary name”; “gold” is yellow, shiny, metallic, and so forth. It can be analyzed linguistically, broken down into these other words. Kripke observes, however, that such a description doesn’t truly define gold. Only “its atomic structure defines whether some stuff is gold.” Gold is gold metaphysically, that is, it is gold in all possible worlds. Anything that does not have that atomic structure isn’t gold, “even if it satisfies some list of superficial features that we think” characterize it. Our certainty in this classification derives not from “knowledge of a definition” but from “a well-established empirical theory.” It is not analytic, in the analytic-philosophy sense, but “if it is true, it is necessary” metaphysically.

    Schwartz provides another way into the question, distinguishing physical possibility and necessity from metaphysical possibility and necessity, and both from logical possibility and necessity. An alternate world that is physically possible must have the same natural laws as our world. An alternative world that is metaphysically possible might be physically impossible in our actual world. An alternative world that is logically possible must only meet the criterion of “logical consistency.” So, for example, it is physically possible for Schwartz to have lived in San Francisco, as this would violate “no natural laws”; what is more, this is also metaphysically and logically possible. It is physically impossible for Schwartz to swim across the Atlantic Ocean; it isn’t metaphysically or logically impossible, however. It is not physically possible for Schwartz to be an alligator, nor is it metaphysically possible “(assuming that I am essentially human)”; it is nonetheless logically possible, for example if I (no doubt unwarrantedly) use the term ‘alligator’ as a metaphor to describe Schwartz’s personality.

    Speaking of character, analytic philosophers have also begun to admit ethics into their purview, along with metaphysics and nature. G. E. Moore’s moral-philosophic counterpart to the Principia Mathematica was his Principia Ethica, published in 1903, several years before the Russell/Whitehead opus. Moore denied that there can be any such thing as moral philosophy; his book centers on what he calls “metaethics,” the “logical and analytical study of ethics” by means of epistemological, logical, and metaphysical categories. Such categories tell one nothing substantive about right and wrong, good and bad because (according to Moore) such ethical topics have no cognitive content if one considers them epistemologically, logically, and metaphysically. Moore argues that ‘good’ is an indefinable term, rather like ‘white’ or ‘yellow.’ Like those terms, it cannot be analyzed. But unlike shades and colors, it can’t be “perceived by the senses,” either; it is “apprehended by moral intuition.” Because it can’t be perceived by the senses it can’t be natural; in claiming this, Moore affirms Hume’s denial that we can derive ‘ought’ from ‘is’. Philosophers who try to derive ethics from nature commit the “naturalistic fallacy.” The moral intuition amounts to “personal affection” and “appreciation”—to ‘values’ as distinct from ‘facts.’ A later writer in Moore’s line, C. L. Stevenson, subsumed ethical discourse under rhetoric: “The point of ethical discourse is to influence not describe.” A. J. Ayer agreed.

    Analytic philosophers began to change their minds after World War II, which conflict must have imposed a fairly severe challenge to moral subjectivism. R. M. Hare hoped to stay within the Moore-Stevenson-Ayer orbit, but in the process put an end to their ethical emotivism by remarking (in an unwittingly Aristotelian way) that ‘good’ in morality means essentially the same thing as it means in other areas. We mean ‘good person’ in more or less the same way we mean ‘good dog,’ ‘good car,’ ‘good movie.’ We don’t mean only that we like that person, dog, car, or movie; we also mean that it has some intrinsic quality that fulfills the definition of the noun we use to classify it. And so, to use Schwartz’s example, Conan the Barbarian’s understanding of the good life as the victorious life cannot be right, as victory in itself contributes nothing “to human flourishing,” does not fulfill the meaning of the noun ‘human.’ “It is mere self-interest.” But there is more, one might argue. Insofar as mere victory in battle might deform the person who achieves it, making him more inhuman than before, it is not even self-interest, rightly understood.

    This is the kind of thing G. E. M. Anscombe and Philippa Foot have in mind when they assert that moral terms have factual content, that a ‘value’ can partake of facticity. They founded “a school of ethics, based on Aristotle’s ethics, called virtue ethics,” which emphasizes “moral character rather than moral oughts and goodness” in the utilitarian and also in the Kantian sense. Schwartz somewhat puzzlingly goes on to laud John Rawls, a neo-Kantian, as a veritable “Philosopher King,” although this does at least bring political philosophy back into the ethical universe, as Aristotle had seen it to be.

    Looking at the trajectory of analytic philosophy as described by Schwartz, one finds it a remarkable enterprise indeed. What started out as a philosophic method that eschewed metaphysics and morality as sub-philosophical realms devoid of rational content has slowly uncovered doctrines with affinities to Aristotelianism. That is, ultra-‘modern’ analytic philosophy has begun to turn modernity away from itself and back toward the ‘ancients.’ Will postmodernists take a similar turn? As the old saying goes, ‘From your mouth to God’s ears’—God having now been reintroduced to polite philosophic conversation.

     

    Notes

    1. Jacob Klein: Greek Mathematical Thought and the Origins of Algebra. Eva Brann translation. New York: Dover Publications, 1992.
    2. V. I. Lenin: Collected Works. Volume 14, pp. 17-362. Moscow: Progress Publishers, 1972.
    3. Stanley Rosen: The Limits of Analysis. South Bend: St. Augustine’s Press, 2000.
    4. An acquaintance of mine once lived next door to a famous behavioral psychologist. The great man had gone so far as to place his infant son in what was called a ‘Skinner box’ for substantial periods of time. A Skinner box was a controlled environment in which an animal (very often a rat or a pigeon) would be rewarded for performing a certain action, not rewarded or even punished for failing to perform it. The last time my acquaintance saw him, the lad was chasing the family cat around the back yard, a syringe in hand. This suggests that errors in epistemological theory may have startling actual-world consequences, although admittedly it doesn’t rigorously prove that they do, or must do.

     

    Filed Under: Philosophers

    George Washington, Nation-Builder

    November 7, 2019 by Will Morrisey

    Edward J. Larson: George Washington, Nationalist. Charlottesville: University of Virginia Press, 2016.

     

    Americans understood themselves as “a people” by the 1770s, at least, as the Declaration of Independence most famously indicates. But until the Declaration they couldn’t think of themselves as a self-governing people, a nation in full. Securing that nationhood took years of war, constitutional architectonics, and commerce both economic and social. The merit of historian Edward J. Larson’s compact and incisive essay begins in selecting for consideration the ‘middle’ years of Washington’s career, those between the war and his inauguration as our first president. In them we see not Washington the general or Washington the commander in chief, but Washington the adroit and great-souled politician, the man who used the fame he won during the war to take his country from domestic unrest and geopolitical insecurity to what he called an empire, what his sometime colleague Thomas Jefferson called an empire of liberty. Jefferson wrote the Declaration; Madison, James Wilson, and their colleagues wrote the Constitution; but Washington took the indispensable steps that enabled independence fought in defense of natural rights to issue in the security of those rights within a framework of constitutional and commercial republicanism.

    This book’s “simple thesis,” Larson writes, holds that Washington was “the leading nationalist of the late Revolutionary era in American history.” By “nationalist,” he doesn’t mean blood-and-soil statism or even Burkean traditionalism but popular self-government. He commits an important misstep at the outset, saying that Washington “believed in the Lockean natural right of free men and the republican ideals of government by the consent of the governed”; obviously, if right is natural, it must belong to all men, as the Declaration affirms and as Washington recognized by emancipating his slaves in his will. Fortunately, this is just about the last mistake Larson makes, and it isn’t foundational to his argument, which centers primarily on practical policies not political theory. And he is exactly right to link Washington’s understanding of natural right to his commitment to the founding of a republican regime.

    Having fought major battles in five states and coordinated troop movements in all thirteen, Washington understood American politics from “a national perspective” well before he re-entered civilian life. After the war, the English continued to prey upon American shipping and to occupy New York City, Charleston, and Savannah—all major ports, vital to American commerce. The union of the states, first asserted in the 1774 Articles of Association, weakened without a battlefield enemy on the ground who daily reinforced the sentiment of hanging together, lest we hang separately. Disunion led to reluctance by states to pay debts incurred during the war to the federal government, and this led to a regime crisis. Unpaid soldiers will grumble. Officers in Newburgh, New York became restive. They received some encouragement from such nation-builders as Robert Morris and Gouverneur Morris, who hoped that fear of a coup would spur the states to pay up. Major General Alexander McDougall was the point man for the proto-rebellion, threatening General Henry Knox with a refusal to disband the troops until payment was received.

    Washington understood that such a rebellion would threaten republicanism itself by challenging civil authority. He decided to employ a peaceful form of what military men call tactical surprise, the civil equivalent of the Battle of Trenton. He made an unannounced visit to the officers’ meeting in Newburgh on March 15, 1783, reading what one historian has called “the most impressive speech he ever wrote.” Taking himself as his example, he cited “the great duty I owe to my country” to obey civilian authority, a duty deriving from the principle of government by the consent of the governed, itself derived from the equal natural rights of all human beings. Appealing to honor, the military virtue par excellence, he exhorted the officers to “express your utmost horror and detestation of the Man who wishes, under any specious pretences, to overturn the liberties of our Country, and who wickedly attempts to open the flood Gates of Civil discord, and deluge our rising Empire in Blood.” Who will rule this rising empire? Military men? If so, was Washington himself not the highest-ranking and most-honored such man in America? And had he not fought with them as comrades throughout the early defeats and hardships, sharing with them the final triumph? Instead of calling them to lay down their arms, could he not have led them on a march to the capital, taking over the government by force? He had done the opposite of that. The officers backed down.

    “As word of the encounter first reached Congress and then spread across the land in newspaper accounts, Washington gained yet another laurel. Already first in war, he was now first in peace and clearly first in the hearts of his countrymen. He had no rivals.” Washington “use[d] his platform as America’s leading citizen to call for quickly and fairly compensating the troops, and ultimately for building a strong national union that could support those payments and some form of permanent military establishment”—an establishment which, going on 250 years, has yet to attempt a coup d’état against the people it is charged to protect or the civilian government those people have consented to be governed by. Working against any foolish potential backlash against the military as such, Washington advocated the maintenance of a small standing army, with a well-organized militia to supplement it, on the grounds that it could defend America’s northern border with British Canada and its northwest territories against Indian tribes and nations allied with the British.

    Washington’s call for national union went well beyond national defense. In his 1783 Circular Letter to the states, he associated a stronger central government with the “happiness” of those states as parts of that union. “It is only in our united Character as an Empire, that our Independence is acknowledged” by foreign powers, and it is only by thinking of ourselves as “citizens of America,” by establishing our “National Character” that we can become “a happy Nation,” one so situated as to secure our natural rights of life, liberty, and self-government. By resigning his military commission at the national Assembly Chamber in Annapolis near the end of the year, and by declaring his intention to retire to private life, he astonished the world (and most particularly George III). As the “second Cincinnatus,” he “became the first American,” no longer merely a Virginian of great distinction but “a world-renowned personification of republican virtue.” In one of his many well-chosen quotations, Larson cites Thomas Jefferson: “The moderation and virtue of a single character probably prevented this revolution from being closed, as most others have been by a subversion of that liberty it was intended to establish.”

    Returning to Mount Vernon, Washington put his long-neglected household in order, then turned his attention to his properties along rivers in southwestern Pennsylvania and today’s West Virginia. He discovered that a grist mill he owned had been mismanaged and that a Calvinist sect called the Seceders had claimed squatters’ rights on another of his tracts since 1773. For his pains, a group of Indians attempted to capture him at Great Kanawha, along the Ohio River. These unpleasant surprises galvanized his ambition to empower the federal government to permit orderly settlement of the West. “If Congress could open, sell, and settle these lands and thereby gain authority and revenue, it could bolster the union. If not, it risked losing them to a foreign power, and with them, much of the reason for a national government.” Otherwise, why would the settlers in the West not turn to Spain, which ruled the West’s geo-economic linchpin, New Orleans, and to Great Britain, which ruled the Great Lakes and the St. Lawrence River, for both security and trade? “The touch of a feather, would turn [the Westerners] either way,” he wrote. To secure this portion of the Union, not only a well-funded military force but east-west transportation routes would be indispensable—the latter to be secured by linking the North Branch of the Potomac River to the headwaters of the Ohio River. To this end, he lobbied the Virginia and Maryland legislatures to establish a private toll route on the Potomac, while lining up investors. He played the role of what we would now call a ‘rainmaker’ with his usual skill, and by January 1785 “Washington had his company and soon would be elected its first president.” He proved a less successful entrepreneur, however, not because he lacked business acumen but because the Erie Canal eventually became the main east-west corridor, due to its better positioning, closer to the commercial entrepôts of New England.

    Nonetheless, the project earned a substantial political profit. In obtaining the Mount Vernon Compact between Virginia and Maryland to cooperate on Potomac River commerce, he had partnered with the young Virginia state legislator James Madison, whom he enlisted in his broader intention to strengthen the Union. “We are either a United people, or we are not”; “if the former, let us, in all matters of general concern act as a nation,” with “national objects to promote, and a national character to support.” Madison concurred, proposing that the Virginia legislature “call a general meeting on interstate commercial regulations to be attended by delegates from all thirteen states.” Representatives of five states did attend the meeting, held in Annapolis in September 1786. This became the first step toward calling a national convention to revise the failing Articles of Confederation. But such a convention would need not only Washington’s support but his attendance, if it were to attract delegates from all the states. Madison and Washington’s former military aide Alexander Hamilton went to work on the general—who, in the end, needed little persuasion. Not only was the general well aware of the geopolitical dangers to Americans, he also worried about internecine conflicts, especially over borders and commerce, and, “perhaps most important,” the failure of states “to protect individual liberty and private property.” So were many of his fellow Virginians, who chose him to lead their delegation at Philadelphia. For his part, Washington worried that the convention wouldn’t be serious—that is, genuinely constitutional.

    As he had done with his officers during the war, Washington consulted his most trusted advisers before going into battle. Madison, Knox, and Jay all advocated “a truly national government” with “separate legislative, judicial, and executive branches” and a bicameral legislature. Madison also argued for a fully articulated federal judicial system, which would “avoid local bias in expounding national laws and deciding cases involving citizens of different states.” All agreed that “in areas under its domain the national government must have the power to act directly on the people, not just through the states.” Washington “embraced their proposals and made them his own,” while wondering if, as he said to Jay, “the public mind [was] matured for such an important change.” He called the convention as “the last peaceable mode” of “saving the republic.” Virginia delegate Edmund Randolph was designated to present what was immediately labeled “The Virginia Plan,” which in most aspects carried the day, with some compromises at the insistence of the smaller states. Respecting the office which everyone expected Washington to occupy, the new constitution broke with parliamentarism, electing the president not by legislative vote but through the novel Electoral College, which, tellingly, would dissolve at the end of each presidential election cycle, making the chief executive entirely independent of any standing set of officeholders in the national or states’ governments. Governmental powers would thus be not only separated but balanced.

    At times bitter and hard-fought, the ratification contests in the several states saw determined opposition to the new constitution from advocates of the Articles of Confederation system. “Federalists would rely on the public’s trust in Washington to carry the day,” and it did. Further, once ratification was assured, it was crucial to ensure that anti-federalists didn’t control the first Congress. To this end, Washington set down three “main goals for the United States under the Constitution: respect abroad, prosperity at home, and development westward”—goals obtainable by policies of “effective tariffs, sound money, secure property rights, and a nonaligned foreign policy.” As Washington put it, “America under an efficient government, will be the most favorable Country of any in the world for persons of industry and frugality,” a country not “less advantageous to the happiness of the lowest class of people,” thanks to the vast tracts of land available in the West. “He saw it as a model for individual liberty and republican rule everywhere,” and candidates for the first Congress under the Constitution would see in that model what amounted to an exceptionally attractive political platform.

    After his election, Washington journeyed to New York, stopping in Philadelphia and Trenton. At a City Tavern banquet in his honor, the diners raised their glasses to the toast, “To Liberty without licentiousness,” a republican slogan if ever there was one. At Assunpink Creek, near Trenton, where Washington’s troops had rounded on British forces in January 1777, a banner unfurled to read “The Defender of the Mothers, will be the Protector of the Daughters.”

    This resembled a king’s progress across his realm, with one critical exception. The crowds who greeted the new president didn’t bow to him; he bowed to them. George Washington had become “the master of the correct gesture.” (Adams called him “the finest political actor he had ever seen.”) The regime he had been instrumental in founding lodged sovereignty in the people, not in the government, and not in some elected monarch.

    And the regime worked, far better than the Articles regime had done. Treasury Secretary Hamilton worked out a financial system capable of paying the war debt. Secretary of War Knox organized for war against the Western Confederacy, an alliance of Indians which had blocked American settlement in the rich lands of the Ohio Valley. John Jay negotiated a treaty with Britain that got the British out of their forts in the Northwest Territory. North Carolina and Rhode Island finally ratified the Constitution; Tennessee and Kentucky also joined the Union. Congressman Madison floor-managed the Bill of Rights through Congress, “with Washington’s support.” Secretary of State Jefferson “devis[ed] a broad regime of federally protected intellectual property rights,” which would secure the innovations on which manufacturing and commerce depend.

    Controversies over the national bank and Jay’s treaty caused tensions between Washington and his fellow Virginians Jefferson and Madison, who eventually began “a formal national political party with a states’-rights bent.” Thus what began as a controversy between big states and small states during the ratification contest morphed into a controversy between finance and agriculture by the turn of the century, a controversy that would eventually morph into the controversy between slavery abolition and slaveholding which nearly destroyed the Union. Far-seeing George Washington manumitted his slaves in his Last Will and Testament; had enough of his fellow slaveholders done that, there might have been no Civil War.

    Filed Under: American Politics

    Planning an American Islamic Republic

    October 29, 2019 by Will Morrisey

    Shamim A. Siddiqi: Methodology of Dawah in American Perspective. Brooklyn: The Forum for Islamic Work, 1989.

    Mohamed Akram al-Adouni: “An Explanatory Memorandum on the General Strategic Goal for the Group in North America.” April 1991.

     

    The late Shamim A. Siddiqi (1928-2018) served for many years as the moving spirit of the Islamic Circle of North America—a New-York-City-area organization not to be confused with the Islamic Society of North America, which was founded by the Muslim Brotherhood and controls the Islamic Learning Foundation. A Muslim born in what is now India, he fled to Pakistan with his family after the Partition in 1947. He admired and met with the most prominent Pakistani Islamist, Mawlana Mawdudi, eventually carrying the Islamist message to the United States, where he lived for most of his life.

    He states the core of that message, its purpose, in the opening sentence: “The book in hand is an effort towards the achievement of our cherished goal, i.e., how to make Allah’s Deen dominant on this earth.” Such dominance will lead to the Falah or deliverance “of the entire mankind” [sic], and the “methodology” outlined will cause the call [dawah] to all the peoples of North America to join the Islamic ummah or body of believers to be “properly projected and penetrated deep into the society.” Those peoples, but especially the people of the United States, “are in need of a superb ideology to counteract the menace of their social evils, economic upheavals, racial/color discrimination, political corruption and socialist/communist hegemonies on a global level.” Once converted, Western peoples generally will rise to the top of the worldwide Islamic movement, given their technological superiority to the rest of the world. The task is to show “how to make the message of Islam acceptable to the West,” thereby freeing “the Muslim world” from Western interference and intervention, “pav[ing] the way for the emergence of a global Islamic order.” He assures his readers that “it is Allah who guided my thoughts, my thinking process and its development in its entirety. Nothing in this book is mine. Everything is from Allah.”

    With all Muslims, Siddiqi holds up the Qur’an as God’s “last and final Guidance” for a humanity that is otherwise “weak, ineffective and in a pitiful state,” with each individual “fearful of his own species” and nations “skeptical of each other.” He finds one hopeful sign in Afghanistan, where, as of 1989, the Taliban sought to establish “an ideal Islamic state, to serve as a model for the rest of mankind.” In a post-9/11 “Updating Note,” he praises the Taliban for having “tactfully disarmed the people” of the country and “establish[ing] the rule of Sharia within their domain.” The subsequent invasion of Afghanistan by “the anti-Islam Western hegemony” and its regional allies came, in his telling, under the pretext of counteracting Osama bin Laden’s al-Qaeda. The “very tragic drama of September 11 was staged”; it was blamed on Bin Laden and the Taliban “without the least ascertaining the facts and looking elsewhere who were and are the greatest beneficiaries of this tragedy”—whom Siddiqi carefully leaves unnamed. Any attempted regime change in Afghanistan in the aftermath of the defeat of the Taliban will fail because “Stooges cannot fill the gap.”

    Be all of this as it may, Siddiqi returns to the project at hand—changing the regime of the United States and giving it “an alternative way of life.” This “is the responsibility of Muslims who fortunately migrated to Western countries after the Second World War, when there was a dearth of labor in Europe and America and the immigration restrictions were eased.” This must be done because “the sheiks and kings of the Middle East are all in the pockets of the Western powers, especially the U.S.,” which aids those rulers in their attempts “to crush the Islamic forces ruthlessly wherever they raise their voice for establishing Allah’s Deen.” “This dirty game has been going on throughout the Muslim world unabated for the last two hundred years,” and true Muslims must not tolerate it. “This will be possible only by building Islamic Movements in the Western countries in the homelands of those who have caused and are causing incalculable loss to the Muslim world and casting baseless aspersions against Islam day in and day out.” Muslims must “remove the prejudices of the West against Islam.” To do this, they must play “a game of strategy” whereby they “find out and create new friends for Islam and its cause on the side of the enemy, inside and at the rear of the forces fighting against Islam.”

    This is right because “sovereignty belongs to Allah alone and denies all authorities besides Him…. Only Allah-given laws are to be accepted, practiced and implemented in an individual’s life and established in the society where the Muslims live”—the United States now being one such society. “A Muslim has to put all that he has either to change the society into an Islamic society or state or be perished for it [sic]. A Muslim has no other choice.”

    Siddiqi lays down the basics of dawah as presented in Muslim Scripture. Man has free will, but he must choose rightly, according to God’s commands: “The achievement of both heaven and hell depends on the treatment which one accords to the guidance from the Creator.” Free will exists because God intends to “test him and ascertain who among the human beings accepts Allah and His Guidance by his freewill which will qualify him to be the citizen of the next world.” Choosing the right way of life or regime on earth will entitle you to citizenship in the best regime, hereafter. The Prophet Muhammad struggled to “rout out” the wrong way of life and found Allah’s Deen in “the body politic of the Arabian Peninsula”; “this was to serve as a prelude to make Al-Deen-Al-Islam dominant in the rest of the world.” Or, as Muhammad himself said, his disciples must act to “bring Arabs under your control and bring the non-Arab world under your domination [La Yuzharahu].”

    In this “revolutionary” struggle, the idolators’ “political hegemony” was threatened. These tribal chieftains were given the chance to change their regimes, as Muhammad, using Mecca as his base, delivered a series of dawah speeches to them. “We should realize the magnitude of this Dawah effort. Continuously for ten years, every tribe was echoing with the challenge of [Muhammad’s] message.” Then, after establishing a new base at Medina, he fought battles against those who resisted. He also undertook “a letter-writing campaign” to “all the Kings and rulers around him,” displaying “the political sagacity and statesmanship of the greatest order,” warning them that “Arabia was not weak” and “was now dominated by a revolutionary Movement” which non-Arabs were welcome to join—or else.

    The Prophet’s way of life exemplifies the way in which all Muslims should live. “He took advantage of every opportunity to expose and project [his vision] to the people around him,” making the objective “supreme” in his life; “everything was subservient to it.” “Dawah work, whether in America, Europe or elsewhere in the world must have this clear objective in the mind of the Da’ee [proselytizers] that they are out to establish Allah’s Deen in the land or the society in which they are living.”

    For this task, “Allah Himself poured upon [Muhammad] through startling revelations of Al-Qur’an in bits and pieces at the time of every need, every difficult situation, every turning point and every calamity in the shape of short and long, forceful, and eloquent verses to meet the situation.” He commanded Muhammad to “develop and build up [a devoted and dedicated] character in each individual who responded to his call in the affirmative.” His message “most attracted the youth.” Opposition came not only from tribal chiefs but also from tribal elders and parents, who “realized the revolutionary aspect” of the message. But convinced that their choice was between an eternity in Paradise or Hell, “no amount of torture, oppression or hardship could move the believers even an inch from their position.” Persecution strengthened them, as it winnowed out the weak and enabled Muhammad to “pick up the best souls from the society of Mecca for the cause of Allah”; bribery and other inducements did not tempt such souls. To them, Allah “was the dearest of all, dearer than their parents.”

    Muhammad’s Meccan converts numbered in the dozens. Threatened with death at the hands of his enemies, he listened to Allah’s command to migrate to Medina. The Hejira “sets a model to Muslims all over the world to migrate to a place where there are better prospects to practice, preach and establish the Deen of Allah. The migration of Muslims to America today presents a parallel situation provided the Muslims reorient the objective of their stay in this country and live by the commitment which they have with their Creator, Allah” to “spread His Deen.” At Medina, Muhammad took three steps to establish his base: building a mosque “to serve as a place of worship, a meeting ground, a guest house, a parliament, a conference hall, a court room, a training camp”; establishing a covenant with local Jews “through which the power and the mischief-mongering habit of the Jews was neutralized” and the “political and judicial authority” of the city was transferred into the hands of Muhammad; and founding “The Brotherhood,” whereby all Muslims “share[d] the economic burden” of their newly-founded political community. This enabled Muhammad to organize Medina into “a military camp and the Muslims into a very active mobile military force,” aided by “a very effective system of gathering information” (as we would say, ‘intelligence’) about surrounding tribes. Muhammad’s “political maneuvering and many preemptive military actions were thus always timely and befitting to the development of events.” “The stage of Peaceful Resistance was over,” and Medina became “a real Islamic State.”

    “Through well-planned diplomatic activities,” Muslims “dismantled the enemy’s trust among themselves,” dividing them and preparing them for the kill. At the same time, “determined to carry out his mission to logical conclusion,” Muhammad never ceased conveying the “Qur’anic injunctions revealed to him” by Allah, guiding “the transformation of society from ignorance into Islam.” In this way he “was constantly busy in building, developing and consolidating the team of his devoted and dedicated workers into a dynamic force of the Islamic Movement.” “Only such a team of workers would be capable of establishing Allah’s Deen in today’s world.” Thus Siddiqi presents himself as modeling Muhammad in contemporary America.

    By the eighth year of the Hejira, Muhammad had 10,000 followers under his command. Fortified by a peace treaty with his enemies and with God’s protection, Muhammad accelerated his dawah efforts, re-entered Mecca and converted “the entire population of Mecca.” Now, “the Deen was only for Allah.” “The Islamic state of Medina which had the authority all over the Arabia, was now a power to be reckoned with,” and Rome’s Caesar “was alarmed” at “this growing power at the Eastern frontier of his empire.” Soon, “the frontiers of the Islamic State [came into]… open confrontation with one of the superpowers of the time.” Although remaining “hypocrites in Medina” hoped to exploit this confrontation to “administer a fatal blow to the Movement in case [Muhammad] could be defeated by the Roman Empire,” they “were finally warned to accept Islam or be ready to fight,” “either to accept Islam or pay Jizyah [a tax on non-Muslims] and live a life of second class citizen [dhimmitude] under the bounds and bounties of [the] Islamic State.” That settled the matter, and Muhammad took the opportunity to practice dawah, universally. “This directive is binding on all Muslims until doomsday. It is now incumbent upon all Muslims to deliver the message of Islam to mankind and struggle their best to make His Deen dominant, irrespective of where they are and what they are doing.” In the late twentieth century, “this is now the only way left for Muslims to regain the leadership of this world.”

    Accordingly, Siddiqi devotes his central chapters to the United States. Dawah “is the primary job” there. To accomplish it, Muslims must organize and educate themselves for that job. American Muslims find themselves in the stage of jihad called “peaceful resistance.” They should wage “a relentless war against immoral practices, drugs, pornography, alcoholism, racial discrimination, homosexuality, and other[s] like these.” Not only will this struggle bring the Da’ee into “direct contact with the people of the land at a grass-roots level,” it “may also offset the prejudices of Judeo-Christians against Islam,” leading them to “cooperate with the Muslims with better understanding and with a soft corner in their hearts.” By so “creat[ing] the necessary goodwill among the people,” the Da’ee “will pave the way for the spread of Dawah deep in the society which otherwise would not be possible.”

    Although this “initial stage” may prove “smooth sailing,” that won’t last. “Alarming signals will be raised by the so-called ‘free press,'” and “the Judeo-Christian anti-Islam propaganda machinery will then let loose its game of hate against Islam and the mission of the Prophet Muhammad,” filling the air with “baseless allegations” against them. Fanatics, reactionaries, conservatives, fundamentalists, and terrorists: the name-calling will begin, to be faced “with patience, cool-minded temperament, good behavior and exemplary character.” As “the Movement” begins to “penetrate deep into the hearts of the common folk,” a “counter-offensive campaign against the false propaganda,” coupled with a quest for “legal protection from court for fundamental human rights to propagate what its adherents believe to be correct and to profess the same through democratic, peaceful and constitutional means,” can begin. Nonetheless, circumstances will worsen; “a period of trial is a must and is inevitable for Muslims wherever and whenever they rise and try to build the Islamic Movement for the establishment of Allah’s Deen”: “this is the logical consequence or the reaction of the society whose values and fundamentals of life are different from those of Islam.” Fortunately, the very character of the American regime, mere human artifacts though its laws may be, “provid[es] the opportunity to individuals or to a group of people to profess, practice and propagate any ideology of their choice.” Thus “the Muslims of America will also be free to mobilize themselves and carry out the program of Dawah Illallah [calling the people to the fold of Islam] to every nook and corner of America,” there being “nothing to hold them back” in “an almost congenial environment for Muslims to work,” at least initially. In this way the Muslim task will be easier in modern America than it was in tribal Arabia, with its “society of ignorance,” its lack of recognition “for fundamental human rights.”

    Opposition “will come from the vested interests in the society,” such “modern idolators” as “the secular press cum media, the agents of capitalists, the champions of atheism (Godless creeds), the missionary zealots and extremely influential Jewish lobby of America.” These interests notwithstanding, “the Peaceful Resistance will… go on winning the hearts, the minds and the imagination of the people all around. There will be no status quo.” This campaign will prepare the way for the final two stages. Eventually, Allah will provide some territory in which true Muslims establish the Deen. Muslims worldwide may then emigrate to that territory. This may be in the United States, or not. In due course, Allah will make his choice manifest. “The Islamic Movement of America, resorting to intensive Dawah work, fighting Munkar [XXXX], rendering useful services to common folk through various projects of service-to-humanity, may influence a region or a state overwhelmingly,” resulting “in getting political strength through state legislatures and gubernatorial elections.” Muslims can then “try to make it into a model Islamic society within the power available under the constitution of the U.S.A. and what it does not prohibit.” In turn, “this will pave the way to get hold of other states in a like manner. Thus, without disturbing or violating the constitution of the U.S.A., they can prepare the ground for the emergence of Islam as a way of life acceptable to the electorate of this country,” sending representatives to Congress and establishing “a strong lobby in Washington for the promotion of Islam and its cause in this country as well as elsewhere in the world.” Siddiqi insists, “This is not daydreaming.
This is possible as well as feasible, if the Muslims are determined to play their part as Muslims in this country,” showing the American people that “the only way to get their past sins pardoned by God” and “to enter into paradise after death” in accordance with “the American way of life” is peaceful conversion to Islam. “This process is wide open in this country. It is anybody’s game.”
    “The establishment of ‘God’s Kingdom’ on earth will not be a distant dream. It can emerge in the U.S.A. within the next two to three decades,” if Muslims take care not to test the limits of American constitutional law prematurely.

    Thirty years after Siddiqi published those words, this has not happened, whatever inroads political Islam may have made since the 1980s. Siddiqi sees the difficulties, soberly warning Muslims against “the fallacies of their wishful thinking.” At present, “Dawah work is pretty much limited to Afro-Americans and some other ethnic minorities,” and usually to those in prison. Worse, “the revolutionary aspect of Islam is rarely brought before the new converts, as in most of the cases the Da’ee himself is not conversant with it.” And it is “really a great tragedy” that the many Afro-American Muslims themselves are “divided into hundreds of water-tight compartments with no unity, or united platform or central leadership.” Dawah work remains “haphazard, irregular and without any planning.”

    The same disunity prevails among Muslim organizations generally. “There is no central leadership and no common platform.” But “this is the only process through which the Muslims of America can emerge as a united political entity in the body politic of America.” Further, this platform requires a strategy, one designed for American circumstances. And it needs money, which must come not from poverty-stricken African-American ex-convicts but from Muslim immigrants, who “are mostly affluent and can meet the target” of $25-$30 million per year, which would finance radio and television networks, schools, media, and research centers “to attract talented Muslim youth in and outside America to compete with the secular world.” Therefore, the African-American Muslim communities, who have the population numbers, and the mostly Arabian immigrants, who have the wealth, must combine in one Muslim Community of America.

    But these organizational and financial issues pale before “the main cause of Muslims’ failure to come forward and meet the obligation lying on their shoulders”: “lack of vision.” “A Muslim has no place in this world until he undertakes what he is raised for in this world as a Khairal Ummah, the Best of Nations.” So long as Muslims “cut themselves off from the Qur’an,” or “study it in an academic fashion,” they will never found the Deen. Only when professing Muslims practice Qur’anic teachings will the deeper meanings of Allah’s message be revealed to them. Siddiqi insists that to know the Qur’an the believer must know it in ‘the Biblical sense’: intimately, in his heart, as a part of his inner self. “There have always been thousands and thousands of learned scholars of the Qur’an, Hadith, and Fiqh throughout the last thirteen hundred years, but they could not establish Allah’s Deen anywhere in this world in its totality after the first four Caliphs and Umar Bin Abdul Aziz.” “As a result, the Qur’an could not present itself as a practical reality to these learned scholars as it was to the Prophet Muhammad and his companions,” who were “cavaliers of the Islamic Movement, not academicians of the Qur’an.” Such scholars are “perhaps” more sinful in God’s eyes “than one who is ignorant,” as they have no excuse. This is why, regardless of success or failure, which depend upon God’s Will, American Muslims must formulate a plan for action, where they are, now.

    What kind of person is a true Da’ee? “Islam is a way of life.” The Da’ee must understand “the fundaments” [sic] of that regime, its basic doctrines including the sovereignty of Allah, “Islamic social justice,” “the concept of Jihad and its necessity,” and “the principle of excellence on the basis of piety.” He will find these principles stated in the Qur’an but also embodied in Muhammad’s person and way of life. He must carefully assess the existing American regime, with its expanding economy under “the goddess of capitalism,” its notion of human, popular sovereignty, and the results of those features: “Gradually, America is growing into a colony of vested interests and international Zionists’ caprices and intrigues.” Moreover, “individual liberties and personal freedom have been distorted to serve only as a means to create lust for sex in the society, promote pornography and adopt perverted attitudes and violence in human relations.” As a result of these converging forces of corruption, popular sovereignty “has been eroded to an alarming state,” “women are challenging the authority of men’s domination in every field, resulting in the emergence of a society of unisex at an accelerated pace,” and “personal freedom amounts to a free license to dismantle the moral values and ethical standards of the society both by individuals and the media.” American material, military, and political greatness remains, but it is “ideologically and morally very poor.” Only Islam can truly enrich it.

    In terms of geopolitics, American dominance has bred ‘Third-World’ resentment. As a result, “an economic war is imminent.” In Europe, the European Union, along with Japan, will also challenge the United States, as will the Soviet bloc. (In a later note, Siddiqi admits that “Russia has disintegrated and has become the ‘sick man’ of Europe,” but correctly insists that “still it has the potentials [sic] to play a third-party role in world politics in collaboration with China, North Korea and Cuba.”) In the Middle East, the state of Israel “is a smoldering bomb,” currently the instrument of U.S. policy but with dreams “of dominating” the region, with American partnership “in this dirty game.” As it also seeks to please “the so-called moderate Arabs,” America has “landed in a quagmire.” As for Latin America, “the people need some superb ideology to give redress to their problems and peace to their mind”; once again, Islam is the answer. So, because “America, in the present context of the world, has the potential to remain a superpower for many decades to come,” the Da’ee must continue to study world events, seeking opportunities to advance the cause.

    Still, mere knowledge will never suffice. If a Da’ee “is weak in character, if he lacks in manifesting cool temperament, palatable manners, the requisite amount of devotion and dedication to the cause, if he is short of patience and perseverance against provocations and if he is devoid of determination to carry out the mission against all odds, he will not be able to meet the challenge.” “No amount of knowledge can bridge this gap.” Such character “cannot be produced in the cozy atmosphere of the drawing room or sitting in a corner like a hermit or Sufi and keeping aloof from the world and its happenings.” An umbrella organization of American Muslims must arrange for Dawah field work, whereby the Da’ee will get out and deal with people, deepening his knowledge of Islam by his practice of it among the American people—conversing, organizing, taking care to model the character type of the man under Allah’s regime. Without such practice, it is “rather impossible to generate the sterling qualities of heart and mind and acquire the required amount of personal endurance” necessary to advance that regime politically. Social work, service to the needy, will “gain recognition” for Muslims, “generate the goodwill of the masses and muster the support of the electorate.” “The process of learning, practicing and preaching will go together.”

    Siddiqi emphasizes the importance of distributing “Dawah literature” in the United States, a point made to him by Mawlana Mawdudi himself in a conversation at the end of Mawdudi’s life, after Mawdudi had emigrated to Buffalo, New York, where his son practiced medicine. “We have to produce our own literature in the American perspective,” tracts that register “the moods, the temperament, the psychology of the people and the needs of this country.” Also, the Islamic organization should not depend on immigrants (such as himself) to lead the movement here. The immigrants “should remain in the background,” training American converts to serve as the spokesmen. And of course the Da’ee must pray to Allah, asking to avail himself of Allah’s power.

    Because “America is a predominantly secular cum permissive society” in which “people are mostly dominated and dictated by their physical urges,” “slaves to their physical instincts,” and governed by “a secular, rigid constitution that guarantees unrestricted personal freedom to act, to speak, to behave, to assemble, to move around and enjoy life the way they desire”; and because the slogan “In God We Trust” “is simply a slogan coined by their forefathers,” with “no bearing on their living condition,” religion “is nowhere visible in the life pattern of the people,” in what Aristotle calls their Bios ti; and because “the Judeo-Christian God is powerless, keeps away from the people’s lives, and has nothing to do with their social, economic and political activities” (“except in very small pockets of conservative Jews and Christians”); “for all practical purposes, America is a Godless society and purely materialistic in every walk of life.” This being so, America resembles the kind of society Muhammad encountered in seventh-century Arabia, “the society of ignorance (Jahiliyah).” It is the society of modern ignorance. Therefore, “the basic principle for the presentation of Dawah Ilallah should naturally be the same: to call upon the people to obey God and accept Muhammad as God’s messenger.”

    But although America is a free society by habit and by law, “when the question of Islam arises, centuries-old prejudices come in the forefront,” such as “the distorted image of so-called terrorism” in the Middle East. Why “so-called”? Siddiqi doesn’t say, but it is likely that he regards acts of violence committed by devout Muslims as legitimate acts of jihad. To correct such ‘distortions,’ the Da’ee must “proceed patiently, cautiously and diligently with Hikmah (wisdom) in the presentation of Islam to the American people.” This will be possible because both God and prophethood are familiar to Jews and Christians. The Christian understanding of God as one Godhead, three Persons, should be challenged as polytheistic or else illogical. “The concept of Trinity appears to be unreasonable and self-contradictory”; the Da’ee must argue against “the dogma of the ‘human-God’ of Christendom, innovated by the Jewish conspiracy against Prophet Jesus.” It is noteworthy that Siddiqi intends a rational argument (aimed initially at priests and pastors). Siddiqi optimistically contends that “there is no reason why positive response will not be forthcoming, at least from the moderate Christians”; as for the immoderate ones, they can be made “shaky in their beliefs” in this way. [1]

    Alongside this deployment of reason (or sophistry, as the case may well be), the Da’ee should also invoke the passion of fear. This is the approach not so much to priests and pastors as to the people. Tell the people: You will be held accountable before God on the Day of Judgment. Better get this right, or else. “The fear of God and the fear of accountability in the Hereafter will keep the people on the path of righteousness.” Heed the prophets, including Jesus and Muhammad—especially the latter, since “when a new prophet came the previous code of conduct was automatically canceled,” and “it is essential for every man and woman on earth to follow the latest Guidance brought by the last messenger of God,” namely, Muhammad. For these reasons, the people “have no choice but to accept the Qur’an as the only Guidance now available to mankind to follow.” [2]

    “The Christian community of America will need a special approach to make them understand their misguided concept about Jesus.” On this, Siddiqi logic-chops thusly: God created Adam with no father or mother, Eve without a mother. Christians don’t “ascribe the attributes of God to either one of them. How then can they profess Jesus to be the Son of God? It is illogical and quite absurd.” The syllogism, such as it is, amounts to this: Adam had no father; Adam was not God (he did of course have some of the “attributes” of God, but let that pass); therefore, Jesus cannot be the Son of God. But (obviously) if Jesus is not the same kind of being as Adam, why not? Somewhat more seriously, Siddiqi then claims that ‘making’ Jesus into a human-God “is clear idolatry,” inasmuch as “making partners with God is a sin,” and an unforgivable one at that. But if Jesus was fully God and fully man at the same time, this is self-contradictory only if He was fully God and fully man in the same respect. The designation of the second Person of the Trinity as the Son of God indicates otherwise.

    Once Christians (and presumably “shaky” Jews) have had their convictions de-centered, they will be prepared to receive the message of the Messenger as the only way out of their predicament. Verbal argumentation is one thing, but printed tracts and pamphlets are indispensable for this “important task [that] has been neglected so far by the Muslim organizations of America/Europe due to lack of vision.” Islamic publications shouldn’t be restricted to things aimed at the masses. A magazine “to serve as a vehicle to carry out the message of Islam to the intellectuals of the society presenting an alternative system of life against what is in practice today” will “prepare the ground” for “the better educated and informed segments of the society” to “accept Islam as their way of life.” Congruently, “For Dawah work in the universities and colleges, it must be pointed out that there should be more concentration on the teachers than the students, or equally on both.” The teachers are “free, they have the time and they exert a lot of influence upon the students. If they are convinced about Islam as a way of life, they can motivate their students to that effect in great numbers. Teachers will therefore be the special Dawah targets of the Islamic Movement.”

    In all these efforts, “the Da’ee must know the inhabitants, to whom the message is to be delivered, well.” “Their mood and temperament, their habits and tastes, their likes and dislikes, their fields of interest, the qualities of their character”—in sum, the ethos of the regime—must be thoroughly understood. “The job of a Da’ee is like that of a doctor,” diagnosing and prescribing to his patient. Once cured of his spiritual ills, the patient may himself become a doctor, or at least a medical paraprofessional, a partner in the task of Islamification. As the cure in its initial stages will be verbal, the doctor of Dawah must be alert to “the situation and timing” of his presentation, waiting until “the contactee is in a receptive mood,” changing the subject if “an addressee is found yawning or restless or absentminded or [un]interested.” And of course “when the attitude of obstinacy comes into the dialogue or the addressee becomes adamant,” “refus[ing] even to listen to logic,” the Da’ee should retreat with the intention of “meet[ing] again at some future time.” “In no way should he hurt the feelings of his contactees.” “Neither force nor any coercive method is to be applied while presenting Dawah to non-Muslims.” In America, at least. “Pray to Allah for the opening of the heart of the contactee and beg from Him to present the message in soft but effective language and in a palatable manner.”

    Proselytizing can also take the form of action. “Every worker of the Islamic Movement, through service to the people in his neighborhood and vicinity, should acquire prominence as a person to be sought after in time of need,” not for the sake of “fame or reputation” but to “earn the sympathy of the people for the sake of Allah and then go deeper into the society for Dawah work.” For this, the elderly—many of them “sick or incapacitated and confined to homes or elderly people care centers”—are “a useful electorate” and a rich potential source of community outreach, if converted. At the other end of the spectrum, runaway children, foster children, abused children, and other needy youngsters will respond to “fatherly guidance” from the Da’ee. Model foster homes and hostels in which Islam is taught will bring this opportunity to fruition, paralleling the care facilities envisioned for the elderly. Finally, “counseling service to battered husbands and battered wives will ultimately bring them nearer to Islam,” as “they will all feel obligated to the teachings of Islam that changed their lives and made their matrimonial life happier and rejuvenated.” All such services can help to effect regime change, “bring[ing] before this nation Islam as a way of life” and counteracting depraved sexual behavior by “creating hate/contempt against the existing lifestyle of the people” of America—which, as he has already contended, has sunk deep into sinfulness.

    Siddiqi concludes with a personal postscript. “In 1982, I went around the world and visited many countries, with the sole objective of finding out the place where an effective Islamic Movement could be developed in the present context of the world in order to make Allah’s Deen dominant somewhere on this earth.” He found that “America is the most suitable place in the Western hemisphere for that glorious end to be started.” But it has barely begun. “A serious Islamic Movement for the establishment of Allah’s Deen is yet to emerge in the body politic of the U.S.A.” He calls for existing Muslim organizations to “take up the task of Dawah Ilallah along the lines suggested in this book,” to unite without delay in working toward that end. Among those who seem to have done so was Mohamed Akram al-Adouni, then a member of the Board of Directors of the American chapter of the Muslim Brotherhood, and at this writing the General Secretary of the Al Quds International Forum, which finances the Hamas organization in Gaza. In “An Explanatory Memorandum on the General Strategic Goal for the Group in North America,” published in 1991, Akram praised “the brothers in the Islamic Circle”—Siddiqi’s organization—for their “attempt to reach a unity of merger” with other like-minded organizations. 
In Akram’s language, the purpose of such an organization does indeed resemble Siddiqi’s stated intention, albeit expressed more tartly: workers “must understand that their work in America is a kind of grand Jihad in eliminating and destroying the Western civilization from within and ‘sabotaging’ its miserable house by their hands and the hands of the believers so that it is eliminated and God’s religion is made victorious over all other religions.” Whereas Siddiqi emphasizes the rhetorical content and methods of Dawah, Akram focuses more on the need for organization—the beginnings of the politeia of the new regime, starting with Islamic Centers “in every city.” “The center ought to turn into a ‘beehive’ which produces sweet honey,” a civil-social political society in itself, offering education, recreation, social activities, and headquarters for political campaigns. The role of the Islamic Center “should be the same as the ‘mosque’s’ role during the time of God’s prophet… when he marched to ‘settle’ the Dawah in its first generation in Medina.” In modern times, such organizational tasks were first begun by Hassan al-Banna, “the pioneer of the contemporary Islamic Dawah” and founder of the Muslim Brotherhood in Egypt in the decades before the Second World War. In America today, “the big challenge that is ahead of us is how to turn… seed or ‘scattered’ elements into comprehensive, stable, ‘settled’ organizations that are connected with our Movement and which fly in our orbit and take orders from our guidance.” Larger and better-funded than Siddiqi’s Islamic Circle of North America, the Brotherhood was indeed better situated to effect Siddiqi’s program.

    Controversy remains on whether the American organization heeded Akram’s memorandum. But why would it not?


    Notes

    1. As Christian theologians from Augustine forward have observed, the Trinitarian understanding of God involves no contradiction if the three Persons are understood as Personae of the same God or “Godhead,” to use the preferred term of these thinkers. Otherwise, it would be impossible to ‘have faith’ in the existence of such a God, since one cannot have faith in any person or any thing who or which is inconceivable. If you tell me to accept on faith that you are holding a square circle in your closed hand, at most I can believe that you are holding something you call a square circle; because I can’t conceive of such a thing, I cannot ‘have faith’ that you have a real square circle in your hand, not knowing what you could possibly be talking about.
    2. In fact, Jesus tells his Jewish disciples that not one jot or tittle of the Jewish law has been suspended for them. He does not require non-Jewish converts to take up obedience to that law, but that is not a cancellation of the prophecies already heard by the Israelites insofar as they were directed exclusively to them.

    Filed Under: American Politics
