CHAPTER FIVE
THE CONSTITUTIONALISM OF THE UNITED STATES is rather old hat these days. Since the country has the oldest written national constitution in the world, there doesn’t seem to be much that is new and peculiar about it. Certainly the fact that the American Constitution is written is not unusual. Most constitutions these days are written, many of them during the past three decades. When people talk about a country like Afghanistan or Iraq getting a new constitution, they assume it will be a written one; writing one out on paper now seems to be the only way to create a constitution. (I should point out that most newly written constitutions are a good deal longer than the eight thousand words of the American Constitution. Actually, the U.S. Constitution today has become as “unwritten” as those of Israel or Great Britain.)
It used to be that America’s separation of powers was unusual, if not unique, among governments. Not anymore. Lots of governments now have independent judiciaries and presidents who are not members of their legislatures. Still, the parliamentary system of cabinet responsibility to the legislature dominates in the world, and so the American system of separation of powers remains distinctive.
There was a time when judicial review was peculiarly American, but no longer. Many states in the world now have judiciaries that review legislation and have the authority to declare statutes null and void. (Parenthetically, however, it is important to point out that many of these courts, unlike the American courts, are specialized constitutional courts.) Even the English courts, which have always been respectful of parliamentary sovereignty, have recently begun trying to use the European Convention on Human Rights as a basis for interpreting or limiting parliamentary statutes.
Foreign courts now routinely deal with the same issues that American courts deal with—right-to-life, freedom of speech, and equality. In fact, foreign courts sometimes critically scrutinize and use American court decisions in reaching their own decisions. Some judiciaries in the European states now declare more statutes void than does the U.S. Supreme Court. Many of these foreign courts, such as the Israeli Supreme Court, even consider cases justiciable that the American courts have avoided, especially those dealing with military matters. It is very unlikely that the U.S. Supreme Court would have taken on a case similar to one concerning the amount of food being provided to those holed up in the Church of the Nativity in Bethlehem while surrounded by the Israeli army. But the Israeli Supreme Court did, even without a written constitution!
Americans used to be known for their obsession with rights, but that obsession now seems to be shared more and more by other countries in the developed world. Most of the new constitutions of the past three decades have a core of basic rights and liberties to which judges can refer in their court decisions. Even the idea of separation of church and state, which Americans pioneered, has spread to other nations struggling with religious diversity. Federalism may have been a modern American invention, but it has been much copied. Indeed, federalism is so common throughout the world today that America’s example is scarcely illuminating anymore. America may, in fact, be the most centralized of the many federal states and thus the least interesting model.
Despite all these modern similarities between the U.S. Constitution and other national constitutions, however, there are important differences. To better understand those differences and perhaps to make some sense of Americans’ habitual ignorance of other constitutions in the world, it may be helpful to look at the origins of America’s constitutionalism.
The first thing to emphasize is the fact that the Founders who created America’s constitutional structure at the end of the eighteenth century were Englishmen with a strong sense that they were heirs of the English tradition of freedom. Although England had become corrupted during the eighteenth century, Americans believed that at one time it had been the dominant source of liberty and popular government in the world, its constitution celebrated by liberal intellectuals everywhere. Thus it was fitting that Anglo-Americans (the title most Europeans gave to Americans in the early nineteenth century) should become the beneficiaries of this popular tradition of English rights and English liberty. Americans thought that the torch of English freedom had been passed to them and that they had a responsibility to make it shine brighter and more enduringly than the English had been able to do.
The Americans were intent on avoiding the corruption they believed plagued the English constitution, and that meant that they had to deviate from the English constitutional tradition in a number of ways. In fact, comparing the Americans’ constitutional developments at the end of the eighteenth century with the English constitutional system that they broke away from can help to illuminate just what is distinctive about American constitutionalism.
The most obvious difference between eighteenth-century English and American constitutionalism was the American Revolutionaries’ conception of a constitution as a written document, as a fundamental law circumscribing the government. Before the American Revolution, a constitution was rarely distinguished from the government and its operations. Traditionally in English culture, a constitution referred both to the way the government was put together, or constituted, and to the fundamental rights the government was supposed to protect. The eighteenth-century English constitution was an unwritten mixture of laws, customs, principles, and institutions.
By the end of the Revolutionary era, however, the Americans’ idea of a constitution had become very different. A constitution was now seen to be no part of the government at all: it was a written document distinct from and superior to all the operations of government. A constitution was, as Thomas Paine said in 1791, “a thing antecedent to a government; and a government is only the creature of a constitution.” And, said Paine, it was “not a thing in name only; but in fact.”
For Americans, a constitution was something fundamental. It was a written document, possessed by every family, and carried about like the Bible to be quoted and cited article by article. Such a constitution could never be an act of the legislature; it had to be the act of the people themselves, declared James Wilson—one of the principal framers of the Constitution of 1787—and “in their hands it is clay in the hands of a potter; they have the right to mould, to preserve, to improve, to refine, and to finish it as they please.” If eighteenth-century Britons thought this American idea of a constitution was, as the British writer Arthur Young caustically suggested in 1792, “a pudding made from a recipe,” the Americans had become convinced that the English had no constitution at all.
As much as we now take for granted this idea of a constitution as written fundamental law, the idea was not easily arrived at. The American colonists began the debate with Great Britain in the 1760s thinking about constitutional issues in much the same way as their fellow Britons. Like the English, they believed that the principal threat to the people’s ancient rights and liberties had always been the prerogative powers of the king—those vague and discretionary but equally ancient rights of authority that the king possessed in order to carry out his responsibility for governing the realm. Indeed, the eighteenth-century English saw their history as essentially a struggle between these ancient conflicting rights—between power and liberty, between an encroaching monarchy on one hand and the freedom-loving people on the other. Time and again in the colonial period, Americans, like their fellow Englishmen at home, had been forced to defend themselves against the intrusions of royal prerogative power. They relied for their defense on their colonial assemblies, their rights as English subjects, and what they called their ancient charters.
In the seventeenth century, many of the colonies had been established by crown charters—corporate or proprietary grants made by the king to groups such as the Massachusetts Puritans or to individuals such as William Penn and Lord Baltimore to found colonies in the New World. In subsequent years these charters gradually lost their original meaning in the eyes of the colonists and took on a new importance, both as prescriptions for government and as devices guaranteeing the rights of the people against their royal governors. In fact, the whole of the colonial past was littered with such charters and other written documents of various sorts to which the colonial assemblies repeatedly appealed in their squabbles with royal power.
In turning to written documents as confirmation of their liberties, the colonists acted no differently from other Englishmen. From almost the beginning of their history, the English had continually invoked written documents and charters in defense of their rights against the crown’s power. “Anxious to preserve and transmit” their rights “unimpaired to posterity,” declared a Connecticut clergyman on the eve of the Revolution, the English people had repeatedly “caused them to be reduced to writing, and in the most solemn manner to be recognized, ratified and confirmed,” first by King John, then Henry III and Edward I, and “afterwards by a multitude of corroborating acts, reckoned in all, by Lord Cook, to be thirty-two, from Edw. 1st. to Hen. 4th. and since, in a great variety of instances, by the bills of right and acts of settlement.” All of these documents, from the Magna Carta to the Bill of Rights of the Glorious Revolution of 1688–1689, were merely written evidence of those fixed principles of reason from which the English believed their constitution was derived.
Although the eighteenth-century English talked about the fundamental law of the English constitution, few of them doubted that Parliament, as the representative of the nobles and people and as the sovereign lawmaking body of the nation, was the supreme guarantor and interpreter of these fixed principles and fundamental law. Parliament was in fact the bulwark of the people’s liberties against the crown’s encroachments; it alone defended and confirmed the people’s rights. The Petition of Right, the Habeas Corpus Act, and the Bill of Rights were all acts of Parliament, statutes not different in form from other laws passed by Parliament.
For the English, therefore, as William Blackstone, the great eighteenth-century jurist, pointed out, there could be no distinction between the “constitution or frame of government” and “the system of laws.” All were of a piece: every act of Parliament was part of the constitution and all law, both customary and statute, was thus constitutional. “Therefore,” concluded the English theorist William Paley, “the terms constitutional and unconstitutional, mean legal and illegal.”
Nothing could be more strikingly different from what Americans came to believe. Indeed, it was precisely on this distinction between “legal” and “constitutional” that the American and English constitutional traditions diverged at the time of the Revolution. During the 1760s and 1770s the colonists came to realize that although acts of Parliament, like the Stamp Act of 1765, might be legal—that is, in accord with the acceptable way of making law—such acts could not thereby be automatically considered constitutional, or in accord with the basic principles of rights and justice that made the English constitution what it was. It was true that the English Bill of Rights and the Act of Settlement of 1701 were only statutes of Parliament, but surely, the colonists insisted, they were of “a nature more sacred than those which established a turnpike road.”
Under this kind of pressure, the Americans came to believe that the fundamental principles of the English constitution had to be lifted out of the lawmaking and other institutions of government and set above them. “In all free States,” said Samuel Adams in 1768, “the Constitution is fixed; and as the supreme Legislature derives its Powers and Authority from the Constitution, it cannot overleap the Bounds of it without destroying its own foundation.” Thus in 1776, when Americans came to make their own constitutions for their newly independent states, they inevitably sought to make them fundamental and write them out in documents. These state constitutions of 1776–1777, which were immediately translated into several European languages, captured the imagination of the enlightened everywhere.
It was one thing, however, to define the constitution as fundamental law, different from ordinary legislation and circumscribing the institutions of government; it was quite another to make such a distinction effective. Since the state constitutions were created by the legislatures, they presumably could also be changed or amended by the legislatures. Some of the constitution makers in 1776 realized the problem and tried to deal with it. Delaware provided for a supermajority, five-sevenths of the legislature, when changing its constitution. Maryland said that its constitution could be amended only by a two-thirds vote of two successive legislatures. Most states, however, simply enacted their constitutions as if they were regular statutes. Clearly, everyone believed that the constitutions were special kinds of law, but no one knew quite how to make them so.
In the years following the Declaration of Independence, Americans struggled with this problem of distinguishing fundamental from statutory law, none more persistently than Thomas Jefferson. In 1779 Jefferson knew from experience that no legislature “elected by the people for the ordinary purposes of legislation only” could restrain the acts of succeeding legislatures. Thus he realized that to declare his great Statute for Religious Freedom in Virginia to be “irrevocable would be of no effect in law; yet we are free,” he wrote into his 1779 bill in frustration, “to declare, and do declare, that . . . if any act shall be hereafter passed to repeal the present [act] or to narrow its operation, such act will be an infringement of natural right.” In effect, he was placing a curse on the future legislators of Virginia.
But Jefferson realized that such a paper declaration was not enough and that something more was needed. By the 1780s both he and his friend James Madison were eager “to form a real constitution” for Virginia; the existing one, they said, was merely an “ordinance” with “no higher authority than the other ordinances of the same session.” They wanted a constitution that would be “perpetual” and “unalterable by other legislatures.” The only way that could be done was to have the constitution created, as Jefferson put it, “by a power superior to that of the legislature.” By the time Jefferson came to write his Notes on the State of Virginia in the early 1780s, the answer had become clear. “To render a form of government unalterable by ordinary acts of assembly,” wrote Jefferson, “the people must delegate persons with special powers. They have accordingly chosen special conventions or congresses to form and fix their governments.”
Massachusetts in 1780 had shown the way. It had elected a convention specially designated to form a constitution and had then placed that constitution before the people for ratification. When the Philadelphia Convention drew up a new constitution for the nation in 1787, it knew what to do. It declared that the new Constitution had to be ratified by the people meeting in state conventions called for that purpose only. Constitutional conventions and the process of ratification made the people themselves the actual constituent power. As enlightened Europeans realized, these devices were some of the most distinctive contributions the American Revolution made to world politics.
But these were not the only contributions. With the conception of a constitution as fundamental law immune from legislative encroachment more firmly in hand, some state judges during the 1780s began cautiously moving in isolated cases to impose restraints on what the assemblies were enacting as law. In effect, they said to the legislatures—as George Wythe, judge of the Virginia supreme court, did in 1782—“Here is the limit of your authority; and hither shall you go, but no further.” These were the hesitant beginnings of what would come to be called judicial review—that remarkable practice by which judges in the ordinary courts of law have the authority to determine the constitutionality of acts of the state and federal legislatures.
The development of judicial review came slowly. It was not easy for people in the eighteenth century, even those who were convinced that many of the acts of the state legislatures in the 1780s were unjust and unconstitutional, to believe that unelected judges could set aside acts of the popularly elected legislatures; this seemed to be an undemocratic usurpation of power. But as early as 1787, James Iredell, soon to be appointed an associate justice of the newly created Supreme Court of the United States, saw that the new meaning Americans had given to a constitution had clarified the responsibility of judges to determine the law. A constitution in America, said Iredell, was not only “a fundamental law” but also a special, popularly created “law in writing . . . limiting the powers of the Legislature, and with which every exercise of those powers must necessarily be compared.” Judges were not arbiters of the constitution or usurpers of legislative power. They were, said Iredell, merely judicial officials fulfilling their duty of applying the proper law. When faced with a decision between “the fundamental unrepealable law” made specially by the people, and an ordinary statute enacted by the legislature contrary to the constitution, they must simply determine which law was superior. Judges could not avoid exercising this authority, concluded Iredell, for in America a constitution was not “a mere imaginary thing, about which ten thousand different opinions may be formed, but a written document to which all may have recourse, and to which, therefore, the judges cannot wilfully blind themselves.”
Although Iredell may have been wrong about the number of different opinions that could arise over a constitution, he was certainly right about the direction judicial authority in America would take. The way was prepared for Chief Justice John Marshall’s decision in Marbury v. Madison in 1803 and the subsequent but bitterly contested development of the practice of judicial review—a practice that Europeans soon became aware of.
Unlike the European and Israeli constitutional courts, the American federal courts are not special courts with constitutional responsibilities separate from ordinary law adjudication. This is an important point of distinction whose implications are not easy to spell out. Because the European and Israeli constitutional courts are so special, they are usually protected from partisanship by elaborate mechanisms of appointment. The Israeli system of appointment is indirect and largely removed from the politics of the Knesset. As we know only too well from recent events, Americans have no such indirect method of appointing judges. Thus the filibustering of appointments by the U.S. Senate becomes, in effect, a crude supermajority requirement for federal court appointments. Of course, this is simply a measure of how significant judges have become in our constitutional system.
As important as the idea of a written constitution distinguishable from ordinary statute law was in the eighteenth century, however, it was not the most significant constitutional deviation the Americans made from their inherited English traditions. More important in distinguishing American constitutionalism from that of the English, and most other democratic nations in the world today, was the idea of separation of powers.
Montesquieu, in his Spirit of the Laws, had praised the English constitution for separating the executive, legislative, and judicial powers of government. But Montesquieu did not understand precisely what was happening to the English constitution in the eighteenth century. The legislature (that is, Parliament) and the executive (that is, the king’s ministry) were in fact becoming blurred as England stumbled into what eventually became its modern parliamentary system of responsible cabinet government. The key to the British system is the fact that the ministers of the crown are simultaneously members of Parliament. It was this linkage, which the American colonists labeled “corruption” and David Hume called “influence,” that the Americans in 1776 were determined to destroy.
Thus, in their state constitutions of 1776, they excluded from their assemblies all members of the executive branch, so that, as the New Jersey constitution declared, “the legislative department of this Government may, as much as possible, be preserved from all suspicion of corruption.” This separation was repeated in the federal Constitution in Article I, Section 6—preventing the development of responsible cabinet government in America. In this respect, at least, American constitutionalism has not been influential at all, for most democratic governments in the world have tended to follow the British parliamentary model of government.
But beneath these obvious differences between the constitutionalism of Great Britain and of America are even more fundamental deviations that help to make America’s conception of government and politics different from that of nearly every other nation in the world. These differences began with the concept of representation.
During the debates over the nature of the empire in the 1760s and 1770s, the British vainly tried to justify Parliament’s taxation of the colonies. They argued that the American colonists, like Britons everywhere, were subject to acts of Parliament through a system of what they called “virtual” representation. Even though the colonists, like “nine-tenths of the people of Britain,” did not in fact choose any representative to the House of Commons, they were, the British said, undoubtedly “a part, and an important part of the Commons of Great Britain: they are represented in Parliament in the same manner as those inhabitants of Britain are who have not voices in elections.”
To most of the English at home, this argument made a great deal of sense. Centuries of history had left Britain with a confusing mixture of sizes and shapes of its electoral districts. Some of the constituencies were large, with thousands of voters, but others were small and more or less in the pocket of a single great landowner. Many of the electoral districts had few voters, and some so-called rotten boroughs had no inhabitants at all. One town, Dunwich, continued to send representatives to Parliament even though it had long since slipped into the North Sea. At the same time, some of England’s largest cities, such as Manchester and Birmingham, which had grown suddenly in the mid-eighteenth century, sent no representatives to Parliament. The British justified this hodgepodge of representation by claiming that each member of Parliament represented the whole British nation and not just the particular locality he supposedly came from. Parliament, as Edmund Burke said, was not “a congress of ambassadors from different and hostile interests, which interests each must maintain, as an agent and advocate, against other agents and advocates; but Parliament is a deliberative assembly of one nation, with one interest, that of the whole.” Requirements that members of Parliament (MPs) reside in the constituencies they represented had long since been ignored, and residency is of course still not required of MPs today. According to this idea of virtual representation, people were represented in England not by the process of election, which was considered incidental to representation, but rather by the mutual interests that members of Parliament were presumed to share with all Britons for whom they spoke—including those, like the colonists, who did not actually vote for them.
The Americans strongly rejected these British claims that they were “virtually” represented in the same way that the nonvoters of cities like Manchester and Birmingham were. They challenged the idea of virtual representation with what they called “actual” representation. If the people were to be properly represented in a legislature, the colonists declared, not only did the people have to vote directly for the members of the legislature, but they also had to be represented by members whose numbers were proportionate to the size of the population they spoke for. What purpose is served, asked James Otis of Massachusetts in 1765, by the continual attempts of the English to defend the lack of American representation in Parliament by citing the examples of Manchester and Birmingham, which returned no members to the House of Commons? “If those now so considerable places are not represented,” said Otis, “they ought to be.”
What was meaningful in England made no sense in America. Unlike in England, electoral districts in the New World were not the products of history that stretched back centuries, but rather were recent and regular creations that were related to changes in population. When new towns in Massachusetts and new counties in Virginia were formed, new representatives customarily were sent to the respective colonial legislatures. This system of actual representation stressed the closest possible connection between the local electors and their representatives. Unlike the English, Americans believed that representatives had to be residents of the localities they spoke for and that people of the locality had the right to instruct their representatives. The representatives were to be in effect what Burke had said they should never be, ambassadors from their localities. Since Americans thought it only fair that their localities be represented more or less in proportion to their population, they wrote that requirement into their Revolutionary constitutions. In short, the American belief in actual representation pointed toward the fullest and most equal participation of the people in the process of government that the modern world had ever seen.
Because actual representation was based on the people’s mistrust of those they elected, Americans pushed for the most explicit and broadest kind of consent, which generally meant voting. The mutuality of interests that made virtual representation meaningful in England was in America so weak and tenuous that the representatives could not be trusted to speak for the interests of their constituents unless those constituents actually voted for them. Actual representation thus made the process of election not incidental but central to representation.
Actual representation became the key to the peculiarities of American constitutionalism and government. People wanted elected officials who were like them in every way, not only in ideas but also in religion, ethnicity, and social class. The people in Philadelphia in 1775 called for so many Presbyterians, so many artisans, and so many Germans on the Revolutionary committees. Already Americans were expressing the idea that the elected representatives not only had to be for the people, they also had to be of the people.
Mistrust became the source of American democracy. Indeed, the mistrust at times became so great that the representative process itself was brought into question, and mobs and extralegal associations emerged to claim to speak more authentically for the people than their elected representatives. The people, it seemed, could be represented in a variety of ways and in a variety of institutions. But no officials, however many votes they received, could ever fully represent the people.
Ultimately, these contrasting ideas of representation separated the English and American constitutional systems. In England Parliament came to possess sovereignty—the final, supreme, and indivisible lawmaking authority in the state—because it embodied the whole society, all the estates of the realm, within itself, and nothing existed outside of it. In America, however, sovereignty remained with the people themselves, and not with any of their agents or even with all their agents put together. The American people, unlike the British, were never eclipsed by the process of representation.
When Americans referred to the sovereignty of the people, they did not just mean that all government was derived from the people. Instead, they meant that the final, supreme, and indivisible lawmaking authority of the society remained with the people themselves, not with their representatives or with any of their agents. In American thinking, all public officials became delegated and mistrusted agents of the people, temporarily holding bits and pieces of the people’s power out, so to speak, on always recallable loan.
It may be important to point out why the Constitutional Convention failed to include a bill of rights with the Constitution it drafted in 1787. At the end of the Convention one delegate proposed adding a bill of rights, but the motion was defeated by every state delegation. The rationale for not having one was that—unlike in England, where the crown’s prerogative power preexisted and had to be limited by a bill of rights—all power in America existed in the people, who doled out only scraps of it to their various agents, so no such fence or bill of rights was necessary. This was a bit too precious an argument for many, however, and Madison and other supporters of the Constitution eventually had to concede the need for amendments, the first ten of which became the Bill of Rights.
By thinking of the people in this extraordinary way, Americans were able to conceive of federalism—that is, the remarkable division of power between central and provincial governments. By creating two legislatures with different powers operating over the same territory—the Congress and the separate state legislatures—the Americans offered the world a new way of organizing government. In subsequent decades, nineteenth-century liberal reformers everywhere in Europe and Latin America, struggling to put together central governments in the face of strong local loyalties, appealed to the American example of federalism. German reformers in 1848 cited the American example in their efforts to build a confederation, and liberal reformers in Switzerland called the United States Constitution “a model and a pattern for the organization of the public life of republics in general, in which the whole and parts shall both be free and equal. . . . The problem,” they said, with more enthusiasm than accuracy, given America’s growing federal crisis that resulted in the Civil War, “has been solved by the new world for all peoples, states and countries.”
Only by conceiving of sovereignty remaining with the people could Americans make sense of their new constitutional achievements, such as the idea of special constitution-making conventions and the process of popular ratification of constitutions. In America the notion that sovereignty rested in the people was not just a convenient political fiction; the American people, unlike the English, retained an actual lawmaking authority. The English did not need conventions and popular ratifications to change their constitution because Parliament was fully and completely the people and the people did not exist politically or constitutionally outside of it, except at the moment of election.
Once election became for Americans the sole criterion of representation, it was natural to think of all elected officials as somehow representative of the people. As early as the 1780s, many Americans were referring to their elected senates as double representations of the people; some began claiming that their governors, because they were elected by all the people, were the most representative officials in the state. Soon all elected officials were being designated representatives of the people, and the term originally applied to the various “houses of representatives” in the state constitutions and the federal Constitution became an awkward reminder that Americans had once thought of representation as the English had: as confined to the lower houses of their legislatures.
The people inevitably included even judges as their various agents. When, in order to justify judicial review, Alexander Hamilton in The Federalist No. 78 referred to judges as agents of the people and not really inferior to the people’s other agents in the legislature, he opened up a radically new way of thinking of the judiciary. If judges were indeed the people’s agents, as many soon concluded, then rightfully they ought to be elected, especially since election had become the sole measure of representation. Consequently, it was only a matter of time before the states began electing their judges. Conceiving of judges as just another one of their agents perhaps helps explain why Americans eventually became so accepting of judicial review, including even having the Supreme Court decide who will be president.
Since in the American system the people were never fully embodied in government, all sorts of strange political institutions and practices could and did emerge. The primaries, referendums, processes of recall, and ballot initiatives introduced by Progressive reformers at the beginning of the twentieth century were only extensions of the ideas of popular sovereignty and actual representation created at the Founding of the United States. These efforts to reach beyond actual representation to some kind of pure democracy were based on popular mistrust of elected officials, as were the original ideas of actual representation. As one account of California politics in 1896 put it, the state had “only one kind of politics and that was corrupt politics. It didn’t matter whether a man was a Republican or Democrat. The Southern Pacific Railroad controlled both parties.”
In the past several decades the number of ballot initiatives in some of the western states has soared, to the point where they seem to rival the number of statutes passed by the legislatures. Ironically, the Southern Pacific Railroad, now just another special-interest group, in 1990 promoted a ballot initiative to issue billions in bonds in support of rail transportation. Although Oregon has had more ballot initiatives than California, California’s have become the most notorious. The recall of Governor Gray Davis and the election of Arnold Schwarzenegger as governor of California in 2003 are the kinds of popular actions not likely to be duplicated in any other developed democracy in the world today (with the possible exception of Switzerland). But once we grasp the peculiar American idea of the sovereignty of the people, based on a deeply rooted mistrust of all elected officials, these extraordinary political events begin to make some sense. Whether these efforts at direct democracy are sensible ways of running a modern democratic state, however, remains to be seen.
AFTERWORD TO CHAPTER FIVE
Like many of these collected essays, this one began as a lecture presented at a conference on constitutionalism at the University of Chicago Law School in January 2004. Although it has never previously been published, it highlights themes that have been part of my thinking and writing over the past half century.
The history of American constitutionalism in the eighteenth century is very important, if only because of recent events. Over the past two decades or so, sixty-nine countries—from the nations of post-communist Central and Eastern Europe, to South Africa, to Afghanistan and Iraq—have drafted constitutions. At the same time many other states have revised their constitutions on paper, and even the European Union has tried to get a written constitution ratified. Consequently, only a few states in the world are without written constitutions. Indeed, it is almost impossible for many people today to conceive of a constitution as anything but a written document. And it all essentially began with America a little over two centuries ago.
For more on the influence of American constitutionalism over the past two centuries, see the monumental study by George Athan Billias, American Constitutionalism Heard Round the World, 1776–1989: A Global Perspective (New York: New York University Press, 2009).
A German scholar, Horst Dippel, is in the midst of a huge project of editing and publishing all constitutions written between 1776 and 1860. Entitled Modern Constitutionalism and Its Sources, it will be an enormously helpful resource when completed.