Category Archives: history


Fifth of November: Untangling truth from Vendetta

Remember, remember, the Fifth of November.

It is fitting that American elections are held during the first week of November — the one time a year when we hear the name Guy Fawkes as we chat briefly about overthrowing governments and installing anarchy. We salivate over how great V for Vendetta is, and how we’d like to head our own utopian government. But, ironically, very few people actually know what happened on the Fifth of November that we’ve come to commemorate.

Sure, some are aware that November 5, 1605, was the day Fawkes tried to bomb the English Parliament, but far fewer know why — or, really, anything else about the man whose face has become a symbol for activists and anarchists everywhere. So, who was Fawkes? What did he fight for? What are we really celebrating? How did he become the symbol we know today? And how does the real Fawkes differ from the fictional character V?

At the turn of the 17th century, Europe was tearing itself apart. Increased literacy and education led to the questioning of traditional religion and its role in state politics. Protestantism was on the rise, and the rift in Christianity left many countries with factions feuding over religious dominance. Henry VIII of England wanted to marry his love, Anne Boleyn, but the Pope refused to annul the king’s marriage to Catherine of Aragon. To get around that complication, the king split from the Pope, who had once called Henry the “defender of the faith,” and the Roman Catholic Church entirely, declaring himself the head of an independent English (or Anglican) Church.

In the coming decades, each subsequent English monarch changed the official religion in some way, including reestablishing the supremacy of Rome for a time. In this environment of constantly shifting religious alliances, it was inevitable that many would take sides and fight for their version of the Christian faith. Under James I, Catholics had hoped to see a move toward greater religious tolerance. They were disappointed. Anglicanism continued to reign, and the Bible was translated into an authorized English version that bears the king’s name. But some English Catholics were no longer content to wait for the afterlife to see the Protestant leaders judged. And in 1605, one group of conspirators planned to retake the throne for Rome.

The Gunpowder Plot was the attempt by English Catholic subversives to bomb the House of Lords, theoretically leading to the death of King James and the installation of his young daughter, Elizabeth, as a Catholic queen. Fawkes, whom many imagine as an anarchist, was in fact among the conspirators hoping to establish a Catholic theocracy. Fawkes was put in charge of guarding the explosives, but he was captured, causing the coup to fizzle out before getting anywhere.

Fawkes was tortured, under the king’s orders, to compel the man to give up the names of his co-conspirators. While his resolve remained strong at first, the increasing brutality of the torture eventually broke Fawkes. To understand the full scale of the beating Fawkes received, one need only look at his signature before and after the torture. Fawkes and his cohorts were tried and sentenced to death, with the added humiliation of being hanged, drawn, and quartered. Fawkes, defiant to the end, evaded part of the punishment when he jumped and broke his neck, avoiding having his guts and testicles cut apart while still conscious.

Not long after the failure of the Gunpowder Plot, November 5 was declared a day of thanksgiving to celebrate the king’s survival. The holiday acquired the name of Bonfire Day because crowds would burn effigies of Fawkes and the Pope. Unsurprisingly, this worsened the bad blood between the Christian sects.

Guy Fawkes/Bonfire Day has evolved over the centuries and taken on new meaning as public opinions and sympathies have changed. For the first several centuries, the holiday was very obviously a celebration of the plot’s failure. When the monarchy did temporarily fall in the mid-17th century, the focus shifted from the survival of the king to the supremacy of Protestantism and parliamentary rule. Today, it is harder to pin down whether folks are celebrating the survival of English government or the idea of destroying it.

Either way, Fawkes became a symbol, with his name and likeness surviving long after his execution. In fact, the word “guy” is derived from the effigies of Guy Fawkes. The effigies were often made by children out of old clothing and called “guys.” The term became a pejorative for poorly dressed men, though it evolved in time to mean any male, losing its negative connotation.

Over two centuries after the failed plot, in 1841, William Harrison Ainsworth’s historical romance Guy Fawkes; or, The Gunpowder Treason was released, portraying Fawkes as a more sympathetic character. Fawkes continued to appear in new works over the next century, but none would do as much for the man’s image as Alan Moore and David Lloyd’s graphic novel of the 1980s, V for Vendetta.

V for Vendetta was Moore’s first attempt at writing a continuing, serialized story. The story, as written in the comic, is set in the dystopian future of 1997. English fascists have taken over the country following a devastating nuclear war. The state suppresses dissent, eliminates ethnic and other minorities, and broadcasts the government’s message every day. Essentially, the government is a cross between that portrayed in George Orwell’s 1984 and Adolf Hitler’s Germany.

The party in power, Norsefire, meets its match in the anarchist vigilante, V, who dons a cloak and Fawkes mask as he takes to the streets, killing men and women who have done great evil to him or to mankind. Unlike traditional heroes, V is not squeamish about killing; rather, death is all he seeks for his adversaries.

V is an ambiguous character with a shrouded backstory and even more mysterious morality. The vigilante is a victim of fascist concentration camps and experimentation. The experiments appear to have had an effect on his mind, possibly driving him mad, though he sees his mission with absolute clarity of purpose. V seems content with becoming a monster himself in order to combat the monsters of Norsefire. V strikes terror into the hearts of the fascists in order to inspire the masses and find vengeance.

While the true story of the Fifth of November can be seen as a dispute between Protestants and Catholics, or between conventional authority and subversive terrorism, V for Vendetta is about a battle between fascism and anarchy — two words with less than favorable reputations. Moore, an anarchist himself, crafts his hero around these principles and even takes time to dispel certain ideas of anarchism. In the novel, V explains that chaos is not the anarchist system; rather, a functional anarchist system would grant great freedom, not the looting and destruction that follows the immediate fall of government.

Moore also does much to make the reader question V’s methods. Through V, Moore and Lloyd raise some of the same difficult questions that the Gunpowder Plot itself raises, such as: When is violence — even murder — acceptable?

In the 2006 film adaptation of the graphic novel, the Wachowskis clung to the principle that killing in the name of freedom is acceptable — and, according to Moore, kept little else from the source material.

In an interview with MTV, Moore explained his problems with the film adaptation.

“It’s been turned into a Bush-era parable by people too timid to set a political satire in their own country,” Moore said. “In the film, you’ve got a sinister group of right-wing figures — not fascists, but you know that they’re bad guys — and what they have done is manufactured a bio-terror weapon in secret, so that they can fake a massive terrorist incident to get everybody on their side, so that they can pursue their right-wing agenda. It’s a thwarted and frustrated and perhaps largely impotent American liberal fantasy of someone with American liberal values [standing up] against a state run by neo-conservatives — which is not what V for Vendetta was about. It was about fascism, it was about anarchy, it was about [England].”

Moore’s criticism of the movie is accurate, but a bit closed-minded. While the graphic novel was about ’80s-era Thatcherism, the film was intentionally made to evoke images of the modern world. And, in fact, the film’s message had a greater impact than the original creators could have ever hoped. It made their hero and his signature mask into an icon of defiance. The visage of Fawkes, long burned in effigy with feelings of malice, has been adopted by groups such as Anonymous and the Occupy Movement as a symbol of their fight against oppression.

As Moore explained in an interview with The Guardian, “I suppose when I was writing V for Vendetta, I would in my secret heart of hearts, have thought: wouldn’t it be great if these ideas actually made an impact? So when you start to see that idle fantasy intrude on the regular world … It’s peculiar. It feels like a character I created 30 years ago has somehow escaped the realm of fiction.”

But if you are going to don a Guy Fawkes mask to make your political point, it is important to remember the differences between Fawkes and V.

Fawkes was fighting for Catholicism, not anarchy. It could be argued that the Gunpowder Plot was about religious liberty, but it could just as easily be said that Fawkes and the conspirators were looking to establish a different — but still oppressive — theocracy. Still, Fawkes works well as a symbol in opposition to the status quo.

Fawkes was not a sole man on a mission, but rather part of a team — and not even its leader. V, on the other hand, is as alone as a crusader can be. V actively tries to become the symbol that Fawkes became and chooses to use his legacy to change the minds of the world.

It is unlikely that Fawkes ever saw himself as the kind of progressive V represents. Rather, he was a reactionary, fighting for things to return to the way they were a century prior. Still, his act of defiance, which had been derided for centuries, has become an inspiration to all of those looking to fight against the machine. His spirit lives on in the progressive aspirations of the antihero of V for Vendetta, as the film advocates for racial equity, gender equality, religious tolerance, and even gay and transgender rights.


More than just Holocaust, Maus addresses fathers, sons

Batman, Superman, and Spider-Man’s parents are all dead. The superheroes live and fight in the memory of their fallen family members. Luke Skywalker grew up without a father, and upon finally meeting the man, he had to fight him to the death in order to save the galaxy. Mythological heroes, more often than not, have fathers who are literal gods, and the heroes must live constantly in the shadows of their superior parents. Whether it is movies, books, television, or mythology, “daddy issues” are among the most common traits of characters across all forms of fiction.

In some stories, this trope exists to allow the main character room to grow, as life without parents forces a young character to grow up quickly and take on a greater burden than would otherwise be expected of him. But the absent parent trope also exists as a reflection of societal realities: many children do grow up without one or more of their parents, but those who are lucky enough to have parents around also often find themselves at odds with those who raised them. Parents, reasonably, have expectations for their children and may assume their offspring will grow up a certain way. In turn, children often grow up seeing their parents as heroes, only to become disillusioned upon discovering their flawed humanity. And for many, even a father who is physically present is often emotionally distant. The absent father in fiction often hits too close to home.

So what happens when a boy aims to discover the true personalities of his parents? Will he be disappointed at their human failings, or proud of their surprising accomplishments? Is the generation gap bridgeable, or does the difference in time make it impossible for us to truly understand the world of our parents?

Maus is the story of Art Spiegelman’s attempt to understand his parents, specifically his father. Spiegelman’s work is a journey to discover the man who always felt distant and the mother who left him long ago.

Maus follows Spiegelman as he tries to learn about his father’s struggle during World War II. Vladek Spiegelman was a Polish Jew who survived the Holocaust through resourcefulness and intelligence, even using his skills and likability to make it out of Auschwitz alive. In the graphic novel dramatization of his father’s struggle, Art Spiegelman uses animals to create an extended metaphor, casting the Jews as mice and the Nazis as cats, with other nationalities filling out the animal kingdom.

Instead of simply telling his father’s story, Spiegelman tells about the journey to get the story out of his father — making himself a character in the book, whose own goal is the completion of the book of which he is a part. It sounds confusing, but it is actually an impressive storytelling device that makes the book more than just one man’s journey; it becomes the story of a man and his son.

The fictionalized Art wants to write a book about his father’s story but often finds himself at odds with the man who raised him. To Art, Vladek is insufferable for his numerous traits that likely developed during his time in Nazi Europe. Vladek is cheap, lies to get his son’s attention, and complains frequently, causing Art to voice his frustration by noting that his father is acting like the terrible stereotype of Jews prevalent in antisemitic thought. Vladek is not a distant father to Art, but an overbearing one, and someone to whom his son is unable to relate. But throughout the book, even while the character of Art doesn’t appear to be gaining any new insight into his father, the matter-of-fact writing about their interactions seems to indicate the real-life Art’s greater understanding of his father’s nature. Perhaps writing the book truly helped Art to discover Vladek, even though he was unaware of it at the time.

Chapters often begin with Art and Vladek speaking to one another, with Art growing frustrated as he tries to push his father toward talking about his life in World War II. Art learns about his long-dead brother and mother through his father’s narration, and Spiegelman transcribes his words in a way that makes the reader feel as though Vladek is speaking directly to them.

The narrative device used to set up the scenes of Nazi-occupied Poland allows the reader to better understand the humanity of the man who survived Auschwitz. We have all had a parent or grandparent whose quirks and pushiness have gotten on our nerves. Showing this frustrating side of Vladek allows us to relate while we also learn about his heroic triumphs. Vladek survived the concentration camps by using his skills as a worker and his knowledge of the English language. He also flashed his business savvy, often making valuable trades for the necessities of his survival. He even managed to keep his wife protected when they were separated by making friends with the right people. These survival traits also earn Vladek the grudging admiration of his son.

Art experiences a great deal of survivor’s guilt, knowing that he will never have to suffer the way his father did. He takes for granted all that he has been given and, during the time he interviews Vladek, is unable to relate to his father’s story. Art knows that he can never understand his father’s struggle but hopes to at least be able to retell it to others.

There is no point in the story where Art and Vladek reconcile their differences, but the writing makes it clear that Art did ultimately love and respect his father, especially after hearing about his struggle. It appears as though Spiegelman’s realization of who his father was only came about after he began writing Maus. While the story ultimately exists to discuss the horrors of the Holocaust, Maus also does a great job tackling a common issue that is rarely discussed in a real way. Hopefully, those who read the tale will be able to learn its lessons and work harder to understand the trials and quirks of different generations.


Korra draws parallels to real-world march to war

In a recent Nerd/Wise article, I defended the medium of animation as art on par with cinema and modern television dramas by pointing out the incredible social commentary in the first season of Legend of Korra. The program, however, was originally made to be a miniseries, and thus fit an entire story arc into only a dozen episodes, leaving out the filler that was more prevalent in its predecessor, Avatar: The Last Airbender.

Korra debuted as a big success, and Nickelodeon ordered three more seasons of the show. With over 40 new episodes to produce and no original plan to make any more than 12, the series creators had to get to work, leaving fans reasonably skeptical. But Bryan Konietzko and Michael Dante DiMartino did not disappoint with Korra’s second season and continued to showcase the narrative capacity of animated television.

How were they able to keep the show compelling? Konietzko and DiMartino created a season that expanded on the show’s spiritual aspects while grounding the narrative in a much more human story — the philosophical disputes and reasons that lead to war. And the on-screen machinations reflect real, historical precedents for the march to war.

Legend of Korra’s second season follows the protagonist as the society she knows begins to crumble around her. In the process, she becomes a participant in a dangerous escalation that almost leads her people into a civil war. The slippery slope begins, as it so often does, with a supposed visionary pushing his beliefs on others, convinced he is on a spiritual crusade.

In the real world, when someone is so convinced he is on the one true path, he is often liable to take extreme measures to spread that supposed truth. We see this today, with the unrecognized Islamic State terrorizing Iraq and Greater Syria, as well as al-Qaida and other fundamentalist religious groups operating around the globe. Throughout history, we have seen these same attitudes manifesting in the Crusades, the Inquisition, and hundreds of other times when two distinct faiths have come into contact.

In Legend of Korra, it is Unalaq, Korra’s uncle, who believes the Southern Water Tribe has lost its spiritual way and seeks to rectify the situation. He convinces Korra that humanity’s evolution and progress have actually harmed its well-being, as a lack of spiritual understanding has led to increasingly frequent attacks from the Spirit World. After Unalaq saves Korra from an attack by a wayward spirit, the Avatar begins to trust Unalaq, at the expense of her own father, who suspects deception in his zealot brother.

Unalaq is the Chief of the Water Tribes, and his belief in the South’s spiritual failings leads him to take action. But Unalaq’s actions have consequences, and his heavy-handedness inspires a rebellious spirit among the citizens of the South. Those citizens become convinced that independence is the only answer and begin their planning in secret meetings. In this way, the South’s actions mirror those of the American colonists in the 1770s, who grew tired of the British Parliament’s tightening grip.

The Sons of Liberty met in secret and began advocating for independence as the rest of the colonies slowly came around. Troops occupying Massachusetts convinced most of the locals that action needed to be taken, and Parliament’s attempts to maintain control ironically cost it that control. Unalaq acts in much the same way, sending soldiers to occupy the South, raising tensions and marching his people closer to war. While Massachusetts delegate John Adams pushed the Continental Congress to declare independence, King George III made his intentions clear by sending in many more troops to quash the American rebellion. Adams was finally able to win over his colleagues by pointing out the obvious: that a state of war already existed between the colonies and their mother country.

The American Revolution tore families apart, and in Legend of Korra, the Avatar’s father and uncle find themselves on opposite sides of a conflict that promises to be destructive and deadly. When Unalaq’s intentions become clear, Korra decides to take action against her uncle and defend her father and people. Korra seeks out the help of a powerful nation to bring down the overbearing Unalaq and his troops.

During the American Revolution, it was Benjamin Franklin who sought a powerful nation, France, to help the struggling rebellion. Franklin used wit and his crafty personality to persuade King Louis XVI to send assistance to the Americans. Korra is a bit more direct — and characteristically abrasive — when asking the president of Republic City for help, and the president is unwilling to help out the independence seekers.

Without the help of the president, Korra tries realpolitik, going right to the Republic military with the request for help. Korra, as the spiritual center of the Avatar world, holds a lot of sway, and her brashness brings to light the problems with autocracy, whether within a theocracy or even a constitutional monarchy. If the decision to go to war lies in the hands of only one person, it could come about simply on a whim while a diplomatic solution is still viable. This is the reason the U.S. Constitution grants war-making powers to the legislative, and not the executive, branch of government.

Of course, in practice, that’s not the only way armed conflict actually takes shape in the United States. Spreading the war-making powers out among hundreds of people does not entirely tie the hands of the commander in chief and doesn’t mean senseless wars don’t happen. Especially not in a world in which war is profitable.

In trying to protect her people, Korra receives help from the character that perhaps best epitomizes American plutocracy in the entire show, Varrick. Varrick is the head of a large corporation with its hands in everything from manufacturing to media. Varrick uses his money to influence the politics of the world, funding the political campaigns of both Republic City presidential candidates, supporting the rebels in the South, and selling the weapons of war. Varrick is so blatant in his intentions that he makes Halliburton and Blackwater look like Girl Scouts, even outright saying, “If you can’t make money during war, then you just can’t make money.”

Varrick represents everything wrong with the current American military-industrial complex. Varrick is amoral, showing none of the traits we would normally associate with cartoon villainy, yet performing some of the most evil actions in the show. Varrick sees war as big business and will do what he has to in order to ensure fighting breaks out. He hires criminals to bomb public spaces to build support for the rebels, and sends them to rob Future Industries to force Asami to sell her company to him. Varrick then plans to profit even more off of the war by selling those same rebels Future Industries aircraft and robots.

Varrick even proves to be a master of propaganda. He enlists the help of Korra’s friend, Bolin, to star in “movers,” the Avatar world’s movies, which show Unalaq as a blatantly evil, Fu Manchu-like villain, ironically playing off the fact that the villains in the Avatar world are often colored with shades of gray. This, too, has its analog in world history.

During the World Wars, both sides of the conflict used movies to propagate their messages, often to a ridiculous degree. The Nazis disseminated their ideology with films such as Triumph of the Will and The Eternal Jew, while in America, Frank Capra released the film series Why We Fight to rally Americans behind the cause. Dr. Seuss was commissioned during the war to draw cartoons supporting the war effort, including some racist depictions of the Japanese, while Batman battled crudely stereotyped Japanese villains in film serials, and Superman made sure to “slap a Jap” in the comic books. Today, movies may not be as blatant in their propaganda, but using the media to spread a message — and even to start a war — is very much alive.

Legend of Korra, despite Nickelodeon’s decision to limit the show to digital distribution, continues to be an impressive work of fiction. The series manages to create a mythical epic while grounding it in an unfortunate part of the human experience. We may never be able to bend the elements, but we can relate to the feelings of helplessness as we watch the world around us descend into war. We can only hope that cooler heads ultimately prevail, but with so many factors constantly pushing the world toward conflict, that hope often seems misguided.


Hating on Supreme Court has always been American hobby

The Supreme Court of the United States made a poor, unpopular decision, and nobody seems to be happy about it. Questions were immediately raised about changing the makeup of the court and amending the Constitution to fix the justices’ huge error. After all, they are an unelected branch of government and an anomaly in our democracy.

You probably assume I’m referring to the Hobby Lobby decision handed down in June. Well, yes, I could be. Or I could be referring to Citizens United, or the Affordable Care Act decision, or Bush v. Gore. Or Roe v. Wade. Or Brown v. Board. Or Plessy v. Ferguson. Or the Dred Scott case. Or Marbury v. Madison. Really, I could be referring to any controversial case from the Supreme Court’s 225-year history. The results are always the same: one side or the other screams bloody murder.

See, hatred of the Supreme Court isn’t a liberal or conservative position. It’s an American position. Hating the Supreme Court is as American as complaining about apple pie on the Internet. It’s part of our DNA, and it dates back to the beginning. And there is a very good reason for that: it’s what some of the nation’s Founding Fathers wanted. I say some because, as I’ve stated before, the Founders didn’t really agree with each other.

To many of the Founders, the Supreme Court was deemed a necessary part of a strong republic because it offered something unique to the system: a strictly legal check on democratic power. The court’s advocates were men who believed in the rule of the majority, but also the rights of the minority.

Then, as now, of course, the minority didn’t mean the downtrodden or the non-whites; it meant the wealthy. Most of the Founders were quite well-off and understood that an unchecked democracy would immediately target the landed elite and lead to their demise. The wealthy couldn’t have that. To many of them, the three primary rights were John Locke’s life, liberty, and property — not the pursuit of happiness, as Thomas Jefferson, publicist in chief, phrased it in the Declaration of Independence.

In fact, it was Jefferson who was the first American president to question the legitimacy and necessity of the Supreme Court. Despite being one of the wealthiest men in the country at the time, Jefferson was an advocate for greater democracy in the young United States. When Jefferson became president, he wielded his power forcefully, despite his previous objections to executive overreach by his two predecessors, and was able to justify it through his belief that he was representing the will of the people.

The Supreme Court during the Jefferson Administration was headed by Chief Justice John Marshall, an appointee of John Adams, a Federalist, and Jefferson’s cousin. The court of the day was still uncertain of its place in the government, and many men who had been offered seats on the bench turned them down in favor of what we would see today as lesser jobs. In Marbury v. Madison, the court had its opportunity to assert its position as a coequal branch of the U.S. government but had to do so in a way that would not offend the president so much that Jefferson might ignore the ruling.

In the landmark case, Marshall ruled in favor of Jefferson’s State Department, led by James Madison, which had refused to deliver a commission to another Adams appointee, William Marbury. Marshall’s ruling was that Madison had indeed acted illegally, but the State Department didn’t have to follow through with the appointment because the provision of the Judiciary Act of 1789 that allowed Marbury to argue his claim before the Supreme Court was actually unconstitutional. The decision established judicial review and made it clear that even the president and his administration had to live under the rule of law, under the arbitration of the U.S. Supreme Court.

Jefferson was less than enthused. He wrote to Abigail Adams in 1804:

“The Constitution … meant that its coordinate branches should be checks on each other. But the opinion which gives to the judges the right to decide what laws are constitutional and what not, not only for themselves in their own sphere of action but for the Legislature and Executive also in their spheres, would make the Judiciary a despotic branch.”

And while Jefferson was the first president to have an issue with one of Marshall’s decisions, he was not the most famous. That would be Andrew Jackson, who responded to the Chief Justice’s decision to side with the Cherokee by completely ignoring it, leading to the Trail of Tears.

When Marshall died, it was Jackson, ironically, who had the honor of appointing a successor. True to form, Jackson followed the greatest chief justice with probably the worst: Roger Taney. Why was Taney the worst? Well, he made a ruling in a case we know as “Dred Scott.”

You may have heard of Dred Scott, the slave who tried to use the legal system and the Missouri Compromise to gain his freedom. When his case came before the Supreme Court, the justices could have simply decided whether or not Scott was legally bound to his master, but Chief Justice Taney had another idea. Taney wanted to use the case to settle the slavery question once and for all, and he wanted to settle it in favor of slave owners. Taney determined not only that the federal government had no power to restrict slavery in the territories, effectively destroying every compromise made since the country’s founding, but that African-Americans, whether slave or free, were not citizens of the United States.

Abraham Lincoln, who was then running for the U.S. Senate against Stephen Douglas, said:

“I have said, in substance, that the Dred Scott decision was, in part, based on assumed historical facts which were not really true … Chief Justice Taney, in delivering the opinion of the majority of the court, insists at great length that negroes were no part of the people who made, or for whom was made, the Declaration of Independence, or the Constitution of the United States.

“On the contrary, Judge Curtis, in his dissenting opinion, shows that in five of the then thirteen states … free negroes were voters, and, in proportion to their numbers, had the same part in making the Constitution that the white people had. He shows this with so much particularity as to leave no doubt of its truth…”

Lincoln was just one of many Americans outraged by the Supreme Court’s incorrect and biased decision. But within eight years, Lincoln would play a huge part in ending slavery in America, and would appoint Taney’s successor as chief justice. With the slaves emancipated through executive means, Lincoln wanted to preempt any questions the court would have about the Emancipation Proclamation and sought to pass a 13th Amendment to the Constitution, guaranteeing the end of slavery in the United States.

In addition to gaining their freedom, black Americans were given citizenship under the 14th Amendment, which also asserted, for the first time, the rights of individual citizens against the governments of the states. Proponents of the amendment, among them Justice John Marshall Harlan in his later dissents, argued that the 14th Amendment incorporated the Bill of Rights against the states, so that no state government could abridge the rights of free speech, assembly, religion, etc. Before this amendment, if Virginia wanted an established Episcopal Church, the Commonwealth was constitutionally free to have one. Despite that intent, in the court’s first opportunities to rule on the new amendment, the Justices decided that, no, the 14th Amendment actually didn’t protect anyone against state governments. Well, except for America’s most valuable people: corporations.

In the Slaughter-House Cases, the court ruled that the “Privileges or Immunities Clause” of the 14th Amendment guaranteed to Americans only the rights offered by national citizenship, not state citizenship. And those rights of national citizenship were very limited. This paved the way for future cases, particularly the Civil Rights Cases, to state that the federal government had no business interfering in the rights of states to discriminate against their own citizens.

The decisions were met with outrage, at least across the north and among Republicans of the time. The 13th, 14th, and 15th Amendments, known as the Reconstruction Amendments, were largely deconstructed, except when it came to freedom of contract. While the court claimed it had no power to stop states from discriminating against their own citizens, it did have the power to stop states from trying to regulate their own economies. See, if a business wants to hire someone to work for slave wages, state governments have no right to stop that business — at least according to Lochner v. New York, which saw the court siding with business against people, setting the standard we still see today. So sure, the amendments couldn’t prevent state governments from enacting Jim Crow laws, but they could stop them from enacting minimum wage and maximum workweek laws.

Perhaps the most infuriating thing about these cases is that most are still on the books. The Civil Rights Acts of the Lyndon Johnson Administration had to jump through Constitutional hoops and claim that Congress held the power to enact the new laws under the “Commerce Clause,” as opposed to the 14th Amendment, which is supposed to grant privileges and immunities of citizenship.

Lochner-era jurisprudence kept the government’s regulatory powers narrow and set a precedent that the court later used to strike down several pieces of New Deal legislation. President Franklin Roosevelt was no fan of the court’s actions, as he made clear in his 1937 State of the Union Address:

“The Judicial branch also is asked by the people to do its part in making democracy successful. We do not ask the Courts to call nonexistent powers into being, but we have a right to expect that conceded powers or those legitimately implied shall be made effective instruments for the common good.”

But following FDR’s landslide reelection victory in 1936, the court’s opinion seemed to change. West Coast Hotel Co. v. Parrish saw Justice Owen Roberts abandon his usual judicial stance and allow a Washington state minimum wage law to remain in effect. Many believe Roberts made the change in order to prevent Roosevelt from enacting his so-called “court-packing plan,” which called for the membership of the Supreme Court to expand from nine justices to as many as 15.

Essentially, FDR was the first President since Jackson to successfully counter the Supreme Court, and he did it without using Jackson’s hilariously illegal means of simply ignoring it. But the Supreme Court is supposed to be a bit antagonistic toward the president and Congress. It exists to balance out the power of the government so that even those who write and execute the laws are subject to them. And the president and the Senate get a say in who serves on the court in the first place.

Of course, presidential appointments don’t usually change judicial philosophies of the court overnight. In fact, the idea of lifetime appointments was that the previous generation of leaders would continue to hold some sway over the current. These appointees would last for about a generation before the president could choose a replacement. Ronald Reagan, however, changed the game by appointing very young justices to the court, allowing his appointments to affect American law for much longer than had been standard. His successors have followed the pattern, with George W. Bush’s appointment for chief justice, John Roberts, being young enough to almost guarantee him another two decades at the court’s head.

And the Roberts Court continues the long-held American tradition of frustrating a large percentage of the population. Ultimately, the court is a good thing, but it is easy to see why people are calling for changes in the system. Citizens United and the Hobby Lobby case have opened the floodgates to potentially dangerous changes in American governance. Or, they could be what finally causes some things in the country to change for the better. Only time will tell.

But until then, don’t be shocked the next time the Supreme Court does something that you find infuriating. After all, it is what (some of) the Founders intended.


Round out summer with easy-reading presidential history

Reading history books is tough. Even the most voracious readers often find it difficult to dive into a history, because the genre has a tendency to be overly academic; even fans of the subject can struggle with these works. What follows is a list of presidential biographies that are mostly easy to read and teach history in a way that is simple to follow. These authors turn their revered subjects into human beings, complete with flaws, problems, and goals.


American Lion: Andrew Jackson in the White House by Jon Meacham

Andrew Jackson is a pretty controversial president, and Jon Meacham captures perfectly the profound, contradictory nature of the nation’s seventh chief executive. Jackson is credited with expanding the very scope of democracy, yet he ruled with an iron fist, spit in the face of the Supreme Court, and owned slaves. He was a rough, rugged, angry, and vicious man who could also shake hands and behave cordially with the elites of Washington. Jackson is known for forcefully removing the Cherokee from their lands, yet he adopted a native boy as his son. He hated violently, and he loved sincerely.

Meacham displays the contradictions of Jackson’s personality and the drama that defined him by dividing the book into chapters based on each soap-operatic episode of the man’s life. You are able to more easily understand Old Hickory’s nature by learning about his courtship and loss of his wife, Rachel, his feud and friendship with Thomas Hart Benton, the Peggy Eaton affair that led to the resignation of Jackson’s entire cabinet, the fight over the National Bank, and his vicious, personal disputes with John Quincy Adams, Henry Clay, and John C. Calhoun.

Meacham’s American Lion is a great read that displays the evolution of a nation through the lens of the life of one controversial man — a man who lived through the American Revolution and died trying to prevent a civil war.


Lincoln at Gettysburg: The Words That Remade America by Garry Wills

Lincoln at Gettysburg is a much more intellectually challenging book than the rest on this list, but it is an important one to recommend. The book is short but dense, so take your time when reading this work.

Garry Wills sets out to explain the historical, political, philosophical, and even literary context of Abraham Lincoln’s famous speech on a hallowed Pennsylvania battleground. You will learn the influences on Lincoln’s mind, from Daniel Webster to Ralph Waldo Emerson, as well as his intentions when making the Address: Lincoln’s goal was to change the purpose of the war, to reclaim the Union’s status as righteous warriors from a Confederacy that was close to gaining help from European powers. And while Lincoln’s short speech was actually right in line with what was expected of him, it achieved a level of renown that revolutionized American literature and set the stage for writers like Mark Twain.

Wills makes you think harder about a speech that many Americans know snippets of without understanding the meaning behind the words. The book ties together 100 years of American history in dissecting a single 272-word speech by viewing it from every angle. The reading is demanding but also enlightening and incredibly satisfying.


Destiny of the Republic: A Tale of Madness, Medicine and the Murder of a President by Candice Millard

When concocting a list of the most interesting presidents, few would rank James A. Garfield among the Top 30. But that is almost exclusively due to his short time as the nation’s leader. Candice Millard does a masterful job of explaining the story of Garfield’s presidency and its abrupt end at the hands of an assassin, making it a much more compelling tale than any history class has been able to tell.

Garfield, more than perhaps any other commander in chief, is a victim of boring history classes. To many, he is just a name in a list of 44 presidents. And Millard even highlights this fact to capture the true tragedy of the nation’s second presidential assassination: Garfield just didn’t have enough time to make a real impact. When the assassin Charles Guiteau pulled the trigger, he robbed us of a good man with good intentions who could have gone down in history as one of America’s greats.

This book is not just about Garfield. It is just as much a book about Guiteau and Alexander Graham Bell. They were all men on a mission: Garfield is determined to end the spoils system and the corruption running rampant in the government. Guiteau is on a mission to gain a government post, which he believes he deserves for a single speech in which he praised candidate Garfield — and when he is not awarded the job, he begins a new mission: to end the president’s life. Bell’s mission is to save the life of the president by quickly inventing an early version of the metal detector to track the bullet that came from Guiteau’s gun barrel; Garfield stays alive long enough for Bell to use his invention, but due to the doctor’s arrogance and monarchical control of the situation, Bell is unable to find the bullet.

Garfield’s America was in a state of change, and much like Meacham’s American Lion, Destiny of the Republic showcases the country’s tremendous technological advances at the end of a presidency: Jackson was the first president to leave office by train, and Garfield was the first president who could have been saved by modern science, including antisepsis, which was gaining popularity in other countries at the time.


The President and the Assassin: McKinley, Terror, and Empire at the Dawn of the American Century by Scott Miller

Much like Millard does with Garfield, Scott Miller makes President William McKinley interesting by framing his life alongside his assassin and the popular figures of the era. Unlike Guiteau, who was driven by madness, McKinley’s assassin, Leon Czolgosz, was driven by philosophy — and very much understood what he was doing by killing the president.

McKinley’s presidency marked many significant changes in America, including the end of non-intervention as U.S. foreign policy, the beginning of the American empire, and the expansion of American trade around the globe — all a result of the Spanish-American War. While the book doesn’t apologize for McKinley’s actions as president, it does explain them in a way that shows McKinley as a conflicted character who honestly believed he was doing what was best for the country and the world.

Czolgosz’s path is fascinating as well, as Miller looks at a man who truly believed in anarchism and set out to bring the world closer to that ideal. In many ways, Miller leads you to better understand Czolgosz and his crusade in an almost sympathetic light: McKinley was the president of the moneyed interests in the time of the robber barons, and his death did lead to the presidency of Theodore Roosevelt, who instituted sweeping reforms aimed at helping the middle and working classes. Certainly, killing the president was the wrong course, but Czolgosz’s actions really underscore the growing sentiments of the working class at the time.


1920: The Year of the Six Presidents by David Pietrusza

Want to learn about several presidents all at once? The Year of Six Presidents tells the story of, well, seven presidents. Six of them — Theodore Roosevelt, Woodrow Wilson, Warren G. Harding, Calvin Coolidge, Herbert Hoover, and Franklin D. Roosevelt — competed for the presidency in 1920, and a seventh, William Howard Taft, was awaiting the opportunity to join the Supreme Court.

David Pietrusza takes you on a journey through the lives of all seven men as they rise to greatness and attain their respective presidencies. Along the way, you will learn about the flaws and neuroses of each president, like Harding’s love for women and Teddy Roosevelt’s belief that everybody but him did a poor job as president.

Adding to the intriguing tale are the era’s defining issues, including Prohibition, the League of Nations, and women’s suffrage, with the election of 1920 being the first time in American history that women had the right to vote in all 48 states.

Pietrusza tells of Teddy Roosevelt, who was an early favorite to return to the Executive Mansion but died a year before the election; Wilson, who was suffering after a stroke but, as the incumbent, was certain he would win a precedent-breaking third term; and Hoover, who was riding a wave of popularity following his actions in Belgium during World War I but would not be able to secure his party’s nomination until 1928. Pietrusza tells the tale of 1920’s ultimate victors, the unlikely candidates Warren G. Harding and his running mate and successor Calvin Coolidge, along with their vanquished opponents, James M. Cox and his running mate, the future president Franklin D. Roosevelt.

All seven Presidents featured in this book get their stories told, and all seven certainly seem more human when Pietrusza is done.


If you’re looking for some good reads that you’ll find both interesting and educational, check out these five books, read Lincoln’s twice, and see just how incredible — and flawed — these historical demigods really were.


State your name: Etymology of these United States

The names of provinces in any country can be seen as a road map of the people who lived there: the conquering, the conquered, the vainglorious, and the descriptive. The United States is no different. The East Coast is draped with the names of kings and the poetry of conquistadors, while the northern and western parts of the country are largely sprinkled with indigenous words and phrases that have all but lost their original meanings. Without further ado, let’s have a closer look at this patchwork tapestry.


Alabama is likely from a Choctaw term meaning “plant-cutters.”

Alaska is from the Aleut word alaxsxaq, which means, “the object toward which the action of the sea is directed,” or perhaps more succinctly, “the coastline.”

Arizona is a Spanish transliteration of an O’odham word meaning “having a little spring,” probably not in reference to the Colorado River. The O’odham, a Uto-Aztecan people, were frequently called Pima by the Spanish. This is probably because, when the Europeans asked them who they were, they replied “pi mac,” which means “I don’t know.” Y’know, because language barrier.

Arkansas is a French transliteration of the word used by the Algonquian peoples to describe the Quapaw, a Sioux tribe that lived on the Arkansas River.

California was named by Spanish explorers after a fictional place in a 1510 novel of Garci (Ordóñez) Rodríguez de Montalvo, whose works are compared to those which make up Arthurian legend. The name may as well be Arrakis or Hogwarts, as the word does not mean anything in particular.

Colorado was named after the Colorado River. The Spanish word simply means “colored,” and implies a reddish, weather-beaten hue.

Connecticut is from the Algonquian quinnitukqut, “at the long tidal river.”

Delaware was named for the first English colonial governor of, ironically, Virginia, Thomas West, 3rd Baron De La Warr. The word is roughly French for “of the war,” or “warrior.”

Florida was discovered by the Spanish in 1513 on Palm Sunday, one name for which is Pascua florida, or “flowering Easter.”

Georgia was named for King George II of Great Britain. As a side note, the Caucasian country Georgia was named for Saint George.

Hawaii is Proto-Polynesian for “place of the gods.”

Idaho is from the Kiowa-Apache word idaahe, meaning “enemy,” used to describe the Comanches.

Illinois comes from an Algonquian tribe who called themselves ilinouek, or “ordinary speaker.”

Indiana was named for the “Indians” in the Americas, who were, of course, not actually from India at all. The word originally derives from Sanskrit sindhu, the same root as the word Hindu, and it means “river.”

Iowa comes from Dakota ayuxba, or “sleepy ones,” describing the Chiwere people who lived there. I feel like we can just name the entire country “Iowa” without much difficulty.

Kansas shares the same root as Arkansas.

Kentucky probably comes from an Iroquoian word meaning “meadow,” something like geda’geh.

Louisiana was named after King Louis XIV of France, deriving from Old High German Hluodowig, “famous in war.”

Maine comes from the name of a French province. The word is Gaulish, probably in reference to “mainland.”

Maryland was named after the wife of King Charles I of England, Henrietta Maria of France.

Massachusetts is Algonquian for “at the large hill,” because there’s a big hill southwest of Boston.

Michigan was named after the lake, probably Old Ojibwe meshi-gami, which means “big lake.” So Lake Michigan is “Lake Big Lake.”

Minnesota is from Dakota mnisota, which means “cloudy water,” in reference to a river.

Mississippi was, of course, named after the river, originally an Ojibwe construction, mshi ziibi, or “big river.” There we go again. The Mississippi River is the “big river river.”

Missouri is from an Algonquian word meaning “people of the big canoes,” in reference to Chiwere tribes from that region.

Montana is just Spanish for “mountain place,” because of all the Rocky Mountains there.

Nebraska is from the Omaha word ni braska, which means “flat river.”

Nevada was named after the Sierra Nevada mountain range on the border. The word is Spanish for “snowy.”

New Hampshire was named after the English county of Hampshire. Hampshire, similar to Hampton, comes from Old English hamtunscir, which simply refers to a village-town and its surrounding farmlands.

New Jersey was named for the Channel Island of Jersey. The word probably either comes from Old Norse jarl (earl) or somebody’s name.

New Mexico was not actually named after the country of Mexico. The state, then a province of New Spain, was named in 1598 — more than two centuries before Mexico gained independence and took the name for itself. The word is Aztecan mexihco, of unknown origin.

New York began as New Netherland, where the Dutch West India Company set up shop for the lucrative fur and trading opportunities. It was renamed after British acquisition in honor of the future King James II, then Duke of York and Albany. The word York eventually derives from ancient Celtic Eborakon, or “Yew tree estate.”

North Carolina, South Carolina, and Georgia were all part of a single colony until the early 1700s, when governing disagreements caused many of the Lords Proprietors to sell their shares back to the crown. Carolina was named in honor of King Charles I of England (Latin Carolus).

North Dakota was named for the Dakota tribe, who spoke Sioux. Their word, dakhota, means “friendly.”

Ohio was named after the river, with the Seneca word ohiyo, which means “good river.”

Oklahoma is from a Choctaw construction, okla homma, which means “people who are red.” Daniel Snyder is vindicated.

Oregon is probably originally Algonquian, but we don’t know what it means!

Pennsylvania means “Penn’s woods.” Penn is Welsh for “head,” sylvania is Latin for “woods.” It was named not after William Penn, the founder of the colony, but rather his late father, also William Penn. Will wanted to name it “New Wales.” That’d be fun.

Rhode Island was probably an extension of the Dutch name given to Block Island, Roodt Eylandt, or “red island.”

South Carolina shares the same origin as North Carolina.

South Dakota was separated from North Dakota in 1889 on account of population growth. Name origin is shared.

Tennessee is named for a Cherokee village Ta’nasi, the meaning of which is lost.

Texas is a Spanish transliteration of a Caddo word, taysha, which means “friends.” Is it time to change the state’s motto to “I’ll be there for you?” *clap clap clap clap*

Utah is from Western Apache yudah, or “high.” As in mountainous, not as in stoned, considering the large Mormon population. Though Colorado is next door.

Vermont sort of means “green mountain” in French, but the correct form would be “Mont Vert,” so maybe the person who named it didn’t actually know French very well?

Virginia was named, of course, after the Virgin Queen of England, Elizabeth.

Washington was named after George Washington. The word is Old English for “Wassa’s house.”

West Virginia was originally part of Virginia but split during the Civil War.

Wisconsin was named after the river, but the origin of the word is unknown.

Wyoming comes from a region on the other side of the country, in Pennsylvania. It’s from an Algonquian word, chwewamink, which means “at the big river flat.” They named the state Wyoming because they had nothing better to call it, I guess?


So that’s the end of the tour. Hopefully you’ve discovered whether your favorite state is friendly or sleepy or even if you’re a “Jarlsey girl.” See you next week!


Nationalism, not just killing of Ferdinand, sparked WWI

“Patriotism is the belief your country is superior to all other countries because you were born in it.”
— George Bernard Shaw

With World Cup fever overtaking the globe, it is interesting to see the rise of patriotism in people, including me, who rarely go about flag-waving and chanting “U-S-A.” It’s a window into who we are as people and a remnant of something tribal within us. But while patriotism and nationalism can be great and honorable things at times, they also have a negative side — one that has caused some of the greatest mass slaughters in human history.

On June 28, 1914, Archduke Franz Ferdinand, the heir to the throne of Austria-Hungary, was assassinated by Gavrilo Princip, a Yugoslav nationalist. What followed was one of the bloodiest conflicts in human history, with over 16 million deaths and 20 million wounded.

In a world in which the United States sees far fewer casualties despite being perpetually at war, the idea that 16 million people could march to their deaths in a span of four years is simply unfathomable. But the reasons for which so many fought and died are still very much a part of our world. And while I am optimistic enough to hope it won’t happen again in my lifetime, I am not so naive as to believe it can’t happen.

The Great War did not begin simply because an Austro-Hungarian royal was assassinated in 1914. The archduke’s killing was just the final spark that set off the dynamite. The fuse had been burning for the previous 100 years, and while the flame had been doused at times, Ferdinand’s death ensured the fuse would be lit again with a flamethrower.

The story of World War I is not just the story of an assassination and entangling alliances. It is the story of the overwhelming power of nationalism — of the belief in national self-rule, of the belief in British exceptionalism, of the belief in American ideals and the inevitable triumph of democracy. In short, millions died because they believed their nations stood for something greater, and they were willing to fight for that.

The story begins in the 18th century, when a little-known soldier from Virginia named George Washington got into a scrap with French soldiers in what today is called Pittsburgh. That skirmish started the French and Indian War, which ended with Great Britain in control of vast new territories in Canada but in substantial debt. The Brits asked their American colonies to pay more in taxes, which sparked a long debate about the right of a free people to choose their own representatives. It ended with a war.

In declaring themselves free, however, the American colonists also declared that “all men are created equal.” The American cause spread to France, and soon, the French overthrew their own government and proclaimed liberty, equality, and fraternity. French nationalism became the new order of the day. No longer was a citizen identified as Corsican or Norman or Parisian — he was simply a Frenchman. The French Revolution, despite its high ideals, ended with the installation of a new emperor: Napoleon Bonaparte.

Napoleon spent the next few years conquering much of the European continent, and despite the fact that he was now a French monarch, he spread the ideals of the French Revolution. Napoleon carved out new states based on national identity, including the Kingdom of Italy, the Duchy of Warsaw, and the Confederation of the Rhine. For the first time, many of these peoples were coming to terms with the idea of ruling over themselves like the Americans and the French. The idea of nationalism came into vogue.

Most of these new nations were split up following Napoleon’s defeat and the Congress of Vienna. Reactionary forces were determined to maintain the status quo. That meant returning nations to their former masters and reinstalling the king of France. An example could not be set that it was OK to overthrow a monarch.

The Congress of Vienna’s goal was to maintain what they called the Balance of Power. Similar to the philosophy which kept world powers in a Cold War for half a century, the Balance of Power philosophy stated that no nation on the European continent should gain enough power to be able to crush the other Great Powers. The Great Powers, which included Great Britain, Russia, Prussia, Austria, and France, would act as checks on one another, with their strength being so equal that war between the Powers would be too deadly and costly. As an added bonus, the Powers would be expected to help one another out when any pesky ethnic minorities decided they wanted to pass their own laws.

That agreement came in handy in 1848, when numerous nationalistic groups tried to spark their own revolutions. The French were successful, ending the French monarchy and creating the French Second Republic, which was headed by Louis-Napoléon, who, of course, became emperor a few years later. Other nations were less successful.

The Austrian Empire faced the greatest threat from nationalism. The empire included Austrians, Hungarians, Slovenes, Slovaks, Ukrainians, Poles, Czechs, Croats, Romanians, Serbs, and Italians. Each ethnic group included those who hoped to achieve either autonomy or independence. Further complicating things were the Germans, who, like the Italians, were seeking national unification.

Within the next two and a half decades, both Italy and Germany would achieve their nationalist goals through war, bribery, and realpolitik. The unification stories of both nations were intertwined and helped to set the stage for World War I.

Italian unification began with the Congress of Vienna and ended in 1871, when Rome was named the capital of the united Italy. Rome was taken by the Kingdom of Italy in a victory against the Papal States when Louis-Napoléon, now known as Napoleon III, had to remove troops so that they could be used in a fight with Prussia. Italian unification set off a wave in Europe and created a sixth Great Power, upsetting the balance that had been struck in Vienna.

Prussian Chancellor Otto von Bismarck was no nationalist, but he understood power and understood how to manipulate the population. By provoking a war with Napoleon III and the Second French Empire, Bismarck knew he would be able to persuade the smaller German states to side with their brother countries. The Franco-Prussian War began and quickly ended with France’s humiliation. In the end, Napoleon III was removed from power, the German states united under Prussia’s King Wilhelm to form the first German Empire, and Alsace-Lorraine was ceded to the new empire. The loss of Alsace-Lorraine wounded France’s pride, and the country was determined to win the rematch, whenever that day came. The unification of Germany also upended Europe’s Balance of Power, alarming Great Britain, which would eventually find itself in an arms race with the Germans.

German unification was technically not complete, however. The Austrians were also a Germanic people, but as rulers of their own empire, which included many belligerent ethnic minorities, they had no desire to promote the nationalism that was making Germany strong. Still, their shared wariness of Russia led to the defensive Dual Alliance of 1879.

At the beginning of the 20th century, Austria was in the midst of political turmoil driven by nationalism. Serbia had won its independence from the Ottoman Empire in 1878 and proclaimed itself a kingdom in 1882; the Ottoman Empire, like the Austrian Empire, contained many disparate ethnic groups. Ethnic Serbs in Austria, as well as other Slavic peoples, desired to be rid of their Austrian overlords and wished to unite with their brother country. Austrian leadership was torn on how to handle Serbia, with Franz Ferdinand, ironically, maintaining a dovish stance toward the country. The biggest reason for Austria to stay out of Serbia, however, was that Russia considered itself to be the protector of this smaller Slavic country.

And thus the stage was fully set for a World War. Yugoslav nationalists, including Bosnians and Serbs, were sick of being dictated to by the Austrian aristocracy and the Congress of Vienna. A small group of militants chose to do as Washington would do and fight for their freedom. Ironically, they killed their best ally in the Austro-Hungarian Empire, and the aging monarch took action against a country that had been a thorn in his side.

By attempting to crush Serbia, Austria provoked Russia, which stirred German nationalism into a frenzy, causing the young empire to mobilize for war. Germany went to war again with France, which had been looking for an excuse for a rematch to take back Alsace-Lorraine. France, which was yet again a republic, had joined Great Britain and Russia in the Triple Entente. Thus, German aggression drew in Great Britain as well as Russia, which was already involved to protect Serbia. Oh, and Germany then signed a pact with the Ottoman Empire in the midst of this, bringing the dying empire of the east into the battle as well.

But what about the United States? Well, American nationalism has always been a bit different from that of its European counterparts. Even in the early 20th century, American ethnicity was not really clear-cut. Immigrants had come to the United States from all over Europe and were becoming part of a nation that already included immigrants from China and the descendants of slaves from Africa. American identity became less about a common history and more about a common philosophy.

That philosophy was democracy, and even though it’s debatable how truly democratic the United States has been at various points in history, it was something the country believed was worth fighting for. Thus, with France and Great Britain being pushed by the empires of Europe, it became necessary as an extension of American nationalism to join the war. After all, the world had to be made safe for democracy.

Sound familiar? Remember this the next time you are watching the World Cup and cheering for the U.S. of A., while CNN is explaining why we may be entering Iraq for the third time in as many decades. Nationalism can be a great thing, but it may also lead to the downfall of society. Appreciate what makes us different, but don’t let it divide us to the point of destruction. After all, the end of humanity is in no one’s national interest.

vanburen

Democratizing American English has been OK for years

This week’s special Lingwizardry is brought to you by our resident historian, Kevin Hillman.

Every year, Merriam-Webster adds new words to the dictionary. And every year, we bemoan the addition of lingo like “LOL” and “sext” to our official lexicon, as though it is indicative of the downfall of society. But these words, primarily created in the Internet age, are part of a long American tradition.

See, unlike the French, who have an Académie that regulates their language, English, particularly American English, has a long tradition of democratization. So while it is easy to whine about kids and their PDAs and typewriters ruining good ol’ American words, remember that it is nothing new, and an evolving language is really OK.

Actually, “OK” is the perfect example of how Americans have democratized the English language in the past 200 years. “OK” sounds like a word that would have been invented in the Internet age alongside LOL and WTF. The word, in all its simplicity, actually predates Internet lingo by about 155 years. It sounds like an acronym and looks like one, too, but ask people what “OK” stands for and very few could give you the answer. Most signs point to “OK” meaning “orl korrekt,” which you might recognize as making no damn sense whatsoever.

It turns out that American men and women of the 1830s actually had a sense of humor, and even though they were among the few people in the world who were actually literate, they liked to misspell words and make weird acronyms. You might recognize this as common practice for anyone who’s ever sent a text message. “Orl korrekt” was a purposely corrupted spelling of “all correct” and took on its meaning, and, as was hip at the time, was then shortened to simply “OK”.

This democratization of language was a direct result of the Declaration of Independence. Following the American Revolution, common folk began to see themselves as human beings on an equal footing with their supposed social betters. The American aristocracy (Greek aristokratía, from aristos, “excellent,” and kratos, “power”) certainly didn’t love the idea, but it came with the territory after claiming that “all men are created equal.”

The first and second generations born after the American Revolution had taken its purported goals to heart and began democratizing institutions everywhere. This democratization led to the drastic divisions in the Protestant Church, which created several new sects due to popular differences in interpretation. Politics became less about deferring to the greater sort and more about selecting the guy who agreed with your views — or bought you beer.

It also affected much of our language, with Mister, Madame, and other honorifics coming into common use for common folks, as opposed to being reserved for the upper class. Democratization of the English language meant the spelling of some words was changed to align with more common usage, a practice that exists to this day and can cause a lot of confusion.

Even the word “democracy” itself took on new meaning — or, at least, new connotations. To the Founding Generation, “democracy” (Greek dēmokratía, from dēmos, “people,” and kratos, “power”) was a dirty word, used in disgust and largely viewed as describing “mob rule.” Many of the founders of the American Republic thought the common people just weren’t capable of ruling themselves, which is the reason we have a Senate and an Electoral College and don’t pass laws based on referendum the way the Athenians did things.

The change in tone for “democracy” came at about the same time as the rise of “OK” — and largely thanks to the same man. “OK” was perhaps the biggest product of American linguistic democracy. An abbreviation and misspelling fad in 19th-century Boston created the most commonly understood word in the 21st-century world. Its simplicity and catchiness gave it tremendous spreading power. But “OK” never would have spread far outside of the working classes of Boston if not for the most important president in the history of the United States.

Of course, I’m speaking of Martin Van Buren. The eighth president of the United States and key founder of the Democratic Party began his life in Kinderhook (Dutch for “children’s corner”), New York, and was of Dutch ancestry. The first “ethnic” president, Van Buren actually spoke Dutch as his primary language. Van Buren was also the first president born after the American Revolution, making him the first president born a U.S. citizen — and making the “This is America, so speak English” argument look a bit absurd. Van Buren, the son of a tavern keeper, had no love for the aristocratic ways of Old America and was a strong advocate for giving a voice to the common man.

In the early 1820s, Americans believed party politics was dead. The Democratic-Republicans of Thomas Jefferson were the only viable political party left, having defeated the Federalists soundly and frequently. With the election of 1824, however, that all changed. With four different candidates on the presidential ballot, no man was able to secure a victory, and the race was thrown into the House of Representatives, which awarded the election to John Quincy Adams. Popular favorite Andrew Jackson was furious, and Van Buren, with his brilliant political mind, saw an opportunity.

The country was shifting from the days of the Revolution, which romanticized the ancient Roman Republic and the ideals of disinterested leadership, to the second generation of American political leaders and an affinity for all things Ancient Greece. Hence the shift in popularity of the word “democracy” as well as its philosophy, which was associated with Ancient Greek government.

Van Buren understood the shift better than anyone. After the election of 1824, Van Buren began assembling an opposition party against Adams by using the prestige and stature of Old Hickory. Jackson, viewed as a hero of the people, was able to garner support from all around the country, as common men viewed his rough nature and gruff attitude as proof that he was just like them. His immense popularity led to Adams leaving after only one term in office, as Jackson, the people’s chosen representative, was elected in 1828, establishing the Democratic Party as a force to be reckoned with.

Van Buren had built a party from the ground up and changed the way we look at politics — and our own language. Van Buren, in the tradition of Jefferson, changed the connotation of “democracy” into a positive one. The first generation born under the Declaration of Independence was seeing to its promise of equality for all — err, all white men anyway.

1836 was Van Buren’s year to take up his party’s mantle. He took on the leading Whig candidate, William Henry Harrison, and won comfortably, largely thanks to the popularity of the outgoing incumbent, Andrew Jackson. In 1840, however, Van Buren was not so lucky.

Van Buren’s Democratic Party and the Whigs who opposed it established what we would consider the modern political system. The elections were no longer about issues but more about appealing to the masses with slogans and sound bites. Harrison’s Whig Party ran with “Tippecanoe and Tyler Too” in reference to Harrison’s military victory in the Battle of Tippecanoe and his running mate, John Tyler. They also perpetuated the myth that Harrison, like Van Buren and Jackson, was a common man’s hero, born in a log cabin on the frontier — which he certainly was not. Harrison was Virginia aristocracy, born in a comfortable mansion, with a father who signed the Declaration of Independence. The log cabin image, however, caught on.

Van Buren, despite being the son of a tavern keeper, was portrayed as a wealthy, aristocratic snob. You may recognize these tactics, as they still exist to this day, with the best example being George W. Bush, a Yale legacy and son of a president, convincing the country that he was a Washington outsider and a cowboy.

To combat the Whigs, the Democrats began calling Van Buren “Old Kinderhook,” or “OK” for short, in reference to his hometown. His supporters used the nickname to gather grassroots support, but “Old Kinderhook” was ultimately defeated by Tippecanoe and his Log Cabin campaign, proving that perception is everything when it comes to politics.

On the bright side, the race greatly popularized the use of the word “OK”, with it reaching all corners of a country which had only a president and a common language uniting it. The introduction of the telegraph and the railroad, which signaled the birth of mass communication, meant that “OK” would spread across the nation, just like the Internet did for “LOL” and “sext”.

In time, our language further evolved, with “OK” now being written as “okay” in most of our conversations.

Today, we honor Old Kinderhook’s memory by invoking his nickname in every single conversation we have. So remember, Old Kinderhook may not make your list of top ten U.S. presidents — unless you rank them in chronological order — but Martin Van Buren did popularize the most American of words, and that is pretty OK.

tantalus

We all scream for … frogurt? Divine delicacies defined

By the gods, do I ever love ice cream. I would be eating ice cream right now if I had some. It’s like, 80 degrees in my house because I’m cheap when it comes to my electric bill, and it’s a struggle for me not to buy and consume the delicious, delicious foodstuff on a daily, if not hourly, basis during these warm months. Maybe you can relate.

But it’s not just ice cream either. There is a spate (a spate, I say!) of frozen yogurt shops springing up all over the place, and you’d be remiss not to visit Rita’s for Italian ice once or twice this summer. Perhaps the elusive ice cream truck will stop by with its merry song, delivering frozen dairy product to your front door.

Where do we get all these tasty desserts, and how do they differ from each other?

Ice cream, of course, is best during warm weather, but modern refrigeration techniques (vapor compression) have only been in use since the mid-1800s. How did the people get their refreshing cold desserts before that? The Persian empire would save snow in a domed building called a yakhchal (“ice pit”), in which evaporating water kept the temperature cool. If a city was close enough to a mountain, citizens could also send runners to the higher elevations and fetch fresh snow. They would pour grape juice on the snow and eat it. Just like mom used to make.

Around 200 B.C., the Chinese first used an ice cream-making device that looks like the kind of thing you can buy at Walmart today: a container that gets dunked in snow and salt, to lower the freezing point of the water, then sloshed around until the milk and rice have frozen inside. This was pretty much standard practice for the next 2,000 years.
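For the curious, the snow-and-salt trick can be sketched with the standard freezing-point-depression formula, as a rough, idealized estimate (real brines are far from ideal solutions, and the numbers here are purely illustrative):

$$\Delta T_f = i \, K_f \, m$$

Here $i$ is the van ’t Hoff factor (roughly 2 for table salt), $K_f \approx 1.86\ ^\circ\mathrm{C \cdot kg/mol}$ for water, and $m$ is the molality of salt in the melting snow. Pack in enough salt and the brine can drop toward roughly $-21\ ^\circ\mathrm{C}$, comfortably cold enough to freeze the milky mixture sloshing around inside the container.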

More recently, ice cream became more accessible and popular along with the onset of refrigeration, and the ice cream sundae (a respelling of “Sunday,” due to religious deference?) made its appearance in the late 19th century. Shortly thereafter, the ice cream cone, the soda fountain/ice cream parlor, and soft serve ice cream (in which air is mixed in to make the product lighter and allow it to move through a spigot) came along.

Aside from straight-up ice cream, we also have frozen custard, frozen yogurt, gelato, sorbet/Italian ice, and sherbet. The fab five?

Frozen custard was invented in 1919 by the Kohr brothers, who basically beat egg yolks into the ice cream to keep it cold longer for afternoons on the Coney Island boardwalk.

Yogurt (Turkish, “to be curdled”), of course, is a deeply storied food, stretching back at least four millennia in India and the Middle East. It’s made by introducing bacterial cultures to milk. The bacteria ferment the lactose into lactic acid (like when milk curdles, but … tastier), which makes it squishy and sour. The frozen variety was introduced in 1970, but only really came into its own with TCBY (The Country’s Best Yogurt) in 1981. Froyo went through a low in the late 90s (probably because marketing suggested a different Ice Cream of the Future) but has been coming back strong in the last few years. Though typically, when I go to a frogurt place, I pour just a little bit of yogurt and add like 500 pounds of toppings.

Gelato is from Latin gelatus (“frozen”). In the United States, there’s no specific differentiation between gelato and ice cream, although ice cream has to have a certain amount of milk fat (10 percent — less than that and you have to call it a “frozen dairy product”). Gelato, on the other hand, can be anything, though it is typically denser (less air is churned in) and lower in milk fat than American ice cream. Like I like my … men?

Sorbet and sherbet share the same root: Arabic sharbat (“a drink”). They are different things, though! At least in America, sorbet and Italian ice (or water ice) contain no dairy products, just frozen fruit juice, making them sweeter and giving them that sort of flaky consistency. Sherbet (also sometimes called sherbert, a recurring character in the Dilbert comic strip notreally) has 1 to 2 percent milk fat, which cuts the fruity sweetness and makes it scoop up more like ice cream.

So now that we’re caught up on mortal desserts, let’s see what the gods are slurping down in the heat. Contrary to what you might think, they aren’t all immortal just because. No, they get longevity from tasty treats, which is sort of the opposite of how we work in the mortal realm. For instance, the Greek gods would chow down on nectar (Proto-Indo-European: nek, “death”; tar, “overcoming”) and ambrosia (Proto-Indo-European n-mer-to, “undying”: the negating n- plus mer, “to die”).

Now, the gods are typically quite possessive about their grub. When Tantalus (root of our “tantalize”), a mortal king, ate at the gods’ table, he sneaked a bit of ambrosia into his doggy bag and took off. The gods punished him by locking him in the nether regions of Hades, where he couldn’t reach the fruit hanging over his head or the water below him, forever yearning and never being able to sate his hunger and thirst. Well, they did that either because he stole their fruit punch or because he carved up his own son into a pie and tried to feed him to the gods for brunch one time. Hard to say.

It’s not just the Greek gods who get immortality from their desserts. The Hindu angelic devas had to recover their longevity after a curse was placed on them. They convinced the demonic asura to help out, promising half of the life-giving, milky amrita (same root as “ambrosia”). After churning the ocean, using a giant snake king and a mountain, the devas got their juice and left the demons poisoned by the snake king’s toxic breath, unable to reap the benefits after all. Sneaky buggers.

The Norse gods, too, needed snacks to keep up their lifespans. In their case, the goddess Iðunn (“ever young”) tended apples that reversed aging in those who ate them. At one point, Þjazi, a jötunn and enemy of the gods who took the shape of a giant eagle, coerced Loki into luring Iðunn away from her orchard in search of interesting apples, and Þjazi kidnapped her. The other gods started to age and wither, and they forced Loki (who just gets pushed around a lot in this story) to get her back. Loki transformed into a falcon, flew to where Iðunn was kept, turned her into a nut he could carry in his claws, and flew her home. Hooray for ambiguously heroic/villainous trickster gods!

So whether you’re chilling out this summer with ice cream, frogurt, or immortality-granting produce, remember that you won’t win the gods’ favor by trying to feed them your progeny. Cheers, and we’ll see you next week!

viking_funeral

Memories of the fallen: Of Valkyries, burnt arrows

Today is Memorial Day in the United States, the day that marks the unofficial start of summer, and the day we honor and remember those who died in military service to the country.

We live in a relatively peaceful time here in the States. While I know a few soldiers or veterans from my generation, I haven’t personally known anyone who died in the Iraq or Afghanistan wars. Of course, over 6,700 have given their lives in these and other recent military engagements since the year 2000, but the days when U.S. soldiers would come back in body bags by the thousands seem to be mostly behind us, as our technology continues to outpace that of the third-world militias against which we tend to fight, and as machines take over many of the riskier jobs.

The good news, of course, is that we have fewer service men and women to mourn. The only trouble is that, because of our lack of practice, we’ve collectively grown worse at mourning and honoring those who have died for us in the past, and those who continue to die for us in the present. We do have a lot of great Memorial Day sales, though, right? So that’s something.

Memorial Day as we know it came out of the American Civil War. While the true origin is a point of some contention, ranging from observances held in Waterloo, New York, to Boalsburg, Pennsylvania, to Warrenton, Virginia, to Savannah, Georgia, there came about a common practice between 1861 and 1864 of decorating the graves of soldiers with flowers. Of course, graves had been decorated with flowers before that, but the establishment of a broad tradition of so adorning the graves of soldiers, specifically, began in response to the Civil War’s massive casualties, which reached somewhere around 600,000 to 700,000 by the end.

In fact, the holiday was called “Decoration Day” until 1967, shortly before the passage of the “Uniform Monday Holiday Act.” That law designated it, along with Columbus Day, Veterans Day, and Washington’s Birthday (now more commonly called Presidents Day), to always fall on Mondays in order to create convenient three-day weekends. Prior to that, Memorial Day fell on May 30 every year.

So, we have parades and grave decoration and flags flying at half-staff, but the day is no longer called Decoration Day, but Memorial Day. What’s the best way to go about remembering? The word “memory” is closely linked with the word “mind.” They share common roots, such as Greek merimna (“care, thought”), Latin memini (“I remember”), and Old English murnan (“mourn, remember sorrowfully”).

The Norse giant Mímir (“rememberer, wise one”) had his head cut off in the Æsir-Vanir War, and the god Odin carried Mímir’s head around with him, apparently in order to hear the dead, beheaded giant speak secret knowledge to him. A similar word is “mnemonic,” which comes from Greek mnēmōn (“mindful, remembering”). The goddess Mnemosyne birthed the nine Muses and presided over a pool in the underworld, granting memories back to those who had drunk from the river Lethe and forgotten their past lives.

So we remember our fallen soldiers by visiting their graves and decorating them with flowers. We symbolically give them life for a day out of the year. What have other cultures done to honor and remember their dead?

The Aztecs didn’t bury their dead at all, but burned them, as an offering to the sun god. The dead (and the soon-to-be-dead via sacrificial knife wound) would be dressed in ceremonial garb and burned at the holy temple pyramid. This also had the side effect of limiting the spread of disease. When warriors died afield, their bodies were burned en masse after the battle, and one of their arrows was returned home … to be dressed in ceremonial garb and burned in their stead.

In the time of Vikings, Scandinavians were often sent off in a small funeral boat with some of their treasured possessions. The boat would then be lit on fire, again to limit the spread of disease. A dead warrior’s soul was said to be escorted to Valhalla by the valkyrie (“chooser of the slain”), warlike maidens who may or may not be Odin’s daughters. The warriors then became einherjar (“fighters for a single time”), who eat of a beast that resurrects each night and train to fight in Ragnarök (“fate of the gods”), the final battle that will destroy the world and create it anew.

Egyptians buried their dead rather than lighting them on fire. Rich Egyptians might afford mummification, including all the necessary preparations and spells (like pulling liquid brains out of the nose with a hook) that would make it most likely the person’s spirit would be welcomed into the afterlife by the gods. Common foot soldiers likely would have been put along with some of their best possessions into a pit, where the heat and dryness would preserve the body to some degree. That way, their ba and ka (sort of like their mind and their agency) might unite and become an akh, which is sort of a benevolent ghost energy, ideally watching over and protecting their living loved ones.

Maybe this Memorial Day, you don’t know any soldiers who’ve given their lives in service. If you do, though, you might try branching out in how you remember them. Decorate their graves with flowers, sure, but how about burning one of their arrows, cheering them on in their Ragnarök training, or saluting their akh, which may be floating around you as you’re reading this?

Probably just stick with the flower thing.