Wednesday, July 10, 2024

The Inheritance of Ephemerality

The shelves in my apartment are stuffed to overflowing with books, most of them purchased over the past two decades. But interspersed among these many hundreds of texts are a precious few that have trailed after me throughout my life, following me as I traversed both time and space, decades and oceans. One of the most precious of these old friends is American Murder Ballads, Olive Woolley Burt’s study of that category of traditional folksong, which I must have purchased at a secondhand bookstore sometime in the late 1980s, when I was deeply involved with a local folk music society.

My paperback copy of Burt’s book was printed in 1964, just a few years after I was born. Basically, the book and I are both of the late-Boomer Generation: released into a society on the verge of tremendous cultural upheaval.

Burt’s book, which I have sitting before me as I write this, became important to me as the only printed work I have come across that deals with a topic which has intrigued me since early childhood. (Note: American Murder Ballads is not the “only” book on the subject of Murder Ballads, but it is so far the only physical copy that I own.)

You see, I was not even 10 years old when I became aware that the pop music scene — normally the territory of boundless cheerfulness — could also create somber and melancholic songs about Death — and still get airplay and attention! That revelation came to me through the radio and the turntable.

Interspersed with the majority of songs focused on romance — the endless variations of the eternal “I Love Her but She Doesn’t Love Me!” theme — the local New York City radio stations also gave us melodramatic songs about loved ones who had died. It’s very likely that anyone of my generation will automatically recall those tragic songs of lovers lost to motor vehicle crashes: “Teen Angel,” “Tell Laura I Love Her,” and of course “Leader of the Pack.”

But there were also more subtle melancholic compositions about death brought about by illness, with titles like “Honey” (Bobby Goldsboro), “Seasons in the Sun” (Terry Jacks), and “Memories of Love” (Chicago) remaining most prominent in my memory alongside the more lighthearted contemplation of the afterlife “And When I Die” as performed most successfully by the band Blood, Sweat & Tears (written by Laura Nyro and first recorded by Peter, Paul and Mary).

As an adult I would discover two contemporary folk songs that focus entirely on the end of life: “Long Ride Home” by Patty Griffin, and “Fancy Funeral” by Lucinda Williams. Both songs are worth checking out. I also highly recommend “My Death” from the classic Jacques Brel is Alive and Well and Living in Paris.

Alongside these emotional works were songs that addressed the violence of the times, from the American war against Vietnam to the assassinations of Civil Rights leaders and the deaths brought about by overall inequality.

American aggression in Vietnam spawned songs on both sides of the controversial war, with the nationalistic “Ballad of the Green Berets” memorializing veterans who lost their lives in battle. But as hostilities intensified, along with the number of young American soldiers killed during the presidency of Richard Nixon, a much larger body of works was recorded that referenced battlefield casualties in order to create powerful anti-war songs. This category included the ironic songs “Sky Pilot” (Eric Burdon and the Animals) and “Lucky Man” (Emerson, Lake & Palmer). Other lyric works referred more directly to the deaths of American soldiers but failed to gain much airplay, such as “The Unknown Soldier” (The Doors), “The Grave” (Don McLean), and much later “Goodnight Saigon” (Billy Joel).

There were also songs that addressed the victims of social inequality and those who died in the Civil Rights Movement, such as “Abraham, Martin, and John” (especially the combination “sound montage” of “What the World Needs Now/Abraham, Martin, and John” by radio host Tom Clay), “Ohio” (Crosby, Stills, Nash and Young), “Alabama” (Neil Young), and “In the Ghetto” (Elvis Presley).

And then there were songs that continued the folk music tradition of songs about historical accidents or incidents, such as “The Wreck of the Edmund Fitzgerald” (Gordon Lightfoot), “The Night Chicago Died” about a fictional gangland shooting that was inspired by the 1929 Saint Valentine’s Day Massacre (Paper Lace), “30,000 Pounds of Bananas” (Harry Chapin) about an actual trailer truck crash in Scranton, Pennsylvania, and the obviously titled “Dance Band on the Titanic” (also by Harry Chapin). It would be remiss not to include mention of songs about apocalyptic crises: “Miami 2017” (Billy Joel), “It’s the End of the World as We Know It” (R.E.M.), and the dueling versions of dystopia and utopia from Talking Heads in “Life During Wartime” and “Nothing but Flowers.”

As a child, however, the songs that left the deepest impression came to me via the turntable, and it was these that first introduced me to the category of “the murder ballad.” For this I can thank the folk music trio Peter, Paul and Mary. They gave me “accessible” versions of murder ballads such as “Pretty Polly,” “Polly Vaughn,” and “Lily of the West.”

Hearing and being drawn to these songs as an adolescent made me more aware of and open to the appeal of musical works that highlighted the presence of death and frailty as part of the human experience. But this fascination still flummoxes me because, you see, I had no personal reason to be taken aback or surprised by the reality of our mortality. Death was always part of my understanding of Life, the prize at the bottom of the box. It still is.

Perhaps the big shock and draw (pardon the pun) was the realization that music could be the medium through which this message is told. The airwaves of my childhood were dominated by cheerful songs, even when those same songs were moaning about a broken heart and the impossibility of mending it. The pleasure must have been in the contrast, the dark splotches on the otherwise brightly colored canvas.

Gimme Shelter?

As an adolescent I had never been sheltered from the presence and possibility of death being visited upon our loved ones or ourselves. You see, the grandparents who raised me and my siblings never tried to shield us from the reality of death.

Their generation had known the world differently. They knew Death as a cosmic force, a divinity with its own presence and will. They respected it, even as they dreaded it.

They were Eastern European immigrants in America who had been born in the late 1800s and experienced firsthand the anguish of being a colonized people caught between the martial ambitions of powerful nation states. They had survived wars.

They were also, as devout Catholics, inheritors of a spiritual system still deeply infused with traditions that were far older than the Roman Church. The Christian prayers they whispered and the rites they performed were unconsciously palimpsestic and attended to more than one metaphysical perspective.

It was their “traditional” worldview that cradled my psyche’s development from infancy to early adulthood. They had been my primary guardians, having taken my elder siblings and me into their home after my parents divorced and my mother was forced into a furious struggle for survival at a time when there were practically no structures to support a single mother.

The reflection I am engaging in here allows me to see that there was very little in my adolescent experience that would encourage me to look through rose-tinted glasses and pretend that death was not always a part of life. It was there in my family. It was there in the society.

It was there.

We were what today would be called the “working class poor.” This meant we were capable of getting through each season thanks to the financial cleverness of my grandparents. (It helped that we had a large vegetable garden, my grandmother knew how to budget, and my grandfather was a carpenter capable of repairing anything — an innate talent passed on to my older brother who could perform the miracle of resurrection upon dead motor vehicles). My mother labored seven days a week at her full-time and two part-time jobs, while we children hustled however we could to bring in spare change until we turned 16 and were legally allowed part-time employment in retail and service jobs.  

But I’m traversing well-trodden territory, rehashing memories rich with the gripes and regrets that I’ve so often sprinkled into my blog posts. Sorry about that. My thoughts tended in this direction because I cannot separate the spiritual (and cultural) experiences from the larger socioeconomic ones. You see, while my guardian grandparents never tried to protect me from the awareness of death as an ever-present potentiality, neither did the local community we grew up within.

Ours was a wonderfully diverse neighborhood just a short bus ride from Manhattan. We were a neighborhood in constant change and undergoing great economic and demographic turbulences — even as the stormy Sixties reigned all around us at a national-cultural level. The streets we roamed and roared through as children did nothing to shelter us from the reality of death or the awareness that larger histories were unfolding far above our lowly streets.

In my Roman Catholic household (and this was probably true for all of those families headed by European immigrants from other Catholic-majority states such as Italy, Ireland, Hungary, and Poland) children were expected to accompany their elders to wakes and funerals. We had to be there, and we had to “pay our respects” to whoever was in the open casket, even when we had no idea who the “dearly departed” were.

Sometimes in our teenaged years we actually did know the departed.  They were childhood friends who died of drug overdoses or automobile accidents.

And then there were the ones who went somewhere and never came back.

My earliest recollection of someone disappearing is the high school classmate of one of my much-older brothers, a young man who went overseas to serve in Vietnam and never returned. I was constantly fascinated by the chest of personal belongings he had stored in our garage, his private cache of items that he didn’t even want his parents to have access to. For as long as it remained in our garage that box would stay locked. I never knew what had become of that secret trove of a disappeared man’s treasure.

Another box was part of my childhood: the large cedar “hope chest” that belonged to my mother.  

This coffin-shaped box stays in my memory not because it held the naphthalene-scented blankets that would keep us warm through cold winter months, but because beneath the quilts and comforters was another kind of comfort, something my mother simply called “the 22.”

To this day I have no idea who was the actual owner of the .22-caliber hunting rifle, or how it had come to be secreted beneath the linens. But I somehow always remember my mother’s urgent mid-afternoon phone call from work when she asked me to tell my grandmother to disinter the gun from beneath the blankets. That was the day the frighteningly violent ’67 Newark riots turned once-quiet city streets into a warzone, and it was all happening just a few blocks from our home. Slightly more than a decade later I would pass those burnt-out buildings on my way to college and shudder at the violence that unfurled wings of fire that day and left behind crumbling, empty husks where once had been apartment homes filled with life.

Not long after the riots and their overflow of rage had burned out — leaving 26 dead and over 700 people injured — I remember accompanying my grandmother to the final mass held in an old church building situated at the edge of this battleground between despair and oppression. The once-impressive stained glass windows of the sacred place had been shattered, and the walls outside were latticed with graffiti. The church building was scheduled to be deconsecrated. That was another kind of death.

My grandmother was one of a handful of seniors from “the old country” who had come to join the gray-haired priest in bidding farewell to the only place where these transplanted souls had worshipped in the language of their motherland. She would never attend a Sunday mass again.

My brothers and I, the American born, would spend our Sunday mornings at prayer in a much newer church building that served a different parish, one untouched by the furies of impoverishment and inequality. We spoke to the angels, the saints, and the tripartite deity in English.

We were just children.

And though we were unaware of it at the time, in that age of our ignorance, we were certainly not growing up within an age of innocence. The world was shifting around us, and we were — all of us — caught up in the turmoil. The instability had begun long before my generation was born.

I was too young to understand why the adults whispered about the Jewish mother in the corner apartment, whose husband paid me to walk the family dog after their only son went away to college. I knew firsthand that the woman was “crazy.” She was barely able to overcome her terror when she had to crack open the door to let her son’s large dog slip free so I could catch and leash him. On occasion I heard her screaming in terror though she was alone in her rooms.

And then I told my friends. We’d chortle, too young to understand the meanness behind our laughter. Our merriment was the manifestation of the barbarism all children betray when burning ants with magnifying glasses or pouring salt on slugs. We didn’t quite know why the adults scolded us for our private mockery and told us that “the poor woman” still bore on her forearm the tattooed numbers of her nightmare. But the adults never explained the meaning of those numbers. Perhaps the memory of the Second World War was still too fresh and painful for them, with the revelations that accompanied post-war “liberation” being too cold, too personally terrifying, for them to speak of to their children.

Only decades later would I remember this important detail, how the grownups spoke in hushed words about the woman’s tattoos and the numbers that signified a history so unimaginably horrible that even those who had actually come through the flames of war still couldn’t speak forthrightly about it.

We were only children, but ancient death was always among us. And despite our ignorance and immaturity, surely our unconscious minds must have sensed upon our shoulders the inherited burden of modern human history, a garland of skulls and warfare.

So, again, why was it such a surprise to me as a child when death showed up in the music we listened to?

The commercial media that catered to the late-Boomer generation born in the late Fifties and early Sixties fed us a steady diet of cultural sugar. Perhaps that is why the infrequent dollops of bitterness within this recipe of sweet amusement seemed all the more noticeable, their darkness more starkly visible against the backdrop of bright candied treats that commercial radio and television heaped upon our plates for daily consumption.

My childhood in the late Fifties/Sixties was awash in “entertainment.” For a closer look at the “ear candy” that we consumed so much of, check out the “Bubble Gum Bereavements: Part One” post at the Grandfather Hu blog. That post and this one are part of a larger project looking at Murder Ballads that is slowly unfolding as part of the Wyrd Words column at the online magazine The Antonym, starting with a look at the darkness woven into childhood lullabies. This post is designed to accompany an earlier writing titled "Sugary Sweet Entertainments" posted at Grandfather Hu's "Hu Reads Horror" blog.

Graphic: Painting by Edmund Dulac illustrating "Stories from Hans Christian Andersen," available within the Public Domain at Gutenberg.org.

Friday, May 26, 2023

Language as Resistance in Literature

Today my mind wanders back to the topic of “resistance” in literature. Indeed, I’ve just finished a look at the subject as I summarized the ideas that spoke most strongly to me after my reading of Barbara Harlow’s Resistance Literature, an academic study of “a body of writing largely ignored in the West.”

 Harlow's 1987 book looks at the role of literature — especially poetry and the prison memoir — in “Third World” liberation movements during the late days of European colonialism. What I have shaved off that summary post at the Grandfather Hu blog is shared here, in this Voce Della Volpe blog. It is a brief look at the topic of “language” as a weapon in the hands of the oppressor.

What are the “representative aspects” of resistance literature?

At the top of the list is “language.” Harlow argues that the writer’s choice of language is itself “a political statement.” In this Voce Della Volpe post, I’ll focus on the idea of “language” as a strategy for oppression.

This is actually a topic close to my heart, as I recall stories told by my grandmother, who grew up as a minority subject of the Austro-Hungarian Empire. My grandmother could speak and read Hungarian, which was the language of her schooling. But she also spoke of the punishment she and her classmates received when the teachers caught them speaking their native tongue. They were forced to kneel on the point of a triangular piece of wood until their knees bled. Whippings for repeat violators were not unusual.

This memory of my grandmother’s stories remained with me as I heard stories from Taiwanese friends who shared their own memories of being chastised, often with corporal punishment, for speaking their mother tongue in the classroom.

Language loss and linguistic repression are powerful implements in the colonial toolbox. The history of colonization in Taiwan offers a fascinating and frightening example of how conquerors understand the power of language as both a force of oppression and resistance. 

In the early years of imperial Japan’s occupation of the island (1895 to 1945) the local “Taiwanese” dialect and aboriginal languages were allowed as the languages of the non-official realm, while Mandarin was strictly forbidden. That situation was completely, albeit gradually, flipped when the island was ceded to the control of the Nationalist Party from China. With the Nationalists in full power, the many indigenous languages, the Chinese Hakka language, and the majority Taiwanese language were prohibited in classrooms, the media, and trade.

Mandarin came to be associated with “loyalty,” and Taiwanese was shunted aside as a signifier of lower social and cultural status until the late 1980s, when the full authority of the authoritarian regime started to fade. Even so, the decades-long suppression of linguistic expression proved a highly successful endeavor for the colonizing regimes, leaving some tribal languages endangered and reducing the number of Taiwanese-language speakers, even in regions once dominated by non-“mainlander” communities.

Linguistic colonization continues today in many nations, in part through language and educational policies, as is suggested by what some see as a decrease in the number of Cantonese speakers throughout Guangdong Province, the regional birthplace of this beautiful and complex language. Then again, the cause of this decline may just as likely be the flattening effects of the electronic media, or even the demands of a capitalist economy, which play their own role in cutting the hard crust of regional accents from the bread of a national language, a concern expressed by some in the United Kingdom. In the United States the dominance of the media has inarguably affected the way English is, like, you know, like, you know, spoken? Oh Valley Girl, what have you done to English?

In terms of literary production, polyphony is an important aspect of resistance writing. I cannot help but think of Mikhail Bakhtin’s idea of “heteroglossia” as a quality of resistance writing, with different speech patterns standing as challenges to the “historical record” of officialdom while representing the continuing presence of the marginalized. Polyphony in the resistance text reveals social and political dimensions, as well as the psychological depths of the protagonists engaged in these struggles. 

One of the living examples of the use of language as an obvious form of resistance is Gabino Iglesias, whose novels include entire passages of untranslated Spanish as well as accents that identify protagonists’ cultural and class backgrounds. Iglesias’ first book, Zero Saints, is an ideal example of Harlow’s observation that resistance narratives “may be positively or healthily challenging” to readers who are unfamiliar with texts that ask of them a larger degree of “awareness.” The predominance of Spanish in Zero Saints challenges the idea of English as the “American language,” while the protagonists and the narrative serve as “resistance” to the major political climate that has given rise to the horrors of human trafficking and economic inequality.

Language can be such an important part of a literary text, providing not only an element of resistance but of reality as well.

----------

*Artwork: “Roots and Wings” by Patti Durr, as found at the Whitman Works Company art gallery blog. Used without permission, so please undo my error by visiting their website and checking out the beautiful and relevant artworks displayed and discussed on their blog.

Tuesday, February 28, 2023

Words Can Make a Difference


An inspiring article about neoliberal French philosopher Bernard-Henri Lévy appeared in this morning’s New York Times (March 1, 2023).

Did you say Inspiring? Yes. I actually used that adjective — and more shockingly I’m applying it to the philosopher’s words.

I know, I know. It is downright frightening that I could be lulled into admiration for anything said by or about a proponent of near-fascist ideals, but there it is. The report is an introduction to a new documentary film that Lévy has recently made about the Russian colonization of Ukraine: Slava Ukraini (Glory to Ukraine).

The inspirational arguments I’m drawing from this report — a combination of news, review, and analysis — come not from any of the millionaire philosopher’s particular sociological views (such as the gratingly sexist and anti-environmentalist arguments he’s expressed), but from his larger sense of “liberation,” “resistance,” and “relevance.”

Specifically, I’m drawn to Lévy’s notion of “tikkun olam,” the idea that Jews have a responsibility to “repair the world” through good deeds.

The Hebrew term — “mipnei tikkun ha-olam” — is derived from a Medieval mystical tradition that sought to separate the sinister and the sacred in the world by bringing people together for the contemplative performance of religious acts.

Lévy’s 1977 book Barbarism With a Human Face is credited with the popularization of the idea of “tikkun olam,” spreading it beyond the liberal Judaic community where it is at the heart of progressive social action programs that strive to improve the world through tzedakah (charitable giving) and gemilut hasadim (acts of kindness).

The ideals of “tikkun olam” appeal to me, especially as I learn to struggle against my mind’s addiction to the wicked brew of nihilistic cynicism and academic detachment.

The greatest inspiration I glean from this Times report of Lévy’s career is his notion that words can be powerful weapons of resistance and liberation. And while the philosopher endorses the power of words as expressed in philosophical argument offered in the textual and audiovisual media, I see in this the idea that Art has strengths that extend far beyond aesthetic and entertainment values. Art can change the world, influencing human activity toward good or evil.

The controversial philosopher claims his original muse was his father, André Lévy, an Algerian refugee whose teenaged engagement as a resistance fighter in Spain and France during the Second World War gave stimulus to the young Bernard-Henri’s idea of going to wartime Sarajevo in the early 1990s to make a documentary film about the genocide of Bosniak Muslims. The film Bosna! was released in 1994.

Lévy said his belief in the power of individuals to wake up the world was cemented during the time he spent filming, observing, and talking to people in Sarajevo. “Bosnia showed me that ideas matter, words can make a difference, decision makers can be convinced and that individuals can be a grain of sand that blocks the machinery.” (NYT)

Monday, September 19, 2022

I couldn't help but be delighted by the beauty of these tribal engagements with reality.

  • An Australian aboriginal creation story says the universe was sung into being. 

  • Papua New Guinean tribes believe there is no separation between the physical realm and the world of spirit. That's perhaps why plants and animals also hold divine powers.

  • A pygmy tribe in Africa uses song and dance to thank the forest.

  • Southern African bushmen shamans use a dance ceremony to enter the spirit realm. This spirit realm is a world that exerts great power upon our corporeal reality.


Sunday, February 20, 2022

If Only?


Do you remember when the news sites were rippling with suggestions that neuroscientists had found a link between "reading serious literature" and increased empathy? While discarding old papers from my files I came across an essay by Prof. Julie Sedivy from a 2017 issue of Nautilus.

Without discarding the research entirely, Sedivy pretty much shows the holes that can be poked into the various findings. But I have to admit: I want to believe that our reading habits can make us better people.

Wednesday, June 23, 2021

"Simplicity" as Resistance Writing

Writing is a viable form of protest and resistance for a quiet “recluse” like me. You can understand why, when I was flipping through some old magazines at home, I was inspired by an article about Taiwanese writer Liu Ka-shiang (劉克襄).

He was profiled in the August 2011 edition of the Taiwan Panorama magazine, in which he was put forward as an example of a “quiet” resistance artist who writes on everyday life, inspired as he is by the encounters he experiences during his daily long walks and occasional extended tours.

Liu earned a reputation for his “slow travel” guides that appealed to the “non-mainstream” tourist. He sums up his collection of over 70 published titles as an attempt to capture “a Taiwan that most people don’t know about — one that is either long gone or on the way out.”

Addressing the idea of writing as a form of resistance for quiet, solitary personalities like himself, Liu argues that a writer can “do more than stand on the front lines of protest.”

Liu is a proponent of storytelling as an act of critique and education. He also finds solace in poetry. Through storytelling he advocates for Nature against the Industrial crush that increasingly threatens Taiwan. With poetry he re-energizes himself by communing with the world of spirit and energy.

And by taking long walks he re-invigorates both mind and body, and heightens his connection to the Natural world. “After you’ve walked a long time, you discover … the meaning of simplicity.” 

Friday, January 01, 2021

Art and the Sinking Ship


My friend gave up his career as a bureaucrat and became a painter. But he’s struggling financially, surviving on a budget that is well below the poverty level. And I think, at times, it’s killing him. It hurts when I see how he and many others among both my real-world circle of friends and my social media contacts are struggling to make ends meet. Some are fortunate to have full-time teaching jobs, but many are relying on income from box office, bookstore, and art gallery sales. 

It was with this dual sense of foreboding and admiration for them that I read in The New Republic a book review written by economic journalist Robin Kaiser-Schatzlein, who was critiquing Shannan Clark’s history The Making of the American Creative Class.

I’ve copy/pasted and summarized what I see as the most surprising, frightening, and intriguing elements of Kaiser-Schatzlein’s piece. There’s a link at the bottom so you can read the original essay entitled “The Artist Isn’t Dead” at The New Republic website. You really ought to read Kaiser-Schatzlein’s full essay.

Here’s what I see as the most important takeaway, with most of this quoted directly from the review:

The creative class as we know it emerged as a by-product of industrialization and the introduction of a consumer economy. White-collar work exploded into existence between 1880 and 1949, far surpassing the growth in blue-collar work. In the growing mass consumer economy, manufacturers had to find a way to make their goods desirable to the public, and so mass advertising was born and white-collar work became creative. Advertising dollars funded the growth of newspapers as advances in printing technologies enabled mass employment for writers, photographers, printers, graphic designers, editors, and artists.

The Great Depression tore a jagged hole in the heart of this culture industry, driving creatives across the spectrum to join together in unions and guilds. Unionized cultural workers were able to achieve a middle-class salary that, for a while, kept up with the rising wages of unionized industrial workers. Meanwhile, the New Deal programs that hauled the United States out of the Great Depression and launched it into the prosperity of the postwar boom directly supported cultural workers as well.

This dynamic combination of organized cultural worker groups and government support encouraged a flourishing of the culture industry that did quite well until the ugliness of McCarthyism struck against them. The anti-liberal phobia effectively neutered many cultural industry labor groups by forcing a culling of their more radical members. The larger labor movement, which at the top was thoroughly white, male, industrial, and conservative, was happy to watch the creative and perhaps more diverse left flank die. 

But then in the late 1960s the U.S. economy began to deindustrialize, which slowly eroded the bargaining power of the labor movement as a whole. By the 1970s, a number of newspapers and magazines began shutting their doors. During the Reagan and Clinton eras the hydrochloric acid of Neoliberalism flooded over the New Deal levees of socioeconomic security and started corroding not only the financial systems, but social and cultural structures as well. Public support for social welfare programs declined, income inequality skyrocketed, and prices exploded to feed the needs of the financial sector. In an age of toxic individualism, creative workers were largely left to fend for themselves. They are not doing so well.

The culture economy is brittle. Art organizations, from museums to concert venues, require a massive structure of employees to function properly. The creative classes also need an audience: people with enough income to buy a range of books and paintings, and enough free time to go to concerts and museums. Sadly, our current economic system leaves the majority of people, including creative workers, vulnerable and powerless.

Artists and other creatives are but a slice of the art world, which itself is a portion of the wider culture industry that is verging on collapse. Many creative people today are swimming barely above the poverty line. The walls are caving in everywhere: book publishing is contracting and consolidating; the music and film industries are taking huge blows as they transition to streaming; and journalism continues to shed workers.

Across the entire economic system we are seeing the need to empower workers through organized labor movements while ensuring these working people have affordable-yet-excellent healthcare, childcare, education, food, and housing. But unionism is not nearly enough. The creative class en masse will need to get behind political movements that aim to provide low-cost housing, curtail the financial sector, and reinvest in public schools and municipal infrastructure. We falsely see culture through the keyhole of individualism, which makes it almost impossible to connect the conditions of the working people in general with the bleak economic prospects facing artists, writers, performers, dancers, and other cultural creatives.

We are all in this together.

--------------

Source:

“The Artist Isn’t Dead” by Robin Kaiser-Schatzlein.
Published online in the January 4, 2021 issue of The New Republic magazine.
Read Kaiser-Schatzlein’s essay at:
https://newrepublic.com/article/160695/making-american-creative-class-book-review-artist-dead

Graphic:
Echo of a Scream
Art by David Alfaro Siqueiros, Mexico


  

Wednesday, December 09, 2020

Who is Indigenous?

One of those important tasks that accompany Retirement is clearing away the debris of a career in academia. But habits die hard, and it’s not always easy to convince myself I no longer need reams of papers, tons of data for books I should have written but never will. The habit of data collection is hard to abandon, and I cannot quite convince myself that information is useless. So how about this compromise: I will post in this blog (which I’m pretty sure nobody reads) the facts and figures and bits of information I am still somewhat in love with. This is information, and these are observations, that I cannot imagine ever applying to my post-retirement writing projects, but which appeal to my ongoing curiosity.

And so here’s a share of some outdated numbers about global indigenous peoples that I have taken from my soon-to-be-discarded copy of the UNICEF Innocenti Research Center’s Digest (No. 11) entitled “Ensuring the Rights of Indigenous Children.” I believe this report came out in 2003. There's a link at the bottom if you want to download a pdf copy of the report.

Who is Indigenous?
It is difficult to define who fits within the category of “indigenous people.” The UN settles on criteria such as: 1.) the historical time that a people have laid claim to the land; 2.) the group’s voluntary perpetuation of themselves as culturally different across various categories such as language, social organization, religion, and modes of survival or production; 3.) recognition of group distinctiveness by State powers; and 4.) an historical experience of “subjugation, marginalization, dispossession, exclusion or discrimination.”

Indigenous peoples are “distinct from ethnic minorities.” Indeed, in places such as Guatemala, Bolivia, and Greenland the indigenous populations represent the majority. Further, indigenous groups often claim self-identification based on a separate culture linked to a specific territory, while ethnic minorities often emphasize political autonomy rather than cultural autonomy.

Where are Indigenous Peoples?
According to UN estimates, there are some 300 million indigenous people in more than 70 countries around the world, and approximately half of these are in Asia.

East Asia may claim some 70 million indigenous people, South Asia maybe 50 million, and Southeast Asia around 30 million. Many of the major State authorities are uncomfortable with the idea of identifying these communities as “indigenous,” so you will see a variety of terms used by these official sources: Malaysia calls them “hill tribes,” Indonesia calls them “isolated and alien peoples,” India identifies them as “scheduled tribes,” and China absorbs them as “minority nationalities.”

After Asia, Latin America is the region with the largest indigenous population, with estimates placing some 32 million people within Mexico and Central America (13 million), the Amazonian region (1 million), and the Andes (18 million).

Things get more complicated in Africa, where UN officials find they must lean more heavily on histories of discrimination to determine whether or not to include the nomadic peoples or the Pygmies as indigenous. Altogether, the organization identifies some 15 million (or fewer) as “indigenous” throughout the continent.

Identifying “indigenous” peoples in North America is a simpler endeavor that gathers together the 1 million Inuit in Russia as well as the many tribes already officially designated by the United States and Canada who number some 1.5 million people.  

In the Australasian region there may be 1.5 million indigenous Pacific Islanders, as well as some 350,000 Maori in New Zealand and 300,000 Australian Aborigines.

What are the Political Realities?
Indigenous peoples often experience discrimination that results in cultural exclusion, economic exclusion, and political marginalization. Cultural exclusion results in a perception of tribal cultures as “inferior,” a belief that is sometimes expressed in national policies aimed at the active suppression of those cultures. Economic exclusion blocks the tribes from the expected benefits of national economic development. Political marginalization hinders access to decision-making processes and governmental representation. “Often these manifestations of exclusion are overlapping and interrelated.”

Indigenous communities often experience crises in terms of healthcare and education. While many tribal populations demonstrate higher birth rates, which means younger and more vulnerable demographics, they are also more likely to have reduced access to healthcare. This is a phenomenon in both rich and poor nation states.

As for education, many indigenous schools find it difficult to pay adequate teacher salaries, so finding the best teachers is difficult, especially for tribal areas located in more “inaccessible” regions. On top of that is the crisis of imposing a monolingual or monocultural educational system upon children who come from backgrounds with unique learning habits. Children who leave tribal communities and attend schools in non-tribal areas then face the age-old misery of racially motivated bullying or school rules that demand the abandonment of indigenous practices of dress or hairstyle. Of course, the “trickle down” aspect of these educational challenges means tribal school systems also have trouble finding well-trained or professionally qualified teachers of indigenous heritage.

What is the Best Path Forward?
A central message of this report is that “successful and sustainable initiatives for indigenous children … are most likely to be founded upon a human rights approach that is, by definition, intercultural and incorporates indigenous worldviews.” In other words, the tribal communities themselves know what they need, but a good deal depends upon the larger authorities to grant them both the full economic support and the political autonomy to promote their languages, customs, and social structures.

Taking this post for a personal spin, I'm just going to give a shoutout to Taiwan, where the political shift from a fascist tyranny to a modern democratic state has brought good things for Aboriginals. The fascist government used to crush the cultural qualities of the tribes, but now they are allowed and even encouraged to express their traditional cultures. Much still needs to be dealt with, including the ongoing disputes between some tribes and government land management. But overall their access to education and healthcare is stronger than what may even be achieved in North America. Certainly the terrible suffering inflicted upon Native American communities by the Wuhan Coronavirus demonstrates the larger reality of unequal access in the United States. Let's all just hope that Taiwan continues forever in the expression of democratic governance that offers tribal people greater access to improved healthcare, education, and economic opportunity.


Download a pdf copy of the 2003 report here.


Saturday, November 14, 2020

When Gold Led to War

In an effort to clean out my bookshelves and discard “research” materials I no longer need now that I’ve retired and no longer engage in academic scholarship, I did a speed reading of Larry McMurtry’s short biography of Crazy Horse, part of the Penguin Lives series. Even though I knew nothing about Crazy Horse going into the book, and have already forgotten most of what I read, I came away with some interesting general understandings about American history and the Indian Wars in the West. Perhaps it's because this bit of "cause and event" is tied too closely to modern phenomena, a world where the desire for money, or for things that can be translated into money within a system of Capitalism, can be a cause of war.

My takeaway from the very short book …

Historian Stephen Ambrose argues that General Sherman foresaw the coming of the Union Pacific and Northern Pacific Railroads, and expected the trains would be a free ride into Indian territory for soldiers and buffalo hunters alike. Sherman’s was still a scorched earth policy, but now it would be paid for by the railroad barons. Apparently the inability to “put enough men in the field to scorch the vast earth of the west” was quite troublesome to the hero of the Civil War. It further frustrated him that fellow military veterans such as General Hancock and George Armstrong Custer had spent a good deal of money on campaigns that got some buffalo killed, earned the army a bit of publicity back east, but did very little militarily other than stirring up the anger of the southern Cheyenne who hitherto had managed to stay clear of hostilities. These endeavors had proven that “large forces of soldiers, dragging mostly useless equipment, could rarely catch up with the hostile Indians; the army was far more likely to blunder onto peaceful villages of Indians who were merely minding their own business.”

All that blundering was bad for business. The Federal government was actually forced to consider peace and offerings of money to tribes, financial reimbursements that apparently were hard to come by for a government mired not only in war debt but in a miserable economy. For an economy based on the gold standard, the Panic of 1873 had sent a wave of fear throughout the bureaucracy in Washington D.C. and the major Eastern cities.

Years earlier, in 1868, the government had signed a treaty giving unequivocal and irrevocable ownership of the Black Hills to the Sioux, with the “unusually clear provision that … the whites, were to be kept out.” But the U.S. economy craved gold.

Interestingly, it was Custer who was sent out to provide protection for the railroad geologists who scoped the land for future track laying, and he was sent again with an expedition of miners to “find gold in sufficient quantities to quench the thirst of the starving markets.” (78)

Unfortunately for the tribes, Custer announced in 1874 that the Black Hills were rich with gold. The citizen-led invasion of the Black Hills by miners, some working as individuals and some as corporate mining employees, began in earnest. Conflicts between the miners and the tribes who legally held the land also started in earnest, and the gold-hungry United States government had an excuse to start another war to drive the Native Americans from their land.

Too many thoughts come spinning into my mind from this, but what’s the right way to handle them, order them, control them? For that I will take up a fountain pen and a notebook. Sometimes that’s the only way I can get a hold on the hydra. 


Wednesday, November 11, 2020

When History Hides Behind Pedagogy

Pedagogical reforms within public school systems in the United States across the latter decades of the Twentieth Century were touted as being undertaken for the benefit of the nation’s children from working-class and impoverished urban backgrounds. And though many of these changes in curricula appealed to the generation that came into adolescence during the exciting yet turbulent period of the late 1960s and early 1970s, can we be sure these reforms did anything to advance the core values of liberal humanism?

One area where we might get a clearer picture of how pedagogical changes can look good on the surface while doing very little to actually challenge old ways of thinking is the field of education for Native American communities. In the early 1960s, educators and policy makers with backgrounds in the social sciences of anthropology and education began a push for school reform that was, in their eyes, designed to improve the chances of economic success for future generations of Native Americans.

Is it possible that these major curricula changes in public educational settings could actually have been designed not for the empowerment of the Native American individual, but for the disempowerment of the tribal nations themselves? That’s the question I ask myself after what I confess is an embarrassingly quick perusal of Guy B. Senese’s 1991 book called “Self-Determination and the Social Education of Native Americans.” What follows in this post is a “summary” of what I see (correctly or incorrectly) Senese saying in his text. 

“Self-Determination and the Social Education of Native Americans” begins with the reminder that the U.S. Congress holds the Constitutional power to abrogate its trust responsibilities: its promise to financially care for the Native American people inhabiting the reservation lands they were promised under treaty, and most importantly to protect the material resources of those lands, which the Native people agreed to call home in exchange for having abandoned far greater territories to an aggressive U.S. government. Those reservation lands and the resources within them could not be owned by anyone other than the tribes themselves. The reservation is effectively the property and management responsibility of each tribe’s government, so no individual member of the tribe can claim land ownership (the ability to sell the land to a non-tribal member).

But of course, almost as soon as the ink was dry, the U.S. government would experience a hankering for the resources available within those lands. The need for gold was, for instance, at the heart of the “Indian Wars” in the postbellum years. Even as late as the early Twentieth Century that hankering had continued, with the Hoover Administration eager to release tribal lands from governmental oversight during the Great Depression. It was FDR’s New Deal program that temporarily stalled this Federal push to reclaim tribal lands. 

The end of the Second World War and the coming into its own of the Truman Administration reignited the impetus to release heirship-allotted lands into the control of returning Native American veterans. In other words, veterans returning from the war had the right to sell the land dwelt upon by their parents, even if that land was part of the tribal nation and was forbidden to be sold to individuals outside the tribe.

One representative of the Pine Ridge Reservation expressed worry that if “tribesmen” were given the land, they would sell it to outsiders and “spend their money foolishly.” This would be how great tracts of tribal land would end up in the ownership of non-Natives. Tribal leaders were fearful of the powerful interests that would try to separate them from their lands, a warning handed down by their elders. 

But in Washington D.C., the stronger argument was for “G.I. Justice” and payback to Native veterans who had so honorably served the United States in the Second World War. Federal officials within the Truman Administration therefore doubled down on efforts to terminate the reservation status of tribal lands and give former Native American soldiers the right to sell their lands for profit. 

Another factor was in play at the time: in the postwar years the nation was still recovering from the trauma of global conflict and loss, and the dominant mood was against “pluralistic sentiment.” This trend gave strength to the idea that Native people should be “assimilated” into the national (white) culture and economy (industrial).

This powerful cultural breeze, always swirling around the bureaucrats in Washington D.C. who oversaw “Indian Policy,” was whipped into a furious storm with the Truman Administration and its adherence to the goals of “Termination.” Interestingly, the push to terminate tribal reservation status lost some of its fury when the reins of power transitioned to the Eisenhower Administration. 

Termination was nothing new. By the end of 1947 the Department of Interior had begun a termination policy that removed reservation status from a number of tribes, a program overseen by the same officials responsible for the incarceration of Japanese Americans during the war. There was ground enough for the Truman Administration to plant its flags of “G.I. Justice” and resume the push for Native Americans assimilation and tribal nation termination that had begun with the Hoover Administration but was shunted aside by the need to deal with the Great Depression. 

For some reason, however, the Eisenhower Administration stuck its foot out and tripped the policymakers in their race to termination. But that might simply be a result of a new leader wanting to do something to stand apart from his predecessor, because the Eisenhower Administration never actually abandoned the goals of assimilation and termination. Instead, the new government maintained the end goal, but disguised the vehicle for achieving this aim with an increasingly sophisticated form, hiding behind various socioeconomic and psychological models that received the applause and active engagement of professional educators and academics, especially those in the social sciences. This newly garbed assault upon tribal survival found shelter beneath the umbrella of “self-determination.” 

The Eisenhower Administration and the new Congressional embrace of assimilating the Native and taking tribal lands was driven in part by the Cold War which made clear the need for the minerals that lay beneath so much tribal territory. It was argued that national security depended upon the acquisition of these resources, which included oil and gas as well as uranium deposits. But a brutal policy of termination could not be easily achieved, especially as the newly arising leaders of the “Third World Movement” and the propaganda ministry of the Soviet Union were pointing to the economic despair of Native tribal areas as proof of Capitalism’s failure. 

If termination goals could not be rushed through as eagerly as hoped for, at least the territories could be more easily controlled through the agency of the Bureau of Indian Affairs (BIA). But of course, the energy industry still slavered over the thought of operating free of BIA and tribal regulatory interference, possibly benefitting from the inexperience of even the most sophisticated tribal councils. 

Assimilationist policies that sought to draw individual Native citizens away from their tribal lands were invigorated by economic attitudes that envisioned people as resources. Put bluntly, Native individuals were little more than incredibly cheap labor. In a sort of feedback loop, economists saw in underdeveloped areas a workforce that would itself be crucial to developing a comfortable investment climate. This resulted in official programs that encouraged major industries to establish production lines in or near reservation lands. 

The BIA was eager to cooperate in this program, while the Eisenhower Administration saw international public relations gains in the “industrialization” of tribal communities. Tax incentives guided many non-Native business interests to open assembly lines on reservation lands where labor proved quite cheap, trainable, and tractable. Enticements were also provided to corporations that could draw Native workers away from their tribal homes and into urban industrial economies where they would be divorced and divided from the ways and worldviews of their ancestral inheritance.

It was unquestioningly assumed that industrial development would itself be an educative force, endowing tribal people with factory capable work habits, bank favorable values of thrift, and an overall embrace of Euro-American nationalist cultural values. These values, centered upon the ideals of individualism and “self-support and self-help,” could erode traditional, tribal, and communal sensibilities. It was hoped that these “assimilated” Natives would themselves abolish the tribal authorities and embrace the philosophy of “self-determination,” the popular new way of saying “termination.”

But when, in some instances, these same workers proved themselves quite adept at adopting Western labor practices by going on strike, the corporate investors were noticeably distressed. Production most noticeably came to a halt in the Navajo Nation as poorly paid laborers attempted to unionize their workplaces, and investors did not hesitate to remove themselves from reservation lands. Unionization and the fight for a fair living wage in the mid-1970s threw a wrench into the BIA’s smoothly running program, in place since the 1960s, of guiding Vietnam war-related defense monies to electronics assembly lines and garment manufacturers on tribal lands.

Thus came crashing down the claim that the tribes had never had it so good and would eagerly be satisfied with economic exploitation in the form of low pay. What to do?


Academics turned to Education as an available route to assimilation, with the goal that Natives themselves would eventually undertake to sell off their reservation lands.

Under the many decades when boarding schools dominated the national government educational policy, Native children were taught vocational subjects. Nearly 75 percent of their learning time was dedicated to preparing for repetitive assembly line skills and the development of “desirable attitudes” toward physical labor. 

By the early 1960s, however, it was generally agreed that Native children would be directed to public school on or near reservation lands. The dominant public school curricula appeared on the surface to be less invested in training an industrial labor force, but at heart educational policies continued to serve as producers of fodder for factory assembly lines. Professional educators, by and large, proved themselves thoroughly accommodating to the industrial economy.  

The influence of school reformers and social policy developers had endowed public education with a bright sheen of community development and self-determination. But many saw beneath this silver lining the ongoing strategy of transitioning tribal members toward eventual termination of their reservation governance. Bilingual, bicultural education, community control of local schoolboards, arts and crafts programs—all were seen as increasing a sense of apathy toward the warnings that colonial powers and nationalist endeavors could not be trusted. 

Among those who gave warning was Vine Deloria Jr., who separated “the means to educate” from “the intention.” He saw the “intention” behind all the late-1960s money and scholarly attention being given to the education of Native children as an effort to blunt the tribal person’s resistance to government termination policies. Education, he worried, was in the service of training Natives to willingly embrace their own eventual exploitation and termination of their own tribes. 

Deloria was part of a diverse movement of tribal thinkers and activists who had experienced a changing consciousness about “the anesthetic of false self-determination.” They cleverly used the ambiguous educational reforms of the 1960s to re-define for their communities educational structures that would benefit the economic and social development of the tribes without sacrificing actual power and control of the lands. With the occupation of Wounded Knee and the trashing of BIA headquarters in Washington D.C., this new generation of tribal activists saw not only the Federal government (and the succession of political leaders) as their adversary, but increasingly distrusted tribal governments they saw as non-representative of the majority of community members. The “invested” status of many tribal leaders, they suspected, was a barrier to their new nationalist definition of self-determination. 

As well as feeling unrepresented by tribal governments, these young activists also saw education as crucial to their goals of eventual self-determination. 

“Resource control and educational control developed into key themes toward legitimate self-determination,” Senese says. The new generation encouraged curricular changes that would re-empower the cultures and values from which the “self” arises. They believed that with self-regeneration would come a new form of tribal economy and development.

I’ll stop here, ruminating on the idea that all good education should contribute to the formation of a stronger, healthier sense of “self” that includes not only the individual but the communities that the person comes from and remains part of, a living social body.

------

Photo Source: Teach for America, NYU Special Collections


Sunday, May 10, 2020

The Awesomeness of Bookstagrammers

In a post at the LitReactor website, horror/thriller novelist Gabino Iglesias offers "10 Reasons Why Bookstagrammers are Awesome." Lists always intrigue me, so I was drawn to Iglesias' post. (The fact that I enjoy the author's work contributed to my interest, sure.)

Iglesias notes that, "Not many book reviewers will go into the feelings they experience when reading a book, but bookstagrammers will." The payoff is that such "honesty" encourages interest, perhaps leading to more book sales. For authors the prize is that these posts provide "a different version of what their work is accomplishing out there in the world."

Another point Iglesias makes is that bookstagrammers are "a positive force" insofar as they generally won't share a book they disliked, but will express great passion in support of a novel they enjoyed.

And that leaves me wondering about the role of the critic. Are "amateur" online book reviewers "critics"? Are they even reviewers? Or do their posts qualify them as "hobbyists"?

But wait, why bother asking these questions? One of the most important observations Iglesias makes is that bookstagrammers "are great at building community." And if there's anything we need in this age of neoliberalism, it is the building of communities. What better community can we find than the community of book readers?



The Reading Brain Can Deteriorate

In his review of Reader, Come Home, Maryanne Wolf’s new book of essays on the neurobiology of reading, Mark Bauerlein notes:

"If reading is not natural but invented, it can deteriorate. If the brain adapted to print because of repeated exposure, it can adapt away if exposure slows. The circuit will break if unused, or if something different from print draws more of the brain’s attention. If screens take the place of paper, the brain will react."

This is the topic Wolf focuses on in the last two of her four newly published essays. Speaking for myself, I recognize the truth in her fear about our ever more frequent use of electronic screens: the hyperstimulation (that thrill of a quick “like”) and the loss of long-term focus are real.

In fact, my lazy brain tells me to stop summarizing and just leave a link to Bauerlein’s review here.

Saturday, August 10, 2019

Bodily Breakthroughs to Comprehension

This essay from Irina Dumitrescu reminds me that body and brain/mind are of course linked, so sometimes the physical state influences the ability to read. As an undergrad, Dumitrescu discovered interesting ways to break through into comprehension of "difficult" texts. She speaks of how Wordsworth came to her through pacing (walking), while Milton was a bit more of a challenge, requiring a hot bath. What she doesn't discuss is that her breakthrough moments seem to have occurred when she was no longer reading for class, but for herself. Interesting also is how, as a scholar, she sees the profession as draining the lifeblood from her reading. This essay, offered online by the Longreads website, is excerpted from the book How We Read: Tales, Fury, Nothing, Sound.

Graphic: The Death of Marat by Jacques-Louis David 

Friday, September 07, 2018

Neil Gaiman on Why We Need Libraries

We need libraries: a graphic essay by Neil Gaiman and Chris Riddell.

Everything changes when we read, and libraries are an important avenue toward the goal of making changes that will enable us to be better people. Neil Gaiman, with graphic assistance from Chris Riddell, makes a down-to-earth argument for why we need libraries for our children and ourselves.

The argument that "fiction is the lie that tells the truth" is beautifully empowering. Reading encourages imagination, and that leads to the ability to imagine a better world. It is an argument against our contemporary cynicism that leads us to passive withdrawal from the world.


Sunday, March 08, 2009

The Smog of Academic Consensus

latimes.com


By Crispin Sartwell


May 29, 2008

That the University of Colorado is raising $9 million to endow a professorship of conservative studies is rather delicious in its ironies. It smacks of affirmative action and casts conservatism in the syntax of departments decried by conservatives for decades: women's studies, gay studies, African American studies, Chicano studies and so on.

Furthermore, the idea of affirmative action for conservatives seems gratuitous. These other groups may be oppressed, but conservatives run whole wars, black site prisons, sprawling multinational corporations. In fact, if these other groups are oppressed, it's conservatives who are the oppressors, which may render faculty meetings a bit tense.

But as an academic who is neither a liberal nor a conservative (anarchism has its privileges), let me tell you why I think a "professor of conservative thought and policy" in Colorado, or anywhere else, is not such a bad idea. Within the academy, conservatives really are an oppressed minority. At the University of Colorado, for instance, one professor found that, of 800 or so on the faculty, only 32 are registered Republicans. This strikes me as high, and I assume they all teach business or phys ed.

I teach political philosophy. And like most professors I know, I bend over backward to sympathetically teach texts I hate; I try to show my students why people have found Plato and Karl Marx -- both of whom I regard as totalitarians -- compelling. But when I get to the end of "The Communist Manifesto," I'm usually asking things like this: "Marx says that all means of communication should be centralized in the hands of the state. Anyone see any problems with that?"

I don't deceive myself into thinking that I teach these texts as well as, or in the same way as, a professor who found them plausible. And that's fine. What I'm trying to point out is that even as I try to be neutral (well, even if I did try to be neutral), my personal opinions affect every aspect of what I do, and I think that is generally true.

But it can be horrendously true in academia, where everything is affected by the real opinions of real professors, from the configuration of departments to the courses on offer to the texts taught. And because there's a consensus, there is precious little self-examination; a slant that we all share becomes invisible.

Academic consensus is a particularly irritating variety of groupthink. First of all, the fact that everyone agrees and everyone has a doctorate leads to the occasionally explicit idea that all intelligent people think the same thing -- that no one could disagree with, say, Obama-ism, without being an idiot. This attitude is continually expressed, for example, in attacks on presidents Ronald Reagan or George W. Bush, not for their political positions but for their grades and IQs.

That the American professoriate is near-unanimous for Barack Obama is a problem on many levels, but certainly pedagogically. Ideological uniformity does a disservice to students and makes a mockery of the pious commitment of these professors simply to convey knowledge. Also, the claims of the professoriate to intellectual independence and academic freedom, supposedly nurtured by tenure, are thrown into question by the unanimity. Professors are as herd-like in their opinions as other groups that demographers like to identify -- "working-class white men," for example. Indeed, surely more so.

That's partly just a result of the charming human tendency to nod along with whoever is sitting next to you. But it's also the predictable result of the fact that a professor has been educated, often for a decade or more, by the very institutions that harbor this unanimity. Every new generation of professors has been steeped in an atmosphere in which the authorities all agree and in which they associate agreement with intelligence -- and with degrees, jobs, tenure and so on. If you've been taught that conservatives are evil idiots, then conservatism itself justifies a decision not to hire or tenure one. Every new leftist minted by graduate programs is an act of self-praise, a confirmation of the intelligence of the professors.

That this smog of consensus is incompatible with the supposedly high-minded educational mission of colleges and universities is obvious. Yet higher education is at least as dedicated to the reproduction of Obama-ism as it is to conveying information. But academics are massively self-deceived about this, which makes it all the more disgusting and effective.

So as my liberal old professor Richard Rorty said, referring to Allan Bloom, conservative Platonist: "Let a thousand Blooms flower." And if they take root in endowed chairs of conservative thought and policy, that's at least pretty funny.

Crispin Sartwell, author of "Against the State: An Introduction to Anarchist Political Theory," teaches philosophy at Dickinson College.


BLOWBACK

Why not a professor of disco studies?

Creating a chair of conservative studies makes a discipline out of a political fad.
By Robert Lee Hotchkiss Jr.

June 13, 2008

Those who promote a chair of conservative studies, as Crispin Sartwell does in his Op-Ed article, "The smog of academic consensus," seem to misunderstand both academia and the meaning of the term "conservative studies." They claim there is a problem with academia because most of the professors are liberal. They cite two proofs of this assertion: that the vast majority of professors vote for the Democratic Party and that some professors seem to let a kind of political groupthink guide their research and teaching.

Academia isn't known for its straightforward or desirable culture. My wife worked as an administrative assistant to a number of professors at a prominent religious university and saw such undignified behavior as a professor commanding his teaching assistant to spend all her time spying on his nemesis. When someone on a university campus says, "Let's throw a rally for gay illegal aliens," what they are probably thinking is, "I am going to grind peanuts (to which you are violently allergic) into the burritos and will have your parking space by Monday."

There are two reasons why liberalism -- as described above -- is directly expedient to a professor's career. First, universities in the United States depend on government funding at least in the form of Pell grants. Democrats tend to expand such programs, and so professors support Democrats.

Second, universities run on the publish-or-perish system. This leaves two basic career strategies for professors. The first is to make a discovery, such as "bees' wings are pieces of skin." The other is where groupthink comes in -- to say the exact same thing as someone else did about a slightly different situation, for example, "wasps' wings are made of skin."

The vast majority of academic writing falls into the second category and is often not worth the paper it is written on. But much of what falls in the first category -- the breakthrough research in social sciences, even in such disciplines as gender studies -- has been conservative.

Even the pretense of liberalism is swiftly being swept away by the increased desperation of tenure candidates for ever-shrinking spots and by the increasing amount of research that is paid for by corporations instead of the government.

So if the threat of liberal bias is overblown, Sartwell's proposed solution is positively batty. What exactly would a conservative chair teach? That is, what is conservatism? Ordinarily, it means highlighting the value of things as they are. But this is not what the proponents of a professorship of conservative studies have in mind. They are thinking of conservatism as the political and social movement that crystallized with Ronald Reagan's presidency -- that is, a particular collection of religious, social, political and economic views that is almost entirely unique to the post-1980s United States and might end in the foreseeable future.

While women, gays, immigrants and African Americans have played crucial roles in this country's history from the beginning and have been associated with various conflicting political movements, movement conservatism is a decidedly recent event. Even Barry Goldwater would have to be labeled a proto-conservative. Where were the conservatives during the Revolution, the Civil War, the Whiskey Rebellion? And what side were they on? The conservative movement is certainly important and worthy of study within many disciplines. But given its short and geographically limited existence, giving it a professorship would be as absurd as creating one for disco studies.

Robert Lee Hotchkiss Jr. is a computer science major at San Diego City College.

Blowback is an online forum for full-length responses to our articles, editorials and Op-Ed articles. Submit your own by e-mailing us at opinionla@latimes.com.