Monday, February 20, 2017

Spotify indicated that, as of 2013, 4 million of the 20 million tracks in its catalog had never been played.

The blockbuster effect has been even more striking on the digital platforms that were supposed to demonstrate the benefits of the long tail. On iTunes or Amazon, the marginal cost of “stocking” another item is essentially zero, so supply has grown. But the rewards of this model have become increasingly skewed towards the hits. Anita Elberse, of the Harvard Business School, working with data from Nielsen, notes that in 2007, 91% of the 3.9m different music tracks sold in America notched up fewer than 100 sales, and 24% only one each. Just 36 best-selling tracks accounted for 7% of all sales. By last year the tail had become yet longer but even thinner: of 8.7m different tracks that sold at least one copy, 96% sold fewer than 100 copies and 40%—3.5m songs—were purchased just once. And that does not include the many songs on offer that have never sold a single copy. Spotify said in 2013 that of its 20m-strong song catalogue at the time, 80% had been played—in other words, the remaining 4m songs had generated no interest at all. [emphasis added]
The blockbuster effect seems to ensure that no matter how "long" the "tail" gets, the blockbusters have the advantage of promotional apparatus. 
C. S. Lewis, if memory serves, once remarked that every generation, within certain limits, gets the kind of science that it wants.  That may not really be true of science, but it may be true, in a sense, of literary-critical fads.  Take the Bard.  There have been any number of theories as to who Shakespeare was, and one of the ideas formulated in the last few decades can be described, perhaps not unfairly, as Shakespeare-as-brand.
It’s no longer controversial to give other authors a share in Shakespeare’s plays—not because he was a front for an aristocrat, as conspiracy theorists since the Victorian era have proposed, but because scholars have come to recognize that writing a play in the sixteenth century was a bit like writing a screenplay today, with many hands revising a company’s product.  ...
The idea that a famous literary figure with an associated body of work was a historicized brand reflecting the work of a collaborative team isn't new to scholarship.  Plenty of biblical scholars refer to the Pauline literature, with debates and discussions about which epistles were genuinely Pauline.  That's old hat.  Sun Tzu is another author regarded by some military historians as kind of a brand who "may" have been an individual but who may not so certainly have written everything attributed to the name, a possible military Solomon who funded a collection, so to speak.  So for the Bard to be a brand rather than a lionized individual makes sense in terms of the last half century of critical scholarship in general.
But since the Bard has been canonized as high art and since, well, dead dudes get canonized in a way that women haven't, it seems interesting that the Bard-as-brand can come up these days.
To put it another way, when women are brands now, how lofty is the brand?  Take Beyoncé or Taylor.  These women are brands if there were ever such a thing as brands.  One might be regarded as fake and another as authentic, or vice versa, but brands are brands, right?  The authenticity may lie less in any "real" authenticity in the branding that Beyoncé or Taylor do or don't do than in the imputation of the self on to or away from ... the brand.  Both make more money in a year than I'll probably ever manage to see and both can be regarded as alpha females by just about any stretch of the imagination.  But people decide stuff like whether or not one or the other is faking the persona.  Show business is still show business.  Perhaps with the advent of social media the show must go on even when a person isn't on stage.  But the show is the show, not the person.

Now perhaps Beyoncé and Swift are expected to be "real" in a way that in itself saddles their personas with unrealistic expectations.  David Bowie let us know which characters he was playing, didn't he?  Johnny Cash had a persona, a character useful for performing songs, but why is it that a persona formulated by a guy gets recognized for what it is while with women a persona gets assessed as "real" or "fake"?  I remember a few years ago a friend of mine said she liked Jennifer Lawrence but disliked Anne Hathaway; the former seemed sincere and the latter seemed fake.  My reply was that since Hathaway's job is faking things, since that's what acting is all about, it hardly seems fair to hold it against an actress that that's what she does for a living. 
What's interesting for me to read, given the ... slightly Marxist or quasi-Marxist cant in a lot of arts coverage and criticism, is that it can seem as though mass culture and the commodification inherent in capitalist production of culture is totally bad UNLESS "I" happen to like the brand.  Then it's okay, it's even "redeemed" in some sense by a capacity to read radical politics on to the thing or to observe that radical political ideas are actually articulated in the mass cultural product.  Thus ... Chaplin.  Now perhaps the praise or blame can be laid at the feet of folks like Walter Benjamin.  Ironically, perhaps, the Frankfurt school writer and a presuppositionalist Christian apologist like Francis Schaeffer might essentially agree that the "truth content" of the art work is paramount in assessing the form and content of any given art work. 

To the extent that we are inching toward a proposal that the great geniuses of yore in the artistic canon can be thought of as brands as much as individual agents, is this ... a possible triumph of a corporate conception of "genius" or "art" that is retroactively being read back on to the canon?  It might be a necessary subversion of Romantic-era tropes regarding the solitary genius who somehow transcends the petty limitations of "the rules".  The more I absorb 18th century music the less clear it is that the so-called "rules" were articulated as clearly or as insistently as the Romantic-era theorists and critics said they were.  It's begun to seem, particularly in music, as though the rule-bending or rule-breaking advocated for by the Romantic-era music critics was a weird, paradoxical double bind.  It turns out that as often as not 18th century guitarist composers might write sonata forms in which theme 1 might not come back in the recapitulation where it "ought" to have, or that themes would be recapitulated in truncated, almost gnomically "symbolic" ways (Matiegka, for instance).  Finding these composers wanting for a failure to live up to the ideal of sonata as a Hegelian dynamic process, when Hegel wasn't even around to formulate this approach during the consolidation of 18th century idioms, seems ridiculous, and yet it seems to have been a scholarly commonplace on the subject of sonata form.

In other words, does proposing that Shakespeare's art was a collaborative effort suggest to us now that we should reassess our taxonomy of genius into something less individual and more social or communal?  That doesn't seem like it's really worth the trouble.  Didn't Dwight Macdonald point out in his explanations of the highbrow, lowbrow and middlebrow that the highbrow pinnacles of art happened in relatively tiny, insular and fiercely competitive circles?  That's not so different from positing a friendly or unfriendly rivalry.  The history of the arts is full of tales of rivalries and resentments.  Haydn liked Mozart's work and Mozart loathed Clementi's music even though Beethoven was influenced by Clementi's work in a few ways and so on.  Personally I find I've preferred a few of Clementi's works to almost anything I've heard by Mozart.  That's sacrilege to people who hold that Mozart is the pinnacle of the Classic era, but I think Haydn was the pinnacle; that my own opinion is informed not just by the music itself but by a historical observation that Haydn was the most celebrated composer of his generation isn't me saying Haydn was "better" than Mozart--it's my proposal that Haydn was regarded as the best because he, so to speak, solved the problems of the arts in his generation in a way that met with the most popular and critical approval.  My own personal take is that Haydn found a set of solutions to the high/low cultural dichotomy question that people from the Romantic era on forward have tried to treat as conceptually incommensurate spheres. 

But introducing the idea that genius can be corporate could make that kind of art/entertainment separation ultimately impossible to sustain.  If even Shakespeare could be shown to be a kind of brand with a team effort behind it, then one of the more sacred tenets of high-art defenses, the artist as solitary rule-bending genius, looks quite a bit shakier.  I think it's best rejected with prejudice.  Shakespeare didn't invent the sonnet any more than Haydn invented the sonata form or Bach invented the fugue.  We've had a century and a half of innovation without the set of observable moves toward consolidation that academics would like to concede have happened.  We might get told the thing to avoid is cliché.  Sure, but if people avoided everything that's been done before, would the whole human race forswear ever having sex or food or water again?  Obviously not.  As we approach more possibilities that the ways we think within the arts may be, so to speak, hardwired or constrained by proclivities observable in the brain, we may run into another phase of consolidation.

Maybe pop songs are all starting to sound the same because they "are" starting to sound the same.  But that's to be expected of "low" musical culture, isn't it?  There are truly only so many hymns in the Baptist or Presbyterian or Methodist or Lutheran traditions before you start picking out tropes.  One of the disadvantages in high culture defenses that goes unacknowledged is that a lot of the high culture material that's survived is just the "best" that has worked.  A lot of hymns are musically very simple while having theologically rich texts.  In fact many of the popular songs that have shoddy texts in theological or liturgical terms are far more musically sophisticated than the hymns they at times supplanted.  This isn't just the case now, it was also going on in what we now call the Baroque era.  The pietists were into fancier songs than the traditionalist Lutherans in a number of ways.  They may have wanted to get "back" to pure spirituality in song but paradoxically could end up embracing what was ultimately the more trendy style of the time, while the traditionalist Lutherans took a more pragmatic approach of retaining the musical idioms that they considered "not broken". 

So by the time we get to Johann Sebastian Bach his work was an urban and urbane cosmopolitanism that fused elements from German, French, Italian and English musical styles with maybe a few Polish folk songs thrown in for good measure here and there.  Yet thanks to histories that set agendas for how we are told to understand the past Bach became the archetypal German Lutheran purist in some accounts--never mind the actual history of his musical development, the mythology was more important. 

Of course Bach was not all that well known for a while, a musician's musician.  Bach's work is probably not in danger of being among those tracks that are never listened to even once, ultimately. 


Saturday, February 18, 2017

[review] post The Red Turtle wrap up, yup, called it--the island is the red turtle is the woman is the ...

Yup, figured based on the trailer this was going to be the case.  And now that I've seen the film ...

The island is the red turtle is the woman.  Of course the turtle is the carrier of the world, and this is a trope you see in some Japanese cinema from time to time. 

So, yeah, it retroactively makes a whole lot of the film seem super squicky.  I heard one woman in the theater actually go "Ew" when the woman's hand became the turtle flipper. 

So, yep.

I'm not anticipating a huge, wide release for this film.  It was a Ghibli co-production, so the film is immaculate, but compared to the Ghibli films I've seen over the last twenty years this was beautiful and elegant but ... dare I say it?  Pedestrian.  It wasn't without moments of beauty and power but ... well ... everyone can do that in cinema these days. 

Well, okay, so not Michael Bay or the Wayans brothers, perhaps.

This wasn't a film that seemed steeped in a very specific metaphysical view.  Miyazaki's pantheism permeates his whole approach to film.  It informs his approach to villains and heroes.  This film was more like "isn't life beautiful" stuff.  It "can" be, but this was ... not like even other Studio Ghibli films I've seen that run with that idea.

Only Yesterday is a plodding film about a mousy woman in clerical corporate work who goes on a work vacation at an organic farm and decides to become an organic farming housewife.  Thing is, even though that film is more literally pedestrian than the fable about the turtle, the film anchors its earth veneration in observations about 1980s-ish Japan embracing mass agricultural approaches.  The idea is that this beauty of tradition is precariously hanging on in a world of mass production.  Moments of stillness and beauty hinge upon a recognition of the precarity of those moments.  There are also jokes about how when city-dwellers go and gawk at what they regard as unmitigated nature that, in fact, there's not a single thing in a beautiful landscape a farmer hasn't deliberately manipulated.  In other words, the farm life is revealed to be artifice, a life-giving artifice that helps people grow food they need to survive, but an art. 

The Red Turtle has more of a vibe that the earth is the red turtle that collaborates with us to make life beautiful for just one man and one woman and the one kid they have, who goes off on his own.  The man at the start of this film is drifting in a stormy sea and there's no explanation for how or why he ended up there.  The fable doesn't intend to give this man a backstory that explains anything.  We're just thrown straight into a fable that's obviously a fable, but what, precisely, it's a fable for beyond "isn't life beautiful?" is hard to pin down.

Having seen this Studio Ghibli production I'm afraid it's the one film I've seen where my gut reaction is to say that it's watchable but pedestrian, beautiful but to a fault.   Even in the gorgeous scenery of My Neighbor Totoro the forest can harm you and, crucially, the mother is possibly not going to recover from tuberculosis after all. 

In Miyazaki's stories the earth chooses to help, but Miyazaki's films reveal a nature that could just as easily have chosen to go the other way and killed us.  That's the thing about Miyazaki's pantheism that can be deliberately missed by Westerners in awe of his work.  Yes, he sees a great deal of beauty in the world and in people but he doesn't downplay the savagery.  It may seem that way because he can highlight how there's the possibility for beauty in even the savagery, but the savagery isn't removed.  The king of the forest that is benevolent in Totoro has to be appeased and kept from destroying every human in sight in Princess Mononoke.  The king of the sea that is persuaded to relent in Ponyo could have chosen to crush. 

I suppose I'm riffing toward this idea that Miyazaki's films are beautiful but the power in their beauty lies in Miyazaki's capacity for what Edmund Burke would have described as the terrifying element of the sublime.  The Red Turtle has a moment or two where it seems nature will crush us but it's okay, really.  If the man died after falling into a crevice in the first third of the film there'd be no more film.  It's all a set-up so that when the inevitable kid has the same experience the kid gets out okay, too.  What would have been a real shock would have been if the kid died and the man and the woman/turtle were left bereft.  This is where the attempt at the fable gets self-defeating.  If the man died in the first third of the film and we saw animals feeding on his carcass the whole point of the film would be lost.  If the red turtle devoured the man the whole film would be a different film.  No matter how beautiful Miyazaki's films get he'll have these little moments where he reveals that nature could, and maybe even should, bring that kind of horrifying unstoppable deadly force against us.  Maybe nature should unleash its full fury on us precisely because we seem, as humans, to feel entitled to the beauty of the world but not to the terrifying power of death it brings with so much of that beauty. 

There is, at least, a sense in which the man on the island feels imprisoned.  The island is full of life and beauty but it's an imprisoning kind.  This is an element that saturates Takahata's take on the Princess Kaguya legend.  The beauty of Kaguya is confining and imprisoning for her.  She feels trapped by the expectations and demands made upon beauty and by beauty.  In The Red Turtle the man attempts to escape three times and, of course, on the third attempt to escape discovers he has had his plans foiled by the red turtle.  Why?  Ours is not to ask why, apparently, and theirs is not the place to tell; there are no words beyond a few grunts and "hey" in this film.  Wordless fables can be powerful, to be sure, but the idea that a man and a woman and a child can go through the entire cycle of life and reproduction and death without so much as a single sentence strains credulity past the breaking point for me.  We're always in fable mode but real fables very frequently have words.  The English class axiom to show don't tell can be misunderstood because in the language of cinema showing is still telling.  What are you telling us in what you show?

Roger Ebert once said that the beauty of animation, if I'm remembering this correctly,  is that nothing that appears on the screen can be an accident the way it can be in live-action film.  Someone has to have thought out in advance to draw EVERYTHING that appears in an animated film, no matter how small the detail or how big the scene.  In that sense the greatness or paucity of directorial imagination can come through in animation in ways that it may not as readily in other kinds of cinema. 

So we get back to the red turtle that has thwarted the man's attempts to escape.  The man stops trying to escape when the turtle shell cracks open and a woman is revealed.  Okay.  A bit standard issue, but many a man chooses to endure tedium or hardship or both for the sake of a woman.  Jacob labored for years for the hand of Rachel and, when he got Leah instead, labored for more years to get Rachel.  We know the crazy things people do for love. 

But, still, this is pretty epic Stockholm syndrome if you start thinking about it.  We can feel confined by and imprisoned within the relationships we treasure most.  That's one of the motifs of literature and film, the sense of entrapment men and women feel after the heat of the moment has passed and the diapers have piled up and the bills come due and all that. 

And even with all the symbolism inherent in turtles in Asian folklore and folk cosmology ... if this film is a fable (and it has the trappings to tell us it aspires to be one), who is it a fable for?

Or to put it another way, how do you tell a fable without a metaphysic? Miyazaki's pantheism anchors his most fantastic visual feasts and narratives with a point of view.  I hesitate to say worldview because that buzzword is kind of a problem, especially in Christian blogging and writing.  But there's a sense in which the question of what, exactly, our view of the red turtle as the world as the woman is supposed to be is inescapable since she is the title premise.  The world is inscrutable and beautiful and, so to speak, lets us live.  But this is where the fable ruptures when the son comes of age and goes off into the sea.  We see other turtles throughout the film and it seems that there are turtles who stand in for the man and the woman and the son and yet the woman is always the red turtle.

In other words, for this fable to work the cosmology has to be ... I don't know ... a little more ... rigorous. 

I think it may just be that the fable is about a nuclear family that, in historical terms, is the least likely family unit possible.  More ancient societies so often had a clan based sense of identity so this fable feels like it's a post-modern Western notion of the erotic pair bond and precisely as many children as it ought to spawn for the sake of a fable about the beauty of life to be told.  There's only one child, a son, and there's nothing over decades inherent in the narrative of The Red Turtle to get at sibling rivalry because there's only just the one kid.  Genesis revealed that a murderous sibling rivalry happened exactly as soon as Adam and Eve had enough children for there to even be the possibility of a sibling rivalry.

This, as I'm mulling it over, may be why The Red Turtle ultimately fails as a would-be fable, it only wishes to present a world in which moral evils are always averted by some last-second pang of doubt or grief or regret.  The idea that humans will do the wrong thing without remorse and even brag about it afterward is unthinkable in this kind of parable of the beauty of life.  So we won't get the red turtle devouring or molesting the man just as we won't get the man killing and roasting the red turtle for turtle meat. 

The more I think about it the more it seems that Edmund Burke would say the problem is that there's all this beauty but none of it is ultimately sublime.  We're looking at a film that is unquestionably beautiful and finely wrought.  It's a respectably made film and that may, in the end, be its trouble.  If it's a fable it's a fable for those who live comfortably enough and beautifully enough to feel like that's the most important thing to convey about life.  We don't get what we got at the end of The Wind Rises, a mixture of beauty and horror, of Jiro looking out at all the planes he built to go out to war with the wistful observation that not a single one of the planes he designed came back and that all those pilots died.  He still feels he did what he set out to do, to make a beautiful plane, but Miyazaki shows us that the beauty and the horror are integral.  We cannot ignore the beauty or the horror of the human condition.  We cannot forget that our capacity for selfless generosity can be paradoxically self-aggrandizing.  We can forget that in our philanthropy we can be harmful. 

For as much blame as religion gets for perpetuating moral evil within the human race, there has been no more central concern in religion than the endless capacity of humanity to seek out moral evil.  A fable that does not engage with the indisputable reality of our human capacity for moral evil is a fable that offers a comfort that we don't deserve.  In Christian terms grace can only be grace if you don't deserve it; once it's some kind of divine birthright by dint of humanity being so beautiful then any obstacle to Manifest Destiny has to be treated as something or someone to be crushed.

In Miyazaki's pantheism the world and humans are beautiful but the beauty reveals the terror, too.  This can show up in weird, wonderfully strange ways such as when Ponyo, princess of the sea, takes the form of a little child who is sprinting across the breakers of a storm.  The terror of the storm is still there but the weird beauty of this child treating the seaside battering storm as a thing to play on and with is indelible imagery. 

Here's hoping that whatever the future of Studio Ghibli's films may bring that they move back toward that and away from things like The Red Turtle. It's not exactly a bad movie, it's just merely a good movie but, sadly, in perhaps all the worst possible ways to mean that term. 

A 21st century paradox (a dialectical haiku, i.e. a linkathon)

Quoting Adorno?
Probably proof you're in the
culture industry

Because why not write a haiku about Adorno on the internet in which every word of the poem is a hyperlink to some writing about Adorno's thought?  The paradox that anyone who even knows who he was and what he had to say is necessarily part of the culture industry he had complaints about is just one of those faintly savory ironies for a 21st century weekend. 

Thursday, February 16, 2017

variations on a theme--the perceived decline of the fine arts in the US

Fifty years ago an instantly iconic photograph was taken of Rudolf Bing, general manager of the Metropolitan Opera, Leonard Bernstein, music director of the New York Philharmonic, and George Balanchine, artistic director of New York City Ballet. They are posed in front of Lincoln Center’s Philharmonic Hall. The Met is about to inaugurate its new home, completing the move to Lincoln Center of the three main institutional constituents. Bing stands alongside a poster brandishing the sold-out world premiere of Samuel Barber’s Antony and Cleopatra, inaugurating the New Met. Bernstein (with cigarette) stands alongside a poster showing the sold-out run of a subscription program comprising an obscure Beethoven overture, William Schuman’s String Symphony, and Mahler’s First (not yet a repertoire staple). A City Ballet poster, to the rear, announces the dates of the Fall season. So depicted, three performing arts leaders – all of them famously strong personalities — are seen poised to drive their celebrated companies to greater heights, buoying an unprecedented American cultural complex.

Half a century later, the photograph reads differently. We can newly observe that in September 1966 all three institutions were already fundamentally shaped by their pasts; that the pertinent histories of the Met and Philharmonic, and of New World opera companies and orchestras generally, were more confining than empowering; that Balanchine was the odd man out because he alone would sustain a creative aspiration in his new home, pursuing a kind of Americanization project that Bernstein could not successfully implement, and that Bing disdained attempting.

And this juxtaposition, of three art forms and their chief New York City institutional embodiments, carries vexed implications for the pivotal half century to come. If the coming Trump Presidency suggests an exigent priority to the cultural community (such as it is), it may be this: that never before in recent memory have the arts been as challenged to inspire hearts and minds. [emphasis added]

I wonder ... would Adorno have agreed during the peak of the jazz age?  ;)

The linked piece is long by internet standards, but the short version is that of opera, symphony, and ballet, ballet did okay, while the opera and the symphony are not always seen as being as robust as they once were.

At least according to Taruskin's account in the Oxford History of Western Music, ballet didn't become "world class" until the 20th century, whereas opera arguably emerged in the 17th century, grew in the 18th century and peaked in the 19th century; and the symphony emerged in the 18th century and peaked in the 19th and "maybe" early 20th century before stabilizing and fading.  Some of the time reading all this I felt like a forest was being missed for some prominent trees.

The first American orchestra to embody a community of culture was the Boston Symphony, invented by Henry Higginson (like Thomas, a colossus) in 1881. Boston’s example inspired Cincinnati, Pittsburgh, Philadelphia, Minneapolis, Chicago, and St. Louis, all of which had substantial orchestras of their own by 1907. Though the players, conductors, and programs were, as in Boston, formidably Germanic, evolutionary change was anticipated. An “American school” of composers, it was assumed, would anchor the future. As abroad, orchestras would substantially perform native works.
That nothing of the kind happened was a defining feature of classical music in America after World War I
The failed attempt to produce an American symphonic canon was complicated by a late start (modernism mainly rendered cultural nationalism passé) and by an influx of powerful refugee musicians for whom American classical music meant European classical music in a new locale. Schoenberg, Stravinsky, Hindemith, and Bartok were among the relocated composers. The relocated performers included the pianist Rudolf Serkin, who influentially presided at the Curtis School of Music and the Marlboro Festival, and Arturo Toscanini, who as conductor of the New York Philharmonic and NBC Symphony became the iconic American classical musician of the thirties, forties, and early fifties. Neither Serkin nor Toscanini took much interest in Copland’s nascent American school.


That's arguably not the only reason; American music critics didn't WANT to focus on a nascent American school.

Douglas Shadle wrote a monograph on the history of American critics sidelining the American symphonic tradition as either being not-Beethoven-enough or too-Beethoven to be acceptable. 

The American symphonic tradition was constantly caught in an inescapable double bind throughout the 19th century.  Anyone who wouldn't pay homage to Beethoven was slighted as trivial and insignificant while anyone who paid homage to Beethoven (or Wagner) was slighted for failing to live up to the greatness of the masters.  By the first half of the 20th century a number of American composers concluded you can't win for losing and that rejecting the entire game had to happen for American music to come into its own in the highbrow scene. 

That would be where you get the likes of Henry Cowell and John Cage or Conlon Nancarrow.  You'd have fans of the Germanic tradition like Charles Ives, of course, but Ives so drastically transformed the application of the Germanic Romantic tradition it wasn't always easy for people of his era to recognize what he was doing.  In any case, the double bind regarding Beethoven had to get shaken off, and even now that we've arguably shaken it off in America it was basically too late.  The hour of the symphony as concert music had in many respects already passed and the symphonic moment has arguably been in soundtracks.  Taruskin's take on opera is that the opera got replaced by the cinema for popular appeal. 

To summarize this Lincoln Center tale: George Balanchine and his City Ballet changed the face of dance. Leonard Bernstein led audiences to Mahler: he expanded the canon. But Bernstein could not change the face of the New York Philharmonic. Rudolf Bing did not attempt to change the face of opera or of the Met. That, commensurately, he filled a vast auditorium with paying customers proved a cul-de-sac.

So that, however laconically presented, is the theme.  Here are some variations.


While there is an argument to be made for clustering together arts organizations and cultural buildings, the idea has to be animated in some way. Why should these organizations physically be together? Is it about art or about buildings? If it’s about buildings – creating a kind of critical mass of cultural activity that benefits by proximity – then the art comes to be defined by the buildings and how they’re used. The institutions themselves also come to be defined at least in part by their buildings. Build institutions and buildings that are impressive and can be visited and admired and pointed to with pride.  That’s how you build the arts, goes the conventional wisdom – build institutions and the physical infrastructure to support them.
Except what if it isn’t?
We live in a time of gathering distrust of institutions. Where institutions were once essential for marshaling resources to accomplish things, we all know that institutions are inherently inefficient. They can be clumsy and broad-stroked. Generic and slow to react. Cautious. Institutions now seem to be at a disadvantage compared to dynamic constantly-reconfiguring networks that can move quickly and nimbly adapt. Increasingly more of the creative energy in our culture is found outside of traditional institutions.

 Wow, see, this briefly gave me flashbacks to the last ten years of reading stuff from church growth gurus! 

Substitute the word "church" for the word "art" and it's pretty much seeker-sensitive church growth verbiage, isn't it?  The church isn't a place it's a people, right?  How many former members of Mars Hill remember that one?  Having seen what a trainwreck the former Mars Hill became over a twenty-year period it feels a bit weird reading people in the arts management scene and the arts worlds start talking in the same way about outreach to people and doing this in the era of Trump.  If the NEA does actually get gutted where's the funding going to come from? 

Now my own conviction is that the gap between high and low culture has gotten too big and that doubling down on an already alienating highbrow culture is probably not going to work.  To some extent Terry Teachout's advocacy for the middlebrow concedes that meeting in the middle has its artistic drawbacks but holds that some mid-point between the alienating extremes is nice to have.   Late Beethoven and folk songs have an awful lot of middle ground between them, for instance.

Of the variations on the theme, one of them explicitly proposed that maybe one of the core problems is that arts organization leaders have identified too closely with a specific stratum of their donor demographic.

Part III: Do arts leaders identify too much with their upper middle class donors?
While donor research and cultivation has become a serious science, the ideology driving such behavior has been with us since the founding of the nonprofit-professional arts sector in the US. I am amazed that we are able to say with a straight face that America’s 20th century nonprofit-professional theater companies were largely established to serve the general public when many institutionalized a practice (at their inceptions) that would ensure they paid attention to the needs of the upper middle class at the expense of all others.
In the 1960s Danny Newman persuaded theaters that it was better (not just economically better, but morally better) to focus their time and resources on the 3% of the population that is inclined to subscribe and to ignore everyone else. Though some artistic directors rebelled mightily against this approach in the theater industry—Richard Schechner and Gregory Mosher were among the most vocal who noted that it was undemocratic and had a stultifying effect on programming—it was embraced wholeheartedly by a majority of institutions. This was in large part because it was strongly encouraged by the Ford Foundation and its proxy at the time, Theatre Communications Group.

It still seems like donor cultivation is more of an art than an applied science but the point is taken. 

So if there "were" a class war of some kind the arts organizations would be unable to escape association with "the ruling class".  It's not like Cornelius Cardew didn't put that in the most direct and explicit terms possible while he was quoting Mao.  Quoting Mao approvingly has its own issues we just won't get into at the moment, but the short version of that post would be that there will never be a team that isn't guilty of atrocities, though that won't stop partisans from trying to "no true Scotsman" their team into innocence.

Having worked with Joe Horowitz and having read his books I agree in principal with his assessment of the “classical” music industry, its history and present situation. Copland’s quote from 1941 stands as well today as it did 76 years ago. My copy of Joe’s “Classical Music in America” is dog-eared, with quotes from people during the late 19th-century underlined which could have come from my subscribers today. We have indeed created a museum culture out of this art form, one which is perceived as irrelevant for many reasons, an unnecessary result of a focus on the “masterpieces” we all know and love at the expense of the creation of our own voice.
The issue of a cultural shift in America (some would say decline) and the diminishing of the importance of education in music and the arts specifically, in the humanities more generally, is in my mind at least as important a factor. The inundation of our lives with popular culture and multiple distractions, and the lack of distinction of fine art from more popular forms confuses the issue further as people view everything through the same lens. (This is most prominent in American culture; in much of Europe and even Mexico, Central and South America this is not so much the case.) Our industry muddies the waters still more by marketing what we play in the same manner as more popular musical forms. I personally think of this as false advertising; we do not need to apologize for what we play – art and entertainment serve different functions in society. As Joe has often suggested, we need to reframe our institutions as cultural resources; I would say for the understanding of our own society and its place within history.
I have always believed that our art form is living and breathing, and have devoted myself, as well as substantial time and resources of the SDSO, to supporting living American composers. That is not to say that I am a lover of contemporary music, rather that I am a believer. I believe in the power of art to influence, even restore, society and I idealistically hope for a cultural renaissance in which art can serve in this way.

... kind of standard issue art-religion there? 

What's interesting to joke about here is that where a social conservative is always bewailing the endless downward slide in public morals the cultural progressive seems always able to lament the downward slide of the prestige of the high arts as a synecdoche for ALL art.  But does Beyoncé (who's no less a fully formulated brand than Taylor) really need that Grammy?  Did the relatively recently departed David Bowie ever even get a Grammy in his lifetime?  Awards have their uses but I remember reading the composer George Walker remark that winning a Pulitzer didn't really change his musical life in any way.  Ellington didn't get a Pulitzer and it would seem like his music didn't need that particular award, either.

But in a way musicians can't help wanting a shot at immortality.  You can't be a forgotten musician if nobody knew who you were to begin with.  A reputation can only decline if it already exists.

Another variation on the theme ... that the arts scene is characterized by what can be called a museum culture, particularly for the symphony.

Now my own take is quite biased, as a guitarist, which is to say that the tradition of extended and abstract conceptual development of musical ideas in complex formal processes can be done on the guitar.  We don't know whether the symphony will regain or reclaim its previous prestige but guitarists are legion and, as the oft-used and over-used bromide has it, the guitar is a miniature orchestra.  Like I blogged earlier this month, when war ravaged Europe Heinrich Schutz didn't double down on the idea that he HAD to have the symphonic scale of resources he worked with earlier in his career.  He scaled the level of musical detail and thought back to fit the resources he DID have available.  It entailed plenty of compromises and unrealized goals but it ensured the survival of the musical idiom in his area.

a short experiment in pre-emptive speculative spoilers based just on a movie trailer, The Red Turtle.

So, it'd be fun to be wrong, but given the word "fable" in previews and reviews and the eagerness with which reviewers are trying to avoid spoilers ... there's that.

But the trailer highlights a few simple, obvious things.  The island has turtle-ish elements in its design that are visible in trailer excerpts.  The red palette for the turtle also corresponds to the hair of the woman.  So it wouldn't seem like a huge surprise if the island-is-the-turtle-is-the-woman-is-mother-earth or something like that. 

After all, anime draws upon all sorts of traditions that include storylines like The Hakkenden (you don't really want to know, perhaps, if you don't already). So castaway man has life with island-turtle-woman-spirit doesn't seem too far-fetched.  It's super-squicky to a Western film reviewer, probably, who's unaccustomed to this kind of thing.

Again, it'd be fun to be wrong and not be able to call this just from seeing one trailer but ...

Wednesday, February 15, 2017

Tuesday, February 14, 2017

a piece on the Atlantic musing upon the fall of rock and roll since 1991 by way of a history of Billboard changing its measurement system for what "popular" meant.

Just a generation ago rock dominated the music landscape. By the 1990 Grammys, the genre was so stuffed with popular artists that there were three separate awards for Best Rock Vocal Performance—for Duo or Group, Female Performer, and Male Performer—plus additional awards for Best Rock Instrumental Performance, Best Hard Rock Performance, and Best Metal Performance (in fact, Metallica won the latter for “One”).

How has rock become so depleted? You can start by blaming the year 1991.

Two years ago, a group of British researchers published a study that charted the evolution of music styles and timbres by looking at 17,000 songs between 1960 and 2010. They charted the rise of Motown in the 1960s, the brief reign of drum-machines in the 1980s, and the spate of weepy love ballads in the 1990s. Among their many findings was that the rock genre, so dominant throughout the 1970s and 1980s, took a sudden nosedive in the early 1990s. In fact, they determined that one year, 1991, marked “the single most important event that has shaped the musical structure of the American charts."

What happened in 1991? Between 1958 and 1990, Billboard had constructed its Hot 100, the list of the country’s most popular songs, with an honor system. They surveyed DJs and record store owners, whose testimonies were often influenced by the music labels. If the labels wanted to push AC/DC, they pushed AC/DC. If they changed their mind and wanted to push the next rock release, AC/DC would fall down the charts and the new band would take their place.

But in 1991, Billboard changed its chart methodology to measure point-of-sales record data and directly monitor radio air play. [emphasis added] As I wrote in a 2014 article in The Atlantic, this had a direct impact on the sort of music that made its way to the charts and stayed there. The classic rock and hair-band genre withered in the 1990s while hip hop and country soared up the charts. In the next 25 years, hip hop, country, and pop music have carried on a sonic menage à trois, mixing genres promiscuously to produce the music that currently dominates the charts. [emphasis added]There is hip-hop-inflected-pop (Justin Bieber), country-pop (Lady Gaga), and country-rap (Florida Georgia Line and Nelly).

The recent British paper on the last half century of music found that hip hop has reigned the Billboard charts longer than any other musical style. Why might that be? In the early 1990s, some cultural critics argued that rock was qualitatively superior, because rap songs were mere “bricolage.” But it’s precisely because hip hop’s nature is to absorb other musical styles that it has proved so durably elastic. [emphasis added] Today’s most popular hip hop artists—like Beyoncé, Drake, Chance the Rapper, and Kanye West—sound very little like the styles that replaced rock in the pop music pantheon in the 1990s. They are more polyphonic, with more diverse inspirations and richer instrumentation and production. Meanwhile, 2017’s Metallica sounded a lot like 1990’s Metallica—even after they got the mics to work.

Now twenty some years ago I cared nothing for rap or hip-hop.  I found it vulgar and annoying.  It's still not exactly my favorite style of music.  On the other hand, there was another era in which the evolution of a then-new style of musical expression that leaned heavily on the verisimilitude of a musically declaimed text to the rhythms of spoken words was scorned as a sign of incompetence unfit for real musicians.  That idiom was the recitative and it was foundational to the evolution of opera, which happens to be another musical style I'm not entirely into.  I like a couple of operas quite a bit just like I like a couple of musicals a lot but overall these aren't my favorite musical idioms.  I don't think that "everyone" really "should" go to the opera at least once a year.  If there were a local staging of Wozzeck I'd absolutely go to that if I could afford to!

But in his English language monograph on the Baroque era Manfred Bukofzer highlighted that for the Renaissance sympathizers in the early Baroque era recitative was the kind of junk a musician with no talent could do, that speech-song was a disaster, and that opera as a hybrid of drama, poetry and music was considered a dubious enterprise.  Yet opera is incontestably high art centuries later, even an arguably obsolete high art.

The hybrid nature of contemporary popular music is taken as given in the article and you might disagree.  Whether or not you do, however, it got me thinking of something.

Leonard B. Meyer
Copyright (c) 1967. 1994 by The University of Chicago
ISBN 0-226-52143-5

page 178
As foreseen here, the future, like the present, will hold both a spectrum of styles and a plurality of audiences in each of the arts. There will be no convergence, no stylistic consensus. Nor will there be a single unified audience.

I find nothing shocking or deplorable in this. Though countless conferences and symposia are held each year at which the lack of a large audience for serious and experimental art, music, and literature is regularly and ritually lamented, I do not think that our culture is ailing or degenerate because Ulysses is not a best-seller and Wozzeck is not on the hit parade. They never will be. Expectations based on the premise that art is, or should be, egalitarian are not only doomed to disappointment but misleading because they create false aims for education and mistaken goals for foundation and government patronage. Democracy does not entail that everyone should like the same art, but that each person should have the opportunity to enjoy the art he likes.

page 209
... New idioms and methods will involve the combination, mixture, and modification of existing means rather than the development of radically new ones--for instance, a new pitch system or a new grammar and syntax. [emphasis added]  ...

Complementing this stylistic diversity and these patterns of fluctuation will be a spectrum of ideologies ranging from teleological traditionalism, through analytic formalism, to transcendental particularism

page 343
In the ideology of Romanticism greatness was linked not only to magnitude but to the prizing of genius; and genius was, in turn, bound to the creation of innovation. This coupling occurred because the Idea of Progress made innovation an important value and there needed to be causal agents of change. Geniuses were believed to be such agents. But if the future is unknowable and chancy, and if change per se ceases to be a desideratum, then the creation of categorical novelty (for example, the devising of new musical constraints) becomes less important, even pointless, because there is no assurance that innovation will "advance" musical style or lead anywhere--that is, be part of a coherent, predictable pattern. For these reasons, few "hats-off" geniuses will be hailed in the coming years, and creativity will involve not the devising of new constraints (for instance, serialism or statistical techniques) but the inventive permutation and combination of existing constraint-modes, especially as manifested in stylistic eclecticism. [emphasis added] 

Meyer wrote his postlude to his 1967 book in the early 1990s, just as the tectonic shift in the metrics of the hit parade was beginning to take effect.   The way the Cuban guitarist and composer Leo Brouwer put things, he suspected the future of music was fusion and that academic musicology had largely failed to come to terms with this, largely, perhaps, because scholars are interested in taxonomies of delineation rather than integration.  Or that's how I vaguely remember it.  Brouwer's communist credentials are not really in dispute here but it's particularly worth noting that the aspiration to fuse styles previously thought to be incompatible has been a quest on both sides of the Iron Curtain.  What, exactly, it means that musicians across the political spectrum have aspired to arrive at a fusion of high art techniques and forms with vernacular idioms is open for endless discussion and debate.

But for advocates of high culture here and now to reflexively dismiss popular styles would be to face the peril of repeating precisely the "mistake" made by advocates of the old Renaissance ars perfecta over against the nascent tonal language that would be consolidated and refined in the middle and later Baroque eras. 

It would be too easy, too, to forget that the end of the Renaissance saw the collapse of what was regarded as something of an international and unified style into a panoply of regional forms and styles.  Those advocates for the equal-tempered twelve-tone tonal system need to remember just how recent the vintage of this thing is. 

And in light of Billboard changing what it measured and why, it may turn out that the prestige of rock music as we've come to know it was because of an industry bubble, or perhaps even because of a basically inaccurate or dishonest way of accounting for popularity.   So in a sense 1991 wasn't the year rock stopped being the musical style that dominated the charts; perhaps it was the year that the presumed supremacy of rock was shattered by the introduction of metrics and measures that did a better job of revealing what people were really buying (back when people bought music because streaming wasn't a thing yet).

Cultural conservatives lamenting the loss of high culture on the horizon can too readily forget that we've been here before as a human species.  Fans of the styles of Byrd and Palestrina could conclude Monteverdi's music was utter garbage.  Opera and other Baroque era innovations emerged all the same.  Eventually the major/minor key system with functional harmony and tonal organization came about.  It isn't quite right to say the wild array of styles and idioms and forms from the early and middle Baroque periods simply consolidated into the late Baroque; rather, it would be more accurate to say that the composers we now regard as the landmark figures of the late Baroque completed a century-and-a-half-long process of consolidating the elements of the previously existing styles and forms into the cohesive musical language that too often gets misunderstood as the summation of the Baroque era as a whole.  All of that is to say we've had eras in which dizzying stylistic fragmentation occurred and was eventually rounded out by countervailing propensities to formulate new fusions of established idioms.  Meyer was writing as a musicologist and historian of music in a position to have an idea we'd seen and heard this kind of thing before.   His proposal was, as you just saw, that the musical heroes of the future (i.e. now) would be those who successfully hybridize existing idioms rather than inventing new musical modes of organization whole cloth.

We seem to live in an era Meyer anticipated, in which the heroes of the arts are not necessarily traditionalist or purist but experimental formalists working with idioms that are already known and playing with them in relatively new ways.  If rock and roll does die out it will probably be because of stylistic purity rather than stylistic compromise. 

Monday, February 13, 2017

Academia in our moment: not sure an army of adjunct perma-temps gives the academic culture a good platform from which to denounce the new era

More than a decade ago Kyle Gann blogged about what he called the Musicology Ladder.

...   I was primarily not thinking of musicologists, but of theorists and composers, who seem loathe to subject to analysis any music not granted paradigmatic status. And I was also thinking not so much of “academic taste” as much as “acceptable topics of  research.” I’ve never quite gotten over how perplexed my fellow grad students were that I lowered myself to write an analytical paper on Bruckner.

Still, while I haven’t spent much time consorting with musicologists, I have spent enough to learn what a strict composer-based hierarchy the world of musicology is. I was once on a panel with some big names, and highly complimented a famous scholar on his book on Muzio Clementi, which had been a great help to me. He seemed almost irritated that I had brought it up, as though it were some secret from his past that he didn’t want mentioned in front of his colleagues. He had now written a book on Beethoven, which meant he had climbed a couple dozen steps up the musicology ladder. And I have learned in that world that to have written the first book on Nancarrow was a miniscule accomplishment, almost negligible, compared to writing the 67th book on Bach, Beethoven, or Brahms. In the world of music historians, your stature is exponentially proportional, not to the quality of your research and writing, but to the prestige of the composer you can claim to be an expert on.

I've vented at this blog a time or two on how difficult it was to find any monographs on sonata forms in the guitar literature. There are precious few.  There have been a couple of nicely done doctoral dissertations since 2012 or so on the topic but that's a depressingly recent vintage. 

The general impression I've gotten from some of Gann's lucid rants about academia is that too much of contemporary American academia is less about teaching or scholarship than what I'd have to describe as the semblance of a prestige racket. 

Sure, in the age of Trump we can get an assurance from Rebecca Schuman over at Slate that, more than ever, academia is important.  A bit too predictably there's linkage to the plight of adjunct faculty and then there's this conclusion:

Perhaps the answer moving forward, then, is not to join in the mockery of jargon, but to double down on it. Scholars of Yiddish studies are happy to tell you the thousand-year-old language developed as a kind of secret code so that its speakers could talk freely under the noses of their oppressors (and, yes, sometimes mock them). Perhaps academic jargon could serve a similar purpose. Yes, perhaps the last hope to problematize fascistoid nonprogressive edges, so to speak, is to reterritorialize the oppositional vernaculars. But perhaps that was the point all along, and jargon has been lying patiently and usefully in wait for all this time, a secret code in search of a foolish tyrant.

Translated into more vernacular parlance, the beauty of academic jargon is that it is opaque to those not currently fluent in it.  To translate the thrust of this idea, academic jargon may play a necessary role as a dog whistle for left-leaning academics who want to talk against the current administration without doing so in a way that could jeopardize funding of programs.  The trouble is that, as some old author of yore might have put it, some ideas are so stupid only intellectuals can believe them.  The idea that the current administration represents a virulently anti-academic mindset could probably be agreed upon by a whole lot of people across the political spectrum.

It's just that the trouble is even within an ostensibly left academy there's not exactly a consensus that right-wing billionaires are the whole problem.  Sure, there's any number of pieces about the emergent alt-right but what seems to have been slightly under-discussed in intra-left coverage is this: if you go back and look at which movements gained momentum within what's called the alt-right these days, which groups were these?  To go by press coverage, white nationalists and devotees of Ayn Rand.  Huh, where were all these people before?  Did not evil Republicans and conservatives exist over the last half century?  Certainly they did, but how is it that it's only been in the last twenty odd years the alt-right has been able to develop?

At the risk of reminding people of something a whole lot of people might already know, the groups in the alt-right that gained traction kinda ... look like the groups William F. Buckley and others kicked to the curb in the mid-20th century.  If a person were willing to entertain a conspiracy narrative it's almost as if the alt-right represents a kind of political revenge or payback moment for those groups that tried to have a role in mainstream conservatism but got deliberately sidelined for their views on race or their explicitly and reflexively antagonistic view of the state.  Now there could be any number of criticisms leveled at how and why someone like Buckley did this.  That said, there's a sense in which the center-left might want to remember that not having someone like Buckley around to deliberately keep the alt-right from becoming viable is the kind of thing that's hard to appreciate until that kind of person isn't around any longer.

The dilemma of the adjunct faculty is, if anything, a self-incriminating complaint to articulate from within academia, though not necessarily for the adjunct faculty themselves.  Biblioblogger Jim West was pretty direct and specific:

The adjunct crisis exists because too many departments have too many PhD students. The only cure is for departments to offer PhD’s for the number of jobs there actually are.
Creating 500 PhD holders when there are only 30 positions suitable for those PhD’s is not only immoral, it is driven purely by economic considerations on the part of the University.

Student debt would be, potentially, less of a crisis if universities stopped offering to take on students for advanced degrees who, on balance, stand no real chance of getting gainful employment within the academy because those tenured spots don't exist.  Now, sure, arts people and academics can be upset Trump got the Electoral College vote.  And there will be no end in sight to proclamations that theater, for instance, should stop staging Mamet.  The idea that theater is socially important is a little tough to buy, personally, but artists can take stands for what they believe in.  But some of us made the awkward discovery decades ago that if we had, say, a degree in journalism, there were vanishingly few real-world options for using that degree within the industry.  The dissolution of the conventional press over the last twenty or thirty years would be a separate post if I felt like doing that.

An op-ed like this about the decline of the regional press has a few things to commend it.  This blog spent the better part of six or seven years compensating for the failure of the local press to adequately cover the life and times of what was once Mars Hill.  It takes very little to preach to this choir on the failures of the local press due to a lack of stable institutions or a steady job market.  Yet the double bind remains: while the conventional press faltered big time in 2016 it's not as though blogs or bloggers are taken seriously apart from some exceptional cases.  Perhaps, like the academy, the formal, institutional press has a propensity to only take itself seriously.

There was an award handed out recently, actually, and the recipient of that award gave a little talk, linkable thanks to ArtsJournal.  The topic was the adjunct faculty problem.
We cannot blame this professional anemia on scarce funding. The largest adjunct-faculty increases have taken place during periods of economic growth, and high university endowments do not diminish adjunctification. Harvard has steadily increased its adjunct faculty over the past four decades, and its endowment is $35.7 billion. This is larger than the GDP of a majority of the world’s countries.

The truth is that teaching is a diminishing priority in universities. Years of AAUP reports indicate that budgets for instruction are proportionally shrinking. Universities now devote less than one-third of their expenditures to instruction. Meanwhile, administrative positions have increased at more than 10 times the rate of tenured faculty positions. Sports and amenities are much more fun.

But the problem goes deeper than administration as well. It’s systemic. The key feature of adjunctification is a form of labor-market polarization. The desirability of elite faculty positions doesn’t just correlate with worsening adjunct conditions; it helps create the worsening conditions. The prospect of intellectual freedom, job security, and a life devoted to literature, combined with the urge to recoup a doctoral degree’s investment of time, gives young scholars a strong incentive to continue pursuing tenure-track jobs while selling their plasma on Tuesdays and Thursdays.

This incentive generates a labor surplus that depresses wages. Yet academia is uniquely culpable. Unlike the typical labor surplus created by demographic shifts or technological changes, the humanities almost unilaterally controls its own labor market. [emphasis added] New faculty come from a pool of candidates that the academy itself creates, and that pool is overflowing. According to the most recent MLA jobs report, there were only 361 assistant professor tenure-track job openings in all fields of English literature in 2014-15. The number of Ph.D. recipients in English that year was 1,183. Many rejected candidates return to the job market year after year and compound the surplus.

It gets worse. From 2008 to 2014, tenure-track English-department jobs declined 43 percent. This year there are, by my count, only 173 entry-level tenure-track job openings — fewer than half of the opportunities just two years ago. If history is any guide, there will be about nine times as many new Ph.D.s this year as there are jobs. [emphasis added] One might think that the years-long plunge in employment would compel doctoral programs to reduce their numbers of candidates, but the opposite is happening. From the Great Recession to 2014, U.S. universities awarded 10 percent more English Ph.D.s. In the humanities as a whole, doctorates are up 12 percent.

Why? Why are professional humanists so indifferent to these people? Why do our nation’s English departments consistently accept several times as many graduate students as their bespoke job market can sustain? English departments are the only employers demanding the credentials that English doctoral programs produce. So why do we invite young scholars to spend an average of nearly 10 years grading papers, teaching classes, writing dissertations, and training for jobs that don’t actually exist? English departments do this because graduate students are the most important element of the academy’s polarized labor market. They confer departmental prestige. They justify the continuation of tenure lines, and they guarantee a labor surplus that provides the cheap, flexible labor that universities want. [emphasis added]


I really wanted to get into academia in my late teens and early twenties.  A mere two or three years out of college with an undergraduate degree and I just about gave up all hope of a master's or doctoral study.  Ten or eleven years ago I looked into things again and discovered that basically the whole prospect was a waste of time.  Grad school in music involves jumping through a lot of hoops, most of them arguably necessary, though not all of them, strictly speaking.  A certain local program is pretty solid but without an undergrad degree in music you're off the table for any consideration.  It wasn't worth it to get an undergrad degree in music from scratch, which was the only option on offer (for want of a better way to put that).

A few years ago I looked into continuing education for non-musical stuff and discovered that there wasn't really a lot by way of financial aid if you already had an undergraduate degree and had neither married nor served in the military nor brought a child into the world.  If you were divorced, or a parent, there were options, but if you managed not to bring babies into the world through a formal or informal pair bond, good luck.  It began to seem that whatever advantages a degree theoretically conferred on the job market didn't even hold up in theory.  The more time goes by the more grateful I have become to have been unable to participate in academia, and it's not because I lost my love of learning.  I'm incubating a blog post, or maybe two, about ways to imagine structural space in three-dimensional terms to create a model for synthesizing ragtime and sonata form based on manipulations of the syntax and vocabulary of the respective idioms.  It's been fun.  When I read academics or music journalists trying to write about music, the usual idiotic bromides abound, something about how writing about music is like dancing about architecture.  It's one thing if Richard Taruskin says too much academic music writing is bogged down by the arcane shop talk of theorists; it's another to bask in some kind of cult of hidden knowledge about the possibilities of overlap between blues and fugue based on some reverse of the Herderian mythology about truly German music. 

I read the above statements about the adjunct faculty situation and one of the things that comes to mind is that if the university system is so morally bankrupt as to keep taking in students whose advanced degrees can't possibly land them well-paying jobs in academia itself, how do these fools think they have a moral high ground from which to condemn Trump?  If the academic culture that abuses adjunct faculty is as it has been described, then academia is overrun with a surplus of labor in a way that all but vitiates talk about the problems of late capitalism or neoliberalism.  If anything, academia as a prestige racket for those who believe their readings of performatives of dissent exempt them from being part of a ruling elite is the apotheosis of the problem.  Or can we just forget the history of how leftist revolutions have a track record of culminating in the liquidation of intellectuals?  If there were to be a leftist revolution in which intellectuals didn't get liquidated, that'd be great!  I don't even self-identify as particularly left, but the trouble is that the history of the left and right alike has its share of scholars getting disappeared in one form or another. 

It's hard to shake the sense that modern people are basically totalitarian ideologues in contemporary technocratic societies, and that this isn't really a question of left or right or red or blue but of the nature of the human condition as we now have it.  After a decade in and around Mars Hill, and seeing how the partisans for and against behaved and argued, it was impossible to shake the sense that humans are social creatures and that we are, to put it crudely, kool-aid drinkers by nature.  It's never a question of whether or not we're going to drink the proverbial kool-aid; it's what flavor we'll down and why.  In an era when it seems as though people across the spectrum want a race war or a class war or a civil war that they can claim they didn't start, the people who seem least able to speak with a moral high ground these days are, unfortunately, academics.  The idea that if you just go to a decent school and get a degree you'll land a solid job seems like a hat trick.  The discourse of privilege would seem to be off limits to anyone with enough college education to know what the term is.  Perhaps the performative of privilege assessment is itself the indicator of privileged status.

Kevin Birmingham's polemic is nothing if not direct: the assertion is that the adjunct faculty disaster is entirely the fault of the academic culture that has cultivated it.

You can't trust these people to stand up to Trump, can you?  Sure, maybe the majority of academics are at least not at, say, for-profit universities, but is the prestige racket of the state school without its own problematizing dynamics of commodifying students and giving them a panacea of ideology through which they exempt themselves from the recognition of their own privilege?  Or is handing out more degrees than the job market can accommodate a way to level the field there?  Maybe the kids who go high five figures or six figures into student debt feel like victims because they really have been exploited, but not merely by the creditors.  After all, if you don't choose to go to a school do you borrow money for the school?  Of course you don't. 

If the great need for academic jargon is the need articulated over at Slate, then that need is for dog whistles.  There's no way to sugarcoat that, really.  While the left has a few writers writing about right-wing dog whistles, dog-whistle politics can exist anywhere guilds want to hold on to what's theirs, even if a few people have to be sacrificed along the way. 

I really didn't want Trump in office, but when I read the self-exonerating op-eds of academics it's sometimes too easy to understand why the ostensibly uneducated resent the holders of college degrees.  Maybe those of us with college degrees don't get despised for having college degrees.  Some of my dearest friends are high school dropouts and we've got no problems talking about ancient Middle Eastern military campaigns or comparing Cicero to biblical texts on friendship or watching Batman cartoons.  I'm going to go out on a limb here and propose that people won't begrudge you your education if you don't wear it like a badge that tells them they have to think of you as inherently better than they are.  Wisdom and even intellect are not the same as formal credentials.  If the academy wants to take a stand against someone like Trump, dropping the charade of the prestige racket may be one of the things that needs to happen.  Otherwise academia will stand against Trump (at this point probably not really) while taking on students who go into debt in the hope of getting jobs that don't even exist, and at that point people in academic institutions speaking out against someone like Trump, who fools people with impossible promises to fulfill unrealistic dreams, will look like tenured/administrative pots calling the kettle-in-chief black. 

Sometimes it feels like people with graduate degrees can't joke about Sarah Palin and the bridge to nowhere--it sure seems like the level of adjunct teaching going on these days suggests that the average graduate degree has become its own bespoke bridge to nowhere for the person who went and got it, courtesy of American higher education.  The creepy thing about academia seems to be that it doesn't recognize that the most damning evidence for what's endemic in late capitalism and the contemporary dynamics of labor exploitation may be the very nature of contemporary higher education itself.  Or at least, to go by what some in academia have seemed able and willing to say about it in the last ten years, it sure seems to be hard to avoid getting that impression.

Saturday, February 11, 2017

some links for the weekend

Jen Graves resigns as art critic of The Stranger

Half the time I only read The Stranger for Chris DeLaurenti's columns so, man, it's hardly ever that I read The Stranger these days.  On the other hand, if they can surmount the era of Dan Savage being the prototype for Driscoll maybe there's hope for them.  Mudede's film reviews can still be fun to read, so he's sort of the reason I even still bother to read The Stranger even intermittently these days.  For the record, it wasn't until Justin Dean indicated the publication fabricated content that WtH made any contact with anyone at The Stranger.  Back during the MH days one of the popular assumptions was that the sorts of people who hated Mars Hill went and blabbed to the newspaper.  Never did it.  Not on the same page as them on any number of issues but, in any event, Graves is parting ways. 

courtesy of ArtsJournal "radical empathy is the theatre artist's new job"

eh ... not personally hugely into theater.  It's a bit odd since I can get into opera and even more into ballet and I like film and television. I love music and enjoy scholarly literature.  Haven't read poetry in a while but I was on a Levertov kick years ago.  I still admire Donne.  But theater ... I don't know, it's just a medium in which I've found it virtually impossible to suspend disbelief.  So radical empathy on the part of the theater ... I don't know. 

In times of yore people with dramatically different political views could meet together in a common space that wasn't a theater but a church.  At the risk of repeating this point yet one more time, one of the things that was enjoyable and encouraging about Mars Hill in the 1999 through roughly 2005 period was that I could converse on politics and art with people across the political spectrum.  There were people who were anarchists or traditional leftists; there were center-left liberal types (there were, in fact, people who voted for Clinton and Obama all over Mars Hill even if they increasingly felt unable to say so in the 2007-2014 period); there were also, of course, right and paleo-con types and plenty of people willing to say they backed Gulf War 2, which I regarded as a disastrous policy move but knew I wouldn't dissuade people from within that context.  I think we need the Department of Defense to be one of defense rather than offense, but never mind that.

The idea is that in a self-selecting, self-sorting society such as we have, the difficulty theater will have is that it is so niche and so resolutely highbrow that its ability to reach out to the community could be comparable to the difficulty fans of Xenakis would have doing the same.  Xenakis wrote some fun stuff.

but it's absolutely an acquired taste!

It's the kind of music that might end up in, say, a Resident Evil soundtrack. ;)  Actually, not really, but that's a segue to another link about whether or not there have been or will be good video-game film adaptations.

Milla's done half a dozen of these RE films and I only watched half of them.  Yeah, I watched half of them.  I gave up after the third.  When a friend of mine said there hasn't been a successful mainstream action franchise on film with a female lead I kept wondering when he was going to remember that Milla Jovovich and Kate Beckinsale have been ostentatiously proving otherwise for the last fifteen-odd years.  That they're both glamorous model types is probably counted against them, as is the fact that the action franchises they've starred in are summarily panned by film critics all over.  I get why.  That Milla and Kate are likely never going to be regarded as A-listers in Hollywood hardly seems to mean they aren't some of the most robust B-list actresses around.  Lest that seem too harsh a verdict, let me invoke Bruce Campbell's observation that the A-listers are trapped in the typecasting their personas require of them.  B-listers don't have the same monetary leverage, but what they may lack in that they can make up for in niche-market reliability or versatility.  Jovovich may be more the former and Beckinsale the latter (she was one of the better Emma Woodhouses I've seen, wrong hair color notwithstanding, and she was immensely entertaining as the nasty piece of work Susan Vernon last year).

There is, reportedly, that Lego Batman movie out there.  Might see it, but maybe not right away.  Still wondering if The Red Turtle's going to hit Seattle. 

For folks who may not have kept up with Gospel for Asia stuff ...

Oh, and since it's been a while since we've mentioned Ferdinand Rebay or his music, here's a performance of the first movement from his Sonata for violin and guitar in E minor.

I keep meaning to get back around to blogging about Rebay's music again but I'm kinda plodding through some Frankfurt school stuff lately.  Also reading up a bit on John Cage, a favorite whipping boy of any number of conservative types.  Per some comments quoted from a blog post earlier this weekend, there were these culture wars in academia in the last twenty years and they were less about the aesthetics of the arts as such than about the canons preferred by the people who came to be the ruling castes of the last half century.  So in a way cultural reactionaries are fighting an already lost battle if they hammer away at Schoenberg or Cage.  I haven't gotten around to writing on the stuff I've read at Future Symphony Institute but one of my reservations about it is the "symphony" part.  If arts funding gets sliced up even more in the Trump years then the mainstays of high culture are going to hurt even more.  As a guitarist I'm not going to pretend for a second that I don't think we should put more into the guitar literature.  When war ravaged Europe, Heinrich Schutz did not take the approach of "just" insisting on more money and resources for the arts.  He took the approach of scaling back his practical approach to composition and his conception of the resources that were available to him.

Kyle Gann floated an idea years ago, "Make Way for the Guitar Era" and floated an explanation as to how it came about that so many music students at Bard College might have become guitarists.  I happen to love the instrument and I've spent much of my adult life working toward some specific goals for the instrument, namely composing sonatas and fugues.  The two 18th century forms or processes guitarists routinely and tediously assert are inimical to the inherent limitations of the guitar are sonata forms and fugues.  This is so obviously not the case I could probably write a book about the subject but I'm honestly not sure if I will.  Part of the reason is simply that even if you DO compose a guitar sonata in F minor inspired by the late Beethoven piano sonatas and the late string quartets of Shostakovich that doesn't mean other guitarists will want to play it. 

But, again, if there's a likelihood that arts funding as we've known it is even further gutted, then what guitarists have an opportunity to do is keep the traditions of Western art music alive by tackling the kinds of things we've heretofore had too many guitarists claiming haven't been done (they have been done, in the case of sonata forms) and that can be done.  When Western academics insinuate that sonata forms are obsolete this could only be true if you focus entirely on keyboard literature at the expense of neo-Romantic works within that milieu or if, further, you pretend that no composers who wrote for the guitar ever wrote sonata forms.  This, too, is so easily disproven I'm surprised there aren't monographs on it.  Gann's got a theory as to why, which has something to do with the musicology ladder and modern academia being more of a prestige racket than a setting in which genuinely path-breaking scholarship is likely to happen.  It does happen, and the work I've read by Hepokoski & Darcy on sonata forms has encouraged me, because if scholarship is rethinking what sonata forms even are, this will give guitarists, and guitarists interested in musicology, an opportunity to demonstrate that our instrument has plenty of sonata forms to study once you bust out of the straitjacket of assuming all sonatas "must" be what Hepokoski & Darcy regard as "Type 3". 

But there's only so many thousands of words even I feel like writing on a weekend. 

Since this blog does touch on the topic of animation now and then ... you maybe knew this was coming.

Samurai Jack Season 5.  Yep. 

the crisis in the social sciences looks like a crisis in which social science may be statistics deployed in the service of stereotypes

social science is
statistics in the service
of stereotypes

This being more an arts blog than a science blog, we've kicked things off with a haiku. But the subject for the weekend is social science and we'll enter into this topic by way of something about the culture of the American university.

Internet Monk linked to something over at Heterodox Academy
This basic exploration of FIRE’s disinvitation revealed that:
  • Total disinvitation attempts per year increased from 2000 to 2016.
  • An unsuccessful disinvitation of a speaker was the most common outcome of a disinvitation attempt.
  • Disinvitation attempts occurred primarily for campus speeches/debates or commencement addresses.
  • The catchall category of “other political views or positions” spurred the most disinvitation attempts. Racial issues, views on sexual orientation, and views on the Israeli-Palestine conflict all produced over 40 disinvitation attempts.
  • Public colleges and universities experienced more disinvitation attempts than private secular and private religious colleges and universities, largely driven by more attempts to disinvite speakers from making campus speeches or participating in campus debates.
  • The success rate of disinvitation attempts was higher at private secular and private religious colleges and universities compared to public ones.
Summary and Conclusions:
  • Speaker disinvitation attempts from 2000 to 2016 were most likely to come from the left of the speaker.
  • These disinvitation attempts from the left occurred most often for controversies over racial issues, views on sexual orientation, and views on Islam.
  • Speaker disinvitations due to issues related to abortion almost exclusively came from the right of the speaker, at religious institutions.
  • Speaker disinvitations due to views on the Israeli-Palestinian conflict occurred almost equally from the left of the speaker and from the right of the speaker.
  • With the exception of 2006, the first decade of the new millennia saw a roughly equal number of disinvitation attempts from the left and right of the speaker. Beginning in 2010 an uptick in disinvitation attempts from the left of the speaker has occurred.
  • Disinvitation attempts from the right of the speaker have a higher success rate.
  • When disinvitation attempts are unsuccessful, moderate and substantial event disruptions are almost exclusively from the left of the speaker.

The first thing that came to mind reading these two sets of findings was that the sample size was necessarily small and that there's not yet any clear certainty these results are reproducible. 

This problem is happening in a few places:

Cancer Research Is Broken
There’s a replication crisis in biomedicine—and no one even knows how deep it runs.

But it's most notable in social science and particularly in psychology:

Psychology’s Replication Crisis Can’t Be Wished Away

Does social science have a replication crisis?

Why Does the Replication Crisis Seem Worse in Psychology?
originally over here:

from the Slate version:

Researchers study small effects with noisy measurements and then look through their data to find statistically significant comparisons. This approach will be expected to lead to unreplicable claims. But, worse than that, it can lead to research communities where unreplicable results seem to reinforce each other: Study a small effect with noisy measurements, and any statistically significant claim will necessarily massively overestimate any underlying effects. In follow-up studies, researchers will then expect to see comparably huge effects, hence they anticipate “high power” (in statistics jargon), and they expect high rates of success. Coming into their studies with this expectation, they can feel justified in jiggling their data until they get the findings they want. The resulting claims get published in journals, their findings are believed, and the cycle continues.

But this is a problem in lots of scientific fields. Why does psychology continue to dominate the news when it comes to discussion of the replication crisis?

Why not economics, which is more controversial and gets more space in the news media? Or medicine, which has higher stakes and a regular flow of well-publicized scandals?
Gelman went on to propose there were four reasons why the scandal of not being able to replicate results is a bigger deal for psychology than for other fields.

Those four reasons aren't interesting enough to summarize.  As someone who's not in the academy but likes to read academic literature, there's a much, much simpler and more direct way to explain the real crisis underlying the crisis of replication, which is a doubt about whether the social sciences can legitimately be regarded as science at all.  If you can't replicate the results, how can you call what you've found science?  Throw in the concern Jonathan Haidt and others have raised about how WEIRD (Western, educated, industrialized, rich and democratic) Western academic research has been; throw in concern about the level of duplicity, deception, and paying off of college students that may be involved just to get people to participate in a study, and a layperson can be left asking a question.  But rather than formulate it as an explicit question it can be conveyed as a haiku, since this is more an arts blog than a science blog.

social science is
statistics in the service
of stereotypes

That these stereotypes are only formally declared as backed up by "data" after the study has been done doesn't mean the stereotype wasn't the impetus for the sociological or psychological study to begin with; nor does it matter that the stereotype formulated might have been formulated over against some previously existing stereotype.  A counter-stereotype is still a stereotype.  Daniel Kahneman's writings tell us that these "System 1" judgments are usually remarkably accurate, which is why, when "System 1" judgments turn out to be wrong, they are spectacularly wrong.

It may be a paradox that explicit partisans of the old left and old right may be better able to see the partisanship of the new center than the center can see in itself.  To put it in another way that's no less deliberately intended to provoke, a lot of left literature can be invoked by the academic mainstream to exempt itself from being a ruling class in a way that defeats the stated aims of the literature being quoted.  In other words, the high school dropout who never heard of Walter Benjamin but understands that class boundaries are not very permeable would probably get what Benjamin was aiming for more readily than the contemporary American college graduate who can quote Benjamin's work.  The college graduates who look down on the lower-class electorate they are convinced are to blame for voting for Trump (as opposed to the actual Electoral College members whose vote really did put Trump in office over against the popular vote) don't realize that they are a kind of ruling class looking down on another ruling class.  Worse yet, in the caste of the social scientist, we have people who justify their stereotypes not by a blunt appeal to the sum of bad experiences with "those people," which is what lower-class people will do in the formation of their stereotypes; no, the social scientist can be clinical in the formulation of stereotypes.  Call it the paradox of arriving at pseudo-scientifically derived stereotypes on the basis of an unacknowledged positivism on the part of those who would probably deny being positivists if directly asked.

Another possible paradox: who would be tasked with asking who the ruling classes of our age might be?  That could be ... social scientists.


In Cultural Capital, one of the first academic books to import Bourdieu’s ideas into literary and cultural studies, John Guillory made the counterintuitive suggestion that the exhausting canon debates of the 1980s culture wars were really “a crisis in the market value of [the literary curriculum’s] cultural capital, occasioned by the emergence of a professional-managerial class which no longer requires the [primarily literary] cultural capital of the old bourgeoisie.” In other words, the canon debates were not about empowering women and “non-Western” or minority cultures through education, but a sign that these previously subordinate groups already had increased in power to the point where they could create alternate canons, literary or postliterary, which reflected their new status within a capitalist order. Canon formation and reformation being something elite groups did whenever they became aware of themselves as elites. [emphases added]

Guillory didn’t intend to slight the attainments of these historically marginalized groups; he simply wanted to sidestep those annoying debates about whether Edith Wharton was really better for us than Henry James. He focused instead on how eruptions of conflict over symbols pointed to shifts in underlying power dynamics — whether the rise of the professional-managerial classes of the 1980s (which had produced the culture wars), or the bourgeoisie of the 1680s (which had produced the English novel itself).

This insight, radical enough for 1993, now gets a commonplace “fit to print” version in the well-meaning bourgeois paper of record, where the Columbia sociologist Shamus Khan recently took issue with a self-congratulatory tone he’d noticed among educated elites when it came to their global-minded tastes, their ability to channel surf between high and low culture, European and non-Western. “Elites today must recognize that they are very much like the Gilded Age elites of old,” he writes. “Paradoxically the very openness and capaciousness that they so warmly embrace — their omnivorousness — helps define them as culturally different from the rest. And they deploy that cultural difference to suggest that the inequality and immobility in our society is deserved rather than inherited.” [emphasis added]

It’s worth slowing down Guillory’s and Khan’s arguments to make explicit certain assumptions they share about the university and the culture it promotes: that its purpose is to train a professional-managerial class or a technocratic elite; that those who attend such schools do so with an intention, no matter how unconscious, of becoming members of either the professional-managerial middle class or the elite managers of those managers; and that such groups need distinguishing markers, the equivalent of secret handshakes, that allow them to recognize themselves as a class, and which, apart from their professional training, are provided by “culture,” which offers, at best, a way for people with shared interests to frame their lives to themselves, and for one another, in ways that are mostly flattering to their self-esteem.

The jaded view of “the arts” propagated by new cultural sociologists is not really different from what the sociologist of America’s first Gilded Age wrote in the 1890s: “The humanities . . . are pretty uniformly adapted to shape the character of the student in accordance with a traditional self-centred scheme of consumption.” Thus Veblen deplored what he called the “regime of status” in contrast to a more puritan and utilitarian “regime of productivity.” Post-Veblen, the contemporary sociologist’s idea of the university’s purpose does not really differ in kind from the neoliberal version: to provide training in a specific field so one may get a better job and have a better life than someone without such training. In the end, it’s irrelevant whether a degree’s additional symbolic value is provided by reading Shakespeare, pledging a fraternity, or DJing a radio show on the blues.

or writing about semiotics and gender identity in Buffy the Vampire Slayer.  Joss Whedon's a one-trick pony who has coasted for decades on the reality that he's had the privilege of working with actresses who are better than the dialogue and plots he writes for them.  It was fun for three seasons and should have ended.  But we live in an era in which Whedon can kinda resent that Firefly can be negatively compared to Cowboy Bebop.  Sure, maybe Cowboy Bebop WAS too hip for its own good but it did, at least, run two whole seasons and finished its story arc.  It also had a far more entertaining soundtrack.  That was way back in the era when Viz was cranking out volume after volume of Ranma 1/2.  There are some disturbing headlines about someone who was involved in the translation of that project, but that's more an "if you already know" passing reference.

It's interesting to consider that twenty years ago fans in the United States who were into anime and manga but had highbrow aspirations would snort with contempt at the popularity of Rumiko Takahashi's works like Ranma 1/2 or Maison Ikkoku; now Ranma 1/2 can be seized upon as a fantastic series to get into because it can be retroactively read as a trans genre comedy by someone like, maybe, Noah Berlatsky.  Let's just ignore that Takahashi did a brilliant send-up of the genre tropes set forth in Lone Wolf & Cub.  Let's just skip the possibility that a triumphantly lowbrow satire of a lowbrow pulp classic might be in the works.  While the alt right has its numerous flaws, the reality of self-congratulatory virtue-signaling on the part of the new left seems pretty well beyond dispute.  To champion Ranma 1/2 not as a successful example of mass culture but as a pathway to an LGBTQ agenda is to repurpose mass culture from a Japanese context to an American context.

The best way to exempt mass culture from the stain of simply being mass culture, for Americans these days, is if it's from another country.  You can wallow in the most forthright tropes all you want as long as it's in a culture east or west of middle America, preferably separated by a whole ocean.  I had a friend in college declare that European cinema was better than American cinema because it had fewer clichés.  When I angrily replied that European films have just as many clichés that are just as stupid as American clichés, and that the film career of Emmanuelle Beart could produce evidence for this pretty readily, my friend paused a moment and then said, "Okay, I guess I just like the European film clichés better."  Don't bother speculating as to why a 20-something guy in an American liberal arts college might watch a movie featuring Emmanuelle Beart in the 1990s.  It does not take a college degree to figure that one out. 

We're also at a moment where an author can complain that Asians are still sidekicks in mainstream Hollywood productions.  That the goal of alternative cultural idioms is the hope of one day being mainstream probably couldn't be more explicit than this:

Boyhood or the new Avengers movie? I could give a shit. A Girl Walks Home Alone at Night or Crumbs? Yes, please. And it’s not even that I’m actively boycotting the former. I just don’t care. They coast on the assumption that these are stories that matter to everyone; they don’t. I think it’s important to say that, repeatedly, out loud, and point to alternatives, until the alternatives become a new mainstream that reflects the actual world.
What if, thanks to the internet, this already exists?  What if that mainstream already exists as a subculture within the mainstream?  To put it in slightly more personal terms, even if an author complains that Asians are still sidekicks in Hollywood productions, is that nothing?  The tipping point for me on whether or not to bother seeing Rogue One was when my brother told me Donnie Yen was in it.  Wait, Donnie Yen's gonna be in a Star Wars movie!?  Well, I can watch it as a matinee, at least.  What if we've just looked at a paradoxical complaint, in which what has already come to pass thanks to internet communication is being earnestly advocated and sought as if it did not yet exist at all, rather than existing as a subculture within the mainstream that is simply less prevalent than the author wants?

The paradox is that once something becomes mainstream it's terrible.  It's entirely possible for a product made in Japan to be introduced into a mainstream American market with a newly invented backstory that permits participatory involvement and ... oh ... wait ... would that be Transformers?  It sure would be Transformers. 

So the dream that the alternative to the mainstream could become the new mainstream doesn't mean that the coteries of the anti-mass-culture left and the snobbery of the highbrow aristocratic right are changing.  Both will damn Transformers in the harshest possible terms.  It's not as if there aren't gay Transformers now. 

On to the official observation from the quotation: the proposal was that the criterion for being able to look down on the unwashed, ignorant under-caste is their lack of open-mindedness and education.  The role of education is to ensure that your taste is ultimately worthy of your class.  This is something Richard Taruskin explicitly brought up in his Oxford History of Western Music, even if it recycled ideas he had put forth decades earlier in his 1990s writings: the liberality and globalist, omnivorous cultural taste of the current cultural elite is how they exonerate themselves from the guilt of a ruling-class status they have no problem imputing to their forebears. 

For an old leftie like Dwight Macdonald there would be no point in excoriating lowbrow fans for their love of Batman or Transformers.  It might often be crap, but you don't hold that against them.  The people who love junk have their reasons, just as the people who love the highbrow stuff love what they love.  His condemnation was reserved for the middlebrow: the people without the temerity to exult in the trash, yet without the intellectual daring to take on the difficulties of the highbrow, while still wanting the satisfaction of being able to imagine themselves up to the challenge.  His remark on Our Town was that it explicitly formulated this idea, that deep down there's something divine inside every human being.  His complaint was that on paper he agreed with every word, but he would fight to the death against authors saying that idea so explicitly in that particular way.   Perhaps the nice way to put this is that he set himself not against the truth of a pious observation but against the pious bromide.  Maybe we could put a spin on this and regard the pious bromide as an educated stereotype. 

To have a truly educated stereotype in the era of science you need a scientific verification of some kind, and for that we have things like statistics; toward that end it would help to have something like the social sciences. 

The crisis of replication in the sciences is serious, but it's more serious in the social sciences, because the dishonesty and cruelty of some of the pioneering social scientists is impossible to escape.

We could do a whole post just on the criticisms leveled against Zimbardo for the ethics (or lack thereof) of his famous studies.   Throw in, after decades of burgeoning concern about the duplicity employed in the social sciences, the unsettling pattern that we can't even necessarily replicate their famous results, and social scientists really can start to seem like a kind of priesthood for our age.  It would be nice to believe that social science can actually be something resembling a science, but on the whole I can't shake my belief that there is ultimately no such thing as a social science, because there are no scientific laws in the social sciences and the theories have uses but also limits.  We may have a crisis in which the statistical models used to reinforce the stereotypes handed to us by the caste of social scientists have come under deserved scrutiny.

So, as depressing as it may seem, when college professors are aghast that people without college degrees may have voted for someone they can't stand, it's not as though a highly publicized crisis over the foundational credibility of an entire academic field wouldn't flow downstream to lay readers.  There are people within the academy who have recognized the scope of the crisis for what it seems to be: the long-term revelation that contemporary Western social science may be the supple deployment of statistical methods to arrive at, or reinforce, stereotypes.  For people who never went to college, is it unfair that they be skeptical about the fairness of the stereotypes academics may cherish about them?  They may well have their own stereotypes, but they can't back those stereotypes up with a semblance of scientific methodology.  That may not make their stereotypes less vicious ... but it might mean they have to "own" their stereotypes in ways that social scientists have excused themselves from having to do through the appearance of having used statistical methods.  By now we know the old adage about lies, damned lies, and statistics, though.  If anything, the press has soft-pedaled the ethical and social implications of the crisis.  The way I was taught science in high school and college, everything ultimately depended on the replicability of your results.  If you couldn't arrive at the same results each time you subsequently ran the experiment, the hypothesis failed.  Best case, social science has to contend with a raft of failed hypotheses.  Worst case, social science is in denial about the very question of whether it can ever legitimately be regarded, moving forward, as actually being a science.