Dear Friends,
I’m sharing the ‘Curator’ posts for the week. These are discussions of the articles that I found most interesting, provocative, or just annoying on the ‘artistic/intellectual web.’
Best wishes,
Sam
LIVING WITH MIMESIS
Everything that’s striking to me on the ‘intellectual web’ this week has the same underlying dynamic. The enemies - the fallacious arguments I’ve been coming across - all emphasize determinism of one kind or another, some shrugging acceptance of the way of things, usually dressed up in a top-shelf-sounding argument (‘skeptical incompatibilism,’ ‘mimetic desire,’ ‘the end of history’). The arguments I’m sympathetic to accept the persuasiveness of the way of things without conceding - whether in philosophy, biology, or political science - that we are ever truly lacking in choice.
Luke Burgis lays this out in simple, ringing terms in The Free Press. I’d argued in a recent piece here against the mimetic logic underpinning Yuval Noah Harari’s influential view of history. Burgis traces the discussion of mimesis back to the recently-deceased French philosopher René Girard. Girard is matter-of-fact about mimesis. “Man is the creature who does not know what to desire and turns to others in order to make up his mind,” he writes, and he sees it as inalienable and intractable - to human psychology what gravity is to physics. With the internet age and the sudden ubiquity of ‘memes,’ Girard is revealed - by universal consensus - to have been the critical prophet. Michel Serres called Girard “the new Darwin of the social sciences” - and Peter Thiel, who studied with him and was an early investor in Facebook as a ‘bet on mimesis,’ claimed that social media almost perfectly validated Girard’s theories. “Social media proved to be more important than it looked, because it’s about our natures,” said Thiel in a quote for Girard’s New York Times obituary.
For people like Thiel, Harari, and the legions of copycats and ‘followers’ dominating the civic discourse, that’s pretty much the end of the story. We are the creature that imitates, therefore the tools that most successfully promote imitation are ineluctable. At the moment, it seems very difficult to argue with any of that - the triumph of social media (and, with it, of mimesis) has been so total. But, as Burgis notes, Girard himself was a bit less deterministic than his many followers. Identifying mimesis as the ‘gravity’ of our social existence doesn’t mean that we have to do whatever it tells us any more than gravity makes us permanently a falling body. At every moment of every day, we work within the constraints of gravity, but we learn to dance with it - to accelerate in various directions, to shape our movement to our desire. By identifying mimesis, and by giving it the credit it deserves, we don’t somehow reach the last word in human interaction. Instead, we identify the shape of a very powerful force and give ourselves freedom to move in the margins allowed to us. Ultimately, there’s no real fun in mimesis - it’s just a form of self-protection within the herd. And there’s less fun than one might expect in ‘influencing’ mimetic behavior - it’s a matter of being part of the same loop. The real fun is in allowing mimesis to exert its force on us as it will without being subsumed by it - and learning to push in a direction that is more in tune with our higher selves. As Burgis points out, the beauty of mimesis is that it can, when struck in the right way, encourage any sort of behavior, even - in rare instances - behavior promoting the exercise of will. “There is hardly anything more wonderfully, positively mimetic than a courageous person who inspires others to be courageous too,” Burgis writes.
THE UNDEAD THEORIES OF FRANCIS FUKUYAMA
A type of mimetic thinking is in evidence in Francis Fukuyama’s thought - which, however widely ridiculed it has been, persists and is getting a fresh lease on life at the moment, not least through an adulatory article in Quillette.
I guess my eyes glaze over a little bit whenever Fukuyama comes up. Discussion of him - as in the Quillette piece - turns inevitably into a scolding of the press and public for how ‘misunderstood’ he is. But anytime someone, like Matt Johnson writing the Quillette piece, attempts to clarify what Fukuyama really meant, it sounds, in the end, a great deal like the popular caricature.
Fukuyama’s underlying idea, building off a playful riff by Alexandre Kojève, is to argue that when Hegel announced the end of history in the aftermath of the 1806 Battle of Jena, he was fundamentally right - the forces of the French Revolution and ‘modernity’ had prevailed over the forces of (in the widest possible sense) the Ancien Régime. As Kojève and then Fukuyama claimed - with caveats and with varying degrees of seriousness - the ideological conflicts of the next century or two were more surface than real, and when the smoke finally cleared (c.1989) it became possible to perceive that the ‘liberal’ model, capitalism and democracy, had prevailed exactly as predicted by Hegel.
Johnson claims that what’s valuable about Fukuyama is that “ideologues of all stripes despise him. He argues against any political philosophy which revolves around achieving some grand, ultimate end.” But that view of Fukuyama, making him out to be a Karl Popper-ish ideological skeptic, is different from how he views himself. “The basic principles of modern government had been established by the time of the Battle of Jena; the task therefore was not to find new principles and a higher political order but rather to implement them through larger and larger parts of the world,” Fukuyama writes. In other words, there was a grand, ultimate end, which was ‘liberal democracy’ - it just happened to be less absolute, less wrapped in pageantry, than the Fascistic or Communistic alternatives that had been offered in opposition to it.
That strikes me as being the core of Fukuyamaism and pretty close to the publicly-disseminated ‘caricatured’ version. It’s neoliberal triumphalism - Hegelian dialectics wrested away from their German nationalist and Marxist associations and applied to (of all unexpected things) the neoliberal free market at the very peak of vapid American consumerism. That’s the main point, although Fukuyama and his admirers make it difficult to understand what they’re saying by constantly undercutting and hedging their own argument. Johnson’s article is, for instance, a model of equivocation. What in the world is anybody to make of this sentence (part of an attempt to give a 30,000-foot assessment of China): “It would be naive to hope that China is on the verge of a dramatic social and political transformation, but unrest is mounting in the country”? So is Johnson saying that China, in the long arc of history, is welded to an authoritarian, illiberal perspective, or is he saying that China is moving, again from the vantage-point of the same inexorable arc, towards liberal democracy? He appears to be saying both. And Fukuyama, the master of this domain, is no more clear-cut. “The end of history will be a very sad time,” Fukuyama writes, “as the worldwide ideological struggle that called forth daring, courage, imagination, and idealism will be replaced by economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands.”
The point is that we have reached the end - that only technocratic solutions are called for from this point forward - but that at the same time it is possible for people to be very unhappy with this state of affairs (Fukuyama himself confesses to “a powerful nostalgia for the time when history existed”), which would present the possibility, logically enough, that they could work to create a different dispensation.
If Bosnia, 9/11, and Trump did not mean the reemergence of history - the Fukuyama crowd has ready rejoinders for all of those - then the rise of illiberalism around the world would seem to have suggested that history once again existed, that there were real ideological challenges, and that even at this late date people around the world were capable of transforming their nostalgia into concerted political action. And Johnson’s article is, not surprisingly, an attempt to backstop Fukuyama just as Fukuyama was backstopping Kojève and just as Kojève was backstopping Hegel. “Does Fukuyama’s argument really stand up [in 2022]?” writes Johnson before concluding, surprisingly, “The evidence continues to suggest that the answer is yes.”
Johnson’s critical piece of ‘evidence’ is, of course, the success of the Ukrainians in repelling Russia. And that’s been responsible for a resurgence in a ‘soft Fukuyamaism’ - an idea (which I agree with) that the American model has more resilience than is often ascribed to it, that if nations like Ukraine wish to move away from some region-specific, ‘sovereign’ view of their historical destiny and into a more neoliberal, Fukuyamaish scheme, then that is very much their right. But in this emphasis on military conflict - the ‘evidence’ of Jena, the ‘evidence’ of Ukraine - we come up against the limits of ‘hard Fukuyamaism.’ The war in Ukraine has been a more close-run thing than people like to realize. The February strike on Kyiv really could have worked - the Ukrainians were lucky to beat it back. And, if it had worked, America and the West would have found themselves in no mood to defend a broken Ukraine. A similar flaw in thinking appears anytime one argues from military results - numerically superior Coalition armies could have beaten Napoleon at Austerlitz or Jena (and then how would Hegel have described the arc of history?).
In other words, Fukuyamaism is really just a type of mimetic thinking - as applied to statecraft rather than, say, social media. It tends to reason from the facts on the ground - Napoleon defeated Frederick William III; Ukraine pushed back Russia from Kyiv - and then to see the results of military conflict as proof of some intrinsic superiority, as yet another stone placed in the straight line of history. The constant hedging of Fukuyama and his followers is an indication, though, that they intuitively sense that they may have overstated their case, that the vagaries of events do actually challenge the certitudes of dialectical history. Here is how Fukuyama hedges it: “It is not necessary that all societies become successful liberal societies, merely that they end their ideological pretensions of representing different and higher forms of human society.” And how Johnson hedges it: “[Fukuyama] argued that liberal democracy would prove to be the most viable form of government over time.” So, in other words - on further inspection - we are not at the end of history at all; we are still very much in time, and systems are still proving their viability over one another - it’s just that we, the Fukuyamaists, believe that ‘liberal democracy’ is inherently better and will prove itself so in the fullness of time. Which may be a valid position to take - I also agree that capitalism + democracy is better than the authoritarian alternatives - but, when restated with all caveats duly applied, it is no longer cloaked in the armature of Hegelian eschatology. And once we are, as Fukuyama & co begrudgingly admit, back in time, then conditionality matters again - then the results of key battles, key decisions, could swerve any which way, and the ‘arc of history’ no longer serves as any particular guide to what is ethical or right: we have to figure all that out for ourselves.
DEATH TO SKEPTICAL INCOMPATIBILISM
Yet another case of mimetic thinking - this time in philosophy - appears in Barbara Fried’s recently dug-up 2013 article ‘Beyond Blame,’ in which Fried argues that ‘blame’ is a philosophical impossibility given our lack of free will and that we need to move to a model of justice beyond retribution.
That’s Fried as in Bankman-Fried - which makes it fairly obvious why this not-all-that-memorable or well-thought-through article is back in circulation. One of the side benefits of the FTX collapse is a unique opportunity for philosophical schadenfreude - and a set of ideas associated with Sam Bankman-Fried, or with his parents, is, overnight, in disrepute. Longtermism - the philosophy that Bankman-Fried himself seems to have concocted and then foisted on the Effective Altruism movement - may well be down for the count, and not a moment too soon. Longtermism, as a large number of intelligent people have noticed, is a very self-serving sort of scam, recasting the language of ‘rights’ to incorporate the future, making the case that ‘future people’ should have just as much or more voice than people of the present but can really only be spoken for by highly-trained moral philosophers or by effectively-programmed AI. The real point, of course, is that if you deal entirely in unknowns (e.g. the fate of ‘future people’), common sense falls away and the ‘authorities’ turn out to be whoever (technologists, well-funded moral philosophers) is able to best project an illusion of hyper-rational authority. Effective Altruism itself is next on the chopping block - and a great deal of ink at the moment is being dedicated to figuring out whether it too is permanently tainted by association with Bankman-Fried. That’s a bit harder to answer. Using some amount of rationality to orient philanthropy doesn’t seem like such a horrible ask, but, on the other hand, it’s fairly clear that Effective Altruism is prone to the sort of abuse that happens anytime wild-eyed techies with bar charts try to talk donors out of following their own common sense or personal conviction. But another, unexpected philosophical target emerges from the FTX fallout, which is the skeptical incompatibilism advocated for by Barbara Fried - SBF’s mother and an emerita professor at Stanford Law School.
Fried’s argument rests on the philosophical truism - by this time stated rather than analyzed - that we have no free will, that, as we are entirely material beings rooted in time, there is no ‘ghost in the machine,’ no coherent point of consciousness or accountability that would allow for our actions to be evaluated in any kind of moral sphere. Fried seems to be genuinely angry that, over the period in which incompatibilism seized hold in philosophy, the public-at-large, the press, and the legal system remained mired in an atavistic blame game. “One might have expected these developments [in neuroscience] to temper enthusiasm for blame mongering,” Fried writes. “Instead, the same four decades have been boom years for blame.”
Fried runs through a series of arguments - both compatibilist and ‘libertarian incompatibilist’ - that attempt to resuscitate free will, finds them all wanting, and concludes that the only choice left is ‘skeptical incompatibilism,’ in which the possibility of free will is categorically denied and, along with it, the peg on which ‘blame’ can be hung. “I have trouble seeing the case against skeptical incompatibilism,” concludes Fried, succinctly putting to rest 2,500 years of philosophical discussion on this topic.
The favorite target of the incompatibilists is the system of retributive justice. If there is no free will, then there can be no culpability and no viable punishment, contend the incompatibilists - and, every time they make their case, all the prison doors seem to swing open a little. And, in Fried’s construction, the burden of argument shifts abruptly to the law-and-justice crowd. “Those who wish to rely on [the existing system] have a moral obligation to show that [its] benefits are great enough to justify the costs we are imposing on the morally blameless, their families, and their communities,” she writes, and proposes instead a wholesale transition to a model of restorative justice. “The next time something goes terribly wrong, suppose that instead of immediately asking who is to blame we were to ask: How can we fix this problem?”
All of which sounds lovely but doesn’t actually have much to do with how the criminal justice system is structured at present. In a profound and deep-seated way, the criminal justice system has already taken in the critique of the incompatibilists - starting with the foundational premise that we are punishing the act, not the person. This may be an absurd construction - it is the person who is led away in handcuffs at the end of a trial, like the whipping boy answering for an abstract act - but it is nonetheless sacrosanct within the system. Courtroom trials are carefully designed to omit evidence of other crimes by the defendant, to omit charges that touch on the character of the defendant. If some technicality taints the evidence tying the accused to the particular act, then the accused must be released - even if everybody knows that the accused has a ‘bad character,’ that the accused ‘deserves’ to be in prison. Questions of character, of intentionality, inevitably slip into legal proceedings, particularly in the sentencing phase, but the keystone in assigning guilt or innocence is this peculiar fiction that it is the act that is punished, with the punishment then landing on the body or the bank balance of the guilty party. Remove ‘free will’ and the justice system stays exactly the same as it is - ‘hate the sin not the sinner’ serving as a convenient shorthand for this time-honored approach.
But even on the apparently firmer ground of neuroscience, the skeptical incompatibilist approach has its problems. Its argument holds that, since we are biological entities and the brain just another link in a causal chain, a decision made by the brain is never truly ‘free’ but is just a response to circumstances. But this presupposes a very simplistic understanding of how the brain works. There are portions of the brain - particularly the prefrontal cortex - that are specifically responsible for planning and for executive reasoning. When we talk about somebody making a ‘choice,’ we are not talking about their making a choice on some higher plane; we are dealing specifically with the decision-making matrix of their prefrontal cortex. If those choices run against existing social mores, then society has a perfect right to claim that the social contract has been broken, that the offender deserves to be ‘blamed’ and punished.
The philosophical rush to materialism doesn’t dispense with compatibilism or free will as efficiently as philosophers like Barbara Fried like to think. And Fried, like just about anybody who makes this point, seems not to spend much time actually considering the arguments for compatibilism. It’s assumed that science has settled these questions for us, and then the turn is to envision the new, better society - which tends to be a kind of advanced Buddhism, everything understood as phenomena, nothing better or worse than anything else, everything in undulatory orbit. I wouldn’t argue with the higher truth of that perspective - at some level, it is all phenomena - but even very advanced Buddhists would agree that, in the day-to-day functioning of human societies, there can be a need for retributive justice. Fried seems to be working towards a very different kind of society - the sort of thing, it might as well be said, that can condone a massive fraud like Sam Bankman-Fried’s - in which all values are relative, all wealth is as figmentary as an FTX token, in which actions are not tied to their consequences, in which we are ‘beyond blame’ even, for instance, for $51 billion Ponzi schemes. And the advice for thought like this is the same as the advice when confronted with Ponzi schemes - don’t fall for it. Belief in the absence of free will tends to result in a mimetic chain - everybody’s actions are understood to be just a kind of phenomenon driven around by other phenomena. If there’s no ability to choose, then there’s also no ability to reverse course, to shape one’s own reality. And the tendency is to default to whatever phenomenon happens to be most readily at hand and most persuasive - which is to say that there’s a tendency to default to the dictates of power.
DEATH TO PEER REVIEW
On his Substack ‘Experimental History,’ Adam Mastroianni shows a keen understanding of how mimetic thinking can ruinously corrupt a process that was specifically designed to ward off corruption. Mastroianni, a research scholar at Columbia Business School, argues that it’s time to scrap peer review, not necessarily because it’s a horrible idea in and of itself but because it simply doesn’t work. “This is the grand experiment we’ve been running for six decades,” Mastroianni writes. “The results are in. It failed.” The evidence against peer review has been quietly accumulating and is damning. In three separate studies, scientists deliberately added errors to papers, sent them out to peer reviewers, and counted the number of those errors that the reviewers caught. In all cases, the results were dismal - the peer reviewers, the ‘disinterested fact-checkers,’ catching just 30%, 29%, and 25% of the errors. And, meanwhile, even as peer review flunks basic competence tests, it has continued to dominate the ‘scientific process’ - driving scientists ever deeper into their infamously jargonesque prose, creating hierarchical bottlenecks within the scientific community, with the peer-reviewed journals wielding vast gatekeeping powers over the dissemination of new scientific ideas and with, as Mastroianni writes, “hiring committees and grant agencies acting as if the only science that exists is the stuff published in peer-reviewed journals.”
Under the aegis of peer review, science has turned itself ineluctably into a guild - and the tendency of all guilds, all closed systems, is to be conservative, to preserve the status quo (and its particular financial incentives), and to stamp out or ignore new thinking. Mastroianni, having given this real thought, writes, “I think we had the wrong model of how science works. We treated science like it’s a weak-link problem where progress depends on the quality of our worst work. But science is a strong-link problem: progress depends on the quality of our best work.” In other words, peer review is somewhat useful - although still, as the three evaluation studies demonstrate, remarkably bad - at weeding out untrue ideas, but that mild benefit serves only to dampen actual innovation and inquiry and to narrowly circumscribe the permissible bounds of scientific discussion.
I’m very much outside the scientific system. I’ve had an intuition for years that something has gone wrong with the scientific process - the papers are too conspicuously jargon-y; the cozy relationships between research labs, universities, and governmental grants look too much like graft; the genuine breakthroughs, particularly in domains like physics, seem to be in short supply - and a critique like Mastroianni’s helps to pinpoint where the rot may have set in: a robust-sounding process (what could appear more daunting than ‘peer review’?) that is in fact a series of winking agreements between professionals, one that stifles creativity and encourages business-as-usual, an ever-perpetuating conformity.
PORN PORN PORN
In Aeon, Kathleen Lubey, a professor of English at St. John’s University, offers up an alternative sort of space in which innovation is encouraged, radical new ideas are given a hearing, and discussion is genuinely free and uninhibited. That would be porn - or, specifically, her area of expertise, pornographic writing of the 18th century, which, she claims, was a critical site for free expression in the political ferment leading to the French Revolution.
In the way of a careful scholar, Lubey keeps her claims as narrow as possible. She is taking issue with Andrea Dworkin’s insistence that pornography is “the blueprint of male supremacy….the essential sexuality of male power.” Dworkin’s categorical claim is almost ridiculously easy to dismiss - is gay porn also violence against women? - but Dworkin’s theories have an ever-increasing hold on the culture. As Amia Srinivasan writes in The London Review of Books, “Dworkin is being rediscovered - and rehabilitated - by a new generation of young women….giving definitive expression to the radical feminist tenet that sexual domination was the beating heart of patriarchy.” Dworkin really did prove to be the patron saint of #MeToo and of the ‘new prudery,’ the belief that heterosexuality is inherently exploitative, that male sexual desire is, by itself, a violation of the social compact. As mimetic ideas go, this one is proving very hard to stop - it gives women a powerful new weapon in the perennial war of the sexes - and an argument like Lubey’s, although constrained, is important for chipping away at it.
Lubey takes aim at Dworkin’s most famous claim - that pornography is exploitation - and contends that any real study of the historical record yields a more complicated picture. “Having spent years in library archives reading obscene works, I’ve found that pornography says many things at once,” she writes. The 18th century pornography that Lubey focuses on is dedicated, above all, to dismantling the reigning myth of the era - the belief in the inherent non-sexuality, the ‘modesty,’ of women. Her favorite text of the genre, the anonymous 1749 novel The History of the Human Heart, “disputes the dominant cultural belief that women are innately modest….[the author] calls modesty the ‘greatest Ornament’ of women but doesn’t believe it’s a natural condition,” Lubey writes. That argument - later to be a tenet of the Sexual Revolution - is paired with a surprisingly open-ended political conversation, somewhat in the way that, in the 1960s, the centerfolds of Playboy and Esquire would be accompanied by serious writing from top-shelf writers. “French pornography produced politically disciplined theories of personal liberty,” Lubey writes - all part of a revolution in mores in that period that picked up a political component as well.
There is a difference between the ‘porn’ that Lubey champions and the porn that besets our era. 18th century porn is imaginative and written and, essentially, harms no one. 21st century porn is an industry - and rests on the exploitation (or employment, depending on how you look at it) of real women. What Lubey is talking about is better described as ‘erotica,’ a tame niche compared to the mass-disseminated prostitution we have now, which does raise real ethical concerns. (The strongest argument made by Dworkin and her collaborator Catharine MacKinnon came from the role they played in exposing the working conditions of Linda Lovelace, the star of Deep Throat, who had been seen up to that point as the epitome of the Sexual Revolution and was revealed, on closer analysis, to have been ruthlessly exploited.)
But Lubey is being very careful and making a very limited point. Porn is patriarchy, argues Dworkin - which has become a completely standard talking point in our era. What Lubey finds in the erotica of a pre-industrial era is a free space in which ‘pornography’ becomes the site of open-ended dialogue, of a genuine, concerted attempt to understand how sexuality works and how its truths might differ from the received teachings of the era’s authorities. “[Pornography] is honest because it showcases the hard, often confusing work of reconciling private desire with public life,” writes Lubey. So - in other words - down with peer review; long live erotica. What erotica models (even if 21st century porn does not) is a conversation in which received ideas are put to the test, in which a taken-for-granted proposition (e.g. that a woman’s modesty is inborn) is challenged, subjected to the counter-evidence of life experience, the question decided through a very-much-uncensored debate. Have those kinds of conversations, and surprising, unmimetic things start to happen. At the moment everybody assumes that pornography is a patriarchal form of exploitation - for the good reason that that’s what the vast majority of our porn looks like. But in the decade that - as Lubey proudly reports - she has dedicated to “reading pornography from the 18th and 19th centuries,” she finds, not so surprisingly, that porn, or erotica, has had many different functions across history, among them the 18th century’s concerted challenge to a particularly pervasive type of patriarchal repression.