"The only real analogy is that AI may soon be to writing as the 19th century camera was to figurative imagery – with a machine taking over the core function of an activity (doing it with greater technical proficiency than a human being could possibly manage) and in so doing profoundly reshaping our understanding of that activity. If we allow AI to write for us — not help us in our research, not suggest directions but actually write (which is what all sorts of trade publications are in the process of doing, laying off their staff and welcoming in AI to generate the content) — then we are giving over our agency to a machine."
So I assume you say this about photographers like Diane Arbus and Vivian Maier, whose photography of course reflects no personal vision or aesthetic according to your logic, because they have given over their agency to a machine? Cool cool. No offense but I will go to the art gallery with somebody else 😄
Love Maier!
I will grant Henry his two core points—that ignoring AI is foolish and that connecting with the creator might be an overblown concern post-AI.
However, his paean to AI in the first part of this post was so techno-optimist-obnoxious as to be utterly off-putting. I'll grant him grace, since I suspect he got swept up in making the argument, but that was not the way to convince a skeptical audience to pay more attention to AI.
If AI happens to make a sublime work of the ages, awesome. I'll enjoy it *when* it comes. But if it just adds more dreck to be forgotten along with the work of mere mortal authors, well I guess that's life ain't it.
AI can't be ignored, but I see no reason to valorize it for the arts.
Apparently it was because you conceded my main points!
Fair enough. But if you had avoided assuming the future is now (a common overreach in arguments of this type) it would have been more persuasive both rationally and tonally.
There are two separate points here.
One is whether any novel that features the advent or existence of AI as its main point will ever be considered "art." Defined as still being read many years after being published. I say no to that. I cannot think of any novel that is part of the classic or modern canons that had as its main subject a technological development. Even Hard Times was still about the people. And it isn't close to Dickens' best.
The other point is whether AI itself can write a novel that will enter the canon. We won't know that for many decades, because it takes decades to establish what lasts. Any list of prize winners from fifty or seventy-five years ago establishes that. I'm rooting against that happening the same way I rooted for Rocky to beat Drago in Rocky IV.
I don’t know about AI as the main feature, but certainly as a feature. I mean, it’s already important to a lot of good sci-fi, right? As for the other point, I find it inconceivable that once AI is good enough to write good novels they won’t be recognised, but they may well be recognised as something distinct, be that as genre, medium, or in some other way.
I think an AI novel will be recognized but will a future Henry Oliver be reading an AI novel the way you read Middlemarch or Austen or Dickens? That is so hard to tell.
Here’s an example: I knew a big Wall Street guy who had these strips of names over his fireplace. These names, he said, were people he admired and included a few mistresses, a few famous people he wanted you to know he knew. So I’m writing a scene in a garden and for some reason I remember this guy, now deceased, and so I incorporate a character having a rotating series of busts of people he admires. I’m not saying my scene is so great, but I wonder how AI would make that connection. I guess if my book ever sees the light of day, it could scrape the idea, but how would it know how to use it and when and with which characters around?
As with our ability to read comfortably, our sense that there are hypocrisies that necessarily imply viciousness comes and goes. Writers make small talk about books and your person will say 'I cannot read'. They may or may not be in that liminal state, from one of those times when one cannot read but ten pages together, which takes a week of application to reading to overcome. Agree with me that either Giles Goat-Boy or The Sot-Weed Factor is high-level, and there we have the meat of this matter, methinks. When betimes AI appears to be having more fun writing than [us, or than] having pain? That subtle persuasion is kind of how J. Barth wins us over. What only ROMly relates: John tried to make internet speak circa 1996 human, or humane, in the book Coming Soon, which was clunky and nerve-jangling. We will sorely miss ourselves; look at Kafka's Penal Colony. We read that story because of its jokelike structure, although I never LOL'ed once. But the robot will write that story and tell us why it is funny, in my guesstimate.
No idea why this back and forth has got me so engaged, maybe because it's just appearing on my feed in the kind of 'number 23' way of it all. Maybe because this kind of thing makes literary chat fun instead of tedious.
I commented on Henry's post, but I wanted to say that reading this prompted me to write two pieces related to this angle in the AI discourse.
Your debate is mostly about AI’s role in writing as a collaborator, a disruptor, a hollow imitation of human experience, but I can't help but see what’s missing. AI isn’t just a new kind of writer. It’s a sorting system.
You’re arguing about whether literature can “incorporate” AI, but what’s really at stake is who gets optimised and who gets discarded, not so much the what as the who: not just writers, but workers, thinkers, and participants in cultural production.
That's the simple beginning, but then it gets weird.
AI and fear function the same way. Both are pattern-based, trained on past data to anticipate the future, reacting before understanding. The whole thing with AI isn’t new, just a faster version of an ancient concept. The Oracle at Delphi was AI. The I Ching was AI. Bureaucracy is AI. Just input, process, output.
I just wrote a piece that digs into this and I'd love your notes if you could - I teach some binary logic, love on Foucault, and bring in the ornamental hermits of 18th-century England, because I needed some light in a piece about why we’re all already being processed, whether we like it or not.
Would genuinely just love to chat with writers who are talking about AI and interested to see where it goes.
https://ellastening.substack.com/p/part-one-ai-has-been-here-since-the
either way, I'm still readin', have a great day xx
I don't know about the Oracle of Delphi, but having played with the I Ching and worked in a Bureaucracy, I'd be curious why you consider these AI.
The I Ching is a randomizer that might highlight possibilities that were not immediately obvious to the querent. The raw answer arrives in a minute (throw three coins six times), but a useful reading takes skill and practice to apply the results to the real-world question at hand. Heck, that's why doing the old yarrow-stick process is so powerful - it buys time for your brain to settle and focus on the question.
Same for Bureaucracy. I agree there is a metric shitton of bad bureaucracy that is dumber than AI. But done well, wielding power (within the confines of your authority within the hierarchy) is an art. It's all about navigating where rules "touch grass". A good bureaucrat thinks much more about working with fellow humans than about the rules.
So, I totally see what you mean about the I Ching (and the Oracle of Delphi, back in its heyday) relying on human interpretation; it’s definitely not some cold, mechanical entity.
However, both the Oracle and I Ching basically gather scattered inputs and filter them down into a single, presentable outcome. That’s the same principle driving AI—just a much harsher, less relatable, never-ending funnel that takes everything in, then spits out a definitive answer. The spiritual or ritual aspect of other processes might obscure my rationale (and give it the quality of the cultural, historical and the sacred which is extremely important to people), but at the end of the day, you ask your question, the system processes it, and there it is: one outcome served up, courtesy of “completion logic.”
Something like a Magic 8-Ball works like this too. There is a finite number of responses inside it. A person has a question (input), then shakes it (process), and then the ball randomises its selection and presents it (output).
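The input → process → output loop described above is small enough to sketch in a few lines of Python. This is just a toy illustration of the pattern, and the response strings are made up for the example rather than taken from an actual Magic 8-Ball:

```python
import random

# A finite, fixed pool of answers lives "inside the ball".
# (Illustrative strings, not the canonical Magic 8-Ball set.)
RESPONSES = [
    "It is certain.",
    "Ask again later.",
    "Outlook not so good.",
    "Signs point to yes.",
]

def magic_8_ball(question: str, rng: random.Random) -> str:
    """Input (a question) -> process (a random draw) -> output (one answer).

    Note the question itself never influences the draw; that interpretive
    leap is left entirely to the person asking.
    """
    return rng.choice(RESPONSES)

answer = magic_8_ball("Will AI write a canonical novel?", random.Random())
print(answer)
```

The point of the sketch is what is *missing*: the system only ever collapses the question into one of its pre-baked outputs, and any meaning made of the answer happens outside the loop, in the human reading it.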
As for bureaucracy, to me it’s just another system we dreamed up to manage the chaos of daily life in a straight line—label, sort, stamp, done. That is not to oversimplify it or diminish its importance or its potential, but it was built, per governments' intentions, to run societies in a linear, dualistic way (approve or deny, pick or discard) rather than something more holistic and relational.
And I agree, and it's so needed, that we’ve got wonderful people in these structures bringing empathy and nuance (because we need this to keep bureaucracy from functioning entirely like a soulless AI), but the underlying code is still: input → process → output. It’s basically a huge machine built on “finishing” thoughts for us so we can move on to the next thing. A system isn't limited to the digital or the mechanical; it is simply anything we create to simplify the way people’s minds actually work, a form of cognition that we simply cannot replicate in a truly “cloned” way.
We’ve done this forever. We’re so used to needing closure—this idea that to survive is to keep pressing forward—that we can’t seem to maintain relational, open-ended systems. Even the earliest computers were made to do exactly that: solve, produce an answer, finalise. It’s up to us what we do with that result - that’s where the human element comes in, not the system that gave the answer. AI looks shiny and new, but it’s really just the same pattern on steroids.
As for things that truly operate outside these systems, without individualising the power of human thought or attributing it any more “power” than it needs: nature and climate stand way outside these tidy lines in the truest sense. They don’t care about our categories or logic structures—they’re on their own cyclical, interconnected path that we can’t tame, however we try by extracting resources.
It’s a solid reminder that not everything is meant to be pinned down like a butterfly under glass. However, humans have historically wanted to build things that enforce structured, reliable outputs in order to rationalise the choices we make and stop everyone from acting on pure instinct and intent; that impulse has been around since people decided they wanted to "progress" or "civilise" (gross word, but it's the right one).
Anyway, really keen to hear how all this sits with you, especially around how the I Ching’s randomness and repetition might overlap with the iterative approach folks use in AI now.
Thanks for making me think! E x
Interesting. I'm actually inclined to agree with you more about Bureaucracy than with the I Ching.
As a bureaucrat, my job is to enact the will of my superiors up the ladder, who ultimately derive their authority from answering to the folks who won the most recent election. If the political machine had perfect knowledge and could write comprehensive rules, then my proper role would be to just execute (or quit if an order was unconscionable). Ideally my human touch wouldn't be necessary, but there is a real-life opportunity cost to overly prescriptive rules, which result in too much procedural overhead; that opens up areas for judgement calls. (Conversely, with not enough oversight I might end up with a red Ferrari in my driveway, courtesy of the people.)
As for the I Ching, I agree that the input and initial output are as you describe. But that outcome is neither presentable nor definitive (unlike the simple answers of the Magic 8-Ball). Stopping at the raw output is essentially meaningless. Everything that matters about the I Ching practice is interpreting the random outcome into something that applies to the question.
I have no interest in prescriptive divination, as commonly found in the Tarot booklets that come with those decks. But since those booklets are so common, I might be the weirdo on this one, rejecting simple answers and demanding that additional human interpretive leap. Even though I call myself a "woo-adjacent atheist", I don't believe there is any actual power in the sticks, coins, or cards. Divination is merely one practice for scrambling the brain to recontextualize a question (which can also be a dangerous self-confirmation practice).
Ahh, I see what you mean! I love this distinction.
I think the key difference in how we’re seeing it is that I’m focusing on the structural mechanics of these systems—how they function as input-process-output loops. I'd say that’s because I’m career hospitality, where compulsive, intense sorting and reacting to input is a relentless part of the job, always in response to absolute chaos. I also studied tech, but I’ve never worked in it—which has probably kept me sane. Hospo keeps me human, tech keeps me aware of systems, and I read to fill in all the other shit.
I think where you're coming from (and where I might have been missing you) is that you’re zoning in on interpretation—which is a much more human, meaning-making layer, untouched by the computational side of things. And I agree, that's where the I Ching really departs from a “definitive answer” system like AI or a Magic 8-ball—it requires a uniquely human type of cognition in the translation step. That makes it different.
But I also think that’s where the real philosophical clash is. AI and bureaucracy are designed to flatten ambiguity. The I Ching, and a lot of older epistemologies, preserve ambiguity, or even embrace it as essential to understanding.
Lately, I’ve been obsessed with the Kyoto School—especially its “founder,” Nishida, and his whole idea of “nothingness” as a field rather than an absence. They were trying to keep a non-linear, relational approach to thought alive at a time when Western, binary logic was bulldozing its way across the world. It was a reaction to war, invasion, and existential threat—Japan was facing the dissolution of its own philosophies, trying to hold onto tradition without succumbing to dread.
So what did they do? They started studying Nietzsche, almost obsessively, because Western philosophy was the only place where they found descriptions of the dehumanisation they were experiencing. That’s terrifying—but also deeply valuable. It reveals what systems that run on completion logic do to the human mind. But it also shows how people defy, restructure, and adapt in order to remain human.
That’s why Kyoto School philosophy feels so alien to people raised in a Western framework—there’s no “conclusion,” no I’m sad, therefore I must get happy. It’s all about relationality—how things exist in relation to one another rather than as discrete, isolated objects. The I Ching works this way too, as you’re describing it—it doesn’t give you a singular truth but a framework to think through the flux of things. It’s a way of dwelling in uncertainty rather than rushing to resolve it.
That’s also why AI is such an uncanny disruptor—it mimics the interpretive part without actually understanding what it’s doing. It does a very convincing impression of pattern-based cognition, but it’s missing that essential, context-driven, why does this matter? filter that a person naturally brings.
And I think that’s what freaks people out the most about AI: it forces us to behave computationally instead of just… thinking. But the way everyone’s reacting is insane—demonising a non-sentient process while simultaneously condemning anyone who doesn’t understand it the same way they do. Not cool, and definitely not helpful.
And yeah, back to bureaucracy—it operates as if that “human cognition” filter doesn’t exist (even when it does in practice). It’s trying to reduce everything to a single pathway, despite the messy reality of human life. And I think you’re totally right—good bureaucracy acknowledges the inevitable gaps, the things that don’t fit neatly into a procedural framework. That’s where human intervention should come in. A rigid system that refuses that intervention? That’s when people start getting ground up in the gears.
And honestly, isn’t that why most post office workers are fucking angry all the time? And why everyone forced to go to the post office is even angrier? It’s a natural rejection of the systems we’re forced into. The rigidity means that the moment your humanity comes in—your natural way of thinking and living—it’s in stark contrast to the role you’re expected to perform.
I also love your “woo-adjacent atheist” take on divination—that’s such an interesting perspective. The I Ching (and Tarot, if you ignore the guidebooks) is more like a structured randomness generator that scrambles your brain just enough to make you engage with your own question in a new way. In a sense, it’s not a competitor to human cognition but an accompaniment to it.
It’s less about getting an answer and more about creating the conditions to find an answer that was already lurking in your subconscious. Which, yeah, is basically the opposite of AI’s function. AI collapses ambiguity into output. I Ching preserves ambiguity and asks you to sit with it.
So maybe that’s the biggest difference: AI is built to resolve things. We, when left to our own devices, often don’t want resolution. We want to circle, reinterpret, rethink. Or, at the very least, have the choice to do so.
Anyway, this is the kind of chat that makes AI discourse actually fun instead of doomsdayish or pedantic. Thanks for going deep on this—I could honestly talk about it forever.
I love how you centered ambiguity as the power in a "good" divination practice. That's why the yarrow-stick method for the I Ching (which can take 20 minutes) is so powerful. The dull process lulls your brain into ambiguity before the answer arrives.
As for bureaucracy, I suspect that people like me (assuming it's not just folks blowing smoke up my ass) because I am willing to delve into ambiguity. I've always been happy to move in the "light grey" area, and even when I slam into a hard limit for the agencies that I serve, I explain the process and how to be more successful next time...or even ways they might be able to find a workaround (such as who else to appeal to and ways to strengthen their request).
By the book, bureaucracy is a brutally cold logic, which can't fully be avoided. But having an insider-advocate who helps people understand that logic paradoxically opens avenues into ambiguity: paths to pursue, and a sense of agency that reduces the sting of powerlessness in the face of an otherwise barely comprehensible regulatory regime.
We need more light-grey diving. Heaps of shells, corals, treasure, yarrow, coins, bits and pieces to find that keep us all feeling like everything is still alive, because it IS. My opinion is that everyone, regardless of their job or whatever they fill their days with, can't be categorised, because it just homogenises everything about them, gives them no space to be who they are, and makes it much easier to succumb to that "I must feel complete" horseshit rhetoric. Where the I Ching gives comfort in ambiguity, it's honestly a bodily response to the actual need for chaos as a component of true balance. Without chaos, everyone just creates it for themselves, cause otherwise, what the hell is really even being a person? Not that!
Keep illuminating, man - hahahaha. But seriously, I mean it! It's why even Substack convos can be a little bit of humanity, despite the veil. We're both just sitting in ambiguity, in someone's yard (thanks Sam & Henry in this case), having a conversation and using tech for what it's really useful for: making a bridge that otherwise wouldn't be there.
Fantastic debate. Passionate, articulate, persuasive, and wonderful examples. Just for fun, I used my custom debate GPT (I am a debate coach) to declare a "winner." You can see the chat here.
"The only real analogy is that AI may soon be to writing as the 19th century camera was to figurative imagery – with a machine taking over the core function of an activity (doing it with greater technical proficiency than a human being could possibly manage) and in so doing profoundly reshaping our understanding of that activity. If we allow AI to write for us — not help us in our research, not suggest directions but actually write (which is what all sorts of trade publications are in the process of doing, laying off their staff and welcoming in AI to generate the content) — then we are giving over our agency to a machine."
So I assume you say this about photographers like Diane Arbus and Vivian Maier, whose photography of course reflects no personal vision or aesthetic according to your logic, because they have given over their agency to a machine? Cool cool. No offense but I will go to the art gallery with somebody else 😄
Love Maier!
I will grant Henry his two core points—that ignoring AI is foolish and the concept of connecting with the creator might be an overblown concept post-AI.
However, his paean to AI in the first part of this post was so techno-optimist-obnoxious to be utterly off-putting. I'll grant him grace in that I suspect he might have been swept up in making the argument, but that was not the way to convince a skeptical audience to pay more attention to AI.
If AI happens to make a sublime work of the ages, awesome. I'll enjoy it *when* it comes. But if it just adds more dreck to be forgotten along with the work of mere mortal authors, well I guess that's life ain't it.
AI can't be ignored, but I see no reason to valorize it for the arts.
Apparently it was because you conceded my main points!
Fair enough. But if you had avoided assuming the future is now (a common overreach in arguments of this type) it would have been more persuasive both rationally and tonally.
There are two separate points here.
One is whether any novel that features the advent or existence of AI as its main point will ever be considered "art." Defined as still being read many years after being published. I say no to that. I cannot think of any novel that is part of the classic or modern canons that had as its main subject a technological development. Even Hard Times was still about the people. And it isn't close to Dickens' best.
The other point is whether AI itself can write a novel that will enter the canon. We won't know that for many decades. Because it takes decades to establish what lasts. Any list of prize winners from fifty or seventy five years ago establishes that. I'm rooting against that happening the same way I rooted for Rocky to beat Drago in Rocky IV.
I don’t know about AI as the main feature, but certainly as a feature. I mean, it’s already important to a lot of good sci fi right? As for the other point, I find it inconceivable that once AI is good enough to write a good novel they won’t be recognised, but they may well be recognised as something distinct, be that as genre, medium, or in some other way.
I think an AI novel will be recognized but will a future Henry Oliver be reading an AI novel the way you read Middlemarch or Austen or Dickens? That is so hard to tell.
Here’s an example: I knew a big Wall Street guy who had these strips of names over his fireplace. These names, he said, were people he admired and included a few mistresses, a few famous people he wanted you to know he knew. So I’m writing a scene in a garden and for some reason I remember this guy, now deceased, and so I incorporate a character having a rotating series of busts of people he admires. I’m not saying my scene is so great, but I wonder how AI would make that connection. I guess if my book ever sees the light of day, it could scrape the idea, but how would it know how to use it and when and with which characters around?
As with our ability to read comfortably, our sense that there are hypocrisies that necessarily imply viciousness comes and goes. Writers make small talk abt books and your person will say 'i cannot read'. They maybe or not in that liminal state from one of those times that one cannot read but 10 pages together, what takes a week of application to reading to overcome. Agree with me that either Giles Goat Boy or the Sotweed Factor is highlevel, and there we have the meat of this matter methinks. When betimes Ai appears to be having more fun writing than[us, or than]having pain? That subtle persuasion is kind of how J Barth wins us over. What only ROMly relates, was that John tried to make internet speak circa 1996 human or humane in book Coming Soon, which was clunky and nerve jangling. We will sorely miss ourselves, look at Kafka's Penal Colony. We read that story because of its jokelike structure, altho I never Lol'ed once. But the robot will write that story and tell us why it is funny, in my guesstimate..
No idea why this back and forth has got me so engaged, maybe because it's just appearing on my feed in the kind of 'number 23' way of it all. Maybe because this kind of thing makes literary chat fun instead of tedious.
I commented on Henry's post, but I wanted to say that reading this prompted me to write two pieces related to this angle in the AI discourse.
Your debate is mostly about AI’s role in writing as a collaborator, a disruptor, a hollow imitation of human experience, but I can't help but see what’s missing. AI isn’t just a new kind of writer. It’s a sorting system.
You’re arguing about whether literature can “incorporate” AI, but what’s really at stake is who gets optimised and who gets discarded, not so much the what. Not just as a writer, but as a worker, thinker, and participant in cultural production.
That's the simple beginning, but then it gets weird.
AI and fear function the same way. Both are pattern-based, trained on past data to anticipate the future, reacting before understanding. The whole thing with AI isn’t new, just a faster version of an ancient concept. The Oracle at Delphi was AI. The I Ching was AI. Bureaucracy is AI. Just input, process, output.
I just wrote a piece that digs into this and I'd love your notes if you could - I teach some binary logic, love on Foucault, and insert the ornamental hermits of 18th-century England, because I needed some light on a piece that's about why we’re all already being processed, whether we like it or not.
Would genuinely just love to chat with writers who are talking about AI and interested to see where it goes.
https://ellastening.substack.com/p/part-one-ai-has-been-here-since-the
either way, I'm still readin' , have a great day xx
I don't know about Oracle of Delphi, but having played with the I Ching and being in a Bureaucracy I'd be curious why you argue why you consider these AI.
I Ching is a randomizer that might highlight possibilities that was not immediately obvious to the querent. The raw answer arrives in a minute (throw coins eight times), but a useful reading takes skill and practice to apply the results to the real world question at hand. Heck, that's why doing the old yarrow stick process is so powerful - it buys time for your brain to settle and focus on the question.
Same for Bureaucracy. I agree there is metric shitton of bad bureaucracy that is dumber than AI. But done well, wielding power (with in the confines of your authority within the hierarchy) is an art. It's all about navigating where rules "touch grass". A good bureaucrat thinks much more about working with fellow humans than the rules.
So, I totally see what you mean about the I Ching (and the Oracle of Delphi, back in its heyday) relying on human interpretation; it’s definitely not some cold, mechanical entity.
However, both the Oracle and I Ching basically gather scattered inputs and filter them down into a single, presentable outcome. That’s the same principle driving AI—just a much harsher, less relatable, never-ending funnel that takes everything in, then spits out a definitive answer. The spiritual or ritual aspect of other processes might obscure my rationale (and give it the quality of the cultural, historical and the sacred which is extremely important to people), but at the end of the day, you ask your question, the system processes it, and there it is: one outcome served up, courtesy of “completion logic.”
Something like a magic 8 ball works like this too. There is a finite number of responses inside it. Person has a question (input), then shakes it (process), and then the ball randomises its selection and presents it (output).
As for bureaucracy, to me it’s just another system we dreamed up to manage the chaos of daily life in a straight line—label, sort, stamp, done. That is not to simplify it or degrade its importance or its potential, but it was built as per the governments intention to run on societies in a linear, dualistic method (approve or deny, pick or discard) rather than something more holistic and relational.
And I agree so much, and is so needed that we’ve got wonderful people in these structures bringing empathy and nuance (because we need this to make bureaucracy not entirely function like a soulless AI), but the underlying code is still: input → process → output. It’s basically a huge machine built on “finishing” thoughts for us so we can move on to the next thing. A system is not kept to the digital or the mechanical, it is simply everything we create in order to simplify the way that people’s minds actually work, which is a form of cognition that we simply cannot replicate in a truly “cloned” way.
We’ve done this forever. We’re so used to needing closure—this idea that to survive is to keep pressing forward—that we can’t seem to maintain relational, open-ended systems. Even the earliest computers were made to do exactly that: solve, produce an answer, finalise. It’s up to us as to what we do with that result - that’s where the human element comes in, not the system that gave the answer. AI looks shiny and new, it’s really just the same pattern on steroids.
I think in order to come to a deeper understanding of things that truly operate outside without individualising the power of human thought and attributing it any more “power” than it needs, nature and climate stand way outside of these tidy lines in the truest sense. They don’t care about our categories or logic structures—they’re on their own cyclical, interconnected path that we can’t tame albeit how we try by extracting resources.
It’s a solid reminder that not everything is meant to be pinned down like a butterfly under glass. However, I think that understanding that we as humans have historical wanted to build things for ourselves that enforce certain structural and reliable outputs in order to rationalise the choices we make, and stop everyone from acting on pure instinct and intent, has been around since people decided they wanted to "progress" or "civilise" (gross word, but it's the right one).
Anyway, really keen to hear how all this sits with you, especially around how the I Ching’s randomness and repetition, might overlap with the iterative approach folks use in AI now.
Thanks for making me think! E x
Interesting. I'm actually inclined to agree with you more about Bureaucracy than with the I Ching.
As a bureaucrat, my job is to enact the will of my superiors up the ladder, ultimately deriving their authority from answering to the folks who won the most recent election. If the political machine had perfect knowledge and could write comprehensive rules, then my proper role would be to just execute (or quit if an order was unconscionable). Ideally my human touch wouldn't be necessary, but there is a real life opportunity cost of overly prescriptive rules which result in too much procedural overhead. Which opens up areas for judgement calls. (Conversely, not enough oversight and I might end up with a red Ferrari on my driveway, courtesy of the people).
As for the I Ching, I agree is that the input and initial output are as you describe. But that outcome is neither presentable nor definitive (unlike the simple answers of the magic 8-ball). Stopping at the raw output is essentially meaningless. Everything that matters about the I Ching practice is interpreting the random outcome into something that applies to the question.
I have no interest in prescriptive divination, as commonly found in the booklets that come with Tarot decks. But since those booklets are so common, I might be the weirdo on this one, rejecting simple answers and demanding that additional human interpretive leap. Even though I call myself a "woo-adjacent atheist," I don't believe there is any actual power in the sticks, coins, or cards. Divination is merely one practice for scrambling the brain to recontextualize a question (which can also be a dangerous self-confirmation practice).
Ahh, I see what you mean! I love this distinction.
I think the key difference in how we’re seeing it is that I’m focusing on the structural mechanics of these systems—how they function as input-process-output loops. I'd say that’s because I’m career hospitality, where compulsive, intense sorting and reacting to input is a relentless part of the job, always in response to absolute chaos. I also studied tech, but I’ve never worked in it—which has probably kept me sane. Hospo keeps me human, tech keeps me aware of systems, and I read to fill in all the other shit.
I think where you're coming from (and where I might have been missing you) is that you’re zoning in on interpretation—which is a much more human, meaning-making layer, untouched by the computational side of things. And I agree, that's where the I Ching really departs from a “definitive answer” system like AI or a Magic 8-ball—it requires a uniquely human type of cognition in the translation step. That makes it different.
But I also think that’s where the real philosophical clash is. AI and bureaucracy are designed to flatten ambiguity. The I Ching, and a lot of older epistemologies, preserve ambiguity, or even embrace it as essential to understanding.
Lately, I’ve been obsessed with the Kyoto School—especially its “founder,” Nishida, and his whole idea of “nothingness” as a field rather than an absence. They were trying to keep a non-linear, relational approach to thought alive at a time when Western, binary logic was bulldozing its way across the world. It was a reaction to war, invasion, and existential threat—Japan was facing the dissolution of its own philosophies, trying to hold onto tradition without succumbing to dread.
So what did they do? They started studying Nietzsche, almost obsessively, because Western philosophy was the only place where they found descriptions of the dehumanisation they were experiencing. That’s terrifying—but also deeply valuable. It reveals what systems that run on completion logic do to the human mind. But it also shows how people defy, restructure, and adapt in order to remain human.
That’s why Kyoto School philosophy feels so alien to people raised in a Western framework—there’s no “conclusion,” no I’m sad, therefore I must get happy. It’s all about relationality—how things exist in relation to one another rather than as discrete, isolated objects. The I Ching works this way too, as you’re describing it—it doesn’t give you a singular truth but a framework to think through the flux of things. It’s a way of dwelling in uncertainty rather than rushing to resolve it.
That’s also why AI is such an uncanny disruptor—it mimics the interpretive part without actually understanding what it’s doing. It does a very convincing impression of pattern-based cognition, but it’s missing that essential, context-driven, why does this matter? filter that a person naturally brings.
And I think that’s what freaks people out the most about AI: it forces us to behave computationally instead of just… thinking. But the way everyone’s reacting is insane—demonising a non-sentient process while simultaneously condemning anyone who doesn’t understand it the same way they do. Not cool, and definitely not helpful.
And yeah, back to bureaucracy—it operates as if that “human cognition” filter doesn’t exist (even when it does in practice). It’s trying to reduce everything to a single pathway, despite the messy reality of human life. And I think you’re totally right—good bureaucracy acknowledges the inevitable gaps, the things that don’t fit neatly into a procedural framework. That’s where human intervention should come in. A rigid system that refuses that intervention? That’s when people start getting ground up in the gears.
And honestly, isn’t that why most post office workers are fucking angry all the time? And why everyone forced to go to the post office is even angrier? It’s a natural rejection of the systems we’re forced into. The rigidity means that the moment your humanity comes in—your natural way of thinking and living—it’s in stark contrast to the role you’re expected to perform.
I also love your “woo-adjacent atheist” take on divination—that’s such an interesting perspective. The I Ching (and Tarot, if you ignore the guidebooks) is more like a structured randomness generator that scrambles your brain just enough to make you engage with your own question in a new way. In a sense, it’s not a competitor to human cognition but an accompaniment to it.
It’s less about getting an answer and more about creating the conditions to find an answer that was already lurking in your subconscious. Which, yeah, is basically the opposite of AI’s function. AI collapses ambiguity into output. I Ching preserves ambiguity and asks you to sit with it.
So maybe that’s the biggest difference: AI is built to resolve things. We, when left to our own devices, often don’t want resolution. We want to circle, reinterpret, rethink. Or, at the very least, have the choice to do so.
Anyway, this is the kind of chat that makes AI discourse actually fun instead of doomsdayish or pedantic. Thanks for going deep on this—I could honestly talk about it forever.
I am like an AI that way. Ha.
I love how you centered ambiguity as the power in a "good" divination practice. That's why the yarrow-stick method for the I Ching (which can take 20 minutes) is so powerful: the dull process lulls your brain into ambiguity before the answer arrives.
As for bureaucracy, I suspect that people like me (assuming it's not just folks blowing smoke up my ass) because I am willing to delve into ambiguity. I've always been happy to move in the "light grey" area, and even when I slam into a hard limit for the agencies I serve, I explain the process and how to be more successful next time...or even ways they might find a workaround (such as who else to appeal to and ways to strengthen their request).
By the book, bureaucracy is a brutally cold logic, which can't fully be avoided. But having an insider-advocate who helps people understand that logic paradoxically opens avenues into ambiguity: it gives them paths to pursue and a sense of agency, which reduces the sting of powerlessness in the face of an otherwise barely comprehensible regulatory regime.
We need more light-grey diving: heaps of shells, corals, treasure, yarrow, coins, bits and pieces to find that keep us all feeling like everything is still alive, because it IS. My opinion is that no one, regardless of their job or whatever they fill their days with, can be categorised, because categorising just homogenises everything about them, gives them no space to be who they are, and makes it much easier to succumb to that "I must feel complete" horseshit rhetoric. Where the I Ching gives comfort in ambiguity, it's honestly a bodily response to the actual need for chaos as a component of true balance. Without chaos, everyone just creates it for themselves, cause otherwise, what the hell is really even being a person? Not that!
Keep illuminating the man - hahahaha. But seriously, I mean it! It's why even Substack convos can be a little bit of humanity, despite the veil. We're both just sitting in ambiguity, in someone's yard (thanks Sam & Henry in this case), having a conversation and using tech for what it's really useful for: making a bridge that otherwise wouldn't be there.
Fantastic debate. Passionate, articulate, persuasive, and wonderful examples. Just for fun, I used my custom debate GPT (I am a debate coach) to declare a "winner." You can see the chat here.
https://chatgpt.com/share/67b3a5b9-d5a4-800d-ba51-d98b76b56aa5
For the record, I lean heavily towards Henry's position, but I appreciate and understand Sam's concerns.