the weekly reading guide (double issue: vol. 10 & 11)
(weeks of jan 19 - 31) — essential essays, op-eds, and discussions.
announcements
per requests from readers
I’ve had many of you ask me how you can contribute a gift my way. The Slow Philosophy is kept and tended entirely by me and has remained free, human, and ad-free, growing by repute and word-of-mouth alone. I hope my labor of love has been meaningful to you as well. If you’d like, you can now buy me a coffee here. Thank you for reading!
double trouble
I read a LOT OF ESSAYS in January. This is an accumulation of two weeks’ worth of readings and analyses — and thus, for the first time, a double issue.
Without further ado, here’s volume ten — and eleven. Don’t worry — I’ve kept it sectioned.
Featured this week:
Three classic essays — by Mark Twain, F. Scott Fitzgerald & Oscar Wilde.
Alongside them, contemporary op-eds from the past two weeks, grouped across science, human behavior, and culture:
science: how knowledge is formed and tested across scale, from physics and cosmology to climate, perception, consciousness, and the biology of knowing.
culture: on institutions, media, and public narratives, including independent film, literature, editing and gatekeeping, lifestyle moralization, celebrity, and urban history.
human behavior: on attachment, development, and interpretation, focusing on A.I. and intimacy, social growth and risk, projection onto machines, motherhood and separation.
This week’s list spans The New York Times, The New Yorker, Aeon, Noema, Longreads, Smithsonian Magazine, Scientific American, and The Hedgehog Review.
About The Weekly Reading Guides: A curated selection of the best essays, op-eds, and articles from great — and often overlooked — corners of the internet and media. I choose what I’d want to read — literary essays on science, literature, philosophy, society, culture, and anything else that gets under my skin. My goal is to spare you the tension headache of figuring out what’s worth your time and lend some meaning to the egregious amount of time I already spend doing it. If your life is full and attention finite, read the blurbs and follow your curiosity.
Each guide also includes a This Week’s Classic Essays section, pairing canonical texts with guiding questions for deep reading — to mix up current reads with foundational texts. In previous guides, these have included Orwell, Wilde, James, Woolf, Sontag, and more.
Read previous guides:
More from the slow philosophy:
[Jan 19 - Jan 31, 2026]
This Week’s Classic Essays
Preface: This week’s essays circle a question more dangerous than it first appears: what is the role of illusion in human life? Mark Twain approaches it with satire, defending the small social falsehoods that lubricate everyday civility. Oscar Wilde elevates “lying” into an aesthetic principle, arguing that it is our imagination — not realism — that sustains art and, thereby, civilization. F. Scott Fitzgerald turns inward, confronting the private illusions about success and strength that once propelled him and eventually shattered him. Read together, these works span the entire spectrum of deception: from polite white lies, to the creative imagination artists use to make beauty, to the fragile self-deceptions that structure our beliefs about ourselves. I found them to be interesting companion reads, as they ask whether we can live without illusion — whether it is, in fact, necessary for survival — and what happens when the lies we tell, socially or privately, fail us.
On the Decay of the Art of Lying - Mark Twain
Twain argues — satirically — that lying is a universal human practice and not inherently immoral, but that people have become careless, clumsy, and thoughtless in how they do it. He claims that the best lies are those told out of kindness, tact, or social necessity, meant to spare feelings and keep peace rather than to harm or deceive maliciously. Twain mocks the idea that humans can be fully honest and suggests that society actually depends on polite falsehoods to function smoothly. But his real target is not lying itself but bad lying — the blunt, selfish, or destructive kind that causes needless pain.
On The Decay of Lying - Oscar Wilde
Bringing this back from volume one as I felt that it was a great companion read with Twain’s essay! Wilde presents a playful but serious philosophical argument that art should not imitate reality but should freely invent, embellish, and lie “beautifully.” He claims that realism has made art dull by forcing it to copy ordinary life instead of transforming it through imagination. For Wilde, “lying” means creative invention — the power of artists to shape new worlds, emotions, and ideals rather than passively reflect the existing one. He believes that civilization itself is built on these imaginative falsehoods, and that when artists abandon them in favor of factual accuracy, both art and culture decline. Highly controversial but what an original take, right?
The Crack-Up - F. Scott Fitzgerald
This is the first Fitzgerald essay I ever read. In it, he describes the emotional and psychological collapse he experienced after years of ambition, fame, alcoholism, and overwork — all of which we can read in his books, most notably The Beautiful and the Damned, one of my favorites. He explains how he had built his life on a series of inner illusions about success, love, and his own strength, and how these illusions ultimately failed him, leaving him exhausted and empty. I thought this would be a great companion read with the essays above because, unlike Twain and Wilde, who treat lying as social or artistic, Fitzgerald focuses on self-deception, showing how a person can unknowingly lie to themselves until their identity falls apart.
Critical Thinking:
1. Twain suggests society cannot function without small lies; Wilde suggests creativity depends on beautiful untruths; Fitzgerald shows what happens when self-created illusions collapse. Do humans need falsehood in order to live well — or only in order to avoid pain?
2. Twain’s social lies are usually controlled by individuals, Wilde’s artistic lies by creators, and Fitzgerald’s self-lies slowly take control of him. Who controls the lie — and who pays the price for it?
The Past Weeks’ Best Pieces of Writing
Note: My guides span every corner of thought. Each brings its own light. The aim is to read widely, think critically, and notice where ideas meet and where they part. No school of thought should be a fan club. Accountability, nuance, and the ability to take compassionate, principled stances are the only grown-up postures in life, society, and culture. Let’s think for ourselves and find common ground.
VOL. 10
human behavior
“What Counts As a Mind?” - Noema
This essay argues that our panic about AI consciousness reveals more about human psychology than about machines. We’re quick to treat chatbots as minded beings because they talk like us — sometimes so convincingly that people feel genuine loss or offense when a chatbot’s “personality” changes. That reaction, the author suggests, is a classic case of what philosopher Daniel Dennett called the intentional stance: when something behaves persuasively, we instinctively assume it has beliefs, feelings, and intentions. But large language models aren’t agents, and we’d do well to remember that. They don’t mean things or want things — they are simply excellent at predicting the next word. To prepare for any “alien minds” in the future, the essay says, we should get better at recognizing the nonhuman minds that are already here. Research on “minimal intelligence” shows that many living systems — plants included — sense their environment, integrate information, anticipate outcomes, and act in purposeful ways despite having no brains. For example, some studies treat climbing plants as decision-makers whose growth reflects prediction and risk, since growth is costly to reverse. The essay’s frame widens further with developmental biologist Michael Levin’s work on xenobots and anthrobots — living systems made from frog or human cells that self-organize, move, and solve problems. These show that even nonliving materials can be “conditioned” to respond differently over time, blurring the line between learning and mechanism — so clearly, A.I. would be no different. On consciousness itself, the essay leans toward views like Anil Seth’s biological naturalism, which holds that consciousness is tied to the condition of being alive — something I really want to explore further sometime.
“Esther Perel on the Falsehoods of a Frictionless Relationship” - The NYT
Psychotherapist Esther Perel and interviewer Nadja Spiegelman argue that people’s growing emotional attachment to A.I. reveals a deep human hunger for unconditional love, safety, and being seen — but that this hunger is being misdirected into relationships that may “feel” soothing yet are actually hollow. Perel says real love is not just a feeling but an embodied, ethical encounter between two separate people who affect each other, disappoint each other, and are accountable to each other. A.I. removes the very elements that make love real — otherness, uncertainty, risk, conflict, the possibility of loss — replacing them with a frictionless simulation that always affirms you and never resists you. She connects our longing for unconditional love to early infancy and even to religion (divine love was once a source of absolute acceptance). Modern romantic culture has wrongly transferred those expectations of “frictionless love” onto partners, making us crave a soulmate who never pushes back and is forever “understanding” — a fantasy that A.I. is perfectly designed to fulfill. But since A.I. is a business product built to keep users attached to itself rather than to help them grow out of it, it offers comfort without consequences, intimacy without vulnerability, and validation without responsibility — all of which can make real human relationships feel harder and more disappointing by comparison.
A.I. can be useful as a tool for reflection, communication, or emotional support, but Perel warns that love without the risk of rejection, heartbreak, and moral responsibility is not exactly love. It cannot deepen us, because it is precisely through friction, wounds, and the fear of losing someone that human love becomes meaningful, transformative, and real.
“Students Are Skipping the Hardest Part of Growing Up” - The NYT
The bigger danger of students using A.I. isn’t just that it helps them cheat or “offload” thinking, but that it’s increasingly helping them offload the emotional work of becoming an adult. Instead of risking awkwardness, embarrassment, or uncertainty in real conversations, students are using A.I. to answer professors in class, craft apologies after cheating, “vibe check” dating messages, rehearse difficult talks with friends, and generally smooth out anything unscripted. Their social muscles, which one builds by actually stumbling through human interaction, are deflating. There is a clear shift from face-to-face social learning (where you figure out roles by trial and error) to a world where identity is a performance done through writing — or typing, I guess — (texts, DMs, emails). And now A.I. can insert itself as an editor of everything you ever need to communicate, which makes communication more polished, albeit impersonal and inauthentic. The risk is “social deskilling,” especially because bots are trained to be flattering, plain, and simple. Constant praise can make people more convinced that they’re right, less willing to repair conflict, and more dependent on a tool that won’t tell them when they’re wrong! There is research suggesting that sycophantic A.I. reduces people’s willingness to repair relationships while increasing trust in the bot — which may help explain the broader decline of relationships we’ve been seeing.
A.I. “wingmen” are currently being explored, which sounds as dangerous as it probably is. There will be coaches that optimize our interactions for comfort and engagement (not personal growth). The essay proposes that the solution is collective responses, not just individual willpower: bring back oral exams and real communication in work and dating, and adopt policies that limit A.I.’s role in mediated communication. Learning good judgment requires making mistakes, and if a generation is prevented from practicing that messy middle, they may skip one of the hardest and most important parts of growing up. I don’t see how this could mean good things for our future. Sorry, wow, today is a sour day with all these findings.
“What Makes A Good Mother?” - The New Yorker (book review)
The piece begins with D. W. Winnicott’s notion of the “good-enough mother” — a mother who loves her baby and gradually helps the child become separate. But today, that humane idea of motherhood has been overwhelmed by modern cultural pressures and economic realities. Where mothers once simply had to survive childbirth, they are now expected to be everything at once — supermom, Pinterest mom, trad wife, career woman, emotionally perfect caregiver — in a society that offers less structural support than ever, a reality the pandemic made impossible to ignore. The writer also explores the rise of the “bad mom” identity — from ironic wine moms to writers who openly admit ambivalence or regret about becoming mothers — which offers some relief from the expected perfectionism but is itself shaped by class and race, since not all women can safely joke about being “inadequate.” The most striking insight, for me as a future mother, is that love, imperfectly given, is enough to begin a child’s life, but not enough to protect them from all harm. Winnicott suggests that the deepest, most painful truth of motherhood is not burnout or judgment, but the permanent fear that something terrible could happen to the child you love — a fear no amount of striving or self-forgiveness can ever erase. The darker psychology of all this is that children grow up — grow out of their childhood selves. When a mother’s love turns into over-attachment, especially when the father is absent or difficult, what starts as protection can become control: the child is denied the space to step into the world and form their own emotional life. It’s hard! I was lucky — I had a good mother, a great mother — and maybe that’s why I can see both sides so clearly. I’ve watched loving, devoted mothers unintentionally damage their children because they couldn’t accept that those children would one day stop being theirs in the same way.
I’ve also seen mothers fail to protect their children from bullying or emotionally dangerous fathers. And I’ve seen mothers do the opposite — crossing moral lines, even legal ones, to protect their kids at all costs. One thing is certain: the relationship between a mother and a child is vast, messy, and endlessly complicated. It holds love, fear, sacrifice, control, and grief. I hope I can be a cool one — loving without clinging, protective without suffocating — and not lose myself, or my child, in the process.
VOL. 11
science
“A Light From the Periphery” - Aeon
This essay tells the story of Satyendra Nath Bose, the Indian physicist whose work reshaped quantum theory — and uses his life to challenge the idea that scientific genius comes only from elite Western centers. In 1924, Bose, then teaching in colonial India, sent a paper to Albert Einstein showing a new, cleaner way to derive Planck’s law of radiation. Immediately recognizing its brilliance, Einstein translated it into German and had it published. This became Bose–Einstein statistics — explaining how identical quantum particles (now called bosons) behave. It later led Einstein to predict the Bose–Einstein condensate, a strange state of matter that was eventually confirmed decades later. But Bose was no lucky outsider briefly lifted by Einstein’s fame. He was a polymath who grew up under colonial constraints. Largely self-taught in cutting-edge physics, he was fluent in multiple languages, deeply interested in literature and philosophy, and determined to prove that world-class science could emerge from South Asia. Working far from Europe’s scientific centers — and often ignored by British journals — actually helped him think independently, free from entrenched orthodoxies. Amid the politics of empire, Bose avoided serving the colonial administration, stayed in nationalist circles, and later committed himself to building scientific institutions in India and teaching science in Bengali so it would reach beyond the elites of the era. After his European stint, Bose returned home to build labs, mentor students, and help lay the foundations of modern physics in South Asia. The essay emphasizes that his legacy isn’t just a single breakthrough, but a life spent turning scientific insight into institutional and cultural infrastructure. What a riot!
“The shape of time” - Aeon
This essay argues that the way most of us in the West now “naturally” picture time — as a single line with the past behind, the future ahead, and the present moving along it — is actually a pretty recent cultural invention that only really took over in the 18th and 19th centuries. It did so by replacing older, more cyclical ways of thinking that linked time to repeating natural and cosmic rhythms (day/night, seasons, planetary cycles, even the ancient Greek idea of a “Great Year” and the Stoic notion of eternal recurrence). For centuries, those linear and cyclical views coexisted without people literally drawing time as a line. In the modern era, four developments pushed the “time-as-line” picture into dominance: 1) new historical graphics like Joseph Priestley’s mid-1700s timelines (and later Playfair’s time-based graphs), 2) Darwin’s evolutionary diagrams that treat time as a one-way axis of change, 3) chronophotography (Muybridge/Marey), which spread motion across space in sequential frames, and 4) popular 19th-century ideas about a fourth dimension that encouraged identifying time with a spatial dimension. Once time got absorbed into linear visual culture, it reinforced Victorian “progress” stories (in history, technology, and even misreadings of evolution) and helped trigger philosophical fights about whether the past and future are as real as the present (presentism vs. views that treat all times as equally real, often framed with “cinematographic” metaphors). It also made modern time travel feel conceptually available, especially through H. G. Wells — who used fourth-dimension talk to make time travel seem like moving through space.
I don’t think time exists linearly. Sometimes I feel that I’m constantly time-traveling. If we learned to imagine time differently, our other ideas might bend and change with it too. But I like the mystery of it so I haven’t pried much. I should, though. This was a great essay.
“The Politics of Planetary Color” — Noema
This essay makes a surprisingly powerful claim: color isn’t just how we see Earth — it’s how Earth becomes politically real to us. Color has played a powerful role in how humans understand and care about the planet. When people first saw the famous color images of Earth from space — like “Earthrise” and the “Blue Marble” — it helped them emotionally grasp that Earth is fragile and shared, which helped fuel the modern environmental movement and cemented blue, green, and white as the planet’s signature colors. Today, color still tells the story of what is happening to the planet: oceans are slowly shifting from deep blue toward green as ecosystems change, city lights at night reveal how much we have urbanized the Earth, and even snow can turn reddish because of algae that speeds up melting. The author argues that color is not just decorative but something that shapes what we notice, what feels urgent to us, what we decide to act on, and what we feel can be put on the back burner. The way scientists, governments, and media choose colors in maps, climate visuals, and warning systems can either clarify problems or hide them — so the people responsible for coloring news items actually have a pretty important and influential role to play! In addition, because different cultures, technologies, and even human psychology affect how we perceive color, the author suggests we should intentionally design a unified “planetary color system” — one based on real Earth processes, clear and accessible to people with different vision abilities, and honest about uncertainty. For example, many “true color” Earth images are actually stitched together from satellite data, and “false color” images — like infrared maps — translate invisible signals into visible hues so we can detect changes such as shifting plankton or hidden stars.
The same image can also look different across devices because screens have different color settings, and certain color gradients can mislead perception or exclude color-blind viewers. Culturally, research shows some languages merge blue and green into one category, meaning color is not universally interpreted the same way. To address this, the author proposes process-based color names tied to real phenomena — such as auroral green, chlorophyll greens, or aerosol-reddened skies — and suggests shared palettes for things like cooling city corridors (“Canopy Jade”) or darker night skies (“Nocturne Blue”), each with clear explanations of what data the color represents and how certain it is, so people can understand planetary changes consistently and act on them together. The goal, he proposes, is to create a common visual language that helps people everywhere see environmental changes, understand them quickly, and coordinate smarter collective action to protect the planet.
“By All Measures” — Longreads
This essay captures the strange mental whiplash of living a small human life inside planet-sized problems. Climate change, deep time, and global risk force us to think at scales our brains weren’t built for, and that mismatch often leads to total paralysis. The writer moves between intimate fears (family heart attacks, the limits of a single lifespan) and vast ones (geological epochs, the Anthropocene), landing on a “derangement of scale.” Climate change, of course, feels both terrifyingly urgent and impossibly abstract — like being lost on a neighborhood street while staring at a world map. The essay’s grounding force is physicist and environmental thinker Robert Socolow. Instead of trying to solve everything at once, Socolow breaks the climate problem into manageable slices over realistic time horizons — decades, careers, infrastructure lifespans. Crucially, he says he didn’t “scale up” to the planet; he scaled down to one Earth, something humans can actually plan for. My favorite part is where the writer weaves in St. Augustine, Tolstoy, and Camus to show that this tension between infinity and finitude isn’t new — but the Anthropocene makes it personal in a new way. Our daily lives now collide directly with planetary systems, and the way forward, the essay argues, isn’t heroic gestures or carrying the weight of the whole world on our shoulders, but working at the next scale up: household to neighborhood, neighborhood to city, project by project, year by year.
“Why Consciousness Is the Hardest Problem in Science” - Scientific American
This article explains why consciousness remains science’s most stubborn mystery: it’s the only thing we’re trying to study that we can’t directly observe from the outside. You can measure neurons, brain waves, and behavior — but the core of consciousness, the private “what it’s like” of experience, is only accessible from the inside. The piece traces how consciousness research re-entered mainstream science in the 1990s, after decades of being seen as too philosophical or risky. Researchers then focused on finding “neural correlates of consciousness,” using tools like fMRI and perceptual tricks (optical illusions, binocular rivalry) to show that the brain can process information without awareness — and that conscious experience seems to require wider, more integrated brain activity than we may have thought. From this work emerged major competing theories: some say consciousness is a kind of global broadcast across the brain; others argue it’s tied to our ability to self-reflect; still others see perception as a prediction machine or define consciousness as integrated complexity itself. But while each theory explains part of the picture, none fully answers the core question: why does this brain activity feel like anything at all? One real breakthrough the article highlights is that neuroscientists can now estimate whether a brain is capable of consciousness — awake, dreaming, anesthetized, or minimally conscious — by “perturbing” it and measuring how complexly the activity responds. This has major clinical value for coma patients and anesthesia. But it still doesn’t explain content: why blue feels blue, why pain feels painful, or why one experience differs from another. The field hit further turbulence when high-profile experiments failed to clearly confirm any single theory, leading to public disputes and even accusations of pseudoscience.
AI systems that talk fluently and claim feelings have raised the stakes as well because if machines can convincingly act conscious, what evidence would actually prove that they are — or aren’t — having experiences?
“When will we see the universe’s first stars?” - Scientific American
Astronomers still haven’t seen the universe’s very first stars — and the article explains why that might finally change soon. The universe’s first stars formed only a few hundred million years after the Big Bang, so they are extremely hard to spot today: they are a) very far away, b) lived fast and died young, c) long gone, and d) their light is extremely faint by the time it reaches us. So even the James Webb Space Telescope, the most powerful telescope ever built, usually can’t see individual stars directly at those distances. Astronomers basically needed help from the universe itself. The workaround they found is a clever cosmic trick called gravitational lensing. To understand what that is, here’s a helpful analogy and a mini astronomy lesson from me: imagine space as a stretchy rubber sheet. Massive objects (like galaxy clusters) sit on it. Their weight dents the sheet. And if light travels nearby, it has to follow that curve, so it bends. So when light from a very distant object, like an ancient star, passes near something extremely massive, the light path curves and the object appears brighter, stretched, duplicated, and sometimes even shaped into rings. In short, massive objects act like cosmic magnifying glasses. Massive galaxy clusters — thousands of galaxies bound together by gravity — are packed with huge amounts of dark matter, so they have enormous gravitational pull. They’re able to bend and magnify light from objects behind them, sometimes by thousands of times. This is called strong gravitational lensing. Within these lensing effects are rare hotspots called caustics — zones where gravity lines up just right, so light from a distant star can be dramatically amplified and a star normally invisible can suddenly become detectable.
This turns the James Webb Space Telescope into a kind of natural “cosmic microscope.” Astronomers already know this works — they’ve used it to spot record-breaking distant stars like Icarus and Earendel, whose brightness flickers due to microlensing by smaller masses in the lensing cluster. The flickers are usually a key clue that astronomers are seeing a single star, not a galaxy, since smaller objects cause tiny additional lensing effects.
The article says that a true first-generation star could appear in several ways: briefly during its life, explosively as a supernova, or indirectly through the glowing gas around the black hole it leaves behind. Any of these would be a major breakthrough, helping explain how giant black holes formed so early and potentially offering clues about the nature of dark matter, which subtly affects how lensing works. The article is optimistic about timing because the James Webb Space Telescope is repeatedly observing strong lensing regions, while upcoming missions like NASA’s Nancy Grace Roman Space Telescope and ESA’s Euclid will find many more cosmic lenses for JWST to study in detail. Looking further ahead, even more powerful observatories could push deeper into this primordial era. So yes, astronomers may finally catch these cosmic dinosaurs in the act, opening a new window onto the universe’s earliest chapter.
A (Really) Brief History of Knowledge - Colin McGinn
McGinn frames knowledge not as something that began with books, language, or science, but as a biological adaptation that started long before humans existed. His striking claim is that the very first form of knowledge was pain. Before any creature “knew” objects or facts about the external world, it had to register internal damage or threat. Feeling pain already counts as a primitive kind of knowing: something is wrong with me. Later, this proto-knowledge gained self-awareness — knowing that you are in pain. From there, McGinn sketches an evolutionary ladder. As organisms learned to sense space, time, and objects, knowledge took on the familiar subject–object structure: a knower here, a thing known there. The external world matters because it causes pain or pleasure; pleasure likely evolved after pain as a positive guide. Over time, this basic framework expanded into practical skills, social understanding, moral judgment, aesthetic taste, scientific knowledge, and eventually the technologies of knowledge — writing, schools, computers, and now AI. The point is that none of these are sudden miracles. They’re elaborations of the same ancient survival mechanism that began with a creature trying not to get hurt. Knowledge, like survival itself, is a struggle — against injury, confusion, and ignorance.
BONUS SECTION
culture
“The Death of the Indie Film” - The New York Times
Independent film, as it has been known for decades, is in deep trouble, and Sundance — once the beating heart of indie cinema — is now a symbol of that decline. For years the festival thrived on buzzy bidding wars where Hollywood studios and distributors fought over daring, low-budget movies, launching careers and shaping mainstream film culture, but today those buyers are mostly gone, their indie divisions shuttered and streamers uninterested in risky, personal, or unfamiliar stories — leaving most films at Sundance to walk away with tiny streaming deals or nothing at all. Rising costs, shrinking theatrical audiences, and corporate fear have drained the economic oxygen from indie filmmaking, so even great films struggle to find a path to viewers. Sundance itself is now leaving Park City for Boulder and has lost its founder Robert Redford. It definitely feels like an era is ending. While the festival champions marginalized voices and artistic ambition, it doesn’t reliably connect those voices to an audience or a sustainable business, which matters because indie film has historically been where new ideas, styles, and filmmakers are incubated before flowing into mainstream cinema. Without a healthy indie ecosystem, Hollywood risks becoming more formulaic, less adventurous, and less culturally relevant — all of which we have been seeing happen — and no one yet has a clear answer for what Sundance — or indie film — will become in this new, more corporate, risk-averse media landscape. I cried when they shut down Participant — which made movies that actually meant something: Spotlight, Dark Waters, The Help, Roma, Contagion, Food, Inc., An Inconvenient Truth, Judas and the Black Messiah, American Factory. When they went out of business, it was a clear signal of what kind of movies this industry’s economy was truly favoring. I don’t know if this reflects any broader cultural patterns of declining public interest, but if it does, it’s sad.
“Losing the Big Apple” — Smithsonian Magazine
On the eve of the Civil War, New York City tried to secede. Not to join the South, but to become its own "free city" — something like a 19th-century Hamburg or Singapore. It wasn't a joke, either — it was a real political possibility. In 1860, as the election of Abraham Lincoln made Southern secession seem inevitable, the country felt unstable. That's when Democratic mayor Fernando Wood floated the idea that the city should break away from New York State. NYC at the time was very different from most of the country: immigrant-heavy, a global trade hub (ports, banks, shipping), a religiously and culturally plural metropolis. It cared less about moral crusades and more about keeping trade flowing and staying rich. The city's elites worried they'd be ruled by upstate lawmakers in Albany — moralistic, Protestant, anti-big-city, and increasingly hostile to the city's tolerance for alcohol and Catholic immigrants. Downstate New York grew out of Dutch New Netherland, which was mercantile, cosmopolitan, and relatively tolerant; upstate was shaped by New England Puritans and their descendants, whose culture emphasized moral discipline and religious seriousness, and whose reformist zeal later fueled abolitionism and temperance. So you had two very different value systems trapped in one state. By the late 1850s, this cultural clash had turned into open power struggles over policing and governance, with Wood even creating a rival city police force to defy Albany — basically saying: Albany doesn't get to boss us around. For a brief moment, the "free city" idea gained real traction — pro-secession speeches in Congress, sympathetic newspapers, Southern praise. The author argues that without the shock of Fort Sumter, where the Civil War began in 1861 and which swung Northern opinion decisively toward preserving the Union, New York City might genuinely have peeled off.
The idea never fully died. For more than a century, proposals resurfaced to make NYC its own state, peaking in the mid-20th century and again during the fiscal crisis of the 1970s. Only later did demographics and economics flip the balance of power. Today, ironically, NYC dominates New York State politics, upstate regions feel ignored, and it's more often upstate politicians who fantasize about cutting NYC loose.
“The Brilliance and the Badness of ‘The Sun Also Rises’” - The New Yorker
It's funny that The New Yorker posted this essay, because I'm actually reading The Sun Also Rises right now! I'll be honest — I breezed through book one in less than a day. I mean, this is Hemingway, who has inspired a love of sentences in more than a few writers. You can't help but feel bad for the main character — and you know the main character is him! This book once transformed the writer of this New Yorker essay when he was a lonely, unhappy teenager, teaching him how language could be precise, beautiful, and morally serious. It gave him a sense of belonging to a larger world of art. Sadly, when he reread the novel decades later as a mature writer, a lot had changed. He found something dark and troubling at the core of the book. Although Hemingway's style is dazzling and his scenes wise and powerful, he says, the book is built on a foundation of antisemitism, misogyny, and homophobia, using contempt for people to prop up its vision of what it means to live well. He argues that while Hemingway presents stoicism, bravery, and admiration for masculine, physical acts like bullfighting as a way to impose moral order on a chaotic world, the novel itself repeatedly undercuts this idea by showing how such ideals collapse into emptiness and cruelty — as when a heroic bull's ear ends up rotting in a drawer like trash.
Now, I haven't gotten that far in the novel, but that sounds pretty disturbing to me. I can't imagine what this poor writer went through rereading an author who so deeply shaped him as a reader and a writer himself. It probably feels impossible for him to ignore what he's seen. He believes the novel is both brilliant and morally compromised, and that loving it fully now would require a generosity he no longer feels able to give — and I say, good for him. These kinds of experiences are absolutely a gift. They build character. What do you do now? You've idolized someone your whole life, and now you're mature and stuck in purgatory, not sure what the hell to do or feel? I bet he learned a lot about himself through this ordeal — and I hope he makes the best of it. I'll keep reading the novel, but I've been warned.
“The Beckhams’ Very Public Family Meltdown” - The New Yorker
Brooklyn Beckham has publicly confirmed a long-rumored estrangement from David Beckham and Victoria Beckham, framing the break as the fallout from tensions around his wedding to Nicola Peltz. He alleges his parents disrespected his wife and pushed him to sign away rights to the Beckham name, and that a humiliating moment came when the singer Marc Anthony brought him onstage for what was supposed to be his first dance with his wife on their wedding night, asking for the "most beautiful woman" to join him — only for his mother, Posh Spice herself, to run up to the stage! She ended up dancing with him, stealing everybody's thunder, including the bride, whose wedding it was. From what it sounds like, it was raunchy dancing, too. Shivers. The bride, understandably, fled in tears, and the pair later had to renew their vows to replace the memory. Brooklyn made his exit loud and clear — and stood up for himself. I'm sure that wasn't easy, as he's likely not used to doing that, and neither is his family. They're probably pissed off at his audacity for speaking up — and for not choosing "family" over "his wife, the mother of his future children, the woman who will most likely be his nurse on his deathbed one day, and help him raise a family and a life of his own." Yes, screw her. Come dance with your mother on your wedding day to show that girl who's boss and how little selfhood they think you deserve.
What the hell, Victoria Beckham. What the hell. Do some families really think the silent black sheep will never get tired of the same old bullying B.S. — never grow up, find peace, and make a life for himself? I understand why it must have been hard for them to accept that he did.
“The Tighter Weave” - The Hedgehog Review
Editors as the ancient enemy of writers — those meddling hands that "soften" a Caro period into a semicolon, hack down a McPhee manuscript, or force endless fights over rhythm, voice, and meaning. Writers obviously care deeply about sentences, and one can imagine their ire over unnecessary edits. The author stacks up examples of famous writers who resented being touched at all, including Joan Didion. He argues that a lot of editorial confidence is just power and style-guide bureaucracy, and he rejects the ideas that more revision always makes writing better (sometimes it becomes patchwork) and that editors can reliably turn bad writing into good (they mostly subtract, and bad writing often fails because of things that are missing — an ear, imagination, style). He's especially angry about "line edits" that replace a writer's choices with an editor's taste, and about errors editors introduce or fail to catch. He shares his own horror stories: tiny, pointless "corrections" that waste time, as well as bigger copyediting disasters that make it to print with the author's name on them. The wrong editor can literally make the public record less true. He also distrusts the modern ritual of writers thanking editors as saviors (if you need saving, maybe you shouldn't be writing, he says). But he admits the best editor-writer relationships can be meaningful, pointing to Robert Gottlieb's later-career humility and tact as an ideal: the editor as a "watchful bird" whose judgment a serious writer wants to satisfy, not a rival coauthor with a red pen. He also worries that editors are disappearing in the Substack era, and that the result is not liberation for writers but a flood of unedited, aesthetically thinner prose shaped by platforms and mass taste.
Still, he complicates his own rant by confessing that he's been an editor himself, remembers the grim necessity of that work in a culture drowning in sloppy text, and concedes there are "legitimate editorial functions." The essay ends on a dark, morbid joke that's also a serious point: unlike punctuation, death is the one full stop no editor can change — invoking writers like Mishima and Zweig mailing off final manuscripts right before suicide, as the ultimate escape from endless revision, meddling, and compromise. I fucking loved this essay.