About Me

New Orleans, Louisiana, United States
Admire John McPhee, Bill Bryson, David Remnick, Thomas Merton, Richard Rohr and James Martin (and most open and curious minds)

31.8.15

Learning

LET’S ABOLISH SOCIAL SCIENCE

A proposal for the new university

In my old age, I hope to found a new university, called rather unimaginatively the New University, with funding from one or another imprudent billionaire (a prudent billionaire would turn me down). In contemporary universities and colleges there is often a division among the natural sciences, social science and humanities. In my New University, there would be only two faculties: natural sciences and the humanities. The social sciences would be abolished.
Social science was — it is best to speak in the past tense — a mistake. The dream of a comprehensive science of society, which would elucidate “laws of history” or “social laws” comparable to the physical determinants or “laws” of nature, was one of the great delusions of the 19th century. Auguste Comte formulated a Religion of Humanity based on “the positive philosophy” or Positivism. Karl Marx went to his grave convinced that his discovery of laws of history had made him the Darwin or Newton of social science.
Positivism mercifully had little political influence, except in 19th-century Brazil, to which it contributed the national motto “Order and Progress.” In the 20th century Marxism split between a revisionist branch, which became indistinguishable from welfare-state capitalism, and communist totalitarianism, which survives in pure form today only in North Korea and from whose devastating effects Russia, China, Eastern Europe, Cuba, Vietnam and other countries are slowly recovering.
By the mid-20th century, the utopian fervor that had inspired earlier attempts at comprehensive sciences of society had burned out. But within post-1945 Anglo-American academic culture, more than in continental Europe, the ambition to emulate the methods of the physical sciences in the study of human beings persisted.
Economics, for example, grew ever more pseudoscientific in the course of the 20th century. Before World War II, economics — the field which had replaced the older “political economy” — was contested between neoclassical economics, which sought to model the economy with the methods of physics, and the much more sensible and empirically-oriented school of institutional economics. Another name for institutional economics was the Historical School. After 1945, the institutional economics associated in the U.S. with John Kenneth Galbraith was purged from American economics faculties, in favor of the “freshwater” (Chicago) and “saltwater” (MIT) versions of mathematical economics, which focused on trying to model the economy using equations as though it were a fluid or a gas.
While “physics envy” has been most pronounced and destructive in economics, pseudoscience has infected other disciplines that study human behavior as well. The very term “political science” betrays an ambition to create a study of politics and government and world politics that will be a genuine science like physics, chemistry or biology.
In the late 20th century, an approach called “Rational Choice” spread through American political science departments like oak blight through a forest. The method (or, to use the ugly word preferred by pseudoscientists, the “methodology”) of Rat Choice, as this school is known to its detractors, was borrowed from pseudoscientific neoclassical economics. Culture and institutions were downplayed, in favor of attempts to explain political outcomes in terms of the strategic self-interest of rational individuals.
While studies of domestic politics have been damaged by Rat Choice, the field of political science I know best, International Relations, has been warped by a different kind of pseudoscience. Much of the discipline has adopted the approach to the scientific method of the late Imre Lakatos, a Hungarian émigré who sought to provide an approach to scientific reasoning that would be an alternative to the explanations of the scientific method by Karl Popper, Thomas Kuhn and Paul Feyerabend, among others. Lakatos, who died in 1974, was a mathematician and physicist, and might have been surprised and dismayed by some of the uses to which his thinking has been put. Stilted and ritualized language about “Lakatosian scientific research programs” mars the published work of many otherwise thoughtful and insightful IR scholars.
I once asked a leading American IR theorist who had become a major figure in a presidential administration if any IR theories — including those of the sub-school that he led — had ever come up in discussions within the government about foreign policy. “Not once,” he said.
You might think that the ancient humanist discipline of law would be more resistant than others to pseudoscience — and you would be right. Still, legal theorists afflicted with physics envy and economics envy have made attempts to turn law into a social science. The most important was the late 20th century “law and economics” movement.
Within the academy, a growing number of scholars are speaking out against the degeneration of social science disciplines into pseudoscience and scholasticism. In a recent essay, “Breaking Discipline and Closing Gaps? — the State of International Relations Education,” Francis J. Gavin of the MIT Political Science department laments the state of his discipline and adds: “It is important to recognize that these concerns are not limited to any one discipline: Sociology, for example, has struggled with these issues, while my own discipline, diplomatic history, has almost completely abandoned any effort to contribute to serious discussion of national and international security. Nor is it clear what constitutes success. Economics graduate training is plagued by (and arguably responsible for) many similar pathologies, yet it has, albeit controversially, much influence in the policy world.” Gavin notes the trend toward reorienting IR scholarship toward policy relevance and accessibility to policymakers, manifested by efforts such as American University’s Bridging the Gap project and interdisciplinary studies programs at many campuses.
In 2000, students in France, disgusted with otherworldly equation-building, rebelled and established the Post-Autistic Economics (PAE) movement. The movement spread across the Atlantic, and its name was later changed to the Real-World Economics movement, because the comparison with neoclassical economists was insulting to autistic people.
In economics, there is a growing reaction against what Noah Smith calls “mathiness.” New organizations, like the Institute for New Economic Thinking (INET) and Erik Reinert’s Other Canon Foundation, and new publications, like the Real-World Economics Review, are enlivening the dismal science with heterodoxy and a renewed interest in the world beyond the blackboard. Ha-Joon Chang, among others, has revived economic history, a discipline that declined during the decades when economics became fake physics.
In my New University, economics, political science and law will be part of the humanities, studied by humanist methods, supplemented, when it is appropriate, by statistics and other useful mathematical tools.
The difference between the natural sciences and the humanities is the difference between motion and motive. Laws of motion can explain the trajectories of asteroids and atoms. The trajectories of human beings, like those of any animals with some degree of sentience, are explained by motives. Asteroids and atoms go where they have to go. Human beings go where they want to go.
If you want to stimulate the economy, you can cut taxes and hope that individuals will spend the money on consumption. But they may hoard it instead. Such uncertainty does not exist in the case of inanimate nature. If you drop a rock from a tall building, there is no chance that the rock will change its mind and go sideways, or retreat back to the top, instead of hitting the sidewalk.
All human studies are fundamentally branches of psychology. That is why the great German philosopher Wilhelm Dilthey distinguished the Geisteswissenschaften — the spiritual or psychological sciences — from the Naturwissenschaften — the natural sciences.
Dilthey argued that the essential method in the human sciences or studies is Verstehen, “understanding” in the sense of insight based on imaginative identification with another person. If you want to understand why Napoleon invaded Russia, you have to put yourself in Napoleon’s place. You have to imagine that you are Napoleon and look at the world from his perspective at the moment of his decision. The skills that this exercise requires of the historian or political scientist are more akin to those of the novelist or dramatist than those of the mathematician or physicist. Hermeneutics — the interpretation of the words and deeds of human beings by other human beings on the basis of a shared human psychology — is the method of all human studies, not the scientific method, which is relevant only for the natural sciences.
“Macro effects” can also be explained without the need to posit pseudoscientific things like “social forces” comparable to physical forces like gravity or electricity. Unintended consequences — like depressions that are prolonged when everybody hoards money at the same time, or elections in which the division of the vote among many candidates ends up electing a politician whom most voters don’t want — are still the result of individual decisions, albeit individual decisions that interact in an unforeseen and counterproductive way. In most of these cases, the unintended results must be explained in terms of institutions — economic or electoral — that interact with individual motives in a way that cannot be explained if the institutions are ignored.
In my New University, the worthwhile scholarship found in modern-day economics, political science, law, anthropology, sociology, psychology and other contemporary social sciences will be separated from pseudoscience and incorporated into the new humanist disciplines. The faux-physics will be tossed out.
The distinction between the reorganized humanities and the traditional natural sciences will be strictly enforced. Any professor who explains anything in domestic politics or international affairs as the result of a Social Force will be summarily dismissed. The same fate will await any natural scientist who attributes motives to inanimate objects — for example, a geologist who explains that a volcano erupted because its long-simmering resentment finally boiled over into public anger.
Architectural styles and dress codes will be enlisted to further accentuate the distinction between the humanities and natural sciences. All human studies are historical sciences. To acknowledge this, the buildings of the Humanities departments on the campus of the New University will be constructed in an eclectic and somewhat repulsive mixture of historical styles — Greco-Roman classicism, traditional Chinese, Muslim, Gothic and Tiki Bar. The buildings that house the natural sciences will be ultra-modern glass and steel boxes. Humanists will be required to wear togas, scientists white lab coats.
As on a traditional campus, at the New University a spacious quad will divide the buildings of the humanists on one side from those of the natural scientists on the other. But the buildings in each row will turn their backs to the buildings of the faculty on the other side. To enter either row of faculty buildings, you will have to go around to the outward-facing facades. To symbolize the absence of methodological contamination, the interior quad will take the form of a moat, with a spiked palisade on each side. A few crocodiles might add some scenic interest.
I haven’t settled on a mascot for the New University yet. Obviously it would need to have two heads. •

Heidegger

Martin Woessner on Freedom to Fail: Heidegger’s Anarchy

Fail Slow, Fail Hard

August 28th, 2015
I WENT to graduate school for two reasons: to study Heidegger, and because there was a wealthy university in Texas that inexplicably offered to fund such study. Who could resist four more years to keep reading big books and thinking deep thoughts, even if it meant trading picturesque San Francisco for hot and humid Houston? After only a couple months of pondering in the air-conditioned comfort of the university library, I realized that it wasn’t really Heidegger I was interested in — it was Heideggerians. What a strange lot, and why would anyone want to be counted amongst them, myself included?
Once you start looking for them, Heideggerians are everywhere. But identifying what they had in common with each other wasn’t easy. It was hard to tell who even counted as a Heideggerian, anyway, especially in the United States — a nation about which Heidegger himself had little positive to say throughout his life (among other things, we had too much technology and too little history, he thought). Catholics read him, but so too did Protestants and Jews. Existentialists claimed him as one of their own, despite his protests, but deconstructionists did the same, and by then he was no longer around to protest. Pragmatists sometimes made their peace with him, and occasionally poets and novelists played around with his wordplay-filled writings. I found that those last ones generally had the most fun, partly because they didn’t take it all so terribly seriously. Critical Theory, Hermeneutics, and Phenomenology — theoretical paradigms predicated on seriousness — each genuflected in Heidegger’s direction at one point or another, sometimes skeptically, sometimes not. There was hardly a corner of the American academy that hadn’t been infiltrated by some kind of at least latent Heideggerianism — except, of course, actual philosophy departments, where Heidegger often remained simply too foreign and too suspicious. One had better luck finding him in anthropology, literature, or theology.
As an undergraduate I was (un)lucky enough to have landed in two departments — one history, the other philosophy — where Heidegger was taught in a serious, and, occasionally, fun way. But even my professors couldn’t agree about how, or more importantly, why, to teach Heidegger. Was he a philosophical role model, a representative intellectual-historical figure, or did he represent something else entirely, perhaps even a kind of morality tale for the modern era — how else could one explain the fact that one of the greatest philosophical minds of the 20th century had also been a Nazi? Looking back on it now, I take it as a remarkable sign of my teachers’ commitment to free and open inquiry that they made the question of teaching Heidegger as important as the question of being, which according to Heidegger was the only question that really mattered.
Whether or not the question of being or Seinsfrage is the only properly philosophical question worth asking, the question of how, let alone whether, Heidegger’s works should be taught is now, in light of the recent publication of his private notebooks — the so-called Schwarze Hefte, or black notebooks — inescapable. Some 40 years after his death, Heidegger continues to scandalize; the fallout from the publication of the black notebooks has been felt in seminars and reviews both far and wide (see Gregory Fried’s “The King Is Dead” and Santiago Zabala’s essay “What to Make of Heidegger”). Providing a window onto Heidegger’s most private philosophical musings, the notebooks offer troubling evidence not just of his pervasive anti-Semitism, but also his abiding commitment to certain strands of National Socialist ideology. By the time of his death in 1976, Heidegger surely knew that the notebooks in which he scribbled his philosophical and political reflections were riddled with dubious, even incriminating remarks. So why, then, did he decide not just to include them in the edition of his collected works that would ensure his fame, but also, and more importantly, to dictate that they appear as the culminating volumes of the decades-long project? What could he have been thinking?
As the editor of the black notebooks, German philosopher Peter Trawny knows their contents perhaps more intimately than anybody else, and it is precisely that question that he tries to answer in his short little book, Freedom to Fail: Heidegger’s Anarchy. I’m not sure he answers it, exactly, but as a kind of Rorschach test, his essay certainly proves useful. How you respond to it will tell you what kind of Heideggerian you are, or if you are one at all anymore.
One of the strengths of Freedom to Fail is that its author is not an orthodox Heideggerian, but that may not be saying much these days. The black notebooks may have made the idea of a strict fidelity to Heidegger’s writings moot — they have become, as Trawny puts it, “an unavoidable point in question for anyone who would like to encounter Heidegger’s thinking.” Bemoan them, criticize them, lament them, but there is no avoiding them. The black notebooks exist. Although the remark is buried in a footnote, Trawny suggests that “unconditioned partisanship of Heidegger’s own thinking” is now out of the question. It is time to face the facts — well, sort of.
As if to signal the danger inherent in these culminating volumes of Heidegger’s collected works, the three epigraphs Trawny chooses to introduce his essay — from Hölderlin, Heidegger, and Paul Celan, respectively — dwell on things “monstrous,” “tragic,” and, in Celan’s case, a combination of the two. In a direct reference to the Holocaust, the third epigraph speaks of “the monstrousness of the gassings.” The original German title of Trawny’s book, Irrnisfuge, or “Errancy Fugue,” deliberately echoes Celan’s famous poem “Todesfuge,” or “Death Fugue,” from which the epigraph is drawn. That poem describes “death” as a “master from Germany.” In another celebrated poem titled “Todtnauberg,” a reference to the location of Heidegger’s celebrated Black Forest hut, Celan immortalized a famous postwar meeting with the philosopher, during which the poet’s hope for a word of contrition and/or explanation — a word literally “to the heart” — from his host never came. For a philosopher who made mortality, our “being-towards-death,” a cornerstone of all philosophizing, such silence surely spoke volumes to Celan, who lost both his parents in the Holocaust. “Todtnauberg” was published only after Celan’s suicide, in Paris, in 1970.
In deciding to render Irrnisfuge as Freedom to Fail, Trawny’s English-language translators have obscured this explicit reference to Celan, but otherwise their decision cannot be faulted (they explain their reasons in a brief and helpful introduction). The book really is about the freedom to fail, not just in a pragmatic, everyday sense, but in a kind of grand gesture of philosophical experimentation. Trawny’s essay can be read as a retelling of the story of Icarus, with Sein in place of the sun, and Heidegger taking over for the winged highflyer. But where some might see hubris at work, others see only a willingness to push the envelope, and it is clear from the very beginning that Trawny would rather have his philosophers be daredevils than hall monitors — Nietzsche rather than Kant, Kierkegaard instead of Descartes. In trading “drama” and “poetry” and “tragedy” for mere “argument,” contemporary philosophy has, Trawny thinks, totally lost its way. Or as he puts it: “The drama of thinking has vanished in the world of the argument.” Freedom to Fail is a lament — not for Heidegger’s mistakes, but for a philosophical epoch that, as he sees it, avoids mistakes at all costs.
According to Trawny, Heidegger’s commitment to thinking leads him into a realm beyond argumentation. It also led him into a realm beyond good and evil. Rhetorically, but also dramatically, Trawny asks if Nietzsche, who first surveyed this territory, was “Heidegger’s master” and then spends the next 80 pages answering his own question. Many of these pages make for captivating reading, but a shorter route could have been taken simply by quoting Heidegger’s late confession, as reported by his student Hans-Georg Gadamer, that “Nietzsche hat mich kaputt gemacht” — or, to translate it a little loosely, “Nietzsche broke me.”
On one level, Trawny’s essay is a meditation on the necessity of brokenness and failure for philosophical thinking. It takes as its lodestar Heidegger’s infamous — and undeniably self-serving — postwar declaration that, “He who thinks greatly must err greatly.” Without failure, success is meaningless. Without endings, no new beginnings. Without daring and danger, no true safety nor security. One could go on to list any number of productive oppositions: darkness and light, concealing and revealing, calculating and poetizing, erring and thinking. Heidegger’s understanding of truth as aletheia, or unconcealment, was predicated upon this chiaroscuro-like interplay of opposing forces — things were revealed one minute, only to slip into darkness and oblivion the next. Who was to say when something stood in the light of truth and not, in fact, in the shadow of error? “Is there an absolute criterion for the assessment of a philosophy?” Trawny asks, once again more rhetorically than not.
There probably isn’t any “absolute criterion for the assessment of a philosophy,” but we can probably agree that certain statements, whether “philosophical” or entirely prosaic, do or do not describe the world in ways that we find helpful, inspiring, or thought-provoking. Take, for instance, Heidegger’s remarks about “world Jewry” throughout the black notebooks. Trawny makes no bones about the fact that Heidegger certainly “harbored a private ressentiment against Jews,” which “cannot be understood otherwise than as anti-Semitic.” But he thinks that such sentiments cannot be used to “condemn his entire thought.” We can’t be done with Heidegger because philosophy “cannot be brought to a conclusion.” The only cure for philosophical errancy? More thinking, and more errancy, clearly.
Trawny repeatedly portrays Heidegger as the living embodiment of the philosophical life, as somebody who did everything for, and through, philosophy — even the bad stuff. But this runs the risk of taking everything Heidegger said at face value, of taking all too seriously the act that Heidegger was always performing, which was that of the academic outsider, the philosophical rebel who showed up to conferences and lectures still wearing his ski clothes. Many of Heidegger’s most famous students, from Hannah Arendt to Herbert Marcuse, were taken in by this image, but they also eventually came to see its limitations. Even the first Americans to hear about him or to see him teach, such as Sidney Hook and Marjorie Grene, knew that Heidegger was putting on a show. He may have railed against academic philosophy, but he still participated in it. Unlike Nietzsche, his “master,” Heidegger never abandoned his academic post and, when he got the chance, he even tried to reorganize the venerable University of Freiburg along Nazi party lines. What kind of academic outsider goes into university administration willingly, and then tries to militarize it? The one who errs greatly, of course.
Trawny makes a big deal of Heidegger’s preference for poetry and tragedy over technology and ethics. It was a preference that, supposedly, motivated his enthusiasm for overhauling university education: not necessarily more of the former and less of the latter; more like, some of the latter, but only in service of the former. What the university needed, what Germany needed, was a grand narrative to latch onto. The one Heidegger offered was one that, as Trawny admits, had a lot in common, though not everything, with Oswald Spengler’s The Decline of the West. It was a narrative of operatic proportions. “Heidegger wanted to narrate this history to the Germans,” Trawny suggests. “He wanted to determine a role for them to play in it.” He would connect the dots between the history of being — as it played out in everything from Ancient Greek tragedy to the poetry of Friedrich Hölderlin — and 1930s geopolitics. In Trawny’s formulation, “the narrative imperative runs: Be Oedipus! Be tragic! Yet it came about altogether differently.” That last line might be a bit of an understatement.
Heidegger’s tragic, overblown interpretation of Nazism may have been unique to him, but he was certainly not the only 20th-century philosopher to think that poetry and tragedy might preserve something integral to human experience that was in danger of being swallowed up by the forces of reason and demystification. Even somebody as different from Heidegger in temperament and orientation as the analytic philosopher Bernard Williams — a reader of Nietzsche who also went into university administration, but with far better and more humane results to show for it — thought that we could learn more about how to live from Sophocles than from Socrates and Aristotle.
But Williams never went so far as to proclaim that his own philosophical works were themselves the results of a tragic, world-historical narrative. And here is where Heidegger’s self-conception sets him, and those Heideggerians who follow too closely in his footsteps, apart. Whereas for Williams the lessons of Greek tragedy emphasized the contingency and frailty of even our best intentions (hence his famous idea of “moral luck”), for Heidegger they pointed towards the inescapability of fate and destiny. True philosophy wasn’t a matter of luck or chance at all: it was predestined, scripted even, by the historical unfolding of being itself. Heidegger was just the first and only thinker to recognize as much. He knew that history was tragedy and he thought he knew what roles he and the German people were supposed to play in it. He may have been miscast.
“The truth of being is onto-tragic,” Trawny writes at one point in Freedom to Fail. Following Heidegger, he thinks that the tragic history of being can be traced back “to the first of all inceptions, the inception of the history of being” itself. Heidegger thus came to see his life and his philosophy as part and parcel of the tragic narrative that resulted from this inception, an inception that pre-dated Socrates and stretched back to Heraclitus and maybe even all the way back to Daedalus, Icarus’s father, who gave him those wings in the first place. The history of being was, for Heidegger, the history of the forgetting, the oblivion of being. Tragedy was the only genre that suited it.
Once one begins to think in world-historical and onto-tragic proportions, things often get dicey — never more so than when you start to think of yourself in such terms. Hegel had enough hubris to think that he stood at the end of history; Heidegger likewise considered himself the first true philosopher since Heraclitus, precisely because he alone had seen the tragic unfolding of the history of being leading up to him. Heidegger also thought he was alone in recognizing how, like Greek tragedy, the onto-tragedy of being contained within it the possibility of a new beginning. (In the black notebooks he also recognized that the names Heraclitus, Hegel, and Hölderlin each began with the letter “H.” So too did Heidegger and Hitler, of course. Was it fate?)
How tolerant you are of this kind of thinking will determine how persuasive you find Trawny’s defense of Heidegger’s errancy, which entails accepting at least three interrelated things: first, that Heidegger’s errancy was a necessary component of his thinking; second, that his thinking was destined by the history of being going back to Ancient Greece; and third, that this tragic narrative exists not just beyond good and evil, but also beyond guilt and responsibility, in an “abyss of freedom.” In other words, true thinking means never having to say you’re sorry (see critics’ responses to Gregory Fried’s “The King Is Dead”).
At times, Trawny’s meditation on Heidegger’s errancy reads almost like a kind of secularized theodicy. He dwells as much on the inescapability of evil as he does on the inevitability of failure. “For Heidegger,” Trawny writes, “evil belongs to thinking. Insofar as it elucidates being, it elucidates evil. For even evil belongs to the world-narrative.” But does this mean that, insofar as I recognize the role I play in the “onto-tragic” narrative of western history, I do not have to take responsibility for my actions? Is it all being’s fault?
Trawny is sharp enough to recognize that, in the light of day, all this can sound more than a little dubious. “Errancy can operate as an immunization of thinking,” he admits, and there is always the danger of such talk slipping into “farce” or even “buffoonery.” Richard Rorty once suggested that we should take Heidegger’s talk of fate and destiny with a grain of salt, especially when it came to Heidegger’s understanding of his place in the history of philosophy. “Heideggerese is only Heidegger’s gift to us,” Rorty remarked, “not Being’s gift to Heidegger.” But Trawny will not go so far as that.
There’s no room in Trawny’s narrative for Heideggerians who, like Rorty, might want to leaven all this errancy and tragedy with a bit of irony, or even some self-deprecating comedy. Only seriousness prevails. And that’s a shame, for it imposes a with-us-or-against-us mentality that limits engagement with Heidegger’s writings only to those who have demonstrated their full fidelity to the Heideggerian narrative. Readers of Heidegger are asked to “give it everything they’ve got” or “to give up.” Isn’t this asking for precisely the kind of “unconditioned partisanship” that Trawny rightly calls into question in his footnotes?
In any case, what would it mean to give Heidegger everything we’ve got? Trawny ends his essay with a Heideggerian lament about how we now live in a rational, technological world, one in which argument holds sway over poetry and tragedy. Our intellectual “sobriety,” he thinks, marks the “end of all ‘greatness.’” Maybe he’s right. Maybe academic philosophy today has conceded too much ground to demystifying argumentation, to judgment and quantification. Maybe we do need more poetry in our lives. Maybe films really do represent a last gasp for tragedy and grand-scale thinking in the modern world. (Trawny mentions Terrence Malick’s The Thin Red Line as a suitably Heideggerian work, though its debts to Heidegger, rather than, say, Emerson, are debatable.)
From a current West-coast vantage point, though, it seems clear that, when it comes to failure, there is no need to worry. Or, alternatively, that there is every reason to worry, just not any of those suggested by Trawny’s book. After all, “the freedom to fail” is very much alive and well these days, and that might just be the problem in and of itself, especially because it has taken root in a place that any real Heideggerian would be horrified by: Silicon Valley — the land of Uber rather than the Übermensch. It is in the world of tech start-ups and venture capital, algorithms and IPOs, where the productive and undeniable power of errancy is praised and rewarded most vociferously. Industry mottos such as “Fail fast, fail often” or, in a more Beckettian tone, “Fail better,” encapsulate our current narratives of intellectual daring and innovation. In the domain of digital technology, small failures are seen as a sign of small thinking; large failures, meanwhile, are held up as the hallmarks of revolutionary change. Has “technicity” — as Heidegger called it — co-opted even errancy itself? Or is this just the inevitable farce following the tragedy? Whatever it is, we should remember that, though he may not have failed fast, Heidegger sure failed hard. Maybe there is a lesson in that somewhere, whether you are a Heideggerian or not.

Clothing

Iron-collared and corseted

MIKA ROSS-SOUTHALL

Denis Bruna, editor
FASHIONING THE BODY
An intimate history of the silhouette
272pp. Yale University Press. £35 (US $50).
978 0 300 20427 8

Kimberly Chrisman-Campbell
FASHION VICTIMS
Dress at the Court of Louis XVI and Marie-Antoinette
352pp. Yale University Press. £35 (US $60).
978 0 300 15438 2

Published: 19 August 2015
An American corset, c.1865; from Fashioning the Body
There’s nothing natural about clothes. Some people like to think that what they wear is free from artifice. But it never is. Clothes shape, reshape, highlight, squeeze, falsify, constrain our bodies; they signal ideals of beauty, social etiquette or morality. Those shoulder pads, little plastic stiffeners in shirt collars, push-up bras and contouring underwear in our wardrobes today are the successors of starched neck ruffs, padded codpieces, hoop petticoats, girdles and stomach belts – structuring mechanisms that work on our body’s silhouette to bring it into line with what we think we ought to look like.
How and why fashionable, often irrational, concepts of what we should wear and what is and is not beautiful take hold are questions that Fashioning the Body, a collection of essays published in conjunction with an exhibition in New York earlier this year, attempts to answer. Undergarments, or “scaffolds”, and how they construct a body’s silhouette, are the focus here. “When these articles are removed from the person wearing them, they look like carcasses, like bodies foreign to the body they dressed”, Denis Bruna writes in his introduction. “Without a body, the garment has no reason to exist; it is merely a lifeless mass of fabric, a soulless hide.” Several pages of abstract, close-up photographs of, for instance, beehive-shaped wire frames and rattan hoops suspended on white or black backgrounds prove Bruna’s point: pictured in isolation these shapes have little meaning. “In short, fashion makes the body”, he says: “there is no natural body, only a cultural body. The body is a reflection of the society that presided over its creation”.
It is not uncommon to read that fashion was invented in the Middle Ages, Bruna writes, though he warns that this consensus may stem as much from the increase in written and pictorial evidence as from any genuine change. These materials suggest that from the fourteenth century a new awareness of clothing, as a way to sculpt the body, developed. Where both men and women, Bruna shows, had worn a voluminous garment like a monastic habit – the surcoat – women now dressed in a long robe (the bliaut) often with a low neckline (sometimes provocatively bare down to the nipples), fitted tightly at the waist with laces tied at the front or back to support, compress and lift the breasts and exaggerate the hips. Although the binding of breasts was nothing new (women in ancient Rome wore bands of fabric called mamillare), this impulse was noticeably documented in the medieval period. Men, meanwhile, wore doublets – so called because the garment was made from doubled-up material, between which cotton padding or silk cocoon scraps were stitched – at first as cushioning underneath armour, and then as a way of enhancing the chest and broadening the shoulders under everyday clothing, covering the whole torso to just below the waist, or not: one of Bruna’s rich examples comes from the Parson in The Canterbury Tales, who denounces the shortness of men’s doublets that “show the boss and the shape of the horrible swollen members that seem like to the malady of hernia . . . and eke the buttocks that fare as it were the hinder part of a she-ape in the full of the moon”. An exquisite frontispiece from an illuminated Bible given to King Charles V of France by his adviser, Jean de Vaudetar, in 1372, is reproduced here, showing the King on the left sitting in an outdated surcoat and de Vaudetar kneeling on the right in a doublet that strikingly contorts his body: a swollen chest and tiny waist, like a greyhound. Still, these male and female silhouettes have both played a decisive role in Western fashion.
Shoulders were further broadened in the fifteenth century, as men added a cylindrical roll around the armholes to which ballooning fabric was attached. But by the sixteenth century, they were no longer the star attraction. The doublet was modified to become the peascod, or goose-bellied doublet, which was padded to a point at the waist like a breastplate, while more padding swelled with supports around the abdomen, sculpting a hanging paunch. This all centred on the codpiece, and Bruna dedicates an entire chapter to it. Besides being a functional opening at the crotch – indeed, earlier codpieces were a piece of cloth partly attached with buttons or eyelets at the groin – these pouches were stuffed or layered with stiff fabric to highlight and stimulate the penis. Puffed up, or trying to puff themselves up, with rank and virility, men of all social classes adopted this new-fangled appendage. Giovanni Battista Moroni’s entertaining portrait of Antonio Navagero (1565), for example, depicts the Venetian bureaucrat with a bulging red-velvet codpiece protruding from his fur-lined robe, like his shiny, ruddy nose poking out from his beard above. As Philip Stubbes pointed out in his pamphlet The Anatomie of Abuses (1583), men were “so stuffed, wadded, and sewn that they can’t even bend down to the ground”.
Women fared little better. In the sixteenth century, beauty among the elite was concentrated around the face. Women’s figures were elongated, flared and padded at the hips with the help of farthingales (a series of connected hoops made from whalebone, rattan, reeds or cord under the skirt) to hide the “carnal” parts of the body, and the head, the “noble” part of the body, was emphasized at the top with a high, stand-up collar. Later, in the seventeenth century, the same effect was achieved with a stiff white linen ruff (“the platter upon which the head was served”, Bruna tells us), also worn by men and children.
One of the most shocking items from this time, though, is the iron corset. A fascinating chapter by Bruna and Sophie Vesin focuses on the ten or so that survive in various museum collections: “more closely related to metalwork than textiles” and “at times compared to instruments of torture”, they are the oldest versions of a corset, some of which have been dated to the sixteenth and seventeenth centuries; they open and close with hinges, and are pierced, not just for decoration but to reduce their weight (those still in existence each weigh between 800 grams and a kilo). Some of the sharp ridges still have traces of velvet edging. (Just imagine the pain when caught on skin!) No visual evidence survives of their being worn, but it seems likely that some were. What we do have are written records: Eleonora of Toledo ordered two from her family’s armourer in 1549. The authors perhaps don’t make it clear enough, however, that another of their examples, the “marquise-marquis de Banneville”, is a fictional one, from the tale ascribed to the Abbé de Choisy (1695): a mother, fearing her son will be lost in battle, puts him in a metal corset to reshape his body, creating feminine hips and a bust.
The surgeon Ambroise Paré, in 1575, recommended iron corsets for “flaccid” girls who had hunchbacks. To Bruna and Vesin, fashion and orthopaedics are not always in opposition: “orthopedics, which are today exclusively a branch of medicine, were principally a social art in former times. Holding oneself erect, and staying that way, was a preoccupation of the upper classes, and iron corsets furthered this aim”. The preoccupation persists over centuries. We repeatedly come across undergarments in this book that offer the body “support”, help with “fat-busting”, toning, moisturizing and so on. A French poster from the 1950s promotes stomach bands for toddlers for their “delicate frame”, a custom that was standard between the seventeenth and nineteenth centuries when girls and boys wore the same clothes as adults, including corsets and skirts. Only after the age of six did boys abandon severe body-binding undergarments to wear pants or breeches like men. Anti-obesity belts became a popular way for men in the 1900s to compress their flab – a symbol of softness and indulgence not admired as it was in the previous century. An advert from 1928 proclaims: “Obesity makes you ridiculous. Big-bellied men, give up the figure that makes you ugly and start wearing the Franck-Braun belt”. The second half of the twentieth century gave us Issey Miyake’s plastic-moulded bustiers, and plaster corsets by Alexander McQueen, as well as a skin-tight brown leather corset, with large diagonal stitches across the chest and abdomen as if closing up a wound.
Certainly, hindering the body’s movement was deliberate in the seventeenth and eighteenth centuries. It was a way of showing off one’s wealth: the less you could do physically, the more servants you needed to do things for you. Petticoat breeches, laden with ribbons and lace, worn by men at the court of Louis XIV were described by Molière as “folly” in L’École des maris: “large rolls wherein the legs are put every morning, as it were into the stocks”, making the wearer “straddle about with their legs as wide apart as if they were the beams of a mill”. Added to this were silk stockings to slim the legs (calves were sometimes subtly padded with material to amplify lacking muscles) and precarious heels (also worn by women), often up to three or four inches high, altering one’s gait.
A few decades on in Versailles, whalebone corsets, known as stays, unforgivingly squeezed women’s shoulder blades together one on top of the other to such an extent that you could put two fingers into the hollow created down the spine. The farthingale had developed into ever-widening panniers that extended sideways from the hips. Walking with ease was a skill you had to learn. Before she was seven, the Comtesse de Genlis remembered: “I was quite surprised when I was told that I was to be given a master to teach me what I thought I knew perfectly well – how to walk . . . and to rid me of my provincial airs, I was given an iron collar”. It was also fashionable to wear shoe buckles so enormous that they could deliver glancing blows to the opposite ankle as you walked. And, of course, to wear wigs: during the reign of Louis XVI – a significant moment in European fashion history, according to Kimberly Chrisman-Campbell’s absorbing and well-illustrated survey, Fashion Victims – some men wore wigs fitted with metal, face-lifting armatures to stretch out wrinkles on the forehead, while women stiffened and enhanced the height of their own hair with pomade and false attachments. In a letter of March 5, 1775, Marie-Antoinette’s mother chastised her daughter: “They say your hair is 36 inches high from the roots, and with so many feathers and ribbons that it rises even higher! . . . A pretty young queen, full of attractions, has no need of all these follies”.
What Chrisman-Campbell does so well in this book is to explain how a new global fashion system, established in France during the eighteenth century, became political. “The sartorial restlessness . . . was symptomatic of – and, ultimately, responsible for – the gradual, inexorable unraveling of France’s social fabric that would culminate in revolution.” Three archetypes provoked and personified the country’s changes: the queen; the petite-maîtresse, a label given to urban women lower down the social scale, who were occupied in keeping up with the latest fads despite how unflattering, expensive or frivolous they were; and the marchande de modes, similar to what we would now call a designer, who perpetuated the fashion cycle by relentlessly introducing new garment constructions.
An influential individual could single-handedly garner support for current causes, and sustain or bankrupt whole branches of the country’s commerce. When Louis XVI was inoculated against smallpox in 1774, the marchandes de modes commemorated the event with the pouf à l’inoculation, a headdress representing a rising sun and the serpent of Asclepius. Hats adorned with miniature ships celebrated French naval victories, as well as showcasing the wearer’s patriotism and political engagement. Clothing was a way of telling others which plays, composers and ideas you liked. If it hadn’t been for fashion, the Enlightenment might not have spread through Europe, Chrisman-Campbell suggests.
Marie-Antoinette, however, had an inappropriate interest in clothes. Her decision to use Paris’s most fashionable marchandes de modes to dress her, rather than those officially appointed at Versailles, deviated from court protocol. She spent 258,002 livres on clothes and accessories in 1785 (more than twice her annual budget). A third of this went to her favourite marchande, Rose Bertin, whose career was made (and with time, destroyed) by the royal association: “wildly rich without being even remotely wellborn, Bertin was a walking threat to the entire social order”. Anything Marie-Antoinette wore would quickly appear in fashion plates and magazines as “à la reine” and be copied by the public. Without sumptuary laws, luxury was suddenly within reach for anyone. In the 1780s, for example, the Queen’s preference for imported muslins and gauzes over the silks produced in Lyon helped put France’s textile industry out of business. This was part of her move towards a more natural aesthetic and to fend off critics of her extravagance, but the catastrophic economic impact of her chemise à la reine – a plain, white muslin gown with a gathered neckline and sleeves, a wide sash tied at the waist and no hoop under the skirt – meant that she was never more criticized for her wardrobe. With the throne’s reputation at stake, Bertin and the Queen’s portraitist, Elisabeth Louise Vigée Le Brun, were called on to perform “sartorial damage control”. The result, a portrait, here given a full page, shows Marie-Antoinette posing in a suitably regal red velvet dress, trimmed with sable and Alençon lace (a pointed endorsement of the French lace industry), surrounded by her children. It was exhibited at the Salon in August 1787, and almost immediately withdrawn because of a public outcry. The empty frame remained on the wall of the Louvre with a note pinned to it reading, “Behold the Deficit!”
In some ways, France never escaped the potency of fashion. Looking beyond the sans-culottes, Chrisman-Campbell argues that the red, white and blue cockade became a symbol of enforced conformity to the principles of the French Revolution. By 1792, it was mandatory for both sexes, even foreign visitors to France, to wear it. “Absolute monarchy was replaced by an equally despotic form of mob rule.” The Revolution had transformed “la mode” to “le mode”, she says, acknowledging that fashions in dress were inseparable from fashions in ideas.
The history of the Revolution is dynamically told in Fashion Victims, and where Kimberly Chrisman-Campbell tries to gauge the cultural significance of clothes through art, personal memoirs and other assorted and well-chosen sources, she avoids jargon. The book is thoroughly researched (the translations from the French texts are her own) and inflected with energy. Marie-Antoinette is condemned, again; but we can see more clearly than ever why it happened.

How the Atheist Son of a Jewish Rabbi Created One of the Greatest Libraries of Socialist Literature | The Nation


30.8.15

The world is not a better place this morning. Dr. Sacks, rest in peace.


Newcastle Beach, New South Wales, Australia, 2000; photograph by Trent Parke (Magnum Photos)

1.

Nothing is more crucial to the survival and independence of organisms—be they elephants or protozoa—than the maintenance of a constant internal environment. Claude Bernard, the great French physiologist, said everything on this matter when, in the 1850s, he wrote, “La fixité du milieu intérieur est la condition de la vie libre” (“The constancy of the internal environment is the condition of a free life”). Maintaining such constancy is called homeostasis. The basics of homeostasis are relatively simple but miraculously efficient at the cellular level, where ion pumps in cell membranes allow the chemical interior of cells to remain constant, whatever the vicissitudes of the external environment. More complex monitoring systems are demanded when it comes to ensuring homeostasis in multicellular organisms—animals, and human beings, in particular.
Homeostatic regulation is accomplished by the development of special nerve cells and nerve nets (plexuses) scattered throughout our bodies, as well as by direct chemical means (hormones, etc.). These scattered nerve cells and plexuses become organized into a system or confederation that is largely autonomous in its functioning; hence its name, the autonomic nervous system (ANS). The ANS was only recognized and explored in the early part of the twentieth century, whereas many of the functions of the central nervous system (CNS), especially the brain, had already been mapped in detail in the nineteenth century. This is something of a paradox, for the autonomic nervous system evolved long before the central nervous system.
They were (and to a considerable extent still are) independent evolutions, extremely different in organization, as well as formation. Central nervous systems, along with muscles and sense organs, evolved to allow animals to get around in the world—forage, hunt, seek mates, avoid or fight enemies, etc. The central nervous system, with its sense organs (including those in the joints, the muscles, the movable parts of the body), tells one who one is and what one is doing. The autonomic nervous system, sleeplessly monitoring every organ and tissue in the body, tells one how one is. Curiously, the brain itself has no sense organs, which is why one can have gross disorders here, yet feel no malaise. Thus Ralph Waldo Emerson, who developed Alzheimer’s disease in his sixties, would say, “I have lost my mental faculties but am perfectly well.”
By the early twentieth century, two general divisions of the autonomic nervous system were recognized: a “sympathetic” part, which, by increasing the heart’s output, sharpening the senses, and tensing the muscles, readies an animal for action (in extreme situations, for instance, life-saving fight or flight); and the corresponding opposite—a “parasympathetic” part—which increases activity in the “housekeeping” parts of the body (gut, kidneys, liver, etc.), slowing the heart and promoting relaxation and sleep. These two portions of the ANS work, normally, in a happy reciprocity; thus the delicious postprandial somnolence that follows a heavy meal is not the time to run a race or get into a fight. When the two parts of the ANS are working harmoniously together, one feels “well,” or “normal.”
No one has written more eloquently about this than Antonio Damasio in his book The Feeling of What Happens and many subsequent books and papers. He speaks of a “core consciousness,” the basic feeling of how one is, which eventually becomes a dim, implicit feeling of consciousness. It is especially when things are going wrong, internally—when homeostasis is not being maintained; when the autonomic balance starts listing heavily to one side or the other—that this core consciousness, the feeling of how one is, takes on an intrusive, unpleasant quality, and now one will say, “I feel ill—something is amiss.” At such times one no longer looks well either.
As an example of this, migraine is a sort of prototype illness, often very unpleasant but transient, and self-limiting; benign in the sense that it does not cause death or serious injury and that it is not associated with any tissue damage or trauma or infection; and occurring only as an often-hereditary disturbance of the nervous system. Migraine provides, in miniature, the essential features of being ill—of trouble inside the body—without actual illness.
When I came to New York, nearly fifty years ago, the first patients I saw suffered from attacks of migraine—“common migraine,” so called because it attacks at least 10 percent of the population. (I myself have had such attacks throughout my life.) Seeing such patients, trying to understand or help them, constituted my apprenticeship in medicine—and led to my first book, Migraine.
Though there are many (one is tempted to say, innumerable) possible presentations of common migraine—I described nearly a hundred such in my book—its commonest harbinger may be just an indefinable but undeniable feeling of something amiss. This is exactly what Emil du Bois-Reymond emphasized when, in 1860, he described his own attacks of migraine: “I wake,” he writes, “with a general feeling of disorder….”
In his case (he had had migraines every three to four weeks, since his twentieth year), there would be “a slight pain in the region of the right temple which…reaches its greatest intensity at midday; towards evening it usually passes off…. At rest the pain is bearable, but it is increased by motion to a high degree of violence…. It responds to each beat of the temporal artery.” Moreover, du Bois-Reymond looked different during his migraines: “The countenance is pale and sunken, the right eye small and reddened.” During violent attacks he would experience nausea and “gastric disorder.” The “general feeling of disorder” that so often inaugurates migraines may continue, getting more and more severe in the course of an attack; the worst-affected patients may be reduced to lying in a leaden haze, feeling half-dead, or even that death would be preferable.
I cite du Bois-Reymond’s self-description, as I do at the very beginning of Migraine, partly for its precision and beauty (as are common in nineteenth-century neurological descriptions, but rare now), but above all, because it is exemplary—all cases of migraine vary, but they are, so to speak, permutations of his.
The vascular and visceral symptoms of migraine are typical of unbridled parasympathetic activity, but they may be preceded by a physiologically opposite state. One may feel full of energy, even a sort of euphoria, for a few hours before a migraine—George Eliot would speak of herself as feeling “dangerously well” at such times. There may, similarly, especially if the suffering has been very intense, be a “rebound” after a migraine. This was very clear with one of my patients (Case #68 in Migraine), a young mathematician with very severe migraines. For him the resolution of a migraine, accompanied by a huge passage of pale urine, was always followed by a burst of original mathematical thinking. “Curing” his migraines, we found, “cured” his mathematical creativity, and he elected, given this strange economy of body and mind, to keep both.
While this is the general pattern of a migraine, there can occur rapidly changing fluctuations and contradictory symptoms—a feeling that patients often call “unsettled.” In this unsettled state (I wrote in Migraine), “one may feel hot or cold, or both…bloated and tight, or loose and queasy; a peculiar tension, or languor, or both…sundry strains and discomforts, which come and go.”
Indeed, everything comes and goes, and if one could take a scan or inner photograph of the body at such times, one would see vascular beds opening and closing, peristalsis accelerating or stopping, viscera squirming or tightening in spasms, secretions suddenly increasing or decreasing—as if the nervous system itself were in a state of indecision. Instability, fluctuation, and oscillation are of the essence in the unsettled state, this general feeling of disorder. We lose the normal feeling of “wellness,” which all of us, and perhaps all animals, have in health.

2.

If new thoughts about illness and recovery—or old thoughts in new form—have been stimulated by thinking back to my first patients, they have been given an unexpected salience by a very different personal experience in recent weeks.
On Monday, February 16, I could say I felt well, in my usual state of health—at least such health and energy as a fairly active eighty-one-year-old can hope to enjoy—and this despite learning, a month earlier, that much of my liver was occupied by metastatic cancer. Various palliative treatments had been suggested—treatments that might reduce the load of metastases in my liver and permit a few extra months of life. The one I decided to try first involved my surgeon, an interventional radiologist, threading a catheter up to the bifurcation of the hepatic artery, and then injecting a mass of tiny beads into the right hepatic artery, where they would be carried to the smallest arterioles, blocking these, cutting off the blood supply and oxygen needed by the metastases—in effect, starving and asphyxiating them to death. (My surgeon, who has a gift for vivid metaphor, compared this to killing rats in the basement; or, in a pleasanter image, mowing down the dandelions on the back lawn.) If such an embolization proved to be effective, and tolerated, it could be done on the other side of the liver (the dandelions on the front lawn) a month or so later.
The procedure, though relatively benign, would lead to the death of a huge mass of melanoma cells (almost 50 percent of my liver had been occupied by metastases). These, in dying, would give off a variety of unpleasant and pain-producing substances, and would then have to be removed, as all dead material must be removed from the body. This immense task of garbage disposal would be undertaken by cells of the immune system—macrophages—that are specialized to engulf alien or dead matter in the body. I might think of them, my surgeon suggested, as tiny spiders, millions or perhaps billions in number, scurrying inside me, engulfing the melanoma debris. This enormous cellular task would sap all my energy, and I would feel, in consequence, a tiredness beyond anything I had ever felt before, to say nothing of pain and other problems.
I am glad I was forewarned, for the following day (Tuesday, the seventeenth), soon after waking from the embolization—it was performed under general anesthesia—I was to be assailed by feelings of excruciating tiredness and paroxysms of sleep so abrupt they could poleaxe me in the middle of a sentence or a mouthful, or when visiting friends were talking or laughing loudly a yard away from me. Sometimes, too, delirium would seize me within seconds, even in the middle of handwriting. I felt extremely weak and inert—I would sometimes sit motionless until hoisted to my feet and walked by two helpers. While pain seemed tolerable at rest, an involuntary movement such as a sneeze or hiccup would produce an explosion, a sort of negative orgasm of pain, despite my being maintained, like all post-embolization patients, on a continuous intravenous infusion of narcotics. This massive infusion of narcotics halted all bowel activity for nearly a week, so that everything I ate—I had no appetite, but had to “take nourishment,” as the nursing staff put it—was retained inside me.
Another problem—not uncommon after the embolization of a large part of the liver—was a release of ADH, anti-diuretic hormone, which caused an enormous accumulation of fluid in my body. My feet became so swollen they were almost unrecognizable as feet, and I developed a thick tire of edema around my trunk. This “hyperhydration” led to lowered levels of sodium in my blood, which probably contributed to my deliria. With all this, and a variety of other symptoms—temperature regulation was unstable, I would be hot one minute, cold the next—I felt awful. I had “a general feeling of disorder” raised to an almost infinite degree. If I had to feel like this from now on, I kept thinking, I would sooner be dead.
I stayed in the hospital for six days after embolization, and then returned home. Although I still felt worse than I had ever felt in my life, I did in fact feel a little better, minimally better, with each passing day (and everyone told me, as they tend to tell sick people, that I was looking “great”). I still had sudden, overwhelming paroxysms of sleep, but I forced myself to work, correcting the galleys of my autobiography (even though I might fall asleep in mid-sentence, my head dropping heavily onto the galleys, my hand still clutching a pen). These post-embolization days would have been very difficult to endure without this task (which was also a joy).
On day ten, I turned a corner—I felt awful, as usual, in the morning, but a completely different person in the afternoon. This was delightful, and wholly unexpected: there was no intimation, beforehand, that such a transformation was about to happen. I regained some appetite, my bowels started working again, and on February 28 and March 1, I had a huge and delicious diuresis, losing fifteen pounds over the course of two days. I suddenly found myself full of physical and creative energy and a euphoria almost akin to hypomania. I strode up and down the corridor in my apartment building while exuberant thoughts rushed through my mind.
How much of this was a reestablishment of balance in the body; how much an autonomic rebound after a profound autonomic depression; how much other physiological factors; and how much the sheer joy of writing, I do not know. But my transformed state and feeling were, I suspect, very close to what Nietzsche experienced after a period of illness and expressed so lyrically in The Gay Science:
Gratitude pours forth continually, as if the unexpected had just happened—the gratitude of a convalescent—for convalescence was unexpected…. The rejoicing of strength that is returning, of a reawakened faith in a tomorrow and the day after tomorrow, of a sudden sense and anticipation of a future, of impending adventures, of seas that are open again.

Epilogue

The hepatic artery embolization destroyed 80 percent of the tumors in my liver. Now, three weeks later, I am having the remainder of the metastases embolized. With this, I hope I may feel really well for three or four months, in a way that, perhaps, with so many metastases growing inside me and draining my energy for a year or more, would scarcely have been possible before.
1. Antonio Damasio and Gil B. Carvalho, “The Nature of Feelings: Evolutionary and Neurobiological Origins,” Nature Reviews Neuroscience, Vol. 14 (February 2013).
2. I also have attacks of “migraine aura,” with scintillating zigzag patterns and other visual phenomena. They, for me, have no obvious relation to my “common” migraines, but for many others the two are linked, this hybrid attack being called a “classical” migraine.
3. Aretaeus noted in the second century that patients in such a state “are weary of life and wish to die.” Such feelings, while they may originate in, and be correlated with, autonomic imbalance, must connect with those “central” parts of the ANS in which feeling, mood, sentience, and (core) consciousness are mediated—the brainstem, hypothalamus, amygdala, and other subcortical structures.