Processual Pessimism. On the Nature of Cosmic Suffering and Human Nothingness
Vladislav Pedder

It is important to understand: this mode of existence is not virtuous in the Stoic sense. Virtue presupposes a moral dimension in which actions possess intrinsic value. Moral categories have lost a universal foundation, and nevertheless a selective, contextual ethics of harm-minimization remains possible – this is already a pragmatic ethical stance, not a metaphysical truth. This is not a path to liberation in the Buddhist sense – because there is nothing to be liberated from and no one to be liberated. This is not a technique of emotion regulation in the psychotherapeutic sense – because regulation presupposes a goal, and there are no goals. It is simply a way to pass the time between birth and death that minimizes one’s own contribution to the total suffering not through heroic self-sacrifice, but through the simple recognition that any active intervention in the world is more likely to multiply suffering than to reduce it.

This is not proposed as a model for imitation; you are free to do whatever you wish until you encounter external resistance. There is no claim that this is “right” or “better.” Most people will continue to create projects, build careers, start families, participate in economic cycles, believe in the possibility of improvement. That is their path. To try to convince them of the contrary would be just another form of active intervention, an attempt to impose one’s own vision. But for those who have arrived at certain conclusions about the nature of reality and see no grounds for fuss, there is the possibility simply to stop fussing.

The difference between this view and the “liberating” ontological practices is fundamental. The latter present themselves as solutions, as paths toward something better – toward inner peace, toward “liberation.” This view offers no such path. If life is meaningless and will end in death (and nothing else is given), then heroic efforts to fill it with meaning are merely another form of self-deception, requiring a constant expenditure of energy to maintain yet another cognitive distortion.

I hope this clarification puts everything in its place. Thus the position presented in the book is neither existentialism nor nihilism in the conventional sense. It is a consistent pessimism that rejects not only consolatory meanings, but even the very possibility of finding consolation in any action, practice, or “acceptance.” All known philosophies of acceptance, humility, or emotion management are only more refined forms of self-deception, ephemeral constructions. The second part, written in the voice of Professor P., destroys even this final illusion, showing that experience itself is primary in relation to all these constructions of the mind, and that there is no exit. In the third part, The Experience of the Tragic, written in my own voice, I clarify all these points once more so that no doubt remains about what has been said. Nothing is more real than pessimism – not because pessimism is “realism,” but because it alone refuses consolatory illusions and self-deception. It does not deceive you – and if it does deceive you, it will certainly not harm you, for what can harm a human being more than life itself? It simply reveals reality as it is – and this reality does not require our acceptance, because it exists regardless of whether we accept it or not, whether we hide or consciously drag our existence to the final day. The best thing a human being can do in life is to do nothing. Having dealt with this question, we may begin our path.

Part I. Ontological Foundations

Flat Ontology of Process

Traditional European thought, from Christian anthropology to phenomenology, insisted on the special status of the human being, deriving it from the act of self-consciousness: “I think, therefore I exist.” This position not only consolidated anthropocentrism, but also made the subject into an ontological foundation, a point of reference for the world. However, if we look closely at how the very sense of “I” arises, it becomes clear that it is not a source but a result of information processing – a temporary and fragile product of the differentiation and integration of information, of metabolic and neural processes that unfold without any “observer.”

What is called consciousness is only a process, stable only so long as it is supported by flows of energy, information, and interactions. Thinking, accordingly, is a processual activity. As soon as these flows change, consciousness collapses – not because an “I” disappears, but because it never existed as a substance in the first place. Instead, there is only a non-subjective dynamics, a process that I will describe further.

Such an approach makes it possible to step beyond not only anthropocentrism, but also the very opposition of “subject – object.” The point is not to “return the human being to nature,” but to see that nature never knew the human as a separate category. Living nature is not the center of the Cosmos, but one among many temporary forms of organization of process, as vulnerable and transient as any other. Culture, ethics, creativity – none of this is abolished, but it is deprived of its transcendent status. They turn out not to be manifestations of “spirit,” but complex, historically conditioned modes of stabilizing experience, which themselves obey the laws of thermodynamics, information, and decay.

The contemporary rejection of biocentrism represents a consistent philosophical movement, beginning with the deconstruction of anthropocentrism and reaching its radical phase in the overcoming of naturocentrism as a whole. This evolution is vividly expressed in the work of Jean-Marie Schaeffer, who enacts his rejection of anthropocentrism through a critique of the “Thesis of Human Exceptionalism.” Schaeffer identifies the structures of anthropocentrism by analyzing the Cartesian cogito, demonstrating that the claim to the uniqueness of human consciousness is untenable. However, while Schaeffer stops at the boundary of the biological, disputing human exceptionality without questioning the distinctiveness of the living, other philosophers have gone considerably further. Thinkers such as Manuel DeLanda, Graham Harman, and Levi Bryant initiated a broader perspective, first rejecting consciousness as a philosophically privileged phenomenon, and then discarding the very vertical ontology that structures hierarchies among nature, life, and objects. Their aim became the development of a flat ontology, in which humans, artifacts, natural phenomena, and technological objects coexist on a single ontological plane without any center or primacy. Thus, the final point of this trajectory is a complete departure not only from anthropocentrism and biocentrism, but also from any naturocentrism that dissolves heterogeneous entities into a single whole.

But let us not get ahead of ourselves. In his book, Jean-Marie Schaeffer conducts a meticulous deconstruction of the complex of assumptions we have come to designate as the “Thesis of Human Exceptionalism,” and in this deconstruction, the center of gravity invariably falls on the problem of the “I” and on the legacy of Cartesianism. For Schaeffer, the Cartesian cogito is not merely a historical argument; it functions as a methodological device through which an entire system of epistemological and ontological privileges is defended: the self-referentiality of the statement “I think” is attributed immunity against doubt, and on this basis an extended conclusion about the nature of the human as a thinking substance is constructed. Schaeffer clearly demonstrates both the power of this device and its limits: self-referentiality indeed grants the cogito a special performative force, but this force is not equivalent to a proof of the essential nature of the “I.” Descartes sought to derive from the immediate intuition of existence not only the fact of the speaker’s being, but also a characterization of the nature in which this being is realized; Schaeffer emphasizes that such a transition from fact to essence is unjustified when taking into account modern knowledge of biology, neuroscience, and the social nature of human life.

Schaeffer’s key idea is to show that the Cartesian defense of the “I” operates as a strategy of immunization: it delineates the boundaries of what is considered admissible in human knowledge and refuses to accept “externalist” evidence originating “from the third person.” Due to this strategy, any external knowledge about the human being can easily be declared irrelevant to understanding their true nature, because the true nature is supposedly revealed only in the act of self-consciousness. Schaeffer terms this segregationism: the Cartesian defense renders philosophy and the humanities partially insulated from naturalistic explanations, and this, in his view, is precisely what makes the Thesis of Human Exceptionalism resilient, regardless of empirical advances in biology.

Through an analysis of phenomena that disrupt the authentication of mental activity – auditory hallucinations, delusions of “inserted thoughts,” and similar disorders – he demonstrates that the very fact of “I think” can be experienced as non-self; the sense of authorship of thoughts and the sense of agency are separable and susceptible to failure. These clinical cases show that the immunity of the cogito does not guarantee that we are dealing with a monolithic, inflexible center of consciousness; in practice, the act of “I” is vulnerable to distribution, fragmentation, and erroneous attribution. Consequently, even where the performative force of the cogito remains undeniable, its conclusions regarding the nature of the “I” lose their persuasive power.


The development of these ideas can be traced in the works of the philosopher and cognitive scientist Daniel Dennett. Contemporary debates on the “hard problem of consciousness” continue to gain momentum. I have already discussed David Chalmers in the previous book, but I do not share his position on consciousness, although, as you will see in the second part of this book, I acknowledge his arguments regarding cognitive functions. The hard problem of consciousness is resolved if consciousness as a phenomenon does not exist; in this sense, I am an eliminative materialist. Dennett rejects Chalmers’ panpsychism; he is a physicalist, and his approach to consciousness can be called illusionism, which is much closer to eliminativism, but it does not eliminate the phenomenon of “consciousness” – rather, it provides a new explanation, stripped of any magical properties. From his book From Bacteria to Bach and Back, it becomes clear that his critique of the Cartesian subject is even more radical than Schaeffer’s. Whereas Schaeffer analyzes the cogito as an erroneous transition from the fact of thinking to a substantial “I,” Dennett goes further – he questions the very existence of that central subject which Cartesian tradition so vigorously defends. For Dennett, the “I” is a late product of evolution, a kind of “narrative center” arising from the intertwining of multiple cognitive processes. He compares the self to a theoretical construct in physics – a convenient reference point lacking any substantiality.

In the context of Schaeffer’s critique of segregationism, Dennett’s position appears as a logical completion: if Schaeffer shows that the Cartesian “I” cannot serve as a foundation for human exceptionalism, Dennett demonstrates that this “I” simply does not exist as a unified, coherent entity. His famous “multiple drafts” model describes consciousness as the product of distributed neural processes rather than a single inner theater. Interestingly, unlike radical eliminativists, Dennett preserves the self as a useful illusion – similar to how the center of mass remains a useful abstraction in physics, even though it does not exist as a discrete entity in reality.

Dennett provides a powerful conceptual apparatus for demystifying the subject, but his caution regarding elimination leaves room for more decisive conclusions. His analysis shows that the Cartesian “I” is not merely erroneous – it is the result of a kind of simplification (the word “illusion” is poorly suited here, because if it were an illusion, there would have to be a reality beyond it, which does not exist at all), created by evolution to simplify complex cognitive processes.

But if the “I” is a process, not a substance, then its ontological status must not only be downgraded but reinterpreted as a temporary pattern within the flow of mental events. Dennett stops halfway, preserving a functional role for the self; I, however, insist that disintegration is not a side effect but a fundamental property of any formation. And this brings us to the discussion of flat ontology. To describe it, we turn to Ray Brassier’s article Deleveling: Against “Flat Ontologies”, where he elaborates the essence of flat ontology in detail:

“[…] The expression ‘flat ontology’ has a complicated genealogy. It was originally coined as a pejorative term for empiricist philosophies of science by Roy Bhaskar in his 1975 book, A Realist Theory of Science. By the late 1990s, it had begun to acquire a positive sense in discussions of the work of Deleuze and Guattari. But it only achieved widespread currency in the wake of Manuel DeLanda’s 2002 book about Deleuze, Intensive Science and Virtual Philosophy. More recently, it has been championed by proponents of ‘object-oriented ontology’ and ‘new materialism’. It is its use by these theorists that I will be discussing today. I will begin by explaining the ‘four theses’ of flat ontology, as formulated by Levi Bryant. Bryant is a proponent of ‘object-oriented ontology’, a school of thought founded by Graham Harman. In his 2010 work The Democracy of Objects, Bryant encapsulates flat ontology in the following four theses:

Thesis 1: “First, due to the split characteristic of all objects, flat ontology rejects any ontology of transcendence or presence that privileges one sort of entity as the origin of all others and as fully present to itself.”

Thesis 2: “Second, […] the world or the universe does not exist. […] [T]here is no super-object that gathers all other objects together in a single, harmonious unity.”

Thesis 3: “Third, following Harman, flat ontology refuses to privilege the subject-object, human-world relation as a) a form of metaphysical relation different in kind from other relations between objects, and that b) refuses to treat the subject-object relation as implicitly included in every form of object-object relation.” The basic idea is that, unlike Descartes, Kant and other philosophers who put epistemology before ontology, flat ontology does not begin by negotiating conditions of cognitive access to the world. It begins by treating the human-world relation, i.e. our relation of cognitive access to things, as simply another thing in the world, which is to say, an inter-object relation. It refuses the claim that this epistemic or cognitive relation is inscribed in all objectifications, so that anything we say or do with objects reflects or encodes some kind of conceptual or practical transaction.

Thesis 4: “[F]ourth, flat ontology argues that all entities are on equal ontological footing and that no entity, whether artificial or natural, symbolic or physical, possesses greater ontological dignity than other objects. While indeed some objects might influence the collectives to which they belong to a greater extent than others, it doesn’t follow from this that these objects are more real than others. Existence, or being, is a binary such that something either is or is not.” These four theses taken together are supposed to entail something that has been called ‘anthropodecentrism’. Bryant explains this in the following way: In this connection, flat ontology makes two key claims. First, humans are not at the center of being, but are among beings. Second, objects are not a pole opposing a subject, but exist in their own right, regardless of whether any other object or human relates to them. Humans, far from constituting a category called “subject” that is opposed to “object”, are themselves one type of object among many. What is significant are the denials that accompany the four theses of flat ontology. According to the first thesis, there is no transcendence: forms, species, kinds, archetypes, propositions, laws, and other abstract entities are disallowed. The flatness affirmed by flat ontology is the flatness of a more or less differentiated but nevertheless level ontological playing field. According to the second thesis, there is no world: no totality, universe, One-All, etc. This claim is not peculiar to flat ontologists; other contemporary philosophers, including Markus Gabriel and Alain Badiou, defend some version of it. According to the third thesis, there is no constituting subjectivity: no pure Apperception, Geist, consciousness, Dasein, etc. Flat ontologists do not begin by identifying subjective conditions of epistemic access to reality.

According to the fourth thesis, there is no appearance/reality duality: what is, is, what is not, is not. Here we have an interesting reassertion of the Parmenidean thesis discussed in Plato’s Sophist. For Plato, philosophy or dialectics is predicated on the subversion of this Parmenidean interdiction on asserting the being of non-being or non-being of being: dialectics affirms the mixture of being and non-being. Flat ontology, in contrast, treats being as univocal: things can only be said to be in a single sense. But the claim about putting entities “on an equal ontological footing” implies that there are no degrees of being, just as there is no distinction between being and non-being, or between reality and appearance. Of course, this means that flat ontologists deny Plato’s claim that it is necessary to think the interpenetration of being and non-being, which is the task of dialectics.”

The critique offered by Brassier exposes the vulnerable points of flat ontology, but it does not invalidate its philosophical productivity. I accept its rejection of privileged entities, while also acknowledging the necessity of distinguishing between epistemological and ontological levels – a distinction that will become clearer in the subsequent analysis of process and experience. Flat ontology attempts to realize a thoroughgoing anthropo- and bio-decentrism. It moves far beyond the critique of human exceptionalism, subjecting to radical doubt the very idea of the privileged status of life as such. Life is no longer the center of the cosmos, but merely one among many temporary and co-equal modes of material organization, alongside other formations. This is the final point of the trajectory that begins with the critique of the Cartesian “I”: a world without a subject, without a biological center, without any hierarchy between the organic and the inorganic. Yes, it has its shortcomings; however, the essential contribution of flat ontology lies in compelling us to think beyond not only anthropocentrism, but any hierarchy grounded in the supposed “specialness” of life, mind, or metaphysical force. It is simply a necessary step.

Fractal Nature of Determinism

In the previous work I argued that human behaviour and consciousness are not the result of free choice but the lawful consequence of neurobiological, genetic, hormonal, and environmental factors. The work of Robert Sapolsky has shown that free will is an adaptive illusion necessary for social functioning yet incompatible with a scientific understanding of causation. The brain produces a sense of control, while at a deeper level all our decisions are determined by a chain of events beginning long before the moment of conscious choice.

However, the conclusion of the first part left open the fundamental question of determinism beyond human life and behaviour. For the Cosmos as a whole, classical linear Laplacean determinism now appears untenable in light of contemporary science. Quantum mechanics introduces fundamental indeterminacy at the microscopic level. Chaos theory demonstrates exponential sensitivity to initial conditions, rendering long-term prediction impossible. Self-organization and the emergence of novel forms in nature and society seem incompatible with rigid predestination. How then can determinism be preserved without collapsing into indeterminism or mysticism?

The answer requires a radical reconception of the concept of determinism itself. The linear model of causality, where one cause sequentially produces an effect, must be replaced with a fractal model in which causality is distributed, recursive, and self-similar at all scales. Fractal determinism does not deny quantum randomness, chaotic unpredictability, or spontaneous self-organization. On the contrary, it integrates them into a deeper notion of necessity in which randomness appears as a mode of manifestation of lawful structure, and novelty is a lawful consequence of complex interactions among many causal lines.

If linear determinism assumed that the future is already encoded in the initial conditions, like a pre-written plan, fractal determinism asserts that the future is not pre-scripted but inevitably arises from the system’s self-organization. If classical determinism sought a prime cause and a final goal, fractal determinism describes being as a self-generating process without an external source or teleological direction. If the traditional approach set necessity and chance in opposition, the new perspective treats them as complementary aspects of a single process.

This model finds expression in a number of fundamental mechanisms described within fractal geometry and the theory of complex systems. The observation of self-similarity – that form repeats across scales – emerged from practical problems of measurement. Lewis Richardson’s famous coastline paradox showed that the more finely one measures an indented line, the longer it becomes. This observation received mathematical formulation in the work of Benoit Mandelbrot, who laid the foundations of fractal geometry. Mandelbrot introduced the concept of fractal dimension, which permits the description of irregular, self-similar forms widespread in nature: coastlines, river networks, the vascular system of the lungs, and the distribution of neural activity in the brain.
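As a purely illustrative aside (not part of Mandelbrot’s own exposition), the Richardson effect and the resulting fractal dimension can be computed directly for the Koch curve, an idealized “coastline” whose measured length grows without bound as the ruler shrinks:

```python
import math

# Richardson's coastline effect on the Koch curve: at iteration n the
# "ruler" has length (1/3)**n and the measured length is (4/3)**n --
# the finer the ruler, the longer the coastline appears.
rulers = [(1 / 3) ** n for n in range(1, 8)]
lengths = [(4 / 3) ** n for n in range(1, 8)]

# Mandelbrot's fractal dimension D follows from the Richardson law
# L(r) ~ r**(1 - D): the slope of log L against log r equals 1 - D.
slope = (math.log(lengths[-1]) - math.log(lengths[0])) / (
    math.log(rulers[-1]) - math.log(rulers[0]))
D = 1 - slope
print(round(D, 4))  # ≈ 1.2619, i.e. log 4 / log 3
```

A smooth line would yield D = 1; the Koch curve’s D ≈ 1.26 quantifies exactly the self-similar irregularity that defeats ordinary measurement.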

Parallel to this, the theory of nonlinear dynamics and deterministic chaos developed. Concepts such as strange attractors, bifurcations, multistability, and self-organized criticality emerged. These research directions provided tools for measuring the scale-structure of systems via power laws, autocorrelations, and fractal dimension, and for modelling how simple local rules generate complex global patterns. To convey the idea of a fractal basis for determinism, it is sufficient to point to these terms without delving into their technical definitions. Fractal determinism takes these empirical and formal results and builds from them a new ontology of causality.

The first fundamental mechanism of fractal causality is feedback. Any action alters the environment, and the altered environment acts back upon the source of change. This cycle repeats, and through repetition a stable direction of change is formed. A classic instance of feedback is the river channel. The flow of water alters the bank; the altered bank changes the current; the current again acts on the bank. Gradually a stable form emerges, although no single act was precomputed or predetermined by an external force. The form of the channel is the result of continuous interaction between flow and bank, where each state is determined by the previous one and determines the next.

Such cyclical causality dissolves the classical distinction between cause and effect as separate, sequential events. In fractal determinism cause and effect merge into a continuous process of mutual determination. The river-and-its-channel is a single process in which the division into active and passive, forming and formed, is conditional and depends on the viewpoint. This principle applies to any system. Neural networks in the brain are shaped by experience, yet the formed networks determine what experience will be perceived and how it will be integrated. Social institutions are created by people, but then the institutions shape the people who recreate them. Economic systems are produced by individuals’ decisions, but those systems define the space of possible decisions.

The second mechanism of fractal causality is threshold response. Not every perturbation triggers a process of development. To move from an insignificant state to a noticeable one, the cumulative effect must cross a certain threshold. Below the threshold perturbations are damped by the system; above it a rapid redistribution begins, often cascading in character. This property explains why many changes appear random and unpredictable. A system can accumulate tension for a long time without visible change, then suddenly transition to a new state. The concept of self-organized criticality, formulated by Per Bak, Chao Tang, and Kurt Wiesenfeld, describes systems that naturally evolve toward a critical state in which the smallest disturbance can trigger an avalanche of any size. A classic example is a sandpile onto which grains are slowly dropped. Most grains provoke minor readjustments, but occasionally avalanches of varying scale occur, whose distribution obeys a power law. The system reaches the critical state by itself without external tuning of parameters. This implies that many natural and social systems constantly reside on the edge between stability and chaos, where the accumulation of imperceptible changes can suddenly produce dramatic reorganization.
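The Bak–Tang–Wiesenfeld sandpile is simple enough to simulate directly. The sketch below (grid size and grain count are arbitrary choices for illustration) reproduces the qualitative picture: most drops change little or nothing, while rare cascades span much of the pile, with no external tuning of any parameter:

```python
import random

def sandpile_avalanches(size=20, grains=20000, seed=0):
    """Drop grains one at a time onto a 2-D Bak-Tang-Wiesenfeld sandpile.

    A cell holding 4 or more grains topples, sending one grain to each
    of its four neighbours; grains falling off the edge are lost.
    Returns the size (number of topplings) of the avalanche triggered
    by each dropped grain.
    """
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        x, y = rng.randrange(size), rng.randrange(size)
        grid[x][y] += 1
        topplings = 0
        unstable = [(x, y)]
        while unstable:
            i, j = unstable.pop()
            while grid[i][j] >= 4:      # topple until this cell is stable
                grid[i][j] -= 4
                topplings += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < size and 0 <= nj < size:
                        grid[ni][nj] += 1
                        if grid[ni][nj] >= 4:
                            unstable.append((ni, nj))
        sizes.append(topplings)
    return sizes

sizes = sandpile_avalanches()
print("largest avalanche:", max(sizes), "topplings;",
      sizes.count(0), "of", len(sizes), "drops caused none")
```

Plotting a histogram of `sizes` on log-log axes would show the power-law tail the text describes: no characteristic avalanche size, only a scale-free distribution poised between stability and chaos.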
