Wednesday, November 23, 2016

Fundamental Values and the Relevance of Uncertainty

As argued in a previous essay, reflection on fundamental values seems to stand among the most important things we could be doing. This is really quite obvious: working effectively toward a goal requires knowing what that goal is in the first place. And yet it does not seem to me that we, as purportedly goal-oriented “world improvers” or “effective altruists”, have much clarity about what our goal is, nor do we seem to be working particularly hard on gaining such clarity. This, I think, is unreasonable. How can we systematically try to help or improve the world as much as possible if we do not have decent clarity about what this in fact means? We can’t. We are trying to optimize an ill-defined function. And that is bound to be a confused endeavor.

It is tempting to be lazy, of course, and to think that we at least have some idea about what a better world looks like — “one that contains less unnecessary suffering”, for instance — and that this suffices for all intents and purposes. Yet that would be a fatal mistake, since fundamental values are what the sensibility of the pursuit of any action or cause rests upon. Everything depends on fundamental values. And even apparently small differences in fundamental values can imply enormous differences in terms of what we should do in practice. This means that a three-word sentiment like “minimize unnecessary suffering”, while arguably a good start, will not suffice for all intents and purposes (after all, what exactly do “minimize”, “unnecessary”, and “suffering” mean in this context? E.g. does “minimize” allow outright destruction of sentient beings or not?). We need to be as elaborate and qualified as possible about fundamental values if we are to bring about the most valuable/least disvaluable outcomes.

Indeed, I would argue that the unmatched importance of fundamental values, combined with the fact that serious reflection about fundamental values seems widely neglected, implies that such reflection itself stands as a promising candidate for being the most important cause of all, however detached from real-world concerns it may seem. After all, clarification of fundamental values is all about clarifying what is most important, which almost by definition makes it the most important thing we could be doing. Only when we are reasonably sure that we have a decent map of the landscape of value, a decent idea of what the notional “utility function” we are trying to optimize looks like, can we move effectively toward optimizing accordingly.

“Improving the World” — Two Questions Follow

If we have the goal of improving the world, this gives us two basic things to clarify: 1) what does improving the world mean? In other words, what does the goal we are trying to accomplish look like in more specific terms? And 2) how does one accomplish that?

We have an “end question” and a “path question”. And I would argue that we are not sufficiently aware of this distinction, and that we are generally far too fixated on paths compared to ends. We are not wired to reflect on goals, it seems, at least not as much as we are wired to pursue an already given goal. We are optimizers more than we are reflectors, which makes sense from an evolutionary perspective. Yet it makes no sense if we are serious about “improving the world”. Success in this regard requires reflection on the aforementioned “what” question, and perhaps far more resources should be spent on reflecting on this question than on attacking the “how” question, since, again, the sensibility of any path depends on the sensibility of the end that it leads to. Paths depend on ends.

What Does Clarification of the “What” Question Look Like?

To many of us, answering this “what” question has consisted in deeming utilitarianism to be the correct, or at least our preferred, moral theory, and then jumping to the path stage from there: how do we optimize the world based on this theory?

Yet this is much too vague an answer to warrant moving on to the “how” stage already. After all, what kind of utilitarianism are we talking about? Hedonistic or preference utilitarianism? Even similar versions of these two theories often have radically different practical implications. More fundamentally still, do we subscribe to classical or negative utilitarianism? The differences in terms of practical implications between the two can be extreme.

What Kind of that Kind of Utilitarian?

And yet much still remains to be clarified at the “what” stage even if we have these questions settled. For instance, if we subscribe to a version of negative hedonistic utilitarianism — i.e. hold that reducing conscious experiences of suffering is our highest moral obligation — this still leaves us with many open questions. For to say that our focus is purely on suffering still leaves open how we prioritize different kinds of suffering. Crucially: are we much more concerned with instances of extreme suffering than we are with comparatively milder forms, perhaps even so much more that we consider it impossible for any number of mildly bad experiences to be worse than a single very bad one? And, similarly, do we consider it impossible for any number of very bad experiences to be worse than a single even worse experience, and so on? We may place any number of points along the continuum of more or less horrible forms of suffering where no amount of less bad experiences can be considered as bad as the suffering at that given point, and the differences in terms of the practical implications that follow from views with and without such points can again be enormous (for instance, given such a “chunked” view of the relative disvalue of suffering, averting the risk of instances of maximally optimized states of suffering — “dolortronium” — would seem to dominate everything else in terms of ethical priorities, while other views might only consider it yet another important risk among many).

Is the Continuum Exhaustive or Not?

Another thing that would seem in need of clarification is whether the continuum of more or less (un)pleasant experiences provides an exhaustive basis for ethics, as opposed to merely being an extremely significant part, which it no doubt is on virtually any ethical view that has ever been defended. For example, if we imagine a world inhabited by a single person who suffers significantly and who is destined to suffer in this way for the rest of their life, yet who nonetheless very much wants to live on, would it be right for us to painlessly kill this person if we could? It would seem that we are obliged to do so on hedonistic versions of utilitarianism, and yet saying that such an act is permissible, much less normative, seems highly counterintuitive, and it seems to suggest that where on the continuum of more or less (un)pleasant states of consciousness a person’s experiences fall is — while highly important — not all that matters. One may consider this a strong reason in favor of granting significant weight to preferences in one’s account of what matters.

Yet to consider both the quality of experiences and preferences is arguably still not sufficient when it comes to what is ethically relevant. For imagine that we again have a world inhabited by just one person, a person who experiences the world like you and I do, with the (admittedly rather significant) exceptions that their experience is always exactly hedonically neutral — i.e. neither pleasant nor unpleasant — and that they have no preferences. If preferences and hedonic tone exhaustively account for what makes a being ethically significant, it would seem that there is nothing wrong with killing this being. Yet this does not seem right either, at least not to me. After all, this being does not want to die, so who are we to deem their death permissible? What if we learned that one of our fellow beings in this world actually experiences the world in this way? Would this mean that they are not inherently valuable as individuals? That does not seem right to me.

The Nature of Happiness, Suffering, and Persons

Even if we have clarified the questions above and know that our goal is, say, to minimize extreme suffering and premature death as much as possible (among other things), this still leaves an enormous research project related to the “what” question ahead of us. For what is suffering and what is a person? While the answers to these questions may be fairly clear in phenomenological terms (although perhaps they are not), they are far from clear when we speak in physical terms. What are suffering and happiness in terms of physical states? And what are the differences between the physical signatures of mild and extreme forms of suffering? More generally, what is a person in physical terms? In other words, what does it take to give rise to a unitary conscious mind? Without decent answers to these questions concerning the nature of our main objects of concern, we cannot hope to act effectively toward our goals.

And yet barely anyone seems to have made the clarification of these crucial questions a main priority (David Pearce, Mike Johnson, and Andrés Gómez Emilsson are notable exceptions). Clarification of this aspect of the “what” question must also be considered a neglected cause (one can say that there is a phenomenological side of the “what” question, where we discuss what is valuable in terms of conscious states [e.g. suffering and happiness], and a physical one, where we describe things in terms of physical states [e.g. brain states], and both are extremely neglected, in my view).

Can digital computers mediate a unitary mind that can suffer? Can empty space? If so, does empty space contain more suffering than what we would expect there to be in the same amount of space filled with digital computers of the future? These may seem crazy questions, but much depends on our answers to them. Acting sensibly requires us to have as good answers to such questions concerning the basis of consciousness as we can, as quickly as we can. We need to attack these “what” questions concerning the nature of consciousness, including happiness and suffering in particular, with urgency.

Reflection on Values — Win-Win

As mentioned, small differences in fundamental values can yield enormous differences in terms of practical implications, which suggests that it makes good sense to spend a significant amount of our resources on becoming more qualified about them. And this applies whether we are moral realists or simply wish to optimize “what we care about”. For a moral realist, there are truths about what has value, and we can discover these or fail to. Similarly, for a moral subjectivist who wishes to optimize “what they care about”, deep reflection seems equally reasonable, since in this case, too, there are truths to be discovered in some sense: truths concerning what one in fact cares about.

Why, then, do we see so little discussion concerning fundamental values? After all, having a broad discussion about them seems likely to help us all become more qualified in our reflections — to reconsider and sharpen our own views — and not least to bring others closer to our own view by causing them to update. And even if discussion only causes others to move away from one’s view, this seems like a welcome call for serious reexamination, which can then be done based on the reasons given for the rejection of one’s view. It seems like a win-win game that we are all guaranteed to gain from, yet we refuse to show up and claim the reward.

One may object that all this reflection is a distraction from the real suffering going on today that we should address with urgency. While I am quite sympathetic to this sentiment, and share it to some degree, the urgency and magnitude of suffering going on right now does not imply that we should reflect less. After all, the primary reason that many of us have made reducing suffering a priority in the first place was reflection, and the same applies to how we got to care about the biggest specific sources of suffering we are concerned with, such as factory farming and suffering in nature — we came to realize the importance of these via reflection, not by optimizing already established goals. And who is to say that there may not be more forms of suffering we are still missing, even forms of suffering that could be taking place today? Moreover, the fact that the far future is much bigger than the immediate future, and therefore will contain much more suffering by any standard, implies that if we truly are concerned with reducing suffering, starting today to reflect on how we can best reduce suffering in the future seems among the most sensible things we can do. Even in a world full of urgent catastrophes, we still urgently need to reflect.

However, saying that reflection should be a priority is of course not to say that we should not also be focused on direct interventions. After all, experience with interventions is likely to teach us many things and provide valuable input for our reflections about value, and about what can be achieved in the space of more or less valuable states of the world.

What Is Value and What Is Valuable? — My Own View

In the hope of encouraging such thinking and discussion about fundamental value, I shall here present my own idiosyncratic, yet unoriginal, account of what value is and what has value. This, in my view as a moral realist, is an attempt to get the facts right about what value is, which is not to say that I do not maintain considerable uncertainty about it (as we should in the case of all difficult factual questions).

I believe value is a property of the natural world — more specifically, a property of consciousness.

Perhaps I should be intensely skeptical of myself already at this point. For doesn’t it seem suspiciously self-centered for me, as a conscious being, to claim that consciousness is what matters, indeed all that matters, in the world? Why should only conscious beings matter? Why not something else?

This may indeed seem strange, but I think this skepticism gets everything backwards. Contrary to common sense, it is not the case that we have a general Platonic notion of “value” drawn out of some neutral nowhere that we then arbitrarily assign to consciousness. Rather, value emerges and is known in conscious experience, and might then be projected onto the “world out there”. In my view, value, like the color red, does not exist, indeed cannot exist, outside conscious experience, because, like red, value is itself phenomenal in nature. We may talk about non-phenomenal value, and even do so in meaningful ways — we can for instance talk about instrumentally valuable things — just like we can talk about red objects “out there”, yet, ultimately, “value” and “red” are not external to consciousness; they are properties/states of it.

“But how does this fit with the thought experiment above that strongly hints that preferences seem intrinsically important as well, and the thought experiment that hinted that even in combination, hedonic tone and preferences do not seem able to provide an exhaustive account of what is valuable?”

Preferences do indeed matter, yet in what sense can they be considered different from our conscious states? Preferences are contained in our experience moment-to-moment, and if a state of experience contains a preference to continue that state, this conscious state can, I would argue — even if it contains pain — be considered valuable in a broader sense of value, yet one that still only places value in experience itself. Preferences are yet another aspect of experience, and a highly significant one in terms of value.

Another, more controversial response one might give is that our healthy social intuitions, which are of great instrumental value — such as a relentless insistence on respect for the preferences and lives of others — cause us to overestimate the badness of death in both thought experiments above. (In our social world, where embracing the notional sanctity of life is indeed of immense instrumental value, this reaction is reasonable, and hence not an overestimate.) After all, we do not find it terribly bad, if bad at all, when a person who very much wants to stay awake falls asleep against their will. Yet painlessly turning off someone’s consciousness against their will is, from the perspective of the person turned off, in effect the same as falling asleep, modulo the secondary effects on others and on ourselves — effects we were supposed to ignore in the thought experiments above, given that we had a world inhabited by just one person, which should hold all else equal. One might object that in the case of sleep, one will wake up again, yet we could also say in the case of “turning someone off” that we could turn the person on again eight hours later. This hardly makes us see the turning off as less bad, especially if we continue turning the person off like this every day. The fact that the turning off is done by someone else, and that that someone is we ourselves of all moral agents in the universe, just does not sit right with our social and moral intuitions — intuitions that are, to a first approximation, “afraid to get punished/stand outside” intuitions.

We have strong intuitions about death being a bad thing, which is not at all hard to make sense of in evolutionary terms. In our evolutionary past, we needed our fellow beings whom we cared about to be around for the sake of our survival and for our genes to be propagated. Largely for that reason, it seems safe to say, we have evolved to feel great sorrow and pain when those we care about die, and to perceive their deaths as very bad. Yet is the badness in the death or in our perception of it?

I do not have clear answers to these difficult questions. However, what I think is clear is that value ultimately pertains to consciousness and consciousness only. This is the common thread in both thought experiments above: we have pitted hedonic tone and preferences against each other, and also removed them both, yet consciousness was there in the subjects in both cases, and this does seem to be the undeniable precondition for there to be any value, and hence for any ethical concern to meaningfully apply. If we were talking about unconscious bodies, there would be no dilemma. The only remaining problem would then be the secondary effects of the kind Kant worried about with respect to our harming non-human beings: that hurting them might make us more prone to harming “real” moral subjects.

In conclusion, the claim that value is something found only in consciousness holds, in my view. And not only do I hold that value is ultimately contained in this singular realm that is consciousness, I also think we can measure it along a single scale, at least in theory if not in practice. In other words, I find value monism compelling (see the link for a good case for it).

This is not to say that (dis)value is a simple phenomenon, much less something that can be easily measured. Yet it is to say that it is something real and concrete that we can locate in the world, and something there can be more or less of in the world, which of course still leaves many questions unanswered.

Positive and Negative Value — Commensurable or Not?

For to claim that value comes down to facts about consciousness is rather like saying that science is about uncovering facts about the world — it says nothing about what those facts are. For example, saying that there is positive value in happiness while there is negative value in suffering does not imply that these values are necessarily commensurable. Many have doubted that they are. Karl Popper was one such doubter: “[…] from the moral point of view, pain cannot be outweighed by pleasure, and especially not one man's pain by another man's pleasure. Instead of the greatest happiness for the greatest number, one should demand, more modestly, the least amount of avoidable suffering for all […]”

So is David Pearce: “No amount of happiness or fun enjoyed by some organisms can notionally justify the indescribable horrors of Auschwitz.”

I find that I agree with many of these asymmetrical intuitions — at least when it comes to extreme suffering (to say that extreme suffering cannot be outweighed by any amount of happiness is not to say that this also applies to mild forms of suffering; too often discussions get stuck in this latter dilemma, happiness vs. mild suffering, rather than the former, happiness vs. extreme suffering, where it is much harder to defend a non-negative position).

There is such a thing as unbearable suffering, yet it seems that there cannot be anything analogous on the scale of happiness. The expression “unbearable levels of happiness” makes no sense. Another thing that we find in suffering, at least in extreme suffering, that we do not find in happiness is urgency. There is no urgent obligation for us to create happiness. For instance, imagine that we are at an EA conference and someone shows up with happiness pills that would make everyone maximally happy. Would there be any urgency in giving everyone such a pill as quickly as possible? Would and should we rush to distribute this pill? It seems not. Yet if a single person suddenly fell to the ground and experienced intense suffering, people would and should rush to help. There is urgency for betterment in that case — and that urgency is inherent to extreme suffering while wholly absent in happiness. We would rightly send ambulances to relieve someone from extreme suffering, but not to elevate someone to extreme levels of happiness.

A similar consideration was crucial in my own moving away from the view that happiness and suffering are commensurable, more specifically, a consideration about the Abolitionist Project that David Pearce advocates. For if happiness and suffering are truly commensurable and carry the same ethical weight, this would mean that a completion of the Abolitionist Project — that is, the abolition of suffering in all sentient life — would not represent a significant change in the status of our moral obligations. We would then have just as great an obligation to keep on moving sentience toward greater heights. Yet this did not seem right to me at all. If we were to abolish suffering for good, we would, I think, have discharged our strongest moral obligations and be justified in breathing a deep sigh of relief.

Another reason in favor of the asymmetrical view is that it seems that the absence of a good is not bad in the same way that the absence of a bad is good. If a person were in deep sleep, experiencing nothing, rather than, say, having the experience of a lifetime, this cannot, I believe, be characterized as a catastrophe. It is in no way comparable to the difference between sleeping and being tortured, a difference that is a matter of catastrophe and great moral weight.

In contemplating any supposed symmetry between suffering and happiness, it seems worth considering whether there is any pleasure so great that it can justify just a single one of the atrocities that happen every day — a rape, for instance. Can the pleasure experienced by a rapist, if it is made great enough, possibly justify the suffering it imposes on the rape victim? Classical utilitarianism has it that if the pleasure is great enough for the rapist, the rape can in fact be justified, even normative. Negative utilitarianism pulls the brakes here, however. The level of pleasure experienced by the rapist is irrelevant: imposing such harm for the sake of pleasure cannot be justified.

This is of course not to say that there is not great value in happiness. Indeed, there is no contradiction in considering pleasure more valuable than nothing, and in considering increasing happiness to be valuable, while not ascribing urgency to it, and not considering it a moral obligation. This is my view: happiness is wonderful, but compared to the alleviation of extreme suffering, increasing happiness (of the already happy) seems secondary and morally frivolous — like a supererogatory act rather than a moral obligation. Counterintuitively, however, the urgency of alleviating extreme suffering does actually make promoting happiness and good physical and mental health an urgent obligation too, at least an instrumental one, as we must stay healthy and motivated if we are to effectively alleviate extreme suffering.

The Continuum of Suffering: Breaking Points or Not?

As my repeated mention of extreme suffering above hints, I do believe that there is at least one breaking point, and likely many, along the continuum of suffering, at which no amount of less bad experiences can be considered as bad as the suffering at that given point. For example, it seems obvious to me that no number of moments of tediousness can be of greater disvalue than a single instance of torture. One might argue that such a discrete jump seems “weird” and counterintuitive, yet I would argue that it shouldn’t. We see many such jumps in nature, from the energy levels of atoms to the breaking point of Hooke’s law: you can keep on stretching a spring, and the force with which it pulls will approximately be proportional to how far you stretch it — up to a point, the point where the spring snaps. I do not find it counterintuitive to say that gradually making the degree of suffering worse is like gradually stretching a spring: at some point, continuity breaks down, and our otherwise reasonably valid framework of description and measurement no longer applies.

Unfortunately, I do not have a detailed picture of where such points lie or, as mentioned, how many there might be. All I can say at this point is that I think this is an issue of the utmost importance to contemplate, discuss, and explore in greater depth in the future, and that much depends on how we view it.

Thus, it seems to me that the prevention of the most extreme forms of suffering — the prevention of the emergence of “dolortronium”, if you will — is our main moral obligation. In my view, this is where the greatest value in the world lies. I could be wrong, however.

The Relevance of Uncertainty — Doing What Seems Best Given Our Uncertainty

“When our reasons to do something are stronger than our reasons to do anything else, this act is what we have most reason to do, and may be what we should, ought to, or must do.”
— Derek Parfit, from the summary of the first chapter of “On What Matters”

It seems reasonable to maintain some uncertainty when it comes to our view of fundamental values. This again applies whether we are moral realists or subjectivists. In the case of moral realists, there is always the risk of being wrong about what is in fact valuable, while in the case of moral subjectivists, there is the risk of being wrong about what one actually cares about most deeply. This is not, however, to say that one knows nothing, or that one has no functional certainty about anything. For instance, while we may not be able to settle the details about value, we likely all agree and have great confidence in the claim that, all else equal, suffering is bad and worth preventing, and the more intense, the worse and more worth preventing it tends to be.

The interesting question is how to act given our moral uncertainty. Doing what seems most reasonable in light of all that we know seems, well, most reasonable. Yet, given uncertainty about fundamental values, what seems most reasonable is not to merely pick the single ethical theory or account of value that we find the most compelling and then to work out the implications and act based on that alone, however tempting and straightforward that may be. Rather, the most reasonable thing would be to weigh the plausibility of different accounts of value, including one’s preferred one, and to then work out the implications and act based on the collective palette of weighted values one gets from this process, however small one’s normative uncertainty may be.

And it is worth noting here how the distinction between absolute and relative uncertainty is highly relevant. For imagine that we know only of three different value theories, and we assign 5 percent credence to value theory A, 10 percent to theory B, and 15 percent to C. This is not the same situation as if we assign 10 percent to A, 20 percent to B, and 30 percent to C, although the relative weights between the theories are the same. In the former case, the possibility that we are fundamentally wrong about values is kept far more open than in the latter, and this has implications for how confident one should be, and for how many resources one should put into getting a better grasp of values compared to other things.
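As a toy illustration of this point (the theory names and credences are just the hypothetical numbers from the example above, not a claim about any actual theories), one can check that the two cases agree on the relative weights among the three theories while leaving very different amounts of probability mass for “some account of value we have not yet considered”:

```python
from fractions import Fraction


def relative_weights(credences):
    """Normalize credences so they sum to 1, giving the relative weights among the known theories."""
    total = sum(credences.values())
    return {theory: c / total for theory, c in credences.items()}


def residual(credences):
    """Probability mass left over for theories not yet considered."""
    return 1 - sum(credences.values())


# The two hypothetical credence assignments from the text.
case_1 = {"A": Fraction(5, 100), "B": Fraction(10, 100), "C": Fraction(15, 100)}
case_2 = {"A": Fraction(10, 100), "B": Fraction(20, 100), "C": Fraction(30, 100)}

# Relative weights among A, B, and C are identical in both cases...
assert relative_weights(case_1) == relative_weights(case_2)

# ...but the room left for being fundamentally wrong differs greatly:
assert residual(case_1) == Fraction(70, 100)  # 70 percent: all three theories may be wrong
assert residual(case_2) == Fraction(40, 100)  # only 40 percent in the second case
```

The point the sketch makes concrete: a decision procedure that only normalizes credences over the theories one has considered throws away exactly the information (the residual mass) that should govern how much to invest in further reflection.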

To relate this to my own view, I have relatively high confidence in the view that all value relates to consciousness ultimately — more than 90 percent — which is a high absolute credence, yet when it comes to the possibilities of consciousness, my own provincial knowledge of the space of possible states of mind forces me to admit that my view of the landscape of value — that is, the landscape of value found within consciousness — could be deeply flawed. Concerning this question, I have a considerable degree of uncertainty, yet relatively speaking, compared to other accounts of value I have come across, I still find my own view most compelling by far (and this should hardly be surprising, given that my current view already is a product of countless updates based on writings and discussions on ethics). However, the fact that I have changed my mind about value in significant ways over the last few years should also teach me to be humble and to admit that my present view could well be wrong.

How confident I should be in my best estimate concerning what the landscape of value in consciousness looks like is hard to say — 70 percent? 5 percent? For the fact that the landscape of possible experiences lies mostly unexplored before me does not invalidate the limited knowledge of the landscape that I do have, and the reasoning about it I have done, and this knowledge and reasoning do provide what appears to me a decent basis for my current best estimate. I should probably be humble and keep on reflecting, yet at the same time it does not seem like my large uncertainty, in itself, should cause me to change my current estimate — after all, my view might be wrong along all axes, in both positive and negative directions. I might have too negative a view of value, or I might not. If anything, my uncertainty calls for deeper exploration and reflection.

The Views of Others

There are other people in the world than ourselves, and many of them have thought a lot about the subject of value as well, which makes it seem worth paying attention to their views and updating based on them. After all, why should we be more correct than others when it comes to what is valuable? Why give our own perspective a privileged position compared to those of other conscious minds that also experience value? Or phrased in subjectivist-friendly terms: if others upon reflection have found that they value something different from what we value ourselves, might we not in fact also value that upon reflection, at least to a greater degree than we thought?

After all, in dealing with ethics and values, what many of us think matters is not our own view of the perspectives of others, but those perspectives themselves. It therefore makes good sense to listen to those perspectives. And if they report something radically different from what we believe about their perspectives and what they find valuable, who are we to claim we know better on their behalf, about their perspective, which they know intimately and we don’t? Is that not just yet another instance of the “vulgar pride of intellectuals”?

I think this is a valid point, and in my case, this consideration should arguably move my view in a less negative direction, and it probably has, although it is not entirely clear how much it should move me. After all, I do not think my view is that contrary to what others report. Again, my view is not that happiness is not of great value, but rather that it cannot outweigh extreme suffering, and I have yet to encounter a convincing case against this view (and not many have tried to make such a case, it seems). Another reason I should perhaps not move so much is that some of the most influential traditions in Asia — the continent where the majority of the human population lives — such as Buddhism and Jainism, seem to share my negative, i.e. suffering-focused, values. The fact that paying close attention to consciousness is a central part of these traditions, and the fact that optimism bias is strong in most humans and seems likely to influence our evaluation of what is valuable, could well imply that I should resist the tug from the views of the more “positive”, predominantly Western thinkers. More than that, the fact that I do not like the negative view, and very much wish that the magnitude and moral status of negative value were not incommensurably greater than that of positive value, also suggests that if I have a bias in any direction, it is probably away from the negative (for the same reason, I’m likely also strongly biased against moral realism being true, in that I wish that no continuum of truly disvaluable states could exist in the world; unfortunately, I find that there is too much evidence to the contrary).

Yet all this being said, I could be wrong, and I wish we had much more discussion on these matters against which we could sharpen our views. I should also note that there are, of course, other disagreements about value besides the relative significance of positive and negative value, for instance concerning egalitarianism, prioritarianism, and the value of rules. In my view, these things are all ultimately instrumental to how the dynamics of consciousness play out, as opposed to being inherently valuable, which is not to say that these views are not important to discuss, or that they do not contribute much wisdom. I think they do, and I maintain some, although admittedly very small, uncertainty about whether they are intrinsically valuable.

Biasing Intuitions

In trying to be reasonable, it is always worth being aware of one’s own biases. And when it comes to thinking about values, we have many biases that are likely to influence our views and what we say about them. We are social primates adapted to survive and propagate our genes, which means that we have moral intuitions that have been built to accomplish this task efficiently — not to help us land on deep truths about value.

This can influence us in countless ways. For example, as mentioned above, one could argue that the only reason we view death as bad is that it was costly for our group’s survival, and hence for our genes, to lose anyone in our group (although I would argue that there are indeed good reasons to consider death bad, even if we disregard our immediate feelings about it).

In the case of my own negative view of value, I might be biased in that I’m an organism evolved to signal that I am sympathetic and compassionate, someone who will protect you if you are in pain and trouble. Negative utilitarianism, a skeptical person might claim, is merely an attempt to signal “I’m more ethical than you”, and ultimately, I’m just another horny organism that makes elaborate sounds in order to get satisfied.

I certainly don’t deny the latter proposition, and it is worth being mindful of such pitfalls, even when they seem unlikely to be influential, and when there seem to be many reasons that count against them. For instance, do equally strong, or perhaps stronger, biases not exist in the opposite direction as well? Isn’t a willingness to accept suffering for the sake of some positive gain generally much more attractive in a male primate? Does negative utilitarianism really make you appear cooler than the classical utilitarian who prioritizes working toward a future full of happy life above all else? (Although it should be noted that a future full of life is, at least according to David Pearce, not necessarily incompatible with negative utilitarianism.)

More generally, might our uniquely wired brains lead us to value something that is not valuable at all, or perhaps only of puny value, compared to what might be found outside a biological perspective, or at least outside the perspective of what one species of primate can remember, in the present moment, from less than a single lifetime? It is certainly worth pondering.

It is hard to appreciate just how strongly our thinking is shaped by our crude, survival-driven intuitions, many of which contradict each other much of the time. When thinking about values, it is worth being intensely skeptical of these intuitions, and mindful of the ways in which our narrow perspective might be misguided in general.

Updated View of Value — What Follows?

An updated, weighted view of value leads us, however slightly, toward favoring causes and actions that are robust across many different views of value, as opposed to only under our single (immediately) most favored one. What these causes and actions are, and how to assess this, is largely an open question that of course depends on what our palette of weighted values ends up being. Yet a good example of a cause that seems strongly supported on almost all accounts of value is the Abolitionist Project proposed by David Pearce: to move all sentient beings above hedonic zero by making sentience entirely animated by gradients of purely positive experiences.

Regardless of which palette of weighted values we end up with, however, the continued effort to gain more clarity about the composition of this palette remains an ever-relevant task. As mentioned above, and as I will argue below, encouraging such an effort is of utmost importance.


We will always perceive the world from a limited perspective that contains limited information. When we are talking about the most value-significant events that can emerge in the world (notionally, “dolortronium” and “utilitronium”), I maintain that none of us has a good idea of what we are talking about. What to do in light of this ignorance, apart from maintaining some degree of humility, is not clear. All we have to go by is our limited information.

What does seem clear, however, is that continued reflection on fundamental values is important. Indeed, given the importance and difficulty of getting fundamental values as right as possible, it seems that seeding a future in which we continually reflect self-critically on fundamental values and how to act on these is among the best things we can do.

Again, this applies to moral subjectivists too, who will also benefit from reflecting on what others have found through their own serious reflection. They might even take the position that they care about what others care about, in which case the importance of ensuring that these others, such as beings of the future, find out what they care about is self-evident.

In conclusion, kindling a research project on fundamental values and widespread, qualified discussion about it — more generally: moving us in the direction of a more reflective future — should be a main priority for anyone who wants to “improve the world”. We need to be much more focused on the “what” question in the future.