Thursday, August 30, 2018

New Old Blog


I moved my blog to magnusvinding.com in August 2017.

Much of the content on this old blogspot.com page is outdated and does not necessarily reflect my current views.

Tuesday, July 18, 2017

Response to a Conversation on “Intelligence”




I think much confusion is caused by a lack of clarity about the meaning of the word “intelligence”, and not least a lack of clarity about the nature of the thing(s) we refer to by this word. This is especially true when it comes to discussions of artificial intelligence (AI) and the risks it may pose. A recently published conversation between Tobias Baumann (blue text) and Lukas Gloor (orange text) contains a lot of relevant considerations on this issue, along with some discussion of my views on it, which makes me feel compelled to respond.


The statement that gave rise to the conversation was apparently this:


> Intelligence is the only advantage we have over lions.


My view is that this is a simplistic claim. First, I take it that “intelligence” here means cognitive abilities. But cognitive abilities alone — a competent head on a body without legs or arms — will not allow one to escape from lions; they will only enable one to think of, and regret the absence of, all the many useful “non-cognitive” tools one would have liked to have. The sense in which humans have an advantage over other animals, in terms of what has enabled us to take over the world for better or worse, is that we have a unique set of tools — upright walking, vocal cords, hands with fine motor skills, and a brain that can acquire culture. This set of tools has enabled us, over time, to build culture, and with culture we have developed further tools that give us an advantage over lions, mostly in the sense of not needing to get near them, as that could easily prove fatal even given our current level of cultural sophistication and “intelligence”.


I could hardly disagree more with the statement that “the reason we humans rule the earth is our big brain”. To the extent we do rule the Earth, there are many reasons, and the brain is just part of the story, and quite a modest one relative to what it gets credit for (which is often all of it). I think Jacob Bronowski’s The Ascent of Man is worth reading for a more nuanced and accurate picture of humanity’s ascent to power than the “it’s all due to the big brain” one.


> There is a pretty big threshold effect here between lions (and chimpanzees) and humans, where with a given threshold of intelligence, you're also able to reap all the benefits from culture. (There might be an analogous threshold for self-improvement FOOM benefits.)


The question is what “threshold of intelligence” means in this context. Not all humans reap the same benefits from culture — some have traits and abilities that enable them to reap far more benefits than others. And many of these traits have nothing to do with “intelligence” in any usual sense. Good looks, for instance. Or a sexy voice.


And the same holds true for cognitive abilities in particular: it is more nuanced than what measurement along a single axis can capture. For instance, some people are mathematical geniuses, yet socially inept. There are many axes along which we can measure abilities, and what allows us to build culture is all these many abilities put together. Again, it is not, I maintain, a single “special intelligence thing”, although we often talk as though it were.


For this reason, I do not believe such a FOOM threshold along a single axis makes much sense. Rather, we see progress along many axes, and when certain thresholds are crossed along these axes, we are able to expand our abilities in new ways. For example, at the cultural level, progress beyond a certain threshold in the production of good materials may lead to progress in our ability to harvest energy, which in turn leads to better knowledge and materials, and so on. It is a more complicated story with countless little specialized steps and cogs. As far as I can tell, this is the recurrent story of how progress happens, at every level: from biological cells to human civilization.


> Magnus Vinding seems to think that because humans do all the cool stuff "only because of tools," innate intelligence differences are not very consequential.


I would like to see a quote that supports this statement. It is accurate to say that I think we do “all the cool stuff only because of tools”, because I think we do everything because of tools. That is, I do not think of that which we call “intelligence” as anything but the product of a lot of tools. I think it’s tools all the way down, if you will. I suppose I could even be considered an “intelligence eliminativist”, in that I think there is just a bunch of hacks; no “special intelligence thing” to be found anywhere. RNA is a tool, which has built another tool, DNA, which, among other things, has built many different brain structures, which are all tools. And so forth. It seems to me that the opposite position with respect to “intelligence” — what may be called “intelligence reification” — is the core basis of many worries about artificial intelligence take-offs.


It is not correct, however, that I think that “innate differences in intelligence [which I assume refers to IQ, not general goal-achieving ability] are not very consequential”. They are clearly consequential in many contexts. Yet IQ is far from being an exhaustive measure of all cognitive abilities (although it sure does say a lot), and cognitive abilities are far from being all that enables us to achieve the wide variety of goals we are able to achieve. They are merely one integral subset of tools among many others.


> This seems wrong to me [MV: also to me], and among other things we can observe that e.g. von Neumann’s accomplishments were so much greater than the accomplishments that would be possible with an average human brain.


I wrote a section on von Neumann in my Reflections on Intelligence, which I will refer readers to. I will just stress, again, that I believe thinking of “accomplishments” and “intelligence” along a single axis is counterproductive. John von Neumann was no doubt a mathematical genius of the highest rank. Yet with respect to the goal of world domination in particular, which is what we seem especially concerned about in this context, putting von Neumann in charge hardly seems a recipe for success, but rather the opposite. As he reportedly said:
> “If you say why not bomb them tomorrow, I say why not today? If you say today at five o'clock, I say why not one o'clock?”
To me, these do not seem to be the words of a brain optimized for taking over the world. If we want to look at such a brain, we should, by all appearances, rather peer into the skull of Putin or Trump (if it is indeed mainly their brain, rather than their looks, or perhaps a combination of many things, that brought them into power).


> One might argue that empirical evidence confirms the existence of a meaningful single measure of intelligence in the human case. I agree with this, but I think it's a collection of modules that happen to correlate in humans for some reason that I don't yet understand.


I think a good analogy is a country’s GDP. It’s a single, highly informative measure, yet a nation’s GDP is a function of countless things. This measure predicts a lot, too. Yet it clearly also leaves out a lot of information. More than that, we do not seem to fear that the GDP of a country (or a city, or indeed the whole world) will suddenly explode once it reaches a certain level. But why? (For the record, I think global GDP is a far better measure of a randomly selected human’s ability to achieve a wide variety of goals [of the kind we care about] than said person’s IQ is.)
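To make the analogy concrete, consider a toy sketch (all component names, numbers, and weights below are made up for illustration, nothing here is from the conversation): a single composite index computed from many component abilities can be informative and predictive while still discarding the underlying profile, so that two very different profiles collapse to the same score.

```python
# Toy illustration (hypothetical numbers): a single composite index,
# like IQ or GDP, summarizes many components yet discards the profile.

def composite(profile, weights):
    """Weighted sum of component scores: a single-axis summary."""
    return sum(profile[k] * weights[k] for k in weights)

weights = {"math": 0.25, "verbal": 0.25, "social": 0.25, "motor": 0.25}

# Two very different ability profiles...
mathematician = {"math": 160, "verbal": 120, "social": 70, "motor": 90}
socialite     = {"math": 90,  "verbal": 120, "social": 160, "motor": 70}

# ...collapse to the same composite score, losing the axis information.
print(composite(mathematician, weights))  # 110.0
print(composite(socialite, weights))      # 110.0
```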


> The "threshold" between chimps and humans just reflects the fact that all the tools, knowledge, etc. was tailored to humans (or maybe tailored to individuals with superior cognitive ability).
> So there's a possible world full of lion-tailored tools where the lions are beating our asses all day?


Depending on the meaning of “lion-tailored tool”, it seems to me the answer could well be “yes”. In terms of the history of our evolution, for instance, it could well be that a lion tool in the form of, say, powerful armor could have meant that humans were killed by lions in large numbers rather than the other way around.


> Further down you acknowledge that the difference is "or maybe tailored to individuals with superior cognitive ability" – but what would it mean for a tool to be tailored to inferior cognitive ability? The whole point of cognitive ability is to be good at making the most out of tool-shaped parts of the environment.


I think the term “inferior cognitive ability” again overlooks that there are many dimensions along which we can measure cognitive abilities. Once again, take the mathematical genius who has bad social skills. How to best make tools — ranging from apps to statements to say to oneself — that improve the capabilities of such an individual seems likely to be different in significant ways from how to best make tools for someone who is, say, socially gifted and mathematically inept.


> Magnus takes the human vs. chimp analogy to mean that intelligence is largely “in the (tool-and-culture-rich) environment”.


I would delete the word “intelligence” and instead say that the ability to achieve goals is a product of a large set of tools, of which, in our case, the human brain is a necessary but, for virtually all of our purposes, insufficient subset.
Also, chimps display superior cognitive abilities to humans in some respects, so saying that humans are more “intelligent” than chimps, period, is, I think, misleading. The same holds true of our usual use of the word “intelligence” in general, in my view.


> My view implies that quick AI takeover becomes more likely as society advances technologically. Intelligence would not be in the tools, but tools amplify how far you can get by being more intelligent than the competition (this might be mostly semantics, though).


First, it should be noted that “intelligence” here seems to mean “cognitive abilities”, not “the ability to achieve goals”. This distinction must be stressed. Second, as hinted above, I think the dichotomy between “intelligence” (i.e. cognitive ability) on the one hand and “tools” on the other is deeply problematic: in what sense are cognitive abilities not tools? (And by “cognitive abilities” I also mean the abilities of computer software.) And I think the causal arrows between the different tools that determine how things unfold are far more mutual than they are according to the story that “intelligence” (some subset of cognitive tools) is that which will control all other tools.
Third, for reasons alluded to above, I think the meaning of “being more intelligent than the competition” stands in need of clarification. It is far from obvious to me what it means. More cognitively able, presumably, but in what ways? What kinds of cognitive abilities are most relevant for taking over the world? And how are they likely to be created? These seem to me relevant questions to clarify.


Some reasons not to think that “quick AI takeover becomes more likely as society advances technologically”: first, the more technologically advanced society is, the more capable other agents would be, and hence the closer they would be to the notional limits of “capabilities”; second, there would be more technology, mastered by others, that an AI system would need to learn and master in order to take over; and third, a more technologically advanced society may know more about the limits and risks of technology, including AI, and hence more about what to expect and how to counter it.

Friday, April 21, 2017

New Book: 'You Are Them'




What follows if we reject belief in any kind of non-physical soul and instead fully embrace what we know about the world? The main implication, this book argues, is a naturalization of personal identity and ethics. A radically different way of thinking about ourselves.

“A precondition of rational behaviour is a basic understanding of the nature of oneself and the world. Any fusion of ethical and decision-theoretic rationality into a seamless package runs counter to some of our deepest intuitions. But "You Are Them" makes a powerful case. Magnus Vinding's best book to date. Highly recommended.”
— David Pearce, co-founder of The Neuroethics Foundation, co-founder of World Transhumanist Association / Humanity+, and author of The Hedonistic Imperative and The Anti-Speciesist Revolution.

Free download:
https://www.smashwords.com/books/view/719903

Friday, September 23, 2016

The Tree of Priorities: A (Cause) Prioritization Framework


Imagine a couple that tries to make a decision about how to set the table at their wedding. They spend all their time trying to work out this difficult decision, making lists and drawings, and asking Google and friends for advice. Yet underneath their efforts pertaining to the wedding table, a deeper doubt is lingering in their minds: whether they really want to marry each other in the first place. Unfortunately, they have not spent sufficient time contemplating this more fundamental question, and yet what occupies their attention is still the wedding table.

This is clearly unreasonable. Whether it makes sense to spend time on setting the wedding table depends on whether the wedding is sensible in the first place, and therefore the latter is clearly the most important question to contemplate and answer first. Two weeks after the wedding, a divorce is filed. It was all a waste, one that deeper reflection could have prevented.

This example may seem a little weird, yet I think it captures, to a striking extent, what most of us do most of the time. We all spend significant amounts of energy planning and executing ill-considered “weddings”. Rather than considering the most important and fundamental questions, we get caught up in specific tasks that happen to feel important or interesting.

This is hardly a great mystery when considered from an evolutionary perspective: doing whatever felt most interesting at any given time probably made a lot of sense in our ancestral environment, and no doubt still does much of the time — ignoring every moderately interesting thing that jumps into consciousness is not a recipe for success in today’s world either. The key, of course, is balance. Yet I believe we are out of balance for the most part, unfortunately. It is too rare for us to be guided by considerations about which objectives are most reasonable to pursue, and we too rarely see the importance of thinking hierarchically about our priorities.

For that is arguably the point the example above illustrates: we should contemplate the fundamental questions and decisions before we move on to the more specific ones, since the answers to the fundamental questions largely determine which specific tasks are worth pursuing. In short, the specifics are contingent on the fundamentals. And this has significant implications: we need to pay much more attention to the fundamentals.

This is what the “tree of priorities” illustrated below is all about. It is a framework for making decisions that emphasizes first things first, while highlighting that “first things first” is best thought of in hierarchical terms.

At the bottom of this tree we have our fundamental values upon which everything else rests and depends — the root and stem of the tree, one could say. From this, something slightly more specific follows, namely the causes that are worth pursuing given our values — the branches of the tree. Finally, on these branches, we find something more specific still, namely interventions that enable us to attain success in the respective cause areas — the leaves of the tree, if you will.

An illustration of this tree might look like this (there can obviously be any number of causes):
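As a minimal sketch, with hypothetical placeholder causes and interventions standing in for real ones, the tree can be represented as nested data and walked from the fundamentals outward:

```python
# A minimal sketch of the tree of priorities as nested data.
# The example causes and interventions are hypothetical placeholders,
# not recommendations.

tree_of_priorities = {
    "fundamental values: what matters?": {                   # root and stem
        "cause: e.g. reducing extreme poverty": {            # a branch
            "intervention: e.g. direct cash transfers": {},  # leaves
            "intervention: e.g. malaria prevention": {},
        },
        "cause: e.g. reducing non-human suffering": {        # another branch
            "intervention: e.g. advocacy": {},
        },
    },
}

def first_things_first(tree, depth=0):
    """Walk the tree top-down: values before causes before interventions."""
    for node, children in tree.items():
        print("    " * depth + node)
        first_things_first(children, depth + 1)

first_things_first(tree_of_priorities)
```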

So, at the most general level, this “tree of priorities” asks us to consider three questions, in the following order:

    1) What are our fundamental values? (In other words: what matters?)

    2) Which causes should we pursue given our fundamental values?

    3) Which interventions should we pursue within our specific causes?

I think this is a valuable set of questions, not least due to their ordering: it is clear that our answers to question 3) depend on our answers to question 2), which in turn depend on our answers to question 1).

Hence, the tree of priorities suggests an idea that does not seem to be shared by many: that contemplating our fundamental values should be our first priority. I think this is largely correct, at least if we do not already have a thoroughly considered answer in place.

Our fundamental values can be thought of as the point of departure that determines our forward direction, and if we take off in just a slightly sub-optimal direction and keep on moving, we might well end up far away from where we should ideally have gone. In other words, being a little wrong about the fundamentals can result in being extremely wrong at the level of the specifics, and hence it is worth spending considerable resources on clarifying and refining the fundamentals.

Wednesday, September 7, 2016

Cause Prioritization



“Cause prioritization is the most effective use of altruistic resources.”

— Paul Christiano


People who want to improve the world are, like everybody else, extremely biased. A prime example is that we tend to work on whatever cause we happen to have stumbled upon, and to suppose, without deeper examination, that this cause is the most important one. This cannot be safely assumed, however.

Here’s what a typical path of “cause updating” might look like: We find out that thousands of people die every single day due to extreme poverty, and find that to be the most important cause to work on. Then we realize that humanity torments and kills billions of non-human beings every year, and that discrimination against these beings cannot be justified, which might then prompt us to focus on ending this moral catastrophe. Then we are told about the suffering of wild animals, its enormous scope, and why we ignore it, and then we might (also) work on that. Then we are convinced by arguments about the enormous importance of the far future, and then that becomes our main focus. And so on.

To be sure, such an evolutionary progression is helpful and even laudable. The question is just whether we can optimize it. Might we be able to undertake this process of updating in a more direct and systematic fashion? After all, having undergone a continual process of updating that has made us realize that we were wrong about, and perhaps even completely unaware of, the most pressing causes in the past, it seems reasonable to assume that we are likely still wrong in significant ways today. We should be open to the possibility that the cause we are currently working on is not the most pressing one.

Cause prioritization is the direct and systematic attempt to become better informed about which causes are worth prioritizing the most. The importance of such a deliberate effort should be apparent: working on the causes through which we can have the best impact is obviously of great importance — it means that we can potentially help many more sentient beings — and in order to identify those causes, deliberate exploration seems significantly more efficient than expecting to stumble upon them by chance. Rather than optimizing specific tasks that further a given cause, cause prioritization goes a step meta and asks: given our values, what causes are most important to focus on in the first place?

I hope to explore this question in future essays. I wish to provide a rough framework for how we can think about cause prioritization, and based on this, I will try to point to important causes and questions that I think we should focus on and explore further.


Thursday, August 4, 2016

New Book: 'Reflections on Intelligence'



A lot of people are talking about “superintelligent AI” these days. But what are they talking about? Indeed, what is “intelligence” in the first place? I think this is a timely question, as it is generally left unanswered, even unasked, in discussions about the perils and promises of artificial intelligence, which tends to make these discussions more confusing than enlightening. More clarity and skepticism about this term “intelligence” is desperately needed. Hence this book.

Free Download:

Sunday, May 24, 2015

A Short Critique of 'The Effective Altruism Handbook'




Today I read the recently published Effective Altruism Handbook. I had been looking forward to reading it, hoping to read something that I would both agree with and learn from. Unfortunately, the main lesson I learned was that there is a big problem with the effective altruism movement in its current form.

The problem is actually well exemplified by my personal experience with donating based on GiveWell’s recommendations. I came upon GiveWell and their work about two years ago, and this encounter prompted me to immediately redirect my donations to their three top recommended charities at the time, namely Against Malaria Foundation, Deworm the World Initiative and GiveDirectly.

This made the best sense ethically. Or so I thought. For about a year later, I got an email update from GiveDirectly informing me of what the money I had donated was being spent on: a plurality went to “livestock.” Having just finished writing the essay Why “Happy Meat” Is Always Wrong at the time, I felt that my position on this matter was quite thoroughly considered, and the conclusion was clear: I could not continue supporting GiveDirectly, so I cancelled my donations to them.

One might object that my cancellation was unfair. After all, the goal of GiveDirectly is poverty reduction, not anti-speciesism, so can we not give them a break? The answer is no, and the reason why is captured perfectly in the following nine words from Peter Singer’s piece on speciesism in the EA Handbook: “'speciesism,' by analogy with racism, must also be condemned.”

Unfortunately, my reading of the EA Handbook made it clear to me that this indeed is a big problem in the EA movement today: it is profoundly speciesist. What else can one call it when its evaluations of success and effectiveness almost always focus exclusively on one species, Homo sapiens?

Given the ubiquity of speciesism in our world today, this should perhaps not come as a big surprise, yet the EA movement really should do better. After all, the EA Handbook itself contains a chapter on speciesism that soundly argues for its rejection. Yet the book, including that very chapter, completely fails to make explicit the most basic implications of such a rejection, even though the main implications of rejecting speciesism could, in my view, have been stated fairly briefly: endorse veganism, end the property status of non-human animals, and take the suffering of non-human beings in nature seriously.
I have tried to elaborate on all these points in my recent book on the subject, yet the following conveys some of my reasoning in brief:

1) We do not find it justifiable to buy products that result from deliberately enslaving and killing humans, so upon rejecting speciesism, we should not find it justifiable to buy products that result from deliberately enslaving and killing non-human beings.

2) We rightly reject the property status of human individuals, no matter what cognitive abilities they may have, and so upon rejecting speciesism, we should also reject the property status of non-human individuals.

3) We do not disregard human beings just because they find themselves in “nature” or otherwise outside of any human society, and upon rejecting speciesism, we cannot disregard non-human beings on those grounds either.

These are all rather relevant points, and the failure to include any of these in the EA Handbook must be considered a serious omission, especially when one considers the enormous numbers of individuals involved. As Luke Muehlhauser recognizes in one of his two chapters in the book, the vast majority of sentient beings on the planet are of non-human rather than human kind, and the vast majority of these, more than 99.9 percent, live in nature. Who speaks for them? Fortunately, there is a growing number of people in the wider effective altruism community who do, and thankfully, Muehlhauser mentions two of the most vocal such advocates: David Pearce and Brian Tomasik.

Hopefully, the EA movement will keep on advancing and eventually live up to its dedication to the well-being of all sentient beings. Unfortunately, today, the bulk of the movement appears to greatly underestimate the moral importance of non-human beings and our strong reasons to help them.