Ives Parr is interested in the effects of genetic enhancement of children, which does seem likely to be a thing pretty soon, since the technology seems to be available, at least some parents clearly want to do it, and some jurisdiction or other in the world (Thailand? Singapore? The Philippines?) seems likely to allow it. That’s enough for it to start happening.

I think Parr’s thoughts miss what seem to me the most interesting potential consequences:

1 – As people get smarter, the way they think may change.

My personal observation has been that people with highish IQs (~115 to 135) are more leftist than most. It’s not clear to me if the same is true of people with extremely high IQs (> 140).

Given that leftist societies tend to collapse, I wonder about the social consequences of rising median population IQs.

(I may be just conflating education effects with intelligence effects, in which case never mind.)

2 – More important, this is a first step down a path of recursive genetic modification. We’ve some idea of how we’d change our children, if we could. We have less idea of how those children – different from us – will choose to change their own children, etc. down the generations. The path seems unpredictable, potentially chaotic, and may lead to extinction.

I don’t think we have any good reason to think that after 5 or 10 generations of such changes, the result will look anything at all like present humans.

(This is similar to the “AI explosion” recursive improvement argument.)

Forget Musk’s efforts to save the human race, to transition the world from carbon fuels, and all his other projects. And forget the Gates Foundation’s attempts to end malaria. And Andrew Carnegie’s libraries. Forget the philanthropic projects of the wealthy, and whether those projects are driven by ego or by love of mankind. Put all that aside.

Our ancestors lived in caves, infested by parasites, chased by predators, constantly on the edge of starvation. Today we have nice things like indoor toilets and medicine. Electric light, refrigerated food, airliners, the Internet. We didn’t steal that wealth from other cavemen or from space aliens. Wealth isn’t a zero-sum game.

People created those technologies, that wealth. Out of plants and animals, dirt and air, and their own cleverness and work. Who did that? All of us, yes, but a few made vastly larger contributions than others.

Our society is wealthy because of Boulton’s engines, Carnegie’s mills, Vanderbilt’s railroads, Edison’s lights, Gates’ software, and Musk’s cars and rockets. Most of us have always plowed our farms, woven our cloth, done our jobs. And mostly broken even – fed ourselves, raised our children, helped our neighbors survive…and created very little that was new.

But some people are better at creating wealth than others. Just as an Albert Einstein, a Tiger Woods, or a William Shakespeare is rare, there are a few rare people who are vastly – incredibly – better at creating wealth than almost everyone else. Today we call them “billionaires”.

They may not be better than most of us at physics, or golf, or literature, or in any other way, but they have a rare talent for creating wealth. Billionaires’ money (when honestly earned; I exclude crony capitalists and kleptocrats) mostly reflects value created. Value that benefits us all.

Earning a billion dollars is really difficult. See how many try, and how few succeed.

And the living standard at $100 million is virtually identical to that of $100 billion. Most rational people retire when they have enough – long before billionaire status. We are very lucky that a few of these astoundingly productive and capable people keep working – keep chasing dreams, keep creating wealth – long after their personal material needs are satisfied. They made our world, and will make our future.

Sure, Musk makes us look bad. But only in the sense that Mahatma Gandhi does. Nobody should feel jealous of Shakespeare’s writing, Edison’s inventiveness, Einstein’s discoveries. Nor should we resent them for their talent and success. Au contraire; we should celebrate them.

[adapted from a comment on https://fakenous.substack.com/p/elon-musk-is-better-than-you]

What democracy is for

August 23rd, 2023

Democracy is popular, despite leading to public policy that doesn’t generally seem to be better (or worse) than that produced by other systems.

As is well known, pure democracy (majoritarianism) leads to tyranny at least as often as other forms of government. In a pure democracy, 51% of voters can torture and kill the other 49% of the population. That’s why every even moderately successful democracy has things like constitutions and “bills of rights” – there are many things even majorities should not be allowed to do, and these are necessary constraints. Some advocates of democracy don’t seem to understand that “human rights” and “democracy” are in tension – rights are things that even majorities may not infringe.

Regardless of the system of government, constitutions, or formal rights, sufficiently large majorities always get whatever they want. Because a sufficiently large majority will always win a civil war.

This unfortunate fact leads to the one really unarguable benefit of democracy – it provides a way for large majorities to get what they want peacefully via elections instead of via bloody civil war. If they’re such a large majority that they’re going to win anyway, far better for them to win peacefully.

Other than that (not inconsiderable!) benefit, I’m not sure there’s anything very good about democracy – it certainly hasn’t been shown to lead to wise governance, honest leaders, or respect for human rights.

There have been many proposals to limit or bias the franchise to improve democracy by giving extra weight to more-competent-than-average voters – for example extra votes for military service, avoidance of crime or debt, payment of taxes, marriage or child rearing, education, tests of intelligence, knowledge, or competence, etc. In the unlikely event of their adoption, these might improve the quality of elected officials and of legislation.

But if you take the point of view that democracy is mainly for keeping the peace, these attempts defeat that purpose – tax-paying university graduates with children and without criminal records are unlikely to start or participate in civil wars. Instead, there’s something to be said for limiting the franchise (or weighting votes) according to ability and propensity to make trouble. This is probably why, historically, only landowners and men were allowed to vote – penniless peasants and women didn’t make civil war very effectively. Nor children.

Never attribute to brilliance…

November 23rd, 2022

Never attribute to brilliance that which is adequately explained by dumb luck.

A corollary to Hanlon’s razor.

Steve Jobs and Elon Musk were and are brilliant – we know this because they did astounding things not once but multiple times (Apple, NeXT, Pixar, Apple again, and PayPal, Tesla, SpaceX…). That doesn’t happen by dumb luck – the world is not that large.
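
A back-of-envelope sketch of that “the world is not that large” point – every number below is invented purely for illustration:

```python
# Back-of-envelope: all numbers are invented for illustration only.
# If one astounding success were pure luck with probability p per career,
# how many people would we expect to fluke it three times in a row?
founders = 10_000_000        # assumed pool of people who seriously try
p_lucky_hit = 0.001          # assumed chance of one mega-success by luck alone
repeats = 3                  # e.g. Apple, NeXT/Pixar, Apple again

expected_flukes = founders * p_lucky_hit ** repeats
print(expected_flukes)       # ~0.01 -- essentially nobody flukes it thrice
```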

But that’s pretty rare.

A lot of other amazing success is due to dumb luck. Not all of it, but without strong evidence, assume dumb luck.

Update, June 2020:

After a few months, I’m starting to think my readers don’t “get” what I’m upset about here. So I’ll explain.

The notion that wealthy people, in the name of “fashion”, should dress up to look poor – literally, in clothes falling to rags – is disgusting.

It shows an utter lack of awareness of what poverty is. What hunger is. And the human misery they entail.

These scourges have ravaged mankind throughout history. Billions of our fellows lived lives of want, of hunger, of near or actual starvation. As a result they lived with disease, degradation, filth, pain, ignorance, superstition, and fear. This was the common condition of virtually everyone, across the world, for most of history. It was no fun.

It’s not “chic”. It’s not something to be admired or emulated. It’s something to celebrate that we’ve almost eliminated.

My niece, raised in a wealthy suburb of Boston, at age 12 had never heard the word “famine”, nor encountered the concept. She had no idea what it was, or that such things could exist. It had to be explained to her. Famine – mankind’s oldest enemy.

How is it possible for moderns to be so ignorant of history? Of the state of the world? Of the sources of suffering? Of the realities of nature?

How is it possible to think playacting as a sufferer is “fashion”?

Important and non-obvious things I’ve learned:

  1. If you have sufficiently good tactics, you don’t need strategy.
  2. Sufficiently frequent, deep, and thorough backups compensate for a multitude of sins.
  3. Everything is more complicated than it seems.

As far as I can tell these things are unrelated. I could be wrong.

Optimism is a duty

October 26th, 2017

I have never met a philosopher who had anything to say that wasn’t nonsense.

But I have read Karl Popper. He constitutes an existence proof that meaningful philosophy is possible.

The motto Popper seems to have most liked repeating was:

Optimism is a duty. The future is open. It is not predetermined. No one can predict it, except by chance. We all contribute to determining it by what we do. We are all equally responsible for its success.

This appears to be an expansion of Kant’s “optimism is a moral duty”. If I recall correctly, Popper first published this in 1945, in The Open Society and Its Enemies.

I’ve used that quote many times, in many places. It summarizes one of my own core moral values. But many people seem to be confused as to what it means.

It seems obvious to me, but Popper found people had the same problem. So he tried to explain.

In a 1992 speech, he said:

The possibilities lying within the future, both good and bad, are boundless. When I say, “Optimism is a duty”, this means not only that the future is open but that we all help to decide it through what we do. We are all jointly responsible for what is to come. So we all have a duty, instead of predicting something bad, to support the things that may lead to a better future.

(Emphasis is mine.)

Two years later, in The Myth of the Framework:

The possibilities that lie in the future are infinite. When I say ‘It is our duty to remain optimists,’ this includes not only the openness of the future but also that which all of us contribute to it by everything we do: we are responsible for what the future holds in store. Thus it is our duty, not to prophesy evil but, rather, to fight for a better world.

Joseph Agassi says Popper’s

… arguments for optimism were diverse. First and foremost, the world is beautiful. (“The propaganda for the myth that we live in an ugly world has succeeded. Open your eyes and see how beautiful the world is, and how lucky we are who are alive!”) Second, recent progress is astonishing, despite the Holocaust and similar profoundly regrettable catastrophes. The clinging to life that victims and survivors of the Holocaust displayed despite all horrors, he observed, stirs just admiration for them that bespeaks strong optimism. Most important, however, is the moral aspect of the matter: we do not know if we can help bring progress and it is incumbent on us to try. This is the imperative version of optimism.

Because the future is undetermined, because it depends on our actions, we – all of us who yet live – have a moral duty to try to make it a good future. And we can do that only with optimism – with the belief that a good future is possible.

The next time you’re tempted to say “everything is going to hell”, “we’re all doomed”, “it’s over now – the enemy has won” …think again.

We always have the opportunity to change things for the better. Nothing is decided in advance – the future is always subject to improvement. And only those with optimism will make the attempt.

 

Like many people, I long worried about the specter of technological unemployment – as machines get smarter and gradually can do the jobs that people do, will we reach a point where machines can do everything people can do?

If and when that happens, we may have a paradise of abundance – machines will make everything we want, without any people needing to work.

But at the same time, how will people get money to buy these things?

I no longer think that’s going to be a problem.

Last week’s Economist had an excellent report on “The return of the machinery question”, which examines the problem from a historical perspective. People, after all, have been thinking about this problem since the start of the industrial revolution.

Despite all the hand-wringing, the economy always seems to generate more jobs than automation displaces.

Now I understand why (others have understood for a long time).

I’ll explain with a simplified economic model.

LIMITED STUFF-MAKING ABILITY

At any given time, the world (or national) economy has the ability to produce a certain amount of stuff. Food, clothing, houses, cars, entertainment, etc. – all the things we humans want.

How much stuff we can produce at any given time depends on:

  • Labor: How many people exist (each has hands and a brain)
  • Capital: How many machines, factories, buildings, etc. we’ve accumulated. How much education people have received, etc.
  • Resources: How much raw materials we can easily get at – metals, chemicals, energy, land, etc.
  • Technology: The ways we know to do things and use the things we have.

Of course how well we use these things matters – we can run factories 24×7 or only 8 hours/day. We can have rules that make us waste time and materials, or incent people to be efficient. We can work long hours, or take lots of vacations.

And over time how much of these things we have changes – population can grow or shrink, resources can be discovered or run out, and we can discover new ways to do things that are better.

But, still, there are limits. At any given time, we can only produce some limited amount of stuff.
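
To make that concrete, here is a minimal toy sketch of the model in Python – my own illustration, with a made-up functional form and exponents, not anything from the post’s sources:

```python
# Toy "stuff-making" function: output depends on labor, capital, resources,
# and technology, scaled by how intensively we use them (utilization).
# The functional form and exponents are illustrative assumptions, not data.
def stuff_per_day(labor, capital, resources, technology, utilization=1.0):
    return utilization * technology * (labor ** 0.5) * (capital ** 0.3) * (resources ** 0.2)

base = stuff_per_day(labor=1_000, capital=500, resources=200, technology=1.0)
better_tech = stuff_per_day(labor=1_000, capital=500, resources=200, technology=2.0)
print(base, better_tech)   # better technology doubles output; nothing else changed
```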

LIMITED MONEY

Also at any given time, there is a (again, roughly) fixed amount of money in the world.

Sure, we can print more, but that doesn’t help. If we can make a billion stuffs each day, and there are a billion dollars of money, then the billion dollars will buy all the stuffs, so 1 stuff for each dollar.

(I said this was simplified.)

But if we print another billion dollars, that doesn’t help. Now there are 2 billion dollars, but still only 1 billion stuffs each day. So you need 2 dollars to buy each stuff.  (This is inflation.)
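
The same arithmetic as a one-line identity – a crude quantity-of-money sketch, assuming every dollar chases the day’s output exactly once:

```python
# Crude price-level identity: all the money buys all the stuff, once per day.
def price_per_stuff(money_supply, stuffs_per_day):
    return money_supply / stuffs_per_day

print(price_per_stuff(1_000_000_000, 1_000_000_000))  # 1.0 dollar per stuff
print(price_per_stuff(2_000_000_000, 1_000_000_000))  # 2.0 -- printing money just raised prices
```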

MONEY CAN ONLY BE SPENT ON PEOPLE

When someone buys something, they give money to some person.

Yes, you can buy a building, which is not a person. But to buy it, you give money to a person (who owned it).

The same is true with everything – whether you’re buying labor or capital or resources or technology – the money goes to people.

There’s no place else it can go, except to people.

NOW INTRODUCE ROBOTS

So, we can make a given amount of stuff, and the given amount of money will buy that stuff.

And money can only be spent on people.

Now we introduce lots of robots that can do most of the jobs that people do. The robots are cheaper to use than humans, so they get used instead of people.

So now there are people doing nothing, who used to be making stuff.

I just said “the robots are cheaper”. This is key.

Let’s simplify some more and assume we’re still making the same amount of stuff (actually we’ll be making more, which makes things better, but let’s ignore that for now).

So we’re making the same amount of stuff, but the robots are cheaper. That means somebody has extra money left over (probably whoever is using the robots, but that’s not important, as we shall see).

Same amount of stuff. Extra money left over. Which can only be spent on people.

The extra money will eventually get spent (that’s the only thing money is useful for – being spent).

If it’s spent on stuff made by robots, there is still extra money left over. Because robots don’t get paid, only people do.

Yes, robots cost money, but that money is paid to people – the people who make or own them.

So there is still extra money around. Held by people. They can spend it on more robot stuff as much as they like, but that doesn’t use up the money – it just moves it around to other people.

And since we’re already making as much stuff as before, that means it has to be spent on new stuff made by people, that wasn’t being made before. That’s the only place it can be spent.

So now we’re making more stuff than before. And that new stuff was made by people. And the only people available – who aren’t already busy making the old stuff – are the ones who lost their jobs to robots.

So those are the people who get hired to make the new stuff.

That’s it. That’s the whole enchilada.

That’s why despite 200 years of worries, technology has never caused mass unemployment.

Because it can’t. There is only so much money around at a given time. If money is saved by cheaper robots, that money gets spent on people.

(Yes, technology has caused, and will cause in the future, adjustment problems as people switch jobs. That’s different, and while not minor or trivial for the people involved, temporary.)
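
The whole argument can be written as a toy accounting sketch – my own illustration, assuming a fixed money supply, assuming all income is eventually spent on people, and using entirely invented numbers:

```python
# Toy accounting for one day of the economy. All figures invented.
MONEY_SUPPLY = 1_000            # dollars circulating each day (fixed)

# Before robots: making the old stuff absorbs all the money as human wages.
wages_old_stuff_humans = 1_000
surplus_before = MONEY_SUPPLY - wages_old_stuff_humans   # 0 -- everyone employed

# After robots: the same old stuff is made more cheaply. What is paid for the
# robots still goes to people (their makers and owners).
cost_old_stuff_robots = 600
surplus_after = MONEY_SUPPLY - cost_old_stuff_robots     # 400 dollars left over

# Money can only be spent on people, and the old stuff is already paid for,
# so the surplus must buy NEW stuff -- made by the displaced workers.
budget_for_new_human_made_stuff = surplus_after
print(budget_for_new_human_made_stuff)   # 400 dollars/day now hiring them back
```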

Finally – Since the robots are cheaper, prices for the stuff they make go down. Which means people can afford to buy more of that stuff.

Which means that people actually get wealthier, in terms of how much stuff they can afford to buy. (They don’t necessarily have more money – they might have more or less – but they can afford more stuff.)

They can’t buy more of the stuff made by people, but they can buy more of the stuff made by robots. Which means stuff made by people is more expensive – more valuable – than stuff made by robots.

Which means the wages of the people who make stuff have gone up, in terms of what the money you pay them will buy.

Up, not down.

Technology makes wages rise. Not decline. Rise.
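
A one-line version of that claim, with made-up numbers – real purchasing power is just the nominal wage divided by a price index:

```python
# Real wage = nominal wage / price of a basket of stuff. Numbers invented.
def real_wage(nominal_wage, price_index):
    return nominal_wage / price_index

print(real_wage(100, 1.00))   # 100.0 stuffs affordable before the robots
print(real_wage(100, 0.80))   # 125.0 -- robots cut prices, same pay buys more
```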

And that is why we have nice things today, like houses, and medicine, and air conditioning, and clean rivers, and airliners, and indoor toilets, and the Internet, that our ancestors with little technology didn’t have.

Experts in any field seem very unwilling to speculate, or even endorse speculation, about long-term developments in their field.

Whether it’s physics, or AI, or medicine, or politics, experts don’t want to talk about their ideas on long-term developments. In those rare cases where they do, they often use pseudonyms (many professional scientists have published science fiction, but only rarely under their real name).

I’m not sure why this is, but I’ve been collecting possible reasons:

1 – They have a lot to lose as experts if the speculations turn out wrong, and by their nature speculations are…speculative.

2 – They are very focused on immediate problems and progress. This is what they’re paid to do, and where they get their professional prestige.

3 – They are more keenly aware than non-experts of the many difficulties there will be in the actual implementation of speculative ideas. While they may know intellectually that these difficulties are not insurmountable in principle, as experts they’re overwhelmed by the amount of work yet to be done, and tend to assume it’ll never happen.

4 – Even if they think the speculations are reasonable and will turn out correct in the long run, because of #3 they fear losing professional respect within their field – other experts may be discouraged by the amount of work yet to be done, and so consider as “crazy” anyone who takes a longer-term view.

Supporting these ideas is the observation that those few experts who are willing to engage in speculation tend to be from the very top (Nobel laureates, etc.) or very bottom of their field.

Those, in other words, who are either so respected they don’t fear a loss of status, or who have no status to lose in the first place.

In a recent post on his blog Overcoming Bias, Robin Hanson notes:

we often have academics who visit for lunch and take the common academic stance of reluctance to state opinions which they can’t back up with academic evidence

Which provides an excuse, but doesn’t directly explain why they don’t want to. Hanson suggests:

One does not express serious opinions on topics not yet authorized by the proper prestigious people.

Or, as Stephen Diamond has suggested,

Long-term speculation is hard to falsify until its propounders are safely dead. I suspect this is the reason for reluctance: it may seem a cheap way to get acclaim without empirical responsibility or consequences.

I think that’s a charitable interpretation – I suspect Hanson is closer to the truth.

For a shock, read Francis Wayland’s The Elements of Moral Science (1835), “one of the most widely used and influential American textbooks of the nineteenth century”.

As Wayland – prior to Darwin’s theory of evolution – explained, conventional Christian morals were based on the idea that Man was made by God, and so had special moral responsibilities.

Darwin knocked that bucket over, and in the process broke the long-accepted rationales for all kinds of legal, moral, and ethical rules. The reverberations from that were still being felt at least into the 1970s, and included socialism, progressivism, communism, the sexual revolution (of the 1920s, not the 1960s one), fascism, bad art, ugly buildings, environmentalism, hippies, flower power, and more. Some of it was good, more of it was bad. Things didn’t really start to settle down until the 1980s in the US, the 1990s in Europe, and still aren’t settled in the Islamic world.

And there are plenty of people – all over the world – who still haven’t made peace with it.

In Asia there wasn’t as much commotion about Darwin because Asian societies tended to take their social rules from non-theistic sources (as the West does now, mostly); Darwin’s revelations didn’t invalidate them.

It is telling, I think, that East and West had more-or-less similar rules (and still do, post-Darwin), despite supposedly getting them from independent sources.

I think that shows the rules really came from social evolution, a la Friedrich Hayek (certain rules tend to make societies dominant). Ironic, no?