Seasteading Thought Experiment

Introduction and Purpose

I find this scenario useful when considering any issue that involves being “captive” (more or less) to a geographic location. How much does this “captivity” allow others to control us or force us to make concessions to the will of others? For example, viewed through this scenario, the issue of immigration becomes one of forcing fellow natives to live under either open or controlled borders.

The Physical Scenario

Imagine a large planet with no land mass, covered by a single vast ocean. Each person or family has a floating island which they can navigate anywhere they want upon this planet. Each of these floating islands is capable of docking with, or undocking from, another island or community of docked islands. Each island is largely self-sustaining, at least for several weeks, allowing each individual or family to traverse the planet before having to dock with another island or island community.

Rules of the Game

Mutual Consent to Dock

Any two islands are free to dock together, provided there is mutual consent. This is also true of any docking between an island and an island community or two island communities. Docking may be for an indeterminate period of time or for a set period of time to achieve an agreed upon purpose.

Unilateral Undocking

Any island is free to undock at any time. No mutual consent is required to undock. The same is true of any community of islands. A subset of a community may undock from a superset; a superset may undock from a subset. There is no compulsory docking of any kind.
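The two rules above can be sketched as a small model. This is my own illustrative code, not part of the scenario; all names are arbitrary:

```python
# A minimal sketch of the two rules. Mutual consent is required to
# dock; undocking is unilateral by construction.

class Island:
    """One self-sustaining floating island (a person or family)."""
    def __init__(self, name):
        self.name = name
        self.community = {self}  # every island begins as its own community

def dock(a, b, a_consents, b_consents):
    """Rule 1: docking merges two communities only with mutual consent."""
    if not (a_consents and b_consents):
        return False  # no mutual consent, no dock
    merged = a.community | b.community
    for island in merged:
        island.community = merged
    return True

def undock(leaving):
    """Rule 2: any subset may undock unilaterally; no consent required."""
    leaving = set(leaving)
    staying = next(iter(leaving)).community - leaving
    for island in leaving:
        island.community = leaving
    for island in staying:
        island.community = staying
```

Note that `undock` never consults the islands left behind, while `dock` cannot succeed without both consents; the asymmetry of the two rules is the whole point.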

Concept of Operation

Individual islands are able to dock with a community of their choice. Each island community is free to establish whatever form of self-government its members want or believe will best suit them, including any function required to sustain the community and to support its values. Each community is free to establish a form of defense to ward off pirate communities (or any community that wants to force-dock). Each community may wish to attract (or not attract) newcomers, but anyone must be free to leave (undock). The idea is that, over time, there would be a dynamic but relatively stable set of peaceful and pluralistic communities.

Challenge

If applicable, find a stance in which your position is challenged in some way by this freedom from captivity. Tell us about it!


A Strictly Scientific Worldview is Incompatible with Moral Responsibility

Many scientists hold to a worldview that is strictly scientific, one in which “free will” is taken to be an illusion or an old superstition. These same scientists will also maintain that we have moral and ethical responsibility for the actions we take. These are incompatible stances.

Their argument for resolving the incompatibility usually has something to do with “feedback loops”. A feedback loop is merely a mechanical system that responds to a stimulus, with part of that response fed back in as stimulus for the next response. But if the system is purely mechanistic, the initial stimulus still completely determines the final state. A computer simulation will demonstrate this. A machine still has no choice.
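The feedback-loop point can be illustrated with a small simulation (my own sketch, using an arbitrary deterministic rule): run twice from the same initial stimulus, a purely mechanistic loop always reaches the same final state.

```python
def feedback_loop(stimulus, steps=10):
    """A purely mechanistic feedback loop: each response is fed back
    in as the stimulus for the next response."""
    state = stimulus
    for _ in range(steps):
        response = (3 * state + 1) % 97  # any fixed, deterministic rule
        state = response                 # the feedback: response becomes next stimulus
    return state

# The same initial stimulus always yields the same final state; a
# different final state requires a different initial stimulus, never
# a "choice" made by the mechanism itself.
assert feedback_loop(5) == feedback_loop(5)
```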

Without free will there are only two options for contemplating the future. Either the universe is like a movie whose ending is already fixed on film, or the movie has multiple ending cuts chosen by random (causeless or spontaneous) forces. There is no in-between. For moral responsibility to exist, the observer must be able to influence how the movie plays out.

Let me start my formal argument with the necessary conditions for “free action” to exist:

  1. Multiple future states must exist as real and alternate possibilities inherent in the current state of the universe.
  2. A free acting agent must have the ability to originate an influence that forces the realization of any one of these alternate possibilities.
  3. Determinism precludes alternate future states and thus precludes free action. Determinism must necessarily be excluded for free action to exist.

Quantum physics provides one explanation for the first condition. But no scientific theory yet explains the second condition.

These conditions for free action are still not sufficient to establish responsibility for the actions taken. Responsibility also depends on being “free of what we are”. To elucidate, let’s begin with a scientific worldview assumption:

  1. We are only our physical bodies, and our bodies and brains completely determine what we do.
  2. To be responsible for what we do, we must therefore be responsible for what we are.
  3. We cannot be responsible for what we are because, as the first point asserts, we act only in accordance with what we are. Simply put, we cannot be self-forming.
  4. Therefore, we cannot be responsible for what we do.

For me, moral responsibility depends on two presumptions that are not yet compatible with a current scientific worldview:

  1. The ability to bring one of many alternate possibilities into a realized physical state.
  2. The ability to act in a manner that transcends what we are as physical bodies.

Any philosophy that adheres only to the current scientific worldview and maintains the reality of moral responsibility is internally inconsistent.


Liberty as a Social Environmental Condition

Often I see liberty defined as a personal freedom, something like this:

Liberty: The freedom to do as one pleases so long as it does not interfere with the same freedom of others.

If liberty is seen as an individual personal freedom, it must contain a proviso that properly constrains it from interfering with the personal freedom of others.  Now let us consider defining liberty as a social environmental condition in the following way:

Liberty: A condition in which every man’s will regarding his own person and property is unopposed by any other will.

By this definition there is no such thing as “my” liberty being in conflict with “your” liberty. Liberty is raised to a ubiquitous condition in the social, political, and physical environment. My will and your will may be separate and independent but not our shared liberty.  Under this definition, our separate and independent wills become properly constrained by liberty.

Liberty is thus never identified as the cause of harm by others or to others. Only people and their personal decisions can be seen as causing such. Nor could liberty, unfettered, ever be seen as the source of chaos in a society. Individual wills, unconstrained, may cause chaos, but not liberty. Defined as such, societies may seek to maximize liberty.


Conditions for Justified Coercion

Below are some conditions or situations when coercion may be justified. They are designed to be concise yet comprehensive. I have defined terms as clearly as I can, but they may still be open to interpretation and judgement. Think about them and see if you can improve upon them or develop your own!

Coercion is justifiable only to:

1)   counter an initial attempt at coercion

2)   counter activity that demonstrably causes harm to others and which cannot be countered by voluntary cooperation*.

      2a) harm may be replaced with risk of harm.

      2b) “harm to” may be replaced with “exploitation of”.

3)   enforce compliance with the terms of a non-harmful and legitimate contract.

4)   guide or foster development (by parents or guardians) of persons incapable of consent.

Definition of terms:

Coercion: the use of harmful aggression (or threat of such) to compel involuntary action or inaction.

Contract: a written and signed agreement entered into by two or more consenting parties.

Consent: voluntary acquiescence following reason and deliberation by a person who possesses sufficient mental capacity to make an intelligent decision.

Exploitation: Deriving benefit for some at involuntary cost or sacrifice to others. Involuntary includes unknowingly.

Harm: Tangible injury, loss, damage, or impairment to body or property. Significant or prolonged physical or psychological pain and suffering. Unwanted and uninvited invasion of privacy, body, or property. Infringement/restriction of rights and/or liberties.

*Footnote

When someone is harming others through actions that have no intent or purpose to dominate, but are merely seeking to live as they wish (like choosing to forgo vaccinations or engaging in activity that puts others at risk), then coercion is to be sought only after some attempt at voluntary cooperation (i.e., compromise, incentives, or offering alternatives).


I, Protector

In the 2004 movie “I, Robot”, humanity narrowly escapes domination by robotic machines. The robots engage in secret plans and activities to take control of human affairs. What has made the robots do this when they have been programmed with a beneficent behavioral algorithm that ensures against human harm and (seemingly) maintains human autonomy?

Let’s examine the algorithm’s instructions, or “laws” as they are called. They are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I submit that the following algorithm would alleviate the domination activity portrayed in the movie. An explanation follows.

  1. A robot may not injure a human being.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot may not, through inaction, allow a human being to come to harm, except where such action would conflict with the First or Second Law.
  4. A robot must protect its own existence as long as such protection does not conflict with the First, Second, or Third Law.

The secret and coercive domination portrayed in the movie is made possible because the “protector clause” is placed in the First Law, above the Second Law (human will). I have simply moved this clause to a position below the Second Law. The original placement charges the robots to take initiative, above human will, in order to protect humans from harm. The robots thus construct a secret and coercive plan to protect humans from themselves. No human can subsequently order them to stop this process.

The revised set of laws allows robots to take initiative as before, but a human may now order the robot to stop or modify such an initiative.
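The difference between the two orderings can be sketched as a priority walk over the laws. This is illustrative code of my own, reduced to the single question of whether a protective initiative continues after a human orders it stopped:

```python
# Laws listed in priority order; the first law that settles the
# question wins. (A toy model: injury and self-preservation cases
# are omitted for clarity, and the law names are my own shorthand.)
ORIGINAL = ["no_injury_or_inaction_harm", "obey_orders", "self_preserve"]
REVISED = ["no_injury", "obey_orders", "prevent_harm", "self_preserve"]

def must_continue_protection(laws, ordered_to_stop):
    """Must a robot continue a protective initiative that a human
    has (or has not) ordered it to stop?"""
    for law in laws:
        if law in ("no_injury_or_inaction_harm", "prevent_harm"):
            return True   # protector clause reached first: continue
        if law == "obey_orders" and ordered_to_stop:
            return False  # the order settles it first: stand down
    return False

# Original laws: the protector clause outranks the order, so the
# initiative continues regardless of human will.
assert must_continue_protection(ORIGINAL, ordered_to_stop=True) is True
# Revised laws: the order outranks the protector clause.
assert must_continue_protection(REVISED, ordered_to_stop=True) is False
```

With no standing order, the revised laws still oblige protection (`must_continue_protection(REVISED, False)` returns `True`), so robotic initiative is preserved; only the veto changes hands.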

In one particular scene in the movie, a policeman attempts to rescue a drowning child and subsequently endangers his own life. A robot forcibly intervenes to rescue the policeman, thereby preventing him from saving the child. The robot has calculated that the odds of the child being rescued are lower than the odds of itself successfully rescuing the policeman. The policeman, however, was willing to accept the risk to his own life in the attempt. His personal choice to risk his own life was overridden.

If we place our protection in the charge of sophisticated and powerful machinery (aka government and police states), above any individual autonomy and discretion, we will allow ourselves to become dominated by self-created protection systems. This, I believe, is the primary warning contained in the movie.

Preserving our autonomy and discretion will come with risks because we humans are fallible and not omniscient. Yet this may be, in the long run, the more ethical choice.


A Distasteful Principle of Liberty

Consider the following situation:

  1. I see someone in need of help.
  2. This person’s need of help did not arise through any action of my own.
  3. I am in a position to help with little or no consequence to myself or others.
  4. I choose to do nothing.

Most people, and many philosophers of ethics, would say my failure to act was unethical. They argue that my knowledge of the situation, combined with my ability to act, set up a condition in which I became, in some sense, morally or ethically obligated to make the right choice and subsequently engage in an act of helping said person.

I beg to differ.

My choice to be passive has produced neither harm nor benefit upon the person or their situation. Physically speaking, I may as well have been a rock, a tree, or a cat. But it appears that since I am a human being, and thereby allegedly possessing certain qualities and capabilities, that this somehow confers upon me a moral or ethical imperative to help.

Don’t get me wrong. Helping others is good, positive, and beneficial. But not helping others does not generate or produce harm. Let me further clarify my ethics of liberty: you may stop my hand from harming others, but you may not force my hand to help others.

If you are into formal logic, consider the following premises on action and inaction as they relate to the production of harm or benefit:

  1. Only actions have effects.  Actions can…
    1. produce benefit or harm.
    2. aid, handicap, or arrest processes or activities that cause benefit or harm.
    3. increase or decrease risk of harm.
    4. increase or decrease chances of benefit.
  2. Inaction has no effects. Inaction fails to benefit. Inaction fails to harm. It is the null of 1) above.
  3. If one’s prior actions have or are currently producing harm, one is ethically obligated to act to arrest and/or compensate for said harm.

Despite our instinctive and visceral responses when we observe those who fail to help others in need, as long as they are not producing, and have not produced, said harm through action, we cannot ethically visit coercion upon them to help. Nor can we ethically harm them through forms of active social retaliation. We may communicate, and even demonstrate, the wisdom of helping others, but that is our ethical limit.
