Biological Warfare, Consequences, Amazing Familiarity


“Finding the Challenges” is an original column appearing every other Wednesday, by Verbal Vol. Verbal is a software engineer, college professor, corporate information officer, lifelong student, farmer, libertarian, literarian, and student of computer science and self-ordering phenomena. Archived columns can be found here. FTC-only RSS feed available here.

Today’s work is to look at some of the ways that consequences play out into further consequences.  Our so-called leaders at times practice the short-sighted behavior of assuming that simple expedients exist, forgetting that consequences have consequences.  Let’s view some aspects of this miscarriage of leadership.

Biological Warfare

Dan Carlin recently completed his Hardcore History podcast series on World War I, the six-part Blueprint for Armageddon.  In the last episode he covered one of the most interesting events of the war, one which lasted only a few days but which still has huge, living ramifications in the world today.  Members of the German high command aided Lenin and a small band of revolutionaries in making a safe train passage from Switzerland, across war-torn Europe, to the capital of Imperial Russia.  Why did the Germans engineer this event?  Carlin describes it as injecting a virus into the Imperial body of Russia.  But he goes further, describing how this action produced consequences that went beyond anyone’s control, affecting events from that point forward in extravagant ways.

The first unforeseen consequence, in Carlin’s view, was the very success of the communist revolution, both as a revolution and as the death knell of Imperial Russia.  The Germans had intended only that Russia be reduced to mob rule in sufficient degree to neutralize Germany’s enemy, the Tsar.  As we now know, the Russian Revolution changed the map of the world in myriad ways.  Ironically, the unforeseen consequences came far more swiftly to Imperial Germany itself.

Carlin points out that by this stage, 1917, the German army had shrunk to last-ditch proportions.  Children and old men were being conscripted.  Even inhabitants of captured territory were being impressed into service.  These were people who had been sheltered from the war in a practical sense, or who had been enemies.  But they had not been sheltered from the idea of the fall of the old Imperial order.  They had been infected with the ideas of revolution.

So the virus that had been unleashed by the Germans had come back to infect their own.  This may have been the most widespread use of biological warfare in history.

But the Germans are not the only ones holding the dirty end of a stick.  We can go down through history and find many other examples.  The one that leaps to my mind is the CIA-instigated overthrow of the democratic government of Iran in 1953.  It would be hard to argue that the entire Middle East situation of today does not have roots originating in, or running through, that event.

The most compelling example to me, however, is the more-or-less secret agreements that seem to have been made at the Yalta Conference, near the end of the Second World War.  Much of that conference is cloaked in secrecy.  Secrecy, and the arrogance that produces it, was the way that Franklin Delano Roosevelt, FDR, rolled.

I have many reasons to believe that FDR’s agenda at that summit was to finalize that monument to himself, the United Nations.  He was terminally ill; whether he acknowledged that at the time is a matter of conjecture.  So let’s look at consequences.  I have written on Yalta previously in this series of columns, and I contend that it embodies the worst foreign policy stumble in US history.  I urge you to refresh your memory of the detail from that column.  It appears to me that FDR sacrificed half the world (more than half its population), for more than half a century, to his lurid UN pipe dream.

Rothbard Quote #6

“But there are many problems in confining ourselves to a utilitarian ethic. For one thing, utilitarianism assumes that we can weigh alternatives, and decide upon policies, on the basis of their good or bad consequences.  But if it is legitimate to apply value judgments to the consequences of X, why is it not equally legitimate to apply such judgments to X itself? May there not be something about an act itself which, in its very nature, can be considered good or evil?” — Murray N. Rothbard, For a New Liberty

I suspect that Rothbard is making a very subtle point here.  It would be very difficult to find an X that was purely good, but aren’t those the only Xs that we should allow in our deliberations?  Many Xs should be rejected because they may cause bad consequences; in that regard, the premise, X, is itself bad.

An example: tax schemes always produce unforeseen, and mostly bad, consequences; therefore tax schemes themselves should be rejected.  Any infliction of harm to achieve a good spoils both the good itself and the scheme.

Let’s also draw examples from the first segment above.  Germany’s intent in aiding Lenin was to destabilize its enemy, Russia.  The plan itself, however, affected the entire human race alive at any time after the event.  So even if saving Imperial Germany were a laudable stratagem, it was a behavior that should have been rejected for its larger aftermath.

What good outcomes have offset the disastrous plan to unseat a duly elected leader, Mossadegh, in Iran in 1953?  Is that toothpaste we would put back in its tube if we had the chance today?  Impossible.  Not to mention that the consequences of rebuilding history carry extremely dark portents for the future that would follow.

Most of the modern world, beset with a tsunami of change, does not have (or make) time to consider anything beyond the proximate objective.  A wit once observed that we do a great job on “A” goals but are blind to “B” goals.  This may be a tall tale, but the same wit claimed that the first passenger aircraft went aloft without toilet facilities.

Logic Fallacy #34 — Amazing Familiarity

While using Google the other day, I came across a terrific web page which will provide fodder for many discourses on logic fallacies to come. The site is Logically Fallacious, by Bo Bennett. For the grammarians out there, I agree that “logical fallacy” is an oxymoron, which I assume is meant to convey the idea of a falsehood with respect to the rules of logic, in other words, a logic fallacy. The phrase “logically fallacious,” however, will pass muster.

On Mr. Bennett’s page is a link to what he calls the B-list, on which he discusses what are, in his view, lesser-order fallacies. But I don’t agree that the Amazing Familiarity fallacy belongs on the B-list. This fallacy occurs when a conversant claims knowledge that is impossible to know, or impossible for someone in his position to know.  An example would be a person claiming knowledge of the past, which they have not personally observed, or of the future, which no one can observe:  “Donald Trump will be the next POTUS,” or “Indians killed far more white people than vice versa.”  The latter is also a case of a claim for which there was never a verifying scheme.  Another instance would be stating a statistic that could not be measured, as in “the Great Lakes contain one-fifth of the Earth’s surface fresh water” or “no two snowflakes are alike.”

Where I disagree with Mr. Bennett is on the relative triviality of this fallacy.  Is it trivial to say, “the science is settled on global warming”?  This creates two horribly corrupted ideas: 1) that science is a static thing, settled by a vote among those who call themselves scientists, and 2) that the data are pure and solely fit for the question.  Is it trivial to say, “Iran has a nuclear weapon”?  Anyone in a position to verify that assertion is not posting it on Facebook or discussing it at the local truck stop.  Is it trivial to say that there is, or is not, a God?

In your judgment, what are the important dangers in life?  Might they include these:

  • Assuming that good causes produce good results, or even that good causes have only single, easily monitored results.
  • Thinking relatively about outcomes while thinking absolutely about inputs.
  • Assuming truths gained from information that cannot possibly hold such a truth.


