The Straw Man Fallacy Almost Never Happens

01 Jan 2016

The straw man fallacy is when someone intentionally ignores a person's actual position and substitutes a distorted, exaggerated, or misrepresented version of it.

The idea is the person gives a version of your argument that's weaker than your actual argument, to make it easier to counter.

But in general, people don't try to deliberately attack a position you don't hold. Normally, when someone is disagreeing with you, they are disagreeing with what they believe is your actual position.

If they want to criticise you, and they think your position is wrong, why wouldn't they criticise that? Why would they make up another position? If they already have something they disagree with to criticise, they wouldn't usually go to the trouble of creating a new target.

The main exception to this "usually" can be found in politics, propaganda and sometimes public debates. Politicians notoriously lie about their opponent's position to make them look bad. But in cases like these, it's because they are trying to do something other than debate ideas with you — such as get votes, motivate the base, or win over onlookers.

So usually, someone you're debating with does believe they are criticising your real position. They may have misunderstood your position, but a misunderstanding is not a straw man.

When someone makes a 'straw man' without doing it on purpose, it's not really a straw man. Every argument involves thinking the other person is wrong. In every disagreement, each side thinks the other is missing something, and there is something that at least one side does not understand about the other side's position. If you defined a straw man as any inaccurate version of a position, that would count too. So it's not worth calling this a straw man: the term would then apply to absolutely everything.

One thing that can happen is that you don't immediately recognise something they criticise as your view. "Aha, a straw man!", you think to yourself. But what they may actually be doing is trying to expose contradictions in your view. Contradictions can be hard to recognise as your own view (if you saw contradictions in your views, you would have already tried to resolve them!).

For example —

Joe: Killing civilians in war is always wrong.
Bob: So you're advocating banning all warfare.
Joe: I never said that! That's a straw man! I'm against killing civilians — I never said I was against war altogether.

Here, Bob is trying to say that the idea that killing civilians is always wrong implies that war is always wrong (because it isn't known how to totally avoid civilian casualties). Neither of these people is intentionally attacking a position the other doesn't hold; Bob just thinks this conclusion follows from Joe's view.

What's really happening is that the two of them are disagreeing about something. There is a difference between a disagreement and a fallacy: In a fallacy, a person is making an error in logic. In a disagreement, they're making an error about some substantive claim.

It's not actually that common for people to make logical errors in arguments. (It does happen, but it's rare.) Usually, people disagree on some kind of substance.

A draft of this post was first written on 2009-11-17.

The Case Against Meta Discussion

12 Aug 2010

Meta discussion is talking about the discussion or its participants, instead of the content of the discussion.

Meta is disruptive to a conversation. Meta encourages other people to reply in meta, instead of to the point. It may seem like meta is helpful, but it actually isn't. It drives arguments into black holes.

Common types of meta are:

Ad hominem -- attacking the person's motives or character rather than their argument
- "You're only saying that because you're a communist."
- "I have never been a communist, socialist, or anything else you're accusing me of."
- "Why are you so hostile?"
- "You're so closed-minded."
- "Stop being an idiot."
- "You're being irrational."
- "You're just projecting."

Talking about how well or badly the argument is going
- "We're never going to agree on this."
- "You missed the point."
- "I don't see the point."
- "Why are you saying that?"

Talking about writing style
- "Are you aware that your writing style is very off-putting?"
- "You may have some good ideas, but you're not going to persuade anyone with that tone."
- "People would find it more pleasant if you wrote in a less abrasive way."

Talking about other elements of the discussion, or about yourself
- "Forgive me if this is off-topic."
- "I have decided to start posting again."
- "The problem I have with this is..."
- "This is boring."
- "I've had enough."
- "This is not a nice comment at all!"

Asserting what the other person is doing/saying
- "You're making a fallacy."
- "You're saying <X bad thing that doesn't make sense>."
- "Yes it is/no it isn't."
- "I agree/disagree".

Although saying you agree or disagree can be useful as an indicator of which points still need to be addressed, it is often pointless, takes time away from the content, and is worse still when you say you agree or disagree but actually don't. People often make the mistake of saying "I agree" too readily, before they understand what the other person has said, because they want to show that they're open-minded or reasonable. Similarly, people are too quick to jump to "I disagree" when they don't understand what the other person has said but think they're wrong.

The basic reason that meta is bad is that it's off-topic. But that's very important.

Say two people start off debating whether we should be in Iraq, and they end up debating whether the other person secretly wants to install communism. Arguing about Iraq and arguing about someone's state of mind are two different arguments, which on the face of it have nothing to do with each other. But when people are caught up in a debate, they treat the two as if they were the same thing.

It's clearly off-topic, but it's hard to take on board the full significance of its being off-topic when you're in the situation. It starts as a dispute about what a person meant by a term, then it develops into what they mean by terms in general, then it goes on to his momma.

Meta is irrelevant.

Meta drives arguments into black holes by the following:

1) Meta is off-topic.

2) Meta breeds meta.
  • You can't contradict a meta statement without making another meta statement (which in practice always takes you even further away from the topic under discussion).

  • If you say, "You're only saying that because you're a communist", they'd say "a) I'm not a communist, b) that's not why I said it, and c) I was right". (The 'I was right' is starting to go back on topic, but it's meta in form. The substance of meta is bad; the form of meta is OK-ish. "I meant rationalism in Popper's sense" is, in form, a statement about you, but in content a statement about what the argument meant.)
3) Meta engages emotions.
  • Popper wants our ideas to die in our place. Meta wants to substitute us for our ideas, and let us die instead of our ideas (or if not die, be trashed).

  • Changes the focus from the substance of what's being argued to attributes of the speakers or the nature of the discussion.

Including meta is often longer than just saying the content. It's only shorter when you're saying "what I just said" as a shorter way of repeating what you said before (which is meta in form but not content). "I think that you think that blah blah blah, and I think you are mistaken" is much longer than just saying "blah blah blah is wrong".

If there's a danger of meta, instead of saying "I meant so-and-so", say "that meant so-and-so", just to be on the safe side.

If you think your interlocutor isn't going to answer properly, there may be a temptation to answer with meta to clarify. But don't. Resist, and answer with content.

If your interlocutor does meta, there are a few things you can do:
  • Relentlessly stick to the content.
  • Try asking a different question on the same topic, or ask in a different way, or change the topic.
  • Be more specific when you ask. (This is similar to asking in a different way.)

Meta can appear in articles and posts without two participants, too. For example:

- "I am going to argue that..."
- "In this article, I have a presented a powerful argument that..."
- "I think..."
- "In conclusion..."
- "It is a matter of fact that..."

This causes problems: First, readers have to filter out more fluff to get to the point. Second, if you say you're going to argue something, especially if you say how good the argument is, that will set up high expectations in the reader's mind. The reader's focus will shift to whether you're fulfilling your promise -- and looking out for ways in which you're wrong -- instead of trying to understand the content of what you're saying.

Try scanning your articles or posts for meta, and deleting it. Is the result shorter? Does it take anything away from the original? Is it easier to understand? Are the content points more obvious and prominent? Does it sound less hesitant and bumbling, and more confident and straightforward?

There are rare cases where meta is good. Sort of. One example is when you're discussing morality: in that case, it's stopped being meta because it's a new topic branch. One should be careful to notice that it is a different topic, and not really meta.

Genes Don't Control Us

30 Jun 2010

From a discussion about whether there's a 'natural' tendency to be monogamous or polyamorous:

The whole idea of it being 'natural' or not is silly. We're humans. We can think about these things. We can judge that some of our inborn ideas are actually not good, or flawed, and we can change our responses to them. We frequently go against our most basic inborn desires: hunger (most people don't eat simply when they're hungry; many override their hunger signals and end up either too thin or too fat); pain (people sometimes enjoy the pain of exercising; one can often ignore pain; and when women wax, they often abide by "pain is beauty"); sex (as William Godwin pointed out: you can be completely into it, but if someone tells you your father has died, you'll forget all about it, because reason says the father thing is more important than the sex thing).

And these examples are the most basic, direct things we'd expect evolution to spend most of its resources on. We'd expect these to be the strongest, most important inborn imperatives. Evolution has to get this right before it moves on to other more minor things.

5 Ideas With Reach: Learning

24 Dec 2009

The great thing about ideas with reach is that you don't need to learn that many to be set upon the right path.

Here are five ideas with reach that, if you took them seriously, would allow you to do better than most on lots of stuff:

1) Human minds are universal computers. That means, it's possible for them to do any computation. That means it's possible for them to learn anything.

2) Learning is the exact same process as creating. When someone learns from a book, he's re-creating the knowledge from the book in his own mind.

3) Learning happens by the person making conjectures (guesses) about what a true theory (explanation of something) is, and then criticising it (trying to find flaws in the theory), and then fixing it by conjecturing what might fix the problems. Teachers can only help with this process, not do it for them.

4) In this way, knowledge is evolutionary. Learning is mutation (editing theories) and selection (picking the edits that make it better).

You can also have knowledge without learning, e.g. a species gets more knowledge about how to adapt to its environment by mutation (random changes in genes) and selection (genes that help get themselves copied, survive).

5) In order to learn anything, one must do this process described above. There is no other known process of learning. This means that if an entity can learn one thing, it has the ability to learn anything.
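Points 3 and 4 describe learning as an evolutionary process: mutation (editing theories) plus selection (keeping the edits that do better). As a loose illustration of that loop, here is a toy mutation-and-selection sketch in the spirit of Dawkins' 'weasel' program. The target-string setup and all names and parameters here are invented for the example, not taken from the post.

```python
import random

def evolve(target, alphabet, rate=0.05, pop=100, seed=0):
    """Toy mutation-and-selection loop (all names/parameters invented).

    Start from a random guess, then repeatedly: copy it with random
    mutations (the 'conjecture' step), and keep whichever variant best
    matches the target (the 'criticism'/selection step).
    """
    rng = random.Random(seed)
    current = "".join(rng.choice(alphabet) for _ in target)
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    generations = 0
    while current != target:
        # Mutation: each child is a copy with some letters randomly changed.
        children = [
            "".join(rng.choice(alphabet) if rng.random() < rate else c
                    for c in current)
            for _ in range(pop)
        ]
        # Selection: keep the best variant (never worse than the current one).
        current = max(children + [current], key=fitness)
        generations += 1
    return generations

# Usage: how many generations does it take to evolve a short phrase?
print(evolve("METHINKS", "ABCDEFGHIJKLMNOPQRSTUVWXYZ "))
```

Evolving a string toward a fixed target is, of course, a cartoon: real learning has no target known in advance. The sketch only illustrates the mutate-then-select structure shared by genes and ideas.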

These ideas have consequences. Take (5): this means that we won't be able to have true 'intelligent cars' that learn stuff. By the time we get a car computer that can learn roads and drive for us, it will be able to learn anything, and so would be human. (Note: a car could 'learn' stuff if it's just blindly following a particular program/algorithm. But that's not really learning, it's just analysing data according to a code.)

And (5) means that either animals can't learn, or they're people (saying they're people leads to questions like: why haven't they created anything? why does it look like they can't learn some things?).

It also means that if children can learn at all, they can learn just as well as an adult can. More precisely: there aren't special fields of knowledge that children just can't learn for some reason. So it is not the case that children are unable to understand some things (like 'sense of self as distinct from others' or 'future consequences'). They may not have learnt them yet, but there's no reason to suppose they don't have the ability to.

It also means that mental illnesses or disabilities do not render people unable to learn specific things. Either they have lost their ability to learn, and can't learn at all, or they retain their ability to learn, and can learn anything. It's possible that severe brain damage could cause someone to learn a lot more slowly (or perhaps lose some of the knowledge he once had, if the part of his brain where it was kept got damaged), but lack of speed doesn't mean inability to comprehend.

So, this one idea applies to lots of different fields and issues. It's not just about computation, or teaching, or whatever; it has wider implications, it reaches to other stuff too. And it has important consequences for the ideas it reaches to: for example, it means it's possible to persuade a child instead of forcing them.

What an idea reaches to might not be obvious. You'll have to learn how to apply it, and work out individual situations. But it'll be much quicker and involve fewer errors than if you had to work out the conclusions just from the details of the situation. For any given situation, you can check it against the theories with reach.

Meta is Bad

06 Aug 2009

A good example of why meta discussion is bad is Richard Dawkins' interview with Wendy Wright.

Lots of...
  • Asking "What is the evidence?"
  • Saying "There is lots of evidence."
  • Ad hominem.
  • Arguing about whether it's ad hominem.
  • Argument from authority.
  • Talking about the close-mindedness of each other.
  • Asking to accept the "scientific facts".
  • Talking about how one side or other doesn't have fair coverage, or is censored (which is a fair topic in itself, but I was hoping to hear arguments about the topic).
  • Disagreeing about whether something is 'controversial' or not. That's so dumb — if your interlocutor disagrees, it's controversial!
  • Talking about definitions (e.g. of 'evidence') — actually, it was just suggesting that the definition might be different (for no reason, and then not explaining why or what the difference is).
  • Accusing the interlocutor of argument from emotion.
  • Suggesting the interlocutor read a book to learn about the evidence, instead of just explaining what the evidence is.
  • Accusing the interlocutor of living in an echo chamber and not knowing the other side's arguments (this came after the interlocutor specifically asked for evidence and arguments; instead of giving them, he offered ad hominem meta).
... Instead of arguing in a serious way about how DNA is evidence for evolution, or other content-related stuff.

Notice how, despite having the right conclusions, Richard Dawkins engages in far more meta than Wendy Wright. Most of the things listed in the bullet points above were Dawkins'. She mostly only did meta when he started it.

The Difference Between Philosophy and Science

22 Mar 2009

Philosophy and science are different in one respect: scientific theories can be tested by experiment, philosophy can't.

What does this mean?

All the stuff other than testing is done the same. So, how much of this other stuff is there? How similar are they really?

Testing is the act of taking two (or more) theories with different predictions, and finding out which prediction happens by doing an experiment. The theory that fails the test (predicts something different from what actually happens) is eliminated, or changed to account for the unaccounted-for result.

Tests only happen when we have two viable theories. If one of them makes less sense than the other — has more holes/problems, or doesn't explain as much as the other theory, or whatever — then the other is preferred by default, and no test to see which is better is necessary. ('Cause if you can already see which is better...)

I say two theories, but there could be more than two rivals at one time. The thing is, that's rare. It's rare enough at the leading edge of science to have one viable theory, let alone two, let alone more than two. Usually, when there are rivals, we can eliminate all but one or two just using criticism. Tests only come in after we've done that.

So, what is there other than testing?

First, there's coming up with a theory in the first place. We do this by guessing what might be the case, and guessing explanations for it. In other words: conjecture. (We do not induce theories from observation — though we can criticise our theories using observation.)

Then, there's criticism. We criticise the theory to see if it makes sense, to see if it's better than its rivals, to see if it explains what it purports to, and so on. We try to find problems with the theory.

When we find problems, we try to solve them. Either we will change the theory to account for the problems, or we'll come up with a new rival theory that has fewer/less-severe problems, or we'll discover the things we thought were problems are actually OK or explained already.

This process of criticising our theories and changing them to solve problems is intense. Or at least, it should be. Scientists tend to do it pretty well: they're rigorous, or at least have a culture that encourages being rigorous. Philosophers... not so much. Half of them don't even believe in objective truth, which can be a bit of a damper if you're trying to find it.

Most people have the impression that philosophy is this wishy-washy, personal/subjective thing that you talk about to sound deep. Some people have an idea of what it is, but it's mixed in with this wishy-washy conception of it. It's no secret that most people barely know what it is; most philosophy classes start with the question, "What is 'philosophy'?" (starting a history class asking "What is 'history'?" would be absurd).

But philosophy is only different in this one way. To make good progress in philosophy, it needs just as much rigour as science (or even more, considering we don't have testing to help us out).

Well... from the 'testing' difference, you could say another difference is that science and philosophy discuss different problems. Science is about the problems/theories regarding things we can test for (physics, chemistry, and so on), philosophy is about the problems/theories regarding things we can't (morality, epistemology, etc.). But their methodology is the same (well, up until the point where you actually test stuff — which, by the way, doesn't always happen for any given theory. A theory can be scientific without being tested; it just needs to be testable).

But it's not just philosophy that has this similarity to science. All fields where one can make progress involve this conjecture-and-criticism process. And to be good at any of them, you need some degree of rigour.

Authorities are Dangerous

28 Oct 2008

Using authorities to back up your arguments is not only pointless and false (justificationism), but a danger to rationality: if you have a favourite authority and have invested a lot in him, you may become defensive or resistant when someone challenges his ideas (even if they're not his main ideas, and you would otherwise agree with the criticism). Talking about ideas directly doesn't run into that problem.

Popper & Induction Applied to Art

19 Sep 2008

A common error made by people trying to draw is that they draw what they think they see, rather than what is actually there. Since the brain is specialised in creating and interpreting symbols, people draw the symbols (e.g. "An eye is an almond shape with a circle in it"), instead of their real shapes (e.g. "Eyes are made of many components, such as eyelids, eyelashes, tear ducts, wrinkles, and more, and those are represented by types of lines, etc.").

The advice they are usually given is, "Draw from life," or "Draw what you see." But this is rarely helpful. They will often just continue to make the same mistakes, and not know why or how to improve.

From the point of view of the artist giving advice, it seems accurate. Just look at what's there, and draw that. Draw what you see, not what you think you see. It seems fairly straightforward; why don't beginners just take this advice?

The reason is that all observation is theory-laden. Even if you look at stuff and try to draw it, you still won't improve unless you improve your theories of what's there, or how to look at it, or how to translate what you see into marks on paper. The artist already has the theories about how to draw. We can't induce those theories just from looking at stuff.


Summary: To draw well, you have to learn what to see when you look at stuff and try to draw it. Artists say that if you just practice drawing from life, you'll learn it. This is wrong for the same reason the 'problem of induction' is wrong: we first have theories, and only interpret what we see through those theories. We can't just 'observe' without knowing what to observe.

Credit goes to Karl Popper, who created some of the ideas used here, and to an artist friend for making the connection between induction and art.


21 Jul 2008

In debates, people rarely try to persuade others. They say they want to, and they talk as if they want to*, but they don't act as one would expect if they were trying to persuade.

To be persuasive, one must find a way of explaining that appeals to the other person's knowledge and current values. If you argue that hitting kids is wrong using the premise that violence towards children is always wrong, but the person doesn't agree with that premise, he won't be persuaded by that argument. If, instead, you argue that hitting kids is wrong because kids are people, and the person believes that violence towards people is always wrong, you will appeal to his values and understanding of the world, so you might make headway.

Your argument has to solve something in his problem situation.

But most people don't even try to do this. They just make arguments they think should work, regardless of the person's problem-situation. They don't stop and think, "I wonder what his misconception is. Perhaps it's ..." or "What new way of explaining this argument could appeal to his current values? Oh, maybe he'd like this explanation ..." Instead, they think things like, "Argh, he just doesn't get it!" or "I wish he was more rational, then maybe he'd understand. >_<" or "But this argument is self-evident!" or "He's not listening at all..." They're not sympathetic. They don't realise that they should try to understand where the person is coming from when making an argument, not just when listening to the other person's.

So what would it look like if someone was trying to persuade? More interestingly, what would be effective? Here are some good things to start with:
- Start with something they agree with and explain or show why
-- it's consistent with your argument.
-- your argument follows from it.
-- it's interesting to them (using their knowledge and values to explain why it's interesting).
- Look out for when they have a misunderstanding of what you're saying, and correct it.
- Look out for your misunderstandings too, and ask questions to try to correct them.
- Stick to one point at a time. You might have the impulse to correct everything they say that's false all at once, but this is usually just more confusing for them. Get them to agree to things in bite-sized chunks. If there's a complex idea that has lots of parts to it, try to find a way to split it up.
- Understand your opponent's view. Don't just assume you know it. You probably don't, so ask lots of questions to find out.

Acting friendly, patient and sympathetic helps too.

* Actually, sometimes they deny that they're in debates to persuade (because they think that sounds closed-minded or bigoted), and say they have them to learn from the other person instead. But usually it slips out that their goal is persuasion: they don't act like they want to learn rival ideas, and they are pleased when the other person concedes (regardless of whether the person actually understands what he's conceding to). Also, less serious people sometimes start debates for the drama instead of the content. But I'm not talking about those people.

2015 UPDATE: I also gave an Argument Workshop with updated views on this topic in 2011.

Problems Are Good

10 Jul 2007

"I think that there is only one way to science – or to philosophy, for that matter: to meet a problem, to see its beauty and fall in love with it; to get married to it and to live with it happily, till death do ye part – unless you should meet another and even more fascinating problem or unless, indeed, you should obtain a solution. But even if you do obtain a solution, you may then discover, to your delight, the existence of a whole family of enchanting, though perhaps difficult, problem children, for whose welfare you may work, with a purpose, to the end of your days."
— Karl Popper

This contrasts with the conventional wisdom that says we do all these tests in science in order to get to the Solution, which we may then be happy with.

In real life, being in a state of no problems isn't fun — it's boring. If you have no problems, you're not working towards anything; you're not growing; you're not creating anything. All creative acts involve some kind of problem-solving.

The real thing we should be excited about when we solve problems isn't the fact they're over and done with and now we can relax without them — it's the discovery of new, better and more interesting problems.


13 May 2006

'Memes' are not just those online quizzes that tell you how well you do in bed by measuring the letters in your username. [2015 UPDATE: 'Memes' have come to be known on the web as a funny image or an internet in-joke. At the time on LiveJournal, they referred to trending quizzes you copy-paste into your journal.] 'Memes' are a philosophical idea — and a damn good one at that.

Put simply, a meme is an idea that is passed on from person to person. Memes evolve to be better at passing themselves on, and to stay in the minds they reach. Kinda like genes, except a meme can be passed on to anyone and it's an idea.

Because memes have been around since humanity began, you can imagine that some of them have survived quite a while and are pretty damn strong (read: hard to get rid of). They're designed to stay with you and spread to others. Traditions, culture, all that stuff, are memes.

There are two types of meme. The 'static meme' and the 'dynamic meme'. The static meme — AKA anti-rational meme — spreads itself by being very difficult to get rid of. (For example, 'if you even think about doubting God, you'll go to hell'.) The dynamic meme — AKA rational meme — spreads itself by withstanding criticism; by being a good/true idea. Most science would fall under the latter category.

I think David Deutsch's idea of static/dynamic memes is very important and can explain a lot in human behaviour. Spread the idea. ;)

How to Tell Whether a Theory is Bad

11 May 2006

Welcome to Lulie's Guide on How to Tell Whether a Theory is Bad.

  • If the person denies that he himself exists.

  • If the person denies that knowledge is possible, or the person denies that science or reason has achieved anything.

  • If the theory doesn't hold if applied to itself.

  • If the theory doesn't solve any problems.

  • If the theory doesn't explain anything.

  • If the theory is easy to vary (bad explanation).

Updated January 2018.