Comparing Evidence

Many of you are new to this site from the 3NR or MDTA forums. Hi. My overarching project for this site is writing a new guide for debaters that I'm calling Picking Up. Hope you like it.

One of my flaws as a coach is that I tend to get ahead of myself. Telling a novice debater to "make smart arguments as to why your evidence is better than theirs" is good advice in general, but it's pretty much useless if I haven't taught that debater how to compare evidence in the first place.

This article really should have been written a long time ago, because comparing evidence is at the root of so much in debate. You need to pick your best cards to go at the top of your blocks, you need to win "our cards are better than their cards" arguments to effectively debate a round, you need to know what a good card looks like in order to cut one, and so on. The problem with teaching something so fundamental is that it gets taken for granted. By the end of the average high school debate career you've evaluated so many pieces of evidence that the process becomes automatic, so much so that you don't have to think about it. Intuition is awesome, but it's hard to teach.

I'm going to do the best I can at breaking down the process, but remember that the best way to get good at comparing evidence is to do it a lot: write a lot of blocks, cut a lot of cards, debate a lot of rounds.

So, Card A or Card B, which is better? I like to think of this process as a number of tests, like a checklist that I run in my brain. Think of it like a driving test: almost everybody is going to make a few mistakes, but the instructor is looking for overall competency and making sure that no flagrant violations occur.

The most important test for a piece of evidence to pass is the "says what they say it says" test: do the tag and the text of the card agree? First read the underlined portion of the card: does the tag do the author justice, or has the author's point been overstated? Some common stretches you'll see include taking a card that mentions "nuclear war" and tagging it "extinction," taking a card that says "conflict" and tagging it "nuclear war," or taking innuendos and suggestions and turning them into explicit claims. This is a pretty common practice, unfortunately, mostly because too many debaters let their opponents get away with over-tagging their evidence.

Next, look at the part of the card that isn't underlined. Here you might find caveats (this will happen unless that thing happens), weasel words (possibly, some have argued that), alternate causalities (poverty leads to obesity, but so does lack of exercise, no after-school activities, and crappy school lunches), and other evidence that the argument is weaker than it looks. Rarely will a team go so far as to intentionally distort the direction of evidence (underlining around the word "not"), but the magnitude of the evidence will often be stretched, at least a little bit.

After evaluating the agreement between tag and text, I usually give a second thought to the argument presented in the text itself: do the author's conclusions logically follow from the presented data? Does the author commit any popular logical fallacies? Is the argument missing an important piece of data? Does the author answer the obvious objections to his or her position? This sounds like pretty basic stuff, but you'd be surprised how often major publications print stuff that fails this test.

The last set of tests that I want to mention centers on the citation. I list it last because it's the easiest, and new debaters tend to focus on date comparisons and shallow "bias" claims at the expense of the tests above. However, there are still some important citation comparisons that you want to be ready to make.

More recent evidence can be better, but only when recency actually matters. Here's a pro tip: if you can't think of (or better yet, prove) an event that would make the information in their card outdated, it's usually not worth your while to point out that your card is newer.

The qualifications of a card also matter, but comparing qualifications is usually not straightforward. Cards can have institutional legitimacy: they come from a trusted publication or author with a history of providing accurate information. Cards can come from someone with field expertise: an author who is respected, prolific, and/or professionally qualified on the subject in question. Cards can be neutral, free of bias and conflicts of interest. Cards can take more, better, or more appropriate data into account, or otherwise have a better methodology. Cards can also go through peer review, fact checking, or other editorial processes that lend more credence to what they say.

So a major national newspaper, a Republican Party strategist, a PhD in physics, a non-partisan think tank, a peer-reviewed scientific journal, a JD candidate, and a political blogger are all "qualified," but they aren't all qualified for the same reasons, nor are they equally qualified to talk about the same things. Evaluating whether your author is better than theirs requires connecting the reasons an author is qualified to the reasons those qualifications matter for this particular dispute. Don't just compare; compare and impact your comparisons.

In fact, that's pretty good advice for evidence comparisons in general. "Your evidence" is bound to pass some tests that "their evidence" does not, and vice versa. The arguments about which tests matter more are going to be very dependent on what is being argued, and eventually winning that debate becomes much more important than winning that your card is newer by 3 days.

2 comments:

Unknown said...

The "Some would say" (otherwise known as the straw person argument) is in my mind the kind of distortion that you say debaters rarely do. If a debater points out that the evidence is clearly an argument established to be disregarded or substantively answered, I would likely disregard said evidence.

Ryan Ricard said...

It's sometimes tricky to spot a "some would say" without reading the entire article though. Especially in academic prose where it's pretty common to cite an author and follow with a lengthy summation/analysis of the argument. Some Rabinow cards from "The Foucault Reader" end up looking a lot like straw person cards, fr'instance.

That being said, I totally agree that all clear straw-person cards need to be shot on sight.