
Framing, Word Choice, and Biases

A good argument provides sufficient justification for us to change the way we think, believe, or act. That is the art of persuasion. We expect to be treated as critical thinkers who can make reasonable decisions based upon the facts, our emotional responses, and our connection to the speaker. However, there are times when a speaker may use hidden means of persuasion to try to convince us. Two common strategies are framing and word choice. Think of framing as a way to change meaning by manipulating perspective, just as the frame around a picture influences the way we see it. For example, an optimist is said to see a glass as half full while the pessimist sees it as half empty. The glass is the same; the difference is simply a matter of perspective, and it depends upon the context.

The context will also determine whether the purpose of a paper or speech is to persuade or inform. The speaker or writer must choose language that is specific to the topic, purpose, and audience. For example, lawyers, politicians, opinion journalists, and advertisers are expected to advocate particular points of view, so the audience will expect a considerable amount of framing in the way they present an issue. An audience will tend to give these communicators substantially more leeway in this regard than it might give to others who are expected to report rather than advocate. As such, an audience will hold news sources to a higher standard of neutrality than writers of opinion pieces such as op-eds; and an audience may reject outright “push polls” or any survey or study that appears biased or whose methodology does not meet a scientific standard.

It is not always readily apparent when someone is simply intending to inform and when he is attempting to persuade. But understanding the context and being aware of the implications of language choice should help you to recognize hidden persuasion and to evaluate whether the author of the argument you are analyzing has chosen language appropriate to the topic, purpose, and audience. We will expand on the concepts of framing, word choice, and bias by exploring the answers to the following questions:

  1. How do effective communicators choose language for their arguments?
  2. What uses of language are inappropriate?
  3. What is propaganda?
  4. How can I tell if language is being used as a tool for audience manipulation?
  5. What is framing bias?
  6. What is confirmation bias?
  7. What can I learn about fallacies from advertising?
  8. When can I trust a poll or survey?
  9. How does scientific sampling lead to credible premises?
  10. How can reliance on scientific reasoning reduce bias?

1. How do effective communicators choose language for their arguments?

Clear and appropriate word choice is a desirable quality in both written and oral communication, so your assessment of the strength of someone’s argument often will depend in part on how effective the author is at choosing his language. The same applies to constructing and communicating your own arguments.

Ask yourself whether the author correctly uses technical terms relevant to the topic. In addition, if the audience is not made up of specialists in the subject, does the author provide definitions of these terms, as well as examples to illustrate them? An author strengthens his ethos when he uses technical terms and other vocabulary correctly and shows that he understands when it is appropriate to provide definitions and examples. But language choice is also relevant to an evaluation of an argument’s logos and pathos.

2. What uses of language are inappropriate?

In addition to noting whether an author relies upon correct and appropriate terminology—an appropriate use of language—observe whether the author avoids manipulative uses of language that either fail to support the argument’s logos or make unfair appeals to pathos.

Weasel words: words or phrases that are ambiguous or vague; clear and critical evidence for a position may be missing and weasel words offered in their place.

In the sentence “This compromise will give you most of what you wanted,” the word “most” is a weasel word. Exactly what is being promised? Similarly, in the statement “Students are almost always offered jobs at the end of their internships,” what is the precise meaning of “almost always”? In your Argument Analysis, consider whether the author relies on such ambiguous or vague language.

God terms: words or phrases with positive connotations that are meant to give a position a “stamp of approval”; god terms are used to imply that supporting a position would be patriotic or virtuous but are not themselves evidence for the position.

In the sentence “Cutting access to food stamps would encourage personal responsibility,” the god term is “personal responsibility.” It might seem as if it would be hard to argue against “personal responsibility” or related god terms such as “independence” and “self-reliance.” However, it would require a definition of “personal responsibility,” combined with evidence from studies of people’s behavior in the face of food stamp or other benefit reductions, to argue that cutting access to food stamps would lead to the intended results.

Devil terms: words or phrases with negative connotations; devil terms are used to imply that supporting a position would be unpatriotic or hurtful but are not themselves evidence against the position.

For some audiences, “dependency” might have negative connotations and would be inconsistent with god terms such as “independence” and “self-reliance.” For such an audience, “dependency” would be a devil term in this sentence: “Congress needs to cut welfare payments to discourage dependency.” However, it would require a definition of “dependency,” combined with evidence from studies of people’s behavior in the face of benefit reductions, to argue that cutting welfare payments would lead to the intended results.

Name-calling: labeling an opponent with a term that the audience would find negative; name-calling is used to imply that an argument may be dismissed because its advocate should not be trusted.

If the term “radical” has negative connotations for an audience, then labeling someone with that term would be name-calling. Similarly, applying the term “fascist,” with its negative connotations, to an opponent would be an instance of name-calling. However, the name flung at an opponent does not prove anything about the strength of his argument. In your Argument Analysis, consider whether name-calling is used and the extent to which it detracts from the argument and weakens the name-caller’s ethos.

Generalities: broad statements so nonspecific that they do not make a meaningful contribution to a debate over an issue.

The statement “We must do what is best for our children” would be an example of a generality in a debate over high-stakes testing in high schools. This statement is so general that it could be trotted out by either proponents or opponents of high-stakes testing—which means that it is not an argument in favor of anything at all.

Euphemisms: words used to avoid unpleasant or offensive terms; euphemisms may be used to “sugar coat” elements of an argument.

When the word “retarded” became stigmatized, people began to replace “mentally retarded” with the phrase “developmentally disabled” in order to avoid the use of a word that had become offensive.

When a euphemism has the effect of distracting an audience from important facts, its use may be considered manipulative. A famous example is the phrase “collateral damage,” a term that refers to civilian deaths and injuries without using a word that might make the audience think about the human beings who were affected by a military action.

Dysphemisms: words with unpleasant connotations; the negative feelings evoked by dysphemisms are directed against the opponent or class of people being attacked by the speaker or writer.

A well-known example of the use of a dysphemism comes from Rwanda prior to and during the 1994 genocide. The targeted people were referred to as “cockroaches.” The term not only had unpleasant connotations; it also served to dehumanize the targeted individuals (Bromley, 2011, pp. 39, 43, 45, 51).

Loaded language: words or phrases that carry emotional “baggage”; the writer or speaker will use loaded language as part of an appeal to pathos—steering the audience toward or away from a position by means of the strong negative or positive feelings that become associated with certain language.

A prominent contemporary example of loaded language is the use of “pro-life” and “pro-choice” as labels for two sides in the debate over abortion. Each side is attempting to benefit from associating itself with something positive. As with the other instances of manipulative language, the terms themselves are not evidence for a position.

Skilled and discerning thinkers notice how advertisers, politicians, and other communicators may attempt to use manipulative language—often combined with images—to persuade them; and such thinkers do not allow themselves to be swayed by emotionally-charged, overly-broad, or misleading language.

For more information, read about “How can poor word choice lead to fallacies?”

3. What is propaganda?

Propaganda is a term for deliberate, systematic attempts to manipulate beliefs and emotions through methods that are considered deceitful. Studying propaganda techniques can help hone our critical thinking skills. Critical thinkers are always on the lookout for manipulation of belief and emotion, and they recognize that political pundits on television and radio, especially when promoting partisan positions or agendas, often try to manipulate opinion through inflammatory language and dichotomies (pairs of polar opposites) such as us versus them, friend versus enemy, patriot versus traitor, and good versus evil. Such contrasts can be powerful persuaders yet may have little basis in reality. While most political rhetoric does not amount to propaganda, history does provide numerous examples of it, and after studying techniques of propaganda you will better discern certain elements of it even in everyday advertising, media, and political discourse. These elements include fallacies and misleading language.

4. How can I tell if language is being used as a tool for audience manipulation?

When authors use language that is overly vague or inflammatory, they attempt to put a bias on an argument—that is, they attempt to slant it in favor of one position over another.

Political rhetoric is one context in which such slanted language may appear. For example, an argument may make use of emotionally-charged or loaded language by calling abortion murder. Similarly, a speaker may call an opponent’s position anti-child or pro-big government. Such language takes advantage of the fact that “child” for many audiences will have positive connotations whereas “big-government” may provoke a negative reaction.

Advertising is another context in which slanted language often is found. Weasel words are common. No actual standards are attached to terms such as new, improved, or long-lasting. Without an agreed-upon meaning, these words make no actual falsifiable claims. In short, their truth cannot be tested by the audience.

Since slanted or emotionally-laden words are meant to persuade by engaging our values or emotions, they fall under the heading of appeals to pathos, and the misuse of these words indicates the presence of one or more of the pathos-related fallacies. Emotionally-charged and slanted language may be powerfully persuasive. From a critical thinking perspective, keep in mind that this persuasion may be taking place without the audience’s awareness. As you evaluate an argument, be alert to these “hidden persuaders” and consider how they may influence an audience.

5. What is framing bias?

Psychological studies of human reasoning show that people’s judgments are often surprisingly influenced by the way that a task or question is framed. We are easily influenced by a wide range of what psychologists call framing effects, and all the more so when we are unaware of this hidden persuader.

Imagine an experiment in which subjects are asked what they would decide if they had to undergo a particular medical procedure. In a disclosure form, they will be given factually identical information, but the information will be worded differently. The wording in the form will be one of the following:

Ninety percent of patients who undergo this procedure are alive at the five-year mark.
Ten percent of patients who undergo this procedure are deceased at the five-year mark.

The information in the two sentences is the same: the procedure has a survival rate of 90%, measured over a five-year period, and a failure rate of 10%, measured over the same period.

If many more people accept the procedure when the outcome is worded positively, then people’s choices have been determined by an emotional reaction to the wording of the choices rather than by an objective assessment of fact. That is but one of many examples of the hidden persuader that psychologists call a framing effect.
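
To make the comparison concrete, here is a minimal sketch of how the results of such an experiment might be checked with a two-proportion z-test; the counts below are hypothetical, not results from an actual study:

```python
import math

# Hypothetical counts: how many subjects in each wording condition
# said they would accept the procedure.
accepted = {"survival frame": 84, "mortality frame": 62}
asked = {"survival frame": 100, "mortality frame": 100}

p1 = accepted["survival frame"] / asked["survival frame"]
p2 = accepted["mortality frame"] / asked["mortality frame"]

# Two-proportion z-test: is the gap bigger than chance variation explains?
pooled = sum(accepted.values()) / sum(asked.values())
se = math.sqrt(pooled * (1 - pooled) * (1 / asked["survival frame"] + 1 / asked["mortality frame"]))
z = (p1 - p2) / se
print(f"acceptance: {p1:.0%} vs {p2:.0%}, z = {z:.2f}")  # |z| > 1.96 suggests a genuine framing effect
```

If identical facts produce significantly different acceptance rates, the wording itself, not the information, is doing the persuading.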

6. What is confirmation bias?

Another very common cognitive bias is confirmation bias (also called my-side bias). People tend to be strongly invested in their beliefs. They will favor information that confirms their preconceptions or justifies their actions while discounting or ignoring evidence that conflicts with those actions and beliefs. They will look for, gather, and evaluate evidence selectively, ignoring perspectives that might challenge their preconceptions or lead them to evaluate whether their existing beliefs are really true. People also tend to recall information from memory selectively when asked to justify their beliefs or explain or defend their actions.

Confirmation bias comes into play with respect to trivial or everyday beliefs, but it is more pronounced when issues are emotionally significant or beliefs are deep-seated. The avoidance of perspectives and evidence that have the potential to contradict deeply-held beliefs is driven by psychological need, and the more invested an individual is in a particular belief, the less willing he will be to allow his beliefs to be challenged or examined too closely.

For those who believe that the moon landing was a hoax, there is little opportunity to change their minds. Give them facts, and they can find ways to refute them. Present testimony from those involved, and they will dismiss the witnesses as liars. Find experts in the field, and they will produce a host of other people who believe as they do. Conspiracy theorists are the antithesis of critical thinkers. They look only for evidence that supports their preconceived notions and discard the rest. That is confirmation bias at work.

7. What can I learn about fallacies from advertising?

Advertisements are a good place to look for examples of fallacies. Advertising is a very compressed format, without a lot of space and time for logos-based persuasion. That constraint often results in a persuasive shorthand that leans heavily on fallacious reasoning.

Emotionally manipulative appeals are the foundation of many advertising campaigns. Often advertisements are built upon the pathos-related message that you need or want the product being pitched.

The “Four Tricks of Advertising” (Teays, 2010, p. 481) captures the steps of a common type of emotionally manipulative appeal:

  1. Establish a feeling of shame in the audience. You’ve got a problem (even if you were unaware of it until now).
  2. Establish a feeling of optimism. It’s okay—your problem can be solved.
  3. Offer a solution. Your problem will be solved if you will simply buy this product.
  4. Offer a rationale for accepting the solution. You have a right to solve your problem (whatever the cost or impracticality of the solution).

8. When can I trust a poll or survey?

Polls or surveys may be the basis of the inductive claims that are used as premises in deductive arguments. Assertions based on such tools may look impressive but should not be accepted uncritically. Many professional frame shapers, such as advertisers, politicians, and special interest groups or lobbies, use statistics to present a skewed picture even if some of the individual numbers can be said to be true. A big problem with polls and surveys, then, is that they may be driven by opinionated or ideological goals. Such biases are not always apparent because they can be hidden by selectively collecting or citing statistics.

One way frame shapers collect biased poll or survey data is by asking leading questions designed to funnel respondents toward certain answers. Push polls are those in which the way a question is framed may strongly influence responses. Social psychologists have shown that people are fairly susceptible to this practice, and partisan talk show hosts often use push polls to their rhetorical advantage. An example is the call-in poll on a Lou Dobbs Tonight show that asked, “Which do you believe Senator Hillary Clinton is most out of touch with?” The poll allowed the following choices: “Illegal Immigration,” “Border Security,” “The American People,” “All Of The Above,” and “None Of The Above.” It was not surprising, given the way the question was framed, that the big winner was “All Of The Above” (Media Matters, 2006, n.p.). In addition, opinionated people are likely to call in to answer a viewer poll like this one, so the respondents were self-selected. Their answers may tell us more about the show’s audience than about how a more randomly-chosen or representative sample of US citizens would view Hillary Clinton.

With a leading question and self-selected respondents rather than a random sample of voting Americans, this kind of poll exemplifies techniques of persuasion commonplace in infotainment. Such polling and surveying should not be confused with scientific sampling.

9. How does scientific sampling lead to credible premises?

Surveys and polls are more than just questionnaires. The information from a survey will be generalized to a larger population. As a consequence, the sample of the population to which the survey is given needs to be genuinely representative of the target population as a whole. In addition, it needs to be large enough that the results exceed the margin of error—the amount of variation that would arise from chance alone.

Social scientists know that with a small sample it is easy to overgeneralize—to make claims about the target population for which there are too few examples to allow for confident conclusions. Even for very formal surveys, the margin of error when the sample size is small is likely to be high unless the target population is a highly homogeneous one. Often, however, social scientists study heterogeneous populations rather than homogeneous ones. A heterogeneous population is not like blood, where a single drop is representative of the blood throughout the body. For example, the margin of error for a national election poll drops to a fairly cautious +/-3 percentage points (with 95% confidence in the results) only as the number of people surveyed approaches 1,000. By contrast, a very small sample of 50 respondents, even if randomly-selected, can have a margin of error of +/-14 percentage points.
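
To see where such figures come from, here is a minimal sketch using the standard margin-of-error approximation for a simple random sample, assuming a 95% confidence level and the most conservative proportion (p = 0.5); the sample sizes are chosen only for illustration:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a simple random sample of size n,
    at 95% confidence (z = 1.96) and the worst-case proportion p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 500, 1000):
    print(f"n = {n:4d}: +/- {margin_of_error(n) * 100:.1f} percentage points")
```

Under these assumptions, 50 respondents yield a margin of error of roughly +/-14 points, while it takes about 1,000 respondents to reach the +/-3 points commonly reported for national polls.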

Besides small sample sizes, the other common problem to watch out for is the biased sample. For example, regional differences may matter, so Virginians or Californians should not be over-represented relative to their proportion of the population in a study that claims to generalize about adult citizens of the U.S. as a whole. Similarly, a survey in which people of one gender are over-represented would be biased in its results if the intention was to generalize about both men and women, and a poll in which people with four-year college degrees are over-represented would be biased if the intention was to generalize about adults of all educational levels.
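
One simple check for this kind of bias is to compare the sample’s makeup with the makeup of the target population. Here is a minimal sketch of that comparison; the regional shares and sample counts are illustrative, not official census figures:

```python
# Compare a sample's regional makeup to the population's makeup to spot
# groups that are over- or under-represented before generalizing.
# (Illustrative numbers only, not official census data.)
population_share = {"Northeast": 0.17, "Midwest": 0.21, "South": 0.38, "West": 0.24}
sample_counts = {"Northeast": 90, "Midwest": 80, "South": 250, "West": 80}

total = sum(sample_counts.values())
for region, pop_share in population_share.items():
    samp_share = sample_counts[region] / total
    gap = (samp_share - pop_share) * 100
    print(f"{region:9s} population {pop_share:4.0%}  sample {samp_share:4.0%}  gap {gap:+5.1f} pts")
```

A large gap for any group is a warning that the poll’s results may not generalize to the population as a whole without reweighting or resampling.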

10. How can reliance on scientific reasoning reduce bias?

Science may not be applicable to all aspects of life, but it is useful to notice how scientific reasoning works to overcome common biases. First, scientific research requires testing a hypothesis by purposely exposing it to potential failure. Such an approach is the polar opposite of confirmation bias because the researcher must be willing to consider the possibility that the hypothesis is wrong.

Second, scientific research makes use of blind experiments to guard against bias. Studies will be set up as either single- or double-blind experiments. If they are single-blind, participants will not know what group they are in; if they are double-blind, neither researchers nor participants will know what group participants have been assigned to. Since participants in single- or double-blind studies do not know what group they are in, their reports are more likely to reflect what they actually experience versus what they expect to experience. Similarly, in double-blind studies, researchers’ interpretations of data are more likely to reflect the actual data versus what the researchers may have expected the data to be.
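
To make the logic of blinding concrete, here is a minimal sketch of a double-blind assignment; the participant IDs and group codes are hypothetical:

```python
import random

# Assign participants to coded groups so that neither the participants nor the
# researchers who score the results know which code means "treatment" until
# the study is unblinded. Only a third party holds the key.
participants = ["P01", "P02", "P03", "P04", "P05", "P06"]
codes = ["A", "B"]

key = dict(zip(codes, random.sample(["treatment", "control"], k=2)))

# Balanced, randomized assignment of participants to the coded groups.
labels = codes * (len(participants) // len(codes))
random.shuffle(labels)
assignments = dict(zip(participants, labels))

print(assignments)  # what researchers and participants see during the study
# print(key)        # revealed only after the data have been collected
```

Because the group labels carry no meaning for anyone collecting or scoring the data, neither participants’ reports nor researchers’ interpretations can be steered by expectations about the treatment.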

A writer or a speaker probably will not set up a literal single- or double-blind study as part of the process of developing her argument. However, if she adopts the type of reasoning behind such studies, she may be able to avoid bias. First, she must be open to altering or even abandoning her initial position—the equivalent of being willing to see her hypothesis as not true. Second, she must see the evidence as it actually is, without imposing a pattern on it by ignoring some data while emphasizing other data—an openness that is the equivalent of a blind study.

License


This work (Radford University Core Handbook by Radford University) is free of known copyright restrictions.
