
What Americans Really Think about Trump’s Immigration Ban and Why

Text Analysis of What People Say in Their Own Words Reveals More Than Multiple-Choice Surveys

It’s been just over a week since President Trump issued his controversial immigration order, and the ban continues to dominate the news and social media.

But while the fate of Executive Order 13769—“Protecting the Nation from Foreign Terrorist Entry into the United States”—is being hashed out in federal court, another fierce battle is being waged in the court of public opinion.

In a stampede to assess where the American people stand on this issue, the news networks have rolled out a parade of polls. And once again, the accuracy of polling data has been called into question by pundits on both sides of the issue.

Notably, on Monday morning the president himself tweeted the following:

Any negative polls are fake news, just like the CNN, ABC, NBC polls in the election. Sorry, people want border security and extreme vetting.

— Donald J. Trump (@realDonaldTrump) February 6, 2017

Majority Flips Depending on the Poll

It’s easy to question the accuracy of polls when they don’t agree.

Although on the whole these polls indicate that opinion is fairly evenly divided, the all-important sound bite of where the majority of Americans stand on the Trump immigration moratorium flips depending on the source:

  • NBC ran with an Ipsos/Reuters poll that found a plurality of Americans (49% vs. 41%) support the ban.

  • Fox News went with similar results from a Quinnipiac University poll (48% in favor vs. 42% opposed).

  • CNN publicized results from an ORC poll with the majority opposed to the ban (53% vs. 47%).

  • A widely reported Gallup poll found the majority of Americans oppose the order (55% to 42%).

There are a number of possible reasons for these differences, of course. It could be the way the question was framed (as suggested in this Washington Post column); it could be the timing (much transpired and was said between the dates these polls were taken); maybe the culprit is the sample; or perhaps modality played a part (some were done online, others by phone with an interviewer).

My guess is that all of these factors, to varying degrees, account for the differences, but the one thing all of these polls share is that the instrument was quantitative.

So, I decided to see what, if anything, happens when we try to “unstructure” this question, which seemingly lends itself so perfectly to a multiple-choice format. How would an open-ended version of the same question compare with the results from the structured version? Would it add anything of value?

Part I: A Multiple-Choice Benchmark

The first thing we did was to run a quantitative poll as a comparator using a U.S. online nationally representative sample* of n=1,531 (a larger sample, by the way, than any of the aforementioned polls used).

In carefully considering how the question was framed in the other polls and how it’s being discussed in the media, we decided on the following wording:

“Q. How do you personally feel about Trump’s latest Executive Order 13769, ‘Protecting the Nation from Foreign Terrorist Entry into the United States,’ aka ‘A Muslim Ban’?”

We also went with the simplest and most straightforward closed-ended Likert scale—a standard five-point agreement scale. Below are the results:

[Chart: results on the five-point agreement scale]

Given a five-point scale, the most popular answer by respondents (36%) was “strongly disagree.” Interestingly, the least popular choice was “somewhat disagree” (6.6%).

Collapsing “strongly” and “somewhat” (see chart below), we found that four percentage points more Americans disagree with Trump’s Executive Order (43%) than agree with it (39%). A sizeable number (18%) indicated they aren’t sure/don’t know.

[Chart: collapsed agree/disagree/not sure results]
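For anyone who wants to reproduce the collapse, here is a minimal sketch in Python; the responses and the treatment of the scale midpoint are illustrative assumptions, not our actual data file:

```python
from collections import Counter

# Hypothetical responses on the five-point agreement scale
# (1 = strongly disagree ... 5 = strongly agree); None stands in
# for respondents who chose "not sure / don't know".
responses = [1, 5, 4, 1, None, 2, 5, 1, 3, 4, None, 1]

def collapse(r):
    """Collapse 'strongly' and 'somewhat' into two boxes."""
    if r is None or r == 3:  # assumption: midpoint treated as undecided
        return "not sure / don't know"
    return "disagree" if r <= 2 else "agree"

counts = Counter(collapse(r) for r in responses)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n / total:.0%}")
```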

Will It Unstructure? – A Text Analytics Poll™

Next, we asked another 1,500 respondents from the same U.S. online nationally representative source* EXACTLY the same question, but instead of providing choices for them to select from, we asked them to reply in their own words in an open-ended comment box.

We ran the resulting comments through OdinText, with the following initial results:

[Chart: OdinText results for the open-ended question]

As you can see, the results from the unstructured responses were remarkably close to those from the structured question. In fact, the open-ended responses suggest Americans are even closer to evenly divided on the issue, though slightly more disagree (a statistically significant difference given the sample size).
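As a rough illustration of that significance check (not OdinText’s internal test), one could run a one-sample proportion test on the respondents who took a side; the counts below are stand-ins reconstructed from the rounded percentages, not our raw tallies:

```python
from statsmodels.stats.proportion import proportions_ztest

# Stand-in counts: with ~1,500 open ends and roughly 43% disagreeing
# vs. 39% agreeing, about 1,230 respondents took a side, ~645 of them
# on the disagree side.
disagree, decided = 645, 1230

# Test whether the disagree share among decided respondents exceeds 50%.
z, p = proportions_ztest(count=disagree, nobs=decided,
                         value=0.5, alternative="larger")
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 supports the difference
```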

This, however, is where the similarities between unstructured and structured data end.

While there is nothing more to be done with the Likert scale data, the unstructured question data analysis has just begun…

Low-Incidence Insights are Hardly Incidental

It’s worth noting here that OdinText was able to identify and quantify many important but low-incidence insights, positive and negative, that would have been treated as outliers in a limited code frame and dismissed by human coders (a simple tagging sketch follows the list):

  • “Embarrassment/Shame” (0.2%)

  • “Just Temporary” (0.5%)

  • “Un-American” (0.9%)

  • “Just Certain/Specific Countries” (0.9%)

  • “Unconstitutional/Illegal” (2%)

  • “Not a Muslim Ban/Stop Calling it that” (2.9%)
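OdinText’s theme detection is proprietary, but the general idea of tagging and quantifying low-incidence themes can be sketched with simple keyword matching; the patterns and comments below are illustrative inventions, not OdinText’s actual rules or our data:

```python
import re

# Illustrative theme patterns; a production system would use far richer rules.
themes = {
    "Un-American": re.compile(r"\bun-?american\b", re.I),
    "Unconstitutional/Illegal": re.compile(r"\b(unconstitutional|illegal)\b", re.I),
    "Just Temporary": re.compile(r"\btemporar(y|ily)\b", re.I),
    "Not a Muslim Ban": re.compile(r"\bnot a muslim ban\b", re.I),
}

comments = [
    "This is un-American and unconstitutional.",
    "It's only temporary, so I'm fine with it.",
    "It is not a Muslim ban, stop calling it that.",
]

# Share of comments mentioning each theme, however rare.
for name, pattern in themes.items():
    hits = sum(bool(pattern.search(c)) for c in comments)
    print(f"{name}: {hits / len(comments):.1%}")
```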

An Emotionally-Charged Policy

[Chart: emotional sentiment analysis of the open-ended responses]

It shouldn’t come as a surprise to anyone that emotions around this particular policy run exceptionally high.

OdinText quickly quantified the emotions expressed in people’s comments, and you can see that while there certainly is a lot of anger—negative comments are spread across anger, fear/anxiety and sadness—there is also a significant amount of joy.

What the heck does “joy” entail, you ask? It means that enough people expressed unbridled enthusiasm for the policy along the lines of, “I love it!” or “It’s about time!” or “Finally, a president who makes good on his campaign promises!”
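How OdinText scores these emotions is likewise proprietary; purely for illustration, a toy lexicon-based approach to emotion quantification might look like the following (the word lists are invented for the example):

```python
# Toy emotion lexicon; real emotion models are far more sophisticated.
lexicon = {
    "joy": {"love", "great", "finally", "happy"},
    "anger": {"outraged", "disgusting", "hate"},
    "fear/anxiety": {"afraid", "scared", "worried", "fear"},
    "sadness": {"sad", "ashamed", "heartbroken"},
}

def score_emotions(text):
    """Count lexicon hits per emotion in a single comment."""
    words = set(text.lower().replace("!", " ").replace(".", " ").split())
    return {emotion: len(words & vocab) for emotion, vocab in lexicon.items()}

print(score_emotions("I love it! Finally a president who keeps his promises."))
# -> {'joy': 2, 'anger': 0, 'fear/anxiety': 0, 'sadness': 0}
```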

Understanding the Why Behind People’s Positions

Last, but certainly not least, asking the same question in an open-ended format where respondents can reply in their own words enables us to also understand why people feel the way they do.

We can then quantify those sentiments using text analytics and see the results in context in a way that would not have been possible using a multiple-choice format.

Here are a few examples from those who disagree with the order:

  • “Just plain wrong. It scored points with his base, but it made all Americans look heartless and xenophobic in the eyes of the world.”

  • “Absolutely and unequivocally unconstitutional. The foundation, literally the reason the first European settlers came to this land, was to escape religious persecution.”

  • “I don’t like and it was poorly thought out. I understand the need for vetting, but this was an absolute mess.”

  • “I think it is an overly confident action that will do more harm than good.”

  • “I understand that Trump’s intentions mean well, but his order is just discriminating. I fear that war is among us, and although I try my best to stay neutral, it’s difficult to support his actions.”

Here are a few from those who agree:

  • “I feel it could have been handled better but I agree. Let’s make sure they are here documented correctly and backgrounds thoroughly checked.”

  • “I feel sometimes things need to be done to demonstrate seriousness. I do feel bad for the law abiding that it affects.”

  • “Initially I thought it was ridiculous, but after researching the facts associated with it, I’m fine with it. Trump campaigned on increasing security, so it shouldn’t be a surprise. I think it is reasonable to take a period of time to standardize and enforce the vetting process.”

  • “I feel that it is not a bad idea. The only part that concerns me is taking away from living the American Dream for those that aren’t terrorists.”

  • “good but needed more explanation”

  • “OK with it – waiting to see how it pans out over the next few weeks”

  • “I think it is good, as long as it is temporary so that we can better vet those who would come to the U.S.”

And, just as importantly yet often overlooked, a few from those who aren’t completely sure:

  • “not my circus”

  • “While the thought is good and just for our safety, the implementation was flawed, much like communism.”

Final Thoughts: What Have We Learned?

First of all, we saw that the results in the open-ended format replicated those of the structured question. With a total sample of 3,000, these results are statistically significant.

Second, we found that while emotions run high for people on both sides of this issue, comments from those who disagree with the ban tended to be more emotionally charged than those from people who agreed with it. I would add here that some of the former group tended not to distinguish between their feelings about President Trump and their feelings about the policy.

We also discovered that supporters of the ban appear to be better informed about the specifics of the order than those who oppose it. In fact, a significant number of the former group in their responses took the time to explain why referring to the order as “a Muslim ban” is inaccurate and how this misconception clouds the issue.

Lastly, we found that both supporters and detractors are concerned about the order’s implementation.

Let me know what you think. I’d be happy to dig into this data a bit more. In addition, if anyone is curious and would like to do a follow-up analysis, please contact me to discuss the raw data file.

@TomHCAnderson

P.S. Stay tuned for Part II of this study, where we’ll explore what the rest of the world thinks about the order!

*Note: Responses (n=3,000) were collected via Google Surveys. Google Surveys allows researchers to reach a validated (U.S. general population representative) sample by intercepting people attempting to access high-quality online content—such as news, entertainment and reference sites—or who have downloaded the Google Opinion Rewards mobile app. These users answer up to 10 questions in exchange for access to the content or Google Play credit. Google provides additional respondent information across a variety of variables, including source/publisher category, gender, age, geography, urban density, income, parental status and response time, as well as Google-calculated weighting. Results are accurate to within +/- 1.79% at the 95% confidence level.
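For the curious, the quoted +/- 1.79% follows from the standard margin-of-error formula for a proportion at the worst case p = 0.5, assuming simple random sampling; a quick check:

```python
import math

n = 3000   # total respondents across both halves of the study
z = 1.96   # z-score for a 95% confidence level
p = 0.5    # worst-case proportion maximizes the margin

margin = z * math.sqrt(p * (1 - p) / n)
print(f"+/- {margin:.2%}")  # +/- 1.79%
```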

Responses

  1. Very interesting research, Tom. I like the linkage of pure CE and OE responses.

  2. Very interesting research! I would be hesitant to say either group was more informed about the contents of the EO than the other. That may be your opinion showing rather than objective research. I say this for two reasons:
    1) To say one side was more informed implies not only that their comments contained more information but that the information has been verified as factual. There is a chance either or both groups are simply parroting arguments they have heard for their particular stance (whether fact-based or not).
    2) The phrasing of the question predisposes those for the EO to defend their position. Reason being, you stated the phrase “Muslim Ban” at the end of the question. That phrase is generally only used by those who are against the EO. Hearing/seeing that phrase could bias the answers provided by someone who is in agreement with the EO, as they may feel they need to defend their position more than someone who is opposed.

    The way the closed- and open-ended responses have been paired to gain a deeper understanding of the data is phenomenal! Nice work!

    1. I agree completely with Jessica’s perspective. Including “Muslim Ban” in the question could artificially induce pro and con responses just based on those two words, because those two words can be emotionally charged for many. A suggestion: ask the question without those two words. This is consistent with the background description regarding the impact of the wording of the question. I realize the purpose of this test is to demonstrate the insight derived from an open-ended response.

    2. Really great work, Tom. And unique – don’t think anyone else has done this yet. Two observations on the approach:
      1. I’ll echo Jessica on the Muslim Ban issue – the use of that term is a strong indicator of your opinion and thus probably created some bias in this question. I’ll out myself as a strong opponent of the EO and don’t think I agree that the supporters are better informed than the opponents (although those in your survey certainly might break that way). From my view, there was plenty of solid evidence that this was originally planned and then carried out as an action against Muslims at large, so the term seemed pretty accurate to me (and still does).
      2. More of a question, since I have not used it: is there any notion on how representative the people taking Google Surveys are of the larger public? Without any data, I would assume they’d break toward higher education and incomes, even more so than the online population at large.

      Can’t wait to see part 2, and yes, if you are able to share raw data, I’d be very interested to take a look.

      1. @Stefan Nate Silver claimed Google Surveys was one of the top two best-performing data sources he used in making his political predictions. The sample is weighted by Google to be U.S. online nationally representative, so no, it should not skew any specific way.

  3. Informative, and very well laid-out set of results. It will be interesting to see what happens as challenges move through the legal system and how this impacts public perception.

  4. Great post, Tom. The media were all hypothesizing the “why,” but OdinText provided the foundation that clarified that many of those suppositions were unfounded.

  5. Very interesting, Tom, and the results plant you smack in the middle between the two sets of skewed pro and con polls you cited. Great value from the unstructured data; my takeaway is that there are a lot of stated reservations from the ‘pro’ group. Agreed that the pro group responses are well thought out and less emotional/knee-jerk than the con group. But the potential for many of these pros to turn con is high.

  6. To me, this work solidifies what good researchers are SUPPOSED to know about survey research: 1) that it can be highly reliable and replicable if done properly, 2) that the answers you receive will always depend on the questions you ask and the scales you use (for example, I wonder if some of the 18% who were not sure would have come down on one side or another if a 7-point scale were used), 3) there is always some level of interpretation of the data that needs to be done (in this case, your assertion that the “pro” votes tended to be more informed), and 4) that interpretation is subject to disagreement (e.g. the assertion in the comments section that using the words “Muslim Ban” caused some bias in the responses). Unfortunately, the inevitable “conflicting polls” (which in the big picture don’t really conflict that much, clearly showing a divided nation) get painted by pundits as an example that polls can’t be trusted. They can, as long as you know how much and what to trust.
    Would be interested in some subgroup breakdowns, if possible – liberal/conservative, GOP/Dem, ages, gender, etc.

  7. I think this was great information. I would have rather seen the question completely unbiased. Adding the “Muslim ban” at the end feels leading. The actual executive order makes no mention of religion, specifically. I often wonder how much the media’s addition of “muslim ban” has been the trigger to incite such immediate, intense reactions in the country’s (and the world’s) perception. It’s still good to see how people are responding and what their thoughts are beyond the scaled responses. If you look at those alone it seems that everyone is generally in agreement.

  8. This is a great piece of work. The direct comparison of structured to unstructured illustrates the validity of text analysis and clearly shows its added value (though I agree about the potentially inflammatory nature of “Muslim ban”).

  9. Great work, Tom. The different outcomes based on the scaling used, the wording of the questions and the analytic approaches in your article reinforce the fact that any piece of research, as objective as you might think it is, can give you any answer that you want. Survey research is an imperfect science subject to the whims of the research manager, resulting in findings that can be manipulated purposefully or accidentally. And my comments don’t even include the vagaries of sampling methodologies and how respondent qualification is determined.
    I would encourage all who read this comment to find a copy of Stanley Payne’s book “The Art of Asking Questions,” which I believe is one of the hallmark books on the principles of marketing research. Having said all this, I am more confident in the OdinText analytics because they provide a more accurate and richer assessment of consumer sentiment and help identify the grey areas of what consumers are really saying.

  10. Thank you all for the great comments. I want to address the question of whether “Muslim Ban” made this a leading question, as it is one I had expected.
    The reason I chose to word it that way was that I did not want any confusion between this EO and the 8 or so EOs Trump had already issued. No one knows an EO by number, so I included the official title of the EO as well as the most popular name the media was using to describe it at the time of fielding.
    In the past 3 days or so I’ve seen the media start changing how they refer to it to “travel ban,” etc. Because awareness of the EO was already so extremely high, I really don’t think including those two words had much impact, and the clarity gained was worth it.
    In fact, when I indicated that some of the “pro” responses seemed a little more informed, what I meant, and perhaps I didn’t state it clearly enough, is that those ‘for’ were more likely to cite specifics about the language in the EO and call out details such as “Just Certain/Specific Countries” (0.9%), “Not a Muslim Ban/Stop Calling it that” (2.9%), etc.
    My guess is that Fox has probably been calling attention to these details while other networks focused on just the “Muslim Ban” angle. And it wasn’t just this; the pro comments touched on several subtopics, while those against were more emotional, with vehement attacks against Trump.
    In the end, we have a very polarized country, and it doesn’t matter what we call it, I’m afraid.
    Planning a follow-up post later this week with a bit more…

  11. Thanks for sharing, Tom. Did you also capture who they voted for in the presidential election? I suspect their views would split in line with who they voted for.

  12. Excellent piece of work. I was thinking that current polling approaches tend to be used as a stick to beat people with, instead of a flashlight that provides insight. Presented as “stick,” poll results tend to exacerbate divides, and make those who disagree with you look like ill-informed idiots.
    By letting people use their natural voice, we see that there is a lot more nuance on both sides. And perhaps the basis of disagreement is smaller than thought as well.

    If we have learned anything from this US election, it should be that there was a widespread failure to listen to people.

    Thanks for adding something of real substance to the discussion.

  13. Larry, a client of mine once declared to me that “it is impossible for anyone to be objective.” I had never heard that position, and had no response. As time goes by, I have a deepening appreciation for his declaration.
    I could blather on, but will exercise atypical restraint in this instance.
