Bad Chart!

Wednesday, March 13, 2013

Hate Groups in the United States

  
I haven't posted to this blog in a long, long time. The main reason is that I don't read the newspaper cover to cover on the weekends like I used to, so I don't come across many charts.

But then my friend Jason (@JasonRobertC) over at Opensewer tweeted a link to the Southern Poverty Law Center's map of the United States that shows the number of hate groups operating in each state and the District of Columbia. What the map didn't show was how those raw numbers of hate groups related to the population of each state. So I dusted off Excel, got the 2012 census numbers from Wikipedia, plugged in the SPLC's hate group data and then calculated the number of hate groups per 100,000 of population.
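For anyone who wants to reproduce the exercise without dusting off Excel, here is a minimal sketch of the per-capita calculation in Python. The two counts shown are the ones mentioned in this post (DC's 14 groups and Hawaii's zero); the population figures are rough 2012 estimates included only as placeholders, so substitute the full SPLC counts and census numbers for all 51 jurisdictions.

```python
# Sketch of the per-capita calculation: hate groups per 100,000 residents.
# The counts for DC and Hawaii are the ones cited in this post; the population
# figures are approximate 2012 estimates used as placeholders -- swap in the
# real SPLC counts and census numbers for every state plus DC.
hate_groups = {"District of Columbia": 14, "Hawaii": 0}
population = {"District of Columbia": 632_000, "Hawaii": 1_392_000}

rate_per_100k = {
    place: hate_groups[place] / population[place] * 100_000
    for place in hate_groups
}

# Rank from the highest rate down, the same ordering used for the chart.
for place, rate in sorted(rate_per_100k.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{place}: {rate:.2f} hate groups per 100,000 residents")
```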

The results were interesting, but perhaps not very surprising.

A big Aloha goes to Hawaii for having no organized hate groups. Four of the top ten states were in the South (Mississippi, Arkansas, Alabama, and Georgia) and four were from the Plains/Rockies states (Montana, Idaho, South Dakota, and North Dakota). Rounding out the top ten were West Virginia (the South?) and New Jersey.

One big caveat: I removed the District of Columbia from the rankings. DC, with 14 groups, actually ranked the highest, with almost double the number of hate groups per 100,000 of the worst state. However, that reflects the fact that many groups base themselves in DC regardless of where their support lies. Of course there is no way to properly gauge this (or apply it to the states), but in the case of DC, I can guarantee you that the two anti-gay groups and the three white nationalist groups do not reflect the population of the District. I could make arguments about some of the other groups based in DC, but those might be harder to dismiss outright. I recognize that similar arguments might be made for some states. But I still believe that the fact that DC is the seat of the federal government is the sole reason many of these groups locate there, and that their presence is much less likely to reflect the District's population than groups basing themselves in the various states.

(I would have preferred to present a table, but couldn't figure out an effective way to import it into Blogger. I also recognize that this chart may not be perfect...surely you will let me know where I erred.)

Right-click on the chart and choose to open it in a new tab or window to make it bigger and more legible.

Sunday, June 14, 2009

When the Story Doesn't Tell a Story


This little snippet by Phyllis Korkki in today's New York Times is an example of a statistic that doesn't really illustrate the point being made. The gist of the article is that the enclosed, climate-controlled malls that so dominated our collective retailing consciousness in the 1980s are becoming relics. I don't disagree with the basic premise--for years I have been reading that the enclosed shopping center is on its way out, and thank god for that. The problem is the statistic used to prove the case.
"Of the 102,000 shopping centers in the United States, some 99 percent are open-air centers."
The accompanying bar chart more or less illustrates that point--although who wants to wager that the ochre-shaded area at the top of each bar represents only 1 percent of the bar? I wouldn't. The real problem, however, is that the "fact" (99 percent of 102,000 shopping centers are open-air) doesn't tell us anything, because it doesn't provide any kind of comparison to what the case has been historically. For instance, was it 95 percent last year? Was it 90 percent in 2000? The chart seems to indicate that it was not; in fact, it suggests the percentages have remained fairly constant since 2000. So where is the story? Maybe in 1990 the percentage of enclosed malls was 20 percent, not the current 1 percent. Boy howdy, then you would have a story. Or would you? That was 19 years ago, when I was 20 (sigh).

With no historical benchmark, the fact cited is pretty useless. I could also mention that neither the statistic nor the story says anything about what fits into the definition of "shopping center". My guess is that the total number of shopping centers includes all sorts of retail environments that don't necessarily approach the size or regional draw associated with enclosed "giant malls". So maybe the story should not be about enclosure, but rather about shopping center size and retail mix.

However, I shouldn't quibble with the shallow analysis of this USA Today-style factoid. The little snippet falls under the heading "The Count," so I guess it was meant to be a pithy little glimpse of something. Even so, the least the Times could have done was give the story a scintilla of relevance by providing a historical comparison.

Wednesday, May 6, 2009

Here's One I Actually Like


Now that I have determined to blog about my frustration with bad charts in the news media, I don't seem to come across any bad enough to blog about. I don't want to actually go looking for bad ones; that seems a little too grumpy, even for me.

In Saturday's New York Times, I came across a chart that I actually like. I won't speak to the methodology of gathering and interpreting the underlying data that was used to create the chart. (I have no idea whether it was good or not.) But this chart does exactly what a chart is supposed to do. It clearly communicates its point with a minimum of explanation. It speaks for itself.

Saturday, April 25, 2009

A Measure of Confusion


Today's chart is not hall of fame bad, but it does fall into the category of confusing and pretty much useless.

The chart accompanies an article in the New York Times (4/25/09) about the health of regional banks. In trying to convey how healthy various banks are, the Times created a chart that shows the Tangible Common Equity Ratios for 16 banks. The introductory paragraph explains that this ratio measures how leveraged a bank is, which, it says, is a good indicator of financial health.

The explanation for this simple bar chart also indicates that "a ratio of 3 is acceptable". According to the chart, Bank of America has an acceptable ratio of 3. What it doesn't tell us is whether U.S. Bancorp, at 3.2, is better or worse, or whether Wells Fargo, at 2.5, is better or worse.

One thing about a confusing chart is that it can force one to actually read the article. So I did. Unfortunately, the article does nothing to clarify the meaning of the chart; in fact, the text confuses the issue further. It indicates that Goldman Sachs, Morgan Stanley, State Street, Bank of New York, JP Morgan, and US Bancorp are "among those best positioned". Yet their Tangible Common Equity Ratios range from 2.1 to 5.2, within a total range of 1.7 to 6.4.

The article also says that BB&T, PNC, and Wells Fargo "face a coming wave of heavy losses", yet their Tangible Common Equity Ratios are 5.1, 2.3, and 2.5 respectively.

Perhaps the only clue as to which end of the ratio spectrum is the "good" end is the comment that Regions, SunTrust, and KeyCorp are all probably in need of billions in additional capital. Their ratios are lumped near each other at 5.4, 5.7, and 6.1, which suggests to me that the higher the ratio, the worse off the bank.

Still, with all the talk in the media about how bad off Citigroup is, it is hard to understand how it manages to have the lowest ratio of all 16 banks, at only 1.7. Maybe the footnote about Citi's ratio going up once the government's preferred-stock investment is converted to common stock explains it; that conversion would change the ratio drastically and put Citi at the other end of the spectrum.

So, after spending all of this time reading the article, studying the chart, and writing this post, I think I feel comfortable saying that the higher end of the ratio spectrum is bad and the lower end is good. BUT THE POINT IS, A CHART SHOULDN'T REQUIRE THIS MUCH WORK.
 
-t-