31 January 2014

Reduce GHGs or Increase Energy Access?

At the Center for Global Development Todd Moss and Ben Leo have a provocative analysis up about an obscure but consequential decision facing the US Congress. It has to do with a little-known US government agency called the Overseas Private Investment Corporation. Moss and Leo explain:
President Obama’s Power Africa initiative, launched in June 2013, aims to increase electricity generation and access to modern energy services in six low-income countries. The success or failure of this effort will be determined in large part by the investment decisions of a dozen or so US government agencies that may be operating under potentially conflicting mandates. The Overseas Private Investment Corporation (OPIC), the main US development finance institution, will play a central role. How it selects projects will affect outcomes in Africa for the Power Africa initiative and OPIC’s activities in other low-income countries.
What is the issue here? In a nutshell, it is whether OPIC will be allowed by the US Congress and the Obama Administration to invest in fossil energy in low-income countries. Moss and Leo explain:
There has been a general bias toward using OPIC to invest principally in solar, wind, and other low-emissions energy projects as part of the administration’s effort to promote clean energy technology. An explicit policy capping the total greenhouse gas emissions in OPIC’s overall portfolio has further pushed the organization’s investments heavily toward renewables. Indeed, over the past five years, OPIC has invested in more than 40 new energy projects and all but two (in Jordan and Togo) are in renewables. 
The graph at the top of this post illustrates the trade-offs here. They are stark and consequential. The CGD analysis shows that a $10 billion OPIC portfolio focused 100% on off-grid renewables would provide energy access to 70 million fewer people than if that portfolio were 100% natural gas. The graph shows how a mix of renewables and gas translates into energy access; a rough sketch of the arithmetic follows.
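To make the arithmetic concrete, here is a minimal sketch. The people-per-dollar figures are illustrative placeholders I am supplying, not CGD's estimates; they are chosen only so that the gap between an all-gas and an all-renewables $10 billion portfolio comes out at roughly 70 million people, as in the CGD analysis.

```python
# Illustrative sketch of how a portfolio mix maps to energy access.
# The people-per-billion figures are hypothetical placeholders, not the
# CGD estimates; only the ~70 million gap matches the analysis cited above.

PORTFOLIO_BILLIONS = 10
PEOPLE_PER_BILLION_GAS = 9_000_000          # placeholder
PEOPLE_PER_BILLION_RENEWABLES = 2_000_000   # placeholder

def people_with_access(share_gas: float) -> float:
    """Energy access from a $10B portfolio split between gas and renewables."""
    gas_dollars = PORTFOLIO_BILLIONS * share_gas
    ren_dollars = PORTFOLIO_BILLIONS * (1 - share_gas)
    return (gas_dollars * PEOPLE_PER_BILLION_GAS
            + ren_dollars * PEOPLE_PER_BILLION_RENEWABLES)

for share in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{share:.0%} gas -> {people_with_access(share)/1e6:.0f} million people")
```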

There are three positions one might take with respect to the GHG vs. energy access trade-offs involved in OPIC decision making.

1. Preventing GHG emissions is more important than securing energy access for poor people.
2. Securing energy access for poor people is more important than preventing GHG emissions.
3. The trade-off is an illusion as both goals can be achieved at the same time.

The first two positions are legitimate and defensible. The third is not; it is, however, a convenient refuge for those who wish to avoid the uncomfortable nature of the trade-off.

Here is an additional bit of context. The US consumes a massive amount of natural gas. It is not even worth comparing US consumption to that of the six poor countries of Power Africa. However, here is an interesting bit of trivia: The amount of natural gas flared in the US (that is, wasted), is equal to the total combined consumption of Yemen, Tanzania, Ghana, Angola, Mozambique, Kyrgyzstan, Cameroon, Afghanistan, New Guinea, Gabon and Senegal (sources: here and here).

A debate over OPIC is one worth having: Should US greenhouse gas policy extend to trading off energy access for preventing emissions?

It is a simple choice, and one with enormous consequences. And make no mistake, a choice will be made.

As I wrote in The Climate Fix, the only way to turn this trade-off into a win-win situation is via a long-term commitment to energy innovation that makes clean energy cheaper. In the near term, it is simply immoral to ask the poor to make energy access sacrifices while we consume massive amounts of energy, based almost entirely on fossil fuels. Climate policy should not be used to keep poor people poor.

I'd love to hear the counter argument. Any takers?

30 January 2014

Science and Politics with Henry Waxman

Congressman Henry Waxman (D-CA) announced today that he will not be running for re-election in 2014. He has been a true role model as a public servant and member of Congress. The nation is better for his service. I wish him all the best in what comes next for him. 

I have had several opportunities to present Congressional testimony before Mr. Waxman. My favorite exchange occurred in 2007 when I was testifying before the House Committee on Oversight and Government Reform on the subject of the science shenanigans of the Bush Administration.

My testimony was focused not on the partisan fights of the day, but rather on the underlying institutional and political dynamics that lend themselves to the abuse and misuse of science -- dynamics which are common across the political spectrum. Of course, back in 2007 a leading meme was that Republicans abused science and Democrats did not. Today, the notion that scientific integrity is a bipartisan challenge is much more widely accepted.

In my testimony, I was critical of the Bush Administration but I also illustrated the challenges of cherry picking science to support a political agenda with the pre-hearing memo that Mr. Waxman and colleagues had put together. In that memo they had selected a few studies on hurricanes and climate change to make claims which were out of step with the scientific consensus which existed at that time (and which has only gotten stronger since). The memo claimed that increasing hurricane impacts could be attributed to "global warming" and utterly ignored a recent consensus statement of experts from the WMO saying that such attribution was not possible. As readers know it is an issue I've been writing about for a long time.

So I wrote (here in PDF) the following in my testimony:
A memorandum providing background to this hearing prepared 26 January 2007 by the majority staff of the House Committee on Government Reform and Oversight illustrates the cherry picking of science. Cherry picking literally means “take the best, leave the rest.” The memorandum states, quite correctly, that “a consensus has emerged on the basic science of global warming.” It goes on to assert that:
“. . . recently published studies have suggested that the impacts [of global warming] include increases in the intensity of hurricanes and tropical storms, increases in wildfires, and loss of wildlife, such as polar bears and walruses.” 
To support its claim of increasing intensities of hurricanes and tropical storms the memorandum cites three papers. What the memorandum does not relate is that authors of each of the three cited studies recently participated with about 120 experts from around the world to prepare a consensus statement under the auspices of the World Meteorological Organization which concluded:
“The possibility that greenhouse gas induced global warming may have already caused a substantial increase in some tropical cyclone indices has been raised (e.g. Mann and Emanuel, 2006), but no consensus has been reached on this issue.” 
With respect to two of the three papers cited in the memorandum, referring to possible trends in tropical cyclone intensities, the WMO statement concluded the subject “is still hotly debated” and “for which we can provide no definitive conclusion.” The WMO Statement was also recently endorsed by the Executive Council of the American Meteorological Society. The hearing background memorandum is absolutely correct when it asserts that “recently published studies have suggested that the impacts [of global warming] include increases in the intensity of hurricanes and tropical storms.” But this selective reporting does not tell the whole story either. Such cherry picking and misrepresentations of science are endemic in political discussions involving science.
Several members of the House Committee did not like the fact that I was accusing them of cherry picking science -- that was something the Bush Administration did, not them!

The task of taking me on was initially given by Mr. Waxman, the committee chair, to Representative Peter Welch (D-VT), who at that time was the lowest-ranking Democrat on the Committee. Mr. Welch took issue with my testimony, explaining that they had contacted Judy Curry and Michael Mann to get their input on my claims (the transcript is here in PDF):
I noticed in your written testimony, you made a claim that the memo that was prepared by the committee staff for this hearing is "exactly the same sort of thing that we have seen with heavy-handed Bush administration information strategies," and I take the charge that you make very seriously. You are, if I understand it, essentially accusing the committee of the conduct that it is investigating.

You took specific offense with the memo’s discussion of the state of science regarding the connections between global warming and hurricanes, where the memo notes, recently published studies have suggested that the impacts of global warming include increases in the intensity of hurricanes and tropical storms.

So, taking this seriously, we asked the committee staff to contact these leading researchers to followup to see if there is anything we should be concerned with in that memo. Dr. Judith Curry, as you know, a leading researcher, told us that all the research scientists working in the area of hurricanes agree that average hurricane intensity will increase with increasing tropical sea surface temperature. Theory, models, observations all support this increase. She tells us that the recent research indicates an impact of global warming is more intense hurricanes. The current debate and lack of consensus is about the magnitude, she says, of the increased intensity, not its existence.

Dr. Michael Mann, also a prominent researcher, tells us that in his view, you have misinterpreted the WMO report in arguing that it somehow contradicts information provided in the scientific background of the hearing memo that you had a chance to review. He says, the current state of play with the science on this is accurately summarized in the hearing memo. . .

In light of today’s testimony and the information provided to the committee by Drs. Curry and Mann, is it still your belief that the committee’s hearing memo is, "exactly the same sort of thing" the Bush administration has done?
In my response I explained to Mr. Welch that Curry's comments were off point, as there is an important distinction between what has been observed and attributed and what is projected for the distant future:
I will stand by exactly what I said, and I am happy to talk about the science and impacts of hurricanes as long as you would like because it is an area I have been researching for about 15 years. The memo includes the statement, "recently published studies have suggested that the impacts of global warming include increases," and it cites three papers that look retrospectively back in time. So it is not talking about projections in the future. So the statement by Dr. Judy Curry who is a great scientist, who I have a lot of respect for, isn’t on point here.
I then compared the cherry picking of hurricane studies to the cherry picking of a study (Soon/Baliunas) contrary to the so-called "hockey stick" which had been discussed earlier in the hearing. My point was that if one is going to argue from the authority of scientific consensus, then it doesn't look good to say that you accept the consensus when you like it and reject it when you don't.

I explained:
Now I am not a climate scientist and just like I accept the consensus of the IPCC, I am compelled to accept the consensus of the hurricane community. Now it is very easy to pick out a Soon and Baliunas paper or selectively email a scientist and say, what is your view?

I respect Dr. Mann and Dr. Curry have their views about what the statement says, but I am absolutely 100 percent certain that the statement that is in your background memo does not faithfully represent the science. It selects among the science perspectives, and that is inevitable, and we have to recognize that, and no one is immune from it. It doesn’t excuse the Bush administration from their actions, of course, but let us not pretend that somehow we can separate out scientific truth from political preferences. The reality is they are always going to be intermixed.
That went on for a bit, and then Mr. Waxman came to the rescue:
Mr. WAXMAN. Mr. Welch, will you yield to me?

Mr. WELCH. I yield to the chairman, yes. Thank you.

Mr. WAXMAN. Doctor, you are a doctor, but you are not a scientist. You are a political scientist.

Mr. PIELKE. I am a political scientist. That is accurate.
Boom! Authority card played. It gets better. Mr. Waxman turns to Drew Shindell, a NASA climate scientist, one who has never to my knowledge done any work on hurricanes.
Mr. WAXMAN. And you [Dr. Pielke] said you are absolutely certain that you are right on this issue and that Dr. Curry and Dr. Mann are wrong in their statement. Isn’t that quite a statement for you to make? No scientist here has been willing to make any statement that there is absolute certainty because the process of science continues to evaluate things. Dr. Shindell, you are familiar with Dr. Curry and Dr. Mann, is that correct? Dr. Shindell, are you familiar with those two?

Mr. SHINDELL. Yes.

Mr. WAXMAN. Are they somewhat isolated in the field with their own theories at odds with the majority of scientists?

Mr. SHINDELL. No. They are quite within the mainstream.

Mr. WAXMAN. In fact, isn’t Dr. Mann one of the leading scientists in global warming issues.

Mr. SHINDELL. Yes. Yes, he is.

Mr. WAXMAN. And Dr. Curry as well?

Mr. SHINDELL. Yes.

Mr. WAXMAN. So I am just wondering whether we should believe them or the certainty of Dr. Pielke that they are wrong.
Man, Mr. Waxman is good. Very good. And Shindell was playing the part of straight man to perfection.

I asked for a chance to clarify:
Mr. PIELKE. Let me clarify again. I did not say that they are wrong. I said that their views are not consistent with the mainstream consensus in the community. I am 100 percent sure of that statement.

Mr. WAXMAN. Do you know whether that is true, Dr. Shindell?
Drew Shindell obviously did not want to be in the middle of all this so he said some inscrutable stuff that gave the appearance of siding with Mr. Waxman, but which really supported the substance of my claim of cherry picking.
Mr. SHINDELL. I believe that their views are consistent with the mainstream consensus, and I think that we are having a slight semantic argument over what the mainstream consensus is. Is it that hurricanes have increased in severity in the past? Will they increase in the future? I think it is an interesting issue, this one, because unlike some of the other aspects of global warming that are better understood, there is some legitimate controversy, and so it can lead to these kinds of discussions. . . 
Having stepped in to save the discussion from my trouble-making, Mr. Waxman turned things back to Mr. Welch:
Mr. WELCH. What I want to know, after we have been through this, is this, are you standing by your position that this memo that cites mainstream science is exactly the same kind of conduct as what we have heard occurred in the Bush administration where there was direct interference with independent conclusions reached by scientists following the scientific method?

Mr. PIELKE. I will repeat exactly what I said in my written testimony. In microcosm, this shows how in political settings, which the preparation of Government reports is, how easy, enticing it is to selectively present scientific results to buttress a political perspective.
What did I learn?
  • Complex issues like the role of science in politics are not easily discussed in a partisan hearing format. It is much better to discuss empirical data or policy options. You can see this in the framework of The Honest Broker.
  • Members of Congress who cherry pick expertise (which is to say most every member of both parties) always have the trump card in such discussions as they can always find another expert whose views they like better.
  • This hearing (earlier parts of which appeared in The Honest Broker) helped me to clarify my thinking about "stealth advocacy" and how easy it is to be forced into such a role in political settings. 
  • Henry Waxman is one bad ass dude. Politics ain't beanbag.
I do wonder, however, after 7 years have gone by, does Mr. Waxman still like the scientific views of Judy Curry? Does he still think that hurricane impacts are a sound example of the consequences of human-caused global warming?

Good luck to Mr. Waxman, and thanks for the lesson in politics. I did (and do) appreciate it!

27 January 2014

Poverty in America in Global Context

The graph above comes from the work of Branko Milanovic of the World Bank. It shows income distributions by "ventile" (that is, 5-percentile bins) for the US, Brazil, China and India. Here is how the New York Times describes the graph:
The graph shows inequality within a country, in the context of inequality around the world. It can take a few minutes to get your bearings with this chart, but trust me, it’s worth it.

Here the population of each country is divided into 20 equally-sized income groups, ranked by their household per-capita income. These are called “ventiles,” as you can see on the horizontal axis, and each “ventile” translates to a cluster of five percentiles.

The household income numbers are all converted into international dollars adjusted for equal purchasing power, since the cost of goods varies from country to country. In other words, the chart adjusts for the cost of living in different countries, so we are looking at consistent living standards worldwide.

Now on the vertical axis, you can see where any given ventile from any country falls when compared to the entire population of the world.
So what does the graph show?
Notice how the entire line for the United States resides in the top portion of the graph? That’s because the entire country is relatively rich. In fact, America’s bottom ventile is still richer than most of the world: That is, the typical person in the bottom 5 percent of the American income distribution is still richer than 68 percent of the world’s inhabitants.

Now check out the line for India. India’s poorest ventile corresponds with the 4th poorest percentile worldwide. And its richest? The 68th percentile. Yes, that’s right: America’s poorest are, as a group, about as rich as India’s richest.

Kind of blows your mind, right?
So according to Milanovic, the poorest 5% in the United States, as a group, are richer than 68% of the world's population. They are about as rich as the richest in India, and richer than about 80% of Chinese and 50% of Brazilians.

Presenting such data is not to minimize relative poverty or inequality in the United States, but rather to make the point that a discussion of poverty or inequality in the US is a very different conversation than a discussion of poverty or inequality at the global scale.

24 January 2014

Questions from Congress, Part 2: Responses to Rep. Suzanne Bonamici

U.S. HOUSE OF REPRESENTATIVES
COMMITTEE ON SCIENCE, SPACE, AND TECHNOLOGY
Subcommittee on Environment

RESPONSES OF ROGER PIELKE, JR. TO
Hearing Questions for the Record
The Honorable Suzanne Bonamici

A Factual Look at the Relationship between Climate and Weather

Dr. Pielke

1. In your testimony you acknowledge that anthropogenic climate change is a real phenomenon with real consequences for the climate. You also condemn “activists, politicians, journalists, corporate and government agency representatives and even scientists” for making claims about climate change being a contributing factor to extreme weather events for which there is not strong evidence. You said that “such claims could undermine the credibility of arguments for action on climate change…” There is nothing in your testimony that similarly condemns “activists, politicians, journalists, corporate and government agency representatives and even scientists” who deny that climate change is happening at all or that there are anthropogenic causes for climate change. Could their claims also be undermining the credibility of arguments for action on climate change? Why or why not?
PIELKE RESPONSE: There are at least three reasons why those who “deny that climate change is happening at all or that there are anthropogenic causes for climate change” are largely irrelevant and thus a distraction. First, while there are people who “deny that climate change is happening at all or that there are anthropogenic causes for climate change,” most of those who identify themselves as opposed to action on climate change admit the reality of climate change and even a human role, but take issue with its significance in the context of the actions that are often proposed in response. Second, as I document in my book The Climate Fix (2010), public opinion on the reality of climate change, a human role in it and the importance of action has been remarkably strong for many decades. Public opinion varies, often with the weather, but there is nothing unique about public opinion on climate change that would suggest it as an obstacle to action. History shows many important issues with much less public support for which action was taken. Third, those calling for action on climate change often ground their arguments in claims of scientific authority. To the extent that such claims are shown to be overstated or just wrong – as is often the case with respect to extreme events – the resulting loss of credibility will be disproportionately larger than for those who start from a minority position or on the fringes of science. I elaborated on these arguments in a recent essay for The Guardian.
2. You make a second claim regarding the pernicious results of overstated connections between extreme weather events and climate change: that such “false claims confuse those who make decisions related to extreme events, and could lead to poor decision-making.” Please provide an example of where this has happened as well as the resulting consequences.
PIELKE RESPONSE: In 2007, Working Group II of the Intergovernmental Panel on Climate Change included a graph in its report which showed an apparent correlation between increasing global temperatures and the global costs of disasters. This graph was included in violation of the IPCC’s guidelines, as it had never appeared in any scientific study. It was created by an IPCC author, an employee of a catastrophe modeling firm called RMS, because he expected that it would show up in a future study. In order to get the graph into the report the author intentionally mis-cited it to a separate non-peer reviewed white paper that he had co-authored (ironically, as a contribution to a workshop that I organized.) However, that graph did not ever appear in that future study and the IPCC author later admitted that its inclusion was a mistake (This episode is detailed in The Climate Fix).

At the same time, the company that employed this IPCC author had made a dramatic change to its estimates of hurricane incidence in the United States. In 2011 the Sarasota Herald-Tribune was awarded a Pulitzer Prize for its investigative reporting of what came next. Here is an excerpt from that prize-winning reporting:
RMS, a multimillion-dollar company that helps insurers estimate hurricane losses and other risks, brought four hand-picked scientists together in a Bermuda hotel room.

There, on a Saturday in October 2005, the company gathered the justification it needed to rewrite hurricane risk. Instead of using 120 years of history to calculate the average number of storms each year, RMS used the scientists' work as the basis for a new crystal ball, a computer model that would estimate storms for the next five years.

The change created an $82 billion gap between the money insurers had and what they needed, a hole they spent the next five years trying to fill with rate increases and policy cancellations.

RMS said the change that drove Florida property insurance bills to record highs was based on "scientific consensus."

The reality was quite different.

Today, two of the four scientists present that day no longer support the hurricane estimates they helped generate. Neither do two other scientists involved in later revisions. One says that monkeys could do as well.

In the rush to deploy a new, higher number, they say, the industry skipped the rigors of scientific method. It ignored contradictory evidence and dissent, and created penalties for those who did not do likewise. The industry flouted regulators who called the work biased, the methods ungrounded and the new computer model illegal.

Florida homeowners would have paid more even without RMS' new model. Katrina convinced the industry that hurricanes were getting bigger and more frequent. But it was RMS that first put a number to the increased danger and came up with a model to justify it.

As a result of RMS' changes, the cost to insure a home in parts of Florida hit world-record levels.

It turns out, since RMS issued its forecast of enhanced hurricane activity, the United States has not been struck by a Category 3 or stronger hurricane, marking the longest such stretch going back to at least 1900. The new estimates proved wildly overstated.
For its part, RMS today views the science of hurricanes quite differently: “warmer atmospheric conditions may act to reduce the likelihood of hurricane landfalls along the Atlantic Coast due to stronger atmospheric winds blowing west to east during hurricane season, effectively pushing storms away from the U.S.”

Meanwhile, overblown claims of a sudden change in US hurricane risk led to dramatic increases in insurance costs for Florida residents. The Sarasota Herald-Tribune explains:
For most of the past two decades, risk models have relied on actual hurricane activity recorded over more than 100 years to produce averages and other estimates of storm formation.

But even before Katrina, RMS was under pressure to disband the long-term outlook. Insurance insiders wanted something they believed would be more accurate. And they wanted it to forecast hurricane activity for next few years based on current conditions, not simply assume history would repeat itself.

The pressure came from several places. Some reinsurers sought validation that global warming was increasing the threat of hurricanes. Others in the industry wanted a short-term model to encourage investors, who wanted odds on their returns in the near term.

[RMS CEO Hemant] Shah says he had an obligation to pursue the short-term model because of the belief that hurricanes had gotten more dangerous.
The overstatement of the connection between climate change and extreme events can sometimes just be a bit of political hyperbole intended to add intensity to support for climate policies. But such overstatement can also have consequences. In this well-documented case the overstatement resulted in the transfer of tens of billions of dollars from Florida citizens to reinsurance companies based on flawed estimates of hurricane risk.

This experience is not unique. A just-released scientific paper written by an all-star team of researchers (involved with the IPCC) concludes: “There is such a furore of concern about the linkage between greenhouse forcing and floods that it causes society to lose focus on the things we already know for certain about floods and how to mitigate and adapt to them.”
3. As I understand it, rather than trying to control carbon emissions, you advocate an expansion of alternative energy sources to serve both economic demands and environmental needs. What policies and programs would you advocate in order to expand alternative energy sources? What level of funding in the U.S. would be needed to carry this forward? How would you recommend structuring this approach to obtain the broadest support from energy sectors and to minimize opposition from fossil fuel industries? Would this approach be adequate to slow the release of carbon emissions and reduce the inevitable changes that result from those releases?
PIELKE RESPONSE: Thanks for this question. Just about everyone recognizes that developing the energy resources for the future will require innovation. The conventional view has been that putting a price on carbon – via a substantial tax or cap-and-trade program – would provide businesses and consumers with incentives to invest more in energy innovation. However, the fatal flaw in this perspective is that efforts to raise the costs of energy have their political limits, such as observed in Europe just this week, as the EU has stepped back from aggressive and costly energy policies in order to shore up the continent’s competitiveness. I, along with many colleagues, have argued that instead of focusing primarily on making dirty energy expensive, we should focus to a greater degree on making clean energy cheap.

A greater commitment to public sector innovation might be supported with a low carbon tax (How low? At whatever level is politically acceptable). Consider that a $5 per ton tax on carbon dioxide would add about $0.04 to the price of a gallon of gas and raise about $30 billion per year in the US (Pielke 2010). To put this into context, the Department of Energy will spend $2.4 billion on energy R&D programs in FY 2014.
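The arithmetic behind those two figures is straightforward. Here is a back-of-envelope check using approximate inputs that I am supplying (roughly 8.9 kg of CO2 per gallon of gasoline and US energy-related CO2 emissions on the order of 5 to 6 billion tonnes per year); these are not figures from the post or from Pielke (2010).

```python
# Back-of-envelope check of the $5/ton CO2 numbers cited in the text.
# Input values are rough approximations supplied for illustration.

TAX_PER_TON = 5.0            # dollars per tonne of CO2
CO2_PER_GALLON_KG = 8.9      # approx. kg CO2 from burning a gallon of gasoline
US_CO2_TONNES = 5.5e9        # approx. annual US energy-related CO2 (tonnes)

added_cost_per_gallon = TAX_PER_TON * CO2_PER_GALLON_KG / 1000.0
annual_revenue = TAX_PER_TON * US_CO2_TONNES

print(f"~${added_cost_per_gallon:.2f} added per gallon of gas")   # ~$0.04
print(f"~${annual_revenue / 1e9:.0f} billion raised per year")     # roughly $30B
```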

As the United States has learned from its experiences with shale gas and shale oil, innovation that lowers energy costs confers substantial economic and competitiveness benefits. Such innovation requires partnerships between the public and private sectors, and often a very long lead time – the key innovations underpinning shale gas and oil technologies were decades in the making.

The world will continue to demand more and more energy. Whatever one thinks about climate change it is in the interests of the United States to be at the forefront of energy innovation for decades to come. We should think hard about how we might bring greater resources to meeting the challenges and opportunities posed by energy demands of a growing world. Building a bridge to that energy future by placing a small tax or fee on today’s energy makes good sense. These ideas are discussed in greater depth in The Hartwell Paper and elsewhere.
4. In your “Truth in Testimony” statement, you acknowledge receiving more than $12.7 million in grant support from NSF—all but $39,435 of that came from social and behavioral/economic accounts. What projects and publications resulted from this funding? Did any of that funding contribute to work that you testified about in this hearing?
PIELKE RESPONSE: The total reported in the “Truth in Testimony” statement is actually $2.8 million. Most of that funding supported a project called “Science Policy Assessment and Research on Climate” (SPARC) which was funded under the NSF competition on Decision Making Under Uncertainty (the other two listed projects were science policy-related and did not focus in any way on climate). SPARC “conducts research and assessments, outreach, and education aimed at helping climate science policies better support climate-related decision making in the face of fundamental and often irreducible uncertainties.” That project, now completed, resulted in hundreds of publications, several of which were cited in my testimony. A comprehensive account of that project and the work which it did can be found at: http://cstpr.colorado.edu/sparc/.

Questions from Congress, Part 1: Responses to Representative Lamar Smith

U.S. HOUSE OF REPRESENTATIVES
COMMITTEE ON SCIENCE, SPACE, AND TECHNOLOGY
Subcommittee on Environment

RESPONSES OF ROGER PIELKE, JR. TO
Hearing Questions for the Record
The Honorable Lamar Smith

A Factual Look at the Relationship between Climate and Weather

Dr. Pielke

1. Everyone from the UN Intergovernmental Panel on Climate Change to the President to the journal Nature have admitted it is very difficult to attribute specific weather events to climate change. However, Dr. Titley and Dr. James Hansen have argued that man-made climate change has resulted in the deck being stacked toward more extreme weather events generally.

a. Is this characterization correct? Is there a detectable signal that these events have been made more likely over the scale of decades?
PIELKE RESPONSE: Debate over the influence of human-caused climate change on extreme events often conflates expectations for the future with observations of the past. The scientific literature, as assessed by the Intergovernmental Panel on Climate Change, does include projections for some types of extreme events to become more frequent and/or more intense. At the same time, as I summarized in my testimony, there is very limited evidence to support claims that such increases in frequency and/or intensity have been observed in most types of extremes – notably, the incidence and impacts of tropical cyclones (hurricanes), floods, drought, tornadoes and winter storms. Given a set of projections for changes in frequency/intensity of particular extreme events, it is a mathematical exercise to calculate when such changes might be detected in the observational record. As I detailed in my testimony, such detection may lie in the distant future. Consequently, that no such signal is detected today is consistent with long-term projections. That is, we should not expect to see changes in most types of extremes at the present time, and this is indeed what the data show.
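To give a flavor of what such a "mathematical exercise" can look like, here is a generic, hedged sketch (not the calculation referenced in the testimony). It simulates an annual extreme-event index with an assumed small upward trend and realistic year-to-year noise, and asks how many years of data are needed before a simple linear-trend test detects the trend most of the time. All the numbers are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical numbers chosen for illustration only: an index with mean 100,
# interannual standard deviation 15, and a projected trend of +0.5 per year.
MEAN, SIGMA, TREND = 100.0, 15.0, 0.5
N_SIMS, ALPHA, TARGET_POWER = 2000, 0.05, 0.8

def detection_probability(n_years: int) -> float:
    """Fraction of simulated records in which a positive trend is significant."""
    years = np.arange(n_years)
    hits = 0
    for _ in range(N_SIMS):
        series = MEAN + TREND * years + rng.normal(0.0, SIGMA, n_years)
        fit = stats.linregress(years, series)
        hits += (fit.slope > 0) and (fit.pvalue < ALPHA)
    return hits / N_SIMS

for n in range(10, 101, 10):
    p = detection_probability(n)
    print(f"{n:3d} years of data -> detection probability {p:.2f}")
    if p >= TARGET_POWER:
        break
```

With these placeholder settings the trend only becomes reliably detectable after several decades of observations, which is the general point being made about projected changes in extremes.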
2. In light of the last decade and a half of global temperatures not rising, have we learned anything new about the relationship between temperature and extreme weather events?

a. In light of this “pause,” how can the President and others continue to project with medium and high confidence that certain extremes may get worse?
PIELKE RESPONSE: While the issue of the so-called “pause” in global temperature increases has attracted considerable attention, it is not of direct relevance to understanding either the historical patterns of most extreme events or long-term projections for their future evolution.
3. Dr. Titley’s testimony cites a single study regarding one climate model about tropical cyclone activity in the 21st century when making the claim that “our future may include more intense, and possibly more frequent storms.”

a. How is this claim consistent with the IPCC’s recently-revised projection that there is “low confidence” in any increase in intense tropical cyclone activity through 2050?
PIELKE RESPONSE: Dr. Titley is correct to say that “our future may include more intense, and possibly more frequent storms.” However, it would also be correct to say that “our future may include less intense, and possibly less frequent storms.” Looking across studies, rather than at any single study, the IPCC concludes, “there is low confidence in region-specific projections of frequency and intensity.”
4. Has man-made climate change contributed to increased intensity or frequency of wildfires in the U.S., as the President has indicated?
PIELKE RESPONSE: The IPCC AR5 does not detect or attribute a linkage between human-caused climate change and wildfire intensity or frequency. However, there is ample literature to suggest that such a connection is plausible. The many factors which influence wildfire incidence, many of which are related to human activities, make the detection and attribution of signals difficult.
5. Were the recent floods in Colorado driven by man-made climate change?
PIELKE RESPONSE: Attribution of causality to human-caused climate change for single events remains a much debated topic and of questionable scientific value. Flooding in the US Southwest, including Colorado, has decreased on climate time-scales (Hirsch and Ryberg 2012).
6. Is there evidence that recent historic droughts in Texas have been driven by man-made climate change?
PIELKE RESPONSE: You will find conflicting claims on Texas drought in the literature. A NOAA report does not find strong evidence for such a linkage. Another recent study is suggestive of a linkage. Attribution of causality to human-caused climate change for single events remains a much debated topic and of questionable scientific value. Drought in the US, as I documented in my testimony, has not increased nation-wide or globally on climate timescales.
7. Were there any aspects of testimony received during the hearing regarding extreme weather events and climate change that you would like to elaborate on further?
PIELKE RESPONSE: No. I am quite satisfied with the breadth of information that I shared in my testimony.
8. Were there any aspects of testimony received during the hearing regarding extreme weather and climate change that you disagree with? If so, please elaborate.
PIELKE RESPONSE: No.

22 January 2014

Europe's New Emissions Goals

The European Commission has recommended a set of new goals for its climate and energy policies. The centerpiece of the proposed policy is a 40% reduction target for each member nation from 1990 levels. The BBC reports:
Climate commissioner Connie Hedegaard said that, given the economic climate, the 40% target was a significant advance.

"A 40% emissions reduction is the most cost-effective target for the EU and it takes account of our global responsibility," she said.

"If all other regions were equally ambitious about tackling climate change, the world would be in significantly better shape."

Officials emphasised that the 40% target would have to be achieved "through domestic measures alone", meaning that member states couldn't offset their reductions by paying for carbon cutting in other countries.
The graph at the top of this post, which I earlier shared via Twitter and which is of the sort I have shown previously, shows the implications of a 40% emissions target for the proportion of carbon-free energy in Germany in 2030. The graph assumes (a) that carbon-free energy replaces the most carbon-intensive sources (i.e., coal), (b) that nuclear is phased out and (c) that energy demand remains constant at 2012 levels to 2030. Such assumptions could of course be varied; the data come from the BP 2013 Statistical Review of World Energy.
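For readers who want to see the mechanics, here is a minimal sketch of the kind of calculation behind such a graph, under assumptions (a)-(c). All of the numbers below are rough, illustrative placeholders that I am supplying; they are not the BP data used for the actual figure, so the printed share should not be read as the value shown in the graph.

```python
# Rough, illustrative placeholder numbers (not the BP dataset behind the
# actual graph). Units: million tonnes of oil equivalent (Mtoe) and Mt CO2.

demand_2012 = {"oil": 111.0, "gas": 68.0, "coal": 79.0,
               "nuclear": 22.0, "hydro": 5.0, "renewables": 26.0}
co2_per_mtoe = {"coal": 3.95, "oil": 3.1, "gas": 2.35}   # approx. factors
emissions_1990 = 1000.0      # placeholder for 1990 energy-related CO2 (Mt)

total_demand = sum(demand_2012.values())     # assumption (c): flat demand
target = 0.6 * emissions_1990                # 40% below 1990 levels

# Assumption (b): nuclear is phased out, so its output must be replaced by
# new carbon-free supply without yielding any emissions reduction.
new_carbon_free = demand_2012["nuclear"]
fossil = {f: demand_2012[f] for f in ("coal", "gas", "oil")}
needed_cut = sum(fossil[f] * co2_per_mtoe[f] for f in fossil) - target

# Assumption (a): new carbon-free supply displaces the most carbon-intensive
# fuel first (coal), then gas, then oil, until the target is met.
for fuel in ("coal", "gas", "oil"):
    if needed_cut <= 0:
        break
    displaced = min(fossil[fuel], needed_cut / co2_per_mtoe[fuel])
    fossil[fuel] -= displaced
    needed_cut -= displaced * co2_per_mtoe[fuel]
    new_carbon_free += displaced

carbon_free_2030 = (demand_2012["hydro"] + demand_2012["renewables"]
                    + new_carbon_free)
print(f"Implied carbon-free share of primary energy in 2030: "
      f"{carbon_free_2030 / total_demand:.0%}")
```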

The graph indicates that achieving a non-nuclear, renewables-based energy system in Germany, while also reducing emissions to 40% below 1990 levels, will remain a formidable challenge. For now, Germany is building more coal plants and moving away from that 2030 target.

21 January 2014

When Scientific Integrity and Institutional Values Collide

There is quite a spectacle unfolding at the University of North Carolina. One political scientist says of his university's reaction to some unwelcome and uncomfortable research findings:
“What I see unfortunately so far is a strategy of denial.”
What is this all about? Head over to The Least Thing for details. It's the same old story, but in a very interesting context.

14 January 2014

Is Our Economic Ignorance Increasing?

At the Financial Times John Authers writes ($) of the graph shown above:
Compared to expectations pre-crisis, Richard Dobbs of the McKinsey Global Institute suggests companies simply expect less growth than they once did.

More technically, Mr Dobbs points to a steady rise in Solow’s Residual. Named for Robert Solow, the Nobel laureate economist, it refers to the proportion of growth that cannot be accounted for by extra labour or extra capital. From 1920 to 1950, this figure was about 33 per cent. Now, according to McKinsey, the Solow Residual has risen to 50 per cent.
What is the "Solow Residual"? It is a stylized fact (or maybe just a myth) that the Solow Residual is, as stated in the title to the graph shown above, the "contribution of technology change to world output growth."

Actually, as Konstantin Kakaes explained at Slate a few years ago, the Solow Residual does not actually refer to technological change:
Robert Solow, winner of the 1987 Nobel Memorial Prize in Economic Sciences, is famous for, in the recent words of a high-ranking State Department official, “showing that technological innovation was responsible for over 80 percent of economic growth in the United States between 1909 and 1949.” . . . Typically, technical or technological progress isn’t explicitly defined by those invoking Solow, but people take it to mean new gadgets.

However, Solow meant something much broader. On the first page of “Technical Change and the Aggregate Production Function,” the second of his two major papers, he wrote: “I am using the phrase ‘technical change’ as a shorthand expression for any kind of shift in the production function. Thus slowdowns, speedups, improvements in the education of the labor force, and all sorts of things will appear as ‘technical change.’ ” But his willfully inclusive definition tends to be forgotten.

Solow was constructing a simple mathematical model of how economic growth takes place. On one side was output. On the other side was capital and labor. Classical economists going back to Adam Smith and David Ricardo had defined the “production function”—how much stuff you got out of the economy—in terms of capital and labor (as well as land). Solow’s point was that other factors besides capital, labor, and land were important. But he knew his limitations: He wasn’t clear on what those factors were. This is why he defined “technical change” as any kind of shift (the italics are his) in the production function. He wasn’t proving that technology was important, as economists in recent years have taken to saying he did. All Solow was saying is that the sources of economic growth are poorly understood.
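In standard growth-accounting terms (a textbook sketch assuming a Cobb-Douglas production function with capital share α, rather than anything specific to the McKinsey figures), the residual is simply whatever output growth is left after the measured contributions of capital and labor are subtracted:

$$Y = A\,K^{\alpha}L^{1-\alpha} \quad\Longrightarrow\quad \frac{\dot{A}}{A} \;=\; \frac{\dot{Y}}{Y} \;-\; \alpha\,\frac{\dot{K}}{K} \;-\; (1-\alpha)\,\frac{\dot{L}}{L},$$

where the left-hand side, the growth of "total factor productivity" A, is the Solow Residual.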
Writing in 1956 about the role of productivity growth in the economy unexplained by labor and capital, Moses Abramovitz lamented the existence of the unexplained residual factor, of which he said (here in PDF):
[It] should be, in a sense, sobering, if not discouraging, to students of economic growth. Since we know little about the causes of productivity increase, the indicated importance of this element may be taken to be some sort of measure of our ignorance about the causes of economic growth in the United States.
If the Solow Residual reflects a "measure of our ignorance" about economic growth, what does it say that it is growing in magnitude?  Is our ignorance getting larger as well?

13 January 2014

Exceptional Talent and US Immigration

Issues related to immigration and citizenship have long been debated in the United States, and are reemerging as a political issue, with calls for reform coming from both Republicans and Democrats.

President Obama says that "the US immigration system is broken ... there are 11 million people living in the shadows." One consequence of the broken immigration system can be seen in US soccer, where certain immigrants to the United States are deemed ineligible to represent Team USA, despite meeting FIFA criteria for eligibility. This article explains this situation and recommends several alternative ways forward to better align the intent of FIFA regulations with their implementation in a US context by US Soccer.
To read the rest, go here. Comments welcomed.

09 January 2014

Was the "War on Poverty" a Success? Yes.

The graph above comes from a new paper by Christopher Wimer and colleagues at Columbia University (available here in PDF). The paper helps to address what is a surprisingly difficult question to answer: How has poverty changed over time?

Wimer et al. explain:
Poverty measures set a poverty line or threshold and then evaluate resources against that threshold. The official poverty measure is flawed on both counts: it uses thresholds that are outdated and are not adjusted appropriately for the needs of different types of individuals and households; and it uses an incomplete measure of resources which fails to take into account the full range of income and expenses that individuals and households have. Because of these (and other) failings, statistics using the official poverty measure do not provide an accurate picture of poverty or the role of government policies in combating poverty.
A parallel might be to use the number of people who visit the doctor for flu shots as a measure of people who are not vaccinated against the flu. Of course the reason that people come into the doctor is to get a flu shot, so using such a metric as a measure of non-vaccination would be highly misleading.

Researchers and the government are well aware of the problems with and limitations inherent in the official poverty rate, and they have developed alternative measures. Wimer and colleagues have taken one of those alternatives -- the supplemental poverty measure (SPM), which is calculated by the US Census Bureau -- and used it as the basis for measuring poverty back in time. Think of their methodology as analogous to our normalized hurricane damage work -- it allows a better look at trends. The approach is simple and intuitive.

The supplemental rate can be a bit deceiving, as in 2012 it looks a lot like the official poverty rate (OPR) -- the former was 16% in 2012 and the latter 15.1% -- ho, hum, so what? The "so what" is that the SPM has only been measured for a few years, so until now it has not been useful as a measure of trends. Wimer et al. use the SPM, "anchored" to its 2012 values, and then ask what the poverty rate would have been using this measure going back in time to 1967, the earliest year for which data are available.
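Here is a toy sketch of the anchoring idea, with made-up household numbers and approximate CPI-U values that I am supplying (Wimer et al. of course use Census microdata and the full SPM resource definition): carry the 2012 threshold back in time using only inflation, then count how many households fall below it in each year.

```python
# A toy version of the anchoring approach, with made-up household resource
# values and approximate CPI-U figures; not the data Wimer et al. use.

cpi_u = {1967: 33.4, 2012: 229.6}          # approximate annual averages
spm_threshold_2012 = 25_000                 # placeholder threshold (dollars)

def anchored_threshold(year):
    """Carry the 2012 SPM threshold back in time using only inflation."""
    return spm_threshold_2012 * cpi_u[year] / cpi_u[2012]

def poverty_rate(resources, year):
    """Share of household resource values below the anchored threshold."""
    t = anchored_threshold(year)
    return sum(r < t for r in resources) / len(resources)

# Hypothetical post-tax, post-transfer household resources for two years:
resources_1967 = [2_000, 2_800, 3_500, 4_200, 6_000, 9_000]
resources_2012 = [18_000, 22_000, 27_000, 35_000, 52_000, 80_000]

print(f"1967 anchored-SPM rate: {poverty_rate(resources_1967, 1967):.0%}")
print(f"2012 anchored-SPM rate: {poverty_rate(resources_2012, 2012):.0%}")
```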

What they find is shown in the graph at the top of this post, which shows change in the poverty rate (i.e., the proportion of people whose incomes fall beneath the anchored-SPM poverty threshold). The overall poverty rate fell by almost 40% from 1967 to 2012. The poverty rate for children fell by a similar amount, for those of working age the rate fell by 23%, and for the elderly by a remarkable 78%.

Whatever one thinks about the desirability of a "war on poverty" or the way that it has been implemented ideologically or politically, we should all be able to agree that the incidence of poverty -- as measured by the SPM -- has dropped dramatically since the 1960s.  A major explanation for the drop is government programs focused on the poor, as documented by Wimer et al..

Even though there remains considerable inequality and outright poverty (still 16% in 2012), as well as important debates on what "poverty" actually means, we can also look at the numbers and conclude that the "war on poverty" has been a success.

To read more, the Center on Budget and Policy Priorities has a nice discussion of the paper here and The Washington Post here.

08 January 2014

A Different Look at the Income of the 99%

Research by Emmanuel Saez and Thomas Piketty is fundamental to debates over inequality. They have documented that a larger share of US income has been going to the top 10% -- and especially the top 1% -- of all earners. Their data is shown in the graph at the top of this post.

Here is how the New York Times recently characterized their work:
The top 1 percent took more than one-fifth of the income earned by Americans, one of the highest levels on record since 1913, when the government instituted an income tax.

The figures underscore that even after the recession the country remains in a new Gilded Age, with income as concentrated as it was in the years that preceded the Depression of the 1930s, if not more so.

High stock prices, rising home values and surging corporate profits have buoyed the recovery-era incomes of the most affluent Americans, with the incomes of the rest still weighed down by high unemployment and stagnant wages for many blue- and white-collar workers.
I was curious about how their data looked in absolute rather than relative terms -- after all the magnitude of incomes and number of earners both change dramatically over time.  So I have conducted an empirical thought experiment. Let's pretend that the top 1% does not exist. (But in case you are curious, in 2012 the bottom 99% had a collective income of $6.85 trillion, not too shabby. In 2012 the top 1% had a collective income of $2 trillion or the entire income of the 100% in 1956. Wow.)  But for analytical purposes, let us strike them and their income from the data. What then does the income of the bottom 99% look like over time?

The answer can be found in the graph below (using the wonderfully user-friendly dataset made available by Emmanuel Saez -- source here in PDF, hereafter SP13. Note I show their data without capital gains; its inclusion does not affect this analysis.):
The graph shows that the total income of the 99% grew by about a factor of 14 since 1913 (accounting for inflation), whereas the number of earners among the 99% grew by only a factor of 4. From 1913 until World War II the total wages of the 99% grew at the same rate as the growth in the size of the workforce (using the SP13 definition). From WWII until about 1970 income of the 99% grew at a remarkable 3 times the rate of growth in the workforce, after which the ratio remained constant for a decade. Then around 1980, the ratio jumped around, exhibiting a modest increase over the past 30 years, mainly due to a jump in the 1990s.
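The underlying calculation is just a ratio of two indexed series. A toy sketch with round placeholder numbers (not the actual SP13 values) shows the construction:

```python
# Toy illustration of the ratio discussed above: total real income of the
# bottom 99% and the number of tax units, each indexed to 1913 = 1.0.
# The values are round placeholders, not the SP13 data.

income_index = {1913: 1.0, 1940: 2.0, 1970: 8.0, 2012: 14.0}
tax_units_index = {1913: 1.0, 1940: 1.5, 1970: 2.4, 2012: 4.0}

for year in income_index:
    ratio = income_index[year] / tax_units_index[year]
    print(f"{year}: income x{income_index[year]:.0f}, "
          f"workforce x{tax_units_index[year]:.1f}, ratio {ratio:.1f}")
```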

The inflection points match up nicely with the inflection points in the SP13 look at relative income (shown at the top of this post) associated with the proportion of income going to the top 10% and 1%.

The graph below shows the same data for 1980 to 2012.
This look at the data suggests that it may be incorrect to claim that greater income inequality is causing the 99% to fall behind in an absolute sense. Their income is keeping pace with, or doing a bit better than, changes in the size of the workforce. At the same time a case can be made that income inequality is keeping the 99% from doing better than they might -- although causality is of course at the center of much debate.

Clearly, with respect to total income growth in the US there was something quite different about 1940-1970 than the rest of the period. Was that demographics? Policy? Changing composition of the labor force? Technology? Put another way, was the mid-century growth in wages an anomaly or should it be expected as business-as-usual?

Comments welcomed.

07 January 2014

What is the Official Poverty Rate?

The notion of a “poverty rate” is pretty simple: come up with a definition of “poverty,” determine a quantitative measure that corresponds to that definition, and then count up the number of people who are in “poverty” under that measure.

Being in poverty is of course a subjective determination which can be defined in many different ways. This post focuses on the “poverty rate” as defined by the Census Bureau of the US government. The official US poverty rate is the focus of much informed public discussion. For instance, the New York Times referred to the “poverty rate” 248 times over the past 12 months, about 5 times per week. This post describes the poverty rate as background for a bit of analysis to come on this blog.

The Census Bureau describes the poverty rate as follows:
Following the Office of Management and Budget's (OMB) Statistical Policy Directive 14, the Census Bureau uses a set of money income thresholds that vary by family size and composition to determine who is in poverty. If a family's total income is less than the family's threshold, then that family and every individual in it is considered in poverty. The official poverty thresholds do not vary geographically, but they are updated for inflation using Consumer Price Index (CPI-U). The official poverty definition uses money income before taxes and does not include capital gains or noncash benefits (such as public housing, Medicaid, and food stamps).
There is in fact not a single official “poverty rate” but 48 different official poverty rates that are calculated based on family size and age.  For instance, in 2013 an individual would officially be in poverty with an income of $11,490, which works out to $32.11 per day. For comparison, someone working at the US federal minimum wage (assuming 2,000 hours work in a year) would make $14,500.  In its discussions of global poverty, the UN often uses a threshold of $1.00 per day (and sometimes $1.25 or $2.00). In 2012, 46.5 million Americans (15%) were determined to be in poverty under the official threshold.
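As a minimal sketch of the determination rule described in the Census passage above (using only the one-person 2013 figure cited in this post; the actual matrix has 48 threshold cells varying by family size, number of children and age, which are omitted here):

```python
# Sketch of the official rule: a family (and everyone in it) is "in poverty"
# if its pre-tax money income falls below the threshold for its size and
# composition. Only the one-person 2013 figure cited above is included.

THRESHOLDS = {1: 11_490}   # dollars per year, single individual (2013)

def in_poverty(family_money_income: float, family_size: int) -> bool:
    return family_money_income < THRESHOLDS[family_size]

print(in_poverty(10_000, 1))   # True
print(in_poverty(14_500, 1))   # False: full-time federal minimum wage
```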

Let’s dive a little deeper. What is meant by “income”? The Census Bureau explains:
Includes earnings, unemployment compensation, workers' compensation, Social Security, Supplemental Security Income, public assistance, veterans' payments, survivor benefits, pension or retirement income, interest, dividends, rents, royalties, income from estates, trusts, educational assistance, alimony, child support, assistance from outside the household, and other miscellaneous sources.
There are also some very important clarifications about what is not counted as income:

  • Noncash benefits (such as food stamps and housing subsidies) do not count.
  • Before taxes.
  • Excludes capital gains or losses.
  • If a person lives with a family, add up the income of all family members. (Non-relatives, such as housemates, do not count.)

Immediately we can begin to see some problems with the official poverty threshold in that it fails to include non-cash benefits, which are of course fundamental to many government programs that target low-income households. Of course, because many of these programs determine eligibility based on pre-assistance income, they necessitate leaving such benefits out. Nonetheless, the resulting picture of poverty is thus biased. These shortfalls are well understood and the subject of some interesting new research (to be discussed soon).

Yet, there is a deeper problem with the official poverty thresholds. We might ask why the poverty rate (for an individual in this case) was set at $11,720 in 2013. Why is that number the demarcation of being "in poverty"?

The answer is pretty remarkable.

The official poverty rate was initially developed in the early 1960s, based on a long history of research and debates over the measurement of poverty (this paper by Gordon M. Fisher provides a detailed look at that history). The more proximate history of the early 1960s involved several analyses by Mollie Orshansky (see this in PDF) of the Social Security Administration which arrived at a quantification of a poverty threshold. Lyndon Johnson declared a “war on poverty” soon after Mollie Orshansky published her first paper on poverty thresholds.

Orshansky came up with a threshold based on how much it cost to feed a family (this history is detailed in this paper in PDF by Fisher). Quantification of how much it cost to feed a family was based on a survey done by the US Department of Agriculture, initially in 1933 and extended in 1961, based on data from 1955, to include an “economy food plan.” Orshansky assumed that a family would spend one third of its income on food. The poverty threshold was then determined to be the point at which family income was low enough that the amount spent on food equaled the amount deemed necessary under the “economy food plan,” calculated based on food prices of 1964.

At that point Orshansky assumed (as quoted in Fisher):
“the housewife will be a careful shopper, a skillful cook, and a good manager who will prepare all the family’s meals at home.”
Fisher explains further of the assumptions:
Orshansky made the assumption that, at that point, the family’s nonfood expenditures would also be minimal but adequate, and established that level of total expenditures as the poverty threshold for a family of that size. Since the family’s food expenditures would still be one-third of its total expenditures, this meant that (for families of three or more persons) the poverty threshold for a family of a particular size and composition was set at three times the cost of the economy food plan (or the low-cost food plan) for such a family. The factor of three by which the food plan cost was multiplied became known as the “multiplier.”

It is important to note that Orshansky’s “multiplier” methodology for deriving the thresholds was normative, not empirical -- that is, it was based on a normative assumption involving (1955) consumption patterns of the population as a whole, and not on the empirical consumption behavior of lower-income groups.
Since the introduction of the original poverty thresholds used by the US government (which were formalized in 1969), they have only been adjusted for inflation based on the Consumer Price Index (it's true, I did the math). In other words, to be in poverty in the United States means that your income is less than three times the amount determined necessary to purchase an adequate diet, as assessed in 1955. Today, that same multiplier would be closer to 8, as food represents a much smaller share of consumer spending.
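The arithmetic of the multiplier is simply the inverse of the assumed food share of household spending:

$$\text{poverty threshold} = \text{annual food-plan cost} \times \underbrace{\frac{1}{\text{food share of spending}}}_{\text{multiplier}}, \qquad \frac{1}{1/3} = 3 \ \text{(1955 assumption)}, \qquad \frac{1}{\approx 1/8} \approx 8 \ \text{(today)}.$$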

If that seems to you to be odd or dated, then you’ll be in good company. In 1995 the National Research Council prepared a major report (Measuring Poverty) on the poverty thresholds and found them to be outdated and no longer reflective of the conditions they were first proposed to represent.

The NRC explained:
Overall, except for the minor changes in the number of different thresholds and the change in the price index for updating them, the poverty line has not been altered since it was first adopted in 1965. In the language of poverty measurement, the United States has an "absolute" poverty threshold that is updated for price changes but not for real growth in consumption. Thus, the poverty line no longer represents the concept on which it was originally based—namely, food times a food share multiplier—because that share will change (and has changed) with rising living standards. Rather, the poverty threshold reflects in today's dollars the line that was set some 30 years ago. . .

We find that the current official poverty measure has a number of weaknesses, involving both the thresholds and the definition of family resources. (Some of these problems were pointed out in the 1960s by Orshansky herself.) Although they were not necessarily important or obvious at the time the measure was adopted, these problems have become more evident and more consequential because of far-reaching social and economic changes, as well as changes in public policy, that have occurred since the 1950s and 1960s. These changes involve labor force participation, family composition, geographic price differences, growth in medical care costs and benefits, government taxation, the provision of in-kind benefits to families and individuals, and the overall increase in the standard of living.
In theory, a poverty threshold can be set at whatever level, based on whatever arbitrary criteria one chooses. In practice, however, the US government’s official poverty measure (as utilized by federal agencies in decision making) has significant social and economic consequences.

The NRC explained in 1995:
The U.S. measure of poverty is an important social indicator that affects not only public perceptions of well-being in America, but also public policies and programs. The current measure was originally developed in the early 1960s as an indicator of the number and proportion of people with inadequate family incomes for needed consumption of food and other goods and services. At that time, the poverty "line" for a family of four had broad support. Since then, the poverty measure has been widely used for policy formation, program administration, analytical research, and general public understanding.
Poverty experts are of course aware of these challenges, and alternatives to the poverty thresholds currently in use have been proposed. However, much of the public debate about government assistance and wealth inequality is grounded in numbers anchored to an archaic notion of poverty.

One quick way to improve such debates would be to do as the UN does and discuss poverty in terms of income measured in dollars per day. Instead of talking about a percentage below a "poverty threshold" whose provenance dates to the 1950s, we would discuss the number of people below a daily income threshold -- which in this case is $32.11 per day (recognizing that the official numbers do not fully represent what poor people actually live on, as mentioned above).
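For the record, the per-day figure is just the annual threshold spread over the year; a one-line check using the $11,720 individual threshold cited above:

annual_threshold = 11720                 # 2013 threshold for an individual
print(round(annual_threshold / 365, 2))  # 32.11 dollars per day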

Of course this would lead to questions such as: what is special about $32.11 per day? And what is the full income distribution of Americans from top to bottom? The answer to both questions sets up a more informed discussion of poverty and inequality than one based on the very dated statistics of the official poverty rate.

06 January 2014

What Was the "War on Poverty"?

This week marks the 50th anniversary of President Lyndon Johnson declaring a "war on poverty." The NYT reports:
Half a century after Mr. Johnson’s now-famed State of the Union address, the debate over the government’s role in creating opportunity and ending deprivation has flared anew, with inequality as acute as it was in the Roaring Twenties and the ranks of the poor and near-poor at record highs. 
Here is what President Johnson said in that inaugural address:

This administration today, here and now, declares unconditional war on poverty in America. I urge this Congress and all Americans to join with me in that effort.

It will not be a short or easy struggle, no single weapon or strategy will suffice, but we shall not rest until that war is won. The richest Nation on earth can afford to win it. We cannot afford to lose it. One thousand dollars invested in salvaging an unemployable youth today can return $40,000 or more in his lifetime.

Poverty is a national problem, requiring improved national organization and support. But this attack, to be effective, must also be organized at the State and the local level and must be supported and directed by State and local efforts.

For the war against poverty will not be won here in Washington. It must be won in the field, in every private home, in every public office, from the courthouse to the White House.

The program I shall propose will emphasize this cooperative approach to help that one-fifth of all American families with incomes too small to even meet their basic needs. 
In subsequent posts I'll present some data relevant to evaluating the effectiveness of the "war on poverty." As a first step in thinking about policy evaluation, it is important to be precise about what is being evaluated. The CEPR blog has a very useful discussion of what, exactly, might be meant by the "war on poverty":
In an excellent new Russell Sage Foundation book on the war on poverty, Martha Bailey and Sheldon Danziger take a more historically informed approach, defining the war on poverty as:
"The full legislative agenda laid out in the 1964 State of the Union and in the eleven goals contained in chapter 2 of the 1964 Economic Report of the President, titled 'Strategy against Poverty'...  These goals include maintaining high employment, accelerating economic growth, fighting discrimination, improving regional economies, rehabilitating urban and rural communities, improving labor markets, expanding educational opportunities, enlarging opportunities for youth, improving the Nation’s health, promoting adult education and training, and assisting the aged and disabled."
By their definition, the war on poverty includes 16 major pieces of legislation passed during the Johnson administration between 1964 and 1968 (for a list, see their very helpful Table 1.1 in the first chapter of the book, which is available on line).
In subsequent posts this week I will develop several themes. One is that the "war on poverty" has been a remarkable success, largely due to the government programs initially passed in the 1960s and described above. The second theme is that evaluation of progress with respect to poverty is hampered by a focus on official "poverty rates" produced by the US government. They do more to obscure than clarify.

Both themes are of direct relevance to ongoing debates about economic growth, inequality and the role of government policies in both.

02 January 2014

Global Poverty Rates and Economic Growth

The figure above comes from a recent, excellent paper by Martin Ravallion, The Idea of Antipoverty Policy, which shows a dramatic acceleration in the reduction of global poverty since 1950.

Ravallion makes two observations based on the graph (of which he notes, "Neither observation has been made before to my knowledge"):
The middle of the 20th century saw a marked turning point in progress against poverty globally. Figure 2 plots two series for the $1 a day poverty rate, from Bourguignon and Morrisson (2000) and Shaohua Chen and Ravallion (2010). There is a long list of data problems in these sources and their comparability. However, these are the best estimates we have, and the comparability problems are unlikely to alter two key observations from Figure 2: First, the incidence of extreme poverty in the world is lower now than ever before. While there have been calls to end extreme poverty at various times during the last century or so, they are surely now more credible than ever. Second, the time around 1950 saw a turning point, with significantly faster progress against extreme poverty.
These data are consistent with what I posted recently on the evolution of global income over the same period, which shows rising incomes and (relative to history) greater income equality globally since 1820.

Ravallion adds a little flesh and blood to the interpretation of the graph:
To assess the extent of the break in 1950, suppose that it had not occurred—that the pre-1950 trajectory had been maintained. We would then have expected to find that 36% (standard error =3.3%) of the world’s population lived below $1 a day in 2005, as compared to the Chen and Ravallion estimate of 14%. The difference implies that an extra 1.5 billion people would have lived in poverty by this measure if not for the break in trajectories around 1950.
That is a lot of people. Consider that there were only 2.6 billion people alive in 1950. The graph tells a global story, as Ravallion assumes that "nobody lives below $1 a day in the developed countries after 1980. This is plausible, and is consistent with the Luxembourg Income Study data base (author’s calculations)." The data suggest a secular change in poverty rates in the middle of the 20th century, which long pre-dates the modern era of free(er) trade, technology-based globalization and global anti-poverty awareness.
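Ravallion's counterfactual can be checked with back-of-the-envelope arithmetic. The world-population figure below is my rough approximation (about 6.5 billion in 2005), not a number taken from his paper:

counterfactual_rate = 0.36     # poverty rate in 2005 had the pre-1950 trend continued (Ravallion)
actual_rate = 0.14             # Chen and Ravallion estimate for 2005
world_pop_2005 = 6.5e9         # approximate world population in 2005 (my assumption)
extra_poor = (counterfactual_rate - actual_rate) * world_pop_2005
print(f"{extra_poor / 1e9:.1f} billion")   # roughly 1.4 billion, consistent with the ~1.5 billion quoted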

We can perhaps gain a hint of why poverty reduction accelerated by looking at trend data on the size of the global economy. The dramatic decrease in this metric of poverty has been accompanied by a dramatic acceleration in global economic growth, as shown in the following graph (using data from DeLong 2006, here in PDF).
From 1950 to 2000, global GDP increased by about 800% while population increased by less than 300% (a rough per-capita check follows the quotation below). Growth appears to be a necessary, but not sufficient, condition for reducing poverty. This was the conclusion of a 2011 report by the Brookings Institution, which examined progress in meeting the UN's Millennium Development Goals over the period 2005 to 2015 (here in PDF):
For many years, a debate raged amongst development academics, advocates and policymakers on the role of  growth in poverty reduction and development, with some suggesting issues such as inequality and redistribution merited greater attention. Today, the development community has thankfully largely moved beyond this debate, with a broad consensus rightfully asserting the role of growth at the center of poverty alleviation.

This analytical evolution has happily coincided with a period of rapid economic growth in the developing world, even despite the setback of the Great Recession. The new estimates of global poverty presented in this brief serve as a reminder of just how powerful high growth can be in freeing people from poverty.
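As promised above, here is a rough check of what the quoted growth figures imply for output per person. The only inputs are the 800% and 300% figures from the text:

gdp_factor = 1 + 8.0            # "about 800%" increase means roughly a ninefold rise
pop_factor = 1 + 3.0            # "less than 300%" increase means less than a fourfold rise
print(gdp_factor / pop_factor)  # 2.25 -- so global output per person at least roughly doubled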
Ravallion concurs:
[P]resent day thinking is both more optimistic about the prospects of eliminating absolute poverty through an expanding economy, and more cognizant of the conditionalities in the impact of growth on poverty. Under the right conditions, economic growth can be a powerful force against poverty.
A key question is, what are those right conditions for economic growth to be accompanied by, or even to contribute to, reducing the number of people in poverty? The experiences since 1950 provide some empirical evidence to address that question at the global scale.

To be continued . . .