29 February 2012

Energy and Global GDP

Ed Morse of Citi writes a sobering commentary on oil prices in the Financial Times, and includes this comment:
The biggest problems, however, appear to lie in the impact of prices on the global economy. From a global perspective, total energy costs are about 10 per cent of global gross domestic product, a level last seen in the late 1970s. In the US, oil costs are above 4.5 per cent of GDP and for the world as a whole oil spending is 5.4 per cent of GDP, both creeping up to the record world level of 7.3 per cent.
Does anyone have a time series of world energy costs vs. GDP?
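One rough way to build such a series is to multiply annual consumption of each fuel by its average price and divide by world GDP. Here is a minimal sketch of that arithmetic, with entirely hypothetical placeholder numbers rather than real data:

```python
# Rough shape of an energy-cost-to-GDP calculation for a single year.
# Every number below is a hypothetical placeholder, not real data.

oil_barrels_per_day = 88e6       # barrels/day (hypothetical)
oil_price = 110.0                # USD per barrel (hypothetical)
gas_tcf_per_year = 115.0         # trillion cubic feet/year (hypothetical)
gas_price_per_mcf = 10.0         # USD per thousand cubic feet (hypothetical)
coal_tonnes_per_year = 7.5e9     # tonnes/year (hypothetical)
coal_price = 100.0               # USD per tonne (hypothetical)
world_gdp = 70e12                # USD (hypothetical)

energy_cost = (
    oil_barrels_per_day * 365 * oil_price
    + gas_tcf_per_year * 1e9 * gas_price_per_mcf   # 1 tcf = 1e9 thousand cubic feet
    + coal_tonnes_per_year * coal_price
)

print(f"Energy cost share of GDP: {energy_cost / world_gdp:.1%}")
```

Repeating the calculation year by year, with the right data, would yield the time series asked for above.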

28 February 2012

Gasoline Intensity of The Economy

UPDATE 3/1: Please also see the follow up post here.

Following up on yesterday's post on US gasoline prices, here is a comparison figure for UK petrol spending as a proportion of GDP over 1991-2010 (the period for which data is available), with petrol data from the UK government's DECC and GDP data from the ONS (conversion factors courtesy of BP).

In this post I introduce a new concept (at least new to me and to Google) -- the gasoline intensity of the economy -- defined as total spending on gasoline as a proportion of total economic activity. (More precisely, the gasoline expenditure intensity of the economy.) Successful innovation in energy will lead to sustainable reductions in the gasoline intensity of the economy. Reduced gasoline intensity means less economic vulnerability to increases in the price of oil and greater efficiency in the use of energy, both of which are desirable outcomes.
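To make the definition concrete, here is a minimal sketch of the calculation, using round hypothetical numbers rather than the EIA/DECC series behind the figures:

```python
# Gasoline expenditure intensity: total spending on gasoline as a share of GDP,
# indexed to a base year. All inputs below are hypothetical, for illustration only.

def gasoline_intensity(gallons, price_per_gallon, gdp):
    """Share of total economic activity spent on gasoline."""
    return gallons * price_per_gallon / gdp

base_year = gasoline_intensity(gallons=1.0e11, price_per_gallon=1.20, gdp=3.5e12)
later_year = gasoline_intensity(gallons=1.4e11, price_per_gallon=2.80, gdp=1.5e13)

index = 100 * later_year / base_year   # base year = 100, as in the figures
print(f"Index relative to base year: {index:.0f}")
```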

A few points to note:
  • The UK spends about 100% more than the US on gasoline as a proportion of GDP (in round numbers, the US is at about 0.3% and the UK 0.6%, for the 2010 data)
  • Assuming UK petrol consumption is constant in 2011 and 2012, and GDP in 2012 = 2011, then the index above increases to 75 in 2011 and 82 in 2012, or about the same as it was in 2001.
  • Thus, since 2001 US spending on gasoline as a proportion of GDP has increased by about 30%, while UK spending is at about the same level
So the bottom line for the US in comparison to the UK is mixed -- the US spends about half as much of its GDP on gasoline as does the UK. At the same time, over the past decade US gasoline intensity has increased by about 30%, while UK gasoline intensity is about the same as it was 10 years ago.

It has been frequently pointed out on this blog that gasoline in the UK and Europe costs as much as twice what it does in the US. No doubt pricing helps to explain the improvement in gasoline intensity in the UK from about 2000 to 2009, and the recent reversal is explained by a combination of economic stagnation (if not contraction) in the UK coupled with increasing oil prices. It is important to observe that increasing oil prices have proportionately less impact in the UK because the relative change in pump prices is smaller, given the much larger starting base. My guess -- and it is only a guess -- is that economic recovery in the UK is likely to be accompanied by a resumption of improvement in the gasoline intensity metric.

The performance of the US with respect to gasoline intensity since 1983 suggests that after about 15 years of improvement, the US has been moving in the wrong direction. This provides some evidence that gasoline prices are not too high, but too low -- by exactly how much we can certainly debate. Those who'd like to argue the other side should address the following: (a) whether you accept that decreasing the gasoline intensity of economic activity is a desired outcome, and if yes, then (b) if pricing is not an appropriate tool to influence this outcome, what you would recommend instead.

26 February 2012

How Economically Significant are US Gasoline Prices?

The figure above shows US spending on gasoline as a proportion of GDP for the period 1983 to 2010, with 1983 set to 100. The data comes from the US EIA (gasoline consumption and prices) and the White House (GDP). The figure shows that in 2010 the total economic cost of gasoline as a proportion of GDP was about half what it was in 1983.

Assuming that gasoline consumption is constant in 2011 and 2012 and prices rise to $4.00 per gallon, with 2.5% annual GDP growth, the index increases in 2012 to 66, or just about the same level as in the lead-up to the 1988 elections.
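For readers who want to reproduce the projection, the arithmetic has the shape sketched below. The 2010 consumption and GDP inputs are placeholders, not the EIA/White House figures; the $4.00 price, flat consumption, and 2.5% growth assumptions are those stated above.

```python
# Back-of-the-envelope 2012 projection of gasoline spending as a share of GDP.
# The 2010 inputs are placeholders; the assumptions (flat consumption,
# $4.00/gal, 2.5% GDP growth) follow the text.

gallons_2010 = 1.38e11                 # annual US gasoline consumption (placeholder)
gdp_2010 = 1.45e13                     # US GDP in dollars (placeholder)

price_2012 = 4.00                      # assumed price per gallon
gdp_2012 = gdp_2010 * 1.025 ** 2       # two years of 2.5% growth

share_2012 = gallons_2010 * price_2012 / gdp_2012
print(f"Projected 2012 gasoline spending share of GDP: {share_2012:.1%}")
```

Dividing the projected share by the 1983 share (not shown) and multiplying by 100 gives the index plotted in the figure.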

While everyone likes lower priced energy, the current price of gasoline does not appear particularly exceptional in recent US economic context. $4/gallon gasoline is not what it used to be.

24 February 2012

Friday Funny ... For Aussie Political Junkies

H/T: Lowy Interpreter

Mark Your Calendars

If you are in Boulder next Friday, March 2, and you'd like to hear about the science and politics of disasters, including how the IPCC got it wrong before it got it right, please come to the talk announced below, which is part of the Geography Department's colloquium series. If you are not interested in the talk or have heard it all before, well then just come for the refreshments!
The Science and the Politics of Disasters and Climate Change 
Friday March 02, 2012. 03:30 pm. IBS Building, room 155, University of Colorado

Roger Pielke, Jr.
University of Colorado, CIRES Fellow, Center for Science and Technology Policy Research, and Environmental Studies Program
Refreshments following lecture on the IBS patio
The talk on March 2 will set the stage for a second talk I will give on April 9 in which I will discuss the ethics and responsibilities of doing science in the context of a highly politicized issue like climate change. You don't need to come to the first talk to make sense of the second, but it might be helpful background as I will draw on the case study. (I wonder if there are other recent examples of issues arising at the interface of ethical responsibilities and climate science that I might draw upon? Any ideas?)

Here are the details for that second talk:
April 9 at 12:00 PM
WAG THE DOG: ETHICS, ACCURACY AND IMPACT OF THE SCIENCE OF EXTREMES IN POLITICAL DEBATES
Roger Pielke, Jr.
Location: CIRES Auditorium

22 February 2012

Does Increased Productivity Decrease Manufacturing Jobs?

I was intrigued to see an argument made in a recent Brookings Institution report (Helper et al., here in PDF) that productivity gains do not lead to job losses in manufacturing. This post explains why I think this argument is poorly characterized and probably just wrong.

Helper et al. make the following argument (pp. 9-10):
Some argue that strong productivity growth has caused much of America’s manufacturing job loss, especially in the last decade. This theory, which contends that technology is replacing workers, stems from the observation that apparent productivity gains have coincided with manufacturing job loss in the 1990’s and 2000’s. Yet there is no economic reason why increased productivity must lead to job loss.
The first thing to note is that Brookings is talking about overall productivity gains, and not labor productivity. The difference is important. Labor productivity, according to the BLS, "is the ratio of the output of goods and services to the labor hours devoted to the production of that output." Overall ("multi-factor") productivity "relates output to a combination of inputs used in the production of that output, such as labor and capital or capital, labor, energy, materials, and purchased business services (KLEMS). Capital includes equipment, structures, inventories, and land." The distinction is crucial to understanding employment changes in manufacturing.
There are a lot of moving parts in multifactor productivity beyond labor. So in any analysis a good first place to start, fairly obviously, is the relationship between labor productivity and jobs. The graph above shows the relationship between changes in labor productivity and changes in employment for manufacturing, according to data from the BLS (with 2005 = 100 for each time series).

It shows an extremely close relationship between labor productivity and employment, one that is even stronger on a one-year lagged basis (as companies fully exploit labor productivity gains). This relationship should not be at all surprising since labor productivity is defined in terms of hours worked, which has a direct relationship with employment.

So the first-order conclusion must be that gains in labor productivity in manufacturing necessarily lead to losses in jobs. And it is here that Brookings gets a bit confused with its focus on total productivity:
Even though a productivity increase means that fewer workers are needed to produce a given quantity of output, the productivity increase also allows product prices to be lower, increasing the size of the product market. The bigger market means that firms will need to hire more workers. The additional hiring needed to produce for a bigger product market usually offsets the initial labor-saving impact of the productivity increase. Therefore, the overall impact of a productivity increase is usually to expand employment rather than reduce it.
To see where Brookings has erred, think about the mathematics here -- If a market for a good expands that means that more output is required. Additional labor needed to meet that demand is an input. If the magnitude of the change in the input is equivalent to the magnitude of the change in output, then there has in fact been no increase in labor productivity, which is defined as an ability to produce more with fewer hours on the job. If labor productivity is improving then the inexorable result will in the long run be a loss of jobs, even in an expanding market.  In the short term, market expansion can offset job losses, such as we are currently experiencing in the US, but only temporarily. In short, with respect to jobs, simple math tells us that market expansion cannot defeat labor productivity. Just look at agriculture for an example of this dynamic (e.g., programs such as the Food for Peace effort of the 1950s were designed to expand markets in the name of aid and diplomacy).
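A minimal numerical sketch of that point, using hypothetical growth rates: hours worked equal output divided by labor productivity, so whenever productivity grows faster than the market expands, hours (and hence jobs) must fall.

```python
# Hours worked = output / labor productivity (the BLS definition rearranged).
# If productivity grows faster than output, hours must fall regardless of how
# much the market expands. Growth rates below are hypothetical.

output_growth = 0.03         # annual growth in output (market expansion)
productivity_growth = 0.04   # annual growth in labor productivity
years = 10

output = 100 * (1 + output_growth) ** years            # index, year 0 = 100
productivity = 100 * (1 + productivity_growth) ** years
hours = 100 * output / productivity                    # hours-worked index

print(f"Hours-worked index after {years} years: {hours:.1f}")  # < 100 -> fewer jobs
```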

Now, with respect to total productivity it is certainly possible that changes in other inputs can work to offset gains in labor productivity, but only at the expense of overall productivity (i.e., if labor productivity improves then KEMS productivity must degrade).  Similarly, total productivity gains can occur even if labor productivity degrades or is held constant (i.e., via improvements in KEMS productivity). Thus, there is nothing inconsistent about overall gains in productivity coexisting with constant or even increasing employment, but such an outcome does require that labor productivity gains are smaller than the combined effects of market expansion and degradation in other sources of productivity gains. This is a tough ask: the case of Germany suggests that even aggressive policies to influence labor productivity (which in some cases deliberately degrade it) have not prevented a decline in manufacturing employment.

Does increased productivity in the manufacturing sector result in a loss of jobs? If those productivity gains are in labor then the answer must be yes.

Adventures in Democracy: Australian Edition

The current goings on in Australian politics are utterly fascinating. Combining elements of high-minded democratic governance with characteristics of a reality television show and UFC fighting, the next week will see an all-out brawl for the leadership of the Australian Labor Party between Kevin Rudd, the former prime minister, and Julia Gillard, the current prime minister. Here in a nutshell is how we got to today.
  • Kevin Rudd was forced to resign as  Prime Minister in 2010 after Julia Gillard engineered a palace coup
  • In 2010 the Australian federal elections resulted in a hung parliament, with the ALP and the Liberal/National Coalition each winning 72 seats
  • The balance of power lay in the hands of 6 independents who broke 4-2 for the ALP and Gillard
  • The government thus rests precariously on a single vote majority
  • Gillard installed Rudd as Foreign minister
  • Since 2010 there have been various rumblings, including on this blog, that Rudd was going to make a leadership challenge (and if it could be seen from Boulder, then it must have been fairly obvious ;-)
  • The tensions have built in the past weeks, with Rudd's challenge to leadership obviously mounting
  • Early this morning, US time, Rudd resigned dramatically as foreign minister
  • The ALP caucus (of the 103 members of the House and Senate) is likely to have to vote on the leadership next week
  • Gillard says she has 60 votes, Rudd says that both have about 30 
 The possible outcomes are many:
  • Gillard as PM
  • Rudd as PM
  • Neither as PM
  • A new election called by ALP
  • A new election caused by a change in the majority if an independent defects
  • A new election caused by a Rudd resignation (and subsequent election lost by ALP)
 You don't have to be a political scientist to be enthralled by the spectacle of democracy in action.

20 February 2012

A Case Study in (how not to do) Quantitative Policy Analysis

Writing in the Boulder Daily Camera yesterday, Tom Rohrer identifies a howler of a mistake in a recent report by the City of Boulder on "Safe Streets." The city has been implementing "flashing crosswalks" around town, which light up to let motorists know that a person or bike is about to cross the street. In theory, the signals are supposed to lead the drivers to stop, the pedestrians or bikers to cross, and everyone to go on their way. In practice, the "flashing crosswalks" have been the location of some pretty nasty car-pedestrian accidents.

So it is a surprise that the city has issued a report that claims that the "flashing crosswalks" are safer than non-flashing crosswalks. Rohrer explains where the city's analysis is flawed:
[T]he city recently released their Safe Streets Boulder Report, in which they quite correctly note that a very large number of the accidents between motor vehicles and either a pedestrian or a bicyclist occurred in crosswalks at intersections, and a rather smaller number occurred in the flashing crosswalks. Moreover, I don't doubt their arithmetic was correct when they converted those accident counts to percentages.

On the face of it, the report's finding that 37 percent of the pedestrian/bicyclist-vehicle accidents were in crosswalks at intersections while "only" 6 percent of such accidents were in flashing crosswalks seems to be evidence that the flashing crosswalks are statistically more safe than anyone thought, and that if anything we should be worried about the crosswalks at our city's intersections.

Unfortunately those numbers are profoundly misleading.

You see, the report doesn't mention that there are only a few flashing crosswalks in the entire city (only 18 have ever been installed). However, there is a very large number of the crosswalks at intersections -- four at almost every intersection, totaling hundreds if not thousands across the city. So it is not terribly surprising that the city found a smaller number of accidents in a very small number of flashing crosswalks, while there were a larger number of accidents in the much larger number of crosswalks at intersections.

To correctly compare the safety of flashing crosswalks with those at intersections, one needs to know the accident rates for each type of crosswalk.
Under a more appropriate methodology, the proper conclusion is that the "flashing crosswalks" have a much higher accident rate than non-flashing crosswalks -- the exact opposite of the City's conclusion. Thus, a poor quantitative analysis can do more to mislead than to clarify. This clear and simple example will be part of my graduate seminar on quantitative methods of policy analysis the next time I teach it.
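A quick numerical illustration of the base-rate point, using made-up accident counts (only the figure of 18 installed flashing crosswalks comes from the op-ed; everything else is hypothetical):

```python
# Comparing crosswalk safety requires accident *rates*, not shares of all accidents.
# Counts below are made up for illustration; only the 18 installed flashing
# crosswalks is taken from the op-ed.

flashing_crosswalks = 18
intersection_crosswalks = 1200        # hypothetical citywide total

flashing_accidents = 6                # hypothetical counts
intersection_accidents = 37

print(f"Per flashing crosswalk:     {flashing_accidents / flashing_crosswalks:.3f} accidents")
print(f"Per intersection crosswalk: {intersection_accidents / intersection_crosswalks:.3f} accidents")
# 0.333 vs 0.031: the location with the smaller share of total accidents
# has by far the higher accident rate.
```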

16 February 2012

Reality is Not Good Enough

The entire Heartland document episode has become far more interesting than a typical tale of an advocacy group paying off shills, now that it seems clear that one of the leaked documents was in fact a fake. Megan McArdle at The Atlantic does a heroic job examining the documents (something that apparently most reporters failed to do) and concludes that it is fake (I agree):
The memo doesn't add new facts, just new spin.  Naturally, because the spin is more lurid, it's what a lot of the climate blogs seized on.
If the faked document turns out to have been produced by a climate activist or scientist (as some are already suggesting), then the leaked Heartland documents will go down as one of the more spectacular own goals in the history of the climate debate (with the consequences proportional to the stature of the faker). The faking is likely to overshadow whatever legitimate questions may have been raised by the release of the documents. Imagine what would have happened if the UEA hacker/leaker had made up a few emails to spice up the dossier.

More generally, the episode already illustrates much of what has become of the activist wing of the climate science community -- Apparently, reality is not good enough, so it must be sexed up. This sort of thing feeds into the worst imaginings of skeptics and blinds them to the fact that there are real issues here despite the frequent over-egging of the pudding.

It will be interesting to see how this develops as it appears that the faker left plenty enough fingerprints to be revealed in due course. The collateral damage is likely to be significant among the media and the overeager blogosphere. Stay tuned.

Thursday Funny

Source: George Hoberg

15 February 2012

I'm on Twitter

It is all a big experiment. Let's see how it goes: @RogerPielkeJr

New Peer-Reviewed Paper on Global Tropical Cyclone Landfalls

We have a new paper just accepted for publication in the Journal of Climate, titled "Historical Global Tropical Cyclone Landfalls." Here is the abstract:
Historical global tropical cyclone landfalls
Jessica Weinkle, Ryan Maue and Roger Pielke, Jr.
Journal of Climate (in press)

In recent decades, economic damage from tropical cyclones (TCs) around the world has increased dramatically. Scientific literature published to date finds that the increase in losses can be explained entirely by societal changes (such as increasing wealth, structures, population, etc) in locations prone to tropical cyclone landfalls, rather than by changes in annual storm frequency or intensity. However, no homogenized dataset of global tropical cyclone landfalls has been created that might serve as a consistency check for such economic normalization studies. Using currently available historical TC best-track records, we have constructed a global database focused on hurricane-force strength landfalls. Our analysis does not indicate significant long-period global or individual basin trends in the frequency or intensity of landfalling TCs of minor or major hurricane strength. This evidence provides strong support for the conclusion that increasing damage around the world during the past several decades can be explained entirely by increasing wealth in locations prone to TC landfalls, which adds confidence to the fidelity of economic normalization analyses.
In the paper we provide details on landfalls of tropical cyclones (called hurricanes in the Atlantic) for the five major global basins and a global aggregate from 1970 (in the figure shown above -- the paper provides similar graphs for each of the basins for the periods of available data). We take the Atlantic back to 1944 (no other basin has reliable data before 1950, so earlier data is superfluous for our analysis), though the record of US landfalls goes back before 1900 and has been discussed in other papers (like this one in PDF). Tropical cyclone statistics are a cherry picker's delight because large intra-basin variability means that "trends" can be found (up and down) over various arbitrary periods of record. For instance, we do see an upward trend in North Atlantic landfalls since 1970, but not since 1944 (or 1900). Other basins show similar up-and-down patterns over multi-decadal periods, but we find nothing coherent at the global level. Over 41 years of reliable data, there is precious little evidence of a secular trend at the global level.

A reviewer wisely noted that looking for linear trends in such data was probably not particularly useful in any case -- and we very much agree; however, trend detection is (for better or worse) established by the IPCC as a means to detect changes in climate. Those looking for meaningful trends in TC behavior (regardless of cause) should be aware that even assuming large changes in storm behavior, such trends would not be detectable for many decades or longer. Thus, our findings should not be too surprising.
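For anyone wanting to see what such a trend test looks like in practice, here is a minimal sketch fitting an ordinary least-squares line to an annual landfall-count series; the counts are synthetic random numbers, not the paper's data.

```python
# Fit a simple linear trend to an annual landfall-count series.
# The counts are synthetic (Poisson draws around the ~15/year global mean
# noted below), not the homogenized record used in the paper.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1970, 2011)
counts = rng.poisson(lam=15, size=years.size)

slope, intercept = np.polyfit(years, counts, deg=1)
print(f"Fitted trend: {slope:+.3f} landfalls per year")
# With interannual variability this large, small fitted slopes are easily
# indistinguishable from zero over only a few decades.
```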

Some interesting statistics of note:
  • Over 1970 to 2010 the globe averaged about 15 TC landfalls per year
  • Of those 15, about 5 are intense (Category 3, 4 or 5) 
  • 1971 had the most global landfalls with 32, far exceeding the second place, 25 in 1996
  • 1978 had the fewest with 7
  • 2011 tied for second place for the fewest global landfalls with 10 (and 3 were intense, tying 1973, 1981 and 2002)
  • 1999 had the most intense TC landfalls with 9
  • 1981 had the fewest intense TC landfalls with zero
  • There have been only 8 intense TC landfalls globally since 2008 (2009-2011), very quiet but not unprecedented (two unique 3-year periods saw only 7 intense landfalls)
  • The US is currently in the midst of the longest streak ever recorded without an intense hurricane landfall 
Because the time period with reliable data at the global level on landfalls is short (42 years, including 2011), it will be exceedingly difficult to develop theoretical explanations of the observed variations beyond simple randomness. Consider that 1971 was a strong La Niña year and had 32 landfalls and 2011 was also a La Niña year and had only 11.
If you are interested in overall trends in tropical cyclones, and not just the ones making landfall, you should have a look at Ryan Maue's excellent tropical cyclone page, where you can find the above image of overall global tropical cyclone activity since 1978.

We'll post up the paper and all of the underlying data as soon as it is available from JOC.

Postscript: Regular readers will remember this paper from last fall as one for which, after the reviews came in, an editor at Geophysical Research Letters bizarrely refused to discuss them with me. That water is under the bridge. However, thanks to the airing of that debacle the paper received various useful comments from colleagues, including those from Ryan Maue that were so significant that we invited him to join as a co-author, leading to a rich new collaboration and an improved final product. Score one for the blogosphere in helping to make connections which ultimately improved the science published in the peer-reviewed literature.

14 February 2012

The Politics and Economics of Manufacturing


The video above is of course the much discussed Super Bowl commercial from Chrysler featuring Clint Eastwood titled "Halftime in America" which celebrates manufacturing. The video does not note that Chrysler is now part of Fiat, an Italian company. The tension between the globalization of modern business and the symbolism of American manufacturing reflects the difference between viewing manufacturing from an economic perspective (i.e., as a NAICS categorization) and through a political lens (i.e., as a powerful image of American ideals). Many analysts conflate the two.

The aggregate number of people in the United States employed in manufacturing today is lower than at any time since before World War II, as shown in the graph below.
This means that the only people who (as adults) have previously experienced a labor market with fewer than 12 million manufacturing workers are today 90 years old. Consider also that in 1948 there were only about 60 million people in the labor force, meaning that about 1 in 4 US jobs was in manufacturing; today that ratio is closer to 1 in 13. The US of course is not alone in seeing a decline in manufacturing jobs -- for instance, Germany's decline has been more pronounced.
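The ratio arithmetic is easy to check; in the sketch below, the 1948 manufacturing count and today's labor-force size are assumptions, while the other figures come from the text.

```python
# Checking the "1 in 4" (1948) vs "1 in 13" (today) manufacturing-job ratios.
# The 1948 manufacturing count and today's labor-force size are assumptions;
# the ~60 million 1948 labor force and <12 million manufacturing jobs today
# come from the text.

mfg_1948, labor_force_1948 = 15e6, 60e6
mfg_today, labor_force_today = 12e6, 154e6

print(f"1948:  1 in {labor_force_1948 / mfg_1948:.0f} jobs in manufacturing")
print(f"Today: 1 in {labor_force_today / mfg_today:.0f} jobs in manufacturing")
```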

Since about 1950 US manufacturing output has increased by more than 300% (in real terms) even as the sector has seen its role in the overall economy decrease from 27% to less than 12% (data from BEA, price deflators from OMB). By any economic measure, manufacturing occupies a less significant role in the US economy than it did several decades ago, and far less significant than what might today be deemed the 20th century golden age of manufacturing in the decades following World War II.

But it seems that politicians still see that golden age as having political benefits. Here is an excerpt from President Obama's FY 2012 budget (PDF):
Our challenge is not building a new satellite, but to rebuild our economy. If the recession has taught us anything, it is that we cannot go back to an economy driven by too much spending, too much borrowing, and the paper profits of financial speculation. We must rebuild on a new, stronger foundation for economic growth. We need to do what America has always been known for: building, innovating, and educating. We don’t want to be a nation that simply buys and consumes products from other countries. We want to create and sell products all over the world that are stamped with three simple words: “Made in America.”
The invocation of manufacturing's golden age is bipartisan, with Mitt Romney recently making a speech on a factory floor and promising to bring back manufacturing jobs.

The political appeal of manufacturing is largely symbolic. Consider that organized labor has declined dramatically, with private-sector union membership falling from 35% in the 1950s to less than 7% last year. The symbolic importance of manufacturing is reflected in an opinion poll taken by Deloitte in 2011, which found (here in PDF):
When asked which industries are most important to the national economy, manufacturing is near the top of the list, topped only by energy. Eighty-six percent indicate that America’s manufacturing base is “important” or “very important” to our standard of living. And when asked if they could create 1000 new jobs in their community with any new facility, manufacturing comes in at the top of the list – ahead of energy production facilities, technology development centers, retail centers, banks or financial institutions and a host of others. . . Seventy-nine percent of Americans believe a strong manufacturing base should be a national priority.
The poll also provided some evidence that the world has changed:
While the U.S. public registers a strong belief in the importance of manufacturing for the country’s economy, when it comes to choosing manufacturing as a career choice, they place it near the bottom of the list. Out of 7 key industries, manufacturing ranks second to last as a career choice. While the reasons for this are complex, one interesting finding suggests that Americans (77%) fear the loss of domestic manufacturing jobs to other nations, contributing to a sense that manufacturing is an unstable long-term career choice. Of equal concern is the fact that the future talent pool is least excited about the prospect of a career in manufacturing. Among 18-24 year-olds, manufacturing ranks dead last among industries in which they would choose to start their careers.
Over the past century, the US fought a losing battle against productivity gains in agriculture. The 1920s and 1930s were characterized by political debates on the plight of the American farmer, which are not so different from those of today focused on manufacturing. Price supports, tariffs, supply-destroying policies, demand-increasing policies, Food for Peace, crop subsidies, crop insurance and many other policies were unable to stop the inexorable march of innovation in agriculture. Today agriculture represents about 1% of the US economy and about 1 million jobs. Despite these declines -- far larger than those experienced by manufacturing -- no one is talking about revitalizing American farming.

Make no mistake -- both manufacturing and agriculture are essential parts of the US economy. However, if it is indeed "Halftime in America" we'd better be ready to come out of the locker room prepared to play a different game.

13 February 2012

Manufacturing vs. Industrial R&D

One of the arguments for special government support of manufacturing is that manufacturing is a key to industrial R&D throughout the US economy (e.g., here). The data above comes from BEA and NSF and shows that as manufacturing declined as a portion of GDP from 2002 to 2007, industry R&D nonetheless increased (note: NSF data goes to 2007). Since 2007 manufacturing has continued to decline as a proportion of GDP (to 92 in 2010 in the graph above) while it appears that industrial R&D has tracked GDP (e.g., see here in PDF).

With industrial R&D increasing faster than GDP over the past decade, even as manufacturing fell sharply as a proportion of GDP, I find little support for the argument that manufacturing is in some way critical to sustaining industrial R&D.

09 February 2012

New Zealand Science Policy and a Music Video


A short while ago I commented on Sir Philip Gluckman's remarks on The Honest Broker. As science advisor to the New Zealand government he has prepared a discussion paper on science advice to government. Here is an excerpt (here in PDF):
It is important to separate as far as possible the role of expert knowledge generation and evaluation from the role of those charged with policy formation. Equally, it is important to distinguish clearly between the application of scientific advice for policy formation (‘science for policy’) and the formation of policy for the operation of the Crown’s science and innovation system, including funding allocation (‘policy for science’). This paper is concerned with the former. A purely technocratic model of policy formation is not appropriate in that knowledge is not, and cannot be, the sole determinant of how policy is developed. We live in a democracy, and governments have the responsibility to integrate dimensions beyond that covered in this paper into policy formation, including societal values, public opinion, affordability and diplomatic considerations while accommodating political processes.

Science in its classic linear model can offer direct guidance on many matters, but increasingly the nature of science itself is changing and it has to address issues of growing complexity and uncertainty in an environment where there is a plurality of legitimate social perspectives. In such situations, the interface between science and policy formation becomes more complex. Further, many decisions must be made in the absence of quality information, and research findings on matters of complexity can still leave large areas of uncertainty. In spite of this uncertainty, governments still must act. Many policy decisions can have uncertain downstream effects and on-going evaluation is needed to gauge whether such policies and initiatives should be sustained or revised. But, irrespective of these limitations, policy formed without consideration of the most relevant knowledge available is far less likely to serve the nation well.
Very smart stuff.

Also, the Sustainable Future Institute, a New Zealand-based think tank, has issued an executive summary of a forthcoming report on NZ science policy (here in PDF) along with a historical overview of recent NZ science policies (here in PDF). Learn more via the Asia-Pacific Science, Technology and Society Network.

08 February 2012

Jobs in the App Economy

Michael Mandel has done a rough estimate of the jobs created  in the "app economy" since 2007 and published his results in a paper for TechNet (here in PDF). He concludes:
How can the U.S. dig itself out of the current job drought? Government policy can temporarily boost employment. The ultimate answer, though, is innovation: The creation of new goods and services that spur the growth of new industries capable of employing tens or hundreds of thousands of workers.

Nothing illustrates the job-creating power of innovation better than the App Economy. The incredibly rapid rise of smartphones, tablets, and social media, and the applications—“apps”—that run on them, is perhaps the biggest economic and technological phenomenon today. Almost a million apps have been created for the iPhone, iPad and Android alone, greatly augmenting the usefulness of mobile devices. Want to play games, track your workouts, write music? There are a plethora of apps to choose from, many of them free.

On an economic level, each app represents jobs— for programmers, for user interface designers, for marketers, for managers, for support staff. But how many? . . .

The App Economy now is responsible for roughly 466,000 jobs in the United States, up from zero in 2007 when the iPhone was introduced. This total includes jobs at ‘pure’ app firms such as Zynga, a San Francisco-based maker of Facebook game apps that went public in December 2011. App Economy employment also includes app-related jobs at large companies such as Electronic Arts, Amazon, and AT&T, as well as app ‘infrastructure’ jobs at core firms such as Google, Apple, and Facebook. In addition, the App Economy total includes employment spillovers to the rest of the economy.
The methodology is rough -- one could easily justify a number half as big or twice as large. In either case, the job numbers are big. The estimates are rough because government economic data has not yet caught up with this fast-changing aspect of the national economy.

The analysis thus raises a larger question: How do we think intelligently about innovation and its consequences when the relevant data doesn't even measure what is going on in the current economy? Are we trying to drive by looking in the rear-view mirror?

ITIF on Romer and Manufacturing

At the blog of the Information Technology and Innovation Foundation Stephen Ezell posts up a lengthy response to Christina Romer's NYT op-ed on the non-specialness of manufacturing. Ezell writes:
Romer’s op-ed gets at least four critical points flat wrong. First, it conflates having a coherent set of policies and strategies to support U.S. manufacturers with them receiving “special treatment.” Second, it wrongly argues that manufacturing jobs are the same as all other jobs in the economy. Third, it misdiagnoses the central challenge facing the U.S. economy as a lack of aggregate demand when the real problem is faltering U.S. competitiveness, especially in the traded sectors of the economy, such as manufacturing. In doing so, her op-ed fails to recognize that the loss of manufacturing jobs has contributed significantly to the loss of U.S. employment, in terms of both direct and indirect jobs lost. Finally, arguments like this that manufacturing in the United States deserves no specific policy focus refuse to acknowledge the sophisticated strategies that dozens of U.S. competitors around the world have put in place to bolster the competitiveness of their manufacturing sectors.
I'll address some of Ezell's points in detail in forthcoming posts, as the Romer vs. Ezell perspectives are examples of a larger debate on manufacturing, innovation and the US economy. Let me preview my two cents by suggesting that Romer is more right than ITIF on this issue.

Research on Climate Change and Conflict

The Journal of Peace Research has just published a special issue on climate change and conflict. The introductory essay, by Nils Petter Gleditsch, says this:
On the whole, however, it seems fair to say that so far there is not yet much evidence for climate change as an important driver of conflict.
Gleditsch also offers a gentle suggestion to the IPCC:
The IPCC is currently working on its Fifth Assessment Report, scheduled for release in 2013. For the first time, this report will have a chapter on the consequences of climate change for human security, including armed conflict (IPCC, no date). We hope that the studies reported here will contribute to a balanced assessment by the IPCC, built on the best peer-reviewed evidence.
That such a statement has to be made is a statement itself.

06 February 2012

Romer on Manufacturing Policy

Writing in the NYT, Christina Romer (professor at UC-Berkeley and former chair of President Obama’s Council of Economic Advisers) finds the justification for a "manufacturing policy" to be wanting:
As an economic historian, I appreciate what manufacturing has contributed to the United States. It was the engine of growth that allowed us to win two world wars and provided millions of families with a ticket to the middle class. But public policy needs to go beyond sentiment and history. It should be based on hard evidence of market failures, and reliable data on the proposals’ impact on jobs and income inequality. So far, a persuasive case for a manufacturing policy remains to be made, while that for many other economic policies is well established.
Where is she wrong?

Simple Energy Math at Grist

Over at Grist, my long-time critic David Roberts does some simple energy math and finds that emissions reductions will be difficult because energy efficiency gains, while undoubtedly a good thing, won't make much of a dent in reducing emissions. Roberts might have saved himself some time by starting with The Climate Fix;-)

The numbers lead Roberts to conclude that we need to engage in a process of global economic contraction. Once he works through that math, he'll find his choices are (a) to keep poor people poor and make rich people poor, or (b) to focus on technological innovation to accelerate the decarbonization of economic activity. Wherever he comes out on that debate, (a) isn't going to happen -- iron law and all that.

But seriously, kudos to David for taking the time to run the numbers and report the results -- we all benefit from such analyses, uncomfortable as the results might be.

Score One for Old School Journalism and The Australian

With the admission by public officials today that the Wivenhoe dam was indeed mismanaged, The Australian newspaper is right to trumpet the importance of old-school investigative journalism. Without its work, it is likely that the mismanagement would not have been uncovered:
Whatever its findings, the [Queensland flood investigation] report will be more informative and comprehensive as a result of the inquiry being reconvened and extended for 13 days after The Australian exposed glaring inconsistencies in the original evidence given by SEQWater and flood engineers about serious breaches of the dam's operating manual over two days leading up to the disaster. . .

For the public, an alarming aspect of the issue is that the mismanagement was uncovered not by their elected representatives or through the initial inquiry hearings, but by senior journalist Hedley Thomas's painstaking reading of official records. These suggested that SEQWater remained locked into the wrong strategy over the weekend of January 8 and 9 and into early Monday before the Brisbane River first broke its banks on January 11.

Scepticism, scrutiny of records and refusing to accept official spin are the hallmarks of fine journalism. Four days after the river peaked, contrary to SEQWater's insistence that the operating manual had been followed, Thomas questioned why the operation of the dam failed. He also reported independent engineer Michael O'Brien's view that catastrophe would have been avoided if releases had been adequate. Such probing, alas, did not suit more gullible media outlets, including Crikey, which brushed the public interest aside in claiming our coverage was "distorted" by "out-of-control" ego. A year on, the operations manager and chief executive of Queensland's WaterGrid admit that, based on what they were told at the time, the dam was mismanaged for two crucial days before the floods. In a land of climate extremes, hard lessons have been learned about managing the ravages of floods as well as drought, and the evacuation of St George shows authorities are being proactive. The emergence of the truth about Wivenhoe is also a lesson about public-interest journalism.
Whatever one might think about the political views found on the pages of The Australian, or about its ownership, Australians have been well served by its dogged reporting in the case of Wivenhoe. For everyone, the case provides a good example of why independent oversight of experts and government makes good sense.

05 February 2012

Updated: Normalized Disaster Losses in Australia

Figure. Annual aggregate insured losses (AUD$ million) for weather-related events in the Disaster List for years beginning 1 July with losses normalised to season 2011/12 values.

Ryan Crompton of Risk Frontiers at Macquarie University has provided an update of their normalized loss catalog which is shown in the graph above. Crompton sends along this description of the update:
The normalised loss figure shown above is an updated version of that published in Crompton and McAneney (2008). The methodology used to normalise losses has been refined and the loss data from seasons 2006/07 - 2010/11 has been included and normalised to season 2011/12 values. The Insurance Council of Australia (ICA) insured loss data is current as at 31/1/12.

In our previous normalisation of the Disaster List ending at the 2005/06 season (Crompton and McAneney, 2008) we noted the low loss activity in the most recent 5 seasons analysed. Since that time there has been heightened weather-related loss activity with the most recent 5 seasons to 2010/11 averaging slightly more than double the 45-year average. The average annual weather-related insured loss over the most recent 10 seasons (2001/02 – 2010/11) is within approximately 30% of the average annual loss over the full 45-year period of the Disaster List.
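For readers unfamiliar with loss normalisation, the adjustment has roughly the shape sketched below; the factor names and numbers are placeholders, and Crompton and McAneney (2008) describe the method actually used.

```python
# Rough shape of a disaster-loss normalisation: scale a historical insured loss
# by the growth in exposure since the event. Placeholder factors only; see
# Crompton and McAneney (2008) for the actual methodology.

def normalise(historical_loss, dwelling_growth, value_growth):
    """Adjust a historical loss to current-season exposure."""
    return historical_loss * dwelling_growth * value_growth

# Hypothetical: an AUD$200m loss from an early-1970s season, with dwelling
# numbers up 2.1x and average dwelling value up 1.6x since then.
print(f"Normalised loss: AUD${normalise(200, 2.1, 1.6):.0f}m")
```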

02 February 2012

Update: The Wivenhoe Investigation

UPDATE 5 Feb: The revelations from the inquiry continue.

I have commented occasionally on the role of flood management decisions leading up to the flooding of Brisbane in early 2011, as it is a fascinating case study at the intersection of science, uncertainty, decision processes, accountability and politics. As I mentioned last week the official investigation was re-opened after emails were released that suggested some inconsistencies in earlier reporting. The re-opened investigation started yesterday with explosive revelations:
In a series of heated exchanges at Queensland's recalled floods inquiry yesterday, SEQWater's principal engineer of dam safety, John Tibaldi, was grilled over a report he penned in the weeks following the January floods, which accounted for the actions he and his fellow engineers took.

At one point, Mr Tibaldi choked up in tears under the questioning.

Commissioner Cate Holmes, a Supreme Court judge, reconvened the inquiry after The Australian revealed evidence that appeared to show the dam was employing less severe flood mitigation strategies than those detailed in Mr Tibaldi's report.

As well as SEQWater officials and engineers being called to testify, Premier Anna Bligh has been asked to submit a written statement to the inquiry. Ms Bligh said she would provide a comprehensive statement by Monday and would submit a copy of her diary and relevant documents relating to the meetings and briefings she attended at the time of the floods.

Mr Tibaldi told the inquiry the report used raw data collected during the flood -- including lake levels and outflows -- and he then matched the data to the release strategies prescribed in the dam manual, known as W1, W2, W3 and W4. He said he had no recollection of asking the three other dam engineers which strategies they were using at various times during the disaster, but prepared the report based on the raw data and subsequently sought their approval.

"I tried to match the strategy transitions against the data that was available to me (and) just made conclusions based on that data as to when strategy transitions had occurred," he said.

Counsel assisting the inquiry, Peter Callaghan SC, suggested the manual was therefore used to analyse and justify the decisions taken by the four engineers -- Mr Tibaldi, Robert Ayre, Terry Malone and John Ruffini -- rather than dictating the decisions they took at the time.
Apparently SEQWater is privately insured, though for what contingencies and to what level is not clear from what I have read. What does seem increasingly clear is that someone is going to receive a big bill to settle what will inevitably be large claims against SEQWater. Stay tuned.

A Conversation With an Economist on Magical Solutions

Economist: I think you are way too optimistic that investments in technological innovation funded by a low carbon tax can lead to accelerated decarbonization of the economy. That is why I favor a high carbon price.

Me: But isn't the point of the high carbon price to stimulate innovation?  The question is thus how to stimulate or motivate that innovation. I think a high carbon price is politically impossible, which is why I argue for starting low with investments in innovation as part of the package.

Economist: A high carbon price will create incentives to change people's behavior. If prices are set appropriately the market will take care of the rest.

Me: But if you do not think that technological innovation can lead to an accelerated decarbonization of the economy, what difference would it make if that innovation is stimulated by pricing or direct investment?

Economist: Pricing has reduced pollution in many areas. We just need to get the carbon price right.

Me: But I am curious about the causality implicit in your argument -- let me ask, of the four levers in the Kaya Identity [Population, Per capita wealth, Energy intensity, Carbon intensity], which ones do you see being influenced by carbon pricing in a way that reduces emissions?

Economist: Well ... I guess carbon intensity and energy intensity.

Me: So then you do think that technological innovation can lead to accelerated decarbonization since carbon intensity and energy intensity are modulated by innovation?

Economist: Well, no, not at all. I don't think that the solution can be technological. I do think that pricing makes a lot more sense than focusing on technology.

Me: Can you believe all the rain?
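(For reference, the Kaya Identity invoked above decomposes emissions into the four levers named in the exchange. A minimal sketch, with hypothetical values:)

```python
# Kaya Identity: CO2 = P x (GDP/P) x (E/GDP) x (CO2/E)
# i.e. population x per capita wealth x energy intensity x carbon intensity.
# All values below are hypothetical, purely to show the decomposition.

population = 7.0e9          # people
gdp_per_capita = 10_000.0   # USD per person per year
energy_intensity = 8.0      # MJ of primary energy per USD of GDP
carbon_intensity = 0.07     # kg CO2 per MJ

emissions_kg = population * gdp_per_capita * energy_intensity * carbon_intensity
print(f"Implied emissions: {emissions_kg / 1e12:.0f} Gt CO2 per year")
```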