“What?” you may be saying. “Gas prices are lower than they have been in a long time.” That’s true, even in California, but that just reflects the collapse of world oil prices, and only partially at that. You see, while oil prices have been falling across the country, the gap between California gas prices and the rest of the U.S. has climbed to higher levels for a longer stretch than at any time in the last 20 years.
Why? I don’t know, but some people claim to know, from consumer advocates arguing it’s collusion to industry representatives saying that it’s just a shortage of the special cleaner-burning blend California requires, known as CARB gasoline.
The figure above shows the difference between California’s average gas price and the U.S. average going back to 1995, when the state started requiring CARB gas. For the decade from 2005 to the end of 2014, California’s retail price averaged about 31 cents above the national average. That differential lines up well with the fact that our gas taxes were about 20 cents above the nationwide average during that time, and that making CARB gasoline adds about 10 cents a gallon to the cost.
On January 1, 2015, transportation fuels came under California’s Cap-and-Trade (CaT) program for greenhouse gas (GHG) emissions, as I discussed before. It is now widely accepted that the CaT program should have, and has, increased gas prices by about ten cents a gallon. Add that in, and we’d expect the differential between California and the rest of the country (where GHG emissions are still free) to average around 40 cents per gallon.
That’s about where things were for the first month and a half of 2015, but then on February 18 a fire at Exxon’s Torrance refinery near LA shut down the plant’s gasoline production. That refinery normally produces about 10% of the state’s CARB gasoline. Since then, California’s gas price has averaged about 82 cents per gallon higher than the national average. The extra 42-cent premium since February 17 adds up to nearly $4 billion in extra payments – more than $150 for every licensed driver in the state – and still growing. As of yesterday, the average California price was 71 cents above the US average, according to AAA.
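For the curious, the $4 billion figure is easy to reproduce with back-of-the-envelope numbers. In this sketch the 42-cent premium comes from the post, while the daily gasoline consumption, the elapsed days, and the licensed-driver count are my rough assumptions:

```python
# Rough reproduction of the excess-payment estimate above.
# ASSUMPTIONS: California burns roughly 40 million gallons of gasoline
# per day and has roughly 24 million licensed drivers.
extra_premium = 0.42        # $/gallon above the expected ~40-cent differential (from the post)
gallons_per_day = 40e6      # assumed CA gasoline consumption
days_since_fire = 230       # roughly February 18 through mid-October

extra_payments = extra_premium * gallons_per_day * days_since_fire
print(f"Extra payments: ${extra_payments / 1e9:.1f} billion")   # ~$3.9 billion

licensed_drivers = 24e6     # assumed
print(f"Per driver: ${extra_payments / licensed_drivers:.0f}")  # ~$160
```

With these assumed inputs the total lands just under $4 billion and a bit over $150 per driver, consistent with the figures in the post.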
The problem is worst in Southern California, where prices since mid-February have averaged 26 cents higher than in the North. In the previous decade, the North-South differential averaged around one cent.
High prices don’t necessarily mean that anyone is profiting unfairly or doing anything illegal. Scarcity of a product drives up prices even in the most competitive markets.
Events like the Torrance fire have caused price spikes in California before, but they generally have disappeared within 4-6 weeks, because that’s how long it takes to import CARB-specification gasoline from the many other refineries in the world that can produce it. In 2012, when the Chevron refinery in Richmond had a major fire, prices jumped 50 cents for a couple weeks, but within a month that excess differential was gone. As the figure above shows, previous spikes have never before lasted nearly as long as the current one.
So, this spike does suggest that something is amiss in this market. Why is this spike so long lasting? And what, if anything, should the state do about it?
Some consumer advocates point to increased concentration among in-state producers of CARB gasoline in the last few years and allege these firms are now colluding to reduce competition. But the evidence presented so far is thin, mainly just that refineries are making a boatload of money. That could indeed be due to producers restricting the quantity they sell in order to boost prices, but it could instead just reflect refineries having insufficient capacity to replace the lost production when one of the largest producers shuts down unexpectedly. Either could cause the price to jump. In a 2004 paper that Jim Bushnell, Matt Lewis and I wrote, we discussed competitive and non-competitive causes of high gasoline prices, how difficult it is to tell them apart, and policies that might address them.
Critics also point to the fact that California refineries have been exporting gasoline despite the high prices at home. But not all of the gasoline made in our refineries can meet the strict specification for in-state sales. Non-qualifying gasoline is regularly shipped from California to Nevada, Arizona, Mexico and other places with lower standards. So, exporting gasoline doesn’t seal the deal on anti-competitive behavior. Now if California refiners were exporting CARB-specification gasoline since February – or making a choice to produce less CARB gasoline — that would be much more difficult to reconcile with competitive behavior.
Nonetheless, while consumer advocates have not proven their case, their suspicions have merit. With prices very sensitive to even a slight shortage, and with two companies producing about half the state’s CARB gasoline supply, it seems quite possible that firms might be able to make more money by making less CARB gasoline. This could be particularly true when a supply shock like a large refinery fire has already tightened the market. That doesn’t prove they are doing it, but it does – as the lawyers say – go to motive.
In the past, one response from the industry has been that such output restriction would just create an opening for imports of CARB gasoline that would steal their market share. But that leads us to perhaps the biggest puzzle of the current price shock: where are the imports? With California’s prices this high – regardless of whether due to real scarcity or insufficient competition among in-state producers — it seems there is ample money to be made bringing in CARB gasoline from afar, as has happened during past spikes. Why isn’t that happening this time, or happening in sufficient quantity to bring California’s prices back in line with the rest of the country?
More than one of my environmentalist friends has responded to my concerns by asking what’s so bad about high gas prices. After all, we need to move away from gasoline and this will help. I think there are a couple reasons that this isn’t the way we want to get off gasoline.
First, high gas prices hurt lower-income working families, so if we were imposing high prices with, say, a carbon tax policy, I at least would want to pair it with some other tax relief for that group to help offset the higher cost of fuel. This isn’t a government tax policy, just higher profits flowing to private companies, and there is no offsetting tax reduction.
Second, because California is a leader in all things enviro, our energy policies are scrutinized worldwide. If our fuels policy is viewed as causing inexplicably high gasoline prices, that will undermine political support for similar policies in other jurisdictions.
A year ago, I was named a member of the California Energy Commission’s new Petroleum Market Advisory Committee, five industry experts charged with examining the state’s high and volatile gas prices, and suggesting policy responses. Three weeks ago, I was appointed chair of the committee. Working with CEC staff, I hope very soon to hold a workshop at which we can hear the views of all stakeholders – refiners, importers, retailers, consumer groups and others — and ask them detailed questions. Such an open discussion will, I hope, bring more insight and common understanding than we have gotten from the media-targeted rhetoric that usually accompanies discussions of gas prices.
Last week one of the biggest environmental scandals since the Deepwater Horizon disaster made its way to somewhere near the bottom of page 11 of most major newspapers. VW admitted to systematically cheating on emissions tests of its Diesel vehicles. This might sound snoozy, until you read up on the details.
Vehicles across the US must satisfy emissions standards for criteria air pollutants (e.g., NOx, SOx, CO). California, of course, has the most stringent of these standards and enforces them for new and used vehicles. If you have an older car, you need to go get your car smog checked every few years to make sure your clunker is still clean enough to be allowed on California roads. It is for this reason that until recently the share of Diesel cars in California was extremely low, since almost no vehicles satisfied these stringent standards. In come the “clean Diesels”, pushed mainly by German manufacturers of mass-market (e.g., VW) and luxury (e.g., Mercedes and BMW) vehicles. Diesel was finally salon worthy! Look! It’s fuel efficient and clean! Many of my Birkenstock-wearing, dog-owning, El Capitan-summiting colleagues and graduate students ran out and traded in their Prii for the VW TDI wagon. So much space! So much torque! So much fuel efficiency! So much clean! Well, it turns out what sounded too good to be true was.
In a Lance Armstrongian feat of deception, VW has now admitted to having installed a piece of software called a “defeat device” that turns on the full suite of pollution control gadgets only when cars are being smog tested. As soon as you leave the testing station and head out for your Yosemite adventure with Fluffy barking in the back, your car emits 10-40 times (!) the amount of NOx you just reported on your smog check card. Just to put this in perspective – this is like that 215-calorie Snickers bar having 2150-8600 calories instead. The EPA will almost certainly sue VW. The penalties involved here are significant. The EPA can ask for $37,500 per violation – essentially per affected vehicle – which amounts to roughly $18 billion in fines. Plus there will likely be criminal charges filed against VW executives. Further, depending on whether these vehicles will continue to be sold in the US after all is said and done, this is a disaster for VW, as they rely heavily on the high fuel efficiency ratings of Diesels to satisfy CAFE.
In my eyes there are two interesting economic points to be made here. The first, maybe more headline-worthy, is trying to determine the optimal fine in order to deter other manufacturers from engaging in such behavior. An economist would argue that what we have here is the classic case of an externality. By selling the dirtier vehicles, VW exposed kids, adults and dogs to massive quantities of local air pollutants. VW is responsible and should be liable for this. Hence VW should correct this market failure by paying the full external costs it caused. This calculation would involve estimating the economic damages from this additional air pollution and passing the bill on to VW. My back-of-the-envelope calculation suggests that for the NOx portion this is about $232 per vehicle over three years (far from $37,500).
But, there is a large law and economics literature on determining the fines to achieve the optimal and efficient amount of deterrence. The problem with just passing on the external damages is that VW was not going to be caught with certainty. If the executives thought there was a 1% chance of getting caught, it might have been more worthwhile to cheat than if they thought that they were going to get caught with certainty. In this case, the penalty should be approximated by the external costs divided by the probability of getting caught. This, of course, would be significantly larger than the external costs alone. Getting the external costs right is hard to do (e.g., you need to account for more pollutants than just NOx, and the damages vary across space), but it can be done with standard tools in the talented economist’s empirical toolkit.
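To make the deterrence logic concrete, here is a minimal sketch. The $232 external-cost estimate and the $37,500 statutory cap come from the post; the 1% detection probability is purely an illustrative assumption:

```python
# Optimal-deterrence sketch: scale external damages by the inverse of
# the probability of getting caught.
external_cost_per_vehicle = 232.0   # $ NOx damages over three years (from the post)
p_detection = 0.01                  # ASSUMED 1% chance of getting caught

deterrence_fine = external_cost_per_vehicle / p_detection
print(f"Deterrence-based fine: ${deterrence_fine:,.0f} per vehicle")  # $23,200

statutory_cap = 37_500.0            # Clean Air Act cap cited in the post
print(f"Statutory cap:         ${statutory_cap:,.0f} per vehicle")
```

Note how even a modest detection probability pushes the efficient fine from a few hundred dollars per vehicle most of the way to the statutory cap.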
The broader question is how did this happen? This is not one student cheating on an intermediate microeconomics exam and thinking (s)he would get away with it. This is the world’s largest car manufacturer intentionally deceiving the federal and state governments by gaming their enforcement strategy. While some cynic might remark that folks will always cheat when there’s a dollar to be made, I think we can rethink how we design regulations by building in evaluation from the get-go.
Michael Greenstone, who spends his summers two doors down the hall, has thought a lot about this recently. In the US, we pass many of our major regulations based on ex ante cost benefit analyses. Testifying on Capitol Hill, he recently made two suggestions that would significantly improve things. First, he argues that we should institutionalize the ex post review of economically significant rules “in a public way so that these reviews are automatic in nature”. He also argues that rules already in effect should start being reviewed using retrospective analysis. The relevant agencies should commit to changing or abandoning rules based on these evaluations, or possibly create new rules based on these evaluations.
The big issue here is, of course, who should review these policies. He argues in favor of creating a regulatory analysis division within the Congressional Budget Office. This division would conduct the regularly scheduled reviews and conduct reviews at the request of lawmakers. I would go one step further and argue that these reviews should not only be staffed with government employees, but should also require the review and participation of independent academics. There is precedent for this model.
The certainty of independent review of policies and enforcement strategies significantly drives up the probability of detection, which would diminish the expected profits from cheating. By firms large and small. Plus, we are spending scarce public funds on environmental regulation. We should spend it on what works. And we need to figure out what that is.
“Rural electrification” and “energy access” are catchphrases in many energy and development circles. Multilateral lending agencies, many NGOs and the UN are highlighting the 1.3 billion people who currently do not have electricity in their homes. For example, of the UN’s 17 Sustainable Development Goals, number 7 is to “Ensure access to affordable, reliable, sustainable and modern energy for all.” Similarly, the UN and the World Bank launched the Sustainable Energy For All initiative in 2011, whose name basically defines their vision.
Electricity is certainly a vital part of modern life. Without it, people can’t watch TV, refrigerate food and medicine, charge a cell phone, protect themselves from extreme heat, or do many of the things those of us in the developed world take for granted.
I’m concerned, however, that development efforts may be misdirected because of the near singular focus in the energy sphere on this particular goal. I’m worried about two potential outcomes – that we’ll stop short or that we’ll go too far. My concerns may seem contradictory, but I fear that both are made more likely by focusing too exclusively on one binary measurement. (I’ll leave for another blog post the messy ethics of focusing on sustainable energy access – achieving energy access with green sources.)
Perils of Stopping Short
Here’s the potential problem with stopping short. I worry that once a household has a small solar home system, the data collectors will declare it “electrified” and policy makers will put a checkmark in the electricity box and declare victory. But, solar home systems provide very limited services at high per kWh prices and don’t allow people to do many of the things we associate with modern energy access.
For example, mKopa, the leading solar provider in Kenya, sells a tiny 8 Watt system that comes with 3 LED bulbs, a radio and a cell phone charging station. (Unfortunately, this very system was championed in a New York Times op-ed last week.)
The world’s chief energy data collectors at the International Energy Agency recognize that “[a]ccess to electricity involves more than a first supply to the household,” and claim that an appropriate definition of electricity access would include a minimum annual kWh usage level. But, they conclude that:
[t]his definition cannot be applied to the measurement of actual data simply because the level of data required does not exist in a large number of cases. As a result, our energy access databases focus on a simpler binary measure of those that do not have access to electricity…
I am part of a working group at the Center for Global Development that’s advocating for better data and reporting on energy access. A report is due out soon.
Perils of Going Too Far
The dangers associated with going too far are subtler, and may not be empirically relevant, but let me describe my concerns. As the chart below demonstrates, there is clearly a strong positive relationship across countries between GDP per capita and electricity consumption per capita. (The figure plots the natural log of both variables, so you can think of the relationship reflecting percent changes.) The same pattern holds for lots of other development indicators besides GDP per capita.
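For readers unfamiliar with the log-log interpretation: the slope of a regression of log GDP on log electricity use is an elasticity, the percent change in GDP associated with a one percent change in electricity consumption. Here is a minimal sketch using synthetic country data; the 0.7 elasticity is made up purely for illustration and is not an estimate from the actual figure:

```python
import random

# Synthetic cross-country data: log GDP per capita vs. log kWh per capita.
# The "true" elasticity is set to 0.7 for illustration only.
random.seed(0)
n = 100
log_kwh = [random.uniform(4, 10) for _ in range(n)]
log_gdp = [2.0 + 0.7 * x + random.gauss(0, 0.3) for x in log_kwh]

# OLS slope = cov(x, y) / var(x); with both variables in logs, this is the elasticity.
mean_x = sum(log_kwh) / n
mean_y = sum(log_gdp) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(log_kwh, log_gdp))
         / sum((x - mean_x) ** 2 for x in log_kwh))
print(f"Estimated elasticity: {slope:.2f}")  # recovers something close to the 0.7 we built in
```

A slope of 0.7 would mean a 1% increase in electricity use per capita is associated with roughly 0.7% higher GDP per capita, which is the sense in which the figure "reflects percent changes."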
Let’s assume that we know the relationship in the above figure is causal, meaning that driving up electricity consumption in a country will cause its per capita GDP to grow. What the chart misses is that not all kWh are created equally. A kWh that replaces a kerosene lamp with a CFL for a month may not be the same as a kWh that helps power a factory that employs 10 people for an hour, and one may have a larger impact on development than the other.
What if governments are less likely to electrify schools if they’re focusing on homes? Or, what if utilities that spend more money on building out their electricity systems to reach homes can spend less money on ensuring factories or hospitals get reliable electricity?
None of the Sustainable Development Goals are targeting the number of schools with electricity or the number of industries with reliable electricity supply, and, to my knowledge, we don’t have a firm analytical grasp on whether spending money on rural versus industrial or health sector electrification helps improve people’s lives by more.
I am not denying that rural electrification brings benefits. Nonetheless, any expenditure of public, World Bank or NGO money has an opportunity cost, so spending money on rural electrification means we can’t spend money somewhere else.
This struck me seeing the juxtaposition of a sleek new electricity meter on a Kenyan woman’s mud wall. She liked replacing her kerosene lamp with an electric light bulb and her neighbor liked having TV, but connecting her to the grid is not cheap. What else could the government have done with that money that may have helped this woman more than the electricity connection?
We asked another woman in the compound whether she would prefer her electricity connection or a new motorbike. She said electricity. But when we gave her the choice between better health services or electricity and better education for her kids or electricity, she chose both of those over electricity. If this woman is representative, electrification in rural households is not yet the right priority.
That said, there are reasons to believe I don’t need to worry about going too far. It is entirely possible that building out the electricity grid to reach homes will make countries more likely to connect health centers and schools, and not less likely. Also, there could be a lot of benefits that come from rural electrification that wouldn’t be captured if the kWh were directed at factories – what economists call spillovers. For example, one person interviewed in an NPR story (which features my co-author Ted Miguel) described how getting an electricity connection made him feel, “part of Kenya.” Similarly, introducing this young boy to engineers installing electricity at his home may incite an interest in engineering and make him more likely to go to university. These indirect effects are difficult to measure, but they may be very important.
Governments and NGOs need to figure out how to get the biggest bang for their buck. So, we need more data and more analysis to figure out the best way to improve people’s lives and how big a role there is for rural electrification. It may turn out that rural electrification has high payoffs relative to alternatives, but there are risks to forging ahead without a richer understanding of how electricity drives economic growth, improved quality of life, and other development goals.
Air conditioning made lots of headlines this summer. In newspaper accounts, internet memes, and office gripe sessions, there was much commiseration about the over-air conditioning of the American workplace….
It seems no one is spared from these polar working conditions – not even energy efficiency advocates! Colleagues of mine recently attended an energy efficiency conference in the midst of a summer heat wave. Organizers courteously (but ironically) distributed blankets to help shivering attendees keep warm in heavily air conditioned meetings (about energy efficiency!). This is like handing out jelly doughnuts at a Weight Watchers meeting.
On the face of it, widespread accounts of frigid summer working conditions seem to point to an absurd waste of energy. But thermal comfort is highly subjective. If complaints are all coming from the easily-chilled tail of the thermal preference distribution, is it possible that office temperatures are about right on average?
The Goldilocks question
In theory, indoor office temperatures should be about right. Temperature control standards for most office buildings are calibrated to optimize something called the “Predicted Mean Vote”. This is a formula that predicts how a large group of people would vote (too cold registers as a negative number; too hot registers as positive; zero = just right) as a function of indoor temperatures, metabolic rate, clothing, etc.
But a provocative study released last month suggests this PMV formula needs updating. The authors claim that one of the primary inputs to current standards – the metabolic rate – is calibrated to the average male. When the researchers measured the average metabolic rate for a small group of young adult females performing light office work, they found significantly lower metabolic rates.
The media spin on this paper played up the irresistible battle-of-the-sexes dimension. This set off a heated/amusing debate between freezing women who decry “sexist thermostats!” and sweaty men who point to the first fundamental law of clothing (you can always put more on – but there is a limit to how much you can appropriately remove!).
The industry response to the study looks quite different. The engineers at the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) – who know a thing or two about temperature control standards in buildings – say that the authors have misinterpreted how the standards are actually set. They assert that thermal comfort criteria are based on extensive laboratory studies of both men and women. These studies find that when men and women do the same kind of sedentary work in the same type of clothing, there are no relevant differences in preferred temperatures across sexes.
To make sense of all of this, I went to find my Berkeley colleague Stefano Schiavon who studies indoor work environments and building energy consumption. Ask Stefano whether commercial office buildings in the US are kept too cold in the summer and you get an impassioned (he is Italian after all) Yes!!
By 3 degrees Celsius at least, he says. But, he argues, factors such as oversized HVAC systems – not sexist thermostats – are to blame. For example, a temperature check across a sample of U.S. office buildings finds that average summer temperatures are not only below the recommended ASHRAE standards, but colder in summer than in winter!
How much energy wasted?
Whereas office buildings are often cooled to around 72°F in the summer, experts suggest something in the range of 77°F can be maintained (assuming good air circulation) with no loss of worker satisfaction. Lose the necktie, put on some linen shorts, and you have Japan’s “Cool Biz” campaign, which recommends an 82°F (28°C) set point for office air conditioners.
Super Cool Biz: Looking super cool in 28°C!
How much energy could be saved if we increased the average cooling set point in office buildings? A team of Berkeley researchers recently looked at increasing cooling set points in office buildings from 72°F to 77°F. Across climate zones and office buildings types, they estimate a reduction in cooling energy consumption of 29 percent on average.
By my very crude calculations, if we apply this reduction across all air conditioned office space in the United States (estimated at 14,095 million square feet in 2012), this amounts to a reduction in electricity consumption of 11,300 GWh/year.
Compared to total annual electricity consumption, this does not amount to much (less than half a percent). But the impact would be comparable to other climate change mitigation measures we get excited about. For example, the EIA estimates nationwide solar PV production in 2014 at 15,874 GWh (utility scale). In other words, keeping indoor air temperatures too cool in the summer is working to offset hard-won emissions reductions achieved elsewhere.
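My crude calculation, spelled out. The floor area, the 29 percent savings figure, and the solar comparison come from the post; the annual cooling-energy intensity (kWh per square foot) and the US consumption total are my assumptions:

```python
# Reproducing the crude set-point savings estimate above.
office_sqft = 14_095e6        # air-conditioned US office space, 2012 (from the post)
cooling_kwh_per_sqft = 2.8    # ASSUMED annual cooling-energy intensity per square foot
savings_share = 0.29          # from the 72F -> 77F set-point study cited in the post

savings_gwh = office_sqft * cooling_kwh_per_sqft * savings_share / 1e6
print(f"Estimated savings: {savings_gwh:,.0f} GWh/year")        # ~11,400 GWh/year

us_total_gwh = 3.9e6          # ASSUMED total annual US electricity consumption
print(f"Share of US total: {savings_gwh / us_total_gwh:.2%}")   # under half a percent
```

With these assumed inputs the sketch lands close to the 11,300 GWh/year figure, and confirms it is both a small share of national consumption and comparable in scale to 2014 utility-scale solar output.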
Too much of a good thing
Air conditioning at the office – when used in moderation – is a very good thing. There is plenty of research demonstrating that air conditioning reduces mortality, boosts productivity, and makes us happier, more agreeable people. However, in many American workplaces, it seems this good thing is being taken to a wasteful extreme.
We should be paying more attention to how we (over-) cool our commercial buildings. But it can be very hard to get people excited about energy efficiency and/or conservation. Several new technologies on the market (such as apps that help individuals control their cubicle climate) make energy conservation more accessible and more fun. Between the stereotypical shivering female office worker and her gadget-loving male counterpart, this could lead to smarter cooling – and some energy savings – in our office buildings next summer.