When the wind blows and the sun shines in Germany, electricity prices around the country plummet. Natural gas peaker plants are not needed, as the peaks are erased and they cannot compete with renewables.
But the grid still needs a whole lot of balancing resources during the times when renewables dominate. That makes demand response, still a very nascent resource in Germany, even more important.
When it comes to demand response, America has the most mature markets in the world. Within the U.S., demand response -- both for emergency load capacity and ancillary services -- is growing in every region.
The most active and largest market is PJM Interconnection, which also happens to be one of the world’s largest grid transmission organizations. Germany has roughly half the peak load of PJM, but the volume of reserves procured by the four German transmission system operators is comparable to the size of the ancillary market in PJM, according to an analysis by EnerNOC.
EnerNOC bought its way into the German market earlier this year with the acquisition of Entelios, one of the European demand response leaders. EnerNOC already claims a large presence in Australia, a relatively mature DR market, and is moving aggressively into Ireland and the U.K. as well. But it is Germany that has the most immediate potential.
“In the next two years, Germany is our biggest opportunity in Europe,” said David Brewster, president and co-founder of EnerNOC. “But we’re very bullish about the opportunity for demand response in Europe as a whole.”
In 2010, very little of EnerNOC’s revenues came from outside the U.S. By 2013, about 20 percent of revenues came from abroad. While the U.S. market and the Australian market grew out of the need to shave off kilowatts when electricity demand is high, the German market is all about balancing services.
There are three levels of ancillary services in Germany, Brewster explained: primary, secondary and tertiary.
Primary is closest to the practice of frequency regulation in the U.S., which requires dispatch times of just a few seconds. EnerNOC is mostly working in the other two markets: secondary, which is similar to operating reserves and requires a five-minute response time, and tertiary, which requires a fifteen-minute response time.
The secondary and tertiary markets each need about 2,500 megawatts in each direction, positive and negative, said Brewster. Germany is unique in that it needs demand response providers both to drop load (positive DR) to balance the grid and to increase load (negative DR) when there is more renewable power than the grid can handle. In total, about 10 gigawatts of load are needed.
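Brewster’s tally is easy to sanity-check; a minimal sketch (the 2,500-megawatt-per-direction figure comes from the article, the rest is unit conversion):

```python
# Two ancillary markets (secondary and tertiary), each needing ~2,500 MW
# of both positive and negative demand response.
per_direction_mw = 2500
directions = 2   # positive (drop load) and negative (increase load)
markets = 2      # secondary and tertiary

total_gw = per_direction_mw * directions * markets / 1000
print(total_gw)  # 10.0 gigawatts, matching the article's total
```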
The market potential is there, but it’s early days. “It’s a major evolution just [to allow] demand response to participate in all of these services,” said Brewster. Currently, demand response is participating in the markets as part of “innovation projects,” but it has not been codified in the market by the energy industry laws.
“With the newly elected German government only now becoming really active,” Brewster said, “we are working to have demand response formally included in the legislation.”
There are other barriers, as well. Commercial and industrial customers must secure permission from the electric supplier to participate in demand response, because the suppliers are responsible for balancing their supply and demand within their portfolio. Other European markets, such as Belgium, have made modifications to make DR participation easier. But it has not happened yet in Germany.
Another problem, Brewster explained, is that customers providing negative DR could exceed their contracted demand levels, triggering grid usage fees that are similar to demand charges that businesses face in many parts of the U.S. Aggregators like EnerNOC would like to see customers exempted from those fees while they’re providing grid balancing services.
Lastly, demand response will compete with natural gas power plants in the ancillary services market -- potentially eroding their profit margins even further. That may cause some resistance to increasing demand response.
Despite the challenges, EnerNOC is bullish about the market opportunity. “We view Germany as a linchpin of our European strategy,” said Brewster. “We have a great opportunity to export our technology and business model to the rest of the world.”
Could solar paired with battery storage make utilities obsolete? That's the question analysts are asking as the economics of the technologies improve.
The short answer is "No." Utilities operate an extremely valuable set of infrastructure and equipment, and still benefit from legacy customer relationships. The economics of leaving the grid entirely are usually not favorable either.
But it's an important question to ask, says Jon Creyts, managing director of the Rocky Mountain Institute, because it can guide how utilities manage future consumer empowerment.
In this podcast, we'll talk with Creyts about Rocky Mountain Institute's latest report on grid defection, and examine the different scenarios for solar and storage that could impact utility relationships with customers.
The Energy Gang is produced by Greentechmedia.com. The show features weekly discussion between energy futurist Jigar Shah, energy policy expert Katherine Hamilton and Greentech Media Editor Stephen Lacey.
Solar power is on a tear. Cumulative solar photovoltaic electricity production is about to reach 1 percent of total global electricity production, according to a new report from the International Energy Agency.
In just a decade, solar power has gone from being a fringe technology for greenies to an almost-mainstream source of power, due to its increasing cost-effectiveness in many countries around the world. This is a remarkable evolution and demonstrates well why Citigroup recently stated that the “age of renewables” is upon us.
I’m going to go even further in this article, however, and argue, as I did recently with respect to electric vehicles, that this 1 percent of global solar penetration is far more important than you might think.
Only 1 percent, you say? That’s tiny. But that 1 percent is actually halfway to the goal of market dominance when we consider recent growth rates and the likely growth rate in the future. I’ll explain below.
Figure 1: Global Solar PV Capacity, Annual and Cumulative
Ray Kurzweil, an American inventor and entrepreneur, formulated what he called the 'law of accelerating returns' after having studied numerous technology adoption curves. The classic example concerns computing power: Moore’s law holds that computing power will double about every two years while maintaining the same cost.
Nearly 50 years after Intel co-founder Gordon Moore made this observation, the law holds true, though we’re actually doubling now about every year. Kurzweil discovered, however, that Moore’s law is just the latest incarnation of a much longer trend, as the figure below shows.
Figure 2: Kurzweil on the Fifth Paradigm of Increasing Computing Power
Source: Ray Kurzweil
So how can 1 percent of all global electricity sales suggest in any way that we’re on the road to a world dominated by solar?
Here’s why: in terms of doubling capacity, 1 percent is halfway between nothing and 100 percent. That is, it takes seven doublings to get from 0.01 percent to more than 1 percent, and another seven doublings to get from 1 percent to 100 percent. Doubling one gives two, doubling again gives four, and so on, reaching 128 after seven doublings.
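The doubling arithmetic is simple enough to verify directly; a quick sketch using the article’s figures:

```python
# "1 percent is halfway": seven doublings take 0.01 percent past 1 percent,
# and seven more take 1 percent past 100 percent.
def after_doublings(start_pct, n):
    """Value of start_pct after n doublings."""
    return start_pct * 2 ** n

low = after_doublings(0.01, 7)   # ~1.28 percent: just past 1 percent
high = after_doublings(1, 7)     # 128 percent: past 100 percent
print(low, high)
```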
Kurzweil’s best example of this law is the Human Genome Project. This was a government-funded effort to decode the entire human genome in about fifteen years. Halfway through the project, the effort was only at about 1 percent completion. Many observers wrote off the effort as a failure that couldn’t possibly reach its goal.
Kurzweil, however, wrote at the time that the game was won, because 1 percent was halfway to 100 percent in the terms that actually mattered: the rate of improvement in sequencing technology.
And he was right. The Human Genome Project finished slightly ahead of schedule and helped bring down the cost of genome sequencing by orders of magnitude, while also dramatically reducing the time sequencing takes. We can now pay about $1,000 to sequence our own genome in a matter of days.

Applying the Law of Accelerating Returns to Solar
The average global growth rate for solar over the last five years has been around 33 percent. Leading research firms estimate between 42 gigawatts and 48 gigawatts installed in 2014. IEA calculates that solar PV will produce 160 terawatt-hours in 2014, or 0.85 percent of global demand.
Seven doublings of 0.85 percent get us to about 109 percent. So what is the doubling time at 33 percent growth? There’s a simple cheat formula: 72 divided by the growth rate gives the rough doubling time. So 33 percent leads to a doubling time of a little more than two years (2.2 years).
Seven doublings at 2.2 years each comes to 15.4 years, at which point we would theoretically reach a solar penetration of more than 100 percent of current global electricity demand, in 2029. (Of course, global electricity consumption is going to increase in that time, but for simplicity’s sake, I’ll limit my analysis to today’s level of consumption.)
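The back-of-envelope projection can be reproduced in a few lines (the 33 percent growth rate and 0.85 percent starting share are the article’s figures; the rule of 72 is a rough approximation, not an exact doubling formula):

```python
# Rule-of-72 sketch of the doubling-time projection.
growth_rate_pct = 33                              # recent average annual growth
doubling_years = round(72 / growth_rate_pct, 1)   # ~2.2 years per doubling
years_needed = 7 * doubling_years                 # seven doublings still to go
target_year = 2014 + years_needed                 # lands around 2029
share_pct = 0.85 * 2 ** 7                         # 0.85% doubled seven times

print(doubling_years, years_needed, int(target_year), share_pct)
```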
Will this happen? Of course not. The growth rate will surely slow, as it has already in the last couple of years. This is to be expected: new technologies often see rapid growth rates as they catch on, and then the growth rate slows as higher penetrations are reached.
If solar continues to grow in coming years at an average rate of “only” 15 percent, we will reach about 8 percent of current global electricity demand by 2030. This is good but not great, considering the magnitude of the problems we face in terms of climate change and the use of fossil fuels.
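The slower-growth scenario works out the same way by simple compounding (the 15 percent rate and the 0.85 percent 2014 share are the article’s figures):

```python
# Compound 0.85% of global electricity demand at 15% a year, 2014 to 2030.
share_2014 = 0.85       # percent of global demand in 2014
annual_growth = 0.15    # "only" 15 percent average annual growth
years = 2030 - 2014     # 16 years of compounding

share_2030 = share_2014 * (1 + annual_growth) ** years
print(share_2030)  # roughly 8 percent of today's demand
```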
So are we going to see a declining rate of growth, steady growth, or perhaps even an increasing rate of growth as solar becomes more affordable around the world? The IEA report shows that only about a dozen countries were responsible for the vast majority of solar PV growth in 2013. Will the solar revolution catch fire in the rest of the world? It appears it already is.
The center of gravity in PV installations in 2013 has swung away from Europe. But Asia, Latin America, the Middle East and North America are all budding (in some cases, exploding) markets that are keeping global growth steady.
We can’t forget our own backyard in this discussion. Major investment banks are finally starting to see the potential for solar in the U.S., including Morgan Stanley and Citigroup. America became the third-largest installer of PV in 2013, behind only China and Japan. We installed nearly 5 gigawatts of solar, enough to power over a million homes.
The U.S. also regained the top spot in Ernst & Young’s Renewable Energy Country Attractiveness Index in 2013. Combined with the increasing realization that solar PV is affordable, and with natural gas prices continuing to rise (currently about $4.50/MMBtu, sharply up from a few years ago), we can expect to see far more solar in the U.S. too. And that’s a very good thing.
However, as solar reaches higher penetration levels in various countries and grids, integration issues will become more pronounced. I’m currently living in Hawaii, on the Big Island, where solar penetration is already more than 10 percent. That has slowed deployment in some areas because local utilities are worried about power quality and reliability.
High penetration of renewables can be managed in various ways, but energy storage technologies are the most promising long-term solution. Already, cost drops for batteries are below the Department of Energy's targets. And recent analysis from the Rocky Mountain Institute shows that hybrid solar-storage systems will be competitive with conventional electricity in large portions of the U.S. by the end of the decade.
Summing up, under Kurzweil’s law of accelerating returns, we have reached, after over 40 years of development of solar PV technologies, the halfway point to market dominance. We are likely to see the second half of this journey take less time than the first half. Cost will probably not be the primary hurdle after a certain point. Rather, integration issues and the stranded costs of existing energy infrastructure will be the larger problems.
Future-thinking policymakers and regulators should learn from past adoption curves and recognize the potential for these technologies. We wouldn't want ratepayers getting stuck with old, room-sized computers (i.e., coal plants) when they could have the coolest, nimblest new device (i.e., solar) with the same delivery potential.
Tam Hunt is owner of Community Renewable Solutions, a consultancy and law firm specializing in community-scale renewables. Community Renewable Solutions can help developers navigate this complicated field and provide other development advice relating to interconnection, net metering, procurement and land use.
Insanely complex math, and never enough data -- that’s the conundrum in trying to model the ebb and flow of solar power and energy storage on the grid edge.
Large-scale transmission systems are well modeled. But the majority of the grid below the substation provides little to work with beyond original engineering specs -- and hopefully, up-to-date maintenance and replacement records -- to turn into useful data for software platforms.
That makes it very difficult to capture or predict voltage anomalies, reactive power problems and other disruptions coming from customer-side energy assets like rooftop solar. It also makes supply-demand forecasting to optimize the interplay of solar power, energy storage and grid interconnection requirements almost impossible. Even so, with smart meters, grid sensors and advanced inverters starting to populate the grid edge, there are new tools that could make this micro-scale grid modeling a truly useful tool.
Power Analytics, a supplier of complex power flow modeling software for customers like the U.S. Navy and the FAA, believes its new platform is ready to take on the challenge. It’s called Paladin PV, the newest piece of the San Diego, Calif.-based company’s Paladin software suite. According to Kevin Meagher, Power Analytics’ CTO, it has the “potential to significantly change the landscape” for distributed solar’s integration into the grid.
Paladin PV stems from work Power Analytics has been doing for the Department of Energy’s SunShot program since 2011. The first part builds on the power flow modeling it’s done with the University of California, San Diego, which has a world-class microgrid including combined heat and power, solar PV and batteries.
The second part adds a key component of the solar-equipped grid: the inverters that convert DC-powered solar panels and batteries to grid-ready AC.
“We came up with a [method] to mathematically model an inverter in two ways,” said Meagher. "First as a real generator on the power line, and second to integrate the solar irradiance data” that informs just how much sunlight is being converted into the “fuel” that powers all that PV.
Here’s a diagram that represents the working parts of a Paladin PV system:
The first part of the equation is the “Total Sky Imager” system used by UCSD for its SunShot-backed high-penetration solar portal program. The camera and hemispherical mirror system captures cloud cover as it moves across the sky, with a panoramic aspect that can see what’s coming and predict just when it’s going to start shading that PV array.
“We’re talking about solar forecasting in less than fifteen minutes,” compared to more typical hourly forecasts from satellite data, he said. That localized advanced weather warning allows planning ahead for sudden drop-offs or surges in generation, and the effects that has on local grid conditions like voltage and power factor changes. It also could offer up to 98-percent-accurate forecasts of what a solar array is going to produce at any given point in time, according to a Power Analytics report.
From there, Paladin PV links that irradiance data to its models of how each PV inverter converts that solar “fuel” into electricity, Meagher said. That starts with generic inverter calculations, and is trued up by modeling the actual inverters being used in combination with the actual cable sizes, efficiency losses across different operating parameters, and other tricky real-world calculations, he said.
That includes things like voltage, current, and power factor conditions for the project developer and operator. It also includes a real-time tally of how those are affecting the utility at the grid interconnection. The final element is energy storage -- analyzing how batteries and inverters interact with the system, and figuring out how many of them are needed at any given moment of the day, projected out into weeks, months and years into the future. That enables the system to “come up with a size for the energy storage that’s a lot more precise,” said Meagher.
These aren’t simple tasks, as this generic inverter power flow model from Power Analytics DesignBase software indicates:
The goal is to provide a software representation of a real-world system, one that’s directly tied to the real-time operations for keeping that system in balance.
“That kind of accuracy is the basis for the range of planning that would be part of a study, and would be part of an interconnect agreement with a utility -- and when we’re in the real-time model, that same platform is operating,” Meagher said.
With other projects, the modeling for utility interconnect approval and the real-world monitoring and control schemes are separate affairs. That leaves quite a bit of uncertainty between what the model predicts should happen, and what’s actually going on, and lessens the value of that model as a real-world operating asset.
With Paladin PV, by contrast, “We have a very accurate mathematical expectation of what we should be seeing, moment by moment,” he said. “If we don’t see that -- if it’s off by a couple of percentage points -- we know where to look and why.”
This isn’t a concept unique to Power Analytics, although it's the kind of expertise that won that company a spot on Greentech Media's Grid Edge 20 list of companies to watch. We’ve been covering the innovations happening on the distributed, two-way grid edge, with startups like Spirae, Integral Analytics and Smarter Grid Solutions working alongside grid giants like Alstom, ABB, General Electric, Hitachi, Siemens, Schneider Electric and Toshiba to sense, model and manage these new distributed systems.
Solar forecasting as a new technology market is on the radar of companies ranging from weather data providers to third-party PV aggregators like SolarCity and Sungevity, or solar metering providers like Locus Energy and Itron.
As far as deploying Paladin PV goes, Power Analytics is doing the software and consulting, and relying on design and implementation partners to build its automatic control capabilities, Meagher said. That involves partners like Colorado-based Homer Energy, a spinout of DOE’s National Renewable Energy Lab (NREL), which focuses on financial modeling to prove out the value of different combinations of assets for microgrid deployments.
But the kind of model-to-operations integration that Paladin PV is promising is also becoming more important for traditional large-scale solar projects, he said -- perhaps not today, but in the near future.
“If it’s a utility buying power from you, and you can’t show the ability to respond to cloud cover and ramp rate control, it significantly [decreases] the value of it on the PV side,” he said. That’s easier to contemplate in megawatt-plus PV installations, where the costs of boosting the system are matched by economic or interconnection incentives to do so.
Smaller projects are harder to pencil out, but “because of the drop-off of tax credits and stimulus funds, people are more interested in the performance of PV” on that scale as well, he said. Islands like Hawaii and Puerto Rico are already starting to demand that solar farms provide more stability, and that they give standby resources enough warning to ramp up and down to meet fluctuations when they do occur. California is driving smart inverter regulations that could unleash a range of voltage and reactive power support services from the everyday solar PV array.
At the same time, better modeling and more accurate forecasting could help push far more distributed rooftop PV into the grid than utilities think is possible today, said Meagher. “For years and years, everybody talked about 15 percent, which is kind of an arbitrary number,” as a limit for how much solar a local section of the grid could safely handle. That artificial limit has already been breached, however, with no dire effects -- “they’ve already gotten there in San Diego, a lot sooner than they expected,” he said.
The problem is, “they don’t have any data to tell them what’s going on,” he said. “The work that SunShot has done has been to greatly increase this” amount of data available to boost PV’s modeled, forecast and managed performance, along with the smart inverter integration to smooth it out locally.
“Maximum penetration could be 40, 50, 60 percent,” he said. “The more you know about it, the more you can manage it on the margins.”
Conventional wisdom holds that thin-film PV module efficiencies will always be below those of crystalline-silicon technologies. That has been the case to this day.
However, according to GTM Research's newly launched monthly data service PV Pulse, successful execution of First Solar's ambitious roadmap for its thin-film cadmium-telluride (CdTe) technology could see it surpassing standard multicrystalline silicon (c-Si) module efficiencies by the end of 2015.
As the April 2014 edition of the Pulse shows, current average commercial module efficiencies for multi c-Si modules at the end of 2013 ranged from 15.0 percent to 15.2 percent, almost 2 percentage points higher than First Solar's fleet average efficiency of 13.4 percent. By the end of 2015, however, First Solar's roadmap sees average efficiencies of 16.2 percent, compared to 15.8 percent for multi c-Si.
Source: GTM Research PV Pulse, April 2014
By the end of 2017, First Solar is targeting average efficiency of 17.2 percent, which would place it on about an equal footing with best-in-class P-type mono c-Si technology. The goal for its best manufacturing line is even higher, at 18.5 percent. This would still place First Solar a fair bit behind industry leader SunPower, whose proprietary n-type back-contact technology already drives module efficiencies of over 20 percent; SunPower is targeting best-line commercial module efficiency of 23 percent by the end of 2015.
"First Solar's updated module efficiency roadmap is the fulcrum upon which successful execution of its aggressive module and system cost targets rests, since both module and BOS costs are highly influenced by efficiency improvements. While there is certainly a degree of execution risk to these ambitious targets, there is reason to be optimistic, given that the company has a consistent history of beating expectations on the technology front," said Shyam Mehta, Lead Upstream Analyst at GTM Research.
Check out GTM Research's PV Pulse for more insights on the solar industry.
FirstEnergy CEO Anthony Alexander traveled to Washington, D.C. this week to speak in front of the U.S. Chamber of Commerce about the challenges his utility is facing.
With electricity use flatlining and renewable energy eroding margins for traditional generators, Alexander was not there to call for more regulatory flexibility to help the utility industry embrace these technologies.
Instead, he called for a renewed focus on fossil fuels.
“We need to develop a national energy plan that will allow us to take advantage of our vast supply of domestically produced resources -- both coal and natural gas -- and our superior electric system to stimulate and support our economy,” he said in prepared remarks.
Strong promotion of renewables, said Alexander, is a threat to the electric system.
"In the electric utility industry, energy efficiency, renewable power, distributed generation, microgrids, rooftop solar and demand reduction are examples of what 'sounds good' -- and while they may all play some role in meeting the energy needs of customers, they are not substitutes for what has worked to sustain a reliable, affordable and environmentally responsible electric system."
Alexander was simply defending his business against a regulatory environment he perceives as a threat. But some don't think regulators are going far enough to prepare utilities for coming changes -- essentially casting what could be an opportunity as a grave threat.
“They have their heads in the sand,” said cleantech investor Jigar Shah, speaking about investor-owned utilities on a panel at the Bloomberg New Energy Finance Summit in New York City earlier this week. Shah added that, to his knowledge, most vertically integrated utilities are not working with top-level consultants to figure out what the future of their business will be.
Shah worries that regulators, particularly at the state level where most of the action happens, might be incapable of moving fast enough in the next five years to allow utilities to innovate and offer new energy services. So far, utilities are not clamoring for the opening up of markets so they can better compete.
Regulation comes in many forms: EPA power plant rules, state renewable portfolio standards, and FERC’s Orders 745 and 755, among many others. State regulators are arguably the most important for helping utilities earn a rate of return while competing in the clean energy market. But there are few indicators they are moving to catch up with the pace of technology adoption.
“The change that’s coming is [going to move] faster than the regulation will move,” said Jill Anderson, chief of staff at New York Power Authority, during the BNEF Summit panel with Shah. “We don’t need the same regulatory process. We don’t have time for the comments on the comments on the comments. We are at a time when we need to be much more agile.”
Research backs the notion that the change is coming faster than most regulators and utilities anticipate. A recent report on grid defection from the Rocky Mountain Institute found that a considerable number of utility customers could see “favorable defection economics” within a decade. However, utilities will likely see significant revenue decay before customers start defecting.
“Energy efficiency is undermining the utility business model faster than distributed generation [is],” said Shah. “Jill [Anderson] can’t react fast enough because the regulatory problem doesn’t allow her to. That is a big problem, but it’s not my problem.”
“If we do it the right way...customers, utilities and technology providers should [all] be able to benefit,” said Jon Creyts, managing director of Rocky Mountain Institute. Rate restructuring does not mean just addressing net metering for solar, but rather rethinking the value of electricity in broader terms of efficiency, resiliency and societal impacts.
Abigail Hopper, director of Maryland’s Energy Administration, is leading a task force on microgrids for her state, and is trying to move the needle on regulation. “I think there’s certainly an idea of opening a proceeding on utility 2.0 [in the Maryland public utility commission],” she said, although she cautioned that it wasn’t necessarily going to be taken up.
Creyts suggested that for the utility industry, it’s essentially still 1993. At that time, a heavily regulated AT&T was earning a steady but paltry rate of return. And then, after deregulation, AT&T earned a far greater rate of return. He sees a path forward for utilities that could be similar.
Shah disagreed with the idea that utilities would cannibalize their current business to embrace distributed resources. “When you think about what Verizon did to decimate its own business to move into mobile, no utility CEO in the U.S. will come even close to doing that to their business,” he said.
Some of the biggest calls for change have come from NRG Energy CEO David Crane. But he runs an independent power producer, not a vertically integrated utility. While NRG has diversified, it still owns 53,000 megawatts of generation, most of which is oil, coal and natural gas. “We need to pick up the pace more, and that is what we intend to do,” Crane recently wrote in a manifesto about becoming a consumer-centric energy company.
New York Power Authority, which also is not a vertically integrated utility but a state power organization, is far more nimble in its response to the change. “We have to demand more flexibility in our regulatory structure,” said NYPA’s Anderson. “We need to be more experimental and accept some failures.”
Granting that flexibility needs to become the goal of regulators, argued Creyts. “Everyone needs to get used to a different cost point, different market players and a different type of service. We can remake the utility model.”
Even though some states, such as New York and California, have responsive regulators who are taking action, Shah questioned whether the utility of the future would come fast enough for customers. “There are a lot of utilities where this will happen in less than a five-year period,” he said. “I hope and pray the people in those states aren’t negatively affected.”
To hear more from companies transforming the industry and some of the utilities evolving their businesses, join Greentech Media for the Grid Edge Live conference June 24-25 in San Diego.
The Alliance for Solar Choice, a group made up of leading solar service providers, is a staunch defender of net energy metering. And that has brought it into conflict with solar advocates calling for a more precise "value of solar" calculation.
TASC’s defense of net metering divided stakeholders in Minnesota who eventually pushed through the nation’s first legislatively mandated and PUC-approved state value-of-solar tariff. In South Carolina, TASC’s intervention could halt compromise net metering legislation crafted by a coalition of advocates and local utilities looking to create a value-of-solar formula.
TASC wants to protect net metering, which credits rooftop solar owners at the retail rate for electricity delivered to the grid. “This is a stable and highly successful policy,” TASC Executive Director Anne Smart has written. "We need to maintain net metering, not ‘fix’ it.”
Net metering is key to the third-party ownership (TPO) business model leveraged by TASC founding members SolarCity, Sungevity, Sunrun and Verengo. By effectively making use of federal tax credits, net metering and investment funds, TPO companies have driven unprecedented U.S. solar growth over the last two years.
But utilities across the country are pushing back against net metering policies, saying the practice forces them to shift costs for maintaining grid infrastructure to non-solar owners. Advocates of net metering say many of the utilities' claims are political, not based on actual cost shifts. Even in states where solar penetration is virtually nonexistent, some utilities are still attempting to change the policy.
A value-of-solar tariff is a more detailed valuation of solar. As laid out in a presentation from Minnesota clean energy advocate John Farrell, a VOST credits system owners for:
Source: Institute for Local Self Reliance
Utility challenges to net metering across the country are increasingly pushing regulators into value-of-solar proceedings. Many solar advocates think the future of solar can only begin when the right VOST is agreed upon.
“If Minnesota utilities report favorably on the value of solar,” Farrell notes in the presentation, “it may change the debate in other state battlegrounds over distributed generation.”
Because it is a rigorously determined, transparent, and regulator-approved value, a VOST should eliminate utilities’ arguments about cross-subsidies, according to Karl Rabago, a former Texas regulator and utility executive who created the first U.S. VOST in Austin, Texas.
But VOST is a “buy-all, sell-all” approach in which solar owners sell “100 percent of the energy they produce back to the utility,” and then “buy 100 percent of the energy they need from the utility,” Smart wrote.
That generation sold creates “taxable revenue” for the system owner, Smart explained, and according to a legal opinion cited in a recent TASC regulatory filing, potentially makes system owners “ineligible” for the 30 percent federal Investment Tax Credit.
An effective VOST design avoids making solar owners sellers of electricity and preserves their state and federal tax benefits. That is how both the Austin and Minnesota VOSTs were designed, Rabago said.
“Legal opinions provide guidance on how to set things up,” he added. “Thanks to TASC, we have some guidance from a national law firm.”
Source: Institute for Local Self Reliance
Utilities are just trying to “protect their bottom line” by setting the VOST rate, Smart wrote. After the first three years of the 25-year VOST term in Minnesota, she added, “utilities can recalculate the rate annually. [...] Solar companies will find it difficult to sustain growth and secure financing while facing yearly rate uncertainty.”
A VOST requires good-faith engagement and oversight on the part of electric utilities, Rabago said. “Utility regulators must do their job.” This means transparent ratemaking. “Well-run, open stakeholder processes, like those used in Austin and Minnesota, were vital to generating results.”
Smart predicted that Minnesota will see a three-year solar boom, followed by a bust when rates begin to fluctuate. If the retail rate rises, the VOST rate could even fall below it, diminishing the value proposition of a solar installation. “That's an unstable market that doesn't support the long-term, high-impact growth that solar is poised to achieve with net metering," she said.
“The boom-and-bust [cycle] only comes if there is no long-term commitment,” Rabago added.
A more legitimate concern is that the VOST value could be too low or too high if retail rates or technology costs change, Rabago acknowledged. “There is a [strong] connection between evolving rates and NEM. That’s why I included an annual adjustment in my value-of-solar” calculation.
A value-of-solar analysis “is something we desperately need,” said Rabago, who has been involved in ratemaking for 25 years. Retail rates are set by regulators using past and short-term future data. “The right price mechanism for a forward-looking technology is not backward-looking costs.”
But TASC is determined to keep net metering, a policy it says is not broken, in place to ensure consistency.
“Across the 43 states where net metering is in place, the solar industry supports thousands of jobs and hundreds of megawatt-hours of clean, distributed solar energy,” Smart wrote. “If rooftop solar is to continue its record-breaking growth, VOSTs are not the path to success.”
Solar advocates are all attempting to create a sustainable market. But with an ever-growing number of voices in the industry, there are many different opinions on how best to do it.
“There are more people today in the solar boat,” Rabago said. “Sometimes it will appear that we are not all rowing in the same direction. But we all know where we want to go.”
Prices for solar PV modules delivered to the U.S. by Chinese suppliers could increase by as much as 20 percent by the end of 2014 due to supply constraints, rising input costs and the ongoing U.S.-China solar trade case, according to GTM Research's new report on solar pricing, Global PV Pricing Outlook: Q2 2014.
"Prices for modules produced by Chinese suppliers have historically been significantly lower than those manufactured in other regions. By the end of 2014, however, this may not be the case. Already, Chinese firms are quoting module pricing in excess of 80 cents per watt for delivery in the second half of 2014, compared to levels of 70 cents per watt at the end of 2013," said report author Jade Jones, GTM Research Solar Analyst.
The primary driver behind the likely price increases, according to Jones, is the ongoing U.S.-China trade case, which has already led to import duties on China-produced solar cells. Further duties on China-assembled modules as well as Taiwanese solar components would push up U.S. pricing beyond current levels, as Chinese firms pass tariff-induced penalties onto customers, or resort to contracting out cell and module production to OEM vendors based in higher-cost regions such as India, South Korea and Malaysia.
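The pass-through arithmetic behind those price increases can be sketched simply. The base price and duty rate below are illustrative assumptions for the sake of the example, not figures from the report:

```python
def landed_module_price(ex_factory_price, duty_rate):
    """Delivered price per watt after an ad valorem import duty
    is passed through in full to the customer (an illustrative
    simplification, not the report's pricing model)."""
    return ex_factory_price * (1 + duty_rate)

# A hypothetical 70-cent/W module facing a hypothetical 15% duty
price = landed_module_price(0.70, 0.15)  # $/W delivered
```

Even a modest duty, fully passed through, closes much of the historical gap between Chinese-made modules and those from higher-cost regions.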
With Chinese suppliers shipping almost 3 gigawatts of modules into the U.S. in 2013, this development could result in meaningfully higher solar costs for U.S.-based developers and their customers.
Additional findings from the report:
-- Global polysilicon spot pricing increased 15 percent quarter over quarter to $21.20 per kilogram at the end of Q1 2014.
-- GTM’s Q4 2014 base-case forecast estimates polysilicon prices reaching $24 per kilogram and wafer prices reaching 26 cents per watt.
-- Current Tier-1 Chinese module pricing across the globe ranges from 56 cents per watt in Chile to 80 cents per watt in the EU.
-- Pricing for Japanese modules in the Japanese residential market is still as high as $1.50 per watt, but has fallen below $1 per watt in the commercial segment.
The full report, which assesses short- and long-term pricing dynamics in PV polysilicon, wafers, cells, and modules by region, supplier and technology, is part of a quarterly report series by GTM Research and can be accessed here.
Report co-author Shyam Mehta will be presenting on "The Module Market Landscape" next week in Phoenix. See the complete Solar Summit agenda here.
After the Tohoku earthquake in March 2011, Japan was in a seemingly impossible situation. A tremendous amount of conventional generation capacity, including the entire nuclear fleet, was unavailable, and the country faced the risk of power cuts during summer consumption peaks.
But miraculously, or seemingly so, in just a few short weeks Japan managed to avert the rolling power cuts that many believed inevitable. Even more impressive, the Japanese have turned these emergency measures into lasting solutions.
So how'd they do it without forcing people back to the Stone Age? Japan overcame this daunting task by tapping the cheapest and most widely available source of energy: energy efficiency and conservation.
Much of the electricity savings were initially driven by a popular movement known as "Setsuden" ("saving electricity"). This movement emerged to encourage people and companies to conserve energy and prevent rolling power cuts. Simple measures such as increasing temperatures in homes and offices, "thinning" lighting by removing some of the bulbs and tubes, shutting down big screens and cutting exterior lighting enabled Japan to dramatically reduce power demand almost overnight (albeit at the cost of a small amount of personal comfort).
In addition to these measures, the dress code in offices was eased to reduce the need for AC, while commercial facilities were audited to identify potential savings.
These temporary measures have proven to have long-term impact. They've dramatically increased the awareness of energy use and energy efficiency, and large companies are running high-profile efficiency programs. Consequently, power consumption never rebounded with GDP growth because energy-conscious practices became ingrained. More importantly, there is huge potential for technical measures to reduce energy use even further.
More surprising is how far off pundits were about the impact. Some made dire predictions about the need to replace the nuclear fleet with "cheap coal" (a myth we debunked here). A combination of commonsense energy-saving measures that began as temporary behavioral changes has led to permanent efficiency gains. In the process, the Japanese people and the country's business community proved the punditry wrong.
The key lesson from the Japanese experience is that coal plant construction is simply too slow to be relevant in the modern world, where resiliency is highly valued. To cope with rapid loss of generation capacity, Japan needed fast, nimble and modular 21st-century solutions. That means efficiency and clean energy. Despite major hurdles to deploying these solutions -- mostly due to a complete absence of renewable energy policies prior to Fukushima -- solar power surged in 2013, blowing away earlier predictions.
In contrast, coal power projects proposed in the wake of Fukushima are still sitting on the drawing board. By the time these plants come online, their output could be rendered obsolete by the rapidly dropping price of renewable energy. Worse, these investments lock Japan into a volatile international coal market. Japan could learn from India's recent imported coal debacle -- the disastrous Tata Mundra project -- to understand what that market can do to energy security.
Energy security is important. But aligning energy investments with the need to address climate is an even more pressing concern. Replacing half of the nuclear fleet with efficiency and the other half with fossils (mostly gas) is not enough for an advanced country like Japan.
Global greenhouse gas emissions need to peak quickly, and Japan must begin reducing its emissions. The easiest and most important step it can take is giving up on the premise that new coal plants are needed.
Lauri Myllyvirta is an international campaigner with Greenpeace. Justin Guay is director of the Sierra Club's international coal program.
Energy efficiency remains far cheaper than investing in additional generation.
That’s the case even in states where a long history of energy efficiency has pushed up the cost of squeezing out additional savings.
Those are among the findings of two recently published studies, one by the Lawrence Berkeley National Laboratory (LBL), the other by the American Council for an Energy-Efficient Economy (ACEEE).
Together, the studies send “a signal to keep going” with more ambitious energy-efficiency efforts, said Rebecca Stanfield, a deputy director for policy at the Natural Resources Defense Council.
The LBL study collected data from 31 states; the ACEEE study is based on electricity data from 20 states and natural-gas data from 10 states. Both studies calculate what’s known as the levelized cost of saved energy, or the cost per kilowatt-hour of an efficiency measure when the upfront cost is spread over the projected lifetime of the investment.
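That levelized-cost calculation can be sketched as follows. The discount rate, measure lifetime, upfront cost and annual savings below are illustrative assumptions, not inputs taken from either study:

```python
def crf(rate, years):
    """Capital recovery factor: converts an upfront cost into an
    equivalent annual payment at the given discount rate."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def levelized_cost_of_saved_energy(upfront_cost, annual_kwh_saved,
                                   rate=0.05, lifetime_years=15):
    """Cost per kWh saved when the upfront cost is spread over the
    measure's projected lifetime (illustrative parameters only)."""
    return upfront_cost * crf(rate, lifetime_years) / annual_kwh_saved

# e.g. a $1,000 efficiency measure that saves 5,000 kWh per year
lcse = levelized_cost_of_saved_energy(1000, 5000)  # roughly 2 cents/kWh
```

With these hypothetical inputs the result lands near 2 cents per kilowatt-hour, the same ballpark as the studies' national averages.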
The LBL study put the average cost of saved energy at about 2 cents per kilowatt-hour. The ACEEE study estimated it at about 2.8 cents. The average cost of generating power from new sources, whether coal-fired plants or wind turbines, is typically at least two to three times that amount.
Source: ACEEE
The cost of saved energy in Midwestern states was especially low, relative to other regions of the country. According to the LBL study, electricity efficiency measures in the Midwest cost about 1.4 cents per kilowatt-hour. The authors of the ACEEE study calculated the cost of saved energy on a state-by-state basis and averaged over the years 2009 through 2012. It came to 2.6 cents per kilowatt-hour in Minnesota, 1.9 cents in both Iowa and Illinois, and 1.7 cents in Michigan.
By contrast, the comparable figures are 4.5 cents in California and Connecticut, and 4.8 cents in Massachusetts. Those states require their utilities to wring much more energy use out of the system, with the result that the utilities’ customer-financed efficiency programs compensate customers for increasingly costly efficiency technologies.
Even so, Stanfield said, “The interesting story is how much you can still get out of states that have been doing this for decades.”
The findings echo earlier research.
A 2012 report from Michigan regulators looked at energy use in the state and found there was plenty of potential for low-cost efficiency improvements. The analysts concluded that readily achievable improvements could reduce electricity use by 17 percent and produce $2.55 in savings for every $1 invested. They found that more-extensive improvements could shave a total of 40 percent off current electric consumption.
And in Illinois, the state Commerce Commission has ordered utilities to continue aggressive efficiency efforts despite complaints that few “low-hanging fruit” opportunities -- simple measures such as switching to high-efficiency light bulbs -- remain.
The seemingly greater payoff of efficiency in the Midwest could be due to a couple of factors. Midwestern states, in general, instituted energy-efficiency programs later than many New England and Western states. New efficiency programs, typically funded with utility money, usually pursue the cheapest measures first, meaning that the cost of saving energy at the outset is extremely low.
High-efficiency light bulbs typically are one of the first changes people make. While the average nationwide cost of saving energy is between 2 and 3 cents per kilowatt-hour, the cost of saving energy with more efficient light bulbs is more like seven-tenths of one cent, according to the LBL report. And not surprisingly, about 44 percent of energy saved in U.S. homes is due to the use of CFL or LED bulbs.
Efficiency programs generally move on to increasingly costly energy-saving measures. In states that have made the greatest strides in cutting energy use, the cost of saved energy is typically the highest.
Another factor pushing up the cost of saving energy is the pace at which a state requires its utilities to cut energy sales, Stanfield said. Massachusetts, for example, requires a 2.5 percent reduction each year in the amount of electricity sold. The comparable figure in Illinois is 1.5 percent.
“As programs get more aggressive, they are more expensive,” Stanfield said. But even given that, she added, “They’re not more expensive than generation.”
Between 2006 and 2011, administrators of efficiency programs tripled what they spent on cutting electricity use from $1.6 billion to $4.8 billion annually, according to the ACEEE study. Spending on natural gas efficiency increased from $300 million to $1.1 billion over the same period.
Another Berkeley Lab study forecasts that spending could roughly double again by 2025.
And as efficiency grows and matures, the financial facts may look a bit less compelling.
However, Stanfield said, “There’s enormous, enormous potential...even in states that have been at it for a while. We’re not even close to maxing out cheap energy efficiency.”