Headlines from Greentech Media

GE Threatens to Enter Fuel Cell Market, Compete With Bloom

Thu, 07/24/2014 - 12:00

Earlier this week, General Electric announced that it is initiating an entrepreneurial effort to commercialize its solid oxide fuel cell (SOFC) technology for megawatt-scale stationary power applications. Billion-dollar fuel cell startup Bloom Energy also works with SOFC technology at this scale.

GE has claimed a recent fuel cell "breakthrough" with an efficiency of 65 percent (when used with a Jenbacher engine) and an overall efficiency of up to 95 percent when waste heat is captured.

Johanna Wellington, advanced technology leader at GE Global Research and the head of GE’s fuel cell business, stated in a release, “The cost challenges associated with the technology have stumped a lot of people for a long time,” adding, “we made it work, and we made it work economically."

GE materials scientist Kristen Brosnan states that using an additive thermal spray technology "to deposit the anode and the electrolyte" makes it "easy to apply, [allowing it to] handle large temperature swings and...last a long time." That's the "game-changer" claimed by Wellington.

Martin LaMonica reported on GE's SOFC breakthrough in IEEE Spectrum a year ago. He wrote that GE was combining "proprietary fuel cell technology with its existing gas engines" to replace diesel generators, and noted that the fuel cell technology was meant "to work in tandem with GE's engines." At the time, he wrote that the system was still a few years away from commercial availability and was aimed at customers outside of the U.S., where natural gas prices are higher.

Wellington's predecessor as director of GE Global Research, Mark Little, also claimed a "real breakthrough in fuel cell technology" at the time.

GE has been working on SOFC technology for decades. Here's an SOFC slide from GE dating back to 2003, about the same time that Ion America was changing its name to Bloom and beginning to invest $1 billion in the development of its still-unprofitable fuel cells.

GE plans to build a pilot plant and development facility near Saratoga Springs, New York. There are seventeen people currently involved in the program, according to the website GigaOm.

Vlatko Vlatkovic, chief engineering officer of GE’s Power Conversion division, said in an interview with Bloomberg, “It’s almost impossible to do a good fuel cell without platinum as a catalyst,” which is why GE went the SOFC route. But Vlatkovic also said that an actual product is still “very far off."

GE, a conglomerate, had $146 billion in revenue last year. Michael Kanellos, one-time editor-in-chief at GTM, suggested that it would be a conglomerate like Siemens or GE that would eventually take fuel-cell technology to market, rather than a startup like Bloom. Another conglomerate, UTC, which has an extensive fuel-cell pedigree, actually paid ClearEdge Power $48 million to take UTC's own fuel cell business off its hands. See below for the eventual outcome of that transaction.

Other recent fuel cell news:

  • Doosan of South Korea, a conglomerate with $21 billion in 2013 revenue, just purchased the remains of bankrupt fuel-cell aspirant ClearEdge Power. The assets and debt sold for $32.4 million, according to the Yonhap News Agency, which reported that ClearEdge had 2013 revenue of $68 million. Dow Jones reports that the original bid was at a higher figure of $48 million. ClearEdge had raised more than $136 million in VC funding since its founding in 2006 from Kohlberg Ventures, Applied Ventures (the investment arm of Applied Materials), Big Basin Partners, and Southern California Gas Company to develop and build a proton exchange membrane (PEM)-based fuel cell for residential and small commercial applications at hotels, multi-tenant buildings and schools. In December 2012, ClearEdge acquired the fuel cell business of technology conglomerate United Technologies Corporation (UTC) and swapped out its core PEM product for UTC's phosphoric acid technology in its 5-kilowatt and 400-kilowatt offerings. The natural-gas-powered UTC 400-kilowatt fuel cell unit had a reputation in the industry as one of the higher-performing products. UTC Power was spun out of Pratt & Whitney in 1958 and supplied fuel cells to NASA for space missions from 1966 through 2010. Around the time of the April bankruptcy, Ballard Power acquired UTC's transportation- and stationary-related fuel cell IP assets for $22 million.
  • Earlier this month, PEM fuel-cell maker Intelligent Energy went public and raised $94.1 million. The company was valued at $811 million, making it the most highly valued publicly held fuel cell company in the world. We look a bit deeper at the fuel cell market here.

New Jersey Launches $200M Energy Resilience Bank for Microgrids and Distributed Generation

Thu, 07/24/2014 - 11:54

Following in the footsteps of its tri-state neighbors, New Jersey is establishing an infrastructure bank that is focused on energy resiliency.

The names of these institutions vary across Connecticut, New York and New Jersey, but the idea is similar: leveraging public and private capital and the authority of the state to fund energy projects that provide cleaner, more reliable sources of electricity.

The $200 million for New Jersey’s Energy Resilience Bank will come from the state’s Community Development Block Grant-Disaster Recovery allocation. Unlike the models employed in Connecticut and New York, it remains unclear at what point private capital will be brought into New Jersey’s ERB, or at what scale.

The bank will support distributed energy resources at critical facilities, with an early focus on water and wastewater treatment plants. After Sandy, New Jersey determined that only 7 percent of its total wastewater capacity has distributed generation that can be islanded.

“A large number of plants in the state have no existing distributed generation, and many of these facilities are good candidates for combined heat and power or other technologies,” according to the state’s action plan amendment.

The action plan states that the benefits of technologies such as fuel cells, combined heat and power and resilient solar “are indisputable.” Hospitals, emergency response centers, town centers, transit networks and regional high schools that could be used as shelters could also be targets for the bank. Potential projects could include microgrids, distributed generation, smart grid technologies and energy storage.

“Distributed energy resources proved extremely resilient following Superstorm Sandy; unfortunately, due to high initial costs, many critical facilities do not have these energy resilience solutions in place,” Michele Brown, CEO of the New Jersey Economic Development Authority, said in a statement.

The report also calls for microgrids that could link various critical facilities to island in the case of another disaster like Sandy. The state and the U.S. Department of Energy are already working together on a microgrid, NJ TransitGrid, which will provide critical power to part of NJ Transit and Amtrak’s system.

New Jersey will also be able to learn from its neighboring states, which were also hit hard by Sandy and are further along in terms of issuing financing for microgrid projects. GTM Research’s Microgrid Deployment Tracker has found that several microgrid projects were initiated following an extreme weather event or a string of costly outages.

New York opened up a $40 million microgrid competition in January as part of its broader post-Sandy plan. Microgrids, particularly CHP-based microgrids, are just one technology that could also be funded by New York’s Green Bank, which is eventually expected to have a capitalization of $1 billion.

Connecticut has arguably the most sophisticated state-led microgrid pilot program, which is being implemented in cooperation with local utilities. The Northeast, driven by Connecticut, has the largest share of community microgrids of any region in the U.S., according to GTM Research’s recent report, North American Microgrids 2014: The Evolution of Localized Energy Optimization. With New Jersey’s resilience bank announcement, the region will likely continue to lead in community-based microgrids for years to come.

The New Jersey ERB will be helmed by Mitch Carpen, who was most recently at the Bank of Tokyo-Mitsubishi in Singapore. His deputy director will be Thomas Walker, who is currently the bureau chief of engineering services in the division of energy at New Jersey’s Board of Public Utilities. The guide to program funds, which will outline the parameters for issuing the grants and loans, will be issued by the end of summer.

***

For more on how resiliency has emerged as a framework for energy investment and how the power sector has responded, download the free e-book from Greentech Media, Resiliency: How Superstorm Sandy Changed America’s Grid.


Why Residential PACE Is Growing in Spite of Opposition From Federal Housing Lenders

Thu, 07/24/2014 - 10:40

Last fall, California's governor and treasurer came up with a plan to solve a longstanding conflict with federal housing authorities over residential property-assessed clean energy (PACE) programs.

The plan didn't work. But does it even matter?

After years of near standstill, residential PACE programs are back on the upswing in California and other states. And while federal support will be critical for the program's success nationwide, experts say the promising clean energy financing model will continue to grow.

By leveraging local bonding authority, PACE supports loans for energy efficiency, water conservation and solar projects. Those loans get paid back through incremental increases to property taxes over twenty years. California pioneered the concept, and state officials have been highly supportive of its expansion.

But PACE hit a major snag in 2010, when the Federal Housing Finance Agency -- the body that regulates the nation's two biggest mortgage lenders -- came out in opposition to the program. Because PACE loans take precedence over a mortgage in case of default or foreclosure, the agency argued that they are too risky for lenders to support. It instructed Fannie Mae and Freddie Mac to stop underwriting mortgages for customers taking advantage of PACE loans.

California responded with a lawsuit, which eventually failed. Governor Jerry Brown and State Treasurer Bill Lockyer decided to take a softer approach by creating a $10 million loan-loss reserve that could pay back lenders in case a homeowner defaulted. That also failed to change FHFA's position. 

This spring, the agency wrote a letter to California officials informing them that the reserve fund was inadequate.

"The Reserve Fund does not sufficiently address the risks to the Enterprises [Fannie Mae and Freddie Mac] that we have previously described, and FHFA will continue our policy of not authorizing the Enterprises to purchase or refinance mortgages that are encumbered by PACE loans in a first lien position," wrote Alfred Pollard, the agency's general counsel.

Without the support of underwriters that account for two-thirds of the mortgage market, it might seem that FHFA's latest rejection is a death sentence for residential PACE.

But the program is far from dead. 

"It's a non-issue," said J.P. McNeill, the CEO of Renovate America, a PACE administrator who has executed 95 percent of all residential projects in California.

McNeill isn't dismissing FHFA's concerns. In fact, his company meets regularly with the agency to discuss the program. He just doesn't see the rule having a damaging impact on consumer decision-making.

"We’ve never had the view that PACE is dead. It's just that PACE is hard," said McNeill.

It's not illegal for homeowners backed by Fannie or Freddie to participate in the program. They are simply required to pay off the loan first if they move or refinance their mortgage. That may deter some homeowners from considering a PACE loan, but a lot of them are still making the decision to finance a retrofit through PACE.

Stacey Lawson, the CEO of Ygrene, another large PACE administrator, said in a previous interview that it all comes down to communicating the implications of taking on the loan: "There’s been a lot of fear, uncertainty and misinformation in the marketplace around the risk to homeowners. But it's really just a simple business decision they have to make."

McNeill agreed, saying that most customers are not aware of the drama playing out between PACE advocates and federal housing authorities. As long as homeowners understand that they are on the hook to pay off the balance of the loan in certain circumstances, they are generally OK with moving forward.

"Has the program been limited? Yes. By a lot? No," said McNeill. "The homeowner hasn't been immersed in the story. From their perspective, they just view it as another form of financing."

Over the last year, hundreds of millions of dollars have been raised for PACE programs in California, Connecticut, Florida and New York. And there are now 169 cities and counties in California that have adopted PACE.

Investors seem bullish on the growth potential of residential PACE as well. Last week, Renovate America raised $50 million in growth equity from Valor Equity Partners, Macquarie, RockPort Capital and Spring Creek. 

But those dollar amounts are nothing compared to the size of the mortgage market. So far, $350 million in residential PACE projects has been executed in communities across the country. Compared to the $5 trillion in mortgages underwritten by Fannie and Freddie, PACE is insignificant.

"We’re such a non-factor," said McNeill. "We are less than 1/100th of a percent compared to the capital deployed through Fannie and Freddie."

That might explain why FHFA's policy has been mostly limited to public statements, rather than a crackdown on homeowners or communities supporting PACE. (Some have worried that the agency would completely redline cities or towns participating in the program, but that has not happened.)

Rather than try to stir up conflict with federal regulators, McNeill's strategy at Renovate America is to "walk softly and get to critical mass."

"The idea is to execute projects and gather data in the hopes that we could show that PACE has a positive or neutral impact on lenders," he said.

Data showing energy efficiency's positive impact on mortgage lenders is emerging. An in-depth study released last year by the Institute for Market Transformation found that Energy Star-rated homes were 32 percent less likely to go into default than the average home. 

PACE programs have not been around long enough to accumulate the same kind of data. With a few more years of experience, McNeill and others hope to show that mortgage lenders don't need to worry about the risk. 

At this point, however, conversations with FHFA are all theoretical. In order to quantitatively prove that PACE is not harmful to Fannie and Freddie, many more projects will need to be executed. But if recent program expansion in California is any indication, there will eventually be a time when PACE administrators can argue their case to federal regulators with actual performance data.

"California is moving forward," wrote Jim Evans, a press spokesperson for Governor Jerry Brown, in an email. "We are continuing to work with the federal government and local governments to improve and expand the program."


YieldCos Are a Really Big Deal for the Clean Energy Industry

Thu, 07/24/2014 - 09:00

Up until recently, it was nearly impossible for most investors to directly support solar, wind and other projects that offer stable returns. But YieldCos are changing that.

In this week’s podcast, we’ll tell you about how publicly traded YieldCos work, why so many companies are forming them, and what they’ll do to support a surge in clean energy development.

Later in the show, we’ll talk about innovative new structures in energy efficiency finance and explore the energy supply implications of California’s historic drought.

This podcast is sponsored by eGauge Systems, a manufacturer of next-generation energy meters for solar generation and building demand, submetering, performance contracts, LEED projects and net zero buildings.

The Energy Gang is produced by Greentechmedia.com. The show features a weekly discussion among energy futurist Jigar Shah, energy policy expert Katherine Hamilton and Greentech Media Editor Stephen Lacey.


Why a Minimum Bill May Be a Solution to Net Metering Battles

Thu, 07/24/2014 - 08:00

In June, the solar energy industry was excited to learn that solar advocates and electric utilities had reached an agreement on solar policy in Massachusetts.

One of the changes House Bill 4185 would make to solar policy is subjecting all electric utility customers to a minimum bill. In a new report, The Minimum Bill as a Net Metering Solution, GTM Research explores the potential impact of the proposed Massachusetts minimum bill. This report excerpt explains how the minimum bill mechanism would work and presents one of our key findings: a minimum bill is preferable to a fixed charge for solar customers.

The minimum bill mechanism is a new approach that Massachusetts electric utilities hope will cover fixed costs while promoting solar, but without the same level of political rancor that fixed charges have spurred in places like Arizona. Based on channel checks and our reading of the legislation, our understanding is that a customer’s electricity bill would be calculated in the following manner under the minimum bill mechanism (a short sketch of the logic follows the list):

  • Calculate the customer’s monthly net energy use.
  • If the net energy use is positive, bill for the net energy use at the volumetric electricity rate.
  • If the net energy use is negative, bill the customer for zero kWh used and calculate the excess net metering credit. The customer pays the minimum bill charge for the month and carries over any excess net metering credit.
  • For the next month where net energy use is positive, apply any net metering credits down to the minimum bill charge. Carry over any remaining net metering credits.
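
For concreteness, here is a minimal Python sketch of that billing loop. It is our own illustration of the mechanism as described above, not language from the bill, and the default rate and charge values are simply the scenario figures used below:

```python
def monthly_bill(net_kwh, credit, rate=0.1733, minimum=10.00):
    """One month under the proposed minimum bill mechanism (illustrative
    sketch only). net_kwh is metered consumption minus solar production;
    credit is the dollar-denominated net metering credit carried in.
    (The scenario's $7 fixed distribution charge is not modeled here.)
    Returns (amount billed, credit carried forward)."""
    if net_kwh <= 0:
        # Net generator: bill zero kWh, pay only the minimum charge,
        # and bank the excess production as a net metering credit.
        return minimum, credit + (-net_kwh) * rate
    energy_charge = net_kwh * rate
    # Net consumer: apply banked credits, but only down to the minimum bill.
    bill = max(energy_charge - credit, minimum)
    credit_used = max(energy_charge - bill, 0.0)
    return bill, credit - credit_used
```

Running twelve months of a customer's consumption and production data through this loop is the sort of calculation behind the annual totals discussed below.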

In order to determine the impact of the minimum bill mechanism on a typical Massachusetts solar customer over the course of a calendar year, GTM Research analyzed the impact of a $10 minimum bill on an NStar customer with a 6.3-kilowatt rooftop solar system who has an energy consumption and production profile based on actual data from Genability. We assume a volumetric retail electricity rate of 17.33 cents per kilowatt-hour and a fixed distribution charge of $7 per month.

Under the $10/month minimum bill mechanism, the solar customer would pay $434.77 for the year.


[Chart omitted; source: GTM Research and Genability]

Now imagine Massachusetts imposes a $10 monthly fixed charge on solar customers instead. In this scenario, the typical Massachusetts solar customer would pay $458.77 for the year’s electricity.
 


[Chart omitted; source: GTM Research and Genability]

A $10 fixed charge on solar customers would result in the typical Massachusetts solar customer paying $24 more each year. 

The $24 difference is a result of the customer being charged an additional $3 ($10 fixed charge for solar customers minus the $7 monthly fixed distribution charge for all customers) in the fixed-charge scenario for each of the eight months the minimum bill charge is not triggered in the minimum-bill scenario. As a result, a minimum bill would likely be preferable to the typical Massachusetts solar customer, assuming the minimum bill is set at the same level as the fixed charge. 

***

The report, available to GTM Research solar clients, also includes a comprehensive overview of how the proposed Massachusetts minimum bill mechanism would work and analyses on the impact of the minimum bill on a typical Massachusetts solar customer’s electricity bill, the minimum bill’s effect on residential solar project economics, and the impact of different size minimum bill charges. If you have any questions or would like to gain access to the report, please contact sales@gtmresearch.com.


Lessons From New York: How Hurricane Sandy’s Aftermath Is Creating a Smarter Power System

Thu, 07/24/2014 - 06:00

It’s ironic that a storm -- one that caused widespread blackouts that left millions of Americans in the dark -- is finally helping us see the light. 

Hurricane Sandy brought devastation and loss to the Eastern Seaboard. The storm exposed the severe vulnerability of our electricity infrastructure and made global headlines as a harbinger of nature’s impacts in a climate-changed world. But beyond the shock, New Yorkers found a silver lining in the destruction.

The storm made it crystal clear that the existing electricity system is not suited to address the challenges of the 21st century. In response, New York state recently released a powerful report illuminating how it plans to create a more affordable, efficient and reliable grid.

Titled Reforming the Energy Vision, this game-changing document calls for a new approach to generating, managing and delivering electricity throughout New York. The state proposes to replace aging infrastructure by investing nearly $30 billion over the next decade to develop a smarter electricity system. State officials, seeing the performance and cost benefits, are moving quickly to put this vision into action.

Central to the new strategy is replacing two obsolete notions: that the model of centralized generation combined with long-distance transmission is the most cost-effective option, and that utility customers should only consume -- not produce -- grid services.

When it comes to the electricity system, Sandy helped to make it clear that bigger isn’t always better. To expand grid services, we have historically incentivized utilities to build large power plants and big transmission infrastructure. This has led to an inefficient and overly expensive electricity system. States across the country have built significantly bigger systems than necessary.

For example, New York’s electricity system uses just 60 percent of the electricity it is capable of generating, on average, because many power plants operate only a small number of hours each year when demand for electricity is highest. Additionally, roughly 10 percent of transmission-dependent power is lost because of inefficiencies associated with power traveling long distances. As a result, most New Yorkers pay more for energy than they should.

For a deeper look at how Sandy changed utility planning, read Greentech Media's recent e-book, Resiliency: How Superstorm Sandy Changed America's Grid.

Now, smart information systems, energy efficiency, and local renewables are challenging the centralized power paradigm in terms of performance and costs. These distributed energy resources can provide resilient, affordable electricity services driven by private investment and innovation. The Hunters Point Community Microgrid Project in San Francisco, being conducted by the Clean Coalition in collaboration with Pacific Gas & Electric, is proving the technical and economic viability of this distributed grid architecture.  

To unleash distributed energy resources in New York, officials are redefining the relationship between utilities, their customers, and the power grid. Rather than simply providing energy as a commodity, New York now sees its utilities as businesses that can be incentivized to provide electricity as a service. Moving forward, utilities will manage the grid as a platform where innovative businesses compete to provide grid services. The result can be drastically improved performance of the electricity system and reduced costs for everyone.

New York is on the right track and leading the nation toward a clean, efficient and reliable electricity system. But even more should be done. Smart, two-way meters in every home allow effective demand management, while competitive rates of payment to all licensed grid service providers can drive competition and innovation, which benefits consumers. Greater support for distributed generation will power economic growth, producing more jobs per dollar invested than traditional power plants. And deployment of local renewables can be accomplished at remarkable speed, enabling a transition to a zero-carbon electricity system in as little as two decades, according to our studies.

The U.S. electricity system is outdated -- and so are the policies that continue to guide its development. Although it’s not easy to change century-old paradigms and infrastructure, New York is embracing this challenge head-on. Hurricane Sandy was a devastating storm, but it is spurring a shift toward cleaner, cheaper and more reliable power. Hopefully, these winds of change reach well beyond New York’s borders.

***

Daniel M. Kammen is a Distinguished Professor of Energy at the University of California, Berkeley where he directs the Renewable and Appropriate Energy Laboratory. He has served as Chief Technical Specialist for Renewable Energy and Energy Efficiency at the World Bank. Craig Lewis is Executive Director of the Clean Coalition, a nonprofit leading the transition toward renewable energy and a modern grid.

For more on New York's evolution, listen to the Energy Gang podcast below:


Google’s $1M Challenge: A Laptop-Sized Solar Inverter

Wed, 07/23/2014 - 11:30

Back in May, Google announced its Little Box challenge, a $1 million prize for technology that can radically shrink the size and weight of inverters. This week, it opened the contest to applicants, announced the IEEE as a partner, and clarified just how small it’s hoping to get with the next-generation power conversion technologies.

According to Google’s Tuesday blog post, its goal is to shrink today’s picnic-cooler-sized DC-to-AC inverters, used to connect solar panels, wind turbines, electric vehicles and other grid edge devices, into something “the size of a small laptop, roughly 1/10th its current size.” In technical terms, that equates to a kilowatt-scale inverter with a power density greater than 50 watts per cubic inch.
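
As a quick sanity check on that target, consider the back-of-the-envelope calculation below; the 2-kilowatt rating is our own assumption for a "kilowatt-scale" unit, not a contest figure.

```python
# Back-of-the-envelope enclosure volume for Google's Little Box target.
power_w = 2000                            # assumed kilowatt-scale rating, watts
density_w_per_in3 = 50                    # contest floor: watts per cubic inch
volume_in3 = power_w / density_w_per_in3  # 40 cubic inches
volume_liters = volume_in3 * 0.0163871    # 1 cubic inch = 0.0163871 liters
print(f"{volume_in3:.0f} in^3 = {volume_liters:.2f} L")  # 40 in^3 = 0.66 L
```

A 40-cubic-inch box is indeed roughly the footprint of a small laptop, versus tens of liters for today's picnic-cooler-sized units.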

Google’s Little Box challenge specifications document (PDF) offers a lot more details, including minimum efficiency (95 percent), how hot to the touch it can get (60 degrees Celsius), and a slew of specs for how the devices can interact with the grid. But all of these are more or less standard solar inverter requirements -- it’s the size that matters most.

Getting today’s inverters down to laptop size will require some revolutionary new materials and design concepts, according to MJ Shiao, director of GTM's solar research. Today’s inverter manufacturers are pushing the upper bounds of efficiency, and "investing into the operational, value, feature side of the inverter may show more returns -- not that there's no value in investing in smaller footprints," he said. "Smaller, more efficient designs clearly have some return. Fitting more power into the same footprint can lead to lower electrical balance of systems and logistical costs."

Google concedes that it’s setting a high bar, and it’s enlisted the help of the IEEE Power Electronics Society to help judge and select the winners. Applicants have until July 2015 to get their submissions in. As for why it thinks that much smaller inverters are important, Google’s blog post cited their utility to “help create low-cost microgrids in remote parts of the world. Or allow you to keep the lights on during a blackout via your electric car’s battery. Or enable advances we haven’t even thought of yet.”

Of course, $1 million doesn’t go that far in terms of bringing a technical concept to lab demonstration, let alone commercialization. But a relationship with Google could be worth a lot more than that, particularly given its recent interest in power conversion technology. For the past few months, Google has been hiring power electronics engineers for a project dubbed the “Bottom Up Grid,” with the promise of getting to work on “advanced electrical power conversion and conditioning solutions that aim to fundamentally change the world of power.” According to news reports, Google has put former Department of Energy ARPA-E program director Arun Majumdar in charge of a power electronics group within the company’s Energy Access team.

Where could the breakthroughs come from? New wide-bandgap semiconductor materials are helping power electronics achieve better performance in smaller packages. President Obama dedicated $70 million to a power electronics manufacturing hub in North Carolina last year, and ARPA-E is funding a host of next-generation semiconductor research efforts.

One of them, Transphorm, has Google as an investor, and is using gallium nitride as a replacement for silicon in all manner of power electronics components. Silicon carbide is another material that promises to allow significant reduction in materials weight and cost. Google's Little Box web page lists semiconductor companies including Transphorm, NXP, Cree and United Silicon Carbide as places where Little Box applicants should go for information, and asks other wide bandgap device manufacturers not on the list to add their names as well.

Google’s interest in better power conversion technology isn’t just charitable. The company has invested more than $1 billion in solar and wind power projects in the past five years, and holds a 37.5 percent stake in the Atlantic Wind Connection, a $5 billion offshore wind transmission project planned for the East Coast. It’s also laying the groundwork for more active participation in the grid. In 2010, it received approval from the Federal Energy Regulatory Commission to act more like a utility in terms of onsite generation and power-purchase agreements. And, of course, it has been squeezing efficiency out of its data centers for years now, another area where tiny gains in power conversion efficiency can add up to huge savings.



Beyond the Rooftop: Commercial Net Metering in California

Wed, 07/23/2014 - 11:00

Net energy metering (NEM) in California lets businesses offset their on-site load through the same simple, straightforward tariff structure used by homeowners. A recent report from E3 called the NEM Impacts Analysis notes that commercial system interconnections might actually provide a net benefit to ratepayers, allaying utility concerns and protests regarding a supposed "cost shift" caused by NEM programs. In order to promote further growth in the commercial solar sector, new NEM options are being introduced to encourage customers to go solar.

Today, solar is truly cost-effective for many commercial customers with the right rate tariffs, load profiles and installation sites. Yet despite strong economics, thousands of farms, businesses and schools still have yet to unlock the value of onsite renewable energy generation. For example, while California’s farms and ranches generate nearly one-quarter of all on-farm clean power in the U.S., less than 8 percent of the state’s agricultural operations have invested in solar and other renewables. This demonstrates significant untapped potential, which could be stimulated by a robust NEM program that better serves commercial generators.

For California’s farms, in particular, solar can play an important role. The ongoing drought, which forces farmers to get their water from energy-intensive groundwater pumping, has caused energy costs to skyrocket. A recent UC Davis report estimated that farmers in the Sacramento/San Joaquin Delta will pay an aggregate total of nearly $500 million due to increased groundwater pumping. Solar has already helped alleviate these costs for hundreds of farms. It could play an even greater role in making California’s farms more resilient in the face of a drought.

So what is needed to encourage commercial solar growth in California? As the future of NEM in California is debated at the state’s Public Utilities Commission, let’s look at the opportunities available to encourage commercial-sector renewables growth.

Obstacles to growth

California’s slowly evolving NEM regulations have occasionally thrown up roadblocks to solar growth, with many of them landing squarely in the path of potential commercial sector customer-generators.

For example, regulations originally confined the net metering incentive to the meter connected to the solar installation. This system worked well for homes and small businesses, which generally have just one meter through which total energy usage and production can be netted.

But businesses with multiple meters were left out in the cold. Farmers and ranchers, for instance, can have dozens of meters spread across acres of property to serve irrigation pumps and storage sheds far from any centralized location. Industrial facilities often have separate meters installed for each building across a complex. Regulations required a separate set of panels to serve the load on each of these separate meters, which is clearly an overly complex and uneconomical solution.

Another barrier for commercial customers has been demand charges, which do not apply to residential systems but can make up anywhere from 30 percent to more than 50 percent of a commercial customer’s monthly bill. Even with a renewable energy installation that offsets all on-site energy usage, it is difficult to predict a reduction in demand charges, because demand could spike when the system has no energy output -- for example, in the evenings or when the sun isn’t shining. For agricultural operations with irregular pumping patterns or machinery needs, a single instance of high electricity demand can spike demand charges for the whole month’s bill. Meanwhile, utilities continue to steadily increase demand charge rates. These charges for California’s three largest utilities have risen 30 percent in the past three years alone.
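
To see why a single spike is so costly, consider the hedged sketch below; the $15-per-kilowatt rate and the load profile are hypothetical illustrations, not actual tariff figures.

```python
# How one 15-minute demand spike sets a whole month's demand charge.
# All numbers are hypothetical; real tariffs vary by utility and rate class.
intervals_kw = [42.0] * 2880      # a month of 15-minute demand readings
intervals_kw[1000] = 95.0         # one pump start briefly doubles peak demand
demand_rate = 15.00               # assumed demand charge, $/kW of monthly peak

baseline_charge = 42.0 * demand_rate             # $630.00 without the spike
actual_charge = max(intervals_kw) * demand_rate  # $1,425.00 with it
print(actual_charge - baseline_charge)           # the one spike costs $795.00
```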

Fortunately, many solar customers have learned they can match their peak demand to the times when their solar system is producing energy. This can reduce their demand charges significantly, yielding substantial savings beyond the reduction in energy charges. Without a way to value the peak capacity of a solar project and apply that to customer demand charges, this value is often not part of a commercial solar project’s financial projection.

Net metering regulations have also historically provided a disincentive to installing larger commercial projects, even though these larger installations are known to provide greater benefits to ratepayers. E3’s report NEM Impacts Analysis states, “NEM customers as a group pay their cost of service. This aggregate result is driven by a minority of large, non-residential NEM customers who significantly overpay their cost of service.”

Residential financing has exploded because the FICO score is a standard way to evaluate the risk that someone will stop paying their solar bill in the future. But because there is no commercial equivalent to the FICO score, every project must be individually evaluated for credit concerns, adding costs and time to the financing process. As NEM does not fairly compensate a system owner for backfeeding onto the grid, the projects are only evaluated on the basis of the host’s ability to pay their solar electric bill, and thus they require this credit evaluation process. If electricity tariffs accurately reflected the value of solar electricity produced by a system, there would be no need for this credit evaluation, because a system financier could be sure that they are still receiving value from producing the energy, even in a situation where the onsite customer no longer uses the solar energy.

Meter aggregation progress

In 2012, the California Legislature passed SB 594 (Wolk), which addresses some of the problems faced by multiple-metered properties using NEM. The Net Energy Metering Aggregation (NEMA) option created by this bill allows property owners to aggregate the electricity load of multiple meters on a single parcel or multiple parcels of land, and then credit the bills with kilowatt-hour production from a single net-metered installation.

NEMA thereby reduces the costs of installation and interconnection by removing financial and technical barriers for customers with multiple meters. This new option has been particularly attractive to farmers, who can now optimally site the renewable energy system instead of having to co-locate it with the meter that draws the most load. NEMA also incentivizes the installation of larger systems, as it increases the amount of load that can be offset to include all electricity consumption within a facility or farm, not just the consumption on a single meter.
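
In miniature, the netting arithmetic NEMA enables looks something like the sketch below; the meter loads and production figures are invented for illustration.

```python
# Net Energy Metering Aggregation in miniature: one optimally sited
# system's production is netted against the combined load of every
# meter on the property. (All figures are invented for illustration.)
meter_loads_kwh = {"pump_1": 3200, "pump_2": 2700, "barn": 900, "shop": 1400}
solar_production_kwh = 7500

aggregate_load_kwh = sum(meter_loads_kwh.values())                  # 8,200 kWh
net_billed_kwh = max(aggregate_load_kwh - solar_production_kwh, 0)  # 700 kWh
print(net_billed_kwh)  # only the net 700 kWh is billed across the meters
```

Before NEMA, each of those four meters would have needed its own co-located array to capture the same netting benefit.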

The NEMA legislation was passed with strong support from a large group of agricultural and solar advocates. But even putting this common-sense measure in place was not an easy task, requiring a significant investment of time and resources against stiff utility opposition and a protracted regulatory process.

Even now, a full twenty months after the passage of SB 594, NEMA is only available in PG&E territory. Southern California Edison and San Diego Gas & Electric are expected to roll it out by the end of July 2014.

NEMA was a step forward, but it hasn’t eliminated several of the hurdles to widespread commercial growth. Solar projects for commercial customers are still great investments for specific customer types, particularly those whose rate tariff, load profile, and installation site make solar a cost-effective way to reduce electricity bills and achieve desirable payback periods and investment returns. However, there are several policy mechanisms that will need further evaluation if commercial solar is to become fully capable of reaching beyond a few customer types and market segments.

Achieving commercial sector growth: How to get there

Last fall, the legislature passed AB 327, which tasks the CPUC with developing a successor tariff to NEM that accounts for the total costs and benefits of net metering. This successor tariff or contract, dubbed "NEM 2.0," will go into effect on July 1, 2017, or once net-metered systems reach 5 percent of aggregate customer peak demand in each of the major utility service areas, whichever comes first.

This is an opportunity to address the obstacles to commercial NEM growth we have already discussed, in order to better encourage investment from farms, businesses, schools, and other commercial facilities in building California’s renewable energy future.

The CPUC has issued guidelines that govern the development of the successor tariff, which show progress toward a system that works for commercial customer-generators in addition to residential. But in order to fully unleash the potential of this market and ensure that it keeps growing, NEM 2.0 must achieve the following outcomes:

  1. Expand and improve upon NEM aggregation. Multi-metered properties with higher onsite load are excellent candidates for renewable installations. But confusing and arbitrary rules and regulations should not limit the ability to install the optimal system. Further, NEMA should be adopted by all California utilities, not just the IOUs.
  2. Rethink demand charges and other fees. Standby fees and departing load charges should be re-evaluated to make sure they are in proportion to the actual costs of grid services. Demand charges should be similarly examined. Fees and charges should be adjusted to bring total costs in line with total benefits.
  3. Evaluate the benefits of larger systems. Large commercial installations, sized appropriately to load, can improve the grid and help the state attain aggressive greenhouse gas reduction and renewable energy goals. California should evaluate the benefits and impact to the grid from deployment of larger systems.
  4. Ensure long-term certainty. A commercial solar system is a long-term investment. Better assurances of equitable terms and protection from unfair tariff changes, charge increases, and added fees are needed to provide certainty.
  5. Provide solar financiers with more predictability. Defining standards for commercial financing will be key to reducing the time and hassle involved with assembling commercial financing packages. One option to help mitigate the need for credit reviews is to accurately value the energy produced by a system, regardless of customer use. The current “Net Surplus Compensation” mechanism needs an upgrade, and a fair value will help finance providers approve projects that may otherwise not fulfill their credit requirements.

With incredible and accelerating growth in solar installations coming alongside an overhaul of California’s net metering regulations, we are entering a new era of renewables development in the state. There are currently plenty of commercial customers who can invest in solar today and enjoy substantial savings and great economic benefits. However, we must structure NEM 2.0 to encourage participation by all commercial sector customers and demonstrate that California is truly a leader in deploying solar technology.

***

Adam Kotin is a Policy Associate at California Climate & Agriculture Network. Ben Peters is Director of Solar Finance & Policy at REC Solar.


Are Flying Robots the Next Smart Grid Technology Ready to Take Off?

Wed, 07/23/2014 - 10:45

Unmanned aerial vehicles (UAVs) come in all shapes and sizes -- and they’re not all like the missile-equipped drone aircraft in use by the U.S. military. A new breed of tiny, battery-powered flying robots is performing all kinds of non-lethal tasks, from trade show package delivery demos for Amazon to flying over solar farms to look for maintenance problems.

Utilities are another natural fit for UAVs. They have thousands of miles of transmission lines to inspect every year, many of them stretching over untracked mountain and desert terrain that requires expensive helicopter flights or time-consuming ground-crew hiking expeditions. A fleet of flying robots could cover the same territory at a fraction of the cost.

UAVs also have a fraction of the risk for certain utility tasks -- say, climbing a transmission tower to inspect high-voltage electrical equipment, or checking a power line that’s being threatened by fire. While they can’t actually fix things themselves, UAVs could be very valuable eyes and ears for utilities across a range of use cases.

So why haven’t utilities started using drones yet? The main roadblock to date has been the Federal Aviation Administration. While private property owners can fly tiny robots at close to ground level without FAA clearance, utility infrastructure is out there in the public airspace. Flying anything in that airspace comes with an exhaustive list of FAA requirements.

Earlier this month, San Diego Gas & Electric quietly became the first utility in the country to take on this challenge, launching a pilot program under an FAA “special airworthiness certificate.” Over the coming months, SDG&E will be flying a pair of UAVs along a half-mile-wide, 2.5-mile-long stretch of transmission line right-of-way in remote eastern San Diego County, in hopes of proving that flying robots are safe and effective tools that could eventually be put to use across its territory.

SDG&E has already started test-flying its two UAVs, which weigh less than a pound and are built by Massachusetts-based contract R&D firm Physical Sciences Inc., utility spokesperson Hanan Eisenman said in a Friday interview.

“The purpose is to promote safety and reliability,” said Eisenman. “This device can inspect power lines, help identify power outages more quickly, and improve situational awareness in fires. It can also inspect gas lines," which is something of interest to Sempra Energy, parent company of both SDG&E and Southern California Gas.

Cost reduction is another huge benefit. SDG&E’s entire UAV system, including two “quadcopters” and a control panel, cost about $6,000, he said. Compare that to the costs of helicopter flyovers for tasks like tracking vegetation growth along transmission lines, which can add up to hundreds or thousands of dollars per mile.

Of course, SDG&E’s current tests include costs beyond the price of the UAVs. Right now, SDG&E has hired licensed pilots to operate the UAVs in flight, as well as a “registered pilot observer” to monitor each outing, he said. That’s part of the FAA agreement to build a record of evidence of how well the UAVs handle in their set of tasks.

For instance, SDG&E would like to see if the quadcopters can fly in wind conditions that would ground helicopters, such as during California’s seasonal Santa Ana winds, he said. It would also like to put them to use for firefighting tasks, to augment its existing weather forecasting and firefighting information coordination system, said Eisenman.

There’s a catch here, however. Right now, SDG&E is only operating its UAVs in line-of-sight mode, with each vehicle always in sight of the pilot flying it, Eisenman said. That’s also part of the FAA agreement, and SDG&E isn’t yet talking about how it might start testing the UAVs in situations where the operator can’t actually see them.

“These things do have a camera on them, that sends a live feed to the control panel, so the operator can send them up, and get close to those transmission towers and inspect them in real time,” he said. This, of course, is sure to raise privacy concerns. Eisenman noted that the current test site has no homes or businesses, and that SDG&E has no intention of using UAVs in populated areas.

The electric utility industry has been testing robots for transmission line inspection for some time now, but they’ve mainly been crawler-type devices that ride on the power line itself. Flying robots are far more flexible, and startups like SkyCatch are designing platforms that make deploying them quite simple. Perhaps the smart grid will be airborne sooner than we think.


Do Networking Standards Really Matter in Connecting the Smart Home?

Wed, 07/23/2014 - 10:15

Everyone agrees that the smart home market will flourish when there is a variety of popular, connected devices on the market that work seamlessly together. But exactly what it will take for that to happen is still an open debate.

Last week, Nest and Samsung announced a new mesh networking protocol, Thread, which promises to “help the internet of things realize its potential for years to come.”

But at least one major retailer that sells many of the things for the home that will connect to the internet is not that impressed by news of yet another protocol. “We’ve taken a very deliberate stance: we’re just getting out there and getting on with it,” said Kevin Meagher, VP and GM of Lowe’s smart home division.

That doesn’t mean that Meagher won’t support Thread if it ends up providing a better user experience, such as longer battery life, which will in turn allow Lowe’s to move more products. “The business of selling stuff to people doesn’t change just because Google is in the market,” he added.

Lowe’s has not been waiting around for HomeKit or Google’s Nest to sell people elements of a smarter home. “Our problem is that if everyone had sorted this out, we wouldn’t be doing this,” Meagher said of Lowe's own Iris platform.

More than two years ago, the retailer launched Iris, which integrates devices from vendors such as Honeywell, Schlage, First Alert, and Whirlpool through a hub that connects to the home’s broadband connection. The platform, which can be used for self-monitoring home security and energy efficiency, could face competition from Google and Apple, as it already does from ADT, Comcast, Alarm.com and others.

But with other entrants into the market, Lowe’s also sees new opportunity to move the whole market forward. “From a Lowe’s viewpoint,” said Meagher, “we don’t think the issue is technology and the way things communicate."

Reps of other potential competing networking protocols, such as ZigBee, offered more pointed criticism. “Thread does not seem to provide the higher-level standardization to ensure interoperability between devices,” Tobin Richardson, chairman of the ZigBee Alliance, said in a statement.

Chris Boross, president of Thread Group, argues that ZigBee is not truly designed for consumer electronic devices. He added that it was a conscious decision that Thread does not have an application protocol as ZigBee does; it is intended to purely be a networking technology.

For now, Lowe’s is working on expanding its own offerings with Iris through an open protocol. At the moment, Z-Wave and ZigBee products can be integrated onto Iris, but that should expand to Wi-Fi products in the future.

“Everyone needs to get real about this and just open up their API and let manufacturers focus on just selling devices,” said Meagher. 

Forging partnerships with utilities

Even though Iris has been available for more than two years, Lowe’s has been hesitant to work with slow-moving utilities, as have some telecom providers with smart home offerings. For Lowe’s at least, that is starting to change.

Lowe’s does not just want electricity management to be a part of its smart home offerings -- it also wants to offer support and advice across the utility spectrum, including gas and water. Some of its analytics have been developed in-house, and it is also working with Genability.

But Lowe’s is not interested in offering tailored analytics unique to each utility that comes knocking on its door. “Whatever we do has to be infinitely scalable,” said Meagher.

Next quarter, Lowe’s will take the first step to see if its platform is scalable as part of a utility program. It will sell a USB stick that links the smart meter to Iris. Southern California Edison will offer its customers a rebate for purchasing the device.

Unlike bring-your-own-thermostat programs, which Iris customers might be able to participate in in the future, SCE is looking to provide energy efficiency information to Iris customers in its territory as a low-cost way to encourage savings. In the future, based on usage patterns, it could be fine-tuned to push rebates.

The Iris platform will also soon be offering a connected hot water heater from Whirlpool, which could interest utilities that are looking for new demand response devices beyond the smart thermostat. Lowe’s is in talks with some utilities about the possibility of rebates for the device, some of which could cover most of the cost of the heater, which will retail for about $400.

Lowe’s interest in working with utilities centers around allowing its customers to get the benefit of rebates and to provide more detailed information about their energy use in order to help them buy more efficient products for their home.

Ideally, Lowe’s would like customers to use Iris when they buy those products, because it can then mine the data coming from those homes to offer more tailored services and products. “The real battleground is data and leveraging it in innovative new ways,” said Meagher. Savvy utilities will likely want to partner with the platforms providing those goods and services, which can then be tapped further by the utility, whether it’s for water efficiency or energy savings.

At the end of the day, however, Lowe’s will still happily sell the door locks that integrate with Nest or HomeKit, even if they don’t work with Iris. “Let the consumers choose,” declared Meagher, adding a qualifier: “It will be two years before the floodgates really open.”

For more on how utilities can leverage next-generation consumer analytics, join Greentech Media at The Soft Grid: Data, Analytics and the Software-Defined Utility conference on September 10-11, 2014 in Menlo Park, Calif. Looking forward to seeing you there!


What’s Ahead for PV Inverter Technology?

Wed, 07/23/2014 - 09:00

As the primary point of interconnection between the generation and distribution sides of a photovoltaic system, the solar inverter directly affects multiple aspects of solar project development. Consequently, the right inverter has the potential to significantly impact a PV project’s delivered value.

A growing focus for successful integration of PV power on a large-scale, distributed basis has been to add smart features and functionality to inverters. At the same time, other inverter technology advancements are gaining popularity in North America.

Three key trends are making a significant impact in the overall effectiveness of PV installations and will likely gain momentum in the coming years:

  • Rapid growth in three-phase string inverter adoption
  • Movement toward increased DC voltage
  • Shift toward transformerless systems

The rise of three-phase string

Distributed architectures that deploy multiple three-phase string inverters throughout a solar array have long been typical in Europe and are gaining traction in the high-growth U.S. commercial market for distributed generation. IHS predicts that low-power, three-phase inverter shipments will triple over the next four years in the U.S., with annual shipments of nearly 20 gigawatts globally in 2017. As one of the fastest-growing inverter applications worldwide, three-phase string inverters offer a compelling price-to-performance ratio, simplicity in design and ordering, ease of installation, improved uptime, and quick serviceability for commercial applications where flexibility and modularity are essential.

The move toward greater voltage capacity

The industry standard has gradually moved toward 1,000 V models rather than 600 V. Increasing to 1,000 V allows for more modules in each string series. A 600 V inverter can typically accommodate only twelve standard 72-cell modules in a series, whereas a 1,000 V inverter allows for twenty modules of the same type. This results in the use of fewer fuses, disconnects, and combiner boxes, leading to a reduction of up to 40 percent in balance-of-system costs on the DC side. In addition, 1,000 V-rated wire carries more power and reduces conductive losses, providing more efficient output than 600 V-rated wire.
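
The module-count arithmetic works out roughly as in the sketch below; the open-circuit voltage and cold-weather correction factor are assumed round numbers, and real string sizing uses site-specific low-temperature data.

```python
import math

# Rough string sizing at 600 V vs. 1,000 V (assumed, illustrative values).
voc = 45.0          # open-circuit voltage of a typical 72-cell module, volts
cold_factor = 1.10  # low-temperature voltage correction (site-dependent)

for limit_v in (600, 1000):
    modules = math.floor(limit_v / (voc * cold_factor))
    print(f"{limit_v} V system: up to {modules} modules per string")
# Prints 12 modules at 600 V and 20 at 1,000 V -- in line with the
# figures cited above, under these assumptions.
```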

The move to 1,000 V delivers significant improvements in PV system cost and function, but the industry standard continues to advance from there. Inverter manufacturers are already looking at designs and topologies that allow for even higher voltages. For example, one available solution is a 1,000 V bipolar inverter, which enables system cost savings in a range similar to a 2,000 V monopolar inverter.

The shift toward transformerless systems

Since 2007, when Advanced Energy introduced the first transformerless solar inverter available on the market, the industry has seen increasing adoption of transformerless inverter solutions. While transformerless inverters have long been popular in Europe, U.S. regulations dictated until 2005 that all electrical systems be grounded. For PV systems, this required the use of transformer-based inverters to create galvanic isolation between the DC and AC sides of the system. Recent code updates allowing for ungrounded systems gave rise to transformerless technology in the U.S.

Without a grounding requirement, project developers can achieve greater energy harvest, lower operations & maintenance (O&M) and balance-of-system costs, and a smaller inverter footprint. The transition to transformerless inverters is also being driven by higher voltage input capabilities and improved DC monitoring tools. Previously, when maximum DC input was limited to 600 V, reaching a desirable AC output required a boost from the transformer. But with the market now moving toward a standard DC input of 1,000 V or more, there is a reduced need for a transformer because the higher DC voltage also boosts the native AC output.

As the brain of the PV system, the inverter is critical in ensuring that solar energy is successfully fed to the grid. And as solar’s role in the North American energy market continues to grow, inverter manufacturers will continue to adapt their offerings to meet and improve industry standards, so as to deliver the greatest possible value to project developers, utilities, and consumers.


Two Charts That Illustrate the Sophistication of Today’s PV Suppliers

Tue, 07/22/2014 - 14:55

Back in the day, “supply expansion” was synonymous with “capacity expansion” in PV manufacturing. More demand than you can handle? Want to expand the business and set yourself up to take advantage of growth in the end market? Then increase your factory footprint and buy more manufacturing equipment.

Certainly, there have been exceptions: some top-tier firms have been using contract manufacturers for module assembly as far back as the last decade. But as a general rule (and to the delight of capital equipment suppliers the world over), the last growth phase of the solar manufacturing sector was characterized by growth in internally owned manufacturing capacity as the primary mechanism for expanding revenues and shipments. It is this collective mentality with regard to expansion that plunged the supply industry into overcapacity from 2011 to 2013.

As the chart below shows, today’s supply strategies are a lot more sophisticated. This is most apparent in the case of China-based module vendors, which epitomized the old "add-capacity-first-think-later" mindset to the extreme. With China's Ministry of Industry and Information Technology placing heavy restrictions on “pure” capacity expansion projects, trade barriers imposed in the EU and U.S., and supplier balance sheets still bearing bruises from several quarters of heavy losses, adding more capacity in China is no longer the sole (or even the preferred) approach to increasing supply capability; strategies now differ significantly by individual supplier.

Source: GTM Research PV Pulse

Yingli Solar, the largest supplier of modules in the world in 2012 and 2013, uses contract suppliers in China for everything from ingots to modules, and has recently dabbled in OEM arrangements with producers in Mexico and Canada. ReneSola also uses OEM partners in locations as diverse as Turkey, Poland, India and South Korea to supply modules that are free from trade restrictions and tariffs to its customers in the EU and U.S. In contrast, Trina and Jinko have moved to acquire, at pennies on the dollar, struggling lower-tier Chinese suppliers that own relatively modern equipment -- and while JA Solar is planning another internal expansion in China for cells and modules this year, it has also invested in a module assembly joint venture in South Africa with a local downstream firm, which it plans to use for shipments to the U.S. Meanwhile, crystalline silicon module firms based outside China have outsourced component production to an ever-greater degree since 2012.

In fact, supply strategies and business models don’t just vary by firm -- as the chart below shows, even the same vendor can have markedly different strategies for different geographical markets. Take ReneSola, for example: it serves the EU and the U.S. by using OEM partners and sells its modules through distributors and direct sales to installers and developers. In China, it has followed peers such as Jinko in extending its reach further downstream into project development and sales, using all-Chinese products. In Japan, the firm has invested in an 80-megawatt joint venture for module assembly, hoping to use the appeal of domestic production to gain a leg up on its competitors in the rooftop market. Over and above all this, ReneSola is planning a broad shift from wholesale to retail sales in high-GDP markets, aiming to generate half of its sales via this approach by the end of 2014 (take a look at the company website if you don’t believe me).

Source: GTM Research PV Pulse

Ultimately, the sophistication and variety exhibited by today’s PV vendors is a sign of an increasingly complex and diverse end market, and it's one positive outcome of the overcapacity-induced perdition of the last two years. While a few firms are still prone to partaking in periodic pissing contests about how many gigawatts of factory capacity they command, the best and brightest have absorbed the lessons of the past and have complemented long-term strategic initiatives with more opportunistic, market-focused approaches -- undoubtedly a reflection of a more mature and efficient supplier landscape. To paraphrase the German philosopher Friedrich Nietzsche (or, equally, American Idol winner Kelly Clarkson), what has not killed today’s solar firms has made them, if not necessarily stronger, at least a lot smarter.

***

Commentary and analysis in this article were drawn from GTM Research's monthly global PV supply chain tracker, PV Pulse. For more on the Pulse, click here.

Categories: Industry News

Is This the Site of the Tesla Giga Factory in Nevada?

Tue, 07/22/2014 - 14:50

It appears that work has already begun on the site of the new $5 billion Tesla Giga factory near Reno, Nevada. It's the planned location of the world's largest lithium-ion battery factory and the potential enabler of lower-cost Tesla electric vehicles.

It's either that or a really big pizza factory. 

Yesterday, our unnamed source (Bob Tregilus) hiked a few high-desert ridges in Storey County, Nevada, to take pictures of 50 earthmovers moving earth at a site that Tregilus estimated to be large enough to accommodate the proposed 10-million-square-foot factory. Tregilus has also heard whispers about a secret project from construction industry folks and locals.

The site is located at the Tahoe-Reno Industrial Center at 2641 Portofino Drive. Here's one of Tregilus' photos:

(click to enlarge)

Photo credit: Bob Tregilus

It's a great photo, but it turns out that last month, the Reno Gazette-Journal reported that "an excavation permit to begin site work for a sizable project has been pulled [on May 22] by the Tahoe-Reno Industrial Center" at 2641 Portofino Drive. The paper notes, "The business park could potentially begin site preparation work before any official contract with Tesla is signed," adding, "In addition to the excavation permit in Storey County, the contractor for the Portofino Drive project has air quality and storm water permits from the state's Environmental Protection Division." There was no confirmation that this is indeed the Tesla site.

According to TeslaMondo and Tesla's own blog, the site could also be home to a pizza factory. Tesla is not commenting.

Reno has been considered the front-runner for Tesla's vertically integrated super factory.

In March we covered the deep politics surrounding the Tesla battery Giga factory -- which pitted Texas, Arizona, New Mexico, Nevada and California against each other for the rights to the $5 billion high-tech manufacturing site. Tesla aims to build 500,000 cars per year by 2020 and will need an unprecedented volume of lithium-ion batteries to hit this target.    

We recently covered a presentation from Tesla CTO JB Straubel, in which he spoke of attacking the battery's cost with the Giga factory by "doubling the worldwide capacity in a single factory and reinventing the supply chain." A total of 35 gigawatt-hours of cell production from the new plant will be devoted to meeting the needs of the Fremont automotive plant, and 15 gigawatt-hours will be devoted to stationary battery packs. Straubel said that Tesla was "bullish" about the California energy storage mandate. Straubel also said he was bullish that stationary energy storage "can scale faster than automotive."
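
Those two figures imply an average pack size worth making explicit -- a back-of-the-envelope calculation, using only the numbers above:

```python
# 35 GWh of automotive cells spread across 500,000 vehicles per year
# implies a 70 kWh average pack -- in line with the 60 kWh and 85 kWh
# packs Tesla ships today. The inputs are from the article; the
# "average pack" framing is ours.

automotive_gwh = 35.0
vehicles_per_year = 500_000

avg_pack_kwh = automotive_gwh * 1e6 / vehicles_per_year  # GWh -> kWh per vehicle
print(f"Implied average pack size: {avg_pack_kwh:.0f} kWh")  # 70 kWh
```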

The project has the potential to provide more than 6,000 jobs. Or millions of pizzas, as the case may be.

Tesla CEO Elon Musk is intent on moving forward with two sites, according to Bloomberg. He said in an earlier interview, "What we’re going to do is move forward with more than one state, at least two, all the way to breaking ground, just in case there are last-minute issues."

Current occupants of the Nevada site

Photo credit: Bob Tregilus

Photo credit: Bob Tregilus

Giga factory facts and figures
  • Tesla announced a $1.6 billion convertible debt offering to build the world's largest battery factory: $800 million of convertible senior notes due in 2019 and $800 million due in 2021.
  • Elon Musk predicts that the new factory will produce batteries for 500,000 vehicles by 2020.
  • Tesla expects to reduce the kilowatt-hour price of batteries by 30 percent.
  • The plan is for construction to start in 2014, with production beginning in 2017.
  • The facility will house more than 6,000 workers in 10 million square feet of factory space.
  • The plant will provide recycling capability for old battery packs.
  • Musk also said that Panasonic, currently supplying hundreds of millions of cells to Tesla, would likely join in on the new factory, but no commitments have been made.

Here's a rendering of the proposed plant:

Categories: Industry News

Here’s What Utilities Really Think About Microgrids

Tue, 07/22/2014 - 14:48

Despite a surge of interest, microgrid development in the U.S. has been one giant R&D experiment. 

There is just over a gigawatt's worth of microgrids scattered throughout the country. While many of those projects have proved their value during major storms or extended outages, they are largely custom-engineered and limited to public facilities like schools, hospitals and military bases. They're also highly dependent on fossil fuels, rather than renewables.

That is slowly starting to change. According to a recent report from GTM Research, microgrid development is expected to grow by around 700 to 800 megawatts in the next three years. Some of those new projects will extend to private commercial operations and include solar PV, battery storage and biogas, but the vast majority will still be based on the traditional model.

"The applications are still largely limited," said Timothy Qualheim, VP of Strategy Solutions at S&C Electric Company. "The non-governmental projects are still less than 10 percent of the market, and a lot of them are pet projects."

So how can microgrids evolve into something more significant than one-off installations limited to a narrow set of customers? There is not a single, simple answer. A lot will depend on improvements to state-level planning, how energy markets value ancillary services, the frequency of outages, the economics of distributed energy and how ownership models change. 

As grid planners sort through those matters and attempt to streamline the development process, another big question arises: What role will utilities play in these projects?

In theory, the expansion of islandable microgrids beyond critical public facilities could create a lot of headaches for utilities. Declining revenues from decreased sales, conflicts over where microgrid operators can string lines and connect to the distribution system, and worries about reliability when customers separate themselves from the grid are all points of contention. 

Some industry observers have argued that if utilities can fully or partially own microgrids, they might be more interested in seeing the market succeed.

"For the vast majority of states, we need to move forward with microgrids with significant utility involvement," said Galen Nelson, the director of market development at the Massachusetts Clean Energy Center. "The business model issues are just as hard as the technical integration issues."

And as a new survey of more than 250 utility executives conducted by Utility Dive points out, power companies are interested in being involved. More than half of the executives surveyed said they see themselves getting into the microgrid market within the next five years. And beyond the next decade, 97 percent said they expect microgrids to offer a business opportunity.

"This defies the perception of utilities as often slow to innovate and resistant to disruptive change," wrote the authors. 

It's certainly not clear whether utilities are embracing this change, or just acknowledging the fact that microgrids are going through a small growth spurt. The Utility Dive analysts interpreted the data very optimistically.

"It indicates that microgrid development will ramp up quickly in the near term, given that utilities are the dominant players in the electric power industry," concluded the authors.

However, with no comprehensive planning processes in place on the state level, it's unlikely that microgrids will "ramp up quickly" and become a meaningful force in the electric system in the short term. But utility recognition of coming growth is certainly a positive indicator. 

So do utilities want to own all the projects? It doesn't appear so. Executives said there would be a mix of ownership opportunities, with power companies playing some role in operating the microgrids in their territories.

No matter how receptive power companies are to the idea of microgrids, there won't be much activity without regulation that values their services. According to the Utility Dive survey, 85 percent of executives said the regulatory environment prevents them from developing or operating projects.

Support for regulatory change appears to be more than just self-interest. The survey also showed that 85 percent of respondents support changes to franchise rights that would allow microgrid developers to gain easier and cheaper access to grid infrastructure.

"If this proves true, and utilities do support franchise rule reforms, it will be a tremendous boon for microgrid developers," wrote the authors.

Assuming utilities start seeing meaningful growth in microgrid development, there are long-term challenges in operating those projects in concert with the rest of the grid. Three-quarters of respondents said they think projects should be controlled by a central operational organization. More than half of those utilities believe they should be put in control, while only 22 percent said a private third party should be in control.

The survey clearly shows that power providers are supportive of microgrids -- in theory. But Utility Dive asks a very important question about what the results mean: "If an overwhelming 97% of utilities see microgrids as a business opportunity, what’s holding them back from capitalizing on it now?"

As partially explained through the responses, a lack of market rules and a disjointed regulatory environment are the primary factors at play. No one has yet come up with a comprehensive process for evaluating, approving and operating projects.

States such as Maryland, Massachusetts and New York are attempting to grapple with these matters and develop a regulatory framework to make microgrid development simpler. But these are multi-year processes. Until planning gets better, it won't matter how positively utilities feel about microgrids -- they simply won't grow beyond niche applications.

Categories: Industry News

Assets of Fuel Cell Vendor ClearEdge Acquired by Korean Conglomerate for $32.4M

Tue, 07/22/2014 - 14:45

Doosan of South Korea, a conglomerate with $21 billion in 2013 revenue, just purchased the remains of bankrupt fuel-cell aspirant ClearEdge Power. The assets and debt sold for $32.4 million, according to the Yonhap News Agency, which reported that ClearEdge had 2013 revenue of $68 million. Dow Jones reports that the original bid was higher, at $48 million, consisting of "$20 million in cash, up to $13 million in payments to vendors and landlords, and up to $15 million to cover secured creditor claims."

Here's a refresher on ClearEdge: The Hillsboro, Oregon-based company raised more than $136 million in VC funding since its founding in 2006 from Kohlberg Ventures, Applied Ventures (the investment arm of Applied Materials), Big Basin Partners, and Southern California Gas Company to develop and build a proton exchange membrane (PEM)-based fuel cell for residential and small commercial applications at hotels, multi-tenant buildings and schools. 

Our understanding is that ClearEdge's PEM fuel cell technology was never ready for the market.

So, in December 2012, ClearEdge acquired the fuel cell business of technology conglomerate United Technologies Corporation (UTC). UTC Power was a maker of large-scale phosphoric acid fuel cells (PAFCs), although the firm also had experience with PEM, alkaline, solid oxide, and molten carbonate fuel cells (MCFCs). In an earlier interview, then-CEO David Wright estimated that UTC had invested roughly $1 billion in the technology over the preceding 30 to 40 years. ClearEdge swapped out its core PEM product for the UTC phosphoric acid technology in its 5-kilowatt and 400-kilowatt products. The natural-gas-powered UTC 400-kilowatt fuel cell unit had a reputation in the industry as one of the higher-performing products. UTC Power was spun out of Pratt & Whitney in 1958 and supplied fuel cells to NASA for space missions from 1966 through 2010.

Around the same time as the April bankruptcy, Ballard Power acquired UTC's transportation- and stationary-related fuel cell IP assets for $22 million, with $2 million in cash and the balance in common shares. Vancouver-based Ballard has not had a single profitable year since its founding in 1979.

Doosan has existing in-house experience with molten carbonate fuel cell technology and now has access to a wealth of fuel cell equipment and a significant legacy with the addition of ClearEdge/UTC.

In related fuel cell news, earlier this month, PEM fuel cell maker Intelligent Energy went public and raised $94.1 million. The company was valued at $811 million, making it the most highly valued publicly held fuel cell company in the world. We look a bit deeper at the fuel cell market here.

Categories: Industry News

Cisco and Bit Stew Are Turning Grid Routers Into Virtual Servers

Tue, 07/22/2014 - 14:43

Distributed intelligence is all the rage in the smart grid industry these days. The idea is to turn smart meters, grid-hardened routers and other devices into computing platforms, capable of analyzing and processing floods of data at the speeds needed to manage disruptions that happen too fast for central systems to handle.

It’s a tricky endeavor, requiring a marriage of hardware, networking and software to manage multiple tasks. So far, two key contenders for utility platforms have been Silver Spring Networks, with its SilverLink Sensor Network technology, and Cisco, with its Linux-enabled IOx grid router. Both have a host of software vendors building apps to do everything from disaggregating household energy usage to managing grid voltage on circuits with lots of solar PV.

Underlying that premise, however, is the assumption that the “platform” -- a bunch of hardened grid computers, linked by wireless networks -- is cleaning, filtering, analyzing, condensing and sharing all that data at the level required to make real-world decisions. These are the areas of expertise that Vancouver, B.C.-based startup Bit Stew has staked out, and on Tuesday it announced that it’s taking them to the edge, embedded in Cisco’s connected grid router (CGR) platform.

Cisco invested in Bit Stew last year, but it’s been working with the startup since 2011 to help utility BC Hydro deploy and manage its network of 1.8 million smart meters. So far, Bit Stew’s Grid Director software has performed its “adaptive stream computing, complex event processing, high-speed data analytics, and sophisticated machine-to-machine learning” tasks in the utility data center.

But with Tuesday’s opening of its Mix Core engine (Bit Stew’s name for the technology underlying its Grid Director platform), the startup can embed the entire application on a single CGR, or multiple devices in the network, said Bill Reny, the company's chief operating officer, in an interview. From there, it can put that collective memory and processing power to use as if they were servers back in a data center, he explained.

“Putting Grid Director at the edge, on the router level, you can do all those transactions and do all that analytics at the edge, and take the bare bones, the key data, to the head office,” he said. That serves a fundamental purpose of distributed intelligence: lessening the burden on the network as more and more devices get connected.
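
Bit Stew hasn't published the internals of Mix Core, but the pattern Reny describes is straightforward to sketch. The toy example below -- with entirely hypothetical data structures, not Bit Stew's actual code or API -- condenses a window of raw interval reads into a summary plus exceptions, so that only the "bare bones" travel upstream:

```python
# Hypothetical edge-aggregation sketch. A router-resident process
# condenses one window of raw meter reads into a small summary plus
# exceptional readings; only that summary goes to the head office.

from statistics import mean

def summarize_window(reads, expected_count, spike_threshold_kw=10.0):
    """Condense one window of interval reads (kW) from one feeder."""
    values = [r["kw"] for r in reads]
    return {
        "n": len(reads),
        "missing": expected_count - len(reads),  # mesh delays, outages, malfunctions
        "avg_kw": round(mean(values), 2) if values else None,
        "peak_kw": max(values, default=None),
        # Forward only exceptional readings, not the full stream:
        "exceptions": [r for r in reads if r["kw"] > spike_threshold_kw],
    }

window = [{"meter": "m1", "kw": 1.2}, {"meter": "m2", "kw": 11.5}, {"meter": "m3", "kw": 0.9}]
print(summarize_window(window, expected_count=4))
```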

Beyond that, however, is a world of complexity in getting lots of in-field devices to communicate with one another in a way approximating the streamlined, closely managed data center environment -- or to adjust to overcome the inevitable differences. The wireless mesh networks used for smart meters throughout North America, for example, have limits on how fast they can move data from one “hop” to another on its way back to the utility, and a small number of reads are typically delayed or missing and must be accounted for.

Bit Stew’s approach has always included some level of distributed intelligence. At BC Hydro, it embeds software in Itron’s smart meters to help it tell whether a missed meter read is due to network failure, meter malfunction, planned outage, or another reason, for example.

But the embedded version available for Cisco’s CGR routers is aimed at a more seamless provision of distributed computing, said Alex Clark, Bit Stew’s chief technology officer. In other words, the same network and device management tasks the startup has been performing for utilities have laid the groundwork for a greater set of purposes.

“Any one of the devices is going to have less horsepower than an enterprise device, but the collective sum should be greater,” said Clark. “If I can break a complex problem down to a lot of simple problems, why not use the same intelligence to do more?”

Bit Stew’s 8.0 version of its technology includes new data-streaming and data-parallelization capabilities to help with this task.

Cisco and Bit Stew didn’t provide details on how they may have been testing this embedded capability to date, or which utilities were using it. BC Hydro is certainly a likely candidate, and Reny did say that the hometown utility has expressed interest in the technology. Another Cisco utility customer that might be a trial candidate is Duke Energy, the massive utility that has built the concept of distributed router devices into its multi-state, multi-million-unit smart meter rollout plans.

While the companies didn’t lay out just how they’re planning to put their new grid-embedded capabilities to use, Tuesday’s press release cited “complex communication networks that support substation automation, distribution automation and distributed energy resources,” among other utility uses.

“One of the potential use cases for this capability would be processing demand response signals,” said Reny. Bit Stew has been managing all the smart devices in BC Hydro’s Harmony House net-zero-energy home project. And with Mix Core embedded in the Cisco routers that communicate with those devices, it could provide a quick-reacting link for more fine-tuned management of home energy use.

Beyond the grid, both Cisco and Bit Stew are aiming at providing capabilities to the broader internet of things, or IoT -- a shorthand term for the world of wirelessly networked industrial, medical, business and consumer devices that’s predicted to emerge over the coming years. Bit Stew remains focused on the utility industry, Reny said, but it is looking at “adjacent verticals” for expansion, possibly starting next year.

Bit Stew CEO Kevin Collins, speaking as one of several Cisco IOT partners at a Tuesday event, noted that the electrical and gas industries are “on the leading edge of the industrial internet,” to use another term for the same concept. BC Hydro manages about 1.5 billion data points per day, coming in at volumes that amount to ten times Twitter’s record-breaking 580,166 tweets per minute during Germany’s victory over Brazil in the World Cup semifinal two weeks ago, he said.

“Not only do they have to process that, but they have to act on that in real time,” he said. “They have to get that nugget of actionable intelligence from that data to make decisions in the field.”

Bit Stew raised an undisclosed amount from Cisco and Yaletown Venture Partners last year, and secured a $3.5 million credit facility from Silicon Valley Bank in May. It claims 35 partners in the U.S., Canada and Australia, including BC Hydro, Michigan’s Consumers Energy, Australia’s SP AusNet, and two large, as-yet-unnamed California utilities with plans for smart meter and distribution grid optimization analytics to help them meet their solar PV challenges, Reny said.

Categories: Industry News

Intelen Launches Upgraded Engagement Analytics Platform

Tue, 07/22/2014 - 10:00

Intelen is seeking to overcome the restrictions of many of the efficiency solutions currently on the market. The company correlates human behavioral input with energy consumption, building metadata, and environmental and sustainability factors, then analyzes that information using behavioral science models. The result is a behavioral learning system that organizations can use to promote efficiency, raise awareness of environmental and energy efficiency issues, and spread knowledge on related topics. Intelen BiG is an integrated platform designed to give organizations the tools to promote and evaluate efficiency initiatives that revolve around people.


The BiG platform is focused on humans rather than buildings. It comprises three layers:

  • A dynamic dashboard based on game mechanics theory
  • An integrated learning management system
  • A mobile app that can act as an information and education gateway, connecting users to the cloud platform

As companies and utilities seek new ways to engage and inform their employees and customers and to better communicate their internal sustainability and CSR strategies to all stakeholders and external supply chains, we hope to help foster a paradigm shift that will build lifelong habits.

In the real world, people inhabit various types of buildings where they live, work and pursue their everyday activities. BiG and its associated InGage app accompany users everywhere they go and provide a number of engagement services and activities, based on the culture of the organization and the specific initiatives adopted by each facility's sustainability officer.

With InGage, users can subscribe to a specific initiative or live training game that is linked to the buildings they visit or inhabit. They receive:

  • Tips
  • Multiple-choice quizzes
  • Articles
  • Multimedia content
  • Tasks to commit to
  • Surveys and announcements related to sustainability or other energy efficiency topics

The content InGage offers is adapted both to individual user behaviors and the specific building. Via the BiG admin controls, sustainability managers can manage the content, users, competing teams, competition rules and prizes.

Another feature allows users to push valuable information to the platform. Users can report building faults (with photos attached), such as thermal discomfort, lighting and mechanical problems, and HVAC issues. For each interaction, users accrue points that help them earn badges, in keeping with gamification principles.

Users can also see how colleagues are performing via live newsfeeds and status updates. For example, a user can see that another user just scored 87% on the recycling quiz and can choose to challenge that user to a rematch.

On the back end, a very powerful analytics engine analyzes all interactions. By combining demographics, social profiles, energy data and BiG’s real-time metrics, it produces useful behavioral reports and highlights emerging trends. A number of key engagement metrics are calculated in real time, including categories such as Engagement, Knowledge, Influence, Efficiency, and Commitment.

BiG’s behavioral analytics focus on how users behave and, more importantly, why they behave the way they do. In order to provide these analytics and use them to gain behavioral insights and knowledge, four key data sources are needed. Shaping any user’s behavior in any app requires demographic and behavioral data; BiG takes the analysis one step further and merges psychographics with energy data. The basic questions to be answered are:

  • Do users follow a particular pattern in the app? If so, why?
  • How do they tend to interact and behave?
  • How are savings achieved by their actions?
  • What is the impact of their behavior on energy consumption?

In other words, the data tells us not only what is happening, but also how and why it is happening.
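
Intelen hasn't disclosed its scoring models, but the basic mechanic -- weighting app interactions into a per-user engagement metric -- can be sketched in a few lines. The event types and weights below are hypothetical, chosen only for illustration; a production system would also fold in the demographics, psychographics and energy data described above.

```python
# Toy engagement scoring. Event types and weights are hypothetical,
# not Intelen's actual model.

EVENT_WEIGHTS = {"quiz": 5, "tip_read": 1, "task_committed": 8, "fault_report": 10}

def engagement_score(events):
    """Sum weighted interactions for one user."""
    return sum(EVENT_WEIGHTS.get(e["type"], 0) for e in events)

user_events = [
    {"type": "quiz", "score_pct": 87},
    {"type": "fault_report", "detail": "HVAC noise, floor 3"},
    {"type": "tip_read"},
]
print(engagement_score(user_events))  # 16
```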

“We are creating a brand-new approach to continuous human engagement, and we want to change the way people understand engagement,” according to Vassilis Nikolopoulos, Intelen’s CEO and co-founder. “We focus not on buildings, but on the humans who are inside these buildings. I'm often asked where we focus our engagement services: on home consumers or on corporate clients. The answer is both. The same people who work in corporate buildings also live in houses and apartments. We know how to engage and influence humans in their corporate environment and how to continue this engagement in their homes.”

The initial version of BiG was piloted at a number of corporate and university facilities, and the trials yielded some impressive results.

  • In early 2014, Azusa Pacific University conducted a pilot in which APU Sustainability Club members competed against students enrolled in a sustainability class. Toney Snyder, Assistant Director of Environmental Stewardship at the university, said, "Our students who are actively studying sustainability reported how surprised they were to discover how much they didn’t know about sustainability.”
  • After a pilot conducted at Wayne State University, Sustainability Coordinator Daryl Pierson reported that “the InGage app helped our campus community learn more sustainable behaviors," describing the experience as a "fun learning activity."
  • Tania Taff of Michigan Technological University said that she “found the InGage interface to be extremely intuitive from an administrator’s point of view. Very little coaching was needed to understand how to input material using the BiG admin dashboard. The pilot went very smoothly, from the back-end preparation of material to the end-user experience."

More than twenty universities have already conducted BiG pilots. Beginning in September, existing and new deployments in the educational and corporate sectors will be scaled up, using the extensively upgraded second version of the platform. Additional deployments are planned at the Harvard Business School and Summa Energy Solutions.

Figure 1: BiG Dashboard

Figure 2: Web User Interface


Categories: Industry News

Forget the Death Spiral: Electric Vehicles Offer a Major Growth Opportunity for Utilities

Tue, 07/22/2014 - 06:00

Energy use in the U.S. can be split into two large pies. One pie is electricity use in homes, buildings and industry. The other is transportation, which is powered primarily by liquid fuels like gasoline and diesel.

There are some exceptions, as well as a few overlapping categories of fuel use. For example, there's direct industrial use of liquid fuel (a fairly significant quantity), some liquids burned to make electricity (once a significant amount, now very small), and, as of now, a very small amount of electricity used to power electric vehicles (EVs).

American consumers spend, on average, more than $1 billion every day on each of these energy uses.

Electric utilities have never made a serious effort to attack the transportation market at scale. Historically, this made sense. Transportation infrastructure was built around liquid fuels and there was no viable electric-drive alternative.

Within the past few years, however, a technological transformation has occurred in the electric vehicle sector, making the prospect of utilities taking a piece of the oil companies’ market share far more realistic.

There are now better batteries, faster charging options and proven EVs on the road. Virtually every auto manufacturer is building a full electric or plug-in hybrid model. This evolution has happened independently of the electric utilities. Aside from rolling out a handful of charging stations or the occasional plan for how to manage large numbers of EVs on the distribution network, utilities have been little more than observers.

Fighting the wrong battle?

There is an opportunity right in front of utilities. Yet the industry seems more focused on the threat of distributed energy and the possibility of a utility death spiral.

The risk to utilities is real. A combination of distributed energy, energy efficiency, changing consumption behavior and weak economic growth has resulted in virtually no growth in electricity demand since 2008. That will force higher rates for each unit of electricity sold, which in turn will make the alternative technologies more attractive and accelerate consumer adoption.

So far, these factors threaten only a small portion of a utility’s total sales of electricity. The real threat is declining profitability, which will impair access to low-cost capital and hurt asset-intensive businesses like electric utilities.

An unprecedented opportunity

While electric utilities focus on this threat, they have largely ignored one of their greatest growth opportunities. By attacking the transportation market, electric companies could offset slowly sinking demand and drive new revenue growth throughout the industry -- addressing concerns about the impact of efficiency and on-site generation, while also buying more time to adapt.

By proactively accelerating the widespread adoption of EVs and plug-in hybrids, the electric industry could capture a significant portion of the revenue from the transportation energy market. A recent report from the United Nations suggested that EVs could make up 100 percent of vehicle sales in the U.S. within fifteen years under an aggressive support scenario.

There are challenges, of course. They include range anxiety, insufficient vehicle-charging infrastructure in many areas, limits on the affordability of better batteries, and a lack of consumer confidence in the new technology. However, these challenges can be addressed through faster deployment of infrastructure and consumer education.

Electric utilities are perfectly positioned to manage the infrastructure needs and to accelerate understanding and acceptance of EV technology. There are any number of ways this could be accomplished, but here are a few:

  • Free long-term financing for at-home high-speed charging stations when a consumer buys an EV -- utilities have access to low-cost capital and direct billing to consumers (a simple amortization sketch follows this list).
  • A partnership between a utility and an EV manufacturer could allow for innovative financing options (e.g., an EV with loan payments collected as part of a Duke Energy bill). This has the primary benefit of accelerating adoption through easier financing, but also increases knowledge simply by promoting the plan.
  • For a utility like Exelon, with excess off-peak production due to its large nuclear fleet, a carefully crafted program could absorb some of this power while providing low-cost fuel for consumers with EVs.
  • Rebates for EV buyers could be recovered over the life of the vehicle through increased electricity sales.
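
The on-bill financing idea in the first bullet is ordinary loan amortization. As a sketch, assume a $1,500 installed charging station financed at a utility's 3 percent cost of capital over ten years -- all illustrative figures, not from any actual utility program:

```python
# Standard loan-amortization arithmetic for an on-bill charging-station
# loan. The $1,500 cost, 3% rate and 10-year term are assumptions.

def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12  # monthly rate
    n = years * 12        # number of payments
    return principal * r / (1 - (1 + r) ** -n)

pmt = monthly_payment(1_500, 0.03, 10)
print(f"On-bill line item: ${pmt:.2f}/month")  # ~$14.48/month
```

A charge of that size disappears into a typical monthly electric bill, which is exactly why utility financing could remove the upfront-cost barrier.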

As consumer anxiety and infrastructure needs get addressed by utilities, all of these strategies could lead to the permanent capture of consumers’ transport energy purchases.

It's about more than money

There are three reasons why accelerating this transition would be good for America, and not just utility executives and shareholders:

1. Despite increasing output of liquid fuels in the U.S., the country still imports hundreds of billions of dollars in oil every year and will continue to do so unless there is a structural change in demand.

2. Regardless of the geographic source, oil will continue to get more expensive. It is a global commodity, and the U.S. can only have so much impact on the global supply/demand balance.

3. This shift toward electrification would also significantly support efforts to manage global warming. There is no viable way to reduce carbon dioxide emissions from the lifecycle of a gallon of gasoline -- and per-gallon emissions will actually rise as we pursue more unconventional sources of oil that require more energy to extract and refine. The U.S. electric grid has already begun to reduce emissions per unit of electricity as inefficient coal plants are phased out and renewable energy use increases. New EPA rules will accelerate this transition.

Despite the clear benefits to the electric industry, it's unclear how this evolution will play out. Utilities have been slow to react to rapidly changing technology -- and it's worth noting that under existing regulatory frameworks, there is little incentive to do so. There are exceptions, such as NRG, but very few electric companies seem excited about promoting electric transportation.

When will U.S. utilities recognize the opportunity that lies before them?

***

Elias Hinckley is a strategic advisor on energy finance and policy to investors, energy companies and government agencies. An energy and tax partner with the law firm Sullivan & Worcester, he helps his clients solve the challenges of a changing energy landscape.

Categories: Industry News

Solexel, Thin-Silicon Solar Startup, Lands $31M More in VC Funding

Mon, 07/21/2014 - 12:30

Despite investor skittishness in a recovering global solar market, Solexel, a thin-silicon solar cell and module builder, just raised $31 million in a Round D of funding aimed at moving the company to commercial production.

Solexel added new investor GAF, a large roofing materials manufacturer, to its roster of investors, which includes SunPower, KPCB, Technology Partners, DAG Ventures, Gentry Ventures, Northgate Capital, GSV Capital, Jasper Ridge Partners, and Spirox. The firm's board of directors includes Mehrdad Moslehi and Michael Wingert of Solexel, as well as Ira Ehrenpreis of Technology Partners, Les Vadasz, John Denniston, Jan van Dokkum of KPCB, Larry Aschebrook of Gentry, and Greg Williams of DAG Ventures. Gentry's investments include Fisker, Bloom Energy, Amyris, Agrivida, and Glori Energy -- all Kleiner Perkins portfolio companies.

Mark Kerstens, Solexel’s chief sales and marketing officer, as well as its acting CFO, informed GTM that SunPower did not reinvest in this round, although all of the firm's other equity investors did. The startup's total VC funding to date is north of $200 million. Solexel has also scored $17 million in DOE and NSF grants and is still an active participant in the DOE SunShot program. The 55-employee startup hit an NREL-certified cell efficiency of 21.2 percent in 2014.

Solexel is looking to bring 20-percent-efficient photovoltaic modules to market in 2015. First Solar, the thin-film solar leader, recently announced that its manufacturing cost will plunge to $0.40 per watt by 2017. (GTM Research Senior Analyst Shyam Mehta looks at the potential of 36 cents per watt silicon-based modules here.)

As we reported previously, Solexel looks to partner in Malaysia to build the cells. The firm currently has a megawatt-scale pilot line in Milpitas which it intends to "copy-exact in Malaysia," according to Kerstens.

Solexel is hoping to mass-produce 35-micron-thick, high-performance, low-cost monocrystalline solar cells using a lift-off technology based on a reusable template and a porous silicon substrate. According to the company's claims, the process ensures that the thin silicon is supported during handling and processing, while the back-contact, n-type cell dispenses with the need for expensive silver, using aluminum instead. The process uses no wet steps, according to the CEO, Michael Wingert, and employs CVD on trichlorosilane gas at atmospheric pressure, with silicon deposited at a rate of 2.5 microns per minute. The cell uses nearly one-tenth the silicon of conventional c-Si cells, at about 0.5 grams per watt.
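
That silicon-usage claim survives a first-principles sanity check. Using the standard density of silicon and standard-test-condition irradiance, plus the company's certified 21.2 percent efficiency, a 35-micron layer works out to roughly 0.38 grams per watt before template and handling losses -- consistent with the roughly 0.5 grams per watt Solexel quotes:

```python
# Sanity check on silicon usage: mass of a 35-micron silicon layer per
# watt of output. Constants are standard physics; efficiency is the
# article's certified figure. Template and kerf-style process losses
# (not modeled) push the real number toward Solexel's ~0.5 g/W.

SI_DENSITY = 2.33    # g/cm^3
IRRADIANCE = 1000.0  # W/m^2 at standard test conditions

thickness_cm = 35e-4  # 35 microns
efficiency = 0.212    # NREL-certified cell efficiency

grams_per_m2 = SI_DENSITY * thickness_cm * 1e4  # 1 m^2 = 1e4 cm^2
watts_per_m2 = IRRADIANCE * efficiency          # 212 W/m^2

print(f"{grams_per_m2 / watts_per_m2:.2f} g/W")  # ~0.38 g/W
```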

Mehrdad Moslehi, the CTO and founder of the firm, told GTM that at 156mm x 156mm, Solexel produces the largest back-contact cell in the industry. The firm may use a contract manufacturer to construct the modules, which will be available in frameless as well as framed versions. Solexel claims that its cells don't need the support of glass, and it envisions using lightweight, non-glass sandwich panels in future product offerings.

A resin and fiber carrier, akin to circuit board material, supports the thin cell and allows a diode to be added for module shading tolerance.

Kerstens notes, "Where we've done quite a bit of work is in form factor." He added that the flexible cells don't demand the standard glass sandwich and aluminum frame, and the company is "excited about working with lighter-weight solutions." That typically means building-integrated photovoltaics or solar tiles for residential usage and lightweight flexible panels for flat or low-slope commercial rooftops. The addition of roofing materials manufacturer GAF to the firm's list of investors is not a coincidence.

The acting CFO was very involved in the fundraising efforts and noted that the investor landscape has improved since the company's Round C raise in 2012 and 2013. Kerstens said that the reception has improved in family offices and with high-net-worth individuals. (This is a theme we'll explore in depth at the upcoming NextWave Greentech Investing event next month. Kerstens plans to attend.)

Other firms in the thin-silicon business include 1366 Technologies, with its "direct wafer" technology that converts molten silicon directly into wafers, and Crystal Solar, with its vapor deposition process for making thin crystalline silicon wafers.

Slides from Solexel's 2012 presentation:


Categories: Industry News

8 Strategies That Can Help Greentech Hardware Startups Win

Mon, 07/21/2014 - 12:09

Someone reached out to me the other day to ask about greentech startups, and during the discussion mentioned having heard that it typically takes at least $100 million to successfully develop and commercialize a green technology. 

I replied that I knew of plenty of startups in our sector that have gotten to significant revenue on far less than that. The response: "Oh, I was talking about hardware-based greentech."

I responded that I was talking about hardware-based greentech startups, too. That killed the conversation.

As we start to really differentiate between "last-wave" and next-wave greentech, it seems that there's a basic disconnect when it comes to how people talk about "hardware" startups.

People I speak with outside our industry often reflexively think of greentech as being capital-intensive, because to them the only "real" greentech plays involve a fab or a production plant or a specialized manufacturing plant. When I point out that many greentech startups nowadays are software- or web- or service-based, they kind of nod politely and go back to talking about major pieces of capital equipment.

And that makes sense -- if your frame of reference is the last wave of greentech investing, because that's exactly where a large portion of the capital and a majority of the hype went. I went back and estimated how much capital was required for certain famous greentech examples simply to get to commercialization and replicable revenue. (These estimates are based mostly on what's available via the web and S-1s, so calculations aren't exact.)

  • Solyndra: $375 million
  • MiaSolé: $335 million
  • Bloom: $450 million
  • Tesla: $105 million
  • KiOR: $275 million

In an era when entrepreneurs and venture capitalists are looking for business opportunities that are quick to market and can scale rapidly, you can see why anyone with this framing of our sector would be running for the hills. If the above list of startups represents your idea of what "greentech" looks like, you can find the occasional winner (Tesla), but not a consistent path to returns. And there's certainly a heck of a lot of risk along the way.

That's why next-wave greentech is about opportunities where the entrepreneurs can quickly get to market and start generating revenue with significantly less capital than the above last wave plays. Here are a few examples of greentech startups that have become successful exits or at least gotten to tens of millions of dollars in revenue, and how much capital was required for them to get to successful commercialization:

  • Opower: ~$15 million
  • SolarCity: ~$10 million
  • Zep Solar: ~$10 million
  • ecoATM: <$20 million
  • Digital Lumens: $11 million
  • Next Step Living: $2 million
  • And of course Nest, which keeps the size of its Series A and B a secret, but launched product less than two years after founding.

Many of these companies eventually raised more capital than what's shown above. But it was raised to fund growth, with solutions for which the market had already demonstrated some acceptance.

You can't make reliable returns as an entrepreneur or an investor if it's going to take hundreds of millions of dollars and several years of development before you know if you've even got a solution that the market is willing to pay for. And right now in particular, the investment community isn't interested in funding those types of moon shots.

Which brings me back to hardware. Yes, some of the companies on that second list are predominantly software- and/or service-based, but there are a few hardware companies on that list as well. "Hardware" is simply any physical product. It can be a huge, honking piece of expensive machinery, or it can be a small consumer product, or anything in between. You can't really lump them all together like some people want to, nor can you talk about "greentech hardware" and only mean the big, expensive kind.

My point is that you shouldn't exclude hardware from the "next wave" concept just because some hardware is necessarily expensive and slow to commercialize. Capital-efficient, scalable greentech is not just about software and web plays. Not at all.

So yes, next-wave entrepreneurs are tackling -- and in many cases succeeding with -- hardware solutions. But how?

We can start to talk about some lessons learned already from early successes. Not every single one of these lessons is applicable to every greentech hardware startup, of course, but the more that entrepreneurs are able to hew to these success factors, the easier their pathway to winning.

1. Design for contract manufacturing

Building a factory is expensive. So unless you're intent on achieving an extraordinary level of quality control or you're building something that simply cannot be contract-manufactured, you don't want to build a factory as a startup. 

Your first few sold products may need to be handmade and hand-assembled, but to scale up, many successful hardware entrepreneurs find they need to design for contract manufacturing right from the start. Cyril Ebersweiler and Benjamin Joffe have some good thoughts in this respect. But I think about these things in terms of team: Make sure someone on your technical team is competent at dealing with contract manufacturing and knowing how to design with it in mind.

2. Small and distributed are beautiful

It's no coincidence that the fastest-growing markets in greentech are being driven by distributed assets like rooftop PV, LED lighting, and such. It's because that's where new technologies and products can most readily scale.

First of all, it's easier to build such devices, in most cases, for obvious cost and complexity reasons.

Second, they tend to compete with edge-of-network costs, which are most often higher than those found in centralized incumbent systems. With electricity, for instance, it's easier to compete with retail electricity prices at the consumer end than with wholesale power costs.

Third, distributed assets are able to more readily apply to niche, accessible applications where they're a good fit even during their early, higher-cost stages. This is why, for example, Digital Lumens went after refrigerated warehouses (where the economics were a no-brainer and the customers were relatively accessible) as its initial proof to customers. With that foothold established, the firm was then able to expand across the whole industrial lighting market.

Fourth, to the extent that smaller and distributed assets are going after a more numerous customer set, that can increase the odds of finding a critical mass of early adopters willing to try out the technology, even if it doesn't yet make sense economically for the larger set of customers.

So we see smaller, simpler devices able to get a foothold in the market, using that foothold to establish reputation and get early revenue and market feedback, and then competing against other high-cost or even nonexistent options in order to scale up as their own costs go down. This doesn't work for large, centralized production technologies, which really cannot compete in the market until they have already driven down costs.

3. Scale up capacity via modular design

Just because the assets and devices are small doesn't mean they can't add up to big capacity systems. Several industrial technology startups such as newterra are building standardized and modular systems that can be combined into larger-capacity systems when needed. This provides maximum flexibility for addressing a wider range of applications, which makes both manufacturing and selling easier. 

4. Leverage intelligence

Distributed, modular devices are only truly enabled by intelligence. Automation and coordinated data allow for the hardware to be much cheaper, because the intelligence enables a wider array of capabilities and applications using simpler devices. Plus, they enable services revenues beyond just upfront equipment sales. People rightfully credit the product design of the Nest thermostat, but it's the connected intelligence of the devices that really drove adoption (that, and very successful marketing, of course).

Embedded intelligence also allows you to sell hardware that is inexpensive to make because it has few if any proprietary components, and yet has significant proprietary IP in the controls built in to the overall solution. Plus, the data gathered from networked devices in the field can be valuable in its own right.

5. Test your application with the market before you build a product

You are not really selling hardware; you are selling the service that the device provides. To the extent that it is practicable, test that service with customers before expensively developing a proprietary piece of hardware. I've met with entrepreneurs who have built their initial commercial systems with off-the-shelf parts, even while still developing the proprietary solutions they'll integrate into future offerings. OsComp is one such example.

But at the very least, go verbally test your service before you spend a lot of time and money developing a proprietary technology. To be fair, you shouldn't expect that customers will know what they want before they actually have something to interact with, so don't just go asking open-ended questions in hopes that customers will write your product specs for you. But make sure you're not developing a technology in search of a problem -- that almost never works.

6. Prototype cheaply, and get feedback quickly

Hardware entrepreneurs now have access to rapid prototyping options they never had before. 3-D printers are a wonderful way to mock up prototypes that customers can play around with, in some cases with off-the-shelf components inside. These and other expensive pieces of equipment are now accessible at various prototyping incubators and labs that can be found in many entrepreneurial regions. For instance, here in Boston, we have Greentown Labs and Bolt, just to name two examples. Both have lots of equipment for entrepreneurs who want to develop their first prototypes and be able to go interact with customers.

Getting that early feedback is important for testing your application. And it's OK if the first feedback is negative. Fail quickly, and learn from the feedback how to design the next product so that you're eventually giving customers something they really want.

7. Sell directly before you go to channel partners

Many a great device has languished because channel partners were slow to sign up to sell it, and then were not good at selling it. You're developing something new to the market, with new capabilities. Most channel players (distributors, system integrators, reps) are only used to selling existing products.

The logic of selling through channels is compelling for a startup. You don't have a big sales force and you don't have the credibility that a channel partner does with its existing customers. It seems like the fastest route to scaling up revenues. But again and again, I've seen startups that attempt that route fail, at least if they're doing it as their first go-to-market strategy. 

Lights are a great example. LED lighting is introducing a brand-new, bundled product to the marketplace. LED fixtures are an upfront sale with a total-cost-of-ownership (TCO) advantage over incumbent technologies; they don't require changing bulbs and they run more efficiently. Yet they often have a higher upfront cost. And they can offer additional capabilities, such as embedded intelligence, that traditional fixtures can't even begin to offer, making them new to everyone. Existing channels weren't prepared to sell them until relatively recently.
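
To make the TCO case concrete, here's an illustrative ten-year comparison for a single fixture. Every number is a made-up but plausible assumption, not data from the article or any vendor:

```python
# Illustrative fixture-level TCO over 10 years. All inputs are
# assumptions for the sake of the example.

HOURS_PER_YEAR = 4_000
RATE = 0.12  # $/kWh

def tco(upfront, watts, relamp_cost_per_year, years=10):
    energy = watts / 1000 * HOURS_PER_YEAR * years * RATE
    return upfront + energy + relamp_cost_per_year * years

led = tco(upfront=250, watts=100, relamp_cost_per_year=0)  # no bulb changes
hid = tco(upfront=80, watts=400, relamp_cost_per_year=25)  # lamps plus labor

print(f"LED: ${led:,.0f}  incumbent: ${hid:,.0f}")  # LED: $730  incumbent: $2,250
```

The fixture that costs three times as much upfront wins easily over its life -- but only a seller who can walk the customer through this math will close the deal, which is part of the argument for selling directly first.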

Selling directly to customers allows you to make the business case directly. It allows you to get instant feedback. It allows you to learn how to sell the hardware, and learn what ancillary services the customers will need to make the economics work. All of this not only helps you get early traction and design your next product iteration for maximum customer acceptance, but it also eventually helps you select and enlist the right channel partners for when you have a product that is truly ready to scale.

8. Offer customers no-money-down solutions

Some hardware simply isn't that expensive. But many hardware solutions are expensive enough that the upfront cost can be an adoption hurdle, even if the TCO pencils out on paper. Customers may have cash challenges, or they may simply not believe the claimed paybacks. So for many hardware entrepreneurs, customer financing ends up being an important option to offer.

For B2B solutions, there are emerging offerings like those from Noesis Energy that can help provide this, or there are any number of equipment leasing vendors. Some startups end up self-financing some of these solutions (although that requires some good capital resources), selling via a capital lease, or even using a form of hardware-as-a-service. You may not be able to get third-party financing for your first few installations, but it's worth working toward underwritability from the beginning, with financing as a long-term goal.
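
The underwriting logic behind hardware-as-a-service is simple enough to sketch: the monthly fee has to recover the vendor's cost of capital (plus a margin) while staying below the customer's verified savings. All figures below are hypothetical:

```python
# Hardware-as-a-service viability check. Inputs are assumptions, not
# data from any actual deal.

def haas_fee(installed_cost, annual_rate, years, margin=1.15):
    """Amortized monthly fee with a service margin on top."""
    r, n = annual_rate / 12, years * 12
    return installed_cost * r / (1 - (1 + r) ** -n) * margin

fee = haas_fee(installed_cost=12_000, annual_rate=0.08, years=5)
customer_savings = 350.0  # $/month, from the TCO case made to the customer

print(f"Fee ${fee:.0f}/mo vs. savings ${customer_savings:.0f}/mo -> "
      f"deal {'works' if fee < customer_savings else 'fails'}")  # ~$280 vs. $350
```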

Hardware and software/web winners are actually similar in that they typically got into the market quickly and inexpensively. Hardware isn't inherently a poor fit for the greentech next wave. In fact, entrepreneurs applying these lessons are already found in just about every subsector of greentech -- even if people outside our market don't know to look for them.

***

To hear more from startups and investors on how to develop a winning strategy, come to Greentech Media's NextWave conference in Menlo Park on August 5.

Categories: Industry News