Last week PV Evolution Labs (PVEL) released its PV Module Reliability Scorecard 2021, detailing the key findings from the previous year’s module PQP testing series. The testing and scorecard put the industry’s modules through their paces, acting as the gold standard for module reliability.
The findings, published last week and detailed in a webinar co-hosted by PVEL and PV Tech, highlighted areas of both promise and concern. While more manufacturers earned ‘Top Performer’ status and thermal cycling results continued to improve, junction box failures continued to rise, with some modules failing straight out of the pallet.
In an interview with PV Tech, PVEL’s head of PV module business Tristan Erion-Lorico speaks to Liam Stoker about how the industry could overcome its junction box failure issue, how testing for the elements is evolving and what developers must consider when using the scorecard.
One of the top line figures from the scorecard was the fact that more manufacturers had achieved Top Performer status than before. Is that a testament to greater performance in the sector, or does it speak of the sector’s overall growth?
I would say it’s more related to overall growth and the traction in the industry of requiring PQP testing. I think as sites have gotten larger, and there’s more money on the line, and there’s this influx of institutional investors that are risk averse, the manufacturers are realising that they need this testing to be competitive in the industry. Before, they could sell to, I don’t know, a 5MW site, and that owner and investor wouldn’t require PQP; now they’re trying to sell to the 50MW site or larger and this is a requirement. This is just part of the due diligence that these institutional investors are doing. That’s pushing more manufacturers to participate in PQP testing, which, as a by-product, typically means more Top Performers.
What were some of the key trends that PVEL picked up from this year’s scorecard, and what would you say the industry needs to be aware of?
The fact that one in four of the bills of materials (BOMs) we tested had at least one failure, versus one in five in 2020. Drilling down into those failures further, we saw that one in three manufacturers had at least one junction box failure, versus one in five last year. Pretty much each year of the scorecard that number gets worse, with more manufacturers experiencing more junction box issues. I think both of those are concerning, particularly as with some of these junction box failures we’re seeing the lids falling off the junction boxes in transportation; as we take the modules out of the pallet, the lids are missing. That’s concerning. And we’ll do wet leakage testing, which looks at the insulation resistance of the module, how good an electrical insulator it is, and we’ll find that out of the box manufacturers are failing that wet leakage test, which is a core certification test. If you can’t pass that during certification, your products can’t be certified and you can’t sell them. Seeing the number of manufacturers that are struggling with that basic test, which has been part of certification for, frankly, over a decade… you need to pass wet leakage to be able to sell your products, and [for] modules [to be] failing that test before we even get to any of the extended reliability testing… That’s significant, and that is something that we would have hoped the industry would have solved by now.
Is there any indication as to what might be the root cause of this, or anything that immediately strikes you as a potential fix?
In the scorecard, we talk about the fact that the junction box step is, for the most part, still a manual process. You might have an automated silicone dispenser putting the silicone around the junction box, but then someone is manually putting that into place on the module, and I think in most plants they are still manually putting the lid on and manually doing the dispensing. There also used to be one large junction box for monofacial full-cell modules, and now with half-cell and bifacial it’s gone to three separate junction boxes. So the people at that station need to do that [process] three times rather than once. There’s just a bigger opportunity for error. And when you think of the scale of this manufacturing… I’m not saying that it’s necessarily the big players that had all of the junction box problems, but just on a multi-gigawatt scale, we do report that over 100 million cells are soldered a day. I don’t know how many junction boxes are installed per day, but it’s in the millions. You just need strong process control and quality control, and if it’s a manual process, you’re still leaving a lot of it up to training and robust procedures. And apparently, that’s not sufficient to avoid these problems. So I think it goes back to scale, and having to do it manually.
And then as an extension of that, with the industry on the cusp of a really quite significant increase in scale over the next year or two, it really does bring the magnitude of the task into view; those numbers are only going to grow and grow over the next few years.
Yeah, that’s one thing that we also report in the scorecard. We’re soldering 100 million cells a day right now, but we need to solder over a billion cells a day to hit climate change targets. So the scale is just going to keep increasing, and we can’t sacrifice quality for scale.
There’s obviously a lot of mechanical stress testing involved in the PQP, and during last week’s webinar there was mention of the forthcoming hail stress sequence as well. With solar being deployed in increasingly far-flung areas, and having to endure more and more adverse conditions, is there more importance now being placed on that mechanical stress testing, as well as on backsheet endurance?
Yeah, I would say so. As more solar gets deployed, it’s going to logically go in less ideal environments. And so making sure that the module selection is robust for the environments that it’s ending up in is important. That’s what we’re looking to achieve with our PQP testing.
With the mechanical stress and other endurance tests, how are the test conditions selected, and how much input do developers have into that, given their experience of developing in these conditions?
For that test we’re really using the IEC 61215 static mechanical load requirements as the basis. And to be honest, that doesn’t represent the most extreme case. For some residential or even commercial rooftop racking designs, you can mount the module at just its four extremities, and on a flat rooftop you can get tremendous snow loads in certain areas of the world while mounting it at just the four corners. Our mechanical stress sequence isn’t designed to pick up weaknesses related to those extreme cases: our mounting is fairly conservative and our test load is the minimum requirement from IEC 61215. Now, at the same time, for people that are mounting their modules in, you could call it the worst case, or in a more extreme way, we do recommend batch testing, or just qualification testing, using the racking that they’ve chosen and the design loads that they are expecting, and we’ll do that batch testing separate from the PQP for those more extreme cases. We find that in some cases the microcrack susceptibility significantly increases when you’re mounting in a non-ideal way, compared with the [mounting] that we use during the mechanical stress sequence.
Have you picked up any potential increased susceptibility to microcracks with regard to larger-format modules, given industry mounting or racking structures as they are today? This is something we’ve heard developers express some concern over.
We’re equally concerned about this. We do have a number of BOMs of large-format modules going through the mechanical stress test, but we haven’t seen or compiled the results of that yet. But we have seen, and we’ve already reported on, an increase in microcrack susceptibility between identical BOMs of 60-cell and 72-cell modules using smaller-format cells. So with 158.75mm and 166mm [cells] we see a pretty significant difference in microcracking between two identical modules of different sizes. By extension, it stands to reason that going to even larger modules is going to mean more microcracking. Now, I think we still have to go through the testing to see if that increased microcracking truly results in increased power loss. You know, as we showed on the webinar, that multi-busbar module had tons of cracks but very low power loss, because it was multi-busbar. Microcracks aren’t always a bad thing – I don’t think they’re a good thing, but they don’t always lead to significant power loss. But we do, based on our previous testing of different module sizes, predict that there will be more microcracking in these large-format modules.
And I guess that then relates to, when picking a particular module type or size as well, it really will come down to the specifics of that project in terms of location, exposure to conditions and all the other finer details.
Exactly. Our recommendation for developers and investors is to use the PQP reports as a guide. We’re very transparent about how the modules are mounted for the mechanical stress sequence and what loads were used, so if that isn’t representative of their project site, and there was a significant amount of damage that we reported, they should probably test their own designs to see whether their more extreme cases lead to failure. We have seen modules break in mechanical stress sequence testing, and that’s also pretty significant, considering we’re using pretty conservative mounting and the minimum 61215 test loads. We have seen broken glass in that, [and] I think we’re going to see more of that as modules get larger, particularly because while they’re getting larger, the frames aren’t necessarily getting thicker and the glass isn’t getting thicker; it’s the same module bill of materials, just on a larger format, and there’s inherently some risk involved there.
Other than the susceptibility to microcracks, are there any particular big trends or themes that you’re starting to see from these modules?
The other test we’re concerned by would be thermal cycling, just [because of] the different thermal expansion coefficients across a larger area. As we’ve reported in the last couple of scorecards, thermal cycling results have improved, but we’re worried that trend might go the opposite way as we get to larger-format modules, with the distance between the cells being decreased to achieve higher efficiency. I think the risks there will be borne out in thermal cycling testing. Now, we have seen some of these, whatever you want to call them – gapless technology, seamless soldering or tiling ribbon – we have seen decent thermal cycling results on some of those modules thus far when they weren’t large-format, so I think we’ve seen so far that it can be done reliably, but we haven’t yet finished the thermal cycling test sequence on large-format modules with gapless soldering. I think until we’ve tested a number of BOMs through that and gotten more comfortable, that’s still quite a question mark.