Replies: 8 comments 13 replies
-
I would emphasize a point alluded to by the third and fourth pros: an archival reference provides some confidence that future maintainers and users can resolve questions about algorithms or their implementation. This has been necessary several times, e.g., #1239. Having a reference doesn't ensure technical merit, but at least it should provide clarity. Keeping implementations faithful to their references provides clarity and addresses one of the deficiencies in PV performance modeling identified at the original 2010 PVPMC workshop: we all said we used the "Perez" model, but we all had different implementations whose results disagreed.
-
Thank you @mikofski for the excellent start to an important discussion! I see two questions here:
I have been thinking that we could establish a (hopefully) simple points system for proposed features to achieve a more balanced assessment of suitability for inclusion in pvlib as opposed to accept/reject on the basis of this one specific criterion. Aspects to be rated could be:
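A points system like this could be sketched as a small helper. The criteria names, the 0-5 scale, and the acceptance threshold below are all hypothetical assumptions for illustration, not anything agreed on in this thread:

```python
# Hypothetical sketch of a points system for rating feature proposals.
# The criteria, scale (0-5), and threshold are illustrative assumptions.
from dataclasses import dataclass, field

# Example rating aspects (hypothetical, not from the discussion).
CRITERIA = ("reference_quality", "validation", "community_interest",
            "maintenance_burden")

@dataclass
class Proposal:
    name: str
    scores: dict = field(default_factory=dict)  # criterion -> 0..5

    def total(self) -> int:
        # Criteria not yet scored count as zero.
        return sum(self.scores.get(c, 0) for c in CRITERIA)

def suitable(proposal: Proposal, threshold: int = 12) -> bool:
    """Accept when the summed score meets a (hypothetical) threshold."""
    return proposal.total() >= threshold

p = Proposal("new transposition model",
             {"reference_quality": 4, "validation": 3,
              "community_interest": 4, "maintenance_burden": 2})
print(p.total(), suitable(p))  # 13 True
```

The point of summing several aspects rather than gating on a single criterion is that a strong showing on, say, validation could offset a weaker archival reference.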
Of course there is a process question too: who evaluates and who decides? Full disclosure: I would like to convince other maintainers to accept #1878 for inclusion in pvlib; it is currently blocked.
-
More points:
-
Good discussion! I may be restating the obvious, but I've seen peer-reviewed publications that appeared to have had less thorough scrutiny/review than would happen as part of a pvlib PR. I like @adriesse's list of aspects. And I like @cwhanse's point about archival references. It seems to me that the requirements for something to be included might need to scale with the complexity and novelty of the contribution. Small modifications to existing approaches that have obvious benefits should be accepted with minimal hurdles, while a dramatically different approach might need more rigorous review/validation. To @mikofski's point,
That does seem like something to be careful of. On the other hand, there should be some benefits from being a contributor/maintainer. Maybe that isn't the right benefit...?
-
@williamhobbs I propose a dedicated group with rotating membership and variable size. It should probably not be made up entirely of maintainers, and could include non-contributing members who are either users or industry experts. Perhaps, in combination with a Zenodo archive, they could judge the merit of new features based on your and Anton's points above, and make recommendations to the maintainers and the community at large? Something like that?
-
Hello folks, this is a great discussion and I thought I'd pop back out of the woodwork with some thoughts. I'm admittedly very out of the loop on a lot of the recent developments in the package, so apologies if any of this is already covered.

When I think about pvlib, its core purpose is to be useful. For most people most of the time, that probably means the core workflows in the package are based on peer-reviewed and archival references, because it is useful to be able to trust their outputs as part of a larger project, and to know that "someone" has looked at them. However, for other users it is likely useful to have workflows and tools that haven't yet been reviewed but were developed precisely because they are useful. I would also imagine there are many applications that sit on the border between something that would make sense to peer review and something that simply makes sense in the flow of the model or workflow being implemented.

I would also suspect, as has been pointed out in this thread, that implementing an algorithm in pvlib and running it through a wide range of conditions and test cases is probably a more rigorous test than what is required to pass peer review. There have been many good examples of the community finding and correcting issues with published models. An aspiration here might be that the pvlib implementation (with associated tests) is the reference implementation of a function (and the output that is incentivized by funding bodies), not the publication.
-
Any ideas on how to wrap this discussion up?
-
I propose a policy for references:
-
Do pvlib features require a reference or citation from the literature? I can't find where this is explicitly stated. Should this even be a requirement?