Jenkins test coverage #15
As much as I appreciate a home-based solution, I think figuring out how to set it up on AWS or another cloud provider is a better long-term solution. Feel free to experiment, of course, but do so with a mindset that it's temporary. Not sure what you mean about donating code to ASF. How can one "donate" free code?
Basically, this project would have to become an Apache project in order to use ASF resources, i.e. it would become Apache OSHI. For ASF, there's a process for submitting projects as ASF projects. A chunk of it is just the legal aspect of it, related to copyrights, IP issues, etc. It looks like it just starts with a proposal. I've never gone through it myself, but I may in the future: https://incubator.apache.org/policy/process.html Since I'm already in the ASF, I can fulfill some of the roles. Honestly, I think OSHI should be its own top-level project. It doesn't happen overnight, but if this is the route you want to go, there are certainly a number of benefits and some potential issues to be aware of.
Anyhow, if you are interested, we can talk outside of GitHub for further info. I'll look into AWS pricing.
First, thank you @spyhunter99 for contributing code to OSHI in the last year. Hope to see more! I am very flattered to hear that you think OSHI could be a top-level ASF project. It's a respected brand, and the ASF is a good home for large, active, community-developed projects that require significant governance and process theatre. You get some benefits from the infrastructure, coupled with more process. We could throw a party to celebrate.

I am very much against process overhead without any customer benefit. Until a large number of OSHI contributors are saying that ASF is the way to go, and until incremental benefits have already added up, this feels like an attempt at creating a large amount of non-code work just to rename the project.

I am excited to see more CI in this project, whether it's running on AWS or starts as homegrown. If that delivers value, there will be no shortage of ways to find funding for the hardware; I'll pitch in. Finally, if any discussion about donating OSHI to ASF is to be had, please keep it here in the open, don't take it offline.
The ASF overhead is a quarterly email about the project status and a structured release process that involves a set of contributors voting on it. Other than that, it's not much different. I run one of the projects and it's not bad. It does increase the time from "decide to cut a version" to "globally available in Central" due to the voting, but it's been helpful for me, as several undiscovered bugs were found in the process of committers pulling down the proposed release version and running some manual tests.

AWS pricing was surprisingly expensive: anywhere from $40-80 a month depending on how much processing time is used.

In terms of Jenkins, I've been experimenting with getting Jenkins to build on PRs. I think it's only possible if the Jenkins server is internet-facing. I was hoping a polling mechanism would have worked with PRs, but I'm not sure if it's supported currently. I can definitely get it to build with every commit, but that's not too helpful; we need it to post a comment or set the build status on PRs.
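For what it's worth, setting a PR's build status doesn't strictly require a plugin: it's a single REST call to GitHub's commit status API. Here's a minimal, hypothetical sketch (the repo slug, token variable, and context string are placeholders, not anything this project has set up) of what a self-hosted Jenkins job could run after the tests finish:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Hypothetical sketch: report a commit status back to GitHub from a CI job.
 * Assumes Java 11+ and a token with repo:status scope in GITHUB_TOKEN.
 */
public class GitHubStatusReporter {

    public static void main(String[] args) throws Exception {
        String repo = "oshi/oshi";                    // placeholder repo slug
        String sha = args[0];                         // commit SHA under test
        String state = args[1];                       // "pending", "success", or "failure"
        String token = System.getenv("GITHUB_TOKEN"); // personal access token

        // Minimal JSON body for POST /repos/{owner}/{repo}/statuses/{sha}
        String body = String.format(
                "{\"state\":\"%s\",\"context\":\"jenkins/platform-tests\","
                        + "\"description\":\"Self-hosted Jenkins build\"}",
                state);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.github.com/repos/" + repo + "/statuses/" + sha))
                .header("Authorization", "token " + token)
                .header("Accept", "application/vnd.github+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("GitHub responded with HTTP " + response.statusCode());
    }
}
```

The status then shows up on the PR's checks panel even if the Jenkins server itself isn't publicly reachable, since only outbound HTTPS is needed.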
AWS has (experimental) bare metal instances that would probably be the optimal testing ground for platform tests. Their hardware won't vary at all and most importantly they can simulate any kind of OS configuration on an unvirtualized machine. The problem is that those instances are extremely expensive ($5/hr) to keep idling. The logical solution is to run the Jenkins controller on a regular $10/month virtualized instance and have it spin up a bunch of bare metal build slaves for a short amount of time. To keep costs down, Jenkins wouldn't automatically build each PR. If the test suite takes around 10 minutes and we're testing on 5 platforms, that's $4.608 * 1/6 * 5 = $3.84 for each build on metal instances. It's still pretty high, but I think that's what it would take to have a set of quality tests. Testing on containers would be much cheaper (nearly $0 per build), but I don't think that would be as valuable given the nature of OSHI. The other option is to crowdsource the whole thing as discussed here. There are quite a few benefits:
Of course there are also some serious downsides (it's not automatable like CI).
So don't keep 'em idling! Have a script that starts them up, runs your analysis, and shuts them down. I did this for years with cloud servers on Rackspace (temporarily spinning up hundreds of them from a saved image, running stuff using JPPF, shutting them down when done). I'm pretty sure there's a command-line API you can use to trigger the startup. With a little creativity you could probably layer that onto a Travis script and have the Travis job drive the EC2 instance and send back a report (thus leveraging the PR/commit hooks).

Alternately (or integrated with this Travis setup), this could be something done periodically (monthly, or after a significant code overhaul -- I'd pay a few bucks once a month) but not for every single commit or PR. This is what we do with Coverity -- we have to push a commit to a specific branch to trigger the Coverity build/scan, which I typically do about monthly, and prior to releases.

Another thing to think about is that we've gotten along as far as we have relying on Travis for Mac and Linux, and they should have Windows soon. I think it's impractical for us to test on every single Linux distribution (we only recently have people running tests on Raspbian, but they haven't submitted PRs to fix the missing data).

Ultimately, we need a bigger team to actually contribute here! Just me running tests on VMs on my Mac can only get us so far... ultimately the best solution is to develop tests that enable crowdsourcing and make it easy for users on new (to us) OSes to run tests and generate expected output for their machine: more than pass/fail JUnit tests, but actual output that they can independently check vs. their other system.
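To make the "start them up, run, shut them down" idea concrete, here is a rough sketch using the AWS SDK for Java v2. The instance ID, the ten-minute placeholder wait, and how the tests actually get kicked off are all assumptions for illustration, not a working setup:

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.StartInstancesRequest;
import software.amazon.awssdk.services.ec2.model.StopInstancesRequest;

/**
 * Hypothetical sketch: boot a pre-provisioned test instance, let the test
 * run happen, then shut the instance down so billing stops.
 * Assumes the AWS SDK for Java v2 (software.amazon.awssdk:ec2) on the
 * classpath and credentials available in the environment.
 */
public class MetalRunner {

    public static void main(String[] args) throws Exception {
        String instanceId = args[0]; // e.g. the bare metal build agent's ID (placeholder)

        try (Ec2Client ec2 = Ec2Client.create()) {
            // Boot the already-provisioned instance from its stopped state
            ec2.startInstances(StartInstancesRequest.builder()
                    .instanceIds(instanceId).build());
            System.out.println("Started " + instanceId + ", waiting for the test run...");

            // Placeholder for the real work: in practice the CI job would wait
            // for the agent to come online, run `mvn test` on it over SSH or a
            // Jenkins agent connection, and collect results before continuing.
            Thread.sleep(10 * 60 * 1000L);

            // Always stop the instance so we only pay for the test window
            ec2.stopInstances(StopInstancesRequest.builder()
                    .instanceIds(instanceId).build());
            System.out.println("Stopped " + instanceId);
        }
    }
}
```

Wrapping the stop call in a finally-style cleanup (or a scheduled "kill anything still running" job) would guard against the leftover-instance problem mentioned later in this thread.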
I agree that this is probably the best way to handle it. I'm confident that Jenkins would work well for this use-case. If bare metal EC2 instances turn out to be too expensive, they can be swapped out for Fargate containers as a last resort. It's still going to be a lot of work though. I have experience managing AWS resources with Python, but not much in setting up Jenkins. Maybe @spyhunter99 could share their Jenkins configuration. Here's what would need to be accomplished:
Edit: looks like there's a plugin for managing EC2 instances, so the 3rd point should be even easier.
I have a fair amount of experience with Jenkins, having written the ANSI color plugin :) I've set up and run Jenkins with the aforementioned EC2 plugin multiple times, and it eventually got to a point where it worked somewhat decently. I found the whole thing fairly brittle: EC2 instances being left over and not properly shut down, updates failing to install, constantly some kind of small issue with it. If there's a team tending to it, it's fine.

For Windows, anyone here can contribute an AppVeyor setup. It's just like Travis. AppVeyor is used by JNA and ResourceLib, for example.
Maybe this issue should get converted into a list of OSes we actually want to run CI on, without prescribing a solution, and let whoever does the work first win? Sounds like there's a need for Raspberry Pi and Windows; what else?
This probably isn't attainable, but ideally every supported OS family would be tested. It's a waste for the average Java project, but OSHI is unique in that it has a closer-than-usual relationship with the hardware.

- x86_64
- armhf

Realistically, Windows should definitely be added soon via AppVeyor since it's free and simple.
In reference to @cilki's list above, I disagree with a focus on the "latest". We're a library that others expect to link to their own code and support a large variety of systems.
Punting this issue to the oshi5 project as I think the oshi 4.x test suite is sufficient!
So, I'd like to revisit this conversation. It's been about 15 months since this discussion and the landscape is a bit different now. @spyhunter99 brought up the idea of proposing Oshi as an Apache project. This process starts with the Apache Incubator and would bring some benefits. There are some drawbacks as well, and a bit of administrative overhead, and I figured it'd be good to actually discuss those. I'm intentionally replying to the thread in the open.

In the earlier conversation @dblock noted, and I agree in principle:
Taking a customer focus, usage of Oshi is expanding at a rapid pace (correlating with the 2019 license change to MIT). When this thread was started in May 2019, monthly downloads (from Maven Central) of

Part of the increase in usage is adoption among enterprise "customers". Prompted by some issues filed by a developer from AppDynamics, I discovered dependencies in a few other large cloud service monitoring providers, some already in use and others with open issues "considering" adoption. Thanks to our steady progress toward feature and platform parity with SIGAR, the all-important-and-scientific metric of GitHub stars tells the story that we're probably the new default open source Java system info platform. While part of me wants to brag about this, the other part of me knows that with great power comes great responsibility.

After completing the AIX port in 5.2, I decided to try to take a step back and get into "KTLO" (keep the lights on) mode, and was successful doing absolutely nothing for the month of September 2020. Alas, October 2020 has brought several new bug reports from customers. All welcome, of course, in the interest of bettering the product, but I've sensed a theme. I'm seeing more bugs related to processor/system configurations that are beyond my ability to test, including:
I'm still sort of trying to KTLO but am increasingly coming to depend on "community" help. There have been multiple instances where someone has mentioned in an issue/proposal that their company might be willing to contribute more, but none of those has come to fruition (yet), and part of me wonders if there is a perceived risk of spending corporate resources contributing to a "single maintainer project" (a.k.a. 'benevolent dictator') vs. an established community project. And this is where re-visiting the ASF conversation is relevant.

I've looked into the Incubator process and documentation quite a bit at this point, and think it's a reasonable path forward, although I'm still not sure it's the best one. I'd like to discuss this among the group that originally discussed this issue (@dblock, @spyhunter99, @cilki) and also involve @hazendaz (tenured guru/mentor/Maven advisor) and @tausiflife (who has been an excellent help recently with Linux challenges I couldn't handle and helped a lot with the AIX port) before making a decision going forward. I'll deal with whatever administrivia needs to be done, but I'm interested in feedback on the bigger question of whether it's a right fit.

Benefits of becoming an Apache project (via the incubator process):
Disadvantages of becoming an Apache project:
So, thoughts? Next step would be an email to an ASF list expressing interest and requesting assistance in preparing a proposal for incubation, but not taking that step if there are good reasons not to...
As far as I (the OG author) am concerned, @dbwiddis has my blessing to do whatever he feels is right by him. I would recommend hearing from other benevolent dictators who have gone through this process as a reference.
Since I have only recently worked on OSHI, I don't have its complete background. However, I understand that testing, communication, and requests for help have been more than a full-time job for @dbwiddis. As said, this will perhaps result in greater and more active participation from the community, considering the usage statistics of the project. The only thing I am not sure I understand is how moving to the ASF will help with testing; as I can see from this thread, there's a fair amount of concern around it. That being said, I think this is a great step moving forward towards the Apache License, and I am willing to help with the administrative tasks if required.
Definitely will do so. Many checkpoints along the way where I can stop the process.
See the earlier conversation history in this issue regarding Jenkins and the ability to access/test on more platforms. I am assuming @spyhunter99's offer to help set it up is still valid. :)
I'll do what I can. ASF INFRA has Jenkins and Buildbot, both of which can be used by any Apache project. There are builders for a variety of both vanilla and off-the-beaten-path operating systems. ASF projects cannot use GPL- or LGPL-licensed libraries as dependencies. There are a few other gotchas with that, but that's the biggest hurdle for some. Other projects can use ASF projects without much concern, though.
Yeah, I've looked into the licensing stuff. It won't be a problem, as we don't depend on any disallowed software (it's all MIT or AL2.0). I found a gotcha today with some StackOverflow code that's not allowed, so I rewrote it (all the AL2.0 code depending on us has not done its due diligence 😁). As I said, I think at least considering the steps/process is a useful exercise even if at the end of the day we keep the status quo.
Well, I've had a good few weeks to think hard on this and do a lot of research. I'm leaning against it at this point. Here's my reasoning:
So back to the original issue, @spyhunter99 what would it take to set up Jenkins on a Polarhome AIX server? :) |
Well, I have traditional CI for everything except AIX through GitHub Actions. (Solaris and FreeBSD use a VM under macOS.) For AIX, I've rigged it to just SSH into our Polarhome server and issue commands, currently just
From oshi/oshi#427
I can do it, and I can probably host it at my house. It's just a matter of finding suitable, inexpensive hardware and identifying what the test environments need to be. I think ideally we would want a few bare metal setups in certain cases (specifically for testing battery/power states), a Raspberry Pi, and VMs for the rest. Maybe we can set up a GoFundMe or something for getting hardware.
Since we'll know exactly what the hardware is (either real or virtual), we can try unit tests that only run on that specific environment and confirm exact values for the configuration. It would go a long way toward confirming accurate measurements and operating system compatibility.
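As a rough illustration of what such an environment-pinned test could look like (the `oshi.ci.profile` system property and the expected values are invented for this example, not part of OSHI's actual test suite):

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assume.assumeTrue;

import org.junit.Test;

import oshi.SystemInfo;
import oshi.hardware.CentralProcessor;

/**
 * Hypothetical sketch: a test that only runs on a CI node whose hardware is
 * known in advance, so exact values can be asserted instead of loose ranges.
 * The "oshi.ci.profile" property is an invented marker the build agent would set.
 */
public class KnownHardwareTest {

    private static final String PROFILE = System.getProperty("oshi.ci.profile", "");

    @Test
    public void raspberryPi3ReportsExpectedCpuTopology() {
        // Silently skipped everywhere except the Raspberry Pi 3 build agent
        assumeTrue("raspberrypi3".equals(PROFILE));

        CentralProcessor cpu = new SystemInfo().getHardware().getProcessor();

        // Exact values, because we control this machine's configuration:
        // a Pi 3 has four physical cores and no SMT.
        assertEquals(4, cpu.getPhysicalProcessorCount());
        assertEquals(4, cpu.getLogicalProcessorCount());
    }
}
```

Each build agent would set its own profile (e.g. `-Doshi.ci.profile=raspberrypi3` in the Maven invocation), so the same test suite runs everywhere and only the matching assertions activate.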
So if I host it, it will not be publicly accessible; however, it will have its own GitHub account and will add comments to pull requests based on test results (I think).
Finally, as an added bonus, I actually want to set up something like this for some of my other Android projects.
Another thought: if this code base is donated to the Apache Software Foundation, I know they have both a large Jenkins cluster and Buildbot configurations available with a variety of operating systems. There are a lot of benefits to it, and then I wouldn't have to manage a bunch of miscellaneous hardware.