Right now it's easy to run out of PV space for your /root/.mvnrepository/ directory.
So it'd be nice if we had a little function in the pipeline library that would find out the disk space and percentage free and log it to the build log (maybe with a nice big WARNING if it's, say, less than 20% free).
We could then include this function call in the standard pipelines / functions. e.g. for all Java projects we should check the disk before we start and after a release? Maybe at the end of the pipeline too (in case integration tests clog things up too much).
So maybe the function looks like...
```groovy
// in the pipeline library...
def checkDiskUsage(operation, folderName, folderPath) { ... }

// in the Jenkinsfile or pipeline library functions...
checkDiskUsage("Starting release", "maven repo", "/root/.mvnrepository")
....
checkDiskUsage("Completed release", "maven repo", "/root/.mvnrepository")
...
checkDiskUsage("Finished pipeline", "maven repo", "/root/.mvnrepository")
```
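A minimal sketch of what the library function itself could look like, assuming it runs where a `sh` step is available. The function name and parameters come from this issue; the `df -P` parsing and the 20% threshold are assumptions, not an existing API:

```groovy
// Hypothetical implementation sketch for the pipeline library.
def checkDiskUsage(operation, folderName, folderPath) {
    // `df -P` emits POSIX-format output: a header line, then
    // "<filesystem> <blocks> <used> <available> <use%> <mount>"
    def line = sh(script: "df -P ${folderPath} | tail -1", returnStdout: true).trim()
    def fields = line.split(/\s+/)
    def usedPercent = fields[4].replace('%', '').toInteger()
    def freePercent = 100 - usedPercent
    def message = "${operation}: ${folderName} (${folderPath}) has ${freePercent}% free"
    if (freePercent < 20) {
        // make low-disk situations hard to miss in the build log
        echo "WARNING: ${message} - consider cleaning up or growing the PV!"
    } else {
        echo message
    }
    return freePercent
}
```

Returning the free percentage would also let callers feed the same number into whatever metrics backend we settle on.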
We may want to push this data into Elasticsearch / Prometheus too, including more metadata like the namespace, job name, build number etc.
Then we could start building tools to visualise a user's disk usage over time and so forth.
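For the metrics side, a hedged sketch of shipping one measurement to Elasticsearch. Everything here is an assumption for illustration: the `reportDiskUsage` helper, the `disk-usage` index name, and the `ELASTICSEARCH_URL` environment variable are all hypothetical, and the env vars for namespace/job/build mirror what Jenkins on Kubernetes typically exposes:

```groovy
// Hypothetical sketch: POST a disk measurement to Elasticsearch so it can
// be graphed over time. Index name and ELASTICSEARCH_URL are assumptions.
def reportDiskUsage(folderName, folderPath, freePercent) {
    def doc = groovy.json.JsonOutput.toJson([
        timestamp  : new Date().format("yyyy-MM-dd'T'HH:mm:ss'Z'", TimeZone.getTimeZone('UTC')),
        namespace  : env.KUBERNETES_NAMESPACE ?: 'unknown',
        job        : env.JOB_NAME,
        build      : env.BUILD_NUMBER,
        folder     : folderName,
        path       : folderPath,
        freePercent: freePercent
    ])
    // naive quoting for a sketch; a real version should escape the payload
    sh "curl -s -XPOST -H 'Content-Type: application/json' ${env.ELASTICSEARCH_URL}/disk-usage/_doc -d '${doc}'"
}
```

For Prometheus the same data could instead go through a Pushgateway, but the document shape above is probably the simplest starting point for ad-hoc Kibana dashboards.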