Add Flatpak aarch64 builds (Part 1: build) #9979
base: master
Conversation
Force-pushed from b764a6b to d8c7fb0
@@ -28,25 +29,30 @@ jobs:
case "${GITHUB_EVENT_NAME}" in
  pull_request)
    config_data=('codesign:false' 'notarize:false' 'package:false' 'config:RelWithDebInfo')
    if gh pr view ${{ github.event.number }} --json labels \
    config_data=('codesign:false' 'notarize:false' 'package:false' 'config:RelWithDebInfo' 'flatpakArch:["x86_64"]')
I'd rather have this step create a bespoke matrix variable instead of putting this in `config_data`, as described in the GitHub Actions documentation.

I'm fine with `flatpakArch` being set up as a bespoke output variable depending on the GitHub event.
Assigning a whole matrix looks unnecessary: making the JSON more complex (an array inside an object) when there is only one variable does not seem worth it. I was aware of this way of setting up a matrix, but for a single variable it looks like too much.
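For illustration, a minimal sketch of what such a bespoke output could look like, assuming a hypothetical step id and output name (not this PR's actual code):

```yaml
jobs:
  check-event:
    runs-on: ubuntu-22.04
    outputs:
      flatpakArch: ${{ steps.setup.outputs.flatpakArch }}
    steps:
      - name: Set up build flags
        id: setup
        run: |
          case "${GITHUB_EVENT_NAME}" in
            pull_request)
              # x86_64 only on pull requests
              echo 'flatpakArch=["x86_64"]' >> "$GITHUB_OUTPUT" ;;
            *)
              echo 'flatpakArch=["x86_64","aarch64"]' >> "$GITHUB_OUTPUT" ;;
          esac
```

A downstream job could then consume it with `fromJSON(needs.check-event.outputs.flatpakArch)` as the matrix value for the architecture.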
  | jq -e -r '.labels[] | select(.name == "Seeking Testers")' > /dev/null; then
  config_data[0]='codesign:true'
  config_data[2]='package:true'
fi
if echo "${label_data}" \
I'm not really happy about introducing a bespoke label just for this, but code-wise it's fine.
I could work on making the aarch64 build happen on PRs when a cache is available, with a label to force the build without a cache.
But I would prefer to know how well/fast it runs on master before enabling it on every PR (when a cache is available).
Note that the label name can be discussed.
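As a rough sketch of that idea (the label name and output variable are placeholders, since the actual name was still under discussion):

```yaml
      - name: Check for the aarch64 label
        id: labels
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          flatpak_arch='["x86_64"]'
          if gh pr view ${{ github.event.number }} --json labels \
            | jq -e -r '.labels[] | select(.name == "Build aarch64 Flatpak")' > /dev/null; then
            flatpak_arch='["x86_64","aarch64"]'
          fi
          echo "flatpakArch=${flatpak_arch}" >> "$GITHUB_OUTPUT"
```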
Changes seem fine overall (apart from specific review comments), though I'm somewhat unsure about the added complexity. Also, how does this impact our cache usage? The Flatpak cache is the largest line item in our cache usage and I wonder how aarch64 builds will impact this.
Force-pushed from 2fcbb74 to f8041da
https://github.com/tytan652/obs-studio/actions/runs/7170036033/job/19532684838#step:9:5 Looks like it will be almost the same size as x86_64, so 2 × ~2 GB in total.
Force-pushed from b8377ed to a9ef18c
Are there any updates to this?
None of the Intel runtimes support non-x86_64 arches
Reusable workflows can have a matrix applied to the whole workflow, which cleanly works around a matrix job not having outputs per iteration (see the sketch below)
This allows working around the 6-hour time limit per job, which is needed to build Flatpak with QEMU emulation.
Requires a label to be enabled on pull requests
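A hedged sketch of that pattern, with a made-up workflow path and input name: applying a matrix to a job that calls a reusable workflow runs the whole called workflow once per architecture, so each iteration effectively gets its own set of jobs and outputs.

```yaml
jobs:
  flatpak-build:
    strategy:
      matrix:
        arch: [x86_64, aarch64]
    # The matrix applies to the whole called workflow, so every job
    # inside flatpak-build.yaml runs once per architecture.
    uses: ./.github/workflows/flatpak-build.yaml
    with:
      arch: ${{ matrix.arch }}
```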
Force-pushed from a9ef18c to 0a74df6
Will be refactored after #10934 has been discussed
Description
Adds aarch64 support to the manifest and to the CI, using a different approach from #8105.
Important
The publish workflow will be worked on once this PR's build design is approved.
Caution
This PR does not mean that Raspberry Pis (or similar devices) will be supported; as with x86_64 machines, not all of them can handle OBS Studio.
About aarch64 on CI:
Building through QEMU emulation is really slow, so to avoid hitting the 6-hour job timeout, the original job is split in two.
The first job builds the dependencies (4-5 h) and uploads them as an artifact; if the cache already exists, this build is skipped.
The second job builds CEF and OBS Studio (2-3 h without cache), relying on the cache, or on the artifact from the first job if no cache is available.
aarch64 builds are not enabled by default on the main workflow for pull requests.
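For illustration only, a simplified sketch of the two-job split, with made-up job names, cache keys, and build scripts (not the PR's actual workflow):

```yaml
jobs:
  flatpak-deps-aarch64:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - name: Restore dependency cache
        id: deps-cache
        uses: actions/cache@v4
        with:
          path: .flatpak-builder
          key: flatpak-deps-aarch64-${{ hashFiles('build-aux/*.json') }}
      - name: Build dependencies under QEMU emulation (4-5 h)
        if: steps.deps-cache.outputs.cache-hit != 'true'
        run: ./build-deps.sh aarch64   # placeholder for the real build step
      - name: Upload dependencies as an artifact
        if: steps.deps-cache.outputs.cache-hit != 'true'
        uses: actions/upload-artifact@v4
        with:
          name: flatpak-deps-aarch64
          path: .flatpak-builder

  flatpak-build-aarch64:
    needs: flatpak-deps-aarch64
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - name: Restore dependency cache
        id: deps-cache
        uses: actions/cache@v4
        with:
          path: .flatpak-builder
          key: flatpak-deps-aarch64-${{ hashFiles('build-aux/*.json') }}
      - name: Fall back to the artifact from the first job
        if: steps.deps-cache.outputs.cache-hit != 'true'
        uses: actions/download-artifact@v4
        with:
          name: flatpak-deps-aarch64
          path: .flatpak-builder
      - name: Build CEF and OBS Studio (2-3 h without cache)
        run: ./build-obs.sh aarch64    # placeholder for the real build step
```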
Motivation and Context
Publish Flatpak aarch64 builds.
How Has This Been Tested?
CI with a PR on my fork: tytan652#16 (aarch64 artifacts available)
Types of changes
Checklist: