
🔒 🤖 CI Update lock files for array-api CI build(s) 🔒 🤖 #31878

Open: scikit-learn-bot wants to merge 1 commit into main from auto-update-lock-files-array-api

Conversation

scikit-learn-bot (Contributor)

Update lock files.

Note

If the CI tasks fail, create a new branch based on this PR and add the required fixes to that branch.

scikit-learn-bot force-pushed the auto-update-lock-files-array-api branch from 0144d2f to 2d8f23c on August 4, 2025, 05:19

github-actions bot commented Aug 4, 2025

✔️ Linting Passed

All linting checks passed. Your pull request is in excellent shape! ☀️

Generated for commit: 2d8f23c. Link to the linter CI: here

@adrinjalali (Member)

O_o

ERROR: scikit_learn-1.8.dev0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl is not a supported wheel on this platform.

Have you seen this? @OmarManzoor @betatim @lesteve

https://conda.anaconda.org/conda-forge/linux-64/pcre2-10.45-hc749103_0.conda#b90bece58b4c2bf25969b70f3be42d25
https://conda.anaconda.org/conda-forge/linux-64/python-3.11.13-h9e4cc4f_0_cpython.conda#8c399445b6dc73eab839659e6c7b5ad1

Member

I haven't found out why yet, but Python is downgraded from 3.13 to 3.11 in the updated lock file. This is what's causing the CI to fail, since the wheel is built for Python 3.13.
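
For context, the failure is a plain tag mismatch: a wheel tagged cp313 can never be installed by a Python 3.11 interpreter. The sketch below reproduces the check pip performs, using the packaging library; it is an illustration only, not part of the CI:

```python
# Minimal illustration (not CI code): check whether the wheel's tags overlap
# with the tags supported by the interpreter running this script.
from packaging.tags import sys_tags
from packaging.utils import parse_wheel_filename

wheel_name = (
    "scikit_learn-1.8.dev0-cp313-cp313-"
    "manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl"
)
_, _, _, wheel_tags = parse_wheel_filename(wheel_name)

# Tags accepted by the current interpreter (Python 3.11 in the regenerated env).
supported_tags = set(sys_tags())

if wheel_tags.isdisjoint(supported_tags):
    # This branch is taken on Python 3.11, matching pip's
    # "... is not a supported wheel on this platform." error.
    print("wheel not installable with this interpreter")
```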

Member

I still get Python 3.13 when updating the lock file locally. Honestly, this is the kind of thing where I would wait until next Monday, because the problem may disappear by itself ...

Also in an ideal world we would use the same conda environment in the job that builds the wheel and the one that tests it ...

@betatim (Member), Aug 4, 2025

> Also in an ideal world we would use the same conda environment in the job that builds the wheel and the one that tests it ...

I think this is quite tricky because we use cibuildwheel, which in turn uses some build tools that set up their own environments. At least, I was discussing "frozen dependencies in CI jobs" in the context of cuml and was told that the jobs that build the wheels would need to use the same frozen versions, but that this is very difficult to do (pip can use --no-build-isolation, but there is nothing like that for building conda packages).

But, given that we hard wire Python 3.13 in the build job in the CUDA CI, could we make it so that producing the lockfile fails if it can't produce one that contains Python 3.13? Or is all this too much effort for a problem that occurs rarely, doesn't lead to silent failures and (hopefully) goes away by waiting a bit?

edit: I'm fine with waiting till next Monday to see if it fixes itself.
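
On the idea of making lock-file generation fail when it cannot keep Python 3.13, a rough sketch of such a guard is below. It assumes the explicit conda lock format quoted above (one package URL per line); the lock-file path is a placeholder, not the real file name:

```python
# Rough sketch of a guard for the lock-file update: fail if the locked Python
# version differs from the one hard-wired in the CUDA build job.
import re
import sys
from pathlib import Path

EXPECTED_PYTHON = "3.13"
# Placeholder path, not the actual lock-file name in the repo.
LOCK_FILE = Path("build_tools/github/cuda_array_api_linux-64_conda.lock")

# Explicit conda lock files list one package URL per line, e.g.
# .../linux-64/python-3.11.13-h9e4cc4f_0_cpython.conda#...
match = re.search(r"/python-(\d+\.\d+)\.\d+-", LOCK_FILE.read_text())
locked = match.group(1) if match else None

if locked != EXPECTED_PYTHON:
    sys.exit(
        f"Lock file pins Python {locked}, expected {EXPECTED_PYTHON}; "
        "refusing to update the lock file."
    )
```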

Member

My thinking right now is that, in principle, we don't have to use cibuildwheel. If we use the same conda lock file to create the env for building and for testing the wheel, I think we should be fine.

The right incantation for building the wheel would be something like:

python -m build . --no-isolation

Member

True, though I am not sure how much we'd gain. In the case of this PR we want things to fail, maybe with an easier-to-understand error message. I think that if we had used a lock file for building as well, things would have succeeded and we would have quietly switched to using a different Python version.

Member

Let's wait until next Monday and revisit if the problem is still there, hopefully not 🤞.

In case the problem is still there, one possible work-around would be to add a constraint for the CUDA build in build_tools/update_environment_and_lock_files.py to have python=3.13 (and add a comment in both places to be explicit that the Python version needs to be synchronized between update_environment_and_lock_files.py and cuda-ci.yml).
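
For illustration, the change in build_tools/update_environment_and_lock_files.py would presumably be a one-line constraint. The sketch below assumes the CUDA build is declared as a dict with a package-constraints mapping, which may not match the script's actual structure; the entry name and keys are illustrative only:

```python
# Hedged sketch only: the entry name and keys are assumptions about how the
# CUDA build is declared in build_tools/update_environment_and_lock_files.py.
cuda_array_api_build = {
    "name": "pylatest_conda_forge_cuda_array-api",  # hypothetical entry name
    "conda_dependencies": ["python", "numpy", "scipy", "array-api-strict"],
    "package_constraints": {
        # Keep in sync with the Python version hard-wired in cuda-ci.yml.
        "python": "3.13",
    },
}
```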
