Releases: explosion/thinc
v8.3.6: Support Python 3.13
This release adds support for Python 3.13. To do this, we now require Pydantic >= 2.0 and have updated compilation to use Cython 3.0. This required an update to the blis package that is not binary compatible with earlier releases, but thinc itself should not have any binary backwards-compatibility issues.
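The dependency changes above might look something like this in a project's build configuration (an illustrative sketch, not thinc's actual pyproject.toml; everything except the pydantic >= 2.0 and Cython 3.0 requirements mentioned above is an assumption):

```toml
# Illustrative fragment only -- bounds other than pydantic>=2.0 and
# cython>=3.0 are assumptions, not thinc's real configuration.
[build-system]
requires = ["setuptools", "cython>=3.0", "numpy", "blis"]

[project]
dependencies = ["pydantic>=2.0"]
```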
v8.3.4: Update Blis pin to revert to known-good v0.7
Previous releases have used releases of our blis package that vendor newer releases of the upstream blis library. Unfortunately, these newer releases have had intermittent crashes on Windows that we haven't been able to track down.
I've therefore released v1.2 of the blis package, which goes back to the known-good v0.7 release of the vendored blis code that we were using before. This release updates the version pin to use it.
It took a surprisingly long time to get v0.7 of blis to compile, due to conflicts on Windows. I regret the delay.
v8.3.3: Fix Blis crashes, widen numpy pin
- Update blis pin to v1.1. This updates the vendored blis code to 1.1, which should fix crashes from the previously vendored v0.9 code on Windows.
- Widen the numpy pin, allowing versions across v1 and v2. Previously I had thought that if I built against numpy v2, I couldn't also have v1 as a runtime dependency. This is actually incorrect, so we can widen the pin.
- Set a flag when loading PyTorch models to improve loading safety.
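A widened runtime pin of the kind described above might look like this (an illustrative requirements fragment; the exact bounds are assumptions, not thinc's real pins):

```
# Illustrative requirements fragment -- exact bounds are assumptions.
# Built against numpy v2, but installable alongside numpy v1 at runtime:
numpy>=1.19.0,<3.0.0
```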
v8.3.2: Fix regression to torch training, update ARM dependency
- Fix regression to torch training introduced in v8.3.1
- Restore MacOS ARM wheels, which were missing from previous builds
- Fix compatibility with thinc-apple-ops
v8.3.1: Fix torch deprecation warning
torch.cuda.amp is deprecated as of PyTorch 2.4. This release updates the PyTorch shim (shims/pytorch.py) to use torch.amp.autocast instead of torch.cuda.amp.autocast.
Thanks to @Atlogit for the patch.
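The change itself is small; here is a minimal before/after sketch (assuming a generic model; the device-selection logic here is illustrative and not thinc's exact shim code):

```python
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(3, 4)

# Old, deprecated since PyTorch 2.4:
#   with torch.cuda.amp.autocast():
#       y = model(x)

# New device-agnostic API: the device_type argument replaces the
# CUDA-specific namespace. Falling back to CPU lets the sketch run anywhere.
device_type = "cuda" if torch.cuda.is_available() else "cpu"
with torch.amp.autocast(device_type):
    y = model(x)
```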
v9.1.1: Restore wheels for MacOS ARM 64
Previously we used a complicated build process with self-hosted runners to build wheels for platforms GitHub Actions did not support. GitHub Actions has been adding ARM support recently, so we've simplified the CI process to rely on it exclusively.
This release adds back support for MacOS ARM64 wheels that were missing from the previous release. Linux ARM wheels are still pending, as Linux ARM architectures are currently only supported for private repos. Cross-compilation with QEMU is possible in theory, but in practice the build timed out after several hours.
v9.1.0: Depend on numpy 2.0.0
Numpy is a build dependency of Thinc, and numpy 2.0 is not binary compatible with numpy v1 (fair enough). This means we can't have a version that's compatible across numpy v1 and numpy v2.
This release updates v9 by pinning to numpy 2.0, and builds against it. No other changes are made, so that we have paired versions that only differ in their dependencies.
v8.3.0: Depend on numpy 2.0
Numpy is a build dependency of Thinc, and numpy 2.0 is not binary compatible with numpy v1 (fair enough). This means we can't have a version that's compatible across numpy v1 and numpy v2.
This release updates the pins to numpy 2.0 and builds against it. No other changes are made, so that we have paired versions that only differ in their dependencies.
v8.2.5: Restrict numpy pin to <2.0.0
Numpy v2.0 isn't binary compatible with v1 (understandably). We build against numpy, so we need to restrict the pin.
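In requirements terms, the restriction named in the release title is simply (illustrative fragment; any lower bound would be an assumption, so none is shown):

```
# Keep both build and runtime on the numpy v1 series:
numpy<2.0.0
```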