PyMUSAS, the Python Multilingual Ucrel Semantic Analysis System, is a semantic tagging framework that contains several different semantic taggers: rule based, neural network, and a hybrid of the two. All but the neural network tagger can identify and tag Multi Word Expressions (MWEs). The taggers can support any semantic tagset; however, the tagset we have concentrated on, and for which we have released pre-configured spaCy components, is the Ucrel Semantic Analysis System (USAS).
- 📚 Usage Guides - What the package is, tutorials, how to guides, and explanations.
- 🔎 API Reference - The docstrings of the library, with minimum working examples.
- 🚀 Roadmap
PyMUSAS rule based taggers currently support 11 different languages through pre-configured spaCy components that can be downloaded; each language has its own guide on how to tag text with the rule based tagger using PyMUSAS. The table below shows each supported language, whether the model for that language supports Multi Word Expression (MWE) identification and tagging (all languages support token level tagging by default), and the size of the model:
| Language (BCP 47 language code) | MWE Support | Disk Space (MB) |
|---|---|---|
| Mandarin Chinese (cmn) | ✔️ | 1.28 |
| Danish (da) | ✔️ | 0.85 |
| Dutch, Flemish (nl) | ❌ | 0.15 |
| English (en) | ✔️ | 0.86 |
| Finnish (fi) | ❌ | 0.64 |
| French (fr) | ❌ | 0.08 |
| Indonesian (id) | ❌ | 0.24 |
| Italian (it) | ✔️ | 0.50 |
| Portuguese (pt) | ✔️ | 0.27 |
| Spanish, Castilian (es) | ✔️ | 0.26 |
| Welsh (cy) | ✔️ | 1.10 |
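To give an intuition for what rule based, token level semantic tagging involves, here is a minimal, self-contained sketch. This is not the PyMUSAS API: the `lexicon` entries and the `tag_tokens` helper are invented for illustration. Each `(word, POS)` pair is looked up in a single-word lexicon mapping to a ranked list of USAS-style tags, falling back to `Z99` (unmatched), as is the USAS convention:

```python
# Toy illustration of rule based, token level semantic tagging.
# The lexicon below is a made-up fragment; real USAS lexicons are far larger
# and the real PyMUSAS lookup rules are more sophisticated.
lexicon = {
    ("river", "noun"): ["W3/M4", "N5+"],   # invented tag lists for illustration
    ("bank", "noun"): ["W3/M4", "I1/H1"],
    ("run", "verb"): ["M1"],
}

def tag_tokens(tokens: list[tuple[str, str]]) -> list[list[str]]:
    """Tag (word, POS) pairs; unmatched tokens get the USAS 'Z99' tag."""
    tags = []
    for word, pos in tokens:
        tags.append(lexicon.get((word.lower(), pos), ["Z99"]))
    return tags

print(tag_tokens([("River", "noun"), ("flows", "verb")]))
# → [['W3/M4', 'N5+'], ['Z99']]
```

In the real taggers the lexicon lookup is one of a sequence of rules (e.g. trying the lemma if the token form does not match), but the core idea of ranked tag lists per lexical entry is the same.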
We also have 4 different neural taggers: 2 support English only and the other 2 are highly multilingual. Lastly, we have the hybrid taggers, which require both rule based and neural tagger resources and therefore support all languages that the rule based taggers support.
For more information on the different taggers and their language support, see the introduction documentation page.
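MWE identification, which the table above flags per language, can be sketched as a greedy longest-match over a lexicon of multi-token entries. The `mwe_lexicon`, its tags, and the `find_mwes` helper below are invented for this sketch; the real PyMUSAS rule based tagger is more sophisticated (for example, it supports wildcard MWE templates):

```python
# Toy illustration of Multi Word Expression (MWE) identification via
# greedy longest match. Entries and tags are invented for this sketch.
mwe_lexicon = {
    ("ski", "boot"): ["B5"],
    ("river", "bank"): ["W3"],
}
MAX_MWE_LEN = 2  # longest entry in this toy lexicon

def find_mwes(tokens: list[str]) -> list[tuple[int, int, list[str]]]:
    """Return (start, end, tags) spans, taking the longest match at each position."""
    spans = []
    i = 0
    while i < len(tokens):
        matched = False
        for length in range(MAX_MWE_LEN, 1, -1):  # try longest first
            candidate = tuple(t.lower() for t in tokens[i:i + length])
            if candidate in mwe_lexicon:
                spans.append((i, i + length, mwe_lexicon[candidate]))
                i += length
                matched = True
                break
        if not matched:
            i += 1
    return spans

print(find_mwes(["The", "ski", "boot", "fits"]))
# → [(1, 3, ['B5'])]
```

Tokens covered by a matched span would then all receive the MWE's tags, while remaining tokens fall back to token level tagging.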
PyMUSAS can be installed on all operating systems and supports Python versions >= 3.10 and < 3.15. To install, run:
```shell
pip install pymusas
```

If using uv:

```shell
uv add pymusas
```

If you want to use the Neural Network / Transformer models then you will need to install the `neural` extra like so:

```shell
pip install pymusas[neural]
```

or for uv:

```shell
uv add pymusas[neural]
```

When installing the `neural` extra we use the default version of PyTorch for your Operating System (OS): for Linux this is likely to be the CUDA version, and for all other OSs it will be the CPU version. If you would like to use a different version of torch, either install it before installing pymusas or add the package index like so:

```shell
uv add --index-strategy unsafe-best-match --index https://download.pytorch.org/whl/cu130 pymusas[neural]
```

In this example we are downloading torch built for CUDA version 13.
Note that pymusas does not require the GPU version of spaCy (`spacy[cuda12x]`) in order to run with a custom accelerator such as CUDA, but it does support the GPU version of spaCy in case you are using it.
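If you are unsure which PyTorch build ended up installed alongside the `neural` extra, a quick check like the following can help. The `torch_backend` helper name is ours, but the `torch` attributes it reads (`torch.cuda.is_available()` and `torch.version.cuda`) are standard PyTorch:

```python
def torch_backend() -> str:
    """Report which accelerator the installed torch build would use."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        # torch.version.cuda is the CUDA version the wheel was built against.
        return f"cuda {torch.version.cuda}"
    return "cpu"

print(torch_backend())
```

On a Linux machine with a working Nvidia driver and a CUDA wheel this prints something like `cuda 13.0`; on a CPU-only install it prints `cpu`.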
You can either use the dev container with your favourite editor, e.g. VSCode, or you can create your setup locally; below we demonstrate both. Note that in both cases we will be installing the CPU version and not the GPU version.
Both setups share the same tools:
- uv for Python packaging and development
- node for building and serving the documentation.
- make (OPTIONAL) for automation of tasks, not strictly required but makes life easier.
A dev container uses a Docker container to create the required development environment; the Dockerfile we use for this dev container can be found at ./.devcontainer/Dockerfile. Running it locally requires Docker to be installed. You can also run it in a cloud based code editor; for a list of supported editors/cloud editors see the following webpage.
To run for the first time in a local VSCode editor (a slightly more detailed guide is available on the VSCode website):
- Ensure docker is running.
- Ensure the VSCode Dev Containers extension is installed in your VSCode editor.
- Open the command palette (`CMD + SHIFT + P`) and then select `Dev Containers: Rebuild and Reopen in Container`.
You should now have everything you need to develop: uv, node, npx, yarn, make, and, for VSCode, various extensions such as Pylance.
If you have any trouble, see the VSCode website.
To run locally, first ensure you have the following tools installed:
- uv for Python packaging and development (version 0.9.6).
- node using `nvm` with `yarn` (version 24). After installing this, run `cd docs && corepack use yarn`; it installs the correct version of `yarn`.
- make (OPTIONAL) for automation of tasks; not strictly required but makes life easier.
  - Ubuntu: `apt-get install make`
  - Mac: the Xcode command line tools include `make`, else you can use brew.
  - Windows: various solutions are proposed in this blog post on how to install on Windows, including `Cygwin` and `Windows Subsystem for Linux`.
When developing on the project you will want to install the Python package locally in editable mode with all the extra requirements. This can be done like so:
```shell
uv sync --all-extras
```

This code base uses isort, flake8, and mypy to ensure that the format of the code is consistent and that it contains type hints. The isort and mypy settings can be found within ./pyproject.toml and the flake8 settings can be found in ./.flake8. To run these linters:
```shell
make lint
```

To run the unit tests with code coverage:

```shell
make tests
```

To run the doc tests, which ensure that examples within the documentation run as expected (the coverage results of these tests will also be reported):

```shell
make doc-tests
```

To run all the tests (unit, documentation, and functional tests) with coverage that takes all of these tests into account:

```shell
make full-coverage-tests
```

Note that the functional tests run within this make command build the pymusas package and then use the built package to test that the output of the taggers is as expected.
NOTE: we do not expect contributors to run these tests; the UCREL team can run them as part of the pull request before we merge into the main branch.
The GPU tests are the same tests as run in make full-coverage-tests, but some of those tests are skipped when we request the model to run in GPU mode, which is why we have this Docker image. The image assumes you have an Nvidia GPU and an Nvidia driver that supports CUDA 12.
These tests also allow us to check that the codebase can be used with the GPU version of spaCy, `spacy[cuda12x]`.
As we do not have GPU infrastructure on the CI pipeline, we run the GPU tests locally using the following Docker container (note that the 0.1.0 version number of the container is not meaningful at the moment):
```shell
docker build -t pymusas-gpu:0.1.0 -f ./tests/gpu-docker.dockerfile .
```

And then we can run the tests like so:

```shell
docker run --gpus all --shm-size 4g --rm pymusas-gpu:0.1.0
```

Note that at the moment only 2 errors should occur when running these tests: `tests/unit_tests/spacy_api/test_spacy_api_utils.py ..EE`. This is currently expected and we hope to resolve it in the future; all other tests are expected to pass.
See the benchmarks directory for the different benchmarks that we currently run.
The default or recommended Python version is shown in [.python-version](./.python-version) (currently 3.12). This can be changed using the uv command:
```shell
uv python pin
# uv python pin 3.13
```

If you would like to develop/test the code within this repository in a cloud hosted environment, this can be done through My-Binder for free. This environment is designed for trying out the code in the repository rather than for contributing to the code base. To get to this environment, click on the "launch binder" badge at the top of this README. It will create a JupyterLab environment for you on My-Binder where you can run and change the code in this repository. Note that when in this environment please install an editable version of pymusas by opening a terminal and running `pip install -e .`
PyMUSAS is an open-source project that has been created and funded by the University Centre for Computer Corpus Research on Language (UCREL) at Lancaster University. For more information on who has contributed to this code base see the contributions page.