On Ubuntu:
sudo apt-get install ffmpeg libpng-dev libjpeg-dev libzip-dev
On openSUSE:
sudo zypper install libpng16-devel libjpeg62-devel libzip-devel ffmpeg
Installation from git clone
git clone https://github.com/SciNim/flambeau
cd flambeau
nimble install or nimble develop
Note that both install and develop download and build libtorch, so the first run may take a while.
Torchvision can now be built if desired:
nimble build_torchvision
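To verify that the installation works, a minimal smoke test might look like the sketch below (the file name is arbitrary and the top-level flambeau import path is an assumption; the C++ backend is required, hence nim cpp):

# smoke_test.nim -- compiling this links against the downloaded libtorch
import flambeau    # assumed top-level module of the package

echo "Flambeau imported; build and link succeeded."

Compile and run it with nim cpp -r smoke_test.nim.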
Caveats
As this library is still under heavy development, some constructs are a
bit brittle.
If you wish to return a RawTensor (the wrapped Torch Tensor type)
from a procedure, you need to annotate the procedure with the
{.noInit.} pragma, like so:
proc foo(x: float): RawTensor {.noInit.} = ...
Otherwise, you will get a segmentation fault due to the implicit
initialization of the RawTensor object.
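Spelled out as a complete (if trivial) procedure, a sketch could look like this (passThrough is a hypothetical name, the import path is an assumption, and any expression yielding a RawTensor can stand in for the body):

import flambeau/flambeau_raw   # assumed import path exposing RawTensor

proc passThrough(t: RawTensor): RawTensor {.noInit.} =
  # {.noInit.} disables the implicit initialization of `result`,
  # which would otherwise segfault for RawTensor.
  result = t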
Note: you can use the {.push.} and {.pop.} pragmas at the top level of
your code if you wish to write multiple procedures returning
RawTensor without adding the pragma to each one.
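A minimal sketch of that pattern (makeA and makeB are hypothetical procedures with placeholder bodies, and the import path is an assumption):

import flambeau/flambeau_raw   # assumed import path exposing RawTensor

{.push noInit.}   # applies noInit to every routine declared until the matching pop

proc makeA(t: RawTensor): RawTensor =
  result = t      # placeholder body; any RawTensor expression works

proc makeB(t: RawTensor): RawTensor =
  result = t

{.pop.}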
Note 2: In theory, the {.requiresInit.} pragma should prevent the
implicit initialization of the RawTensor type; in practice, it does not
solve the issue at this time.
CUDA support
By default, the libtorch build downloaded by this package includes
CUDA support, but Nim programs using Flambeau are compiled without it.
This is controlled by the cuda compile-time define; to enable CUDA
support, compile with:
nim cpp -d:cuda <foo>
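If you prefer not to pass the flag on the command line, the same define can be set from a NimScript configuration file placed next to your main module (this is standard Nim configuration, nothing Flambeau-specific):

# config.nims
switch("define", "cuda")   # equivalent to passing -d:cuda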
Limitations
Compared to NumPy and Arraymancer, Flambeau inherits the following PyTorch limitations:
No string tensors.
No support for negative steps in tensor slices (e.g. a[::-1]).