Releases: luxonis/depthai-core
Release v3.0.0
DepthAI v3.0.0 release is out 🎉
We’re proud to announce that a new major revision of the DepthAI library is now out of the release-candidate stage and fully released as v3.0.0!
Simplified API
We've simplified and unified the API to lower the barrier to entry and make development more intuitive. Core tasks like setting up camera streams, running neural networks, and building spatial pipelines are now streamlined with fewer lines of code.
We have removed many deprecated functions and simplified node constructors, creating a cleaner, more consistent, and easier-to-use library for newcomers as well as more experienced users.
Refreshed nodes
We've revisited some of the most fundamental nodes to make them more powerful and flexible.
- Camera
- ImageManip
New nodes
DepthAI v3 introduces new high-level nodes that simplify working with higher-level concepts (RGBD point clouds, SLAM), as well as a node that performs dynamic calibration of stereo cameras.
- RGBD: This node makes it easy to work with synchronized and aligned color and depth data. It features an "autocreate" mode that automatically constructs the required input pipeline, delivering a ready-to-use RGBD message or a colored point cloud (see the sketch after this list).
- VSLAM: We've added SLAM and VIO as host nodes, enabling advanced spatial AI applications. These are currently available on Linux and macOS and are in early preview.
- DynamicCalibration: This node performs dynamic calibration of stereo cameras, allowing you to achieve accurate depth perception even under high thermal and mechanical stress applied to the device.
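Below is a minimal Python sketch of the RGBD autocreate flow mentioned above. The `build(True)` autocreate flag and the `pcl` output name are assumptions based on these notes and the bundled RGBD examples; check those examples for the exact API.

```python
import depthai as dai

# Sketch only: let the RGBD node construct its own color + depth inputs via
# autocreate and read a colored point cloud on the host. The build(True) flag
# and the `pcl` output name are assumptions, not the canonical example.
with dai.Pipeline() as pipeline:
    rgbd = pipeline.create(dai.node.RGBD).build(True)   # True -> autocreate the input pipeline
    pclQueue = rgbd.pcl.createOutputQueue()

    pipeline.start()
    while pipeline.isRunning():
        pointCloud = pclQueue.get()          # point cloud message with per-point color
        print("points:", len(pointCloud.getPoints()))
```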
Support for host nodes
DepthAI v3 now supports custom host nodes, allowing you to run parts of your pipeline on the host with custom logic while keeping the rest on-device. Whether you need to run complex post-processing in Python or integrate with external libraries, it’s all part of the same unified pipeline. See the Python and C++ examples.
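As a rough sketch of the idea, the snippet below defines a Python host node that logs every frame produced by an on-device Camera node. It assumes the bindings expose dai.node.HostNode with link_args() and a process() callback, as the referenced host-node examples do; exact method names may differ between releases.

```python
import time
import depthai as dai

# Sketch of a custom Python host node living in the same pipeline as
# device-side nodes. link_args()/process() are assumed from the host-node
# examples; treat the exact signatures as assumptions.
class FrameLogger(dai.node.HostNode):
    def build(self, frameOutput):
        self.link_args(frameOutput)          # route the device output into process()
        return self

    def process(self, frame):
        # Runs on the host for every frame received from the device.
        print("frame", frame.getSequenceNum(), frame.getWidth(), "x", frame.getHeight())

with dai.Pipeline() as pipeline:
    cam = pipeline.create(dai.node.Camera).build()
    pipeline.create(FrameLogger).build(cam.requestOutput((640, 400)))

    pipeline.start()
    while pipeline.isRunning():
        time.sleep(0.1)                      # the host node does its work in the background
```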
Visualizer
Debugging and development just got easier with our new integrated visualizer. You can now visualize camera streams, neural network outputs, and detection overlays in real-time. The visualizer is also able to display the pipeline graph, making it a great tool for understanding and debugging your application's data flow.
See the Python and C++ examples.
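For orientation, here is a hedged sketch of publishing a stream to the visualizer, assuming it is exposed through dai.RemoteConnection with addTopic()/registerPipeline() as in the linked examples; the HTTP port and topic name below are illustrative.

```python
import depthai as dai

# Sketch only: publish a camera stream to the integrated visualizer.
# dai.RemoteConnection, addTopic() and registerPipeline() are assumed from
# the visualizer examples; port and topic name are illustrative.
visualizer = dai.RemoteConnection(httpPort=8082)

with dai.Pipeline() as pipeline:
    cam = pipeline.create(dai.node.Camera).build()
    visualizer.addTopic("video", cam.requestOutput((1280, 720)))

    pipeline.start()
    visualizer.registerPipeline(pipeline)    # also exposes the pipeline graph view

    while pipeline.isRunning():
        if visualizer.waitKey(1) == ord("q"):
            break
```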
A lot of nice-to-have features:
- Auto reconnect: If a device has a flaky connection and crashes, the library will now automatically try to reconnect instead of exiting, making your applications more resilient.
- Unified Coordinate System: We’ve standardized all coordinate systems. Camera, IMU, and spatial data now use the RDF (Right-Down-Forward) convention, eliminating inconsistencies when fusing data from multiple sensors.
- Native Model Zoo Support: Take advantage of the new Luxonis Model Zoo, which integrates natively into v3. Loading and deploying models is now seamless, with no boilerplate required (see the sketch after this list).
- Remapping points between different frames: We've greatly improved the developer experience for cases where detections from one image in the pipeline (for example, the NN input) need to be remapped to another image that is still at full resolution; this works by internally tracking all image operations applied to the streams. See the Python and C++ examples.
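The sketch below shows the intended zero-boilerplate flow for the Model Zoo item above, under the assumption that DetectionNetwork.build() accepts a camera node plus a dai.NNModelDescription and that the model slug shown is published in the zoo; substitute a real slug from the model zoo listing.

```python
import depthai as dai

# Sketch: pull a detection model straight from the Luxonis Model Zoo.
# The build() overload and the model slug are assumptions; pick a slug that
# actually exists in the zoo listing.
with dai.Pipeline() as pipeline:
    cam = pipeline.create(dai.node.Camera).build()
    nn = pipeline.create(dai.node.DetectionNetwork).build(
        cam, dai.NNModelDescription("luxonis/yolov6-nano:r2-coco-512x288")
    )
    detQueue = nn.out.createOutputQueue()

    pipeline.start()
    while pipeline.isRunning():
        dets = detQueue.get().detections
        print(f"{len(dets)} detections in this frame")
```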
Support for RVC4 devices
DepthAI v3 brings support for both RVC2 and RVC4 devices, with a unified codebase that runs on both platforms. This allows you to prototype on one and deploy on another without code changes in most cases.
Migration from v2.x version of the library
We've prepared a porting guide to speed up the migration here.
Known issues
- On release flavors of the Luxonis OS on RVC4, the camera stack can get stuck in a bad state where cameras do not restart streaming, requiring a device reboot. It is recommended to use -debug versions of the OS until this is resolved.
What's changed since v3.0.0-rc.4
New features
- DynamicCalibration node
- Add support for a new revision of OAK-Thermal (R8)
- Add host implementation to ImageAlign, enabling higher FPS in pipelines that are partly running on the host
- Add a depthai-core-example project reference in README.md
- Add ImgTransformations tracking to ObjectTracker
Bug fixes and stability improvements
- Fix a crash on RVC2 when a device is partially calibrated
- Fix pink streams on small resolutions on OAK-D[-Pro]-Wides with the OV9728 sensor
- Fix a regression for replay that caused the first set of frames to be invalid
- Fix the rotation of the IMU rotation vector on RVC2
- Fix RVC4 Camera not being able to stream without calibration
- Fix a rare startup bug for ImageAlign on RVC4 where DSP failed to load the required module
- Fix an edge case in ImageManip node on RVC4 failing to resize to desired resolution(s)
Misc
- Add a way to set the sensor type (color/mono) on the Camera node on RVC4 with .setSensorType(), allowing quick switching and testing for OV9[2/7]82 sensors (see the sketch after this list)
- Improve ToF decoding performance on RVC2 to allow 30 FPS decoding
- Add ImgTransformations tracking in case of no calibration for a single source stream
- Add tests for CalibrationHandler
- Move the Python utilities to the root of the repository
- Add a test tag for the OAK4-S and make a test run for the OAK4-S on CI
- Combine the Python wheels (all Python versions per architecture in one wheel) for a smaller size footprint on PyPI
- Improve the responsiveness of Python bindings to interrupts
- Improve the performance of the ToF node and examples; they now run at 30 FPS
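As referenced in the .setSensorType() item above, here is a hedged sketch of forcing the sensor type on an RVC4 Camera node; the exact call order around build() and the enum values are assumptions to verify against the current API.

```python
import depthai as dai

# Sketch: treat an OV9x82 module as mono on RVC4 for quick testing.
# The call order around build() and the CameraSensorType enum values are
# assumptions; verify against the current Camera node API.
with dai.Pipeline() as pipeline:
    cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
    cam.setSensorType(dai.CameraSensorType.MONO)     # or dai.CameraSensorType.COLOR

    q = cam.requestOutput((640, 400)).createOutputQueue()
    pipeline.start()
    frame = q.get()
    print("got", frame.getWidth(), "x", frame.getHeight(), "frame")
```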
Release v3.0.0-rc.4
4adc1df
Features
- Added ObjectTracker node on RVC4 which can also run as a host node.
- Based on OC-Sort algorithm
- Improved performance compared to the RVC2 implementation
- Examples
- Added support for runtime calibration changes on RVC4
- Support for undistorted outputs in Camera node on RVC4
- Added ToF Filtering
- Greatly reduces noise and improves quality
- Filtering is enabled by default on the ToF node
- The filters currently run on the host
- The filtering can also be used independently with the new ImageFilters node
- Example
Bug fixes
- Fixed an issue where using OV9728 at resolutions smaller than 640x400 forced a mono-only sensor config on RVC2
- Stabilized internal camera clock and monotonic clock syncing
- Fixed the Camera->IMU extrinsics setting in BasaltVIO node
- Fixed an edge case where the dai.Device constructor could get stuck indefinitely if the device lost connection at the wrong time; an internal timeout was added
Misc
- Removed the deprecated imageIn input on DetectionParser which has been replaced by using ImgTransformations
- Skipped file locking for Zoo model downloads when filesystem is read-only
- getCalibration always returns the current active calibration, even when not set by setCalibration
- Changed the RGBD autocreate logic to use ToF depth on OAK-ToF instead of stereo.
- Added ToF RGB align example
- Added a convenience getNNArchive getter to NeuralNetwork node
RVC4 OS compatibility
- Integration tested on Luxonis OS 1.6.0-debug, 1.14.1-debug and 1.15.0-debug
Known issues / caveats:
- On release flavors of the OS on RVC4, the camera stack can get stuck in a bad state where cameras don't start streaming again and a reboot is needed
- It is recommended to use -debug versions of the OS until this is resolved
Release v3.0.0-rc.3
5883dab
Features
- Optimized XLink memory usage on RVC2 allowing for larger pipelines without running out of memory
- Added support for the latest revision of the OAK-ToF (R4)
Bug fixes
- Fixed an edge-case overflow in ToF decoding
- Fixed a regression on DSP post-processing in Stereo node on RVC4
- Addressed the camera timestamp drift on RVC4
- Fixed a CalibrationHandler.getCameraExtrinsics() regression by flipping the matrix multiplication order
- Fixed a possible race condition in ImageManip in LOW_POWER mode on RVC4
Misc
- Added missing bindings for RotatedRect in Script node on RVC2
- Switched to standardized access for EEPROM on RVC4 platform
- Switched to std::filesystem::path from dai::Path throughout the codebase
- Added unique input names by default, making multi-input host nodes work without the need to explicitly name inputs/outputs
- Added examples for EdgeDetector and SystemLogger (RVC2 only)
- Added filesystem locks for Zoo model downloading for multi-process usage
- Added support for parallel Debug and Release builds on Windows (previous limitation due to vcpkg)
RVC4 OS compatibility
- Integration tested on Luxonis OS 1.2, 1.6.0-debug and 1.14.1-debug
Known issues / caveats:
- On release flavors of the OS on RVC4, the camera stack can get stuck in a bad state where cameras don't start streaming again and a reboot is needed
- It is recommended to use -debug versions of the OS until this is resolved
Release v3.0.0-rc.2
DepthAI v3.0.0 release candidate is out 🎉
We’re proud to announce that a new major revision of the DepthAI library is now in the release-candidate stage and ready to be used.
Simplified API
We've simplified and unified the API to lower the barrier to entry and make development more intuitive. Core tasks like setting up camera streams, running neural networks, and building spatial pipelines are now streamlined with fewer lines of code.
We have removed many deprecated functions and simplified node constructors, creating a cleaner, more consistent, and easier-to-use library for newcomers as well as more experienced users.
Refreshed nodes
We've revisited some of the most fundamental nodes to make them more powerful and flexible.
- Camera
- ImageManip
New nodes
DepthAI v3 introduces new high-level nodes that simplify working with higher-level concepts (RGBD point clouds, SLAM).
- RGBD: This node makes it easy to work with synchronized and aligned color and depth data. It features an "autocreate" mode that automatically constructs the required input pipeline, delivering a ready-to-use RGBD message or a colored point cloud.
- VSLAM: We've added SLAM and VIO as host nodes, enabling advanced spatial AI applications. These are currently available on Linux and macOS and are in early preview.
Support for host nodes
DepthAI v3 now supports custom host nodes, allowing you to run parts of your pipeline on the host with custom logic while keeping the rest on-device. Whether you need to run complex post-processing in Python or integrate with external libraries, it’s all part of the same unified pipeline. See the Python and C++ examples.
Visualizer
Debugging and development just got easier with our new integrated visualizer. You can now visualize camera streams, neural network outputs, and detection overlays in real-time. The visualizer is also able to display the pipeline graph, making it a great tool for understanding and debugging your application's data flow.
See the Python and C++ examples.
A lot of nice-to-have features:
- Auto reconnect: If a device has a flaky connection and crashes, the library will now automatically try to reconnect instead of exiting, making your applications more resilient.
- Unified Coordinate System: We’ve standardized all coordinate systems. Camera, IMU, and spatial data now use the RDF (Right-Down-Forward) convention, eliminating inconsistencies when fusing data from multiple sensors.
- Native Model Zoo Support: Take advantage of the new Luxonis Model Zoo, which integrates natively into v3. Loading and deploying models is now seamless, with no boilerplate required.
- Remapping points between different frames: We've much improved the developer experience for cases where detections from one image in the pipeline (for example, the NN input) need to be remapped to another image that is still at full resolution; this works by internally tracking all image operations applied to the streams. See Python and C++ examples.
Support for RVC4 devices
DepthAI v3 brings support for both RVC2 and RVC4 devices, with a unified codebase that runs on both platforms. This allows you to prototype on one and deploy on another without code changes in most cases.
Migration from v2.x version of the library
We've prepared a porting guide to speed up the migration here.
Known issues
- On release flavors of the Luxonis OS on RVC4, the camera stack can get stuck in a bad state where cameras do not restart streaming, requiring a device reboot. It is recommended to use -debug versions of the OS until this is resolved.
Release v2.30.0
Features
- Add RVC4 discovery to point users to the v3 version of the library for OAK4 devices
- Add support for a new VCM enabling autofocus on new IMX378 CCMs
Bug fixes
- Fix an edge case in ImageManip to make https://github.com/geaxgx/depthai_hand_tracker run in edge mode again
- Fix an edge case when sending MessageGroup from host to device and using more than 4 messages
- Fix a bug where standalone logging would cause a memory leak
Release v2.29.0
d6a37a5
Features
- Add the ability to change the calibration on the device at runtime with the new dai::Device.setCalibration() method and to retrieve it with dai::Device.getCalibration() (see the sketch after this list)
- New StereoDepth presets: DEFAULT, FACE, HIGH_DETAIL, ROBOTICS
- Multiple camera improvements (more details in #972):
- Expose more downsampling modes when picking a lower-than-native resolution
- Expose more binning modes when binning is picked on IMX582/586 (sum and avg)
- HDR on IMX582/586
- Option to bypass 3A so that manual exposure/ISO settings take effect faster
- Initial support for new Sony 4K Starvis sensors: IMX678 and IMX715
- Option to set the main camera to drive auto-exposure and auto-white-balance in multi-camera configurations
- Improved StereoDepth filtering and an option to set a custom order of filters
- Disparity is first scaled to 13 bits before filtering, which makes the filters more effective.
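As referenced in the runtime-calibration item above, a short sketch of the read-modify-apply flow follows; it assumes the Python bindings mirror the C++ getCalibration()/setCalibration() methods on dai.Device.

```python
import depthai as dai

# Sketch of the runtime calibration flow: read the active calibration,
# adjust it with CalibrationHandler setters, and apply it without flashing.
# Assumes the Python bindings mirror dai::Device::get/setCalibration().
with dai.Device() as device:
    calib = device.getCalibration()                      # currently active calibration
    intrinsics = calib.getCameraIntrinsics(dai.CameraBoardSocket.CAM_A)
    print("CAM_A intrinsics:", intrinsics)

    # ... tweak `calib` here (e.g. after an external recalibration step) ...

    device.setCalibration(calib)                         # applied at runtime, EEPROM untouched
```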
Misc
- Remove false reports on crashes that happened on device destruction
- Add getWidth() and getHeight() API to EncodedFrame
Release v2.28.0
12158a5
Features
- Changed the automatic crashdump collection to be always on unless DEPTHAI_DISABLE_CRASHDUMP_COLLECTION is set.
- Added the DEPTHAI_ENABLE_ANALYTICS_COLLECTION environment variable - when set, analytics data (pipeline schema) is sent to Luxonis and used to further improve the library.
- Undistort both outputs of ToF by default.
- Added support for YoloV10
Bug fixes
- Fix the Camera node to correctly allocate resources for undistortion
- Fix the StereoDepth node when decimation filter and depth alignment are used
- Fix host timestamps of thermal frames to be synced
Misc
- Updated XLink to support clangd and shared libraries on Windows
- Removed a custom assert so that a crash dump is always produced, improving the UX of the automatic crashdump collection
- Increased watchdog priority on device side to improve stability during high load
Release v2.27.0
729e478
Features
- Automatic crash dump collection when DEPTHAI_ENABLE_FEEDBACK_CRASHDUMP is set.
Bug fixes
- Fix PointCloud generation when depth comes out of the ImageAlign node
- Fix a potential IMX214 (OAK Lite) crash on init with more complex pipelines
- Fix a rare crash if a camera stream is closed after 1 second while AF is running
Release v2.26.0
9048745
Features
- ImageAlign node
- it can align depth to any other sensor on the device - works for ToF too.
- it can align two sensors with a static depth (default at infinity), useful for thermal-rgb alignment
- Cast node
- Cast NNData message to ImgFrame
- Useful in case apps need to feed outputs from the NeuralNetwork node into nodes that only accept ImgFrame
- Full ToF support
- Running live at 30 FPS
- Measuring range of 20cm - 6m
- < 1% error across the range
- Support for ToF in spatial nodes
- Add an option to limit bandwidth over XLink with setXLinkRateLimit(int maxRateBytesPerSecond, int burstSize, int delayUs) (see the sketch below)
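A hedged sketch of the bandwidth limiter follows, assuming it is exposed on the device object in Python with the same arguments as the C++ signature quoted above; the numeric values are purely illustrative.

```python
import depthai as dai

# Sketch: cap XLink bandwidth to roughly 50 MiB/s. Assumes the limiter is
# exposed on the Device object in Python, mirroring the C++ signature
# setXLinkRateLimit(maxRateBytesPerSecond, burstSize, delayUs);
# the numbers below are illustrative only.
pipeline = dai.Pipeline()
# ... create camera/encoder nodes and XLinkOut outputs here ...

with dai.Device(pipeline) as device:
    device.setXLinkRateLimit(50 * 1024 * 1024, 1024, 10)
    # streams now share at most ~50 MiB/s over the link
```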
Stability improvements and bug fixes
- Improved PoE stability on reboots - eliminates the case where a power cycle of the device was sometimes needed
- NOTE: This requires flashing the factory bootloader - by running the flash script or using the device manager
- Improved runtime stability of heavy pipelines by increasing priority of the cameras in the NoC
- Improved ImageManip stability
- Improved XLink communication to be able to detect memory corruption and avoid it
- Fix a bug where stereo rectification was inaccurate when the calibration data didn't contain a direct link between the two inputs
- Relevant for custom setups on FFC devices
Misc
- Improve numerical stability of the rectification algorithm
- Improves stereo quality on wide FOV sensors
Release v2.25.1
c21bdd3
Features
- Added DeviceBootloader::getFlashedVersion to retrieve the bootloader version that is flashed on the device.
Bug Fixes
- Fixed fsync on OAK-D-SR.
- Fixed boot issue on OAK-D-SR-POE and OAK-T.
- Fixed compilation failures in some cases caused by problems with jsoncpp. (#980)