Onnx conformance tests #21088
Conversation
TEST_P(Test_ONNX_conformance, Layer_Test)
{
    applyTestTag(CV_TEST_TAG_DNN_SKIP_ONNX_CONFORMANCE);
"Skip" tags should be applied with some condition only.
I think these tests should be disabled by default and triggered only if CI explicitly sets --test_tag_enable=dnn_skip_onnx_conformance (in some Custom build). How can I achieve that?
Test tags in OpenCV tests have a different philosophy...
Current OpenCV "accuracy" tests (without tags) always run and prevent regressions from being added to the existing codebase.
So:
0. The "enabled" set of launched tests should not be empty.
1. The "enabled" set of launched tests should be green.
2. Incoming patches should not make tests red.
3. Future patches may enable more test cases.
Next, "skip" tags disable tests and related code which is not supported (due to HW or SW dependencies) or broken.
Skip tags should not be applied to all tests at once:
applyTestTag(CV_TEST_TAG_DNN_SKIP_ONNX_CONFORMANCE);
So, we should not disable these tests by default.
Tags are designed for filtering of enabled test cases:
- --gtest_filter can do the same, but the parameter value would be too long. Also, its value must be updated somewhere (e.g. on CI), which is not very convenient.
- Tag-based filtering rules work at runtime, e.g. we can check HW requirements.
- Tag-based filtering rules work at runtime, e.g. we can check SW requirements, such as versions of used external libraries.
- "Skip" tags are a smart variant of GTest's DISABLED_ prefix.
As we have an all-in-one test function, we need to store the "skip / unsupported" cases somewhere.
I would propose dedicated files for these rules:
- "parser" skip rules (common rules for all backends): file test_onnx_conformance_onnx_layer_filter_parser.inl.hpp
- several backend-specific skip rules: e.g. files test_onnx_conformance_onnx_layer_filter__{opencv,halide,ngraph,vulcan,cuda}.inl.hpp
TEST_P(Test_ONNX_conformance, Layer_Test)
{
    std::string name = ...;
    Backend backend = ...;
    Target target = ...;

#include "test_onnx_conformance_onnx_layer_filter_parser.inl.hpp"

    if (backend == DNN_BACKEND_OPENCV)
    {
#include "test_onnx_conformance_onnx_layer_filter__opencv.inl.hpp"
    }
#ifdef HAVE_HALIDE
    else if (backend == DNN_BACKEND_HALIDE)
    {
#include "test_onnx_conformance_onnx_layer_filter__halide.inl.hpp"
    }
#endif
#ifdef HAVE_INF_ENGINE
    else if (backend == DNN_BACKEND_INFERENCE_ENGINE_NGRAPH)
    {
#include "test_onnx_conformance_onnx_layer_filter__ngraph.inl.hpp"
    }
#endif
#ifdef HAVE_VULKAN
    else if (backend == DNN_BACKEND_VKCOM)
    {
#include "test_onnx_conformance_onnx_filter__vulkan.inl.hpp"
    }
#endif
#ifdef HAVE_CUDA
    else if (backend == DNN_BACKEND_CUDA)
    {
#include "test_onnx_conformance_onnx__cuda.inl.hpp"
    }
#endif
    else
    {
        std::ostringstream ss;
        ss << "No test filter available for backend ";
        PrintTo(backend, &ss);
        ss << ". Run test by default";
        std::cout << ss.str() << std::endl;
    }

    ... test code with loading model, inputs, forward, check results ...
}
We can use the script which discovers the tests (and generates testConformanceConfig[]) to generate these stubs for test filtering:
else if (name == "test_abs")
{
// execute test
}
else if (name == "test_acos")
{
// execute test
}
...
So, parser modifications would be simple (test_onnx_conformance_onnx_filter_parser.inl.hpp):
#define SKIP_TAGS \
    CV_TEST_TAG_DNN_SKIP_ONNX_PARSER, \
    CV_TEST_TAG_DNN_SKIP_ONNX_CONFORMANCE
#define SKIP applyTestTag(SKIP_TAGS)
#define SKIP_(...) applyTestTag(__VA_ARGS__, SKIP_TAGS)

else if (name == "test_abs")
{
    // execute test
}
else if (name == "test_acos")
{
    SKIP; // optional comment with issue number
}
...
else
{
    ADD_FAILURE() << "Parser: unknown test='" << name << "'. Update filter configuration";
}

#undef SKIP_TAGS
#undef SKIP
#undef SKIP_
Backend filters would be similar, but they would have more SKIP* macros (a sketch of such a filter follows below):
- SKIP (all target devices): SKIP_(tag_target_skip)
- SKIP_CPU: if (target == DNN_TARGET_CPU) SKIP_(tag_target_skip)
- SKIP_NON_CPU: if (target != DNN_TARGET_CPU) SKIP_(tag_target_skip)
- SKIP_OPENCL
- SKIP_OPENCL_FP16
- CUDA-specific: SKIP_CUDA
where the variable tag_target_skip is initialized in the filter's prolog with one of the following values:
- CV_TEST_TAG_DNN_SKIP_CPU
- CV_TEST_TAG_DNN_SKIP_OPENCL
- CV_TEST_TAG_DNN_SKIP_OPENCL_FP16
- ...
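A minimal sketch of what such a backend filter could look like. The macro bodies, the tag_target_skip initialization, and the "test_acos" rule below are assumptions that only follow the naming in this proposal; they are not code from this PR:

    // Hypothetical prolog of e.g. test_onnx_conformance_onnx_layer_filter__opencv.inl.hpp
    #define SKIP_TAGS CV_TEST_TAG_DNN_SKIP_ONNX_CONFORMANCE
    #define SKIP_(...) applyTestTag(__VA_ARGS__, SKIP_TAGS)
    #define SKIP SKIP_(tag_target_skip)
    #define SKIP_CPU if (target == DNN_TARGET_CPU) SKIP_(tag_target_skip)
    #define SKIP_NON_CPU if (target != DNN_TARGET_CPU) SKIP_(tag_target_skip)
    #define SKIP_OPENCL if (target == DNN_TARGET_OPENCL) SKIP_(tag_target_skip)
    #define SKIP_OPENCL_FP16 if (target == DNN_TARGET_OPENCL_FP16) SKIP_(tag_target_skip)

    // tag_target_skip is chosen once in the prolog, based on the current target
    std::string tag_target_skip =
        (target == DNN_TARGET_OPENCL) ? CV_TEST_TAG_DNN_SKIP_OPENCL :
        (target == DNN_TARGET_OPENCL_FP16) ? CV_TEST_TAG_DNN_SKIP_OPENCL_FP16 :
        CV_TEST_TAG_DNN_SKIP_CPU;

    // Per-test rules then use the macros, e.g.:
    if (name == "test_acos")
    {
        SKIP_OPENCL_FP16; // hypothetical rule: skip only on the OpenCL FP16 target
    }
    // ... followed by #undef of the macros, as in the parser filter above ...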
TBD later:
- how to update filter rules (how should we know that we need updates?)
- how to run all tests?
- how to generate conformance reports? (can we reuse existing regular test runs? - we can, if our filters are up-to-date)
- checkBackend() improvement
I believe you could start with PARSER and OPENCV filters (CPU and/or OPENCL/OPENCL_FP16).
Other filters would be added later (or even generated!).
Note: CUDA filter may depend on CUDNN version.
Updated PR.
Also updated PR's description.
Check opencv_extra's PR for how to handle XML files (e.g., for reports).
Feel free to ask if you have any questions.
Replaced else if () { ... } -> if () { ... goto eof_label; } in filters
(resolves fatal error C1061: compiler limit: blocks nested too deeply).
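For illustration, a minimal sketch of the pattern described above (the label placement and the trailing unknown-name check are assumptions; SKIP is the macro from the filter files):

    // Flat 'if' blocks avoid MSVC's C1061 nesting limit that a long 'else if' chain can hit.
    if (name == "test_abs")
    {
        // no skip rule for this test
        goto eof_label;
    }
    if (name == "test_acos")
    {
        SKIP;
        goto eof_label;
    }
    // ... one block per test case ...
    ADD_FAILURE() << "Unknown test='" << name << "'. Update filter configuration";
    eof_label: ;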
Thank you!
std::string prefix = cv::format("conformance/node/%s", test_case.name);
input.name = test_case.name;
input.model_path = _tf(cv::format("%s/model.onnx", prefix.c_str()));
This approach terminates the whole test app if data is missing:
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(3.4.16-dev) /home/alalek/projects/opencv/dev/modules/ts/src/ts.cpp:1062: error: (-2:Unspecified error) OpenCV tests: Can't find required data file: dnn/onnx/conformance/node/test_abs/model.onnx in function 'findData'
As a result, zero tests are launched, including tests without external test data.
I will fix that with filter updates.
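For illustration only, a minimal sketch of one way to avoid the hard termination. It assumes the data lookup is moved into the test body and uses findDataFile's optional 'required' flag, so a missing file affects only that test case; this is not necessarily the fix applied in this PR:

    // Resolve the model path inside the test body rather than during parameter generation,
    // so a missing file is reported per test case instead of aborting the whole test binary.
    TEST_P(Test_ONNX_conformance, Layer_Test)
    {
        // 'name' is the test case name, as in the reviewer's sketch above
        const std::string relpath = cv::format("dnn/onnx/conformance/node/%s/model.onnx", name.c_str());
        const std::string model_path = findDataFile(relpath, false);  // false = not strictly required
        // ... load model, run forward pass, compare against reference outputs ...
    }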
Merge with extra: opencv/opencv_extra#937
Merged the version with std::set from asmorkalov#14

Usage:
--gtest_output=xml:dnn_onnx_conformance.xml
--test_tag_force=dnn_skip_onnx_conformance
--test_tag_force=dnn_skip_cpu
Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
Patch to opencv_extra has the same branch name.