[PIR] Support TensorRT in PIR #70652
Conversation
… transform_to_trt_program
Your PR has been submitted successfully. Thank you for your contribution to the open-source project!
Sorry to inform you that 78a617a's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.
paddle/phi/infermeta/unary.cc
Outdated
for (auto v : repeat_times_data) {
  std::cout << "v:" << v << std::endl;
}
Is this code necessary?
paddle/phi/infermeta/unary.cc
Outdated
std::cout << "repeat_times_data[i]:" << repeat_times_data[i] << std::endl;
std::cout << "x_dim_vec[i]:" << x_dim_vec[i] << std::endl;
If this is meant as log output, using GLOG is recommended.
std::cout << "numel:" << value.numel() << std::endl;
for (auto v : shape.GetData()) {
  std::cout << v << " ";
}
// std::cout << "shape:" << shape.GetData() << std::endl;
Same as above.
test/ir/inference/auto_scan_test.py
Outdated
# for (
#     pred_config,
#     nodes_num,
#     threshold,
# ) in self.sample_predictor_configs(prog_config):
#     if os.path.exists(self.cache_dir):
#         shutil.rmtree(self.cache_dir)

#     if isinstance(threshold, float):
#         atol = threshold
#         rtol = 1e-4
#     elif isinstance(threshold, (list, tuple)):
#         atol = threshold[0]
#         rtol = threshold[1]
#     else:
#         raise NotImplementedError

#     is_int8 = (
#         pred_config.tensorrt_precision_mode()
#         == paddle_infer.PrecisionType.Int8
#     )
#     if (not is_int8 and quant) or (
#         is_int8 and not (quant or explicit)
#     ):
#         continue

#     if explicit:
#         pred_config.enable_tensorrt_explicit_quantization()
#         self.assertTrue(
#             pred_config.tensorrt_explicit_quantization_enabled()
#         )

#     ignore_flag = False
#     for teller, reason, note in self.ignore_cases:
#         if teller(prog_config, pred_config):
#             ignore_flag = True
#             if reason == IgnoreReasons.TRT_NOT_IMPLEMENTED:
#                 self.ignore_log(
#                     f"[TRT_NOT_IMPLEMENTED] {note} vs {self.inference_config_str(pred_config)}"
#                 )
#             elif reason == IgnoreReasons.TRT_NOT_SUPPORT:
#                 self.ignore_log(
#                     f"[TRT_NOT_SUPPORT] {note} vs {self.inference_config_str(pred_config)}"
#                 )
#             else:
#                 raise NotImplementedError
#             break

#     if ignore_flag:
#         continue

#     try:
#         with paddle.pir_utils.OldIrGuard():
#             main_program_desc, util_program = create_fake_model(
#                 prog_config
#             )
#         model = main_program_desc.serialize_to_string()
#         place = paddle.base.CPUPlace()
#         executor = paddle.base.Executor(place)
#         scope = paddle.base.Scope()
#         with paddle.base.scope_guard(scope):
#             executor.run(util_program)
#             params = scope.find_var("out_var_0").get_bytes()
#         if quant:
#             model, params = create_quant_model(model, params)
#         feed_data = prog_config.get_feed_data()
#         pred_config_deserialize = paddle_infer.Config(
#             pred_config
#         )
#         trt_result = self.run_test_config(
#             model, params, prog_config, pred_config, feed_data
#         )
#         self.assert_tensors_near(
#             atol, rtol, trt_result, baseline_result
#         )
#         trt_engine_num, paddle_op_num = nodes_num
#         self.assert_op_size(trt_engine_num, paddle_op_num)
#         # deserialize test
#         if trt_engine_num > 0:
#             self.run_test_config(
#                 model,
#                 params,
#                 prog_config,
#                 pred_config_deserialize,
#                 feed_data,
#             )

#         self.success_log(f"program_config: {prog_config}")
#         self.success_log(
#             f"predictor_config: {self.inference_config_str(pred_config)}"
#         )
#     except Exception as e:
#         self.fail_log(f"program_config: {prog_config}")
#         self.fail_log(
#             f"predictor_config: {self.inference_config_str(pred_config)}"
#         )
#         self.fail_log(f"\033[1;31m ERROR INFO: {e}\033[0m")
#         all_passes = False

# self.assertTrue(all_passes)
Why is all of this code commented out? Is it still needed?
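For reference, the commented-out flow above turns the `threshold` yielded by `sample_predictor_configs` into the `atol`/`rtol` pair later passed to `assert_tensors_near`. A standalone sketch of just that unpacking step (the helper name is illustrative, not part of the original test):

```python
def unpack_threshold(threshold):
    """Mirror the threshold handling from the commented-out test loop above."""
    if isinstance(threshold, float):
        # A bare float is the absolute tolerance; rtol falls back to 1e-4.
        return threshold, 1e-4
    elif isinstance(threshold, (list, tuple)):
        # A pair is interpreted as (atol, rtol) explicitly.
        return threshold[0], threshold[1]
    else:
        raise NotImplementedError

print(unpack_threshold(1e-2))          # (0.01, 0.0001)
print(unpack_threshold((1e-2, 1e-3)))  # (0.01, 0.001)
```

This is why some tests below yield a single float (e.g. `1e-2`) as the threshold: the relative tolerance is implied.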
# "hard_swish",
# "hard_sigmoid",
# "leaky_relu",
Same as above.
# test for old ir
self.run_test()
# self.run_test()
Same as above.
@@ -163,7 +163,7 @@ def generate_trt_nodes_num(attrs, dynamic_shape):

def test(self):
    # test for old ir
    self.run_test()
    # self.run_test()
Same as above.
@@ -151,7 +151,7 @@ def generate_trt_nodes_num(attrs, dynamic_shape):

def test(self):
    # test for old ir
    self.run_test()
    # self.run_test()
Same as above.
# self.trt_param.precision = paddle_infer.PrecisionType.Half
# yield self.create_inference_config(), generate_trt_nodes_num(
#     attrs, True
# ), 1e-2  # atol=1e-2 while rtol is 1e-8
Same as above.
… check_tensorrt_engin_op
* support pir_trt
* fix (×25)
* update
* fix codestyle (×2)
* fix (×2)

Co-authored-by: Junjie Zhang <1356732652@qq.com>
PR Category
Execute Infrastructure
PR Types
Bug fixes
Description
Adapt PT within the old-IR unit-test framework to verify TensorRT support under the new IR, and fix multiple unit tests.
pcard-67164