[CINN]add variance op in frontend #71184
Conversation
Your PR was submitted successfully. Thank you for contributing to the open-source project!
```cpp
if (fusion_op.GetOperators()[0]->isa<cinn::dialect::VarianceOp>()) {
  return false;
}
```
Now that there is a kernel, this could fall back, right?
Performance would degrade after falling back.
```cpp
auto cinn_reduce = rewriter.Build<cinn::dialect::VarianceOp>(
    op->operand_source(0), axis, keepdim);
rewriter.ReplaceAllUsesWith(op.result(0), cinn_reduce.result(0));
rewriter.EraseOp(op);
```
This op shouldn't need a CINN operator mapping.
done
```cpp
void VarianceKernel(const Context& dev_ctx,
                    const DenseTensor& x,
                    const IntArray& dims,
                    bool keep_dim,
                    DenseTensor* out) {
  DenseTensor temp_mean = Mean<T, Context>(dev_ctx, x, dims, true);
  DenseTensor temp_differences = Subtract<T, Context>(dev_ctx, x, temp_mean);
  DenseTensor temp_pow =
      Multiply<T, Context>(dev_ctx, temp_differences, temp_differences);

  MeanKernel<T, Context>(dev_ctx, temp_pow, dims, keep_dim, out);
}
```
Move the kernel implementation into a .cc file.
done
```yaml
- op : variance
  args : (Tensor x, int64_t[] axis, bool keepdim)
  output : Tensor(out)
  infer_meta :
    func : ReduceInferMeta
    param : [x, axis, keepdim]
  kernel :
    func : frobenius_norm
    param : [x, axis, keepdim]
  interfaces : paddle::dialect::InferSymbolicShapeInterface
```
It is exactly the same as the PHI operator, so there's no need to add a CINN operator definition.
done. The earlier plan was to make axis a mutable (variadic) attribute; now that only int64_t is supported, the CINN op should be unnecessary.
```diff
@@ -431,6 +431,7 @@ CINN_REGISTER_HELPER(reduce_ops) {
 CINN_REGISTER_REDUCTION(reduce_sum, ReduceSum);
 CINN_REGISTER_REDUCTION(reduce_prod, ReduceProd);
+CINN_REGISTER_REDUCTION(variance, ReduceProd);
```
It would be best to create a corresponding op in CINN as well, so that at least the semantics map correctly.
That replacement will be done in the backend-rework PR, to be landed by @lshpku.
* [CINN]add variance op in frontend
* fix-comment
PR Category
CINN
PR Types
Others
Description
pcard-67164
This PR adds the variance op in the frontend.