[0-size Tensor No.57-58、60-61、63-66] Add 0-size Tensor support for fft2 #73042
Conversation
Your PR has been submitted successfully. Thank you for your contribution to this open-source project!
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files:
@@            Coverage Diff            @@
##            develop   #73042   +/-   ##
===========================================
  Coverage          ?   100.00%
===========================================
  Files             ?         1
  Lines             ?         4
  Branches          ?         0
===========================================
  Hits              ?         4
  Misses            ?         0
  Partials          ?         0
☔ View full report in Codecov by Sentry.
@@ -33,6 +33,11 @@ void FFTC2CKernel(const Context& ctx,
                    bool forward,
                    DenseTensor* out) {
  ctx.template Alloc<T>(out);
  if (x.numel() == 0) {
For these fft-related kernels, are the input and output shapes the same? If so, when x.numel() == 0 the output is also a 0-size Tensor, and a direct return should be enough, right? Or is there some other consideration behind calling Full here?
>>> scipy.fft.fft2(np.random.random([3, 0, 1, 2]), s=(1, 2), axes=(0, 1), norm='backward')
array([[[[0.-0.j, 0.-0.j]],
        [[0.-0.j, 0.-0.j]]]])
A call like this returns an array of zeros rather than an empty result, which is easy to miss, so I have added a note about it in the code comments.
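For comparison, the corresponding Paddle call that this PR has to handle would look roughly like the sketch below; paddle.fft.fft2 is the public API, and the expected shape simply mirrors the scipy behaviour above, so treat the exact output as an assumption rather than a quote from the test suite:
>>> import paddle
>>> x = paddle.zeros([3, 0, 1, 2])                               # 0-size input
>>> y = paddle.fft.fft2(x, s=(1, 2), axes=(0, 1), norm='backward')
>>> y.shape                                                      # s pads the empty axis, so the output is non-empty zeros
[1, 2, 1, 2]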
OK. For reference, here is FFTR2CInferMeta:
void FFTR2CInferMeta(const MetaTensor& x,
const std::vector<int64_t>& axes,
const std::string& normalization,
bool forward,
bool onesided,
MetaTensor* out,
MetaConfig config) {
PADDLE_ENFORCE_NOT_NULL(
out,
common::errors::InvalidArgument("Output of fft_r2c should not be null."));
const phi::DDim x_dim = x.dims();
// only ensure that fft axes' size greater than zero at runtime
  // they might be -1 to indicate unknown size at compile time
if (config.is_runtime) {
for (auto axis : axes) {
PADDLE_ENFORCE_GT(x_dim[axis],
0,
common::errors::InvalidArgument(
"Invalid fft n-point (%d).", x_dim[axis]));
}
}
out->set_layout(x.layout());
out->set_dtype(ToComplexType(x.dtype()));
if (!onesided) {
out->share_dims(x);
} else {
phi::DDim out_dim = x.dims();
const int last_fft_axis = static_cast<int>(axes.back());
const int64_t last_fft_dim_size = x_dim[last_fft_axis];
out_dim.at(last_fft_axis) = last_fft_dim_size / 2 + 1;
out->set_dims(out_dim);
}
}
For fft-related kernels, the input and output shapes are not necessarily the same.
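A quick shape check makes the point; numpy is used here only as a stand-in, since the onesided branch above follows the usual real-to-complex convention, and this snippet is illustrative rather than part of the PR:
>>> import numpy as np
>>> np.fft.rfft2(np.ones((4, 6))).shape   # onesided r2c: last axis becomes 6 // 2 + 1
(4, 4)
>>> np.fft.fft2(np.ones((4, 6))).shape    # two-sided c2c keeps the input shape
(4, 6)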
LGTM
PR Category
Execute Infrastructure
PR Types
Improvements
Description
[0-size Tensor No.57-58、60-61、63-66] Add 0-size Tensor support for fft2
Modified both the forward and backward kernels.
paddle/phi/infermeta/unary.cc: modified the equals-zero size check.
symbolic_shape: no corresponding code; searched for FftC2cOpInferSymbolicShape, FftC2rOpInferSymbolicShape, FftR2cOpInferSymbolicShape in
Paddle/paddle/fluid/pir/dialect/operator/interface/infer_symbolic_shape/unary_infer_sym.cc
Line 1348 in 6c675da
The CPU and GPU kernels share a common impl implementation.
PaddleAPITest: the cuda errors and paddle errors have been fixed; the remaining failures are torch errors.
Tested APIs: fft2, fftn, ifft2, ifftn, ihfft2, ihfftn, rfft2, rfftn (per-API test result screenshots not reproduced here)
In the unit test, the check that expected an error in test_zero_point was removed, so that irfft2 with s=None can pass.
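To illustrate the basic 0-size path this PR enables, here is a minimal sketch, assuming the default s=None so the empty batch dimension is simply carried through; it is not taken from the test suite:
>>> import paddle
>>> x = paddle.zeros([0, 3, 4], dtype='float32')   # empty batch dimension
>>> y = paddle.fft.fft2(x)                         # FFT over the last two axes; result is complex
>>> y.shape                                        # output stays 0-size
[0, 3, 4]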