Ke Wang (王可)
I am currently a research scientist/engineer at Adobe Inc., on Marc Levoy's computational photography team. Before that, I was a senior research engineer at Samsung Research America (SRA), MPI Lab.
I joined Adobe Inc. as a computer scientist on Marc Levoy's computational photography team!
Sep 21, 2023
Our MRI off-resonance correction paper (Physics-Informed Deep Learning Framework for MRI Off-Resonance Correction Trained with Noise Instead of Data) was accepted to NeurIPS 2023! Project led by my awesome collaborator Alfredo! The arXiv preprint and code will be available soon! [Paper]
Our paper High-fidelity Direct Contrast Synthesis from MR Fingerprinting was accepted by MRM and is now published online! Please check it out! [Paper]
Jun 1, 2023
I joined Samsung Research America (SRA) as a senior research engineer, working on real-world computational imaging and computer vision! Let's keep making an impact!
I will be serving as a reviewer for MICCAI 2023, NeurIPS 2023, SIGGRAPH Asia 2023, SIGGRAPH 2023, and ICLR 2023.
May 30, 2022
I presented our work on Rigorous Uncertainty Estimation for MRI Reconstruction at ISMRM 2022 as an oral presentation. The manuscript and abstract are available upon request.
Apr 30, 2022
Our UFLoss paper, titled High fidelity deep learning-based MRI reconstruction with instance-wise discriminative feature matching loss, was accepted by MRM and is now published online! Please check it out! [paper][talk][code]
Feb 20, 2022
Three abstracts (1 first-authored and 2 co-authored) were accepted by ISMRM 2022 as oral presentations!
Feb 1, 2022
Our Data Crimes paper, titled Implicit data crimes: Machine learning bias arising from misuse of public data, was accepted for publication in PNAS! More information and details about this paper are available on Efrat's website.
Sep 30, 2021
I presented our work on Memory-efficient Learning for High-dimensional MRI Reconstruction at MICCAI 2021. Date & Time: September 29th (Wednesday), 09:30 - 11:00 (UTC). Please check it out! [Paper][Poster][Video]
Magnetic Resonance Imaging (MRI) is an effective medical imaging modality, offering excellent soft-tissue contrast, versatile orientation capabilities, and no exposure to ionizing radiation. However, its inherent physical constraints make data acquisition slow and scan times long. Recently, deep learning (DL) has achieved notable success in reducing scan time by reconstructing high-quality MR images from under-sampled data, surpassing conventional non-learned approaches. Despite this progress, challenges such as hand-crafted loss functions, high computational costs, and limited training data remain. In this dissertation, I will present a series of projects focused on enhancing fidelity and efficiency in MRI reconstruction. I will first introduce a supervised learning method that synthesizes multi-contrast MR images from a single MR Fingerprinting (MRF) scan. Next, I will present a novel feature loss designed to preserve perceptual similarity, demonstrating its effectiveness for high-fidelity image reconstruction. Following that, I will touch upon memory-efficient learning for high-dimensional MRI reconstruction and present a novel framework for rigorous uncertainty estimation. Lastly, I will introduce a novel complex-valued representation tailored to tasks with limited training data.
ResoNet: a Physics-Informed DL Framework for Off-Resonance Correction in MRI Trained with Noise
Alfredo De Goyeneche, Shreya Ramachandran, Ke Wang, Ekin Karasan, and
3 more authors
In Thirty-seventh Conference on Neural Information Processing Systems, 2023
Magnetic Resonance Imaging (MRI) is a powerful medical imaging modality that offers diagnostic information without harmful ionizing radiation. Unlike optical imaging, MRI sequentially samples the spatial Fourier domain k-space of the image. Measurements are collected in multiple shots, or readouts, and in each shot, data along a smooth trajectory is sampled.
Conventional MRI data acquisition relies on sampling k-space row-by-row in short intervals, which is slow and inefficient. More efficient, non-Cartesian sampling trajectories (e.g., Spirals) use longer data readout intervals, but are more susceptible to magnetic field inhomogeneities, leading to off-resonance artifacts. Spiral trajectories cause off-resonance blurring in the image, and the mathematics of this blurring resembles that of optical blurring, where magnetic field variation corresponds to depth and readout duration to aperture size. Off-resonance blurring is a system issue with a physics-based, accurate forward model. We present a physics-informed deep learning framework for off-resonance correction in MRI, which is trained exclusively on synthetic, noise-like data with representative marginal statistics. Our approach allows for fat/water partial volume effects modeling and separation, and parallel imaging acceleration. Through end-to-end training using synthetic randomized data (i.e., images, coil sensitivities, field maps), we train the network to reverse off-resonance effects across diverse anatomies and contrasts without retraining. We demonstrate the effectiveness of our approach through results on phantom and in-vivo data. This work has the potential to facilitate the clinical adoption of non-Cartesian sampling trajectories, enabling efficient, rapid, and motion-robust MRI scans. Code is publicly available at: https://github.com/mikgroup/ResoNet
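As a rough illustration of the off-resonance forward model this abstract refers to, the minimal NumPy sketch below simulates a single-coil, single-shot acquisition in which each k-space sample accrues both Fourier-encoding phase and off-resonance phase proportional to its readout time. It is a brute-force discretization for intuition only, not the implementation in the linked repository; the function name and conventions are my own assumptions.

    import numpy as np

    def off_resonance_signal(image, field_map_hz, kx, ky, times):
        # image:        2D complex image m(r)
        # field_map_hz: 2D off-resonance map (Hz), same shape as image
        # kx, ky:       trajectory samples in cycles/FOV, one per time point
        # times:        readout times in seconds (longer readouts -> more blurring)
        ny, nx = image.shape
        y, x = np.meshgrid(np.arange(ny) - ny / 2, np.arange(nx) - nx / 2, indexing="ij")
        signal = np.zeros(len(times), dtype=complex)
        for i in range(len(times)):
            # Fourier-encoding phase plus off-resonance phase accrued up to time t_i
            phase = -2j * np.pi * (kx[i] * x / nx + ky[i] * y / ny
                                   + field_map_hz * times[i])
            signal[i] = np.sum(image * np.exp(phase))
        return signal

The longer the readout (larger entries in times), the more the off-resonance term dominates, which is exactly why efficient spiral readouts blur more than short Cartesian ones.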
Learning-based image harmonization techniques are usually trained to undo synthetic random global transformations applied to a masked foreground in a single ground truth photo. This simulated data does not model many of the important appearance mismatches (illumination, object boundaries, etc.) between foreground and background in real composites, leading to models that do not generalize well and cannot model complex local changes. We propose a new semi-supervised training strategy that addresses this problem and lets us learn complex local appearance harmonization from unpaired real composites, where foreground and background come from different images. Our model is fully parametric. It uses RGB curves to correct the global colors and tone and a shading map to model local variations. Our method outperforms previous work on established benchmarks and real composites, as shown in a user study, and processes high-resolution images interactively.
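To make the parametric model concrete, here is a minimal PyTorch sketch of how global per-channel tone curves and a local multiplicative shading map could be applied to the masked foreground of a composite. The function, its arguments, and the piecewise-linear curve lookup are illustrative assumptions, not the paper's code.

    import torch

    def apply_parametric_harmonization(composite, mask, curves, shading):
        # composite: (3, H, W) RGB composite in [0, 1]
        # mask:      (1, H, W) foreground mask
        # curves:    (3, K) sampled values of a monotone tone curve per channel
        # shading:   (1, H, W) multiplicative shading map for local variations
        K = curves.shape[1]
        idx = composite.clamp(0, 1) * (K - 1)
        lo = idx.floor().long().clamp(max=K - 2)
        frac = idx - lo.float()
        curved = torch.empty_like(composite)
        for c in range(3):  # piecewise-linear lookup of each channel through its curve
            curved[c] = torch.lerp(curves[c][lo[c]], curves[c][lo[c] + 1], frac[c])
        harmonized = curved * shading  # local shading on top of the global tone change
        return mask * harmonized + (1 - mask) * composite

Keeping the edit parametric (curves plus a shading map) is what makes the result interpretable and fast enough to run interactively on high-resolution images.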
High-fidelity direct contrast synthesis from magnetic resonance fingerprinting
Ke Wang, Mariya Doneva, Jakob Meineke, Thomas Amthor, and
5 more authors
Magnetic Resonance Fingerprinting (MRF) is an efficient quantitative MRI technique that can extract important tissue and system parameters such as T1, T2, B0, and B1 from a single scan. This property also makes it attractive for retrospectively synthesizing contrast-weighted images. In general, contrast-weighted images like T1-weighted, T2-weighted, etc., can be synthesized directly from parameter maps through spin-dynamics simulation (i.e., Bloch or Extended Phase Graph models). However, these approaches often exhibit artifacts due to imperfections in the mapping, the sequence modeling, and the data acquisition. Here we propose a supervised learning-based method that directly synthesizes contrast-weighted images from the MRF data without going through the quantitative mapping and spin-dynamics simulation. To implement our direct contrast synthesis (DCS) method, we deploy a conditional Generative Adversarial Network (GAN) framework and propose a multi-branch U-Net as the generator. The input MRF data are used to directly synthesize T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) images through supervised training on paired MRF and target spin echo-based contrast-weighted scans. In-vivo experiments demonstrate excellent image quality compared to simulation-based contrast synthesis and previous DCS methods, both visually and in terms of quantitative metrics. We also demonstrate cases where our trained model is able to mitigate in-flow and spiral off-resonance artifacts that are typically seen in MRF reconstructions and thus more faithfully represent conventional spin echo-based contrast-weighted images.
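For readers unfamiliar with conditional GAN training, the PyTorch sketch below shows a pix2pix-style pair of losses in which the discriminator is conditioned on the MRF input and the generator combines an adversarial term with a paired L1 term. The actual multi-branch U-Net generator and loss weighting in the paper differ; all names here are assumptions.

    import torch
    import torch.nn.functional as F

    def dcs_losses(G, D, mrf_frames, target, l1_weight=100.0):
        # mrf_frames: (B, C, H, W) MRF input (e.g. time frames or subspace coefficients)
        # target:     (B, 1, H, W) paired spin echo-based contrast-weighted image
        fake = G(mrf_frames)
        # The conditional discriminator sees (input, image) pairs
        d_real = D(torch.cat([mrf_frames, target], dim=1))
        d_fake = D(torch.cat([mrf_frames, fake.detach()], dim=1))
        d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
                  + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
        # Generator: fool the discriminator while staying close to the paired target
        g_adv = D(torch.cat([mrf_frames, fake], dim=1))
        g_loss = (F.binary_cross_entropy_with_logits(g_adv, torch.ones_like(g_adv))
                  + l1_weight * F.l1_loss(fake, target))
        return d_loss, g_loss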
High fidelity deep learning-based MRI reconstruction with instance-wise discriminative feature matching loss
Ke Wang, Jonathan I Tamir, Alfredo De Goyeneche, Uri Wollner, and
3 more authors
Purpose: To improve reconstruction fidelity of fine structures and textures in deep learning (DL) based reconstructions.
Methods: A novel patch-based Unsupervised Feature Loss (UFLoss) is proposed and incorporated into the training of DL-based reconstruction frameworks in order to preserve perceptual similarity and high-order statistics. The UFLoss provides instance-level discrimination by mapping similar instances to similar low-dimensional feature vectors and is trained without any human annotation. By adding an additional loss function on the low-dimensional feature space during training, the reconstruction frameworks from under-sampled or corrupted data can reproduce more realistic images that are closer to the original with finer textures, sharper edges, and improved overall image quality. The performance of the proposed UFLoss is demonstrated on unrolled networks for accelerated 2D and 3D knee MRI reconstruction with retrospective under-sampling. Quantitative metrics including NRMSE, SSIM, and our proposed UFLoss were used to evaluate the performance of the proposed method and compare it with others.
Results: In-vivo experiments indicate that adding the UFLoss encourages sharper edges and more faithful contrasts compared to traditional and learning-based methods with pure l2 loss. More detailed textures can be seen in both 2D and 3D knee MR images. Quantitative results indicate that reconstruction with UFLoss can provide comparable NRMSE and a higher SSIM while achieving a much lower UFLoss value.
Conclusion: We present UFLoss, a patch-based unsupervised learned feature loss, which allows DL-based reconstruction frameworks to be trained to recover more detailed textures, finer features, and sharper edges with higher overall image quality.
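The PyTorch sketch below illustrates one way a patch-based feature loss of this kind could be computed: overlapping patches from the reconstruction and the reference are embedded by a separately pretrained feature network and compared in the low-dimensional feature space. It is a simplified stand-in for the published UFLoss, with hypothetical names, and it omits how the feature network itself is trained with instance-level discrimination.

    import torch
    import torch.nn.functional as F

    def ufloss_sketch(recon, reference, feature_net, patch=32, stride=16):
        # recon, reference: (B, C, H, W) reconstructed and fully-sampled images
        # feature_net: maps a patch to a low-dimensional feature vector (assumed
        #              pretrained without human annotation)
        def to_patches(x):
            p = F.unfold(x, kernel_size=patch, stride=stride)   # (B, C*patch*patch, N)
            b, _, n = p.shape
            return p.transpose(1, 2).reshape(b * n, x.shape[1], patch, patch)

        f_recon = F.normalize(feature_net(to_patches(recon)), dim=1)
        f_ref = F.normalize(feature_net(to_patches(reference)), dim=1)
        # Penalize the distance between embeddings of corresponding patches;
        # in practice this term is added to a pixel-wise loss during training.
        return (f_recon - f_ref).pow(2).sum(dim=1).mean()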
Memory-efficient learning for high-dimensional MRI reconstruction
Ke Wang, Michael Kellman, Christopher M Sandino, Kevin Zhang, and
4 more authors
In Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part VI
Deep learning (DL) based unrolled reconstructions have shown state-of-the-art performance for under-sampled magnetic resonance imaging (MRI). Similar to compressed sensing, DL can leverage high-dimensional data (e.g. 3D, 2D+time, 3D+time) to further improve performance. However, network size and depth are currently limited by the GPU memory required for backpropagation. Here we use a memory-efficient learning (MEL) framework which favorably trades off storage with a manageable increase in computation during training. Using MEL with multi-dimensional data, we demonstrate improved image reconstruction performance for in-vivo 3D MRI and 2D+time cardiac cine MRI. MEL uses far less GPU memory while marginally increasing the training time, which enables new applications of DL to high-dimensional MRI.
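The storage-versus-compute trade-off described here can be illustrated with standard gradient checkpointing in PyTorch, as in the sketch below, where each unrolled iteration's activations are recomputed during the backward pass rather than stored. This is only an analogy for intuition, not necessarily the paper's exact mechanism, and the class and argument names are my own.

    import torch
    from torch.utils.checkpoint import checkpoint

    class UnrolledRecon(torch.nn.Module):
        # Unrolled reconstruction whose per-iteration activations are not kept;
        # they are recomputed in the backward pass, trading compute for GPU memory.
        def __init__(self, iteration_blocks):
            super().__init__()
            self.blocks = torch.nn.ModuleList(iteration_blocks)

        def forward(self, x):
            for block in self.blocks:
                x = checkpoint(block, x, use_reentrant=False)
            return x

With memory no longer scaling with the number of stored iterations, deeper unrolls and higher-dimensional inputs (3D, 2D+time) become feasible on a single GPU at the cost of a modest increase in training time.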
Rigorous Uncertainty Estimation for MRI Reconstruction
Ke Wang, Anastasios Angelopoulos, Alfredo De Goyeneche, Amit Kohli, and
4 more authors
Deep-learning (DL)-based MRI reconstructions have shown great potential to reduce scan time while maintaining diagnostic image quality. However, their adoption has been plagued with fears that the models will hallucinate or eliminate important anatomical features. To address this issue, we develop a framework to identify when and where a reconstruction model is producing potentially misleading results. Specifically, our framework produces confidence intervals at each pixel of a reconstruction image such that 95% of these intervals contain the true pixel value with high probability. In-vivo 2D knee and brain reconstruction results demonstrate the effectiveness of our proposed uncertainty estimation framework.
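The kind of guarantee described here is in the spirit of distribution-free (conformal-style) calibration. The NumPy sketch below shows one simple way heuristic per-pixel intervals could be rescaled on a held-out calibration set so that per-image pixel coverage holds with high probability; it is a rough illustration under my own assumptions, not the paper's actual procedure or proof.

    import numpy as np

    def calibrate_intervals(centers, radii, truths, alpha=0.05, delta=0.1):
        # centers, radii, truths: (N, H, W) arrays over a calibration set;
        # heuristic intervals are centers +/- radii. Find a single scale lam so
        # that, with probability roughly 1 - delta over calibration draws, at
        # least 1 - alpha of the pixels in an image fall inside centers +/- lam * radii.
        scores = np.abs(truths - centers) / (radii + 1e-8)      # scale needed per pixel
        per_image = np.quantile(scores.reshape(len(truths), -1), 1 - alpha, axis=1)
        n = len(per_image)
        rank = min(1.0, np.ceil((n + 1) * (1 - delta)) / n)     # conservative quantile
        lam = np.quantile(per_image, rank)
        return lam                                              # use centers +/- lam * radii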
Implicit data crimes: Machine learning bias arising from misuse of public data
Efrat Shimron, Jonathan I Tamir, Ke Wang, and Michael Lustig
Proceedings of the National Academy of Sciences, 2022
Although open databases are an important resource in the current deep learning (DL) era, they are sometimes used “off label”: Data published for one task are used to train algorithms for a different one. This work aims to highlight that this common practice may lead to biased, overly optimistic results. We demonstrate this phenomenon for inverse problem solvers and show how their biased performance stems from hidden data-processing pipelines. We describe two processing pipelines typical of open-access databases and study their effects on three well-established algorithms developed for MRI reconstruction: compressed sensing, dictionary learning, and DL. Our results demonstrate that all these algorithms yield systematically biased results when they are naively trained on seemingly appropriate data: The normalized rms error improves consistently with the extent of data processing, showing an artificial improvement of 25 to 48% in some cases. Because this phenomenon is not widely known, biased results sometimes are published as state of the art; we refer to that as implicit “data crimes.” This work hence aims to raise awareness regarding naive off-label usage of big data and reveal the vulnerability of modern inverse problem solvers to the resulting bias.
Non-Invasive Remote Temperature Monitoring Using Microwave-Induced Thermoacoustic Imaging
Hao Nan, Aidan Fitzpatrick, Ke Wang, and Amin Arbabian
In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
Non-invasive temperature monitoring of tissue at depth in real-time is critical to hyperthermia therapies such as high-intensity focused ultrasound. Knowledge of temperature allows for monitoring treatment as well as providing real-time feedback to adjust deposited power in order to maintain safe and effective temperatures. Microwave-induced thermoacoustic (TA) imaging, which combines the conductivity/dielectric contrast of microwave imaging with the resolution of ultrasound, shows potential for estimating temperature non-invasively in real-time by indirectly measuring the temperature dependent parameters from reconstructed images. In this work, we study the temperature dependent behavior of the generated pressure in the TA effect and experimentally demonstrate simultaneous imaging and temperature monitoring using TA imaging. The proof-of-concept experiments demonstrate millimeter spatial resolution while achieving degree-level accuracy.