Autoencoding T1 using MRzero for simultaneous sequence optimization and neural network training
Introduction: Previously, we proposed MRzero, a supervised learning approach that automatically generates MR sequences from scratch without providing sequence programming rules [1]. In the present work, we develop an autoencoder for T1 by performing a joint optimization of sequence parameters and a neural network (NN) using MRzero.

Subjects/methods: The fully differentiable MRI pipeline is simulated end-to-end with Bloch parameters as input and T1 as target. We use known operator learning [2] in the reconstruction to reduce the number of trainable NN parameters by keeping the adjoint formalism [1] as a known operator in the image reconstruction. The T1 training dataset consists of ten T1 maps with matrix size 32 × 32. For each target sample, a non-zero PD rectangle of matrix size 16 × 16 is placed at a varying spatial location, with PD, T1, T2 and B0 assigned randomly per voxel, resulting in a total of 2560 training samples. A three-hidden-layer multilayer perceptron is used for T1 quantification. The MR sequence is based on a 180° inversion-prepared 2D FLASH sequence with matrix size 32 × 32, TR = 15 ms, TE = 8 ms, FA = 5°, repeated 6 times with varying TI and Trec. All TI and Trec times are initialized with 0 and optimized together with the NN parameters to find the best sequence for T1 mapping. Additionally, a penalty on longer times is applied to enforce shorter sequences. The optimization process (Fig. 1) interleaves sequence and NN optimization every 50 and 5000 iterations, respectively; in total, 500 iterations of sequence optimization are performed. The simultaneously optimized sequence parameters and trained NN are then applied at a higher resolution (matrix size 126 × 126) with parallel imaging (GRAPPA acceleration factor 3) for in vivo measurements at 3 T.

Results/discussion: The T1 map of a healthy subject generated by the final optimized sequence is displayed in Fig. 2. Figure 3 shows the different stages of sequence optimization. The acquired T1 values of CSF, white matter and grey matter at later iterations match literature values at 3 T well [3]. A standard inversion recovery sequence was used as reference. The obtained maps agree well with the reference, while the acquisition time was reduced from 63.3 s to 19.2 s. Optimized TI and Trec times range from 0.5 s to 1.8 s and from 0.5 s to 1.1 s, respectively. The simultaneous sequence optimization and NN training were performed solely on synthetic data at low resolution, yet inference at higher resolution on in vivo data provided high-quality T1 maps. Preliminary results at low resolution were shown in [1]. The T1 autoencoder is a proof of concept that can be extended to multiparametric mapping, similar to MR fingerprinting, yielding PD, T1 and T2 maps as well as B1 and B0 inhomogeneity maps.
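For illustration, a minimal sketch of the voxel-wise quantification network described above: a three-hidden-layer multilayer perceptron that maps the six reconstructed signal samples of a voxel to a T1 estimate. The hidden width (64) and the ReLU activations are assumptions; the abstract only specifies three hidden layers and six repetitions.

# Minimal PyTorch sketch of the voxel-wise T1 regression MLP.
# Assumptions (not stated in the abstract): hidden width 64, ReLU activations.
import torch
import torch.nn as nn

N_READOUTS = 6   # six inversion-prepared FLASH acquisitions per voxel
HIDDEN = 64      # assumed hidden width

t1_net = nn.Sequential(
    nn.Linear(N_READOUTS, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, 1),        # voxel-wise T1 estimate
)

# Usage with dummy data at the 32 × 32 training matrix size: `signals` holds
# the adjoint-reconstructed signal of each voxel across the six repetitions.
signals = torch.rand(32 * 32, N_READOUTS)
t1_map = t1_net(signals).reshape(32, 32)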
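A hedged sketch of the interleaved joint optimization of sequence timings and network weights follows. The differentiable Bloch/acquisition simulation is replaced by a bare inversion-recovery signal model (simulate_signals), and the learning rates and penalty weight are illustrative; the actual MRzero pipeline simulates the full FLASH acquisition and adjoint reconstruction. t1_net is taken from the sketch above.

# Toy stand-in for the differentiable Bloch simulation; the real pipeline
# simulates the 180° inversion-prepared FLASH readout and adjoint reconstruction.
def simulate_signals(ti, trec, n_vox=1024):
    t1 = 0.3 + 4.2 * torch.rand(n_vox, 1)               # random T1 values in seconds
    s = 1.0 - 2.0 * torch.exp(-(ti + trec) / t1)        # signal per voxel, shape (n_vox, 6)
    return s, t1

ti = torch.zeros(6, requires_grad=True)    # inversion times, initialized with 0
trec = torch.zeros(6, requires_grad=True)  # recovery times, initialized with 0
opt_seq = torch.optim.Adam([ti, trec], lr=1e-2)
opt_net = torch.optim.Adam(t1_net.parameters(), lr=1e-3)
lam = 1e-3                                 # assumed weight of the scan-time penalty

for block in range(10):                    # 10 × 50 = 500 sequence iterations in total
    for _ in range(50):                    # sequence parameter updates
        s, t1_true = simulate_signals(ti, trec)
        loss = torch.mean((t1_net(s) - t1_true) ** 2) \
               + lam * (ti.sum() + trec.sum())          # penalize long scan times
        opt_seq.zero_grad(); loss.backward(); opt_seq.step()
        with torch.no_grad():              # keep timings non-negative in this toy model
            ti.clamp_(min=0.0); trec.clamp_(min=0.0)
    for _ in range(5000):                  # network updates with fixed timings
        s, t1_true = simulate_signals(ti.detach(), trec.detach())
        loss = torch.mean((t1_net(s) - t1_true) ** 2)
        opt_net.zero_grad(); loss.backward(); opt_net.step()

The alternation mirrors the interleaving described above (50 sequence iterations, then 5000 NN iterations, for 500 sequence iterations overall); the time penalty is what drives the optimized TI and Trec toward the shorter total acquisition reported in the results.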
@misc{item_3256621,
  title   = {{Autoencoding T1 using MRzero for simultaneous sequence optimization and neural network training}},
  journal = {{Magnetic Resonance Materials in Physics, Biology and Medicine}},
  volume  = {33},
  pages   = {S27--S28},
  year    = {2020},
  author  = {Dang, HN and Loktyushin, A and Glang, F and Herz, K and Doerfler, A and Sch\"olkopf, B and Scheffler, K and Maier, A and Zaiss, M}
}