Journal of Healthcare Engineering

Special Issue

Multiple Criteria Decision-Making Approaches for Healthcare Management Applications


Research Article | Open Access

Volume 2021 |Article ID 5591660 | https://doi.org/10.1155/2021/5591660

Huanyu Liu, Jiaqi Liu, Junbao Li, Jeng-Shyang Pan, Xiaqiong Yu, "PSR: Unified Framework of Parameter-Learning-Based MR Image Superresolution", Journal of Healthcare Engineering, vol. 2021, Article ID 5591660, 14 pages, 2021. https://doi.org/10.1155/2021/5591660

PSR: Unified Framework of Parameter-Learning-Based MR Image Superresolution

Academic Editor: Hao Chun Lu
Received: 25 Feb 2021
Revised: 27 Mar 2021
Accepted: 09 Apr 2021
Published: 22 Apr 2021

Abstract

Magnetic resonance imaging has significant applications in disease diagnosis. Owing to the particularity of its imaging mechanism, hardware-based imaging has reached its resolution limit, and higher radiation intensity and longer radiation time cause damage to the human body. Superresolution algorithms are expected to overcome this limit; in particular, image superresolution based on sparse reconstruction performs well. Dictionary generation is a key issue that affects the performance of superresolution algorithms, and dictionary performance is governed by the dictionary construction parameters: the balance parameter, the dictionary size, the overlapping block size, and the number of training sample blocks. To address this problem, we propose a search method that finds the optimal dictionary construction parameters for MR images experimentally and compare the resulting dictionary with dictionaries trained from multiple sets of random construction parameters. The dictionary trained with the optimal construction parameters has more powerful feature expression, which improves the superresolution effect on MR images.

1. Introduction

Magnetic resonance imaging (MRI) is used increasingly widely in clinical medicine and plays an ever more important role in the diagnosis of various diseases [1–5]. The mechanism of MR imaging differs from that of natural images. The hydrogen protons of human organs are magnetized by an external magnetic field and produce a magnetic resonance phenomenon under an applied field; the changing magnetic signals are converted into electrical signals by induction coils and fill k-space, and an MR image is finally generated through the Fourier transform. Improving resolution relies heavily on magnetizing more free water in human tissues and organs [6], which increases the radiation time and intensity of the scanner's main magnetic field and the loaded electromagnetic waves [7]. Excessive radiation can lead to serious consequences such as overheating of the human body and protein inactivation [8], causing harm that makes it unsuitable for clinical application. Given current imaging methods and technologies, hardware imaging resolution has reached its limit; to increase resolution further, software superresolution techniques must be used.

Image superresolution methods are mainly based on interpolation, reconstruction, or learning. Li and Orchard [9] proposed new edge-directed interpolation (NEDI); Wang and Ling [10] proposed an edge-adaptive interpolation algorithm (EAIA) combining bilinear interpolation with NEDI; Giachetti and Asuni [11] built an iterative curvature-based interpolation on top of NEDI. However, interpolation-based methods do not essentially add image information. Irani and Peleg [12] proposed iterative backprojection (IBP), Schultz and Stevenson [13] proposed a superresolution method based on maximum a posteriori (MAP) estimation, and Patti et al. [14] proposed projection onto convex sets (POCS); in [15], POCS reconstruction was applied to superresolution of cardiac-valve MR images. POCS does not preserve image edges well and cannot restore high-frequency image information. Reconstruction-based superresolution treats the low-resolution observations as constraints on the original high-resolution image, and a solution space satisfying the constraints is obtained through alternating iteration. Most such algorithms build constraints from prior knowledge, such as image edge characteristics, pixel nonnegativity, and local smoothness, and then solve the optimization problem iteratively. Reconstruction algorithms are computationally expensive, and the resulting images are overly smooth. Learning-based superresolution mainly comprises dictionary learning and deep learning. Yang et al. [16] proposed a superresolution reconstruction algorithm based on sparse representation, which effectively overcomes the inaccurate representation caused by using a fixed number of neighbors.
On the basis of [17], adaptive sparse domain selection and adaptive regularization were applied to superresolution. Yang et al. [18] proposed a double-geometric neighborhood embedding method (DGNE), which uses multiview features and local spatial neighbors of image blocks to find a feature-spatial manifold embedding. Zhang et al. [19] combined subspace division and local regressor learning through a mixture-of-experts method to further improve reconstruction quality. With the development of deep learning, Shi et al. [20] proposed an image SR method that integrates local and global information for effective image recovery, achieved by adding, in addition to total variation (TV), a low-rank regularization that exploits information throughout the image; however, the reconstructed image exhibits stripe distortion, and texture and other details are blurred to some extent. Dong et al. [21] were the first to use convolutional neural networks for image superresolution reconstruction: the image is first enlarged to the target size by bicubic interpolation and then passed through a three-layer convolutional network performing a nonlinear mapping, and experiments show that this achieves good results. Since then, Residual Dense Network [22], SRGAN [23], and many other deep networks [24, 25] have been proposed. Deep learning methods are data driven, and network performance depends on the amount of data; however, owing to the particularity and privacy of MR images, large amounts of data are difficult to obtain. The MR image superresolution task is thus better suited to methods with weak dependence on data volume.

Our contributions are threefold. First, we propose a superresolution architecture based on joint dictionary learning that suits a small number of MR images. Second, we analyze the effect of dictionary parameters on dictionary performance and find the optimal dictionary parameters through parameter learning. Third, experiments show that our proposed method achieves state-of-the-art performance even with only a small amount of image data.

2. Algorithm and Analysis

2.1. Algorithm

We propose a superresolution architecture based on joint dictionary learning that suits a small number of MR images. The algorithm framework is shown in Figure 1. High- and low-resolution dictionaries are learned from corresponding pairs of high- and low-resolution image blocks; low-resolution image blocks are then sparsely represented over the low-resolution dictionary, and the resulting sparse coefficients are used with the high-resolution dictionary to reconstruct the high-resolution image. The performance of the two dictionaries directly affects the reconstruction quality.
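As a minimal sketch of the reconstruction step described above, the fragment below recovers a high-resolution patch from a low-resolution one, assuming the two dictionaries have already been trained. All names are illustrative (not from the paper's implementation), and a ridge-regularized least-squares fit stands in for a true L1 sparse-coding solver purely to keep the sketch dependency-free.

```python
import numpy as np

def reconstruct_patch(y_low, D_low, D_high, lam=0.1):
    """Recover a high-resolution patch from a low-resolution feature vector.

    Illustrative stand-in: ridge-regularized least squares replaces the
    L1 sparse-coding solver used in practice.
    """
    # alpha approximates the sparse code of y_low over the LR dictionary
    A = D_low.T @ D_low + lam * np.eye(D_low.shape[1])
    alpha = np.linalg.solve(A, D_low.T @ y_low)
    # The same coefficients select atoms of the HR dictionary
    return D_high @ alpha

rng = np.random.default_rng(0)
D_low = rng.standard_normal((25, 512))    # 5x5 LR features, 512 atoms
D_high = rng.standard_normal((100, 512))  # 10x10 HR patches, 512 atoms
x_high = reconstruct_patch(rng.standard_normal(25), D_low, D_high)
```

The key design point carried over from the paper is that one coefficient vector is shared between both dictionaries, so the code found on the low-resolution side transfers directly to the high-resolution side.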

Training a joint dictionary requires a set of high-resolution image blocks and a set of low-resolution image blocks. The training pair is represented by {X^h, Y^l}, where X^h denotes the features extracted from the high-resolution image blocks and Y^l those extracted from the corresponding low-resolution blocks. The dictionary training over the two image feature spaces is expressed as follows [16]:

D_h = arg min_{D_h, Z} ||X^h − D_h Z||²₂ + λ||Z||₁, (1)
D_l = arg min_{D_l, Z} ||Y^l − D_l Z||²₂ + λ||Z||₁, (2)

where D_h and D_l are the high- and low-resolution dictionaries, Z is the shared matrix of sparse coefficients, and λ is the balance parameter.

According to the idea of the joint training method, the image blocks of the two feature spaces are concatenated to form a new image feature block space, so formula (3) [16] is obtained:

min_{D, Z} ||X_c − D Z||²₂ + λ(1/N + 1/M)||Z||₁, (3)

with

X_c = [ (1/√N) X^h ; (1/√M) Y^l ], D = [ (1/√N) D_h ; (1/√M) D_l ],

where N is the dimension of the high-resolution image feature blocks and M is the dimension of the low-resolution image feature blocks. It can be seen from formula (3) that the balance parameter, the dictionary block size, the overlapping block size, and the number of dictionary blocks all have an important impact on dictionary performance. We obtain the optimal dictionary construction parameters through experimental analysis, thereby improving both dictionary performance and image reconstruction.
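The concatenation step behind formula (3) can be sketched in a few lines; the function name and matrix layout here are assumptions for illustration, with each feature space scaled by one over the square root of its dimension so both terms of the joint objective contribute equally.

```python
import numpy as np

def joint_patch_matrix(Xh, Yl):
    """Stack HR and LR patch features into one joint feature space.

    Xh: (N, K) matrix of K high-resolution features of dimension N.
    Yl: (M, K) matrix of the K corresponding low-resolution features
        of dimension M.
    Each space is divided by sqrt(dimension) so neither dominates the
    joint dictionary-learning objective.
    """
    N, M = Xh.shape[0], Yl.shape[0]
    return np.vstack([Xh / np.sqrt(N), Yl / np.sqrt(M)])
```

A single dictionary learned on this stacked matrix splits row-wise into D_h (top N rows, rescaled) and D_l (bottom M rows, rescaled), which is what lets one sparse code serve both resolutions.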

2.2. Parameter Set and Analysis

The parameter set is written as parameter = [λ, overlap, n, spn], where λ is the balance parameter, overlap is the size of the overlapped block, n is the size of the dictionary block, and spn is the number of exemplar patches.

From the perspective of the reconstruction mechanism, changes in these parameters alter the structure and quantity of the data handled by the dictionary, which strongly influences the reconstruction effect. The specific analysis follows.

According to formula (3), the balance parameter λ balances the sparsity of the coefficients against the representation error of the image blocks. It can be seen from the equation that sparsity and error trade off against each other: enforcing a sparser coefficient vector increases the error, and vice versa, so there must be an optimal value that minimizes the objective in formula (3).
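The trade-off can be seen exactly in the one-dimensional case, where the lasso subproblem has a closed-form soft-threshold solution. This toy example (not from the paper) shows that raising λ shrinks the code toward zero (more sparsity) while the representation error grows:

```python
import numpy as np

def soft_threshold(x, lam):
    """Closed-form minimizer of 0.5*(x - z)**2 + lam*|z| (1-D lasso)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = 1.0                                    # target value to represent
lams = [0.01, 0.1, 0.5, 0.9]               # increasing balance parameter
codes = [soft_threshold(x, lam) for lam in lams]
errors = [abs(x - z) for z in codes]
# Larger lambda -> smaller (sparser) code but larger representation error
```

In the full matrix problem the same tension holds term by term, which is why an intermediate λ minimizes the overall objective.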

The overlap parameter is the size of the overlapped portion between image blocks and plays two roles: selecting sample blocks in dictionary training and overlapping test image blocks during reconstruction. To ensure that the detailed features of the sample blocks can be extracted during dictionary training, the maximum-overlap method is used to select samples, so that adjacent training sample blocks of a local image region shift by one pixel. To eliminate the boundary blur caused by feature extraction of the test image blocks and to ensure reliable connections between reconstructed blocks, overlap between blocks is required: the larger the overlap block, the stronger the constraint between reconstructed blocks and the better the reconstruction effect.
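The patch-selection rule above can be sketched as a sliding window whose stride is the block size minus the overlap; with the maximum overlap of n − 1 pixels, the stride is 1. Function and variable names are illustrative:

```python
import numpy as np

def extract_patches(img, n, overlap):
    """Extract n-by-n blocks with the given overlap (stride = n - overlap)."""
    stride = n - overlap
    H, W = img.shape
    patches = [img[i:i + n, j:j + n]
               for i in range(0, H - n + 1, stride)
               for j in range(0, W - n + 1, stride)]
    return np.stack(patches)

img = np.arange(64.0).reshape(8, 8)
dense = extract_patches(img, 5, 4)   # maximum overlap: stride 1
sparse = extract_patches(img, 5, 0)  # no overlap: disjoint blocks
```

With maximum overlap every one-pixel shift of the window becomes a training sample, which is what lets fine gradual changes in a local region enter the dictionary.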

MR images are expected to show various tissue structures clearly, with high tissue resolution. When the outline of diseased tissue cannot be seen clearly and separated from the surrounding structures, diagnosis suffers, so it is important to display texture details clearly, especially the boundaries between different tissue structures, and thereby reduce the chance of misdiagnosis caused by image blur. For the dictionary and the reconstructed image, the size of the dictionary block used to represent features affects the number of effective features. The smaller the dictionary block, the fewer features it can generate, which limits the reconstructed image and enlarges its error; in the most extreme cases of 1 × 1 and 2 × 2 blocks, the describable features are very limited. An oversized block is also problematic: the features described by a large block can be composed from several smaller feature blocks, so the large block loses the properties of a feature block (in the extreme case, the test image itself becomes one large feature block, which is clearly unreasonable). Therefore, there must be an optimal block size that makes image reconstruction best.

We adopt sparse representation and reconstruction of image blocks, and dictionary training requires a large number of sampled image blocks, whose count influences the reconstructed quality. If there are too few sample blocks, dictionary training cannot be completed. If there are too many, especially blocks whose features are not obvious or typical, the representational power of the dictionary does not improve no matter how long training runs. Is there an optimal number of blocks? Since the sampled image blocks are extracted randomly, it is difficult to extract exactly the desired blocks in a precise count, so there is no single optimal number of sample blocks; it suffices to select a sufficient amount, as too few and too many blocks alike fail to improve the reconstruction effect.

3. Experimental Results and Analysis

3.1. Dataset

All experiments are set as follows. The method adopts the experimental framework of Section 2.1. Eighty-one representative pictures of different categories in the image library were selected as training samples for the high-resolution dictionary. These MR images were obtained on Siemens 3T platforms using a 32-channel head coil. Low-resolution images are generated by degrading the high-resolution images in the following steps: (1) the high-resolution image is transformed from image space to k-space by FFT; (2) in k-space, the outer high frequencies are truncated; (3) the truncated k-space data are transformed back into image space by the inverse Fourier transform to generate the corresponding low-resolution image. This mimics the actual acquisition of LR and HR images by MRI scanners. As shown in Figure 2, five images corresponding to different types of MR images are selected as test samples.
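The three degradation steps above can be sketched with numpy's FFT routines. This is an assumed implementation (symmetric central crop, magnitude output, size-change renormalization), not the authors' exact code:

```python
import numpy as np

def degrade_kspace(hr, ratio=2):
    """Generate an LR image by truncating the outer (high-frequency) k-space.

    hr: 2-D high-resolution image; ratio: superresolution factor.
    """
    k = np.fft.fftshift(np.fft.fft2(hr))        # step 1: image -> centered k-space
    H, W = hr.shape
    h, w = H // ratio, W // ratio
    top, left = (H - h) // 2, (W - w) // 2
    k_low = k[top:top + h, left:left + w]       # step 2: keep central low frequencies
    lr = np.fft.ifft2(np.fft.ifftshift(k_low))  # step 3: back to image space
    return np.abs(lr) / (ratio * ratio)         # renormalize for the size change
```

Truncating k-space (rather than downsampling in image space) matches how a faster MRI scan with fewer phase-encoding lines would actually produce a lower-resolution image.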


Table 1: PSNR values for different balance parameters (superresolution ratio 1 : 2).

Balance parameter   Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
0.0001              32.1147    28.7875    32.0252    30.4382    30.1109
0.005               32.2643    28.9049    32.0483    30.6418    30.3621
0.01                32.2774    28.9341    32.0550    30.6463    30.3665
0.1                 32.2431    28.8725    32.0357    30.5912    30.3487
0.2                 32.1026    28.7828    31.9349    30.5253    30.2482
0.3                 31.9502    28.6892    31.8094    30.4486    30.1392
0.4                 31.7931    28.5935    31.6755    30.3667    30.0240
0.5                 31.6271    28.4894    31.5267    30.2823    29.9142
0.6                 31.4527    28.3712    31.3588    30.1950    29.7959
0.7                 31.1624    28.1633    31.0738    30.0590    29.6033
0.8                 29.1705    26.9047    29.2414    29.2739    28.1618
0.9                 24.4864    23.1957    25.2566    26.5782    24.2847

3.2. Joint Optimization of Parameter
3.2.1. Balance Parameter λ

The optimal value of the balance parameter is verified by the experiments below. The initial parameter configuration is as follows: the dictionary size is 512, the balance parameter λ = 0.1, the block size is 5 × 5, the overlap block is 4, and the number of sample blocks is 100000; the test samples are selected from the head, ankle, carotid artery, knee, and neck, respectively, as shown in Figure 2.

It can be seen from Figure 4 that the value of PSNR decreases significantly with increasing λ when λ > 0.1, whereas for λ < 0.1 the PSNR decreases only slowly as λ decreases. As the parameter balancing sparsity against error, λ has an optimal value that maximizes PSNR; we take λ = 0.1 as the optimal balance parameter. For further verification, the experiment is repeated with a superresolution ratio of 1 : 4.
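PSNR, the quality metric reported throughout these tables, is computed from the mean squared error against the reference image. A standard definition (assuming an 8-bit peak value of 255, which the paper does not state explicitly):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in decibels."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because PSNR is logarithmic, the roughly 7 dB drop between λ = 0.1 and λ = 0.9 in Table 1 corresponds to about a fivefold increase in mean squared error.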

This experiment uses a superresolution ratio of 1 : 4, with the other parameters the same as in the 1 : 2 experiment. The experimental results are shown in Table 2.


Table 2: PSNR values for different balance parameters (superresolution ratio 1 : 4).

Balance parameter   Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
0.0001              29.7593    25.2764    29.0537    26.9532    27.4701
0.005               29.8049    25.3250    29.1497    27.0742    27.5213
0.01                29.8186    25.3349    29.1571    27.0843    27.5372
0.1                 29.8281    25.3454    29.1645    27.0922    27.5412
0.2                 29.7391    25.2944    29.1024    27.0675    27.4779
0.3                 29.6576    25.2515    29.0441    27.0382    27.4229
0.4                 29.5741    25.2092    28.9856    27.0088    27.3612
0.5                 29.4826    25.1631    28.9223    26.9778    27.2964
0.6                 29.3861    25.1159    28.8543    26.9439    27.2285
0.7                 29.2832    25.0664    28.7783    26.9074    27.1586
0.8                 29.1534    25.0057    28.6830    26.8639    27.0771
0.9                 28.4246    24.7130    28.1476    26.6769    26.6622

Figure 3 is obtained from Table 2. It can be seen from the figure that the extreme point is near λ = 0.1, so the experiment with a superresolution ratio of 1 : 4 leads to the same conclusion as the experiment with a ratio of 1 : 2.

3.2.2. Overlap Block

The relationship between the image reconstruction effect and the overlapped blocks is verified by the following experiment. The initial parameters are the same as in the balance-parameter experiment; only the size of the overlapped region is changed. Overlapped blocks of 1–4 pixels are used. The experimental results can be seen in Table 3.


Table 3: PSNR values for different overlap blocks.

Superresolution ratio   Overlap block   Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
1 : 2                   1               31.8015    28.5365    31.6296    30.3486    29.9244
1 : 2                   2               31.8795    28.6155    31.7500    30.4587    30.0547
1 : 2                   3               32.1207    28.8228    31.9781    30.5427    30.2708
1 : 2                   4               32.2431    28.8725    32.0357    30.5912    30.3487
1 : 4                   1               28.6725    24.7516    28.3602    26.6512    26.8626
1 : 4                   2               29.0840    25.0284    28.6332    26.8210    27.0542
1 : 4                   3               29.5786    25.2175    28.9978    27.0029    27.3708
1 : 4                   4               29.8281    25.3454    29.1645    27.0922    27.5412

According to the experimental results, the images with superresolution ratios of 1 : 2 and 1 : 4 are plotted in Figure 5, where the abscissa is the number of overlapped pixels and the ordinate the corresponding PSNR. As the overlap between blocks decreases, the value of PSNR decreases. This is because a larger overlap imposes a stronger constraint between reconstructed blocks, making it easier to find the closest image blocks to join; the more the edge pixels of adjacent blocks overlap, the easier it is to eliminate the truncation error introduced by feature extraction, which also suppresses noise to some extent.
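The stitching of overlapped reconstructed blocks described above is usually done by averaging each pixel over every patch that covers it. A minimal sketch (names and the plain-averaging rule are assumptions; weighted blending is also common):

```python
import numpy as np

def assemble(patches, positions, shape):
    """Place square patches at (row, col) positions and average the overlaps."""
    acc = np.zeros(shape)   # running sum of patch values per pixel
    cnt = np.zeros(shape)   # how many patches cover each pixel
    n = patches[0].shape[0]
    for patch, (i, j) in zip(patches, positions):
        acc[i:i + n, j:j + n] += patch
        cnt[i:i + n, j:j + n] += 1
    return acc / np.maximum(cnt, 1)
```

Averaging over many overlapping estimates is what smooths out per-patch truncation error at block boundaries, which is why larger overlaps raise PSNR in Table 3.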

3.2.3. Dictionary Blocking Size

The following experiments show the quality of the reconstructed image under different blockings of the same test image, where the set of dictionaries is generated with varying sizes of the image blocks.

The other experimental parameters do not change; only the block size of the image blocking varies. The image blocking of the dictionary follows the same requirements as the image blocking of the test image.

The experimental results are as follows. When the superresolution ratio is 1 : 2, eight high-resolution dictionaries with image blocks from 3 × 3 to 10 × 10 are generated; three of them are shown in Figure 6. It can be seen that as the image block size increases, the dictionary blocks become more and more complicated. The resulting PSNR values are shown in Table 4.


Table 4: PSNR values for different dictionary blocking sizes (superresolution ratio 1 : 2).

Dictionary blocking size   Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
5 × 5                      32.24      28.87      32.04      30.59      30.35
6 × 6                      32.30      28.87      32.06      30.53      30.31
7 × 7                      31.99      28.75      31.95      30.32      30.14
8 × 8                      31.62      28.44      31.65      30.09      29.89
9 × 9                      31.20      28.21      31.33      29.80      29.52
10 × 10                    30.92      27.95      31.18      29.60      29.27

Table 4 shows the superresolution reconstruction PSNR corresponding to dictionaries trained with different image blockings. Since the reconstruction overlap block is 4, which exceeds the block size itself for the 3 × 3 and 4 × 4 dictionaries, the corresponding reconstructions are invalid, and those two data sets are not analyzed. The remaining dictionary blockings and PSNRs are plotted in Figure 7.

The abscissa in Figure 7 represents only the blocking of the dictionary and the image; for example, abscissa 5 indicates a 5 × 5 dictionary block, and so on. With the overlap block size unchanged, increasing the block size reduces PSNR; that is, a bigger block is not always better. When the block is large, the number of dictionary blocks needed to represent an image feature block increases, and the reconstruction error grows. The preferred block size is therefore 5 × 5 or 6 × 6, and considering computational efficiency, 5 × 5 is optimal.

When the superresolution ratio is 1 : 4, high-resolution dictionaries with blocks from 5 × 5 to 13 × 13 are generated; three of them are shown in Figure 8, where the dictionary blocks become more and more complicated as the block size grows. However, a too-large block produces many singular matrices when computing the dictionary blocks, so dictionary block information is lost: the larger the block, the fewer valid dictionary blocks remain, which lowers the PSNR, as shown in Figure 8(c). The only parameter changed in this experiment is the superresolution ratio, now 1 : 4; the experimental results are shown in Table 5.


Table 5: PSNR values for different dictionary blocking sizes (superresolution ratio 1 : 4).

Dictionary blocking size   Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
5 × 5                      29.83      25.35      29.16      27.09      27.54
6 × 6                      30.00      25.49      29.38      27.22      27.72
7 × 7                      30.30      25.61      29.49      27.18      27.91
8 × 8                      30.40      25.71      29.72      27.23      28.02
9 × 9                      30.43      25.69      29.78      27.27      28.05
10 × 10                    30.75      25.75      29.94      27.32      28.11
11 × 11                    30.67      25.77      29.90      27.22      28.12
12 × 12                    30.53      25.71      29.82      27.19      27.93
13 × 13                    30.31      25.60      29.64      27.08      27.78

The data in Table 5 are the PSNR values produced by superresolution reconstruction of the test samples with dictionaries trained on different image blockings. To distinguish the influence of the blocking intuitively, the blockings are plotted on the horizontal axis against PSNR on the vertical axis in Figure 9.

The abscissa in Figure 9 again shows only the image blocking. The best PSNR corresponds to a 10 × 10 or 11 × 11 blocking; taking the computational cost into account, 10 × 10 is best. If the blocking is too small, features cannot be represented fully; if it is too large, the algorithm itself becomes limited.

Comparing the results for superresolution ratios 1 : 4 and 1 : 2, each has its own best blocking, and the best block for ratio 1 : 4 is approximately double that for 1 : 2. This is because 4× superresolution requires more local image information, so the image block naturally needs to be correspondingly larger.

The two experiments above fixed the overlapped block at 4, yet the overlap experiment showed that larger overlaps give better results, so the best overlap case was not considered. Next, we examine whether the best block size changes when each block size is paired with its maximum possible overlap.

This experiment verifies the effect of the maximum overlap block on superresolution performance. The parameters are the same as in the previous experiments; only the block size and the overlapped block change: for a block size of n × n, the overlapped block is n − 1. When the superresolution ratio is 1 : 2, the experimental results can be seen in Table 6.


Table 6: PSNR values with maximum overlap blocks (superresolution ratio 1 : 2).

Block size   Overlap block   Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
3 × 3        2               31.30      28.21      31.22      30.26      29.70
4 × 4        3               31.96      28.70      31.73      30.54      30.19
5 × 5        4               32.24      28.87      32.04      30.59      30.35
6 × 6        5               32.29      28.90      32.07      30.55      30.32
7 × 7        6               32.11      28.83      32.03      30.40      30.26
8 × 8        7               31.90      28.71      31.93      30.26      30.08
9 × 9        8               31.71      28.60      31.80      30.10      29.93
10 × 10      9               31.43      28.42      31.65      29.95      29.70

The data show the superresolution reconstruction results, where each test sample corresponds to different block and overlap sizes. For comparison, the data are plotted in Figure 10.

The horizontal coordinate in Figure 10 indicates only the blocking. It can be clearly seen from Figure 10 that the reconstruction is better when the dictionary block is 5 × 5 or 6 × 6. Smaller blocks carry insufficient features, while too-large blocks increase the number of pixels to compute, and the growth of the dictionary representation caused by larger feature blocks enlarges the error and hurts the PSNR. In this experiment, the largest possible overlap is used so that every block size reaches its best reconstruction; the better block size is still 5 × 5 or 6 × 6, and 5 × 5 is best for computational efficiency.

The above experiment is repeated for a superresolution ratio of 1 : 4; the parameters are the same as in the other experiments, with only the block size and the overlapped block changed. The results are shown in Table 7 and plotted, with the block size as abscissa and PSNR as ordinate, in Figure 11.


Table 7: PSNR values with maximum overlap blocks (superresolution ratio 1 : 4).

Block size   Overlap block   Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
3 × 3        2               29.22      25.05      28.78      26.91      27.11
4 × 4        3               29.45      25.16      28.93      26.97      27.26
5 × 5        4               29.83      25.35      29.16      27.09      27.54
6 × 6        5               30.19      25.57      29.43      27.26      27.81
7 × 7        6               30.63      25.72      29.68      27.30      28.06
8 × 8        7               30.80      25.88      29.96      27.35      28.25
9 × 9        8               30.92      25.94      30.03      27.43      28.30
10 × 10      9               31.11      25.96      30.15      27.44      28.31
11 × 11      10              30.91      25.91      30.09      27.32      28.22
12 × 12      11              30.87      25.90      30.05      27.30      28.17
13 × 13      12              30.63      25.76      29.88      27.19      27.98

It can be seen from Figure 11 that the 10 × 10 training dictionary gives the best superresolution reconstruction at a ratio of 1 : 4. The experiments above show that the block size has an optimal value that is related to the superresolution ratio: the larger the ratio, the larger the block needed, and changing the overlap block does not alter the optimal block. For MR images, the optimal block is 5 × 5 at a superresolution ratio of 1 : 2 and 10 × 10 at a ratio of 1 : 4.

3.2.4. Number of Sampling Blocks

This experiment uses the same parameters as the other experiments; only the number of sampled image blocks changes, and the blocks may be extracted with overlap. The superresolution ratio is 1 : 2, and the experimental results are shown in Table 8; the numbers of sampled blocks are taken as the abscissa and the PSNR values as the ordinate in Figure 12.


Table 8: PSNR values for different numbers of sampling blocks (superresolution ratio 1 : 2).

Number of blocks   Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
1000               31.98      28.64      31.74      30.45      30.10
5000               32.15      28.79      31.94      30.52      30.23
10000              32.20      28.81      31.96      30.56      30.28
50000              32.29      28.84      31.99      30.57      30.31
100000             32.24      28.87      32.04      30.59      30.35
150000             32.31      28.91      32.04      30.60      30.37
200000             32.26      28.84      32.04      30.58      30.35
500000             32.32      28.86      32.01      30.61      30.35

It can be seen from Figure 12 that fewer than 10000 sample blocks is too few. The algorithm removes sample blocks that do not meet the requirements: MR images contain many black or dark areas whose gray levels barely change, and samples with little gray-level variation, all zeros, or nearly all zeros are rejected, which greatly reduces the number of blocks actually involved in the computation. A training set with insufficient features therefore lowers the PSNR of the reconstruction. Conversely, a large increase in the number of blocks causes no significant change in PSNR and exhibits no maximum, only fluctuation. All the sample blocks participate in dictionary training, so too many blocks merely lengthen training time without benefiting the generation of the high-resolution dictionary. It is therefore best to select 150000 sampling blocks. The following experiment verifies this at a superresolution ratio of 1 : 4.
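The rejection of near-constant dark patches described above can be sketched as a variance filter; the function name and threshold are assumptions for illustration (the paper does not specify its exact rejection rule):

```python
import numpy as np

def keep_informative(patches, tol=1e-3):
    """Drop near-constant (e.g. all-dark background) patches before training.

    patches: array of shape (num_patches, n, n).
    A patch whose pixel variance is at or below tol carries almost no
    texture information and is excluded from dictionary training.
    """
    variances = np.var(patches.reshape(len(patches), -1), axis=1)
    return patches[variances > tol]
```

This filter explains why the nominal sample count overstates the effective one: on MR images with large dark backgrounds, many sampled blocks are discarded, so a small nominal budget can leave too few blocks to train the dictionary.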

The data in Table 9 are plotted with the number of sampled blocks as abscissa and PSNR as ordinate in Figure 13. As can be seen from Figure 13, the conclusion at a superresolution ratio of 1 : 4 matches that at 1 : 2 regarding block selection, except that the dictionary cannot be trained at all with only 1000 blocks at ratio 1 : 4, which places a stronger requirement on the number of blocks. Considering the reduction of dictionary training time, it is again best to select 150000 blocks.


Table 9: PSNR values for different numbers of sampling blocks (superresolution ratio 1 : 4).

Number of sample image blocks   Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
1000                            —          —          —          —          —
5000                            29.78      25.29      29.11      27.04      27.43
10000                           29.75      25.30      29.15      27.09      27.48
50000                           29.77      25.35      29.16      27.11      27.53
100000                          29.83      25.35      29.16      27.09      27.54
150000                          29.84      25.38      29.18      27.08      27.54
200000                          29.81      25.36      29.17      27.09      27.51
500000                          29.85      25.39      29.16      27.10      27.53

3.3. Experiment Simulation of Comprehensive Parameters

The previous section analyzed several parameters that affect the superresolution effect. The optimal parameter values for MR image superresolution are shown in Table 10.


Table 10: Optimal parameter values.

Superresolution ratio   Balance parameter λ   Overlap block   Dictionary block size   Number of sampling blocks
1 : 2                   0.1                   4               5 × 5                   150000
1 : 4                   0.1                   9               10 × 10                 150000

The validity of the optimal parameters is verified by the experiments below. Several sets of random parameters are selected to train random-group dictionaries, which are compared with the dictionary trained on the optimal parameters, as shown in Table 11. The PSNR results obtained by the experiments are shown in Table 12.


Table 11: Random parameter groups.

Random group    Superresolution ratio   Balance parameter λ   Overlap block   Dictionary block size   Number of sampling blocks
First group     1 : 2                   0.1                   2               4 × 4                   100000
Second group    1 : 2                   0.2                   7               8 × 8                   50000
Third group     1 : 2                   0.2                   4               6 × 6                   10000
Fourth group    1 : 2                   0.3                   3               7 × 7                   120000
Fifth group     1 : 4                   0.4                   6               9 × 9                   30000
Sixth group     1 : 4                   0.2                   4               5 × 5                   80000
Seventh group   1 : 4                   0.3                   9               12 × 12                 90000
Eighth group    1 : 4                   0.1                   8               10 × 10                 7000


Table 12: PSNR values of the random groups.

Random group    Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
First group     31.83      28.56      31.60      30.46      30.07
Second group    31.50      28.45      31.61      30.10      29.72
Third group     31.88      28.63      31.79      30.35      30.13
Fourth group    30.78      27.85      30.87      29.74      29.17
Fifth group     30.23      25.53      29.61      27.17      27.83
Sixth group     29.80      25.34      29.14      27.07      27.50
Seventh group   30.27      25.63      29.64      27.04      27.81
Eighth group    30.66      25.81      29.85      27.28      28.06

Comparing the data in Tables 12 and 13, the superresolution PSNR of the optimal group is higher than that of every random group, whether the superresolution ratio is 1 : 2 or 1 : 4. This shows that the optimal-group parameters are indeed the best parameter values.


Table 13: PSNR values of the optimal-parameter group.

Optimal group                 Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
Superresolution ratio 1 : 2   32.31      28.91      32.04      30.60      30.37
Superresolution ratio 1 : 4   31.10      26.00      30.14      27.37      28.36

From the experimental results, it can be seen that the superresolution effect differs markedly across the five body parts: the head and carotid artery benefit most, and the ankle least. This is mainly because each part contains a different proportion of water. More water yields more hydrogen protons, which, under the magnetic field and radio-frequency pulses, generate more high-frequency information and thus better reproduce image edges, texture, and other details.

4. Conclusion

We propose a joint dictionary learning framework for superresolution of MR images, in which changes in the dictionary construction parameters alter the trained dictionary and thus the performance of the superresolution reconstruction. We learned the optimal dictionary construction parameters through a large number of experiments and verified that the learned parameters effectively improve dictionary performance and enhance the expressive ability of the image blocks, thereby achieving better MR image superresolution.

Data Availability

We have not used specific data from other sources for the simulation of the results. The two popular MRI datasets used in this paper, the fastMRI dataset and the IXI dataset, can be freely downloaded from https://fastmri.org/ and http://www.brain-development.org/.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank Dr. Xie Haozhe and Dr. Wang Tingting for their assistance in the writing of this paper. This work was supported by the National Natural Science Foundation of China under Grant Nos. 61671170 and 61872085, the Science and Technology Foundation of the National Defense Key Laboratory of Science and Technology on Parallel and Distributed Processing (PDL) under Grant No. 6142110180406, the Science and Technology Foundation of the ATR National Defense Key Laboratory under Grant No. 6142503180402, the China Academy of Space Technology (CAST) Innovation Fund under Grant No. 2018CAST33, and the Joint Fund of China Electronics Technology Group Corporation and Equipment Pre-Research under Grant No. 6141B08231109.

References

  1. J. A. López, F. Saez, J. A. Larena et al., “MRI diagnosis and follow-up of subcutaneous fat necrosis,” Journal of Magnetic Resonance Imaging, vol. 7, no. 5, pp. 929–932, 2010.
  2. E.-S. A. El-Dahshan, H. M. Mohsen, K. Revett, and A.-B. M. Salem, “Computer-aided diagnosis of human brain tumor through MRI: a survey and a new algorithm,” Expert Systems with Applications, vol. 41, no. 11, pp. 5526–5545, 2014.
  3. A. M. Venkatesan, R. J. Stafford, C. Duran et al., “Prostate MRI for brachytherapists: diagnosis, imaging pitfalls, and post-therapy assessment,” Brachytherapy, vol. 16, no. 4, 2017.
  4. C. Lukas, C. Cyteval, M. Dougados et al., “MRI for diagnosis of axial spondyloarthritis: major advance with critical limitations ‘Not everything that glisters is gold (standard)’,” RMD Open, vol. 4, no. 1, Article ID e000586, 2018.
  5. F. Bruno, F. Arrigoni, P. Palumbo et al., “New advances in MRI diagnosis of degenerative osteoarthropathy of the peripheral joints,” La Radiologia Medica, vol. 124, no. 11, pp. 1121–1127, 2019.
  6. M. O. Leach, “Principle of magnetic resonance,” in Physics for Medical Imaging Applications, Springer, Dordrecht, Netherlands, 2007.
  7. R. Krug, C. Stehling, D. A. C. Kelley, S. Majumdar, and T. M. Link, “Imaging of the musculoskeletal system in vivo using ultra-high field magnetic resonance at 7 T,” Investigative Radiology, vol. 44, no. 9, pp. 613–618, 2009.
  8. J. Karpowicz, K. Gryz, P. Politański et al., “Exposure to static magnetic field and health hazards during the operation of magnetic resonance scanners,” Medycyna Pracy, vol. 62, no. 3, pp. 309–321, 2011.
  9. X. Li and M. T. Orchard, “New edge-directed interpolation,” IEEE Transactions on Image Processing, vol. 10, no. 10, pp. 1521–1527, 2001.
  10. X. F. Wang and H. F. Ling, “An edge-adaptive interpolation algorithm for superresolution reconstruction,” in Proceedings of the International Conference on Multimedia Information Networking & Security, Nanjing, China, November 2010.
  11. A. Giachetti and N. Asuni, “Real-time artifact-free image upscaling,” IEEE Transactions on Image Processing, vol. 20, no. 10, pp. 2760–2768, 2011.
  12. M. Irani and S. Peleg, “Improving resolution by image registration,” CVGIP: Graphical Models and Image Processing, vol. 53, no. 3, pp. 231–239, 1991.
  13. R. R. Schultz and R. L. Stevenson, “Extraction of high-resolution frames from video sequences,” IEEE Transactions on Image Processing, vol. 5, no. 6, pp. 996–1011, 1996.
  14. A. J. Patti, M. I. Sezan, and A. Murat Tekalp, “Superresolution video reconstruction with arbitrary sampling lattices and nonzero aperture time,” IEEE Transactions on Image Processing, vol. 6, no. 8, pp. 1064–1076, 1997.
  15. A. W. Dowsey, J. Keegan, M. Lerotic, S. Thom, D. Firmin, and G.-Z. Yang, “Motion-compensated MR valve imaging with COMB tag tracking and superresolution enhancement,” Medical Image Analysis, vol. 11, no. 5, pp. 478–491, 2007.
  16. J. Yang, J. Wright, T. S. Huang et al., “Image superresolution via sparse representation,” IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, 2010.
  17. W. Dong, L. Zhang, G. Shi et al., “Image deblurring and superresolution by adaptive sparse domain selection and adaptive regularization,” IEEE Transactions on Image Processing, vol. 20, no. 7, pp. 1838–1857, 2010.
  18. S. Yang, Z. Wang, L. Zhang, and M. Wang, “Dual-geometric neighbor embedding for image super resolution with sparse tensor,” IEEE Transactions on Image Processing, vol. 23, no. 7, pp. 2793–2803, 2014.
  19. K. Zhang, D. Tao, X. Gao et al., “Learning multiple linear mappings for efficient single image superresolution,” IEEE Transactions on Image Processing, vol. 24, no. 3, pp. 846–861, 2015.
  20. F. Shi, J. Cheng, L. Wang et al., “LRTV: MR image superresolution with low-rank and total variation regularizations,” IEEE Transactions on Medical Imaging, vol. 34, no. 12, 2015.
  21. C. Dong, C. C. Loy, K. He et al., “Learning a deep convolutional network for image superresolution,” in Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, September 2014.
  22. Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image restoration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
  23. C. Ledig, L. Theis, F. Huszar et al., “Photo-realistic single image superresolution using a generative adversarial network,” in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, July 2017.
  24. X. Hu, H. Mu, X. Zhang, Z. Wang, T. Tan, and J. Sun, “Meta-SR: a magnification-arbitrary network for superresolution,” in Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, June 2019.
  25. Z. Wang, J. Chen, and S. C. H. Hoi, “Deep learning for image superresolution: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.

Copyright © 2021 Huanyu Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
