• <tr id="yyy80"></tr>
  • <sup id="yyy80"></sup>
  • <tfoot id="yyy80"><noscript id="yyy80"></noscript></tfoot>
  • 99热精品在线国产_美女午夜性视频免费_国产精品国产高清国产av_av欧美777_自拍偷自拍亚洲精品老妇_亚洲熟女精品中文字幕_www日本黄色视频网_国产精品野战在线观看 ?

    A Novel Unsupervised MRI Synthetic CT Image Generation Framework with Registration Network

2023-12-15 03:57:14 Liwei Deng, Henan Sun, Jing Wang, Sijuan Huang and Xin Yang
    Computers, Materials & Continua, 2023, Issue 11

Liwei Deng, Henan Sun, Jing Wang, Sijuan Huang and Xin Yang★

1Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080, China

    2Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, 510631, China

    3Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, 510060, China

ABSTRACT In recent years, radiotherapy based only on Magnetic Resonance (MR) images has become a hot spot for radiotherapy planning research in the medical field. However, computed tomography (CT) is still needed for dose calculation in the clinic. Recent deep-learning approaches to synthesizing CT images from MR images have raised much research interest, making radiotherapy based only on MR images possible. In this paper, we propose a novel unsupervised image synthesis framework with registration networks. It enforces the constraints between the reconstructed image and the input image by registering the reconstructed image with the input image, and likewise registers the cycle-consistent image with the input image. Furthermore, we add ConvNeXt blocks to the network and use large kernel convolutional layers to improve the network's ability to extract features. We experiment on collected head and neck data of 180 patients with nasopharyngeal carcinoma and evaluate the trained model with four metrics, while quantitatively comparing several commonly used model frameworks. The model achieves a Mean Absolute Error (MAE) of 18.55±1.44, a Root Mean Square Error (RMSE) of 86.91±4.31, a Peak Signal-to-Noise Ratio (PSNR) of 33.45±0.74, and a Structural Similarity (SSIM) of 0.960±0.005. Compared with other methods, MAE decreased by 2.17, RMSE decreased by 7.82, PSNR increased by 0.76, and SSIM increased by 0.011. The results show that the proposed model outperforms other methods in the quality of image synthesis. The work in this paper is of guiding significance to the study of MR-only radiotherapy planning.

KEYWORDS MRI-CT image synthesis; variational auto-encoder; medical image translation; MRI-only based radiotherapy

    1 Introduction

Cancer has become a serious threat to public health in recent years, and its incidence rate is increasing yearly [1,2]. Among mainstream cancer treatments, radiation therapy [3] is the earliest and still the most widely used. In modern clinical treatment, the use of Magnetic Resonance (MR) and Computed Tomography (CT) images during radiation therapy is unavoidable. MR images provide high-quality soft-tissue contrast, which is very important for determining the location and size of tumors; MR imaging also has the advantages of being free of ionizing radiation and supporting multi-sequence imaging. CT images, on the other hand, provide the electron density information required for dose calculation during radiotherapy, which cannot be obtained from MR images. However, CT acquisition exposes the patient to radiation, with negative implications for the patient's health. As a result, both CT and MR images are acquired in current practice. Furthermore, the MR images must be registered with the CT images for treatment, and this registration can introduce errors [4].

Given the above problems, some researchers have begun to study methods for generating CT images from MR images alone [5,6], although achieving radiotherapy with MR alone remains challenging. The methods used to synthesize CT (sCT) from MRI can be broadly classified into three classes [7,8]. The first is voxel-based [9], which requires accurate segmentation of MRI tissues but takes a long time to complete. The second is atlas-based [10]: MR and CT images are registered to obtain a deformation field, which is then used to map an atlas CT onto the patient's MR to obtain the sCT. These methods rely on high-precision registration, and the registration accuracy directly affects the synthesized sCT. The third is learning-based [11]: given two data distributions, a nonlinear mapping between them is found from existing image data, and this mapping is used to synthesize the sCT. Among the many different methods, deep learning-based techniques [12,13] have demonstrated their ability to produce high-quality sCT images. Deep-learning methods for synthesizing sCT can be further divided into supervised and unsupervised. Supervised methods require datasets that are strictly aligned and paired; researchers have attempted MR-to-CT synthesis on paired data using conditional Generative Adversarial Networks [14,15]. During data preprocessing, the registration accuracy often significantly impacts the image quality generated by the network, so the paired MR and CT images must be strictly registered; moreover, strictly aligned data are challenging to obtain in practice, which increases the difficulty of such studies. To reduce the difficulty of data acquisition, unsupervised methods perform MR-to-CT synthesis from unpaired data. CycleGAN [16], a typical unsupervised learning network, is widely used in image synthesis; for example, Wolterink et al. [17] used CycleGAN for brain MR-to-CT synthesis. CycleGAN uses a bidirectional network structure to generate images in both directions and adds a cycle-consistency loss to constrain the structural consistency within the same modality. However, the training of CycleGAN is extremely unstable, which can easily cause mode collapse, and the network is often difficult to converge. Xiang et al. [18] added a structural dissimilarity loss to strengthen the constraints between images by capturing anatomical structures, improving the quality of the synthesized CT. Yang et al. [19] introduced modality-neighborhood descriptors to constrain the structural consistency of input and synthesized images.

This research proposes a novel unsupervised image synthesis framework with registration networks for synthesizing CT images from MR images. Like other researchers, we adopt a bidirectional structure similar to CycleGAN. The primary contributions of this work are as follows:

• To complete the task of MRI-CT conversion, we propose an image generation network based on the combination of a variational auto-encoder and a generative adversarial network. We add a registration network in each of the two directions to strengthen the structural consistency between the input image and the reconstructed image, as well as between the input image and the cycle-consistent image.

• This paper introduces a new correction loss function to strengthen the constraints between images, resulting in higher-quality synthetic images. The loss correction is performed jointly with the registration network. Furthermore, we add ConvNeXt blocks to the network; this convolution block has been proven effective, with performance exceeding some Transformer blocks.

• Extensive experiments demonstrate the effectiveness of our method. We conduct extensive experiments against several popular frameworks, and the proposed method outperforms them in modality conversion from MR to CT images. We also conduct ablation experiments to confirm the effectiveness of each component.

    2 Methods and Materials

    2.1 Model Architecture

The framework proposed in this paper is based on Variational Auto-Encoders (VAEs) [20-22] and Generative Adversarial Networks (GANs) [23]. The network framework is shown in Fig. 1. The network consists of eight sub-networks: two image encoders E_MR and E_CT, two image generators G_MR and G_CT, two discriminators D_MR and D_CT, and two registration networks R_MR and R_CT for enhancing cycle-constraints. Since unpaired MR images are synthesized into sCT images in this task, the generated sCT images lack genuine labels to constrain the pseudo-CT, so this paper adopts the same bidirectional structure as CycleGAN [16]; that is, both the synthesis direction from MR to CT and the synthesis direction from CT to MR are included. Taking MR-to-pseudo-CT synthesis as an example, an X_MR domain image is used as the input to the model, the image is encoded by the X_MR domain image encoder, and the resulting image code is fed into the X_CT domain image generator to synthesize the target-domain pseudo-CT. Similarly, the pseudo-CT is fed into the X_CT image encoder as the input of the CT-to-MR direction to obtain an image code, and that code is fed into the X_MR domain image generator to be converted back into the original MR image. Two discriminators evaluate the authenticity of images from the two image domains and compete with the generators for adversarial training. Finally, the registration network registers the original MR with the reconstructed MR image, and also registers the original MR with the cycle-consistent MR image; the reconstructed MR image must be consistent with the original MR image, and the cycle-consistent image is no exception. In this way, the network learns a nonlinear mapping between unpaired image data. The network is trained through the above process, and the transformation for each image domain includes an image encoder, an image generator, a discriminator, and a registration network.
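As an illustration, here is a minimal PyTorch sketch of one training direction (MR → sCT → cycle MR) under this scheme. The sub-networks are passed in as callables, `warp` applies a deformation field (see the sketch in Section 2.2), and the least-squares adversarial term is one common choice rather than the authors' confirmed objective:

```python
import torch

def mr_to_ct_step(x_mr, E_MR, E_CT, G_MR, G_CT, D_CT, R_MR, warp):
    """One hypothetical forward pass of the MR -> CT direction."""
    c_mr  = E_MR(x_mr)           # encode the MR input
    sct   = G_CT(c_mr)           # pseudo-CT in the X_CT domain
    x_rec = G_MR(c_mr)           # same-modality reconstruction of the input
    x_cyc = G_MR(E_CT(sct))      # cycle back to the MR domain
    # Register the reconstructed / cycle-consistent images to the original MR,
    # then warp them with the predicted deformation fields T1, T2.
    t1, t2 = R_MR(x_rec, x_mr), R_MR(x_cyc, x_mr)
    corr = (warp(x_rec, t1) - x_mr).abs().mean() + (warp(x_cyc, t2) - x_mr).abs().mean()
    # Multi-scale LSGAN-style generator term (one matrix per discriminator scale).
    adv = torch.stack([(o - 1).pow(2).mean() for o in D_CT(sct)]).sum()
    return sct, corr, adv
```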

    2.2 Generators and Discriminator

Among the models proposed in this paper, both the encoder for encoding images and the generator for synthesizing images adopt the ConvNeXt [24] module as the main structure. The ConvNeXt module draws on the successful design experience of the Vision Transformer (ViT) [25,26] and of convolutional neural networks: it is a pure convolutional network whose performance surpasses advanced Transformer-based models. ConvNeXt starts from the standard ResNet-50 [27] and modernizes it to bring the design closer to ViT. In the module, depthwise separable convolutions with a kernel size of seven are used to enlarge the receptive field of the model and extract deeper information from the images; using depthwise separable convolutions effectively mitigates the computational expense of large convolution kernels.
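A minimal 2D ConvNeXt-style block in PyTorch might look as follows. This is a sketch following the public ConvNeXt design [24]; the expansion factor and the normalization stand-in are illustrative, not the paper's exact configuration:

```python
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """Depthwise 7x7 conv -> norm -> 1x1 expand -> GELU -> 1x1 project, residual."""
    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)  # large-kernel depthwise conv
        self.norm = nn.GroupNorm(1, dim)  # channel-first stand-in for ConvNeXt's LayerNorm
        self.pwconv1 = nn.Conv2d(dim, expansion * dim, kernel_size=1)  # inverted bottleneck: expand
        self.act = nn.GELU()
        self.pwconv2 = nn.Conv2d(expansion * dim, dim, kernel_size=1)  # project back down

    def forward(self, x):
        return x + self.pwconv2(self.act(self.pwconv1(self.norm(self.dwconv(x)))))
```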

Figure 1: Flowchart of the network framework for synthesizing sCT based on VAE and CycleGAN. The black lines represent the cycle in which the CT image domain participates, and the blue lines represent the cycle in which the MR image domain participates

In this paper, the two image encoders E_MR and E_CT each include three downsampling convolutional layers and an inverted bottleneck composed of six ConvNeXt modules. Each downsampling layer contains a convolution, instance normalization (IN), a leaky rectified linear unit (LReLU), and SAME padding. The first convolutional layer has a kernel size of 7 × 7, and the next two have a kernel size of 4 × 4. Both image generators G_MR and G_CT contain an inverted bottleneck consisting of six ConvNeXt blocks followed by three upsampling convolutional layers. The scale factor of the first two upsampling layers is set to 2, each with IN, an LReLU operation, and SAME padding; the activation function of the last layer is Tanh. The specific network structures of the encoder, generator, and discriminator are shown in Fig. 2.
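Under the layer description above, an encoder sketch could read as follows, reusing the ConvNeXtBlock sketched earlier. Kernel sizes follow the text; strides, padding arithmetic, and channel widths are assumptions, since the paper does not state them explicitly:

```python
import torch.nn as nn

def conv_in_lrelu(in_ch, out_ch, k, s, p):
    # convolution + instance normalization + LeakyReLU with SAME-style padding
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=k, stride=s, padding=p),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

def build_encoder(in_ch=1, base=64, n_blocks=6):
    return nn.Sequential(
        conv_in_lrelu(in_ch, base, k=7, s=2, p=3),         # 7 x 7 first downsampling layer
        conv_in_lrelu(base, base * 2, k=4, s=2, p=1),      # 4 x 4 downsampling
        conv_in_lrelu(base * 2, base * 4, k=4, s=2, p=1),  # 4 x 4 downsampling
        *[ConvNeXtBlock(base * 4) for _ in range(n_blocks)],  # inverted bottleneck of six blocks
    )
```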

Figure 2: The concrete flow chart of the encoder, generator, and discriminator architectures. The encoder and generator are symmetrical structures. Multi-scale discriminators and generators are used for adversarial training

Most discriminators in Generative Adversarial Networks use PatchGAN [28]: features are extracted from the image through a convolutional network, and the final output matrix is used to evaluate the authenticity of image patches. The head of the image often contains complex texture information, while the shoulder contains relatively little. However, the N × N patch output of PatchGAN is fixed: dividing the image into large patches loses detailed information, while small patches lead to high computational cost. The discriminator used in this paper is therefore a multi-scale discriminator, which learns information from different scales simultaneously.

The discriminator consists of three convolution blocks, each comprising five convolutional layers and an average pooling operation. The first four convolutional layers each comprise a convolution with a kernel size of 4 and a stride of 2 followed by LReLU; finally, a convolutional layer with a kernel size of 1 outputs an N × N matrix, and the final evaluation result is obtained through average pooling. After the three convolution blocks, the multi-scale discriminator outputs evaluation matrices at different scales for loss calculation, ensuring that the discriminator learns image features from different scales. Two such multi-scale discriminators, D_CT and D_MR, are used in the network.
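One plausible arrangement of such a multi-scale discriminator is sketched below; the exact placement of the average-pooling step and the channel widths are assumptions based on the layer counts in the text:

```python
import torch.nn as nn

class DiscBlock(nn.Module):
    """One scale: four 4x4/stride-2 convs with LReLU, then a 1x1 conv that
    produces the N x N evaluation matrix."""
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        convs, ch = [], in_ch
        for i in range(4):
            out = base * 2 ** i
            convs += [nn.Conv2d(ch, out, 4, stride=2, padding=1),
                      nn.LeakyReLU(0.2, inplace=True)]
            ch = out
        self.features = nn.Sequential(*convs)
        self.score = nn.Conv2d(ch, 1, kernel_size=1)  # N x N evaluation matrix

class MultiScaleDiscriminator(nn.Module):
    def __init__(self, in_ch=1, n_scales=3):
        super().__init__()
        self.blocks = nn.ModuleList(DiscBlock(in_ch) for _ in range(n_scales))
        self.pool = nn.AvgPool2d(3, stride=2, padding=1)  # downsample input between scales

    def forward(self, x):
        outs = []
        for block in self.blocks:
            outs.append(block.score(block.features(x)))  # matrix at this scale
            x = self.pool(x)
        return outs  # one evaluation matrix per scale, all used in the loss
```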

The registration network used in this research is consistent with RegGAN [29]. It contains seven downsampling layers composed of residual blocks, where each residual block uses a kernel size of 3 and a stride of 1. The bottleneck uses three residual blocks, and the upsampling path likewise consists of seven residual modules. Finally, a convolutional layer outputs the registration result, i.e., the deformation field. The specific structure of the registration network is shown in Fig. 3.
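The registration network predicts a dense deformation field T that is then applied to the floating image. A hedged sketch of such a warp with PyTorch's grid_sample follows; the field convention (per-pixel displacements in pixel units) is an assumption:

```python
import torch
import torch.nn.functional as F

def warp(img: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp img (N,C,H,W) by a dense displacement field flow (N,2,H,W) in pixels."""
    n, _, h, w = img.shape
    # Identity sampling grid in pixel coordinates.
    ys = torch.arange(h, device=img.device).view(1, h, 1).expand(n, h, w).float()
    xs = torch.arange(w, device=img.device).view(1, 1, w).expand(n, h, w).float()
    x_new = xs + flow[:, 0]  # displaced x coordinates
    y_new = ys + flow[:, 1]  # displaced y coordinates
    # Normalize to grid_sample's [-1, 1] coordinate system.
    x_new = 2.0 * x_new / (w - 1) - 1.0
    y_new = 2.0 * y_new / (h - 1) - 1.0
    grid = torch.stack((x_new, y_new), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)
```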

Figure 3: The structure of the registration network, which uses a ResUnet architecture

    2.3 Loss Functions

This paper designs a composite loss, which includes the encoder loss, generator loss, discriminator loss, and the smoothing and correction losses of the registration network. The generative model has a symmetrical architecture, and the model structure is the same in the two synthesis directions. For convenience of expression, this paper uses X_CT and X_MR to represent images from the CT domain and the MR domain, X_rec and X_cyc to represent the reconstructed and cycle-consistent images, and c to represent the image code output by the encoder.

    2.3.1 Encoder Loss

For the encoder loss, similar to Liu et al. [22], this paper penalizes the deviation of the latent coding distribution from the prior distribution. The concrete implementation is as follows:
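The equation itself did not survive extraction. A plausible reconstruction, assuming the UNIT-style formulation [22] in which the KL penalty against a zero-mean unit-variance Gaussian prior reduces to an L2 penalty on the codes c^MR = E_MR(X_MR) and c^CT = E_CT(X_CT), is:

$$\mathcal{L}_{enc} = \lambda_1 \left( \sum_{i=1}^{N} \left( c_i^{MR} \right)^2 + \sum_{i=1}^{N} \left( c_i^{CT} \right)^2 \right)$$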

where the value of λ1 is 0.01 and N is the dimension of the image code.

    2.3.2 Adversarial Loss

The generator synthesizes the corresponding image from the input image code, matching the original image as closely as possible; at the same time, the synthesized images should fool the discriminator as much as possible. The generator's total loss is as follows:
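The equation is missing from the extracted text. One standard form, assuming a least-squares adversarial objective over both synthesis directions, would be:

$$\mathcal{L}_{G} = \mathbb{E}_{X_{MR}}\!\left[ \left( D_{CT}\!\left( G_{CT}(E_{MR}(X_{MR})) \right) - 1 \right)^2 \right] + \mathbb{E}_{X_{CT}}\!\left[ \left( D_{MR}\!\left( G_{MR}(E_{CT}(X_{CT})) \right) - 1 \right)^2 \right]$$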

In addition, the discriminator judges the authenticity of the input image, minimizing the loss on real images and maximizing the loss on images synthesized by the generator. There is a corresponding discriminator in each synthesis direction. The total discriminator loss is as follows:
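Again the equation is missing; under the same least-squares assumption, the discriminator objective would read:

$$\mathcal{L}_{D} = \mathbb{E}\!\left[ \left( D_{CT}(X_{CT}) - 1 \right)^2 + D_{CT}\!\left( G_{CT}(E_{MR}(X_{MR})) \right)^2 \right] + \mathbb{E}\!\left[ \left( D_{MR}(X_{MR}) - 1 \right)^2 + D_{MR}\!\left( G_{MR}(E_{CT}(X_{CT})) \right)^2 \right]$$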

    2.3.3 Reconstruction Loss

The reconstruction loss primarily includes the cycle-consistency loss of the model and the reconstruction loss of same-modality images. The cycle-consistency loss function is as follows:
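The equation did not survive extraction; assuming the standard CycleGAN-style L1 formulation over both directions, it would be:

$$\mathcal{L}_{cyc} = \lambda_2 \left( \mathbb{E}\!\left[ \left\| X_{cyc}^{MR} - X_{MR} \right\|_1 \right] + \mathbb{E}\!\left[ \left\| X_{cyc}^{CT} - X_{CT} \right\|_1 \right] \right)$$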

where λ2 is the loss weight, and its value is 10.

The image reconstruction loss means that the input image is encoded by the encoder, the code is fed to the generator of the same modality, and the generator reconstructs an image in the same modality as the original input. This loss is comparable to the identity loss in CycleGAN. The loss function is calculated as follows:
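The equation is missing; a plausible form, assuming an L1 reconstruction penalty in each modality, is:

$$\mathcal{L}_{rec} = \mathbb{E}\!\left[ \left\| G_{MR}(E_{MR}(X_{MR})) - X_{MR} \right\|_1 \right] + \mathbb{E}\!\left[ \left\| G_{CT}(E_{CT}(X_{CT})) - X_{CT} \right\|_1 \right]$$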

    2.3.4 Registration Loss

The original image is taken as the fixed image, and the reconstructed or cycle-consistent image is taken as the floating image. The reconstructed or cycle-consistent image is registered to the original image by the registration network R to obtain the registration field T; the floating image is then deformed by T, and the correction loss between the warped image and the original is calculated. The loss function is:
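The equation did not survive extraction; from the definitions in the following sentence, a plausible reconstruction (assuming an L1 correction penalty, as in RegGAN [29]) is:

$$\mathcal{L}_{corr} = \lambda_3 \left( \mathbb{E}\!\left[ \left\| T_1(X_{rec}) - X_{real\_1} \right\|_1 \right] + \mathbb{E}\!\left[ \left\| T_2(X_{cyc}) - X_{real\_2} \right\|_1 \right] \right)$$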

where X_real_1 and X_real_2 represent real images of the same modality as X_rec and X_cyc, respectively; T_1 and T_2 represent the corresponding deformation fields; and λ3 is the loss weight, with a value of 20.

At the same time, this work smooths the deformation field and designs a loss function that minimizes the gradient of the deformation field in order to enforce its smoothness. The smoothing loss of the field is consistent with RegGAN [29], so the loss function can be expressed through the Jacobian determinant as below:
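The equation is missing; a plausible reading, assuming a RegGAN-style gradient penalty on the field together with the Jacobian determinant named in the following sentence, is:

$$\mathcal{L}_{smooth} = \lambda_4 \sum_{m,n} \left\| \nabla T(m,n) \right\|^2, \qquad J(m,n) = \begin{vmatrix} \partial T_x / \partial x & \partial T_x / \partial y \\ \partial T_y / \partial x & \partial T_y / \partial y \end{vmatrix}$$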

where each term is the partial derivative of the field at point (m, n) with respect to the image directions (x, y), J(m, n) is the value of the Jacobian determinant at point (m, n), and λ4 is the loss weight, with a value of 10.

In summary, the overall optimization objective of this paper is as follows:
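The combined objective is missing from the extraction; given the losses above, it plausibly takes the form

$$\mathcal{L}_{total} = \mathcal{L}_{enc} + \mathcal{L}_{G} + \mathcal{L}_{D} + \mathcal{L}_{cyc} + \mathcal{L}_{rec} + \mathcal{L}_{corr} + \mathcal{L}_{smooth}$$

with the weights λ1-λ4 folded into the individual terms, and the generator and discriminator terms optimized adversarially.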

    2.4 Evaluation Criterion

In this research, four widely used evaluation metrics are used to quantitatively test the quality of the sCT generated by the proposed model: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM).

The MAE metric reflects the actual voxel-wise error between the real CT and the sCT. It avoids the cancellation of positive and negative errors and thus accurately reflects the model's prediction error; minimizing MAE makes the model stronger. The formula of MAE is as follows:
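The formula did not survive extraction; the standard definition over K test pairs is:

$$MAE = \frac{1}{K} \sum_{k=1}^{K} \left| X_{CT}(k) - X_{sCT}(k) \right|$$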

where X_CT(k) and X_sCT(k) represent the kth pair of test data.

The RMSE measures the standard deviation of the error between images; consistent with MAE, minimizing RMSE makes the model perform better. Its calculation formula is as follows:
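Likewise, the missing formula has the standard form:

$$RMSE = \sqrt{ \frac{1}{K} \sum_{k=1}^{K} \left( X_{CT}(k) - X_{sCT}(k) \right)^2 }$$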

The PSNR is an objective standard for evaluating images; maximizing PSNR indicates that the image synthesized by the model is less distorted. Its calculation formula is as follows:
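The standard definition, consistent with the HU_MAX term defined below, is:

$$PSNR = 10 \cdot \log_{10} \left( \frac{HU\_MAX^2}{\frac{1}{K} \sum_{k=1}^{K} \left( X_{CT}(k) - X_{sCT}(k) \right)^2} \right)$$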

where HU_MAX represents the maximum intensity of the CT and pseudo-CT images.

The SSIM metric reflects the similarity between two images, mainly measuring the correlation between adjacent HU values; maximizing SSIM indicates that the images synthesized by the model are more similar to the real ones. The calculation formula is as follows:
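The standard SSIM definition, where μ and σ denote local means, variances, and covariance, and c1, c2 are small stabilizing constants, is:

$$SSIM(x, y) = \frac{ (2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2) }{ (\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2) }$$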

    3 Data Acquisition and Processing

This paper obtained CT and MR image data from 180 patients with nasopharyngeal carcinoma; the MR and CT images were acquired during regular clinical treatment, and these 180 patients served as the model's training and testing data. The CT images, with an image size of 512 × 512, were obtained with a Siemens scanner. T1-weighted MR images were obtained with a Philips Medical Systems MR simulator at a magnetic field strength of 3.0 T, with a size of 720 × 720. The project was approved by the Ethics Committee of Sun Yat-sen University Cancer Center, which waived informed consent. This research uses the volume surface contour data in the radiotherapy (RT) structure to construct an image mask, retaining the image content inside the mask and deleting invalid information outside it; the specific processing pipeline is shown in Fig. 4. The corresponding CT and MR images of each patient were aligned using affine and deformable registration in the open-access medical image registration library ANTs. For better network training, the original images were cropped to 256 × 384; since the trainable information in head and neck data occupies a small proportion of the image, the image size was finally cropped to 256 × 256 to further accelerate training. For shoulder images, the overlapping parts of the two shoulder images are spliced by averaging during testing. Based on the dataset information, the Hounsfield Unit (HU) range of CT is [-1024, 3072], which is normalized to [-1, 1] during training to speed up model training, as sketched below. Of the 180 cases, 110 are randomly selected as the training set, and 35 cases each are randomly selected as the evaluation set and the test set.
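A short sketch of the HU normalization described above; the function names are illustrative:

```python
import numpy as np

HU_MIN, HU_MAX = -1024.0, 3072.0  # dataset HU range stated in the text

def normalize_ct(ct_hu: np.ndarray) -> np.ndarray:
    """Clip CT intensities to the dataset HU range and rescale to [-1, 1]."""
    ct = np.clip(ct_hu, HU_MIN, HU_MAX)
    return 2.0 * (ct - HU_MIN) / (HU_MAX - HU_MIN) - 1.0

def denormalize_ct(x: np.ndarray) -> np.ndarray:
    """Map network outputs in [-1, 1] back to HU for evaluation."""
    return (x + 1.0) / 2.0 * (HU_MAX - HU_MIN) + HU_MIN
```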

Figure 4: Implementation of the specific operations for image preprocessing

    4 Experiment and Result

    4.1 Training Details

All models in this study are built with the PyTorch framework (PyTorch 1.8.1, Python 3.8). All experiments in this paper are trained on an RTX 2080 Ti GPU with 11 GB of memory. The models are trained with the Adam optimizer, with a learning rate of 1e-4 and (β1, β2) = (0.5, 0.999); training runs for 80 epochs with a batch size of 1.
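In code, the stated configuration corresponds to something like the following; the placeholder sub-networks stand in for the modules of Section 2.1 and are not the authors' released implementation:

```python
import itertools
import torch.nn as nn
from torch.optim import Adam

# Placeholder sub-networks standing in for the encoders/generators/registration
# nets (generator side) and the multi-scale discriminators (discriminator side).
generator_modules = [nn.Conv2d(1, 1, 3, padding=1)]
discriminator_modules = [nn.Conv2d(1, 1, 3, padding=1)]

opt_g = Adam(itertools.chain(*(m.parameters() for m in generator_modules)),
             lr=1e-4, betas=(0.5, 0.999))
opt_d = Adam(itertools.chain(*(m.parameters() for m in discriminator_modules)),
             lr=1e-4, betas=(0.5, 0.999))
# Training then loops over 80 epochs with a batch size of 1, alternating a
# discriminator step and a generator step per iteration.
```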

    4.2 Compare the Quality of Synthesized sCT by Different Methods

Table 1 compares the method presented in this study with three commonly used frameworks, CycleGAN [16], UNIT [22], and MUNIT [30], as well as the more recent RegGAN [29]. The experimental findings in Table 1 show that the proposed method performs best on all four evaluation metrics and is superior to the other four frameworks. The MAE score is 18.55±1.44, a decrease of 2.17; the RMSE score is 86.91±4.31, a decrease of 7.82; the PSNR score is 33.45±0.74, an increase of 0.76; and the SSIM score is 0.960±0.005, an increase of 0.011. It can be concluded from these metrics that the quality of sCT synthesized by the proposed method is superior to that of the other methods. In addition, the p-values between the different methods are calculated with a paired Student's t-test, indicating a significant improvement (p < 0.05).

Table 1: Comparison of the sCT generated by different methods across four evaluation metrics

Fig. 5 shows a comparison between the above four frameworks and the proposed method in synthesizing the anatomical structure of head slices; the HU error between genuine CT and sCT is clipped to [-400, 400]. The results show that the proposed method yields the smallest error between the synthetic head sCT slices and the original CT, and the highest similarity to the original CT in anatomical structure. The sCT synthesized in this paper is more similar to the genuine CT in areas of the head with complex texture. In Fig. 6, the performance of the five models on the test set is shown with violin and box plots; the violin plots show that the evaluation metrics of the sCT synthesized by our model are concentrated on the better side for each patient. Fig. 6 is drawn using the Hiplot [31] platform.

Figure 5: HU differences between the sCT predicted by five different methods and the genuine CT, clipped to the range [-400, 400]

Figure 6: The box plots give the median and quartile ranges of the four evaluation metrics of the five models on the test set. The violin plots show the distribution and density of the predictions of the five models on the test set

A qualitative comparison further illustrates that the anatomical structure of the sCT synthesized by this method is more similar to the genuine CT. Fig. 7 shows randomly selected real CT images and the corresponding sCT images. The areas marked by the blue and red boxes are enlarged in the upper-right and lower-right corners of each image, respectively, allowing a visual comparison of the synthesis quality of bone in the sCT images. Across the three groups of images, the proposed method outperforms the other four methods in the quality of synthesized bone tissue, and it also has advantages in synthesizing texture details, such as the red-marked area of the first group of images. This shows that the proposed method can convert the MR modality into its sCT counterpart more effectively.

In addition, as shown in Fig. 8, sagittal images of three patients were randomly selected. Comparing the patients' sagittal images makes it evident that the proposed method outperforms the other four methods in synthesis quality: the head and neck bones are more similar to those in the genuine CT images, the synthesized texture is clearer and more delicate, and the similarity to the actual CT is higher in the complex texture area of the head cavity.

Figure 7: From left to right: genuine CT, and sCT synthesized by CycleGAN, UNIT, MUNIT, RegGAN, and the proposed method. The upper-right corner of each image is a local enlargement of the bones or tissues in the blue frame, and the lower-right corner is a local enlargement of the bones or tissues in the red frame

Figure 8: Sagittal view of the images. From left to right: real CT, and sCT synthesized by CycleGAN, UNIT, MUNIT, RegGAN, and the proposed method

    4.3 Ablation Study

The dataset used in the ablation experiments is the same as in the experiments above. This research performs ablation experiments on the essential parts of the proposed method to demonstrate the effectiveness of each critical component: adding ConvNeXt blocks, adding the additional registration networks, and calculating the correction loss between the registered images and the ground truth to constrain the structural similarity between genuine and reconstructed images as well as between genuine and cycle-consistent images. The experimental findings after ablating each part are shown in Table 2. Based on UNIT [22], this study adds the different components to UNIT and carries out four groups of experiments.

Table 2: Ablation study: each component improves the model

The experimental findings in Table 2 show that the components of the proposed method are effective for synthesizing sCT from MR images. The ConvNeXt block with its large-kernel convolutions enlarges the receptive field, extracts more detailed image features, and enhances the network's processing of image details and textures. The proposed registration networks combined with the correction loss significantly improve the synthesis of sCT images from MR images on all four evaluation metrics. Finally, the combination of all components achieves the best evaluation scores.


    5 Discussion

This research proposes a new unsupervised image synthesis framework with registration networks to solve the task of synthesizing CT images from MR images. It is trained on unpaired head and neck data to avoid the effects of a severe shortage of paired data. The experimental results in Table 1 show that the proposed method has obvious performance advantages: it significantly outperforms the current mainstream frameworks, surpassing the benchmark network UNIT selected in this paper, with MAE reduced from 20.72 to 18.55 and RMSE from 94.73 to 86.91, while PSNR increases from 32.69 to 33.45 and SSIM from 0.949 to 0.960. The proposed method adds the simple and effective ConvNeXt block to expand the receptive field of the model and obtain deeper image features. In addition, this study introduces a registration network and an image correction loss to strengthen the constraints between the reconstructed image and the input image, as well as between the cycle-consistent image and the input image, and to enhance the model's control over the generation domain.

To intuitively show the advantages of the proposed method for sCT synthesis, this research shows error maps between the sCT from the different methods and the genuine CT. The error maps in Fig. 5 show that the sCT synthesized by the proposed method is more similar to the original CT in texture details. The partial enlargements in Fig. 7 show that the method is superior to the others in synthesizing bone and some texture details. In addition, the sagittal views in Fig. 8 show that the CT synthesized by this method performs better in the sagittal plane than the other four methods: the bone and texture regions are more continuous, indicating that the model preserves information related to adjacent slices when synthesizing CT. Compared with other networks, the proposed method adds ConvNeXt blocks, effectively enlarging the model's receptive field and helping the network capture long-range relationships. Moreover, the added registration networks and image correction loss strengthen the constraints between the reconstructed and genuine images and between the cycle-consistent and genuine images, enhancing the model's ability to control its own domain patterns.

Table 2 shows the results of the ablation experiments on the components of the proposed method. The findings demonstrate that each component improves the performance of the network. In particular, the correction loss proposed in this study significantly improves performance, while adding ConvNeXt blocks enlarges the receptive field and further optimizes the model. The results show that the proposed method significantly strengthens the constraints between images: the registration network registers both the reconstructed and cycle-consistent images with the original images, and the correction loss is computed between the genuine and registered images, thereby reducing the uncertainty of the generator.

In this paper, we proposed a 2D model framework for synthesizing CT images from MR images. However, some areas still need improvement. Although the proposed method can synthesize images from unpaired data, 2D slice data lose contextual information, resulting in a lack of correlation between adjacent slices. We will build a 3D model based on the proposed method to solve this problem, improve the accuracy of the synthesis, and apply it to radiotherapy planning.

    6 Conclusion

This paper proposes a novel method for synthesizing CT images from MR images based primarily on Variational Auto-Encoders and Generative Adversarial Networks. We conduct experiments using head and neck data from patients with nasopharyngeal carcinoma and evaluate them using four metrics. The experimental results in Table 1 and the error maps of sCT vs. genuine CT in Fig. 5 demonstrate that the proposed method outperforms the four current popular generation methods in both visual effect and objective metrics, with minimal error relative to genuine CT. In Fig. 7, the CT synthesized by the proposed method is superior to the other methods in the details of the bone regions, and Fig. 8 shows that it achieves better coherence in the sagittal plane. The ablation study proves the effectiveness of the individual components and demonstrates the advantages of this method in unsupervised medical image synthesis. The proposed architecture adds registration networks in two directions to strengthen the structural consistency between the input image and the reconstructed image, as well as between the input image and the cycle-consistent image, and to stabilize network training. The ConvNeXt module enhances the network's feature-processing ability, yielding clearer synthesis of bone and soft-tissue regions with less error relative to real CT. At the same time, this paper introduces a new correction loss function combined with the registration networks to strengthen the constraints between images, avoid offset artifacts in the synthesized images, and obtain higher-quality results. In summary, the method proposed in this paper shows the best performance in the task of MR-to-CT synthesis, and the quantitative and qualitative evaluations of the synthesized images demonstrate its advantages in many respects. Although adding ConvNeXt blocks expands the receptive field and improves performance, it slows down training because ConvNeXt blocks use large kernel convolutions; we will address this in the future. In addition, the 2D framework has inherent limitations and easily loses contextual information. We plan to extend the framework to 3D to solve the discontinuity of the 2D model along the Z axis, using a 3D network to generate more accurate sCT, which could also support more accurate delineation of lesion sites for image segmentation and thus more accurate radiotherapy. At the same time, the ConvNeXt block will be extended to 3D, and the large convolution kernel will be abandoned to improve training speed. The results of this study are of guiding significance for research on MR-only radiotherapy planning.

Acknowledgement: We thank Shanghai Tengyun Biotechnology Co., Ltd. for developing the Hiplot Pro platform (https://hiplot.com.cn/) and providing technical assistance and valuable tools for data analysis and visualization.

Funding Statement: This research was supported by the National Science Foundation for Young Scientists of China (Grant No. 61806060), 2019-2021, the Basic and Applied Basic Research Foundation of Guangdong Province (2021A1515220140), and the Youth Innovation Project of Sun Yat-sen University Cancer Center (QNYCPY32).

Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: Xin Yang, Liwei Deng; data collection: Xin Yang; analysis and interpretation of results: Liwei Deng, Henan Sun, Sijuan Huang, Jing Wang; draft manuscript preparation: Henan Sun, Sijuan Huang, Jing Wang. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The datasets used and analyzed during the current study are available from the corresponding author on reasonable request. The data are not publicly available due to ethical restrictions.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
