
MAAUNet: Exploration of a U-shaped encoding and decoding structure for semantic segmentation of medical images


    SHAO Shuo, GE Hongwei

    (1. Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi 214122, China; 2. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China)

    Abstract: To address the problems faced by medical image semantic segmentation, namely multi-scale variation of segmentation targets, noise interference, rough segmentation results and a slow training process, a multi-scale residual aggregation U-shaped attention network, MAAUNet (MultiRes aggregation attention UNet), is proposed based on MultiResUNet. Firstly, aggregate connections are introduced in place of the original same-level feature aggregation. The skip connections are redesigned to aggregate features of different semantic scales in the decoder subnet, further narrowing the semantic gap that may exist across skip connections. Secondly, a convolutional block attention module is added after the multi-scale convolution module to focus and integrate features along the two attention directions of channel and space, adaptively refining the intermediate feature maps. Finally, the original convolution block is improved: the convolution channels are expanded with a series convolution structure so that they complement each other and extract richer spatial features, the residual connection is retained, and the convolution block becomes a multi-channel convolution block, enabling the model to extract multi-scale spatial features. Experimental results show that MAAUNet is strongly competitive on challenging datasets, and exhibits good segmentation performance and stability when dealing with multi-scale input and noise interference.

    Key words: MAAUNet U-shaped attention network structure; convolutional neural network; encoding-decoding structure; attention mechanism; medical image; semantic segmentation

    0 Introduction

    With the development of computer vision, image segmentation has achieved superior performance in both natural and biomedical imaging. In medical images, the parts that need to be segmented are often only specific regions, such as tumor regions, organ tissues and diseased regions. Unlike natural images, medical images are inconsistent in scale, their datasets are difficult to collect, and they contain more noise interference. Manual inspection requires considerable expertise and depends on subjective judgment. Therefore, semantic segmentation technology for medical images is an important research topic.

    Early image segmentation methods commonly include region segmentation, boundary segmentation, thresholding and feature-based clustering. Although these traditional methods achieve certain improvements in segmentation accuracy, they require prior knowledge, are not applicable to challenging tasks, and cannot maintain robustness. For example, the ISIC-2018[1] dataset contains skin lesion images of different scales. Fig.1 demonstrates that the scale, shape and color of skin lesions can vary greatly in dermoscopy images. Images with complex shapes or unclear boundaries are segmented unsatisfactorily by traditional methods.

    Fig.1 Variation of scale in medical images

    Relying on the popularity of deep convolutional neural networks (CNN[2]) in computer vision, CNNs were quickly adopted for medical image segmentation tasks[3]. Networks such as fully convolutional networks (FCN[4]), SegNet[5], U-Net[6], V-Net[7], ResNet[8], DDANet[9], PSPNet[10], DenseNet[11], MultiResUNet[12], U-Net++[13], DC-UNet[14] and DoubleUNet[15] are used for image and voxel segmentation across various medical imaging modalities. These methods have achieved good performance on many complex datasets, proving the effectiveness of CNNs in learning and identifying features to segment organs or diseased tissues from medical images.

    The fully convolutional network (FCN) structure[4] was proposed to perform end-to-end image segmentation, outperforming the existing algorithms at the time. SegNet[5] improved on FCN with a new architecture comprising a 13-layer deep encoder to extract spatial features and a corresponding 13-layer deep decoder to produce segmentation results. DeepLab[16] combined a deep CNN with a fully connected conditional random field (CRF) to refine the segmentation result. DeepLabV2[17] then introduced atrous convolution to reduce the degree of signal down-sampling, and an atrous spatial pyramid pooling (ASPP) module was employed to capture long-range context in DeepLabV3[18]. DeepLabV3+[19] adopted an encoder-decoder structure for semantic segmentation. The U-Net[6] architecture includes a contracting path for acquiring context and a symmetric expanding path for precise localization. Its skip connections, added to the encoder-decoder segmentation network (such as SegNet), improve model accuracy and alleviate the vanishing-gradient problem. The similar V-Net[7] architecture adds residual connections and replaces 2D operations with 3D ones to process 3D voxel images; it also proposed directly optimizing Dice, a widely used segmentation metric. Some studies have developed a segmentation version of the densely connected DenseNet[11] architecture, which uses an encoder-decoder framework like U-Net.

    However, these models still face the problems of variable segmentation target scale, noise interference, rough segmentation results, slow training and insufficient robustness. In response, a multi-scale residual aggregation U-shaped attention network, MAAUNet (MultiRes aggregation attention U-Net), is proposed. Extensive experiments on different medical image datasets show that MAAUNet outperforms the classic U-Net model and the recent MultiResUNet model in most cases. The contributions of this article can be summarized as follows.

    1) Aggregate connections are introduced, differing from the original single same-level feature aggregation. Skip connections are redesigned to merge features of different semantic scales across multiple levels and scales, further reducing the semantic gap between skip connections.

    2) After the multi-scale convolution module, a convolutional block attention module is added to focus and integrate features along the two attention directions of channel and space, and to optimize the intermediate feature map.

    3) The residual connection in the original convolution block is improved. The convolution channel is expanded with a series convolution structure, the residual connection is retained, and the multi-scale convolution block becomes a multi-channel convolution block.

    1 Prior knowledge

    1.1 U-Net architecture

    Fig.2 shows the U-Net network architecture, which consists of an encoder and a decoder. The encoder follows the typical structure of a convolutional network: repeated application of two 3×3 convolutions, each followed by a rectified linear unit (ReLU), and a 2×2 max pooling operation with a stride of 2 for down-sampling. At each down-sampling step the number of feature channels is doubled, and this operation is repeated four times. Each step in the decoder up-samples the feature map with a 2×2 deconvolution that halves the number of feature channels, concatenates the result with the corresponding encoder feature map delivered by the skip connection, and then applies two 3×3 convolutions, each followed by a ReLU. Since boundary pixels are lost in each convolution, cropping is required. In the last layer, a 1×1 convolution maps each component feature vector to the required number of classes.

    Fig.2 U-Net architecture

    In addition, the U-Net architecture introduces skip connections that transmit encoder outputs to the decoder. These feature maps are concatenated with the output of the up-sampling operation, and the spliced feature maps are propagated to subsequent layers. Skip connections allow the network to retrieve spatial features lost due to pooling operations.
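    As a concrete illustration, the following is a minimal tf.keras sketch of this pattern. It uses only two down-sampling steps and illustrative filter counts rather than the four steps described above, and the input shape is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_stage(x, filters):
    # Two 3x3 convolutions, each followed by ReLU.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

inputs = tf.keras.Input(shape=(256, 256, 1))  # input size is an assumption

# Encoder: the channel count doubles at every down-sampling step.
e1 = conv_stage(inputs, 32)
p1 = layers.MaxPooling2D(2)(e1)
e2 = conv_stage(p1, 64)
p2 = layers.MaxPooling2D(2)(e2)

b = conv_stage(p2, 128)  # bottleneck

# Decoder: a 2x2 transposed convolution halves the channel count,
# then the encoder feature map arrives via the skip connection.
u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
d2 = conv_stage(layers.concatenate([u2, e2]), 64)
u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(d2)
d1 = conv_stage(layers.concatenate([u1, e1]), 32)

# A 1x1 convolution maps features to the required number of classes.
outputs = layers.Conv2D(1, 1, activation="sigmoid")(d1)
model = tf.keras.Model(inputs, outputs)
```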

    1.2 MultiResUNet architecture

    MultiResUNet reconsiders and redesigns the U-Net model. To address the scale diversity of medical images, the original convolutional layers are replaced by an Inception-like[20] block, which better handles images of different scales, as shown in Fig.3.

    Fig.3 MultiRes block improvement process

    The parallel structure on the left of Fig.3 is converted into the serial structure in the middle through reconstruction. Based on the serial structure, a residual connection is added to form the structure on the right of Fig.3. Thus, a series of smaller 3×3 convolutional layers replaces the larger 5×5 and 7×7 convolutional layers, and a 1×1 convolutional layer called a residual connection[8] is added, which provides some additional spatial features. This structure is called the MultiRes block.
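    A sketch of this block in tf.keras is given below: a chain of three 3×3 convolutions (standing in for the 3×3, 5×5 and 7×7 receptive fields), their outputs concatenated, plus the 1×1 residual connection. The equal filter split is an illustrative assumption, not necessarily the paper's exact ratios.

```python
from tensorflow.keras import layers

def multires_block(x, filters):
    shortcut = layers.Conv2D(filters * 3, 1, padding="same")(x)  # 1x1 residual path
    c1 = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    c2 = layers.Conv2D(filters, 3, padding="same", activation="relu")(c1)
    c3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(c2)
    out = layers.concatenate([c1, c2, c3])  # effective 3x3/5x5/7x7 features
    out = layers.add([out, shortcut])       # residual connection
    return layers.Activation("relu")(out)
```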

    There may be a semantic gap between corresponding levels of an encoding-decoding architecture: the feature map produced by the encoder cannot be directly connected with the feature map output by the decoder. To bridge this gap, some convolutional layers are added along the path of the skip connection, forming what is called ResPath. Instead of simply connecting feature maps from the encoder stage to the decoder stage, they first pass through a chain of 3×3 convolutional layers with residual connections, and are then connected with the decoder features. The ResPath structure is shown in Fig.4.
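    A minimal sketch of such a path, under the description above, is as follows; `length` is the number of convolutions on the path (an assumed parameter, since deeper levels typically use shorter paths).

```python
from tensorflow.keras import layers

def res_path(x, filters, length):
    # A chain of 3x3 convolutions, each wrapped with a 1x1 residual connection,
    # applied to the encoder feature map before it meets the decoder.
    for _ in range(length):
        shortcut = layers.Conv2D(filters, 1, padding="same")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Activation("relu")(layers.add([x, shortcut]))
    return x
```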

    The MultiRes block and ResPath are added to the U-shaped structure to form the MultiResUNet model, as shown in Fig.5.

    Fig.4 ResPath structure diagram

    Fig.5 MultiResUNet architecture

    2 MAAUNet model

    The MAAUNet model builds on MultiResUNet: it improves the connections into aggregate connections to reduce the semantic gap, integrates an attention mechanism to optimize the intermediate feature maps, and proposes a multi-channel convolution block to deal with interference at different scales. With these improvements, the model can effectively handle scale transformations and background interference and provide more accurate segmentation.

    2.1 Aggregate connection

    Although MultiResUNet reduces the semantic gap between encoder and decoder by adding ResPath at corresponding levels, to further bridge the gap between the start of the encoder and the end of the decoder, an aggregate connection strategy is adopted on top of the retained ResPath. Deeper feature maps are up-sampled and fused with the low-level feature maps of the skip connection at each layer, which better handles images of different scales and helps further reduce the semantic gap.

    This is because a deep-level feature map carries more accurate semantic information but is coarse-grained and thus not conducive to recovering details, whereas a low-level feature map has less accurate semantics but is fine-grained and helps restore segmentation details. Therefore, the upward aggregation connection not only makes full use of the semantic information, but also restores fine segmentation results. The skip connection is redesigned to aggregate features of different semantic scales in the decoder sub-network, forming a highly flexible feature fusion scheme in which different levels of features are merged and integrated through feature superposition.

    Specifically, nodes are added to fill the center of the U-shaped structure while the original ResPath is retained. Each node concatenates the ResPath result of the previous node at the same depth with the up-sampling result of the node at the next depth, which together constitute the restoration information of this node, as in the sketch below. The specific structure is shown in Fig.6.
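    The following is a minimal sketch of one such aggregation node, reusing the `res_path` function sketched in Section 1.2; the filter count and ResPath length are assumed parameters.

```python
from tensorflow.keras import layers

def aggregation_node(same_depth_prev, deeper, filters, respath_length):
    # Lateral input: ResPath output of the previous node at the same depth.
    lateral = res_path(same_depth_prev, filters, respath_length)
    # Vertical input: up-sampled output of the node one level deeper.
    upsampled = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(deeper)
    # Concatenation forms the restoration information of this node.
    return layers.concatenate([lateral, upsampled])
```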

    Receptive fields of different sizes have different sensitivities to target objects of different sizes. For example, features with large receptive fields easily identify large objects. However, the edge information of large objects, and small objects themselves, are easily lost through the down-sampling and up-sampling of a deep network in medical image segmentation, and features with small receptive fields are then needed. Therefore, aggregating features from different depths helps deal with scale changes.

    Fig.6 Diagram of aggregation connection

    2.2 Convolution block attention module

    The existing U-shaped structure treats all image features equally, but in fact the information in an image is not evenly distributed. To pay more attention to regions rich in information, the CNN model is combined with an attention mechanism to improve segmentation performance. For channel attention, the squeeze-and-excitation module[21] was previously proposed, which distinguishes the importance of different channels. For spatial attention, a spatial attention module is introduced, as in SA-UNet[22]: the module derives an attention map along the spatial dimension and multiplies it by the input feature map for adaptive feature refinement.

    The convolutional block attention module (CBAM[23]) is introduced, which can refine the feature map along the channel and spatial dimensions and integrate the two. Given an intermediate feature map F ∈ R^(C×H×W), CBAM sequentially infers a one-dimensional channel attention map M_c ∈ R^(C×1×1) and a two-dimensional spatial attention map M_s ∈ R^(1×H×W). The overall attention process can be summarized as

    F′ = M_c(F) ⊗ F,

    F″ = M_s(F′) ⊗ F′,

    where ⊗ denotes element-wise multiplication. During multiplication, the attention values are propagated accordingly: channel attention values are propagated along the spatial dimension, and vice versa. F″ is the final refined output. Fig.7 shows the calculation process of each attention map.

    Fig.7 CBAM module structure

    In the channel attention sub-module, the feature map is compressed into a one-dimensional vector along the spatial dimension: global max pooling and global average pooling aggregate the spatial feature information, and the results are passed through a shared fully connected layer and added element-wise. Using global average pooling and max pooling together extracts richer high-level features and provides more accurate information. The spatial attention sub-module then takes the output of the channel sub-module, applies average pooling and max pooling along the channel axis, concatenates the results into feature descriptors, and uses a convolutional layer to generate the spatial attention map.

    To compute channel attention efficiently, the spatial dimensions of the input feature map are compressed. The most common method of aggregating spatial information is average pooling, but max pooling also collects important clues about distinctive object features, which can be used to infer finer channel attention. The max-pooled features, which encode the most salient parts, compensate for the average-pooled features, which encode global statistics. Therefore, both average-pooled and max-pooled features are used.

    Fig.8 CBAM sub-module structure diagram

    The spatial relationship between features is used to generate a spatial attention map, which is complementary to channel attention. To compute spatial attention, average pooling and max pooling operations are first applied along the channel axis and concatenated to generate effective feature descriptors; applying pooling along the channel axis effectively highlights informative regions. A convolutional layer is then applied to the concatenated feature descriptors to generate the spatial attention map M_s(F) ∈ R^(H×W), which encodes the positions to be emphasized or suppressed.

    After each convolution module of the network, the convolutional block attention module is introduced to adaptively refine the generated feature map, focusing on feature-rich channels and spatial locations before the next layer of convolution, yielding a more accurate and finer intermediate feature map.
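    The computation described above can be sketched in tf.keras as follows. The channel reduction ratio and the 7×7 spatial kernel follow the CBAM paper's defaults; everything else mirrors the equations F′ = M_c(F) ⊗ F and F″ = M_s(F′) ⊗ F′.

```python
import tensorflow as tf
from tensorflow.keras import layers

def cbam(x, reduction=16):
    c = x.shape[-1]
    # --- Channel attention M_c: shared two-layer MLP over global
    # average- and max-pooled descriptors, added element-wise.
    shared1 = layers.Dense(c // reduction, activation="relu")
    shared2 = layers.Dense(c)
    avg = shared2(shared1(layers.GlobalAveragePooling2D()(x)))
    mx = shared2(shared1(layers.GlobalMaxPooling2D()(x)))
    mc = layers.Activation("sigmoid")(layers.add([avg, mx]))
    x = layers.multiply([x, layers.Reshape((1, 1, c))(mc)])  # F' = M_c(F) ⊗ F

    # --- Spatial attention M_s: average and max maps along the channel
    # axis, concatenated, then a 7x7 convolution with sigmoid.
    avg_map = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    max_map = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    ms = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(
        layers.concatenate([avg_map, max_map]))
    return layers.multiply([x, ms])  # F'' = M_s(F') ⊗ F'
```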

    2.3 Multi-channel block

    Note that the MultiRes block contains only a simple residual connection, which provides some additional spatial features but may not be enough for challenging tasks. Features of different scales have shown great potential in medical image segmentation. Therefore, to overcome the problem of insufficient spatial features, a second sequence of three concatenated 3×3 convolutional layers is used to expand the convolution channels in the MultiRes block, so that the two serial convolution channels complement each other and provide richer spatial features. To prevent the network from degenerating and to improve convergence speed during training, the symmetry of the convolutional structure should be broken, so the original residual connection is kept. This block is called the Multi-channel block; its structure is shown in Fig.9.
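    A sketch of this block under the description above: two parallel chains of three 3×3 convolutions whose outputs are merged, plus the retained 1×1 residual connection. How the two channels are merged (element-wise addition here) and the exact filter counts are illustrative assumptions.

```python
from tensorflow.keras import layers

def multichannel_block(x, filters):
    shortcut = layers.Conv2D(filters * 3, 1, padding="same")(x)  # retained residual

    def chain(t):
        # One series-convolution channel, as in the MultiRes block.
        c1 = layers.Conv2D(filters, 3, padding="same", activation="relu")(t)
        c2 = layers.Conv2D(filters, 3, padding="same", activation="relu")(c1)
        c3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(c2)
        return layers.concatenate([c1, c2, c3])

    merged = layers.add([chain(x), chain(x)])  # two channels complement each other
    out = layers.add([merged, shortcut])       # residual connection retained
    return layers.Activation("relu")(out)
```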

    Fig.9 Multi-channel block structure

    On this basis, the overall architecture of MAAUNet is proposed: aggregate connections are added to MultiResUNet to reduce the semantic gap, the convolutional block attention module optimizes the intermediate feature maps, and the multi-channel block handles scale changes. Its structure is shown in Fig.10.

    Fig.10 MAAUNet structure diagram

    3 Experiments

    3.1 Experimental setup

    The model is implemented in Python using Keras with a TensorFlow backend. The operating system of the experimental platform is Linux 4.4.0, and the GPU is a GeForce RTX 2080 Ti.

    In order to verify the effectiveness and segmentation performance of the MAAUNet model, comparative experiments are carried out on the ISIC-2018, Murphy lab[24], CVC-ClinicDB[25] and ISBI-2012 datasets against U-Net, MultiResUNet and DC-UNet.

    3.1.1 Datasets

    Four public datasets are selected to test the performance of the four U-Net based models. The nuclei in the Murphy lab dataset are irregular in brightness, and the images often contain noticeable debris. Some images in the ISBI-2012 electron microscopy dataset contain much interference, such as noise, and other parts of the cell can affect the model's boundary identification. The ISIC-2018 dataset contains skin lesion images of different scales, where the shape, size and colour of the lesion area all differ. In the colonoscopy images of CVC-ClinicDB, polyp boundaries are very blurred and difficult to distinguish, and the shape, size, structure and location of polyps also vary, making this the most challenging dataset. Table 1 briefly describes the datasets used in the experiments.

    The fluorescence microscopy image dataset was collected by Murphy lab. It contains fluorescence microscopy images in which cell nuclei have been manually segmented by experts. The brightness of the nuclei is irregular, and the images usually contain bright fragments, making it a challenging microscopy dataset.

    Electron microscopy images are segmented using the ISBI-2012 2D EM challenge dataset. The images suffer from slight alignment errors and are corrupted by noise.

    ISIC-2018 is a dermoscopy image dataset containing a total of 2 594 images of different types of skin lesions with expert annotations. The original and input resolutions are shown in Table 1. CVC-ClinicDB is a colonoscopy image database used in the endoscopy experiments; a total of 612 images are extracted from 29 colonoscopy video sequences.

    3.1.2 Pre-processing/post-processing

    The purpose is to compare the proposed MAAUNet architecture with the original U-Net and MultiResUNet, so no dataset-specific pre-processing is applied. The only pre-processing is to resize the input images to fit the GPU memory and to divide the pixel values by 255 to bring them to the range [0, 1].
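    This amounts to something like the following sketch; the target size here is an assumption, since the actual input resolutions per dataset are given in Table 1.

```python
import cv2
import numpy as np

def preprocess(image, size=(256, 256)):
    # Resize to fit GPU memory, then scale pixel values to [0, 1].
    image = cv2.resize(image, size)
    return image.astype(np.float32) / 255.0
```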

    3.1.3 Training

    For a batch containing n images, the loss function J is

    These models are trained with the Adam optimizer with parameters β1 = 0.9 and β2 = 0.999. The number of training epochs varies with the size of the dataset.
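    For illustration, this training setup can be expressed as follows, reusing the `model` from the earlier U-Net sketch. Binary cross-entropy is a common choice for binary segmentation masks and is an assumption here, since the loss equation did not survive extraction.

```python
import tensorflow as tf

# Adam with beta_1 = 0.9 and beta_2 = 0.999, as stated above.
model.compile(
    optimizer=tf.keras.optimizers.Adam(beta_1=0.9, beta_2=0.999),
    loss="binary_crossentropy",  # assumed loss; not confirmed by the text
)
# model.fit(train_images, train_masks, epochs=...)  # epochs vary per dataset
```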

    3.1.4 Evaluation metric

    In semantic segmentation, target regions occupy different proportions of the whole image. Indicators such as accuracy and recall are therefore insufficient and may show exaggerated segmentation performance that changes with the proportion of segmented background. The Jaccard index is used instead to evaluate the image segmentation model. The Jaccard index of two sets A and B is defined as the ratio of the intersection and union of the two sets, J(A, B) = |A ∩ B| / |A ∪ B|.
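    A minimal computation of this metric for binary masks, matching the definition above (the threshold and epsilon are implementation assumptions):

```python
import numpy as np

def jaccard_index(pred, target, threshold=0.5, eps=1e-7):
    # Binarize, then compute |A ∩ B| / |A ∪ B|.
    a = pred > threshold
    b = target > threshold
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / (union + eps)
```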

    3.2 Results and discussion

    On the four datasets Murphy lab, ISBI-2012, ISIC-2018 and CVC-ClinicDB, the proposed MAAUNet model is compared with U-Net, MultiResUNet and DC-UNet, and analysed from both quantitative and qualitative perspectives to validate its segmentation performance. The comparison results are shown in Table 2. For readability, the fractional values of the Jaccard index have been converted to percentages (%), and bold values in Table 2 represent the best performance for each dataset.

    Table 2 Comparison of experimental results of different models

    3.2.1 Quantitative analysis

    It can be seen that, compared with the classic U-Net model, the proposed model improves the Jaccard index by 2.979 7%, 7.958 0%, 5.072 8% and 6.392 9% on the Murphy lab, ISBI-2012, ISIC-2018 and CVC-ClinicDB datasets, respectively. Results on the ISBI-2012 and CVC-ClinicDB datasets improve significantly, and the proposed model still achieves improvements over U-Net on the Murphy lab and ISIC-2018 datasets. The improvements are therefore evident.

    Compared with the MultiResUNet model, the proposed model improves the Jaccard index by 0.628 7%, 7.419 5% and 1.201 7% on the Murphy lab, ISBI-2012 and ISIC-2018 datasets. Only on the CVC-ClinicDB dataset does MAAUNet appear equivalent to MultiResUNet. Compared with the DC-UNet model, the proposed model improves on all datasets.

    Quantitatively, in terms of the Jaccard index, which is widely used in medical image segmentation, the proposed MAAUNet achieves considerable performance improvements on datasets with multi-scale input and bright-noise interference. The proposed multi-scale model with aggregation connections and an attention mechanism indeed achieves good improvements.

    3.2.2 Qualitative analysis

    Representative samples from the datasets are selected for qualitative analysis. As shown in Fig.11, the proposed model is more robust to images of different scales: the segmentation of boundaries, fragments and small areas is more refined, and the interference of high-brightness noise is avoided more effectively.

    For example, the first row of Fig.11 shows experimental results on the ISBI-2012 dataset. Because of light and dark variations, the original MultiResUNet segmentation results contain many messy lines inside the cells, whereas these fragments and lines are filtered out in the results of the proposed model, leaving a cleaner interior.

    Experiments on the ISIC-2018 dataset are shown in the second row. The segmentation results obtained by MultiResUNet are too thin and narrow due to blurred boundaries and noise interference, while MAAUNet integrates multi-scale features to obtain lesion segmentation areas with greater overlap, achieving a relative improvement.

    The third row of Fig.11 shows experimental results on the CVC-ClinicDB dataset. MultiResUNet incorrectly segments a small target under the influence of other similar tissues, while MAAUNet avoids this wrong segmentation and the interference of similar tissues.

    Experimental results on the Murphy lab dataset are shown in the fourth row of Fig.11. The lower right corner of the input image contains highlighted fragments. Due to this highlight interference, MultiResUNet produces incomplete cell segmentation containing small cell fragments, while the proposed MAAUNet avoids the confusion caused by the highlight noise and obtains a clearer and more complete cell segmentation. The qualitative analysis shows that, faced with multi-scale input and noise interference, the MAAUNet model obtains more refined and clearer segmentation results, and its effectiveness and robustness are verified on multiple datasets.

    Fig.11 Qualitative analysis: segmentation results of different models on four datasets. Rows (top to bottom): ISBI-2012, ISIC-2018, CVC-ClinicDB, Murphy lab. Columns (left to right): original image, ground truth, MultiResUNet, MAAUNet.

    3.3 Ablation experiment

    To further confirm that the aggregation connection (marked ①), the convolutional block attention module (marked ②) and the multi-channel module (marked ③) each play a positive role, the ablation experiments shown in Table 3 are performed.

    3.3.1 Aggregate connection

    Comparing the original MultiResUNet network with the structure to which aggregate connections are added, the aggregate connections improve segmentation performance by 0.237 5%, 4.392 1% and 0.502 9% on the Murphy lab, ISBI-2012 and ISIC-2018 datasets. Adding aggregate connections on top of the attention mechanism also yields improvements of 2.023 7%, 0.131 9% and 1.067 6% on the ISBI-2012, ISIC-2018 and CVC-ClinicDB datasets. Aggregate connections added on top of the multi-channel module still achieve performance improvements of 0.133 6%, 2.051 3% and 0.389 5% on the Murphy lab, ISBI-2012 and ISIC-2018 datasets.

    Table 3 Ablation experiment results

    The aggregation connection reduces the semantic gap and helps the decoder recover low-level position information. As the ablation results show, when the original same-level skip connections are replaced with cross-level flexible aggregation connections, the segmentation results on the electron microscopy, skin lesion and cell nucleus datasets all improve, and similar results are obtained on the endoscopy dataset. The aggregation connection fuses feature information from different levels, which is more conducive to accurately restoring segmented images. The aggregate connection is a beneficial extension of the U-shaped structure and is very helpful for medical image segmentation with varying scales and sensitive boundary information.

    3.3.2 Convolution block attention module

    The addition of the attention module improves model performance by 0.260 6%, 4.351 1% and 0.383 3% on the Murphy lab, ISBI-2012 and ISIC-2018 datasets, respectively. Using the attention module on top of the aggregation connection also improves performance by 1.982 7%, 0.012 3% and 0.271 1% on the ISBI-2012, ISIC-2018 and CVC-ClinicDB datasets, respectively. The attention module on top of the multi-channel module achieves segmentation performance improvements of 1.770 6% and 0.366 8% on the ISBI-2012 and ISIC-2018 datasets.

    The insertion of the convolutional block attention module extracts richer high-level features, provides finer information, adaptively optimizes the intermediate feature maps, and obtains more accurate segmentation results. Improvements are obtained on the fluorescence microscopy, dermoscopy and electron microscopy datasets, while comparable results are achieved on the endoscopy dataset.

    3.3.3 Multi-channel block

    Using multi-channel modules improves model performance by 0.306 0%, 5.435 7% and 0.174 5% on the Murphy lab, ISBI-2012 and ISIC-2018 datasets, respectively. The model with multi-channel modules on top of the aggregation connection obtains comprehensive segmentation performance improvements of 0.202 1%, 3.094 9%, 0.061 1% and 0.606 0% on the Murphy lab, ISBI-2012, ISIC-2018 and CVC-ClinicDB datasets, respectively. The multi-channel module on top of the attention module yields relative improvements of 2.855 2%, 0.158 0% and 1.557 1% on the ISBI-2012, ISIC-2018 and CVC-ClinicDB datasets, respectively.

    The multi-channel convolution block has a positive effect on gradient flow during training. Improvements are obtained on the fluorescence microscopy, electron microscopy and dermoscopy datasets. The multi-channel convolution module better extracts spatial features of different scales, enriches complementary feature information and produces better segmentation results.

    Finally, the aggregation connection, attention mechanism module and multi-channel convolution block are merged into the original U-shaped encoding-decoding structure, and the proposed model achieves better segmentation results.

    4 Conclusions

    By analyzing the architectures of the classic U-Net and the recent MultiResUNet with respect to varying image scales, noise interference and other influencing factors, the aggregate connection structure, the convolutional block attention module and the multi-channel convolution block are designed to better capture multi-scale features, optimize intermediate feature maps and reduce the semantic gap. A new U-shaped architecture, MAAUNet, is proposed.

    To verify the segmentation performance of the model, experiments on four public medical datasets compare it with a variety of mainstream models. The efficiency and stability of MAAUNet in medical image segmentation are verified. The qualitative results also show better segmentation fineness: the model detects fuzzy boundaries more effectively and avoids noise interference.

    In summary, the proposed MAAUNet model with aggregation connections and an attention mechanism indeed achieves good segmentation results. Future research will focus on making the model structure more lightweight and improving its generalization ability.
