
Arbitrary-oriented target detection in large scene SAR images

Defence Technology, 2020, Issue 4

    Zi-shuo Han, Chun-ping Wang, Qiang Fu

    Shijiazhuang Campus, Army Engineering University, Shijiazhuang, 050003, China

Keywords: Target detection; Convolutional neural network; Multilayer fusion; Context information; Synthetic aperture radar

ABSTRACT Target detection in the field of synthetic aperture radar (SAR) has attracted considerable attention from researchers in national defense technology worldwide, owing to SAR's unique advantages such as high resolution and large scene image acquisition. However, due to strong speckle noise and a low signal-to-noise ratio, it is difficult to extract representative target features from SAR images, which greatly limits the effectiveness of traditional methods. To address these problems, a framework called contextual rotation region-based convolutional neural network (RCNN) with multilayer fusion is proposed in this paper. Specifically, to enable the RCNN to detect targets in large scene SAR images efficiently, a maximum sliding strategy is applied to crop the large scene image into a series of sub-images before the RCNN. Instead of using the highest-layer output for proposal generation and target detection, fusion feature maps with high resolution and rich semantic information are constructed by a multilayer fusion strategy. Then, we put forward rotation anchors to predict the minimum circumscribed rectangle of targets, reducing redundant detection regions. Furthermore, shadow areas serve as contextual features that provide supplementary information to help the detector identify and locate targets accurately. Experimental results on the simulated large scene SAR image dataset show that the proposed method achieves satisfactory performance in large scene SAR target detection.

    1. Introduction

As an important means of ground observation, synthetic aperture radar (SAR) offers all-day, all-weather operation and a certain penetration capability. Therefore, SAR has been widely applied in military and civilian fields [1], such as target surveillance, weapon guidance, battlefield monitoring, geodetic surveying and mapping, environmental monitoring, and disaster prevention. Automatic target detection and recognition (ATDR) of SAR images can effectively obtain target information and is regarded as the primary technology for achieving the above practical applications [2]. Traditional target recognition and detection methods for SAR images are mainly based on template matching [3], statistical models [4], or feature space. However, with the advent of the big data era, traditional methods can hardly meet the requirements of massive data processing in terms of efficiency and accuracy.

Since Krizhevsky et al. [5] scooped the top prize in the ImageNet large scale visual recognition challenge (ILSVRC) using a convolutional neural network (CNN) in 2012, CNNs have been widely used in classification [6-8] and target detection. Up to now, target detection based on CNNs has achieved remarkable successes, such as R-CNN [9], Fast R-CNN [10], Faster R-CNN [11], and FPN [12], which belong to the two-stage detection approaches, and YOLO [13], SSD [14], and DSSD [15], which belong to the single-stage detection approaches. Generally, two-stage detectors are superior to single-stage detectors in accuracy [16], so the former are adopted more often in the field of accurate identification and positioning. RR-CNN [17] realizes multi-angle ship detection by adding a rotation region pooling layer and a rotation border regression model to R-CNN. Yang et al. [18] propose the R2CNN++ network framework, which combines a rotation branch network with Faster R-CNN and is applied to multi-oriented vehicle detection with an accuracy of 91.75%.

CNNs are developing rapidly in both optical image and SAR image target detection. In Ref. [19], a fully convolutional neural network is proposed and verified to be effective for SAR image target classification. In Ref. [20], DeepSAR-Net, a CNN with normalization layers, is proposed for ship detection and achieves good results. In Ref. [21], a simple CNN incorporating multi-aspect perception technology achieves an inspiring recognition accuracy on the MSTAR dataset. Liu et al. [22] use R-CNN to detect targets in the regions of interest (RoIs) extracted directly from the original image, realizing large scene SAR image detection. In Refs. [23,24], the effectiveness of Faster R-CNN for target recognition and detection is verified on MSTAR and its extended dataset. Most of the above research on SAR image target detection focuses on single-target recognition and horizontal-box localization, and the simple clipping strategies used for large scene SAR images often cause target loss. Multi-oriented target detection in scene images can not only identify targets but also support a strong judgment of each target's next movement based on the positioning results, which is of great significance for assessing battlefield hostility and for urban traffic monitoring.

Another way to improve performance is the multilayer fusion strategy, of which FPN and CMS-RCNN [25] are typical representatives. Research shows that multilayer fusion benefits feature propagation and reuse, and allows feature maps to satisfy both semantic and high-resolution requirements. Kang et al. [26] apply CMS-RCNN to ship detection in space-borne SAR images and achieve an astonishing result. In addition, contextual features are often used as supplementary information to reduce the false alarm rate and improve the recognition rate [27,28]. For SAR images, each target has its own unique shadow, which can help detectors identify and locate targets more accurately. Thus, adding features of the shadowed region to the overall representative information appears to be a good way to make target detection networks more robust.

As mentioned above, detecting targets in SAR images not only leaves great room for development but also faces many challenges. In this paper, we propose a contextual rotation RCNN with multilayer fusion for target detection in large scene SAR images. For clarity, the main innovations of this paper are summarized as follows:

1. For large scene SAR images, a maximum sliding cropping strategy is adopted, which increases the randomness of target distribution and avoids the over-fitting caused by a small training dataset.

2. We build a novel target detection architecture based on Faster R-CNN, which is able to generate rotational bounding boxes, reduce redundant detection regions, and handle different complex scenes.

3. We apply a multilayer fusion strategy to obtain fusion feature maps with high resolution and rich semantic information for proposal generation, and adopt rotation anchors to generate rotation proposals for the next stage, which greatly improves the detection accuracy of the network and enriches its practical value.

4. We propose an integrating shadow context strategy, which can rule out false alarms, enhance the classification and location performance of the framework, and supplement the calculation of confidence scores and bounding box regression.

The proposed method is evaluated on simulated large scene SAR images, which are randomly fused from environmental scene images and target slices of the MSTAR and MiniSAR datasets, and is compared with five other methods. The experimental results verify the effectiveness of the proposed method in target detection.

The remainder of this paper is organized as follows. Section 2 concerns the implementation details of the proposed method. Section 3 introduces the dataset description, training details, and evaluation metrics. Section 4 presents the specific experimental process and discusses the results. Finally, Section 5 concludes the paper.

    2. Methodology

In this section, we detail the various parts of the proposed target detection method. Fig. 1 shows the overall network framework of the proposed method, which is composed of three major components: the feature extraction network (FEN), the rotation region proposal network (R-RPN), and the rotation region detection network (R-RDN). First, the original large scene image is divided into several sub-images according to the maximum sliding cropping strategy, and semantic, high-resolution feature maps are constructed by the FEN based on the multilayer fusion strategy for region generation. Second, the R-RPN obtains rotational regions of interest (R-RoIs) from rotation anchors and provides high-score region proposals for the R-RDN. Third, in the R-RDN, the minimum circumscribed rectangle of each proposal and its shadow region is adopted as context information together with the proposal, providing representative information after max pooling and R-RoI pooling for the R-RDN to output the class prediction and location regression. Finally, the labeled sub-images are stitched back into a scene image.

    2.1. Maximum sliding cropping strategy

Normally, in order to obtain feature maps of the same size, the original images are resized to a fixed size before being fed to the CNN, which discards many pixels, resulting in information loss and a sharp decline in target detection accuracy, including missed targets, inaccurate target locations, and lower confidence [29]. To avoid this defect, cutting large scene images into a series of sub-images has been widely used in scene image detection. Since targets may appear at any position in the image, traditional cropping strategies often cause targets to be missed, which leads to unsatisfactory detection results.

In this paper, we use a maximum sliding operation to clip large scene images into sub-images, which are subsequently sent to the CNN for detection. To obtain higher target detection accuracy, it is necessary to ensure that every potential target is fully included in at least one sub-image after clipping. Admittedly, a single-pixel sliding window would give the best detection result, but it is equivalent to artificially increasing the number of detections, which inevitably affects the evaluation of the real results later on and also increases the time cost. A large-stride sliding window can reduce time consumption and human intervention, but one target may be split across adjacent sub-images, resulting in missed targets. In view of the above analysis, it is necessary to select an appropriate window sliding stride that minimizes human intervention and time consumption without decreasing the detection accuracy. If the target size is w_t × h_t and the sliding window size is z, then the sliding window stride k must satisfy the following formula:

According to Eq. (1), we take z = 400 and k = 310, in accordance with the principle of "minimizing human intervention and time loss without affecting the detection accuracy". Fig. 2 shows the diagram of the maximum sliding cropping strategy.
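To make the strategy concrete, the following is a minimal Python sketch of the maximum sliding cropping, assuming (as the discussion above suggests) that the stride k satisfies k ≤ z − max(w_t, h_t) so that every target fits entirely into at least one sub-image; the function name and the border handling are our own additions, not details from the paper.

```python
import numpy as np

def crop_large_scene(image, window=400, stride=310):
    """Cut a large scene image into overlapping square sub-images.

    With stride <= window - max(target_w, target_h), every target is
    fully contained in at least one sub-image (assumed form of Eq. (1)).
    Returns a list of (x0, y0, sub_image) tuples, where (x0, y0) is the
    top-left corner of the crop in the original image.
    """
    h, w = image.shape[:2]
    ys = list(range(0, max(h - window, 0) + 1, stride))
    xs = list(range(0, max(w - window, 0) + 1, stride))
    # Make sure the right and bottom borders are covered as well.
    if ys[-1] + window < h:
        ys.append(max(h - window, 0))
    if xs[-1] + window < w:
        xs.append(max(w - window, 0))
    crops = []
    for y0 in ys:
        for x0 in xs:
            crops.append((x0, y0, image[y0:y0 + window, x0:x0 + window]))
    return crops
```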

    2.2. Feature extraction network

    Fig.1. Overall network framework of the proposed method.

    Fig. 2. Diagram of the maximum sliding cropping strategy.

In recent years, researchers have proposed six typical networks for feature extraction: AlexNet [30], VGGNet [31], GoogleNet [32], ResNet [33], DenseNet [34], and SENet [35]. For SAR image interpretation, ResNet18 has obvious advantages in recognition accuracy and time cost compared with other network structures [36]. Therefore, ResNet18 is adopted to extract features and construct feature maps in this paper, although on its own the detection result is not very satisfactory. As we know, high-level features contain highly semantic information but lack target location information due to their lower resolution, while low-level features are just the opposite. We therefore apply a multilayer fusion strategy to reprocess the feature maps constructed by ResNet18 to obtain more comprehensive and representative feature representations. As shown in Fig. 3, C2, C3, and C4 are the outputs of conv2_2, conv3_2, and conv4_2 of ResNet18, respectively. In order to get more representative features, the shallow layer C2 is down-sampled by max pooling and the deep layer C4 is up-sampled by deconvolution. They are then compressed into a uniform space by l2 normalization as P2 and P4, which are concatenated with the l2-normalized C3 and fused into P3 with more detailed information for region generation.
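Below is a minimal sketch of the "C2+C3+C4" fusion described above, written with TensorFlow/Keras since the experiments use TensorFlow; the channel count of P3 and the final 1 × 1 convolution that mixes the concatenated maps are assumptions rather than details taken from the paper.

```python
import tensorflow as tf

def fuse_c2_c3_c4(c2, c3, c4, out_channels=256):
    """Fuse ResNet18 feature maps C2, C3, C4 into P3 (a sketch).

    C2 is down-sampled by max pooling, C4 is up-sampled by a transposed
    convolution (deconvolution), all three are L2-normalized along the
    channel axis and concatenated; channel counts are assumptions.
    """
    p2 = tf.keras.layers.MaxPool2D(pool_size=2, strides=2)(c2)        # C2 -> C3 resolution
    p4 = tf.keras.layers.Conv2DTranspose(out_channels, 3, strides=2,
                                         padding="same")(c4)          # C4 -> C3 resolution
    p2 = tf.math.l2_normalize(p2, axis=-1)
    c3n = tf.math.l2_normalize(c3, axis=-1)
    p4 = tf.math.l2_normalize(p4, axis=-1)
    fused = tf.keras.layers.Concatenate(axis=-1)([p2, c3n, p4])
    # A 1x1 convolution mixes the concatenated maps into the final P3.
    return tf.keras.layers.Conv2D(out_channels, 1, padding="same")(fused)
```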

    2.3. Rotation region proposal network

In the R-RPN, a series of coarse R-RoIs is generated by rotation anchors, and each R-RoI is accompanied by a score indicating whether it is a target, for subsequent re-detection. The main ingredients of the R-RPN are described below.

    2.3.1. Rotation bounding box

The traditional detection algorithm calibrates the target with a horizontal bounding box, which is simply recorded by the coordinates of the upper-left and lower-right corners, expressed as (xmin, ymin, xmax, ymax). However, this calibration lacks direction information, and once the target is inclined, more redundant information appears in the calibrated area. Therefore, in order to locate the target more accurately, we first redefine the representation. In this paper, we use five variables (x, y, w, h, θ) to uniquely define an arbitrary-oriented bounding box. As shown in Fig. 4, (x, y) is the coordinate of the center point, and the rotation angle θ is the angle through which the horizontal axis (x-axis) rotates counterclockwise to the first edge of the rectangle it encounters; its range is [-90°, 0°). We define this side as the width (w) and the other as the height (h).

    Fig. 3. The multilayer fusion strategy. (a) The stereogram; (b) the flowchart.

    Fig. 4. General representation of rotation bounding box.

    2.3.2. Rotation anchor

In the RPN phase of two-stage detection methods such as the R-CNN series and R-FCN [37], the anchors at each feature point are the initial shapes of the RoIs. Properly set anchors help form RoIs quickly. A rectangular anchor eventually generates a rectangular RoI; similarly, R-RoIs can be obtained from rotation anchors. The scale, ratio, and angle of the anchors depend on the targets to be detected.

Taking into account the characteristics of the target slices in the MSTAR dataset, the w-to-h ratios are set to {1:1.5, 1:2, 1:2.5, 1, 1.5, 2, 2.5}, and the scales are {25, 30, 35, 40}. On this basis, we add nine angles {-10°, -20°, -30°, -40°, -50°, -60°, -70°, -80°, -90°} to generate rotation anchors. Namely, for each feature point of each feature map there are 252 anchors (7 × 4 × 9), which are fed into the box-classification layer and the box-regression layer in a sibling fully connected manner, giving 504 outputs (2 × 252) for the classification layer and 1260 outputs (5 × 252) for the regression layer.
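For illustration, a small Python sketch that enumerates the 252 rotation anchors per feature point from the ratios, scales, and angles listed above; how the paper maps a scale value to (w, h) is not specified, so treating the scale as sqrt(w·h) is an assumption.

```python
import numpy as np

# Anchor hyper-parameters taken from the paper: 7 ratios, 4 scales, 9 angles.
RATIOS = [1 / 1.5, 1 / 2, 1 / 2.5, 1.0, 1.5, 2.0, 2.5]       # w : h
SCALES = [25, 30, 35, 40]                                     # anchor size in pixels
ANGLES = [-10, -20, -30, -40, -50, -60, -70, -80, -90]        # degrees

def rotation_anchors(cx, cy):
    """Return the 252 (x, y, w, h, theta) anchors for one feature point."""
    anchors = []
    for s in SCALES:
        for r in RATIOS:
            w = s * np.sqrt(r)     # assumed mapping: scale = sqrt(w * h)
            h = s / np.sqrt(r)
            for a in ANGLES:
                anchors.append((cx, cy, w, h, a))
    return np.array(anchors)       # shape (252, 5)
```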

    2.3.3. Skew intersection-over-union computation

In the RPN, a large number of cross-boundary proposals are generated. Therefore, to alleviate the redundancy and improve detection performance, non-maximum suppression (NMS) is used to keep the most appropriate proposals. For NMS, the value of the intersection-over-union (IoU) is the criterion for deciding whether a proposal meets the requirements. However, due to the arbitrary orientation of rotation bounding boxes, there are skew intersection areas between cross-boundary proposals, so IoU calculation on axis-aligned bounding boxes is no longer suitable for computing the skew IoU. But any overlap area between two cross-boundary bounding boxes is always a polygon, so a skew IoU calculation method based on triangulation is adopted to address the problem [38]. The geometric principle is shown in Fig. 5. We can obtain the overlap area S_o and the union area S_u as follows:

Fig. 5. Skew intersection.

The skew IoU can then be defined as the ratio of the overlap area to the union area, i.e., IoU = S_o/S_u.
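The paper computes S_o by triangulating the overlap polygon; an equivalent way to obtain the same skew IoU, shown here only as an illustrative alternative, is to build the two rotated boxes as polygons and use polygon intersection and union (the shapely library below is our choice, not the paper's).

```python
import numpy as np
from shapely.geometry import Polygon

def box_to_polygon(box):
    """Convert (x, y, w, h, theta) with theta in degrees to a shapely Polygon."""
    x, y, w, h, theta = box
    t = np.deg2rad(theta)
    dx = np.array([np.cos(t), np.sin(t)]) * w / 2.0    # half-extent along the width edge
    dy = np.array([-np.sin(t), np.cos(t)]) * h / 2.0   # half-extent along the height edge
    c = np.array([x, y])
    return Polygon([tuple(c - dx - dy), tuple(c + dx - dy),
                    tuple(c + dx + dy), tuple(c - dx + dy)])

def skew_iou(box_a, box_b):
    """IoU of two rotated boxes: overlap area S_o over union area S_u."""
    pa, pb = box_to_polygon(box_a), box_to_polygon(box_b)
    s_o = pa.intersection(pb).area
    s_u = pa.union(pb).area
    return s_o / s_u if s_u > 0 else 0.0
```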

    2.3.4. Loss function

    The loss function is defined as a multi-task loss L to minimize the objective function [39]. It can be computed as follows:

where p_i is the predicted probability of the i-th anchor calculated by the soft-max function, l_i represents the ground-truth label, t_i represents the five-parameter coordinate vector of the predicted bounding box output by the regression layer, and t_i* represents the five-parameter coordinate vector of the ground truth. The classification loss L_cls is the log loss over two classes (background and target). The regression loss L_reg is the robust smooth L1 loss function. The two task losses are normalized by N_cls and N_reg and balanced by the hyper-parameter λ. In addition, the classification loss L_cls and the regression loss L_reg in Eq. (5) are defined as:
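The equation referenced as Eq. (5) and the definitions of L_cls and L_reg are not reproduced in this copy. A standard Faster R-CNN style multi-task loss consistent with the description above would be (an assumption about the exact notation, not the paper's verbatim formula):

```latex
L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, l_i)
                   + \lambda \frac{1}{N_{reg}}\sum_i l_i\, L_{reg}(t_i, t_i^{*}),
\qquad
L_{cls}(p_i, l_i) = -\log p_{i, l_i},
\qquad
L_{reg}(t_i, t_i^{*}) = \operatorname{smooth}_{L_1}(t_i - t_i^{*}).
```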

    The parameterizations of five coordinates are defined as follows:

where (x, y, w, h, θ), (x_a, y_a, w_a, h_a, θ_a), and (x*, y*, w*, h*, θ*) denote the position coordinates of the predicted bounding box, the anchor box, and the ground-truth box, respectively. The parameter k ∈ Z keeps θ in the range [-90°, 0°). When k is an odd number, w and h need to be swapped to keep the bounding box in the same position.
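The parameterization equations themselves are likewise missing here. A standard rotated-box parameterization matching the variables defined above (an assumed form, as used by rotation detectors in the RRPN family) is:

```latex
t_x = \frac{x - x_a}{w_a},\quad t_y = \frac{y - y_a}{h_a},\quad
t_w = \log\frac{w}{w_a},\quad t_h = \log\frac{h}{h_a},\quad
t_\theta = \theta - \theta_a + k\cdot 90^{\circ},
```

with t*_x, …, t*_θ defined analogously from the ground-truth box (x*, y*, w*, h*, θ*).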

    2.4. Rotation region detection network

The R-RDN detects the proposals obtained from the R-RPN and outputs the final classification and location information. In this section, the integrating shadow context strategy and the generation process of the detection results are introduced in detail.

    2.4.1. Integrating shadow context

Target detection based on contextual CNNs is mostly used in face detection [25,26], human behavior detection [40], and population density estimation [41]. These studies suggest that contextual information is a critical element for improving target detection performance, and that context can greatly reduce false alarms and provide additional features for target recognition. For SAR images, both the target and its shadow are rich in target feature information, which can be used to obtain a robust feature representation. Especially for target recognition and false-alarm reduction in large scene SAR images, it is particularly important to increase the amount of information used for detection. Based on the above analysis, our network is designed to make explicit use of context information together with the proposals in target detection.

As shown in Fig. 6, the integrating shadow context strategy takes the green block as the context information. The blue block is obtained by translating the red proposal along the incidence angle of the radar to a disjoint position, and the minimum circumscribed rectangle of the red and blue blocks is taken as the context region. This strategy may not always cover the entire shadow, but it is correct in most scenarios, and the offset of the blue block can be adjusted at any time according to the incidence angle of the radar to adapt to various scenes. Let the coordinates of the proposal be (x_p, y_p, w_p, h_p, θ_p); then the contextual region coordinates (x_c, y_c, w_c, h_c, θ_c) can be determined by the following equation:
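The equation for (x_c, y_c, w_c, h_c, θ_c) is missing from this copy. The following Python sketch reproduces the geometric construction described above; the incidence direction, the translation offset, and the use of OpenCV's minAreaRect (whose angle convention differs from the paper's [-90°, 0°) definition) are all assumptions made for illustration.

```python
import numpy as np
import cv2

def box_corners(box):
    """Corner points of a rotated box (x, y, w, h, theta in degrees)."""
    x, y, w, h, theta = box
    return cv2.boxPoints(((x, y), (w, h), theta))

def shadow_context_region(proposal, incidence_deg, offset):
    """Contextual region for a proposal (a sketch of the strategy).

    A copy of the proposal is translated by `offset` pixels along the
    radar incidence direction `incidence_deg` (both assumed inputs),
    and the minimum circumscribed rotated rectangle of the proposal
    and its translated copy is returned as (x, y, w, h, theta).
    """
    t = np.deg2rad(incidence_deg)
    shift = np.array([np.cos(t), np.sin(t)]) * offset
    x, y, w, h, theta = proposal
    shadow_box = (x + shift[0], y + shift[1], w, h, theta)
    pts = np.vstack([box_corners(proposal),
                     box_corners(shadow_box)]).astype(np.float32)
    (cx, cy), (cw, ch), ang = cv2.minAreaRect(pts)
    return (cx, cy, cw, ch, ang)
```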

    Fig. 6. Integrating shadow context.

    2.4.2. R-RoI pooling

In the R-RDN, after the integrating shadow context strategy is implemented, the proposals and their corresponding contextual regions are determined. Then, R-RoI pooling and RoI pooling are performed on the fusion feature maps for each proposal and contextual region, respectively, to represent the target features and the contextual features. After two fully connected layers, they are concatenated into a single feature block, which is thoroughly mixed by the next fully connected layer. It is then fed into the classification layer and the regression layer to compute the confidence score and the bounding box regression, as shown in Fig. 1.

For horizontal bounding box calibration, RoI pooling is often used to obtain a fixed-length feature vector from a proposal, but it is not suitable for the arbitrary-direction calibration algorithm. Therefore, we use R-RoI pooling to reduce the dimensionality of a rotation proposal. The process of R-RoI pooling is as follows: taking the first width edge of the rotational bounding box as the horizontal axis, we divide the bounding box with coordinates (x_p, y_p, w_p, h_p, θ_p) into 3 × 3 bins with a parallel grid, where the size of each bin is w_p/3 × h_p/3; R-RoI pooling can then be modeled as:

where y_rj denotes the pooled output of the j-th bin of the r-th R-RoI, B(r, j) is the set of pixels belonging to the j-th bin, and x_i is the value of the i-th pixel. When θ_p = -90°, R-RoI pooling is equivalent to RoI pooling.
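As a concrete illustration of the pooling just described, here is a small NumPy sketch that divides a rotated box into 3 × 3 bins along its own axes and keeps the maximum sampled value per bin; the nearest-neighbour sampling and the number of sample points per bin are implementation choices of ours, not details from the paper.

```python
import numpy as np

def rroi_pool(feature, box, bins=3, samples=4):
    """Max-pool a rotated RoI into a bins x bins grid (a sketch).

    `feature` is a 2-D map (H, W); `box` is (x, y, w, h, theta) in
    feature-map coordinates with theta in degrees.  Each bin is covered
    by samples x samples points along the box's own axes and the
    maximum sampled value is kept.
    """
    x, y, w, h, theta = box
    t = np.deg2rad(theta)
    u = np.array([np.cos(t), np.sin(t)])      # unit vector along the width edge
    v = np.array([-np.sin(t), np.cos(t)])     # unit vector along the height edge
    H, W = feature.shape
    out = np.zeros((bins, bins), dtype=feature.dtype)
    for bi in range(bins):
        for bj in range(bins):
            vals = []
            for si in range(samples):
                for sj in range(samples):
                    # Sample position inside the bin, relative to the box centre.
                    du = ((bj + (sj + 0.5) / samples) / bins - 0.5) * w
                    dv = ((bi + (si + 0.5) / samples) / bins - 0.5) * h
                    px, py = np.array([x, y]) + du * u + dv * v
                    ix = int(round(np.clip(px, 0, W - 1)))
                    iy = int(round(np.clip(py, 0, H - 1)))
                    vals.append(feature[iy, ix])
            out[bi, bj] = max(vals)
    return out
```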

    2.4.3. Non-maximum suppression between sub-images

As mentioned before, the large scene image is cut into a series of sub-images before being sent to the CNN. Once localization and classification are completed for each sub-image, the next task is to splice the sub-images together. However, the common area of adjacent sub-images may contain the same targets, which results in overlapping bounding boxes after splicing. We therefore execute non-maximum suppression between sub-images (NMS-SI) after NMS has been applied within each sub-image.

Before NMS-SI, we need to determine the absolute coordinates (x*, y*) of each pixel in the large scene image, so that the sub-images can be stitched together without confusion later. Suppose a sub-image is the i-th from left to right and the j-th from top to bottom in the large scene image, and (x, y) is the coordinate of any pixel in this sub-image; then (x*, y*) can be calculated by the following equation:

where k is the sliding window stride in Eq. (1).

We handle the overlapping bounding boxes in the common region of adjacent sub-images with the following strategy. First, the bounding boxes are divided into several groups according to whether they share overlapping areas. Then, within each group, the bounding box with the highest classification score is set as the compared box. Finally, the IoUs between the compared box and every other bounding box in the group are calculated, and the bounding boxes whose IoU exceeds a certain threshold are deleted. Fig. 7 illustrates each step of target detection in a large scene SAR image; for convenience of observation, red and green bounding boxes represent the detection results of the same target in two adjacent sub-images.
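The following is a minimal sketch of NMS-SI under the assumptions stated in the text: sub-image detections are first mapped to scene coordinates (assuming the missing equation has the form x* = x + (i − 1)k, y* = y + (j − 1)k for 1-based indices i, j), and overlapping boxes from adjacent sub-images are then suppressed greedily; the greedy form is equivalent in effect to the grouping strategy above, and the IoU threshold is an assumed value.

```python
import numpy as np

def to_absolute(box, col, row, stride=310):
    """Shift a sub-image detection (x, y, w, h, theta) into scene coordinates.

    `col` and `row` are 0-based column/row indices of the sub-image, so this
    matches the assumed x* = x + (i - 1) * k for 1-based i.
    """
    x, y, w, h, theta = box
    return (x + col * stride, y + row * stride, w, h, theta)

def nms_si(boxes, scores, iou_fn, threshold=0.3):
    """Non-maximum suppression between sub-images (a sketch).

    `boxes` are detections already mapped to scene coordinates,
    `iou_fn` is the skew IoU function, `threshold` is an assumed value.
    The highest-scoring box in each overlapping group is kept.
    """
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Drop every remaining box that overlaps the kept one too much.
        order = [i for i in order if iou_fn(boxes[best], boxes[i]) <= threshold]
    return keep
```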

    3. Dataset and experimental setting

    3.1. Dataset and extending

In this paper, the experiments are based on the MSTAR dataset, collected by the US Defense Advanced Research Projects Agency and the Air Force Research Laboratory, and the MiniSAR dataset, released by Sandia National Laboratories in the United States.

At present, the MSTAR dataset is widely used for comparing SAR ATDR algorithms. The images were acquired by an X-band, HH-polarization spotlight SAR with 0.3 m × 0.3 m resolution, and the dataset consists of static slices of ten types of typical military targets with full aspect coverage over 360°, together with 100 environmental scene images. We use the slices of BMP2, T72, and BTR70, which are the more standard classes in MSTAR, as the original material. SAR images and optical images of the three types of military targets are shown in Fig. 8.

The MiniSAR dataset contains a large number of 2510 × 1638 high-resolution urban scene SAR images, including various types of objects such as trees, lawns, buildings, and vehicles, as shown in Fig. 9(d). In the experiments, targets with various attitudes in the MiniSAR images serve as interference signals to verify the robustness of the network.

    Fig. 7. Graphical detection process and NMS-SI.

    Fig. 8. SAR images and optical images of BMP2, T72 and BTR70.

There are a total of 340 MSTAR scene images and 25 MiniSAR scene images used in the experiments, which are randomly fused from environmental scene images and military target slices. Among these, 300 MSTAR images with 2730 targets are used for training; the remaining 40 MSTAR images with 696 targets are used as test set 1, and the 25 MiniSAR scene images, also containing 696 targets, are used as test set 2. Because each large scene image is divided into 30 sub-images during training, the actual number of images used for training is 300 × 30 = 9000. Similarly, test set 1 and test set 2 contain 1200 and 1350 sub-images, respectively. Nevertheless, the number of large scene SAR images in the training set is still insufficient to obtain an excellent target detection network, so the target slices in MSTAR are used to expand the training set. If the slices in MSTAR were resized directly to the appropriate size for training, the difference in target size between the two groups would have a negative impact on network training. Therefore, we randomly pad pixels around the slices to match the size of the sub-images of the scene images, and choose two extended images for each slice as additional training samples. Fig. 9 shows examples of large scene images of MSTAR and MiniSAR, sub-images (400 × 400), and extended images (400 × 400). The specific composition of the training set, with a 15° depression angle, and the test sets, with a 17° depression angle, is shown in Table 1. From the table, it can be seen that the actual size of the training set is 300 × 30 + 1174 = 10174.

    Table 1 Composition of the experimental datasets.

    3.2. Training

All experiments are performed with the deep learning framework TensorFlow [42] and run on a PC with dual E5-2630 v4 CPUs, an NVIDIA GTX-1080Ti GPU (11 GB video memory), and 64 GB RAM.

All initialization parameters in the network are randomly sampled from a Gaussian distribution with mean 0 and standard deviation 0.01. The initial learning rate of the R-RPN is 0.001, the learning rate is divided by 10 every 20 k iterations, and the maximum number of iterations is 80 k. We train a total of 120 k iterations in the R-RDN training phase with the same learning rate schedule as the R-RPN. The R-RPN and R-RDN share the feature maps output by the FEN and are trained in an alternating manner [23].

In order to improve training efficiency, several positive and negative samples are extracted from all anchors generated by the R-RPN to form a mini-batch. An anchor with an IoU higher than 0.5 and an angular difference of less than 15° is taken as a positive sample. In contrast, an anchor with an IoU lower than 0.2, or with an IoU higher than 0.5 but an angular difference greater than 15°, is defined as a negative sample. In the R-RPN stage, a total of 256 anchors form a mini-batch for training, in which the ratio of positive and negative samples is 0.5. Similarly, in the R-RDN stage the total number of positive and negative samples is 128 with the same ratio of 0.5.

    3.3. Evaluation metrics

An excellent target detector not only needs to perform position detection but also must correctly classify the detected targets. To quantitatively evaluate the performance of the detector, we use the detection precision (P), recall (R), and F_1 score (F_1) to assess the position detection performance, and the recognition rate (A) to evaluate the recognition performance. P measures the proportion of correct detections among all predictions, R measures the proportion of correct detections among the ground truth, and F_1 is an overall statistic of the detection performance. A measures the proportion of correct classifications among the positives. The four metrics are defined as follows:

    Table 2 Numbers of BMP2, BTR70,T72 for testing.

where true positive (TP) denotes the number of correct predictions, false positive (FP) denotes the number of erroneous predictions, false negative (FN) denotes the number of missed detections, and N_tr denotes the number of correct classifications.
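The metric formulas themselves are missing from this copy; written out from the definitions above (with A = N_tr/TP following the description of "correct classifications in the positives"), they are:

```latex
P = \frac{TP}{TP + FP},\qquad
R = \frac{TP}{TP + FN},\qquad
F_1 = \frac{2PR}{P + R},\qquad
A = \frac{N_{tr}}{TP}.
```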

    4. Experimental analysis and discussion

    4.1. Recognition and detection results on original slices of MSTAR dataset

In order to verify the recognition and detection performance of the proposed network on the original slices of MSTAR, we select 696 original slices of BMP2, BTR70, and T72 with a 17° depression angle from MSTAR as the test set; the specific settings are shown in Table 2. Since the size of the original slices is quite different from that of the training samples, we pad pixels around the slices to 400 × 400 before the CNN. After detection, the result image is cut back to the original size in proportion, as the final result. Examples of test results, detection accuracy, and recognition accuracy are shown in Fig. 10, Table 3, and Table 4, respectively.

Table 3 shows that the proposed method achieves excellent performance in the detection of the original slices of BMP2, BTR70, and T72. The F_1 scores are all above 0.99, and the R values of BMP2 and BTR70 are 100%. Table 4 shows the confusion matrix of the three kinds of target recognition, in which the diagonal elements record the number of correct recognitions for each target. Although there are some cross-classification errors between BMP2 and T72 due to their similar features, each of the three kinds of target is correctly classified with an over 90% recognition rate, and the overall recognition rate reaches 94.81%, which fully illustrates the effectiveness of the method.

    4.2. Influence of different layer combination models

As mentioned before, there are differences in spatial resolution and semantics between feature maps from different convolution layers of ResNet18, so the selection of fusion layers has a great impact on the performance of the detector, giving different combinations comparative advantages and disadvantages. Besides the "C2+C3+C4" multilayer fusion model used in this paper, there are "C3+C4+C5", "C2+C3+C5", "C2+C3+C4+C5", and so on. In this section, we use the proposed network structure with five different fusion models for target detection to examine the advantages and disadvantages of combining different convolution layers. The first model contains just one layer, C5. The second model combines all layers of ResNet18, namely "C2+C3+C4+C5". The third model integrates C2, C3, and C5. The fourth model is the fusion of C3, C4, and C5, and the final model includes C2, C3, and C4.

    Table 3 Detection results of 3 targets.

    Table 4 The confusion matrix of 3 targets recognition.

Two scene images with different backgrounds are randomly selected from test set 1; each of them contains 15 targets, five each of BMP2, BTR70, and T72, with details shown in Fig. 11. Fig. 12 shows the detection and recognition results of two different models on these scene images. Model C5 misses 1 target and misidentifies 2. The situation improves greatly when C2, C3, and C4 are combined: "C2+C3+C4" achieves the best results in both the simple and the complex MSTAR scene images. The comparison of the performance indicates that the multilayer fusion strategy has a great impact on detection performance.

In order to verify the influence of the multilayer fusion model more comprehensively, we perform a group of experiments on test set 1. Table 5 displays the A, P, R, and F_1 scores of the different multilayer fusion models; N_detected_targets represents the total number of predictions. Compared with the performance of the single layer C5, the multilayer fusion models achieve better results, especially in A and R. The models "C2+C3+C4+C5" and "C2+C3+C5", which achieve more than 98% in both P and R, over 93% in A, and 0.98 in F_1 score, perform much better than the models "C3+C4+C5" and "C5". It is not difficult to see that the models containing C2 achieve better performance, which indicates that shallow features play a vital role in detection networks because of their high resolution and rich target location information. Compared with the other fusion structures, "C2+C3+C4" has the best performance, since its feature maps give full play to the advantages of both shallow and high-level features in target detection. The reason for discarding C5 is that its highly semantic target features contain less useful target location and classification information.

Fig. 10. Examples of test results on original slices. (a), (b), and (c) show the test results of BMP2, BTR70, and T72, respectively.

Fig. 11. Two scene images selected from test set 1. Targets 1-5 are BTR70, 6-10 are BMP2, and 11-15 are T72. (a) Targets distributed on a simple background; (b) targets distributed on a complex background.

Fig. 12. Experiments on MSTAR scene images. The blue labels and boxes represent BMP2, green represent BTR70, and red represent T72; yellow and aqua rectangles represent a missed target and misidentified targets. (a) (b) Detection results of model C5; (c) (d) detection results of model C2+C3+C4.

In summary, the layer combination strategy has a profound impact on detection performance. For target detection in SAR images, since most targets are small, their features are relatively simple; the combination of shallow layers from ResNet18 can provide enough semantic features to complete the detection task, which is exactly why ResNet50, ResNet101, or even deeper networks are not used in this paper.

    Table 5 Experimental results with different layer combination strategies.

    4.3. Influence of maximum sliding strategy, multilayer fusion strategy, rotation anchors, and integrating shadow context strategy

In order to validate the influence of the maximum sliding strategy, rotation anchors, and integrating shadow context strategy, a series of experiments on the large scene SAR images in test set 1 is carried out. Table 6 summarizes the results of 6 experiments, from which we can analyze the main role of each structure by comparing the results of the different methods. As can be seen, our framework achieves the state-of-the-art performance: 96.11% in recognition rate, 99.28% in detection precision, 99.71% in recall, and 0.995 in F_1 score.

Experiment 1 is essentially the basic Faster RCNN with ResNet18 as the feature extraction network, which yields unsatisfactory results. On the basis of experiment 1, we add the maximum sliding strategy, which improves the performance of Faster RCNN to some extent; if the targets are densely distributed or the image size is large enough, the role of the maximum sliding strategy becomes even more important. In experiment 3, the performance of the detector is greatly improved by the application of multilayer fusion, which brings an 18.37% increase in A, a 5.30% increase in P, a 15.95% increase in R, and a 0.114 increase in F_1 score. It can be seen that multilayer fusion plays a vital role in network performance. In experiment 4, rotation anchors are used to generate rotational bounding boxes, which enable accurate multi-oriented target detection. As an aside, although the application of rotation anchors does not obviously improve the detection metrics, it still exerts great influence on observing target dynamics. As shown in Fig. 13, rotational bounding boxes not only reduce the redundancy of the target areas but also help the observer find targets and make further judgments easily. Experiment 5 shows that although the integrating shadow context strategy does not improve the performance as much as the multilayer fusion strategy, it also achieves satisfactory results compared with experiment 1, especially an 11.87% increase in A and a 12.93% increase in R.

In summary, the multilayer fusion strategy provides the most obvious improvement in the overall performance of the network, followed by the integrating shadow context strategy and the maximum sliding strategy. Although the application of rotation anchors improves the metrics only modestly, it has a greater effect in practice.

Fig. 13. Comparison of two different labels. The blue boxes represent BMP2, green represent BTR70, and red represent T72. (a) Horizontal rectangular labels; (b) rotational box labels.

Fig. 14. Experimental results on test set 2. The blue labels and boxes represent BMP2, green represent BTR70, and red represent T72. Purple rectangles represent private cars. (a) The scene image of a residential area; (b) the scene image of a train station; (c) (d) detection results.

    4.4. Robustness analysis of the network

In order to increase the complexity of the MSTAR dataset and verify the robustness of the network, a group of comparative experiments on test set 1 and test set 2 is carried out without changing the settings of the training set, and the results are shown in Table 7. It can be seen from the table that the two experiments achieve satisfactory detection results, with A, P, and R all reaching over 95%, indicating that the network has a certain robustness to complex scene images. However, due to the presence of interference signals, the detection results on test set 2 are worse than those on test set 1: FP increases by 10 and FN by 6, while the declines in A, P, R, and F_1 are all within 2%.

Fig. 14 shows the detection results of the proposed network on test set 2. The two sample images contain 15 targets, 5 each of BMP2, T72, and BTR70, as shown in Fig. 14(a) and (b). Fig. 14(a) shows a residential area with a large number of buildings and trees, and Fig. 14(b) shows a railway station with a large number of tracks, trains, and private cars. Fig. 14 indicates that the proposed method still maintains excellent detection performance even in the presence of many interference signals.

    4.5. Comparisons with other target detection methods

In order to verify the superiority of the proposed method, Constant False Alarm Rate (CFAR) [43], Light level CNN [44], Faster RCNN [23], SSD [14], and RCNN + Fast Sliding [29] are also applied to the test sets. All methods are run in the same experimental environment with the same settings. The experimental results of the six different detection methods are listed in Table 8.

First of all, we analyze the performance of the various algorithms on test set 1. The performance of CFAR is relatively poor because of its sensitivity to noise: the large amount of speckle noise in SAR images causes the CFAR results to contain too many false alarms and missed detections. The network structure of Light level CNN is relatively simple, comprising two convolution layers, two pooling layers, and two fully connected layers, which brings a 13.19% increase in P, 7.04% in R, and 0.102 in F_1 score compared with CFAR. However, its recognition performance is worse than CFAR because of the limited high-level semantic information contained in the extracted features. Faster RCNN and RCNN + Fast Sliding deepen the feature extraction layers and improve the overall performance, especially the recognition rate, but they also miss too many targets, resulting in unsatisfactory performance. SSD reduces the missed detections and improves the recognition rate compared with the former two, but the detection results are still unsatisfactory due to the lower P value. With the maximum sliding, multilayer fusion, and integrating shadow context strategies, the proposed method improves on CFAR by 16.78% in A, 20.02% in P, 13.5% in R, and 0.169 in F_1 score. Our model has the lowest FN and FP and the best values of all four evaluation metrics. The comparison experiments show that the proposed model achieves the best performance in both recognition and detection.

As can be seen from Table 8, the performance trend of the six methods on test set 2 is similar to that on test set 1, further supporting the above analysis. In addition, by comparing the detection results of the various methods on test set 1 and test set 2, we can see that the recognition rate does not fluctuate greatly as the scene complexity increases, indicating that the change of background does not affect the target structure. On the other hand, with the increase of interference signals in the test set, the P, R, and F_1 of CFAR, Light level CNN, and Faster RCNN decrease sharply, while the performance degradation of SSD, RCNN + Fast Sliding, and the proposed method is limited.

    5. Conclusions

Because of the strong speckle noise and the low signal-to-noise ratio, it is very difficult to achieve target detection in large scene SAR images. Inspired by the tremendous achievements of deep convolutional neural networks in the interpretation of visible light images, we apply deep convolutional neural networks to SAR image interpretation and propose a novel contextual rotation region-based convolutional neural network with multilayer fusion to achieve target detection and recognition in large scene SAR images. The framework employs the maximum sliding strategy to segment the large scene image before the RCNN, adopts the multilayer fusion strategy to obtain feature maps with high resolution and rich semantic information, and generates high-confidence prediction boxes with rotation anchors. Additionally, shadow areas serve as context information to help the detector identify and locate the targets accurately. Through several sets of experiments, the validity of the multilayer fusion strategy, maximum sliding strategy, rotation anchors, and integrating shadow context strategy is verified. More importantly, the robustness analysis and the comparisons with CFAR, Light level CNN, Faster RCNN, SSD, and RCNN + Fast Sliding demonstrate that the proposed method has superior robustness and state-of-the-art detection performance.

Despite its best-in-class performance, the superiority of the proposed method comes at the cost of network complexity. In the future, optimization algorithms should aim at achieving excellent performance with simpler network structures. At the same time, other CNN structures can also be applied to SAR image interpretation.

    Author contributions

Zi-shuo Han: Conceptualization, Methodology, Validation, Data curation, Writing - original draft preparation. Chun-ping Wang: Conceptualization, Validation, Formal analysis, Writing - review and editing, Funding acquisition. Qiang Fu: Software, Validation, Supervision.

    Declaration of competing interest

    The authors declare no conflict of interest.
