
    Multi-Modal Scene Matching Location Algorithm Based on M2Det

Computers Materials & Continua, 2023, No. 10

Jiwei Fan, Xiaogang Yang, Ruitao Lu, Qingge Li and Siyu Wang

Department of Automation, PLA Rocket Force University of Engineering, Xi'an, 710025, China

ABSTRACT In recent years, many visual positioning algorithms based on computer vision have been proposed and have achieved good results. However, these algorithms serve a single function, cannot perceive the environment, generalize poorly, and suffer from mismatches that degrade positioning accuracy. Therefore, this paper proposes a location algorithm that combines a target recognition algorithm with a deep feature matching algorithm to solve the problem of unmanned aerial vehicle (UAV) environment perception and multi-modal image-matching fusion location. The algorithm is based on the single-shot object detector with a multi-level feature pyramid network (M2Det) and replaces the original visual geometry group (VGG) feature extraction network with the ResNet-101 network to improve the feature extraction capability of the model. By introducing a deep feature matching algorithm and sharing neural network weights, it realizes an integrated design of UAV target recognition and multi-modal image-matching fusion positioning. When the reference image and the real-time image are mismatched, the dynamic adaptive proportional constraint and random sample consensus algorithm (DAPC-RANSAC) is used to optimize the matching results and improve the correct matching efficiency. The proposed algorithm was compared and analyzed on a multi-modal registration data set to verify its superiority and feasibility. The results show that it can effectively handle matching between multi-modal images (visible image–infrared image, infrared image–satellite image, visible image–satellite image) and remains stable and robust under changes in contrast, scale, brightness, blur, and deformation. Finally, the effectiveness and practicability of the proposed algorithm were verified in an aerial test scene with an S1000 six-rotor UAV.

KEYWORDS Visual positioning; multi-modal scene matching; unmanned aerial vehicle

    1 Introduction

With the outbreak of many military wars, warfare has evolved from an informatized to an intelligent form. Combat tasks place higher requirements on the precision strike capability of weapons and equipment and on the precise positioning and anti-jamming capability of guidance systems. The degree of intelligence of weapons and equipment is changing the style and methods of future wars, and unmanned warfare has become an important development trend [1]. Missiles, unmanned aerial vehicles, and other aircraft require a navigation system to constantly determine their position and adjust their operating state when performing tasks; therefore, further study of UAV navigation systems is very important [2]. At present, navigation systems can be divided into autonomous and non-autonomous navigation according to their degree of dependence on external information. Autonomous navigation does not rely on manually set information sources but uses only onboard equipment to achieve accurate navigation, such as inertial navigation, visual navigation, and Doppler navigation. Non-autonomous navigation refers to technologies that rely on receiving external information for navigation and positioning, such as satellite navigation, radar navigation, and radio navigation [3–5]. Due to the special working environment of aircraft, the main navigation methods used today are global satellite navigation, inertial navigation, and visual navigation. Traditional satellite navigation is greatly affected by the environment, and flight safety faces a huge threat if the satellite signal is jammed or spoofed. Inertial navigation relies on the inertial components carried by the aircraft itself to complete the navigation task. It is not easily disturbed by external information, but as working time extends, serious pose drift appears; therefore, the aircraft's navigation information must be corrected at regular intervals during use [6].

Visual navigation uses computer vision, image processing, and other technologies to obtain the spatial and motion information of UAVs. Common technologies can be divided into two categories according to whether prior knowledge is used. The first category matches UAV aerial image sequences to obtain the position and attitude transformation of the UAV. Mature technologies include visual odometry (VO) and simultaneous localization and mapping (SLAM) [7,8]. SLAM can realize self-positioning while building maps from real-time images in unknown environments. It is widely used in indoor positioning scenes, but its positioning effect is poor in open outdoor scenes. VO uses inter-frame matching of UAV aerial images to calculate the position and attitude transformation of the UAV. Its principle is similar to that of an inertial navigation system, and its positioning results likewise drift over time. The other category is image-matching navigation, in which ground-object scenes of the predetermined flight area are selected as a reference image database. When the UAV reaches the predetermined area, the airborne camera acquires the current scene in real time as a real-time image and sends it to the airborne computer for matching and comparison with the reference images in the database. According to the matching position of the real-time image, the current position of the UAV can be determined [9,10]. Image-matching navigation is an absolute positioning technology that can provide an accurate positioning guarantee for long-endurance UAVs. Image-matching technology was first applied to the terminal guidance of cruise missiles; it has strong autonomy, a simple equipment structure, and high positioning accuracy, and it has gradually developed into a visual navigation technology. Image-matching aligns multiple images acquired by different sensors, at different angles, and at different time phases to determine the relative position relationship between the images in the same coordinate system. Its main purpose is to search for the best matching position of the real-time image in the reference image and to provide basic data for changes in carrier position [11]. In navigation tasks, to give the UAV all-day, all-weather capability, multi-modal remote sensing image-matching is generally used for navigation and positioning [12,13]. Multi-modal remote sensing image-matching refers to matching between different imaging methods (such as satellite images and aerial images) and different sensors (such as visible and infrared images, or visible and satellite images). Differences in imaging characteristics, scale, angle of view, and geometry between multi-modal images bring great difficulties to the matching work. At present, multi-modal image-matching methods are mainly divided into gray-based methods and feature-based methods [14–17].
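To make the gray-based category concrete, the following minimal OpenCV sketch (the file names are placeholders, not data from this paper) slides a real-time image over a reference image and reports the best matching position by normalized cross-correlation:

```python
import cv2

# Placeholder inputs: a satellite reference map and a UAV real-time frame.
reference = cv2.imread("reference_map.png", cv2.IMREAD_GRAYSCALE)
realtime = cv2.imread("realtime_frame.png", cv2.IMREAD_GRAYSCALE)

# Gray-based matching: normalized cross-correlation scans the reference
# image for the best matching position of the real-time image.
scores = cv2.matchTemplate(reference, realtime, cv2.TM_CCOEFF_NORMED)
_, max_score, _, top_left = cv2.minMaxLoc(scores)

h, w = realtime.shape
center = (top_left[0] + w // 2, top_left[1] + h // 2)
print(f"best score {max_score:.3f}, estimated position {center}")
```

Such gray-based matching is simple, but its scores degrade quickly across modalities and under geometric change, which motivates the feature-based methods discussed next.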

In dealing with the actual UAV image-matching and positioning problem, it is necessary to consider the working conditions of the UAV in all weather. Therefore, a high-resolution optical image or satellite image is usually used as the reference image, while the real-time image is mostly an infrared or synthetic aperture radar (SAR) image, so as to meet the all-weather working requirements of the navigation system. In feature-based image-matching methods, feature extraction is a very important component. Traditional feature extraction methods mainly include the scale-invariant feature transform (SIFT) [18], oriented FAST and rotated BRIEF (ORB) [19], features from accelerated segment test (FAST) [20], histogram of oriented gradients (HOG) [21], affine-SIFT (ASIFT) [22], binary robust invariant scalable keypoints (BRISK), and fisheye spherical distorted robust independent elemental features (FSD-BRIEF) [23]. Because traditional feature extraction methods do not fully utilize the data, they can extract only certain aspects of image features; as a result, most of them apply only to image-matching tasks in specific scenes. In multi-modal image-matching, the same feature descriptor often differs greatly across images due to different imaging mechanisms, which makes it difficult to ensure the reliability of the matching results [24]. With the rapid development of deep learning, many feature-matching methods based on convolutional neural networks (CNN) and generative adversarial networks (GAN) have been proposed. Compared with traditional image-matching methods, deep learning methods can learn the shape, color, texture, semantic-level, and other features of images through trained models and have a certain invariance to rotation. Compared with traditional feature extraction methods, they have stronger description ability and higher generalization; they do not require the manual design of complex features, do not rely on the prior knowledge of designers, and can be effectively generalized to image pairs of other modalities [25–29]. In recent years, more and more deep learning feature extraction networks have been proposed, mainly including the VGG series [30,31], residual networks (ResNet) [32,33], dense convolutional networks (DenseNet) [34,35], dual path networks (DPN) [36,37], and neural architecture search networks (NASNet) [38].
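As an illustration of the traditional hand-crafted pipeline listed above, the sketch below detects SIFT keypoints in two images and matches their descriptors with OpenCV (the input file names are placeholders). Across modalities, such descriptors often diverge for the same physical structure, which is the limitation noted in the text:

```python
import cv2

img1 = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)    # placeholder inputs
img2 = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)

# Hand-crafted detector/descriptor: SIFT keypoints with 128-D descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with cross-check; distances are L2 on descriptors.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} tentative SIFT matches")
```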

Recently, some research has been carried out on deep learning image-matching. Reference [39] extracted features from image regions through a CNN and used a metric network composed of three fully connected layers to calculate the matching score of feature pairs, which can effectively reduce the computation of the feature network and improve the accuracy of image-matching. Reference [40] proposed a local feature extraction network with global perception (point pair feature network, PPFNet), which has rotation invariance, can extract local features of 3D point clouds, makes full use of the sparsity of point clouds, and improves the recall rate; however, PPFNet occupies a large amount of memory, so its application effect is poor in some specific tasks. Reference [41] drew on the idea of the transformer and used self-attention and mutual attention layers to obtain the feature descriptors of two images. This method can produce dense matching in areas with little texture, and its end-to-end network structure can optimize the whole feature-matching process through information feedback during training; however, feature descriptors learned by such methods alone cannot guarantee the matching effect. Reference [42] proposed a feature point descriptor discrimination learning matching method, which uses a Siamese network, takes the nonlinear mapping of the CNN output as the descriptor, and uses Euclidean distance to calculate similarity. This method has been applied to different data sets and applications, including rotation and scaling, non-rigid transformation, and illumination variation. A Siamese network is composed of two or more sub-networks whose weights are shared. It is suitable for dealing with "similarity problems" because of its excellent structural characteristics and simple principle, and in recent years it has been widely used in semantic classification and target tracking. If the left and right branch networks adopt different structures or do not share weights, they are called pseudo-Siamese networks. A pseudo-Siamese network can extract the common features of multi-modal images through different network models, which can effectively alleviate the nonlinear differences between images. Reference [43] improved the Siamese network by extracting the features of two images for comparison. Nevertheless, only the similarity between the two images can be obtained, not the target's location in the reference image; to achieve positioning, traversal operations are required, and real-time performance is poor. Due to the different imaging mechanisms of multi-modal images, it is difficult to directly apply traditional homologous image-matching algorithms to multi-modal image-matching. To reduce the differences between multi-modal images, many scholars have transformed images of different modalities into a unified modality through style transfer learning to eliminate imaging differences. Reference [44] proposed a visible–infrared image-matching method based on a generative adversarial model, centered around the idea of generative adversarial network transformation combined with traditional local feature extraction. This method reduces the difficulty of heterogeneous image-matching and provides a new idea for multi-modal image-matching, but its matching accuracy is limited by the completeness of the image conversion model samples [45]. At present, commonly used deep learning image-matching methods include D2-Net [46], R2D2 [47], SuperGlue [48], Key.Net [49], PN-Net [50], CFOG [51], RIOS [52], AffNet+HardNet [53], and so on. The existing algorithms cannot meet the requirements of robustness and real-time operation under all conditions. Some algorithms, such as convolutional neural networks and other deep learning methods, are difficult to apply in engineering when training samples are unavailable and real-time operation is required. In particular, in camera-based UAV image-matching, the commonly used matching methods must consider not only the obvious defects of low target resolution, low contrast, distortion, and zoom when determining the target position, but also matching failures caused by drone vibration, posture changes, viewing angle changes, lighting, and lack of texture. In feature-based image-matching, matching two images through feature point descriptors may produce wrong matching points, which affects the visual positioning effect. Therefore, a method of screening the image-matching results is needed to judge the quality of matching point pairs, better eliminate mismatched pairs, and improve the reliability and accuracy of visual positioning. A practical image-matching algorithm must be insensitive to factors such as imaging characteristics, geometric deformation, scale change, and rotation change. Furthermore, especially with a small number of samples, determining how to use deep learning for feature matching is a great test of the generalizability of the network.

This paper proposes a fusion location algorithm for recognizing targets and matching multi-modal scenes to solve the above problems. We combined M2Det with a deep feature matching algorithm to propose an integrated network structure for target recognition and image-matching. The target features of the real-time image and the reference image were extracted by the improved M2Det target recognition algorithm, deep feature matching was completed based on the trained network structure, and the dynamic adaptive Euclidean distance and random sample consensus (RANSAC) algorithm were used to eliminate mismatches. The experimental results showed that the proposed algorithm has stronger robustness and higher matching accuracy than traditional matching algorithms and can effectively deal with the matching difficulties caused by different imaging modes, resolutions, and scales of multi-modal remote sensing images while operating in real time; this effectively improves the generalization ability of the network. Compared with existing target recognition algorithms, the target recognition rate of the proposed algorithm was higher, giving it a certain practical engineering value.

    The main contributions of this paper are as follows:

• A target recognition and matching fusion localization algorithm is proposed, which combines the M2Det algorithm with a deep feature matching algorithm. The feature extraction network of M2Det was improved to enhance the feature acquisition capability for the target environment. By sharing neural network weights, UAV target recognition and image-matching positioning were integrated to improve the matching performance and positioning accuracy of the algorithm and overcome the functional limitations of a single algorithm.

• The target matching strategy combines the deep-feature-based brute-force (BF) matcher with the dynamic adaptive Euclidean distance RANSAC consistency algorithm. This reduces matching errors and incorrect matching point pairs, optimizes the matching results, and increases the correct matching rate.

• The algorithm presented in this paper was compared and analyzed on a multi-modal registration data set and tested in a real six-rotor UAV flight. The analysis and test results showed that its performance is improved compared with existing matching algorithms; it can meet the requirements of UAV visual positioning and has certain theoretical and practical reference value.

The structure of this paper is as follows: Section 2 describes the problems and preparations. Section 3 describes the research methods. Section 4 gives the experimental results and analysis of the proposed algorithm. Section 5 presents the conclusion.

    2 Problem Description and Preliminaries

To enable UAVs to fly autonomously all day and in all weather, they must first have persistent and stable scene perception and motion perception. Aerial images acquired by UAVs during multi-modal image-matching navigation tasks are generally characterized by high resolution, large differences in imaging characteristics, and high visibility. Processing reference images and aerial images requires a lot of time and computing memory, and it is difficult to unify UAV environment perception with navigation and positioning methods; therefore, developing a UAV target recognition and matching fusion localization algorithm is a very difficult task. This task first obtains the scene information of the UAV flight through the airborne camera; then a target recognition algorithm is used to perceive the flight area, extract target features, and identify targets of interest; finally, the UAV matching and positioning task in the flight area is completed based on a target matching strategy that combines the deep feature matching algorithm with the dynamic adaptive Euclidean distance random sample consensus algorithm. The navigation and positioning algorithm is key for UAVs to be able to perform flight tasks. This paper focuses on the UAV image-matching navigation task and studies a fusion positioning algorithm for UAV target recognition and image-matching, built on a target recognition algorithm and safeguarded by the dynamic adaptive Euclidean distance random sample consensus algorithm.

With the rapid development of deep learning, the model complexity of target recognition algorithms is increasing, and their memory footprint is growing, which places higher requirements on the integration and operating speed of hardware processors. The current mainstream advanced RISC machine (ARM) architecture and some edge computing devices cannot, at this stage, meet real-time requirements when processing large amounts of unstructured data; consequently, the engineering application of some deep neural network algorithms is limited. In recent years, deep neural networks with high recognition accuracy and wide application have mainly included (1) the you only look once (YOLO) series [54–56]; (2) the region-based convolutional neural network (R-CNN) series [57–59]; and (3) the single-shot multi-box detector (SSD) series [60–62]. The SSD series combines Faster R-CNN's anchor mechanism with YOLO's regression idea; therefore, SSD algorithms have both the high accuracy of Faster R-CNN and the speed of YOLO. The SSD algorithm uses the VGG16 network to extract features, detects and classifies feature maps at different scales, generates multiple candidate boxes, and finally produces detection results through non-maximum suppression. However, the backbone network used in this method can only classify targets. Due to the small number of shallow convolution layers and insufficient shallow feature extraction, the SSD algorithm detects small and weak targets poorly. To solve this problem, Zhao et al. proposed the M2Det algorithm based on the multi-level feature pyramid network (MLFPN) structure [63]. This algorithm uses VGG16 as the backbone feature extraction network and integrates MLFPN into the SSD model. Compared with the SSD model, the combined M2Det backbone and MLFPN extract features from the input image, generate dense bounding boxes and category scores according to the learned features, and use non-maximum suppression to obtain the final prediction result. The MLFPN adopted by M2Det inherits the advantages of the feature pyramid feature extraction in the SSD algorithm and refines the size information of the target. The MLFPN module is mainly composed of three parts: the feature fusion module (FFM), the thinned U-shape module (TUM), and the scale-wise feature aggregation module (SFAM). In MLFPN, the features extracted from the backbone network are first aggregated by the FFMv1 module into a base feature with richer semantic information. Then the two largest effective feature layers generated by the TUM module are fused through the FFMv3 module, and the result of this fusion and the base feature are fused through the FFMv2 module to obtain multi-level, multi-scale features. Finally, SFAM stacks the multi-level features obtained from the TUMs along different dimensions, applies an adaptive attention mechanism, forms a multi-level feature pyramid, and generates bounding boxes and category scores with varying confidence. A non-maximum suppression (NMS) prediction step then removes bounding boxes with low confidence to obtain the predictions closest to the target object. The network structure of the M2Det algorithm is shown in Fig. 1 [64,65].

Figure 1: M2Det algorithm network structure
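The following PyTorch sketch shows the structural idea of MLFPN's modules in highly simplified form. It is an illustration of the FFM/TUM composition described above under assumed layer sizes and level counts, not the published M2Det code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFMv1(nn.Module):
    # Fuses a shallow and a deep backbone feature into the base feature.
    def __init__(self, c_shallow, c_deep, c_out):
        super().__init__()
        self.conv_s = nn.Conv2d(c_shallow, c_out // 2, 3, padding=1)
        self.conv_d = nn.Conv2d(c_deep, c_out // 2, 1)
    def forward(self, shallow, deep):
        deep_up = F.interpolate(self.conv_d(deep), size=shallow.shape[2:])
        return torch.cat([self.conv_s(shallow), deep_up], dim=1)

class TUM(nn.Module):
    # Thinned U-shape module: an encoder-decoder that emits one feature
    # map per scale, i.e. the multi-level inputs that SFAM later stacks.
    def __init__(self, c, levels=3):
        super().__init__()
        self.down = nn.ModuleList(nn.Conv2d(c, c, 3, stride=2, padding=1)
                                  for _ in range(levels))
        self.smooth = nn.ModuleList(nn.Conv2d(c, c, 1) for _ in range(levels + 1))
    def forward(self, x):
        feats = [x]
        for d in self.down:                        # encoder path (downsampling)
            feats.append(torch.relu(d(feats[-1])))
        outs = [self.smooth[0](feats[-1])]         # decoder path (upsampling)
        for i in range(len(self.down) - 1, -1, -1):
            up = F.interpolate(outs[-1], size=feats[i].shape[2:])
            outs.append(self.smooth[len(outs)](feats[i] + up))
        return outs                                # one feature map per scale

base = FFMv1(256, 512, 64)(torch.randn(1, 256, 64, 64), torch.randn(1, 512, 16, 16))
print([f.shape for f in TUM(64)(base)])  # a single TUM's multi-scale outputs
```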

    3 The Proposed Approach

The basic idea of the multi-modal scene matching location algorithm based on M2Det (MCML) proposed in this paper is to extract the real-time aerial image features of the UAV with the improved M2Det network and to perceive the real-time image feature information through the target recognition algorithm. A fusion positioning algorithm combining target recognition and multi-modal scene matching based on deep features is constructed. Finally, a dynamic adaptive Euclidean distance random sample consensus algorithm is used to eliminate mismatched point pairs, realizing an integrated network design for UAV target recognition and navigation positioning and achieving the goal of image-matching navigation and accurate positioning. Section 3.1 presents the network structure of the algorithm and details its principle and process. Section 3.2 describes the improved M2Det target recognition algorithm. Section 3.3 discusses the deep feature matching method in detail. Section 3.4 describes the mismatch elimination strategy based on dynamic adaptive Euclidean distance random sample consensus.

    3.1 MCML Algorithm Process

In the visual navigation task of UAV image-matching, the main function of the image-matching algorithm is to realize the navigation and positioning of the UAV. However, the image-matching algorithm cannot obtain visual situation awareness information, and there are illumination, rotation, translation, and affine changes between the UAV aerial real-time image and the reference image pre-stored by the navigation system, which greatly increases the difficulty of the matching task. The main function of the UAV target recognition algorithm is to judge whether there are targets of interest in the image and mark their categories and positions, realizing situation awareness of the flight environment; however, the target recognition algorithm cannot output the geographic position of the UAV. At present, UAV image-matching and target recognition algorithms are studied separately and independently, and an integrated theoretical system has not yet been formed. Therefore, it is urgent to build a fusion positioning algorithm for UAV image-matching and target recognition that completes autonomous positioning and target recognition in complex satellite-denied and unknown environments while balancing real-time performance and accuracy. Based on the improved M2Det target recognition algorithm, this paper introduces the deep feature matching algorithm and the dynamic adaptive Euclidean distance random sample consensus mismatch elimination strategy. By sharing neural network weights, it constructs a fusion localization algorithm for target recognition and multi-modal scene matching based on deep features. The proposed network first uses the improved M2Det algorithm to train on the flight scene and perceives the flight environment situation by extracting image features from the UAV aerial real-time image and the reference image; at the same time, the deep feature matching algorithm matches the images. Finally, the position of the real-time image in the reference image is selected through an affine transformation box to achieve accurate positioning of the UAV. The MCML algorithm flow is shown in Fig. 2.

    3.2 Improved M2Det Target Recognition Algorithm

The original M2Det target recognition algorithm uses VGG as the backbone feature extraction network, which consumes considerable computing resources and memory; general lightweight networks improve detection speed, but detection accuracy also decreases. Classical deep learning feature extraction models mainly include the VGG, Inception, and ResNet series. In terms of model structure, VGG series models have a shallow network depth, Inception series models have a complex network structure, and ResNet series models have a simple structure. When the number of network layers is increased, ResNet solves the problem of vanishing gradients and shows strong performance and excellent generalizability in feature extraction. ResNet models include ResNet-18, ResNet-34, ResNet-50, ResNet-101, and other structures. Features at different levels affect the model differently: deep high-level features help the model classify, and low-level features help the model regress. ResNet-101 can maintain high performance while ensuring a deep network. Therefore, to obtain a backbone feature extraction network better suited to identifying various targets and to improve the detection accuracy and speed for small targets against complex backgrounds, the ResNet-101 network was selected to replace the VGG network. The network structure of ResNet-101 is shown in Fig. 3.
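A minimal sketch of the backbone swap, assuming a torchvision ResNet-101 whose stage outputs are exposed as the shallow/deep feature pair that the fusion module consumes (which stages to tap is an assumption here, not a detail given in the paper):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101

class ResNet101Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        net = resnet101(weights=None)  # load pretrained weights in practice
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.layer1, self.layer2 = net.layer1, net.layer2
        self.layer3, self.layer4 = net.layer3, net.layer4
    def forward(self, x):
        x = self.stem(x)
        c2 = self.layer1(x)    # shallow stage: higher resolution, localizes well
        c3 = self.layer2(c2)
        c4 = self.layer3(c3)
        c5 = self.layer4(c4)   # deep stage: semantically richer
        return c3, c5          # feature pair handed to the fusion module

feats = ResNet101Backbone()(torch.randn(1, 3, 320, 320))
print([f.shape for f in feats])
```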

Figure 2: MCML algorithm flow chart

During the training of the M2Det algorithm, the deviation between the anchor box and the ground-truth box can be large. To prevent gradient explosion and similar situations, the Smooth L1 loss function was used in this paper. In its standard form, the calculation formula is as follows:

$$L_{\mathrm{Smooth}\,L1}(x)=\begin{cases}0.5x^{2}, & |x|<1\\ |x|-0.5, & \text{otherwise}\end{cases}$$

where x is the difference between the prediction box and the real sample label. As single-stage target recognition algorithms suffer from an imbalance between positive and negative samples during training, the improved M2Det target recognition algorithm in this paper added Focal Loss to calculate the classification loss. In its standard form, the Focal Loss calculation formula is as follows:

$$L_{fl}=\begin{cases}-\alpha\left(1-y'\right)^{\gamma}\log y', & y=1\\ -(1-\alpha)\,y'^{\gamma}\log\left(1-y'\right), & y=0\end{cases}$$

where y is the real sample label, y′ is the prediction output, α is the positive and negative sample weight, and γ weights easy-to-classify against difficult-to-classify samples. Finally, the loss function used in this paper was the combination of Focal Loss and Smooth L1:

$$L = L_{fl} + L_{\mathrm{Smooth}\,L1}$$
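A sketch of the combined objective, assuming the standard forms above and a simple sum of the two terms (the balance weight loc_weight is an assumption, not a value from the paper):

```python
import torch

def smooth_l1(x):
    # 0.5 * x^2 for |x| < 1, |x| - 0.5 otherwise: bounds the gradient for
    # large box-regression errors, which prevents gradient explosion.
    absx = x.abs()
    return torch.where(absx < 1.0, 0.5 * absx ** 2, absx - 0.5).sum()

def focal_loss(y_pred, y_true, alpha=0.25, gamma=2.0):
    # y_pred: predicted probabilities in (0, 1); y_true: 0/1 float labels.
    # (1 - p_t)^gamma down-weights easy samples; alpha balances pos/neg.
    p_t = y_pred * y_true + (1 - y_pred) * (1 - y_true)
    alpha_t = alpha * y_true + (1 - alpha) * (1 - y_true)
    return (-alpha_t * (1 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-8))).sum()

def total_loss(cls_pred, cls_true, box_pred, box_true, loc_weight=1.0):
    # Classification term (Focal Loss) plus localization term (Smooth L1).
    return focal_loss(cls_pred, cls_true) + loc_weight * smooth_l1(box_pred - box_true)

cls_pred = torch.sigmoid(torch.randn(8))
cls_true = torch.randint(0, 2, (8,)).float()
print(total_loss(cls_pred, cls_true, torch.randn(4, 4), torch.zeros(4, 4)))
```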

Figure 3: Network structure of ResNet-101

    3.3 Depth Feature Matching Method

Deep feature matching is a method that uses features extracted by a deep neural network to find the pixel correspondence between two images. It requires no external detector or feature descriptor but directly calculates the correspondence between the two images. The main idea of this paper was to use the ResNet-101 network pre-trained by the M2Det target recognition algorithm to extract features, without any special training for feature matching. Classical ResNet-101 feature extraction is mainly used for classification tasks. Generally, the receptive field of the first few convolution layers is very small, and the features obtained are corners, edges, and similar features with high positioning accuracy. As the network deepens, the extracted features become more abstract, their expression ability becomes stronger, their information becomes more complete, and they become more resistant to interference between different source images; however, their positioning accuracy becomes poor. Therefore, to balance the contradiction between feature abstraction and positioning accuracy, this algorithm discarded the STAGE 2–4 features of ResNet-101 and selected the STAGE 1 output as the feature map for keypoint extraction. The feature map is the output of the original image after the multi-layer convolution and pooling of the ResNet-101 network. Since the resolution of the feature map declines after each pooling layer of the convolutional neural network, to maintain the resolution of the feature map after the pooling layer, this paper changed the sliding step of the pooling window from 2 pixels to 1 pixel and replaced maximum pooling with average pooling. Assume that the original image $I$ has input size $w\times h$, and let the feature map extracted by the network be the 3D tensor $F=F(I)$, $F\in\mathbb{R}^{w\times h\times n}$, where the number of channels is $n=512$. To filter out the most significant feature points in the $\mathbb{R}^{w\times h\times n}$ feature space, a maximum filtering strategy is adopted along the channel direction of the high-dimensional feature map and in the local plane. Set:

$$D^{k}=F_{::k},\quad k=1,\ldots,n$$

where $D^{k}$ is the feature map of channel $k$, $D^{k}\in\mathbb{R}^{w\times h}$, and $D^{k}_{ij}$ is the characteristic value at point $(i,j)$ on that feature map. For a candidate point $P(i,j)$, first select the channel $k$ with the largest response value among the $n$ channel feature maps, obtain the corresponding feature map $D^{k}$, and finally verify whether $D^{k}_{ij}$ is a local maximum. If these two conditions are met, the candidate point $P(i,j)$ is a significant feature point. At the same time, the 512-dimensional channel vector at $(i,j)$ is extracted from the feature map $F$ and normalized with the L2 norm to obtain the feature descriptor:

$$\hat{d}_{ij}=\frac{d_{ij}}{\left\|d_{ij}\right\|_{2}}$$

where $d_{ij}=F_{ij}$, $d\in\mathbb{R}^{n}$. Since extreme points in discrete space are not true extreme points, to obtain more accurate keypoint positions, local interpolation of the feature map was used to reach pixel-level positioning accuracy, and the descriptor was likewise obtained by bilinear interpolation in the neighborhood. Finally, $\hat{d}$ is an $n$-dimensional vector obtained by interpolation, which can be matched according to Euclidean distance. Two feature point matching methods are provided in the OpenCV3 open-source library: BF matcher matching and FLANN-based matcher matching. Since the BF matcher tries all possible matching points, it finds the global best matching point. The FLANN-based matcher approximates the match; it finds a local best matching point, and the matching time is short. To improve the matching accuracy and obtain the global best matching point, a bidirectional BF matcher was selected for the matching operation in this paper. The matching algorithm flow is shown in Fig. 4.

Figure 4: Deep feature matching algorithm
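The sketch below restates the detection and matching steps of Section 3.3 in NumPy/OpenCV terms. The (h, w, n) feature map is assumed to come from the stride-1, average-pooled STAGE 1 described above, and the 3×3 local-maximum test on the strongest-channel response is a simplification of the two-condition check in the text:

```python
import cv2
import numpy as np

def detect_and_describe(feature_map, max_pts=500):
    # feature_map: (h, w, n) float32 array, e.g. the modified STAGE 1 output.
    h, w, n = feature_map.shape
    response = feature_map.max(axis=2)   # strongest channel response per pixel
    # 3x3 local-maximum test on that response map (a simplification of
    # checking the local maximum on the selected channel k itself).
    local_max = response == cv2.dilate(response, np.ones((3, 3), np.uint8))
    ys, xs = np.nonzero(local_max)
    order = np.argsort(-response[ys, xs])[:max_pts]
    ys, xs = ys[order], xs[order]
    desc = feature_map[ys, xs, :].astype(np.float32)   # n-dimensional d_ij
    desc /= np.linalg.norm(desc, axis=1, keepdims=True) + 1e-12  # L2 normalize
    kpts = [cv2.KeyPoint(float(x), float(y), 1) for y, x in zip(ys, xs)]
    return kpts, desc

def bidirectional_match(desc1, desc2):
    # crossCheck=True keeps (i, j) only when i's best match is j and j's
    # best match is i, i.e. the bidirectional brute-force strategy above.
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    return bf.match(desc1, desc2)
```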

    3.4 DAPC-RANSAC Mismatch Elimination Strategy

Due to the large differences between different source images, a large number of mismatches will inevitably occur during image-matching. In this paper, a method combining dynamic adaptive Euclidean distance constraints and random sample consensus was used to eliminate mismatched point pairs. It is generally believed that for the j-th matching point pair, if the distance dis1 to the first (best) matching point is much smaller than the distance dis2 to the second matching point, the matching point pair is good. Traditional algorithms use a fixed threshold t to select candidate matching point pairs, that is, a pair is retained when dis1 < t · dis2. A fixed ratio, however, adapts poorly to the differences between multi-modal images, so this paper instead derives a dynamic threshold from the matching data itself, using the mean difference between the first and second distances:

$$\bar{t}=\frac{1}{N}\sum_{j=1}^{N}\left(dis'_{j}-dis_{j}\right)$$

In the formula, $N$ is the total number of matching point pairs, $dis_{j}$ is the distance to the first matching point, and $dis'_{j}$ is the distance to the second matching point. For each pair of matching points, the screening condition is that the first distance is less than the second distance minus the mean distance difference, as follows:

$$dis_{j}<dis'_{j}-\bar{t}$$

By obtaining the mean distance difference from the image-matching data itself as the discrimination criterion, the algorithm adapts well to the differences between different source images and effectively filters the first round of feature-matching point pairs. It provides a good initial value for the subsequent RANSAC algorithm, reduces the number of initial matching points, and improves the robustness and operating efficiency of the algorithm. The RANSAC algorithm is a widely used parameter estimation method. Through continuous iteration, it finds the optimal parameter model for a data set containing both "matching points" and "mismatched points" and finally eliminates the mismatched points. Using bidirectional matching, this paper improved the matching accuracy of the algorithm. The matching process was as follows (a sketch of the screening pipeline follows the step list):

(a) The images image1 and image2 were read.

(b) The deep feature points of image1 and image2 were detected, and two sets of feature points, points1 and points2, were obtained.

(c) For each point i in points1, the corresponding point j in points2 was found.

(d) For each point k in points2, the corresponding point l in points1 was found.

(e) If the matching point of point i was j and the matching point of point j was i, the matching was successful.

(f) The dynamic Euclidean distance between matching point pairs was calculated, and the first round of false matching point pairs was eliminated.

(g) The RANSAC algorithm was used to eliminate mismatched point pairs in the second round.
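A minimal sketch of the two-round screening in one direction (the cross-check of steps (c)-(e) can be reproduced by running knnMatch both ways). The input pairs are assumed to come from bf.knnMatch(desc1, desc2, k=2) on a non-cross-checked BF matcher, and the round-1 condition implements the screening rule above as read here:

```python
import cv2
import numpy as np

def dapc_ransac(kp1, kp2, knn_pairs):
    # knn_pairs: list of (best, second-best) cv2.DMatch pairs per query point.
    dis = np.array([m.distance for m, _ in knn_pairs])
    dis2 = np.array([s.distance for _, s in knn_pairs])
    t_bar = float(np.mean(dis2 - dis))           # mean first/second distance gap
    # Round 1: dynamic adaptive constraint dis_j < dis'_j - t_bar.
    keep = [m for (m, s) in knn_pairs if m.distance < s.distance - t_bar]
    if len(keep) < 4:
        return keep, None                        # a homography needs >= 4 pairs
    src = np.float32([kp1[m.queryIdx].pt for m in keep]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in keep]).reshape(-1, 1, 2)
    # Round 2: RANSAC fits a homography and rejects the remaining outliers.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if mask is None:
        return keep, H
    inliers = [m for m, ok in zip(keep, mask.ravel()) if ok]
    return inliers, H
```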

Fig. 5 shows the matching results of the two algorithms. It can be seen that the RANSAC algorithm solved the mismatch problem of the deep feature matching algorithm and obtained good matching results.

Figure 5: Comparison of matching results of two algorithms

    4 Experimental Results and Analysis

To evaluate the feasibility and superiority of the proposed algorithm, this paper used OpenCV3 and MATLAB R2016b on the University-Release dataset and on aerial video from an S1000 six-rotor UAV for experimental verification. The University-Release dataset captures 1652 buildings from 72 universities, with a total of 50218 images covering three views: ground streetscape, satellite, and drone perspectives. The operating system of the ground control station was Ubuntu 18.04, and the processor was an Intel(R) Core(TM) i7-11800U CPU @ 2.30 GHz laptop with 32 GB of RAM. Nine classical matching algorithms were selected to compare matching performance, and the UAV aerial video was used to analyze and verify the positioning effect of the algorithm.

    4.1 Matching Efficiency Comparison

To verify the matching performance of the proposed algorithm, visible–satellite, infrared–satellite, and infrared–visible image pairs from the multi-modal data set were selected for matching and positioning experiments. The experimental scenes contained contrast, scale, brightness, blur, low-resolution, and other scene changes. The experiment compared SIFT, SURF, ORB, AKAZE, D2-Net, LoFTR, SuperPoint, Patch2Pix, AffNet+HardNet, R2D2, MCML, and other algorithms for feature matching. Scene A is the matching result between a visible light image and a satellite image, Scene B between an infrared image and a satellite image, and Scene C between an infrared image and a visible light image. The experimental results are shown in Fig. 6, and the performance comparison of the matching methods is shown in Tables 1–3. From the matching results, the MCML algorithm had a higher matching success rate and no false matching; it could effectively handle matching between multi-modal images and adapted well to complex conditions such as inter-image scale change, blur, deformation, and low resolution. Because image-matching methods serve different application fields, it is difficult to define the quality of image-matching results with a single unified evaluation index. In this paper, the performance of the algorithms was compared using the matching positioning error, i.e., the proximity between the position of the UAV derived from the relative positions of matched feature points and its real position. The L2 distance was used to measure the positioning center error between real-time images and reference images. To evaluate the matching performance of the MCML algorithm in complex environments, Fig. 7 shows the comparison of center position errors in three typical environments, and the legend shows the average center position error of each algorithm across the three environments. It can be seen from Figs. 6 and 7 that MCML had obvious advantages in correct matching points over the traditional algorithms SIFT, SURF, ORB, and AKAZE. D2-Net, LoFTR, SuperPoint, Patch2Pix, R2D2, AffNet+HardNet, and other deep learning matching algorithms produced many correct matching points; however, due to false matches, their positioning errors were large. The matching efficiency comparisons in Tables 1–3 show that the MCML algorithm had good overall matching accuracy, good generalization and stable feature extraction for cross-modal image-matching, good robustness to contrast, scale, brightness, blur, deformation, and other changes, and a good positioning effect.
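The center-error metric itself is simply the L2 distance between the matched center and the ground truth; a one-line sketch:

```python
import numpy as np

def center_position_error(pred_center, true_center):
    # L2 distance (pixels) between the matched center of the real-time
    # image in the reference image and its ground-truth position.
    return float(np.linalg.norm(np.asarray(pred_center) - np.asarray(true_center)))

print(center_position_error((320, 240), (327, 236)))  # 8.06 px
```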

    4.2 UAV Visual Localization Test

To verify whether the proposed algorithm can also work at night, we used a Pixhawk flight control board to independently build an S1000 six-rotor UAV target recognition and multi-modal scene matching fusion positioning test system. The complete UAV positioning and testing system included a ground station, a dual optical pod, data transmission equipment, wireless image transmission equipment, and an airborne computer for processing. The ground station was used to control UAV flights and monitor flight status. The real-time flight data of the UAV could be transmitted to the ground station through the data transmission equipment, and commands from the flight commander at the ground station could also be transmitted to the UAV. The data transmission equipment was the 3DR Radio V5 telemetry module, with a frequency of 915 MHz, a transmission power of 1000 mW, and a transmission range of 5 km. The dual optical pod used in this paper was the TSHD10T3 model. The images collected by the pod were output through an HDMI interface at a frame rate of 60 FPS. The pod met the image acquisition requirements of target recognition and image-matching positioning and ensured real-time image recognition. During target recognition and image-matching, the dual optical pod captured the real-time image and transmitted it to the airborne computer for processing. On the airborne computer, target recognition and image-matching positioning of the image sequences were realized through the designed fusion positioning algorithm. Through photographic geometry calculations, the coordinates of the UAV and the target were obtained in the world coordinate system, so the position and attitude of the UAV could be adjusted to achieve visual positioning. The configuration of the test system built in this paper is shown in Fig. 8.

Figure 6: Experimental comparison of various algorithms for test sequences

Table 1: Scene A: Performance comparison of image-matching methods

Table 2: Scene B: Performance comparison of image-matching methods

Table 3: Scene C: Performance comparison of image-matching methods

Figure 7: Comparison results of center position errors in three typical environments

The target recognition and multi-modal scene matching fusion location algorithm was based on deep features. The night infrared UAV location test scene is shown in Fig. 9; the scene had unstructured environmental features. To verify the effectiveness of the proposed algorithm, the experiment used the UAV's forward-looking and downward-looking flight angles of view. The resolution of the satellite reference image was set to 640 × 1064, the UAV's flight altitude was 495–500 m, and the flight distance was 2 km. The real-time image collected by the UAV was transmitted directly to the onboard computer through the pod network port for processing. To improve operating efficiency and save hardware computing resources, the aerial image was first preprocessed, and the resolution of the infrared image collected by the pod was adjusted to 640 × 512. The real-time visualization results of the infrared images taken by the UAV are shown in Fig. 9. Fig. 10 shows the result of matching the UAV reference image with the infrared image together with the target recognition result. It can be seen from Figs. 9 and 10 that the matching result of the MCML algorithm coincided completely with the flight path; the MCML algorithm achieved good recognition and matching results and could output the center point of the real-time image in the reference image, achieving navigation and positioning of the UAV.

Figure 8: UAV visual localization test system

Figure 10: UAV matching reference map and target recognition matching results (real-time map matching center point, UAV flight path)

    4.3 Discussion

Compared with traditional image-matching methods, the MCML algorithm proposed in this paper greatly improves matching accuracy and generalization; however, most deep learning algorithms rely on the powerful computing capability of GPUs. Recently, deep learning has been gaining popularity in image-matching because deep convolutional neural networks, trained on large amounts of data, extract deep features of the target and improve the matching effect. It is still difficult, however, to apply deep learning to image-matching in practice: (i) image-matching algorithms need high real-time performance. Most deep learning image-matching algorithms use multi-layer convolutional neural networks to extract deep features from the image, improving the matching and positioning effect. Nevertheless, as convolution layers and training networks become more complex, higher requirements are placed on training samples and computation. (ii) The matching region is arbitrary; an image classification network may therefore be unsuitable for image-matching and positioning, because it was trained on a data set intended for image classification, which also greatly tests the generalizability of the deep learning network. The algorithm proposed in this paper not only demonstrates the environmental perceptibility of the target recognition algorithm but also integrates the image-matching and positioning function of deep features, which effectively solves the problem of integrating target recognition, matching, and positioning and makes up for the shortcomings of a single algorithm. Although its operating speed is not as fast as some deep learning image-matching methods, MCML is easy to implement, does not need much prior knowledge of the matching region, achieves a good image-matching effect, and has practical engineering value.

    5 Conclusion

The positioning function of UAVs is a challenging research topic but is essential for autonomous navigation. To solve the problem of real-time and robust matching when the UAV's heterogeneous images differ greatly, this paper proposed a fusion localization algorithm combining target recognition and multi-modal scene matching based on deep features. The algorithm extracts the deep features of reference images and real-time images while sensing environmental information through the target recognition algorithm, and it uses the deep feature matching algorithm together with the dynamic adaptive Euclidean distance random sample consensus mismatch elimination strategy to complete target recognition and matching positioning tasks. The experimental results showed that the proposed algorithm adapts well to different flight environments and can complete matching and positioning tasks between infrared and visible images, infrared and satellite images, and visible and satellite images. Compared with other matching algorithms, it has stronger robustness and higher matching accuracy. While ensuring real-time operation, it effectively improves the generalizability of the network, realizes an integrated design of target recognition and matching positioning, and reduces computation. The recognition and matching performance are improved especially when illumination, scale, and imaging angle change greatly. Next, we plan to improve the algorithm's speed while maintaining the matching and positioning effect.

Acknowledgement: The authors are grateful to Zhengjie Zhu for her help with the preparation of the figures in this paper.

Funding Statement: This work was supported in part by the National Natural Science Foundation of China under Grant 62276274, in part by the Natural Science Foundation of Shaanxi Province under Grant 2020JM-537, and in part by the Aeronautical Science Fund under Grant 201851U8012 (corresponding author: Xiaogang Yang).

Author Contributions: Conceptualization, X.Y., J.F. and R.L.; Methodology, J.F. and R.L.; Software, J.F.; Investigation, Q.L. and S.W.; Resources, Q.L.; Writing-original draft preparation, J.F. and R.L.; Writing-review and editing, X.Y., J.F. and S.W.; Visualization, J.F.; Supervision, J.F. and S.W.; Project administration, X.Y.; Funding acquisition, X.Y. All authors have read and agreed to the published version of the manuscript.

Availability of Data and Materials: The datasets used or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
