
    The concept of sUAS/DL-based system for detecting and classifying abandoned small firearms

Defence Technology, 2023, Issue 12

Jungmok Ma (a), Oleg A. Yakimenko (b)

(a) Department of Defense Science, Korea National Defense University (KNDU), Republic of Korea

(b) Department of Systems Engineering, Naval Postgraduate School (NPS), USA

Keywords: Small firearms; Object detection; Deep learning; Small unmanned aerial systems

ABSTRACT: Military object detection and identification is a key capability in surveillance and reconnaissance. It is a major factor in warfare effectiveness and warfighter survivability. Inexpensive, portable, and rapidly deployable small unmanned aerial systems (sUAS), in conjunction with powerful deep learning (DL) based object detection models, are expected to play an important role in this application. To prove the overall feasibility of this approach, this paper discusses some aspects of designing and testing an automated detection system to locate and identify small firearms left at the training range or on the battlefield. Such a system is envisioned to involve an sUAS equipped with a modern electro-optical (EO) sensor and relying on a trained convolutional neural network (CNN). A previous study by the authors, devoted to finding projectiles on the ground, revealed certain challenges such as small object size, changes in aspect ratio and image scale, motion blur, occlusion, and camouflage. This study attempts to deal with these challenges in a realistic operational scenario and goes further by not only detecting different types of firearms but also classifying them into different categories. This study used a YOLOv2 CNN (with a ResNet-50 backbone network) to train the model with ground truth data and demonstrated a high mean average precision (mAP) of 0.97 in detecting and identifying not only small pistols but also partially occluded rifles.

    1.Introduction

Artificial Intelligence (AI) technologies have already been incorporated into a variety of systems and have shown great potential for new opportunities in every aspect of our lives. Among them, computer vision is one of the most popular areas; it enables computers to acquire meaningful information from visual inputs by utilizing AI technology. This allows machines to detect objects of interest with no human input, which can be very useful for surveillance, autonomous driving, rescue operations, resource management, etc.

In the military, Automatic Target Recognition (ATR) is frequently used as "an umbrella term for the entire field of military image exploitation" [1]. ATR has been actively studied since it is a "key to automatic military operations and surveillance missions" [2]. Accurate and fast detection of military objects can ultimately increase the survivability of warfighters. In the stricter sense, ATR aims at automatically locating and classifying targets from the collected sensor data. Automatic Target Detection and Classification (ATD/C) is also widely used in a similar manner, where a target represents "any object of military interest" [1]. ATD/C can significantly improve the Orient stage in the OODA (Observe, Orient, Decide, Act) loop, which represents the military decision-making cycle, especially in the case of non-stationary targets. A faster and more accurate OODA cycle ensures major advantages for all units from the operational to the tactical and strategic levels. In this paper, ATD/C is applied to small firearms.

Conventional object detection methods such as feature matching, threshold segmentation, background identification, etc., are advantageous in certain situations but are generally limited to non-complex, non-diverse scenarios and require manual interference [3,4]. In recent years, deep learning (DL) based models, or variants of Convolutional Neural Networks (CNNs), have gained great popularity owing to their adaptability and flexibility for object detection. Also, CNN-based detectors such as YOLO (You Only Look Once) [5], SSD (Single Shot MultiBox Detector) [6], and Faster RCNN (Region-based CNN) [7] have already demonstrated state-of-the-art performance on several public object detection datasets.

Another popular trend for military object detection (MOD) is utilizing small Unmanned Aerial Systems (sUAS). sUAS provide higher efficiency, flexibility, and mobility compared to traditional ground-based platforms, so they are widely used in diverse civil and military missions. According to the U.S. Army UAS roadmap 2010-2035 [8], sUAS will provide reconnaissance and surveillance capability at the battalion level and below. Currently, a next-generation sUAS is being developed with a focus on autonomous capability [9].

However, object detection with sUAS can face many challenges such as small object size, changes in aspect ratio and image scale, motion blur, and occlusion [10,11]. Moreover, detecting military objects in general adds further difficulties such as lack of data, camouflage, and complex backgrounds [3]. Using sUAS for MOD can also suffer from the limited computational and memory resources available on board [2].

In this paper, a DL-based military object (small firearm) detection system is prototyped and tested using a realistic scenario and real object imagery collected with a typical sUAS electro-optical (EO) sensor. The contribution of the paper can be summarized as follows. First, there have been few studies providing proof of the feasibility of DL-based sUAS for MOD with real data. The recent study by Cho et al. [12] proposed a YOLO-based unexploded ordnance (UXO) detection model with sUAS and assessed its effectiveness on a small set of imagery collected under severe security constraints. That paper concentrated on small firearm detection only, leaving classification for follow-on research. The current paper extends the work of Cho et al. [12] by conducting both detection and classification using real data and by exploring object detection algorithms other than YOLO, explicitly demonstrating YOLO's superiority. Second, the military data collection and analysis are designed to tackle the challenges of small object size, motion blur, occlusion, and camouflage. While this paper does not provide any methodological advancement in object detection and classification in general, it represents a real-data-driven feasibility study of employing the widely used YOLO algorithm in a specific real-life application. The achieved results proved that the overall concept is viable and demonstrated a very high precision of detection and classification. Reliability and robustness of small firearm detection over a wider range of underlying terrain and weather conditions could be enhanced by utilizing recent techniques such as going beyond the common image augmentation used in this paper and employing the albumentation approach [13], as well as incorporating oriented bounding boxes [14]. These proven enhancements would be needed for a higher technological readiness level of the proposed system but were outside the scope of this paper.

The paper is organized as follows. Section 2 provides an overview of previous research related to DL-based object detection methods in general and MOD in particular. Section 3 describes the operational scenario and data collection procedure utilized in this research, while Sections 4 and 5 design and test the automatic detector based on the military object videos recorded by the sUAS. The paper ends with conclusions.

    2.Background

This section provides an overview of the known previous research efforts dedicated to using DL for object detection and to DL-based MOD systems integrated with sUAS.

    2.1.DL-based object detection

Object detection methods can be classified into traditional image processing methods and DL-based methods [10]. The DL-based methods can be further subdivided into region-based (or two-stage network) and single-shot (or one-stage network) approaches.

Region-based detection establishes Regions of Interest (ROIs) at the first stage, and then defines the bounding boxes (BBs) around potential objects of interest and classifies them using a trained DL network. Methods such as RCNN [15], Fast RCNN [16], and Faster RCNN are widely known and popular in this category. After RCNN was introduced in 2014 as a two-stage network, its major weakness was slow speed, since each candidate region required a separate CNN forward propagation. Fast RCNN uses an ROI pooling layer so that the CNN forward propagation is performed only once on the entire image. Faster RCNN, proposed in 2016, uses a Region Proposal Network (RPN) in place of selective search, a region proposal algorithm.

While region-based detection methods can achieve high localization and recognition accuracy, their inference speed tends to be insufficient, especially for real-time applications. That is why single-shot detection methods, which compute the BBs (spatial locations of objects) in a single step, were developed. The SSD and YOLO algorithms belong to this latter category.

SSD uses a single feed-forward CNN to calculate scores for the potential presence of objects within the BBs and detects objects using the Non-Maximum Suppression (NMS) procedure. NMS selects the best BB and suppresses other BBs that overlap with it based on the Intersection over Union (IoU). The IoU of two boxes is computed as their intersection area divided by their union area. In addition to the base network (VGG-16), SSD features multi-scale feature maps, convolutional predictors, and default boxes (similar to anchor boxes) of different aspect ratios [6].
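For illustration, a minimal MATLAB sketch of these two operations is given below (assuming the Computer Vision Toolbox; the box coordinates, scores, and labels are made-up values, not data from this study). It computes the IoU of two axis-aligned boxes with bboxOverlapRatio and then applies class-aware NMS with selectStrongestBboxMulticlass.

% Two axis-aligned boxes in [x y width height] format (illustrative values)
boxA = [100 100 60 40];
boxB = [120 110 60 40];

% IoU = intersection area / union area ('Union' is the default ratio type)
iou = bboxOverlapRatio(boxA, boxB);

% A small set of raw detections: boxes, confidence scores, and class labels
bboxes = [100 100 60 40; 104 102 58 42; 300 200 80 30];
scores = [0.90; 0.75; 0.60];
labels = categorical({'Pistol'; 'Pistol'; 'RifleBK'});

% NMS: keep the highest-scoring box per object and suppress boxes that
% overlap it with IoU greater than 0.5
[selBoxes, selScores, selLabels] = selectStrongestBboxMulticlass( ...
    bboxes, scores, labels, 'RatioType', 'Union', 'OverlapThreshold', 0.5);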

YOLO is also a one-stage network; it splits the input image into a grid and predicts BBs with confidence scores for each grid cell. Then, NMS selects the best BB based on IoU. The authors of YOLO characterized the model as refreshingly simple, relying on a single CNN, extremely fast, and reasoning globally about the whole image. YOLO evolved into YOLOv2 [17], YOLOv3 [18], YOLOv4 [19], and YOLOv5 [20]. YOLOv2 was introduced in 2016 with the key features of anchor boxes estimated with K-means clustering, batch normalization, and high-resolution classifiers for better accuracy, in addition to the Darknet-19 backbone network. YOLOv3 was rolled out in 2018 with multi-scale features (three scales) and Darknet-53 as the backbone network. Starting with YOLOv4, researchers keep adding new features in the newer YOLO versions [21-23].

Reviewing multiple research efforts on DL-based object detection employing sUAS, Ramachandran and Sangaiah [10] and Wu et al. [11] established that Faster RCNN, YOLO, and SSD were the three most commonly used methods.

    2.2.DL-based MOD systems with sUAS

A majority of modern sUAS (aka Group 1 and Group 2 UAS) feature light weight (less than 30 kg) and endurance not exceeding 1 h [11]. Combined with remote sensors and wireless communications, these sUAS play an important role in a variety of missions including surveillance, monitoring, smart management, delivery services, etc. The literature review reveals that, despite remarkable progress in developing object detection algorithms (as briefly described in Sec. 2.1), they are not yet widely employed on sUAS [10,11].

In Ref. [11], DL-based object detection research was conducted to overcome the challenges of object detection using sUAS, such as small objects, scale and direction diversity, and detection speed. Chen et al. [24] proposed RRNet, which removed prior anchors and added a re-regression module to better capture small objects with scale variation. This research used a public dataset (VisDrone) and was able to detect people and vehicles captured by an sUAS EO sensor. Liu et al. [25] developed an sUAS-based YOLO platform, which attempted to improve the network structure of YOLOv3 by enlarging the receptive field for small-scale targets. Both a public dataset (UAV123) and a collected dataset were used to detect people. Liang et al. [26] proposed a feature fusion and scaling-based SSD model for small objects with a public dataset (PASCAL Visual Object Classes (VOC)). Jawaharlalnehru et al. [27] built a YOLOv2 detection model with some revised approaches such as fine-tuning with both the ImageNet dataset and a self-made dataset, changing input sizes of the model during training, and changing NMS to a maximum-value operation. There have also been some attempts to enhance detection speed, including SlimYOLOv3 [28], Efficient ConvNet [29], and SPB-YOLO [30], as well as to determine the best angle for detecting targets from UAVs [31].

Even fewer publications (in the open literature) are available on the challenges and successes of MOD missions. One of the main reasons is the lack of available data due to operational and security issues. D'Acremont et al. [32] reported training CNN models using simulated data and testing on infrared (IR) images (the SENSIAC (Military Sensing Information Analysis Center) open dataset). Zhang et al. [33] constructed an intelligent platform which can generate a near-real military scene dataset with different illumination, scale, angle, etc. Yi et al. [34] suggested a benchmark for MOD which has characteristics such as complex environment, inter-class similarity, scale, blur, and camouflage. Liu and Liu [2] utilized fused images of the mid-wave infrared (MWIR) image, visible image, and motion image as an input for Fast RCNN in lieu of sufficient data; the SENSIAC dataset was used to detect military vehicles. Janakiramaiah et al. [4] proposed a variant of the Capsule Network CNN model, a multi-level capsule network, for the case of a small training dataset. Images of five military objects (armored car, multi-barrel rocket, tank, fighter plane, gunship) were collected from the Internet for the classification task.

Nevertheless, a few studies have reported success in conducting MOD using real imagery captured by an sUAS sensor. For example, Gromada et al. [35] used YOLOv5 to detect military objects from open synthetic aperture radar (SAR) image datasets. The objects included tanks and buildings from the MSTAR dataset and ships from the SAR Ship Detection Dataset (SSDD). Cho [36] proposed a YOLOv2-based UXO detection model using fused images of the Blue, Green, Red, Red Edge, and Near Infrared spectra captured by an sUAS multispectral sensor. However, the UXO data was collected using a ground-based system. The feasibility of the proposed paradigm and the performance of the developed detector were tested on small firearms but were limited to detection only (i.e., no classification). This paper continues this latter line of effort, but this time involves both detection and classification.

    3.CONOPS and imagery collection procedure

The tactical background for the MOD mission explored in this research and the concept of operations (CONOPS) can be described as follows. A military commander is required to conduct a MOD mission over a certain operating area. One or multiple sUAS are deployed to execute this mission utilizing standard onboard EO sensors. The sUAS executes a serpentine search pattern, transmitting imagery to the ground control station, where the objects of military interest are localized and classified using the pretrained CNN.

For the small firearm detection mission, the multirotor sUAS can fly a pattern maintaining a fixed height of ~3 m above ground level at a constant speed of ~1.4 m/s. In this case, during a 30-min flight, a single sUAS can cover on the order of 1300 m2.

In this study, a DJI Inspire 1 Pro (Fig. 1(a)) was integrated with a Zenmuse X5 sensor (Fig. 1(b)). The DJI Inspire features a maximum takeoff weight of 3.5 kg, a maximum speed of 18 m/s, a maximum flight time of ~15 min, and a maximum line-of-sight transmission distance of 5 km. The Zenmuse X5 is a 16-megapixel EO RGB sensor with a video resolution of 4096 pix × 2160 pix and a shutter speed of 1/8000 s.

Fig. 1. (a) sUAS; (b) EO sensor used in this research.

To collect imagery for CNN training, this sUAS flew search patterns over a 28.7 m × 15.2 m (440 m2) operating area. Three classes of small firearms were considered: pistol (Fig. 2(a)), black rifle (RifleBK) (Fig. 2(b)), and brown rifle (RifleBN) (Fig. 2(c)). During the data collection, the positions and orientations of the objects were varied. Obviously, finding and classifying a pistol appears to be a more challenging task compared to finding the larger objects (rifles), because a smaller object can sometimes be confused with a shadow or a black stone. Confusion may also occur when even the larger objects are partially obscured. To this end, Fig. 2(c) illustrates an example of part of a brown rifle being intentionally hidden in the grass/leaves.

Fig. 2. Objects of interest: (a) Pistol (in red box); (b) Black rifle; (c) Brown rifle.

In total, 18 video clips, each about 1 min long, were collected. These video clips were transformed into 3140 images with a resolution of 4096 pix × 2160 pix. Then, the images were resized to 416 × 416 pixels to reduce the computational cost of DL. Since it is common to have partially occluded objects in military applications [37], images with firearms that were less than half occluded were included in the dataset as well.
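A minimal MATLAB sketch of this conditioning step is shown below; the folder and file names and the frame-sampling stride are illustrative assumptions, not the authors' actual pipeline.

% Extract frames from the collected video clips and resize them to the CNN input size.
videoFiles = dir(fullfile('videos', '*.mp4'));   % assumed location of the clips
outDir = 'frames_416';
if ~exist(outDir, 'dir'), mkdir(outDir); end

frameCount = 0;
for k = 1:numel(videoFiles)
    reader = VideoReader(fullfile(videoFiles(k).folder, videoFiles(k).name));
    idx = 0;
    while hasFrame(reader)
        frame = readFrame(reader);            % full-resolution RGB frame
        idx = idx + 1;
        if mod(idx, 10) ~= 0, continue; end   % keep every 10th frame (assumed stride)
        small = imresize(frame, [416 416]);   % resize for training
        frameCount = frameCount + 1;
        imwrite(small, fullfile(outDir, sprintf('img_%05d.png', frameCount)));
    end
end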

All further analysis was conducted in the MATLAB (R2021b) interpretative environment on a generic laptop with a single CPU. This laptop featured an Intel(R) Core(TM) i5-8265U CPU @ 1.60 GHz and 8 GB of RAM.

    4.Choosing the best candidate detector

Following imagery collection and conditioning, the Faster RCNN, YOLO, and SSD models were explored to see which one delivers the best performance in detecting small firearms. Since the previous study [12] used YOLOv2, for the sake of comparison, this study was based on YOLOv2 as well. No doubt, the latest versions of YOLO, for example YOLOv3 [18] or YOLOv4 [19], which became available in MATLAB R2023a, or YOLOv5-YOLOv8 [20-23] available in Python, would work more efficiently (be more precise and faster). A randomly selected subset of 300 images was used for the assessment of the three models.

These object detection models are composed of a feature extraction network (backbone) and subnetworks trained, for example, to separate objects from the background and to classify the detected objects. For the feature extraction network, ResNet-50 was selected among other well-known networks such as VGG-16/19, DarkNet-19/53, and Inception-v3 based on the tradeoff between accuracy and speed. ResNet-50 has 48 convolution layers along with 1 max-pooling layer and 1 average-pooling layer and uses deep residual learning. While this study used the 40-layer version of ResNet-50 to improve speed, it reused the pretrained version of the network trained on the ImageNet database in MATLAB, which is transfer learning with 25.6 million parameters.

The Faster RCNN model adds two more subnetworks: an RPN for generating object proposals and a network for predicting the objects. Both the YOLOv2 and SSD models add a detection subnetwork which consists of a few convolution layers and layers specific to the models, such as yolov2TransformLayer (which transforms the CNN outputs into the object detection form) and yolov2OutputLayer (which defines the anchor box parameters and the loss function).
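A minimal MATLAB sketch of assembling such a YOLOv2 detection network on a pretrained ResNet-50 backbone is given below (assuming the Deep Learning and Computer Vision Toolboxes plus the ResNet-50 support package; the feature layer 'activation_40_relu' and the anchor values are illustrative choices, not necessarily the authors' exact configuration).

% Assemble a YOLOv2 detection network on a pretrained ResNet-50 backbone.
inputSize  = [416 416 3];
classNames = {'Pistol', 'RifleBK', 'RifleBN'};
numClasses = numel(classNames);

% Example anchor boxes in [height width] format (normally estimated with
% K-means clustering on the labeled training boxes, see below)
anchorBoxes = [30 30; 60 20; 20 60; 90 30; 30 90; 60 60; 120 40; 40 120];

% Pretrained ResNet-50 (transfer learning); tap features at an intermediate
% ReLU layer roughly 40 layers into the network
baseNetwork  = resnet50;
featureLayer = 'activation_40_relu';

% yolov2Layers appends the detection subnetwork (convolution layers,
% yolov2TransformLayer, yolov2OutputLayer) after the chosen feature layer
lgraph = yolov2Layers(inputSize, numClasses, anchorBoxes, ...
                      baseNetwork, featureLayer);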

Training the models (in a supervised learning mode) requires ground truth data. This was accomplished by utilizing the MATLAB Image Labeler, resulting in the BBs (locations) with the object names (three different firearms) added to all 300 images. Then, the images were randomly divided into three groups: 70% (210) for training, 15% (45) for validation, and 15% (45) for testing.
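The 70/15/15 random split could look like the sketch below, assuming the labels exported from the Image Labeler are held in a table gTruthTable with one row per image (file name plus the labeled boxes); the variable names are placeholders.

% Randomly split the labeled imagery into training/validation/testing subsets.
rng(0);                                   % reproducible shuffle
numImages = height(gTruthTable);
shuffled  = gTruthTable(randperm(numImages), :);

numTrain = round(0.70 * numImages);       % 70% for training
numVal   = round(0.15 * numImages);       % 15% for validation

trainTbl = shuffled(1:numTrain, :);
valTbl   = shuffled(numTrain+1 : numTrain+numVal, :);
testTbl  = shuffled(numTrain+numVal+1 : end, :);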

Next, to estimate the number of anchor boxes for Faster RCNN and YOLOv2, K-means clustering based estimation was implemented. Based on the plot of the number of anchors versus the mean IoU, 5, 8, and 11 anchors were tested, with mean IoU values of 0.75, 0.82, and 0.87, respectively. Note that utilizing more anchor boxes can increase the IoU values between the anchors and the ground truth boxes, but at the same time it increases the training time and can lead to overfitting.
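A minimal sketch of this estimation step, assuming the labeled boxes are wrapped in a boxLabelDatastore named blds, sweeps the number of anchors and records the mean IoU reported by estimateAnchorBoxes:

% Sweep the number of anchor boxes and record the mean IoU of the K-means
% clustering result; blds is a boxLabelDatastore built from the labeled boxes.
maxAnchors = 15;
meanIoU = zeros(maxAnchors, 1);
for numAnchors = 1:maxAnchors
    [anchors, meanIoU(numAnchors)] = estimateAnchorBoxes(blds, numAnchors);
end

% Plot mean IoU vs. number of anchors to pick a knee point
plot(1:maxAnchors, meanIoU, '-o');
xlabel('Number of anchor boxes');
ylabel('Mean IoU');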

Table 1 shows the results of testing the Faster RCNN, YOLOv2, and SSD detectors based on five runs (five random splits of the data into training, validation, and testing subsets). Specifically, this table shows the mean Average Precision (mAP) computed over all object classes. The AP, the area under the precision-recall (p-r) curve, is computed using the 11-point interpolation [38,39]:

$$\mathrm{AP}=\frac{1}{11}\sum_{r\in\{0,\,0.1,\,\ldots,\,1\}} p_{\mathrm{interp}}(r) \quad (1)$$

where

$$p_{\mathrm{interp}}(r)=\max_{\tilde{r}\ge r} p(\tilde{r}) \quad (2)$$

In Eqs. (1) and (2), p is the precision (true positives/total positive detections) and r is the recall (true positives/total ground truth objects). The detection IoU threshold was set to 0.5. Obviously, different runs (different three-way splits of the imagery) result in different mAP values.

Table 1. Comparison of detectors' mAPs.
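In MATLAB R2021b this evaluation could be performed with evaluateDetectionPrecision, as in the hedged sketch below; detector, imdsTest, and bldsTest are placeholders for the trained model, the test imageDatastore, and the corresponding boxLabelDatastore.

% Run the trained detector on the test set and compute per-class AP and mAP.
detectionResults = detect(detector, imdsTest, 'MiniBatchSize', 8);

% AP per class at an IoU threshold of 0.5, plus the precision-recall curves
[ap, recall, precision] = evaluateDetectionPrecision( ...
    detectionResults, bldsTest, 0.5);

mAP = mean(ap);   % mean over the Pistol, RifleBK, and RifleBN classes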

As seen from this table, in one of the YOLOv2 runs a mAP of 1 was achieved over the three firearms (which is a perfect performance). This case corresponds to the YOLOv2 model being trained with 8 anchors, a mini-batch size of 8, a learning rate of 0.001, and a maximum number of epochs of 10. The training time happened to be about 1 h and 20 min for each run. When the learning rate was increased to 0.01, the mAP dropped to almost 0, and when the number of anchor boxes was increased to 11, the mAP was 0.948 (the AP for Pistol was 1, for RifleBK 0.85, and for RifleBN 1). Other variations of the parameters did not improve the YOLOv2 mAP.
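These hyperparameters translate into a training call along the lines of the sketch below; trainingData is assumed to be a datastore combining the training images with their labeled boxes, and the call mirrors the reported settings rather than reproducing the authors' verbatim script.

% Training configuration mirroring the reported best YOLOv2 run:
% 8 anchors (already in lgraph), mini-batch size 8, learning rate 0.001, 10 epochs.
options = trainingOptions('sgdm', ...
    'InitialLearnRate', 0.001, ...
    'MiniBatchSize', 8, ...
    'MaxEpochs', 10, ...
    'Shuffle', 'every-epoch', ...
    'VerboseFrequency', 50);

% lgraph is the YOLOv2 layer graph built earlier on the ResNet-50 backbone
[detector, info] = trainYOLOv2ObjectDetector(trainingData, lgraph, options);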

The SSD model was trained with a mini-batch size of 8, a learning rate of 0.001, and a maximum number of epochs of 10 as well (the MATLAB ssdLayers function selected 8 anchor boxes too). The best mAP achieved was only 0.486 (the AP for Pistol was 0.357, for RifleBK 0.267, and for RifleBN 0.833). It took about the same 1 h and 20 min to train the model for each run. Varying other model parameters did not improve the mAP either.
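For comparison, the SSD counterpart of that training call could be set up as sketched below, assuming the same datastore and a ResNet-50 base network; ssdLayers and trainSSDObjectDetector are the corresponding MATLAB functions, but the exact options here are assumptions.

% SSD detector on a ResNet-50 base network with the same training budget.
inputSize  = [416 416 3];
classNames = {'Pistol', 'RifleBK', 'RifleBN'};

lgraphSSD = ssdLayers(inputSize, numel(classNames), 'resnet50');

optionsSSD = trainingOptions('sgdm', ...
    'InitialLearnRate', 0.001, ...
    'MiniBatchSize', 8, ...
    'MaxEpochs', 10);

[ssdDetector, infoSSD] = trainSSDObjectDetector(trainingData, lgraphSSD, optionsSSD);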

An attempt was made to train the Faster RCNN with 8 anchors, a mini-batch size of 2 (due to memory limitations), a learning rate of 0.001, and a maximum number of epochs of 10. However, just a single epoch required over 2 h of training, so further exploration of this model was abandoned due to computational inefficiency.

As a result of exploring the different detectors, the YOLOv2 CNN model was chosen as the best candidate for the proposed sUAS/DL-based system for detecting and classifying abandoned small firearms.

    5.YOLOv2 model testing on a full set of data

Based on the preliminary tests reported in Sec. 4, the whole 3140-image dataset was used with the YOLOv2 model. The structure of the YOLOv2 model (with the 40-layer version of ResNet-50 as the backbone) is shown in Fig. 3.

Fig. 3. Structure of a YOLOv2 model.

The images were labeled to obtain ground truth data and then randomly divided into three groups: 70% (2198) for training, 15% (471) for validation, and 15% (471) for testing. Also, in order to improve the performance of the model, data augmentation such as rotation, contrast, and brightness changes was applied to the training data group. As a result, the training dataset was increased fourfold to 8792 images.
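One hedged way to implement such an augmentation step in MATLAB is sketched below; the rotation angles and jitter ranges are illustrative assumptions, and the boxes are transformed together with the image so the ground truth stays aligned.

% Augment one labeled sample: random rotation plus contrast/brightness jitter.
% data is a cell array {image, boxes, labels} as returned by a combined datastore.
function dataOut = augmentSample(data)
    I      = data{1};
    boxes  = data{2};
    labels = data{3};

    % Random in-plane rotation, applied consistently to the image and the boxes
    tform = randomAffine2d('Rotation', [-30 30]);
    rout  = affineOutputView(size(I), tform, 'BoundsStyle', 'CenterOutput');
    I     = imwarp(I, tform, 'OutputView', rout);
    [boxes, valid] = bboxwarp(boxes, tform, rout, 'OverlapThreshold', 0.25);
    labels = labels(valid);

    % Photometric jitter: contrast and brightness
    I = jitterColorHSV(I, 'Contrast', 0.2, 'Brightness', 0.2);

    dataOut = {I, boxes, labels};
end

% Applied to the training datastore, e.g.:
% augmentedTrainingData = transform(trainingData, @augmentSample);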

The dependence of the mean IoU on the number of anchor boxes is shown in Fig. 4. In the following analysis, 7 and 11 anchors were tested, with mean IoU values of 0.78 and 0.81, respectively.

Fig. 4. Mean IoU vs. the number of anchor boxes.

To begin with, the YOLOv2 model was trained with 11 anchors, a mini-batch size of 8, a learning rate of 0.001, and a maximum number of epochs of 5. With these settings, a mAP of 0.971 was achieved. Specifically, the AP for Pistol was 0.988, for RifleBK 0.984, and for RifleBN 0.942 (Fig. 5). The training time was about 11 h and 45 min. The training loss started at 518 for the first iteration, gradually decreased to 2 by the 50th iteration, and then kept decreasing down to almost 0.

Fig. 5. Average precision vs. recall.

Next, some of the parameters were varied to see their net effect. Increasing the number of epochs twofold (to 10) had only a minor effect. When 7 anchors were used, the mAP dropped to 0.845.

In general, the achieved results are very encouraging and prove the overall feasibility of the proposed concept. It was also proven that the trained detector can effectively detect relatively small objects (pistols) as well as partially occluded objects (Fig. 6). Indeed, partially visible objects are more difficult to detect than fully visible objects (which has been reported by other researchers as well). However, data augmentation (an artificial fourfold increase of the training data set) increased the number of images with partially occluded small firearms (which constituted only a small fraction of the 3140-image data set) to the point where the trained detector became effective even for these images.

Fig. 6. Illustration of partially occluded object detection capability: (a), (d) Pistol; (b), (e) Black rifle; (c), (f) Brown rifle.

While the trained YOLOv2 detector showed good performance over all three objects studied in this research, two quick experiments were conducted to test the effect of data augmentation and the performance of the detector on a different background. Since the brown rifle showed the worst AP among all objects, imagery with the brown rifle was used in these experiments. As an example, Fig. 7(a) shows the original image with the brown rifle and Fig. 7(b) the successful detection/classification result produced by the trained detector.

Fig. 7. (a) Test image with a brown rifle; (b) successful detection/classification.

In the first experiment, the brown rifle was cropped from the original image, rotated, and placed in different locations within the image. These ten modified images were then tested using the original trained detector. Eight of those ten were processed correctly, resulting in the brown rifle being detected and correctly identified. The results of processing two of those eight images are illustrated in Figs. 8(a) and 8(b). As seen from Figs. 8(c) and 8(d), the rotated background within the cropped patch apparently disrupted the overall pattern in two images, preventing the rifle from being detected.
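A hedged sketch of how such test images can be synthesized in MATLAB is shown below; the file name, crop rectangle, rotation angle, and paste location are illustrative placeholders rather than the values used in the experiment.

% Synthesize a test image: crop the rifle patch, rotate it, and paste it
% at a new location in the scene.
I = imread('brown_rifle_original.png');      % placeholder file name

cropRect = [850 420 260 90];                 % [x y width height] around the rifle
patch    = imcrop(I, cropRect);
patch    = imrotate(patch, 35, 'bilinear', 'loose');   % rotate the cropped patch

% Paste the rotated patch at a new top-left corner (row, column)
r0 = 300; c0 = 1200;
[ph, pw, ~] = size(patch);
I(r0:r0+ph-1, c0:c0+pw-1, :) = patch;

% Run the trained detector on the modified image
[bboxes, scores, labels] = detect(detector, imresize(I, [416 416]));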

A similar test was then repeated with different background images. This time, out of the ten artificially modified images, only three resulted in successful detections. Two of these three processed images are shown in Figs. 9(a) and 9(b). As seen, even though a rifle was detected in Fig. 9(a), it was incorrectly classified as a black rifle. Figs. 9(c) and 9(d) show two cases of failed detection. These experiments prove that data augmentation does lead to better performance; however, the background may play a major role, and therefore different types of background should be used while collecting imagery for CNN training. Alternatively, if the background changes drastically, the detector needs to be retrained.

Fig. 9. (a), (b) Successful, and (c), (d) unsuccessful detection with a different background.

    6.Conclusions

This paper explored the feasibility of a concept for a MOD system based on a small multirotor sUAS equipped with a high-resolution EO sensor. From previous studies, it was known that object detection with sUAS could face challenges imposed by small object size, changes in object aspect ratio and image scale, motion blur, and occlusion. Hence, the main goal of this study was to see whether the current technological and algorithmic levels can support detection and classification of small firearms left/lost at the battlefield/training area. A total of 3140 images of three small firearms of different shape, size, and color were collected to train an artificial convolutional neural network. A YOLOv2 model (with a ResNet-50 backbone network) was determined to be the most effective one for small firearms detection and classification. Data augmentation was applied to improve the overall performance of the proposed system. The trained network demonstrated very good results, featuring a mean average precision of 0.97 (even higher, 0.98, for the good-contrast objects regardless of their size, and slightly less, 0.94, for a lower-contrast object). Good performance was demonstrated even for the partially occluded objects. The high probability of correct detection/classification proves that such a system can be developed, prototyped, and tested in a realistic operational environment, which will be the direction of follow-up research. Another direction of research is to further explore different ways of augmenting existing (limited) data sets by applying various rendering effects [13] and varying background complexity. Also, it will be interesting to quantify how the shape, size, and orientation of BBs can affect the performance of detectors [14]. Finally, there are other critical research directions for MOD systems with sUAS, such as real-time implementation [40] and utilization of sUAS swarming for mission acceleration [41].

    Declaration of competing interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgements

The authors would like to thank Maj. Seungwan Cho and NPS Base Police LT Supervisor Edward Macias for designing an operational scenario, executing it, and collecting imagery for this study. Maj. Cho has also developed the original algorithms to condition and train the DL model. The authors are also thankful to the Office of Naval Research for supporting this effort through the Consortium for Robotics and Unmanned Systems Education and Research, as well as the Engineer and Scientist Exchange Program of the U.S. Navy International Programs Office for enabling a fruitful collaboration between KNDU and NPS.
