
    Panicle-3D: A low-cost 3D-modeling method for rice panicles based on deep learning, shape from silhouette, and supervoxel clustering

    The Crop Journal, 2022, Issue 5

    Dan Wu, Lejun Yu, Junli Ye, Ruifang Zhi, Lingfeng Duan, Lingbo Liu, Nai Wu, Zedong Geng, Jingbo Fu, Chenglong Huang, Shangbin Chen, Qian Liu b,*, Wanneng Yang *

    a Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, and Key Laboratory of Ministry of Education for Biomedical Photonics, Department of Biomedical Engineering, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China

    b School of Biomedical Engineering, Hainan University, Haikou 570228, Hainan, China

    c National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Huazhong Agricultural University, Wuhan 430070, Hubei, China

    Keywords: Panicle phenotyping; Deep convolutional neural network; 3D reconstruction; Shape from silhouette; Point-cloud segmentation; Ray tracing; Supervoxel clustering

    ABSTRACT Self-occlusions are common in rice canopy images and strongly influence the calculation accuracies of panicle traits. Such interference can be largely eliminated if panicles are phenotyped at the 3D level. Research on 3D panicle phenotyping has been limited. Given that existing 3D modeling techniques do not focus on specified parts of a target object, an efficient method for panicle modeling of large numbers of rice plants is lacking. This paper presents an automatic and nondestructive method for 3D panicle modeling. The proposed method integrates shoot rice reconstruction with shape from silhouette, 2D panicle segmentation with a deep convolutional neural network, and 3D panicle segmentation with ray tracing and supervoxel clustering. A multiview imaging system was built to acquire image sequences of rice canopies with an efficiency of approximately 4 min per rice plant. The execution time of panicle modeling per rice plant using 90 images was approximately 26 min. The outputs of the algorithm for a single rice plant are a shoot rice model, surface shoot rice model, panicle model, and surface panicle model, all represented by lists of spatial coordinates. The efficiency and performance were evaluated and compared with those of the classical structure-from-motion algorithm. The results demonstrated that the proposed method is well qualified to recover the 3D shapes of rice panicles from multiview images and is readily adaptable to rice plants of diverse accessions and growth stages. The proposed algorithm is superior to the structure-from-motion method in terms of texture preservation and computational efficiency. The sample images and an implementation of the algorithm are available online. This automatic, cost-efficient, and nondestructive method of 3D panicle modeling may be applied to high-throughput 3D phenotyping of large rice populations.

    1.Introduction

    Rice (Oryza sativa) is one of the most important food crops in the world, feeding over half of the global population [1,2]. Geneticists and breeders have made great efforts to identify rice accessions with ideal characteristics for crop growth and production [3-5]. The rice panicle, whose characteristics influence grain yield, is the target of many rice phenotyping studies [6-8].

    Image-based techniques have become increasingly important in crop phenotyping. These techniques, generally adopting one or more imaging methods such as visible, hyperspectral, thermal infrared, and tomographic imaging, have greatly advanced the progress of high-throughput phenotyping and have the potential to replace conventional phenotyping methods, which depend mainly on low-efficiency manual manipulation [9,10]. Deep learning methods have shown impressive performance in many areas and have increasing application in phenotyping research, especially for detection and counting tasks. Pound et al. [11] presented a deep-learning approach with a new dataset for localizing and counting wheat spikes and spikelets. Lu et al. [12] solved the in-field counting problem of maize tassels with a local count regression network. Xiong et al. [13] proposed a robust method for field rice panicle segmentation by simple linear iterative clustering and convolutional neural-network classification. In combination with image processing techniques, panicle traits can be efficiently quantified from images. In the method proposed by Duan et al. [14], the panicle numbers of rice plants were determined using multiangle imaging and artificial neural network segmentation. Wu et al. [15] developed an image analysis-based method to quantify the grain numbers of detached panicles. However, the accuracy of nondestructive image-based panicle phenotyping is greatly reduced by the self-occlusions that commonly appear in rice canopy images. Such interference can be largely eliminated if panicles are phenotyped at the three-dimensional (3D) level. However, relatively few studies have considered acquiring panicle traits from 3D rice models, given that an efficient method for modeling large numbers of rice plants is lacking.

    Generally, a more comprehensive understanding of plant features can be obtained from 3D plant models than from single-view or multiple-view plant images, and analysis of links between canopy architecture characteristics and photosynthetic capacity can be performed based on 3D plant models [16]. The generation of a 3D plant model is an essential step for subsequent trait extraction and can be achieved using various techniques, including laser scanning, structured light (SL), time of flight (TOF), and structure from motion (SFM) [17]. The laser-scanning technique is used mostly in the case of miniplots or experimental fields. This approach can obtain 3D point clouds of high resolution and accuracy [17]. Usually, canopy-associated parameters, including plant height, leaf inclination angle, and plant area density, can be extracted automatically or by manual interpretation [18,19]. The structured-light technique is also superior in resolution and accuracy, though the long time required for imaging limits its application to high-throughput phenotyping. Nguyen et al. [20] established a structured light-based 3D reconstruction system to phenotype the plant heights, leaf numbers, leaf sizes, and internode distances of cabbages, cucumbers, and tomatoes. Time-of-flight imaging, despite its low resolution, was successfully applied to corn [21,22], and phenotypic parameters of the corn plant, including leaf length, leaf width, leaf area, stem diameter, stem height, and leaf angle, could be estimated. This technique was also used to measure cotton plant height under field conditions [23]. SFM reconstruction, in contrast to active illumination-based laser scanning, SL, and TOF, is a passive approach that represents the current state-of-the-art technique in multiview stereo vision. Pound et al. [24] presented an automatic SFM-based approach to recover 3D shapes of rice plants in which the outputs were surface-mesh structures consisting of a series of small planar sections. This method was then employed by Burgess et al. [25] to investigate the potential effects of wind-induced perturbation of the plant canopy on light patterning, interception, and photosynthetic productivity. It was combined with ray tracing [26] to characterize the light environment within an intercrop. The reconstruction software packages VisualSFM [27] and PMVS [28], on which Pound et al.'s method [24] was based, accept as input a set of images with no special shooting mode required and have shown high robustness in various cases. They have also been applied to 3D phenotyping of other crops, including corn [29], strawberry [30], and grapevine [31]. Another multiview stereo algorithm for 3D modeling is space carving [32]. This method reconstructs the 3D shape according to the photoconsistency of calibrated images around the scene of interest. Simpler than space carving is the shape-from-silhouette (SFS) algorithm, which requires foreground segmentation for each input image. Both space carving and SFS are good at reconstructing high-curvature and thin structures [33]. Another novel method is modeling by hyperspectral imaging, which was investigated by Liang et al. [34] and Behmann et al. [35].

    Despite the many approaches available for 3D plant modeling, a reconstruction system for rice panicles is lacking. In general, a 3D panicle model for rice can be developed by two methods: cutting panicles from rice plants and reconstructing 3D panicle models from images of the excised panicles, or developing a 3D shoot rice model and then segmenting panicles from the rest of the plant. Examples of the first method are PI-Plat [36] and X-ray-based work [37]. Given that panicles were cut from stems, neither estimation of panicle spatial distributions nor dynamic observation was possible. In addition, the phenotyping process was slowed by manual collection of panicles. For the second method, a 3D shoot rice model can be generated, but there are no algorithms for automatic 3D panicle segmentation. Panicle segmentation of a 3D shoot rice model is more complex than 2D panicle segmentation of rice canopy images. It is difficult to distinguish panicles from the remaining parts of a 3D shoot rice model using color information or geometric features. Although there are some well-built neural networks for 3D classification and segmentation, such as VoxNet [38] and PointNet [39], their input data sizes are quite limited (an occupancy grid of 32 × 32 × 32 for VoxNet and thousands of points for PointNet). Current computing power cannot deal with a shoot rice model that may contain hundreds of thousands of points. No current technology focuses on nondestructive 3D panicle modeling.

    In this paper, we present an automatic and nondestructive 3D modeling method for rice panicles. The SFS algorithm is used to generate the shoot rice model, and then a deep convolutional neural network and supervoxel clustering are used to perform 3D panicle segmentation. In total, 50 rice plants of various genotypes and growth stages were used to test the proposed algorithm, and comparisons with the SFM method were performed. The results show that the proposed method is well qualified to recover the 3D shapes of rice panicles from multiview images and is easily adaptable to rice plants of diverse accessions and growth stages. It is superior to the SFM method in terms of texture preservation and computational efficiency.

    2.Materials and methods

    2.1.Multiview imaging system

    An indoor 3D imaging system named Panicle-3D was developed to acquire multiview rice images. The imaging system (Fig. 1A) comprised mainly a digital single-lens reflex camera (EOS 760D, Canon, Tokyo, Japan), a turntable (MERA200, Red Star Yang Technology, Wuhan, Hubei, China), a group of LED lights (Philips, Amsterdam, Netherlands), a PLC control unit (CP1H, OMRON, Kyoto, Japan), and a computer (Gigabyte Technology, New Taipei City, Taiwan, China). The camera was kept at a fixed position, and the focal length was fixed at 18 mm throughout image acquisition. A rice plant was placed on the turntable rotating at a constant speed of 2° per second, and the camera shot automatically at two-second intervals during the revolution. The acquisition time of each image was recorded with millisecond precision for calibration. It took approximately 4 min to phenotype a single rice plant, including manual handling and image acquisition.

    2.2.Rice materials

    Fifty rice plants of various genotypes and growth stages were tested with the Panicle-3D system. These plants, including 25 rice accessions selected from 529 O. sativa accessions [40] and 25 mutants of ZH11, were grown in plastic pots. Images of the 25 accessions from the 529 O. sativa accessions were taken between the flowering and dough-grain stages. Images of the 25 ZH11 mutants were taken between the dough-grain and mature-grain stages. For each rice plant, 90 side-view images, as shown in Fig. 1B, were acquired. A total of 4500 images were collected to form a dataset for 3D panicle modeling.

    Fig. 1. Multiview imaging system. (A) The multiview imaging system. (B) A rice canopy image in side view.

    2.3.The concepts of four 3D models

    We introduce the four 3D models for later reference. The shoot rice model (SRM) refers to the stuffed point cloud of a rice canopy. The shoot rice model reconstructed by the SFS algorithm does not contain color information. The surface shoot rice model (SSRM) refers to the surface point cloud of a rice canopy. The panicle model (PM) refers to the stuffed point cloud of all panicles in a rice canopy. The panicle model acquired by 3D segmentation of the SFS-reconstructed shoot rice model does not contain color information. The surface panicle model (SPM) refers to the surface point cloud of all panicles in a rice canopy. Both the surface shoot rice model and the surface panicle model contain color information obtained by identifying correspondences between image pixels and spatial points.

    2.4.The pipeline of the 3D panicle modeling algorithm

    The flow diagram of the proposed 3D panicle modeling algorithm is shown in Fig. 2. It includes 2D panicle segmentation, 3D shoot rice reconstruction, and 3D panicle segmentation. The shoot rice model, surface shoot rice model, panicle model, and surface panicle model were generated from multiview rice canopy images. The detailed steps of the algorithm, taking one rice plant as an example, are as follows. (1) The SegNet-Panicle model for 2D panicle segmentation: 60 field rice images [13] of 1971 × 1815 and 4500 × 4000 pixel resolution (Fig. 2A) and the corresponding label images (Fig. 2B) were used to generate 2370 rice images of 360 × 480 resolution (Fig. 2C) and the corresponding label images (Fig. 2D). These images were used to train SegNet [41] to obtain a SegNet-Panicle model (Fig. 2E) for 2D panicle segmentation. (2) Multiview rice canopy images: for a single rice plant, 90 images of 6000 × 4000 resolution (Fig. 2F) were taken automatically from different views in the imaging chamber. All these images were calibrated with a rotation-axis calibration technique following Zhang [42]. (3) Rice canopy silhouette images: all original rice canopy images (Fig. 2F) were segmented using fixed-color thresholding to obtain canopy silhouette images (Fig. 2G) in which each pixel is categorized as either a rice or a background pixel. (4) Panicle-segmented images: all original rice canopy images (Fig. 2F) were segmented using the pretrained SegNet-Panicle model to obtain panicle-segmented images (Fig. 2H) in which each pixel was assigned as either a panicle or a background pixel. (5) Shoot rice model and surface shoot rice model: the shoot rice model (Fig. 2I) was reconstructed by the SFS algorithm using 90 canopy silhouette images, and then the surface shoot rice model (Fig. 2J) was obtained by rendering the surface points of the shoot rice model. (6) Panicle model and surface panicle model: the panicle model (Fig. 2K) was obtained by performing 3D panicle segmentation of the shoot rice model, and then the surface panicle model (Fig. 2L) was obtained as the intersection of the panicle model and the surface shoot rice model.
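The six steps above can be sketched as a small orchestration function. `segment_2d`, `threshold_silhouette`, `reconstruct_sfs`, `render_surface`, and `segment_3d` are hypothetical stand-ins for the components described in the following sections; only the model intersection of step (6) is implemented concretely here.

```python
import numpy as np

def panicle_3d_pipeline(images, calib, segment_2d, threshold_silhouette,
                        reconstruct_sfs, render_surface, segment_3d):
    silhouettes = [threshold_silhouette(im) for im in images]   # step (3)
    panicle_masks = [segment_2d(im) for im in images]           # step (4)
    srm = reconstruct_sfs(silhouettes, calib)                   # step (5), SRM
    ssrm = render_surface(srm, images, calib)                   # step (5), SSRM
    pm = segment_3d(srm, panicle_masks, calib)                  # step (6), PM
    spm = intersect(pm, ssrm)                                   # step (6), SPM
    return srm, ssrm, pm, spm

def intersect(pm, ssrm):
    # SPM = surface points (x, y, z, r, g, b) whose coordinates occur in the PM
    keys = {tuple(p) for p in np.asarray(pm).reshape(-1, 3)}
    return np.array([v for v in np.asarray(ssrm)
                     if tuple(np.asarray(v)[:3]) in keys])
```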

    2.5.Camera calibration

    The SFS algorithm requires calibration parameters corresponding to the rice image sequences. To obtain these parameters, the rotation axis was set as the Z-axis of the world coordinate system. Because the object underwent pure rotation, the origin of the world coordinate system could be an arbitrary point on the rotation axis. A simple technique was developed to determine the orientation of the rotation axis relative to the camera. First, a chessboard pattern with 15 × 10 white and black squares and 14 × 9 inner corners (see Supplementary files) was printed and attached to a Perspex panel. A few images of the chessboard panel in different orientations were taken with the camera from close distances. These close-up shots were used to calculate the intrinsic camera parameters by Zhang's calibration method [42]. The chessboard panel was then placed on the top surface of the turntable to acquire an image sequence over 360° of rotation for extrinsic parameter calibration. The pixel image coordinates of each inner corner of the chessboard pattern were tracked automatically with OpenCV [43]. Given the intrinsic camera parameters, the extrinsic parameters, including the rotation and translation parameters that relate the world coordinate system to the camera coordinate system, were calculated from the correspondences between the spatial and pixel coordinates of the chessboard corners. A different assignment of the world coordinate system determines a different group of spatial coordinates of the corners, which in turn leads to a different group of rotation and translation vectors. Note that the translation vectors corresponding to the extrinsic calibration images theoretically have the same value when the rotation axis is taken as the Z-axis of the world coordinate system; any other assignment of the Z-axis leads to variation among the translation vectors. Accordingly, the extrinsic parameters corresponding to the extrinsic calibration images under the adopted assignment were determined by finding the group of translation vectors with minimum variance. The rotation vectors corresponding to the extrinsic calibration images determined a regression plane from which an initial rotation vector could be selected. The selection of the initial rotation vector could be arbitrary because only relative positions are considered in SFS reconstruction. Once the initial rotation vector and the translation vector were determined, the extrinsic parameters corresponding to the rice image sequences were calculated according to the acquisition time and rotation speed.
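A minimal sketch of the minimum-variance axis selection and the time-based extrinsic update, assuming the per-image translation-vector candidates for each world-frame assignment have already been computed (the helper names below are ours, not the paper's):

```python
import numpy as np

def pick_axis_assignment(tvec_groups):
    """Among candidate world-frame assignments (each a list of per-image
    translation vectors), return the index whose vectors vary least:
    with the rotation axis as the Z-axis they are theoretically equal."""
    variances = [np.var(np.asarray(g), axis=0).sum() for g in tvec_groups]
    return int(np.argmin(variances))

def rotation_about_z(deg):
    # rotation matrix for a turn of `deg` degrees about the world Z-axis
    a = np.deg2rad(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def extrinsics_for_frame(R0, t0, speed_deg_per_s, interval_s, k):
    """Extrinsics of the k-th rice image, derived from an initial pose
    (R0, t0) plus the turntable speed and shooting interval. Only the
    relative poses matter for SFS reconstruction."""
    Rz = rotation_about_z(speed_deg_per_s * interval_s * k)
    return R0 @ Rz, t0
```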

    Fig. 2. The pipeline of the 3D panicle modeling algorithm. (A) Original training images. (B) Original label images. (C) Cropped training images. (D) Cropped label images. (E) The SegNet-Panicle model for 2D panicle segmentation. (F) Multiview rice canopy images. (G) Rice canopy silhouette images. (H) Panicle-segmented images. (I) The shoot rice model reconstructed by the SFS algorithm. (J) The surface shoot rice model generated by texture extrusion. (K) The panicle model generated by performing 3D panicle segmentation on the shoot rice model. (L) The surface panicle model generated by taking the intersection of the surface shoot rice model and the panicle model.

    2.6.2D panicle segmentation

    The aim of 2D panicle segmentation was to acquire panicle-segmented images that provide essential information for 3D panicle segmentation. In a panicle-segmented image, each pixel is categorized as either a panicle pixel or a nonpanicle pixel. Considering that panicle colors are similar to those of other parts of rice plants and that panicles appear in various shapes and poses, it is difficult to segment panicles by color thresholding or by conventional machine-learning algorithms that depend on hand-engineered features. Instead, a well-built deep convolutional neural network (CNN), SegNet [41], was employed to perform robust 2D panicle segmentation. The SegNet architecture is composed of an encoder network, a corresponding decoder network, and a pixelwise classification layer. It takes as input an image of 360 × 480 resolution and generates a prediction image of the same size. The network must be trained with an adequate number of training samples before it can be applied to panicle segmentation. The use of SegNet is similar to that of general deep learning methods, and the detailed implementation steps are described as follows.

    (1) Training set: Sixty rice images (including 50 images of 1971 × 1815 resolution and 10 images of 4500 × 4000 resolution) with corresponding ground-truth labels were selected from the Panicle-Seg dataset [13]. These rice images were acquired in complex field environments, involving diverse challenges for panicle segmentation, such as variation in rice accession, illumination imbalance, and cluttered backgrounds caused by soil and water reflection. Because the size of these images did not match the input image size of SegNet, each image and its ground-truth label were first extended to a larger size by adding a black background and then cut into small patches of 360 × 480 resolution. Each image of 1971 × 1815 resolution was extended to 2160 × 1920 resolution and cut into 24 patches. Each image of 4500 × 4000 resolution was extended to 4680 × 4320 resolution and cut into 117 patches. In total, 2370 patches of 360 × 480 resolution were acquired, and all of these patches were used as the training set.

    (2) Training SegNet: The network was trained using stochastic gradient descent [44] with a fixed learning rate of 0.001 and momentum of 0.9. The model was accepted after 100 epochs through the training set, when the training loss had converged and no further increases in accuracy were observed. This model was named the SegNet-Panicle model.

    (3) Segmentation with the SegNet-Panicle model: All 4500 rice canopy images of 6000 × 4000 resolution for 3D panicle modeling were segmented using the pretrained SegNet-Panicle model. To meet the input image size of SegNet, each rice image was extended to 6120 × 4320 resolution by adding a black background and then cut into 153 patches of 360 × 480 resolution. Each patch was segmented with the pretrained SegNet-Panicle model. The segmentation results of the 153 patches were spliced into a single result image of 6120 × 4320 resolution. The extended black area was then removed, and the image was trimmed to obtain a final image of 6000 × 4000 resolution.
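The pad/tile/splice/trim procedure used in steps (1) and (3) can be sketched as follows; `segment_patch` is a hypothetical stand-in for the SegNet-Panicle forward pass on one 360 × 480 patch.

```python
import numpy as np

TILE = (360, 480)  # SegNet input size (rows, cols)

def pad_to_multiple(img, tile=TILE):
    """Extend the image with a black background so both dimensions are
    multiples of the tile size (e.g. 6000 x 4000 -> 6120 x 4320)."""
    h, w = img.shape[:2]
    H = -(-h // tile[0]) * tile[0]   # ceiling to a multiple of 360
    W = -(-w // tile[1]) * tile[1]   # ceiling to a multiple of 480
    out = np.zeros((H, W) + img.shape[2:], dtype=img.dtype)
    out[:h, :w] = img
    return out

def segment_full_image(img, segment_patch, tile=TILE):
    """Pad, cut into tile-size patches, segment each patch, splice the
    results, and trim back to the original resolution."""
    h, w = img.shape[:2]
    padded = pad_to_multiple(img, tile)
    out = np.zeros(padded.shape[:2], dtype=np.uint8)
    for r in range(0, padded.shape[0], tile[0]):
        for c in range(0, padded.shape[1], tile[1]):
            out[r:r + tile[0], c:c + tile[1]] = \
                segment_patch(padded[r:r + tile[0], c:c + tile[1]])
    return out[:h, :w]   # remove the extended black area
```

With a 6000 × 4000 input, the padded image is 6120 × 4320 and the loop visits 17 × 9 = 153 patches, matching the counts in the text.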

    2.7.3D reconstruction of rice shoot

    The SFS algorithm, also known as visual hull construction, was employed to generate the shoot rice model. The algorithm recovers an object shape by carving away empty regions using a silhouette sequence [45]. The general steps for a rice shoot are as follows: (1) acquire the calibration parameters corresponding to the 90 rice canopy images by the method described in Section 2.5; (2) acquire 90 canopy silhouette images using fixed-color thresholding; and (3) initialize a volume large enough to contain a rice shoot and carve away the regions of the volume that ever projected outside the canopy silhouettes.
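Step (3), the carving itself, can be sketched as follows, assuming each view provides a projection function from world coordinates to pixel coordinates derived from the calibration of Section 2.5 (the names here are illustrative, not the paper's implementation):

```python
import numpy as np

def carve_visual_hull(occupied, voxel_centers, silhouettes, projections):
    """Shape-from-silhouette sketch: keep only voxels whose projection
    falls inside every silhouette. `projections[i]` maps Nx3 world
    coordinates to Nx2 pixel coordinates (col, row) for view i."""
    keep = occupied.copy()
    for sil, project in zip(silhouettes, projections):
        px = np.round(project(voxel_centers)).astype(int)
        inside = ((px[:, 0] >= 0) & (px[:, 0] < sil.shape[1]) &
                  (px[:, 1] >= 0) & (px[:, 1] < sil.shape[0]))
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[inside] = sil[px[inside, 1], px[inside, 0]] > 0
        keep &= hit   # carve away voxels outside this silhouette
    return keep
```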

    The silhouette images for shoot rice reconstruction were the binary segmentation results of the rice canopy. For rice images taken in the Panicle-3D imaging system, in which the color of the scene background was unlikely to appear in rice canopies and the pixel values of the background were significantly lower than those of the rice shoot, the silhouettes were extracted automatically with fixed-color thresholding according to the discriminants given below:

    where r, g, and b represent respectively the gray values of the red, green, and blue channels of pixels in the original rice canopy images. The silhouette was the combination of all pixels that satisfied these inequalities. It should be mentioned that the exposure and brightness were higher in the images of the 25 accessions selected from the 529 O. sativa accessions than in the images of the 25 ZH11 mutants. The threshold value m was set to 80 for the 2250 shoot rice images of the 25 ZH11 mutants and to 150 for the 2250 shoot rice images of the 25 accessions selected from the 529 O. sativa accessions.
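The discriminant inequalities themselves did not survive extraction and are not reproduced here. Purely as an illustration, a fixed thresholding of the same general form (background darker than the shoot, single threshold m) might look like:

```python
import numpy as np

def extract_silhouette(img_bgr, m):
    """Hypothetical stand-in for the paper's fixed-color discriminants
    (which are not reproduced here): keep a pixel when its mean r, g, b
    brightness exceeds the threshold m (80 or 150 in the text)."""
    b, g, r = (img_bgr[:, :, i].astype(int) for i in range(3))
    return (r + g + b) / 3.0 > m
```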

    After the silhouette sequence and corresponding calibration parameters were obtained, the shoot rice model was computed volumetrically. This was done by initializing 1,048,576,000 (1024 × 1024 × 1000) cubic voxels of 1 × 1 × 1 mm³ that represented a cuboid volume of 1024 × 1024 × 1000 mm³, then projecting each voxel onto the silhouette sequence and carving away all voxels that ever projected outside the silhouettes.

    Besides the SFS reconstruction, the surface points of the shoot rice model were rendered by extruding the original RGB rice images along viewing rays. Texture extrusion was implemented with the ray-tracing technique. For each pixel belonging to the rice shoot silhouette, a ray was cast from the viewpoint through the pixel into space, and the intersection of the viewing ray with the shoot rice model was calculated. In the intersection calculations, the viewing ray was represented as a list of occupancy intervals. The voxel corresponding to the pixel was the intersection voxel nearest to the viewpoint. The intersection voxel was drawn with the average of the colors seen across the RGB rice image sequence. The textured surface shoot rice model was obtained by assembling all colored voxels.
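A simplified sketch of the nearest-intersection search, using plain ray marching rather than the occupancy-interval representation described above (step size and grid layout are assumptions); the returned voxel is the one a pixel's color would be accumulated into, with the final color being the mean over all views:

```python
import numpy as np

def first_intersection(origin, direction, occupied, step=0.5, max_t=2000.0):
    """March a viewing ray through the voxel grid and return the integer
    coordinates of the first occupied voxel it enters (the intersection
    nearest the viewpoint), or None if the ray misses the model."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    p = np.asarray(origin, dtype=float)
    t = 0.0
    while t < max_t:
        v = np.floor(p).astype(int)
        if (v >= 0).all() and (v < np.array(occupied.shape)).all() \
                and occupied[tuple(v)]:
            return tuple(v)
        p = p + step * d
        t += step
    return None
```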

    2.8.3D panicle segmentation

    Panicle segmentation at the three-dimensional level is clearly more difficult than in 2D images. The problem lies not merely in the similarity of panicle voxel colors to those of other parts of the rice plant. Promising deep learning techniques targeting 3D objects are also hampered by the shortage of 3D training datasets and by their memory intensiveness. We accordingly developed a solution for 3D panicle segmentation that innovatively integrates 2D pixelwise panicle segmentation with ray tracing and supervoxel clustering.

    Fig. 3. The pipeline of 3D panicle segmentation. (A) 3D shoot rice model. (B) 3D mask points generated by presegmentation according to panicle-segmented images. (C) 3D mask points (local view). (D) Supervoxel clusters of shoot rice generated by VCCS segmentation. (E) Supervoxel clusters of shoot rice (local view). (F) Coarse supervoxel classification results (local view). (G) External panicle supervoxels (local view). (H) Internal panicle supervoxels (local view). (I) The panicle model. (J) The surface panicle model.

    The detailed procedures of 3D panicle segmentation are illustrated in Fig. 3. First, presegmentation was performed to acquire mask points (Fig. 3B). This operation required the 2D panicle-segmented images, as previously mentioned. The presegmentation resembled the rendering of the shoot rice model in its application of the ray-tracing technique. For each pixel belonging to the canopy silhouette, its corresponding voxel on the shoot rice model was determined by intersection calculation. The shoot rice model voxels were scored by introducing a parameter, referred to as the score, that indicated the probability of a voxel's belonging to a panicle. Each voxel received an initial score of zero. The score increased when the voxel was seen as foreground in the panicle-segmented images and decreased when the voxel was seen as background. After scoring was complete, mask points (Fig. 3B) were obtained by removing voxels whose scores were zero or negative. For better observation, a partial view of the mask points is presented in Fig. 3C. Paralleling the presegmentation was the generation of supervoxels by voxel-cloud connectivity segmentation (VCCS) [46]. The VCCS algorithm works directly in 3D space, using voxel relationships to produce oversegmentations. It constructs an adjacency graph for the voxel cloud by searching the voxel k-dimensional tree, then selects seed points to initialize the supervoxels, and finally iteratively assigns voxels to supervoxels using flow-constrained clustering. Generally, three parameters, λ, μ, and ε, which control the influence of color, spatial distance, and geometric similarity, respectively, must be specified when the VCCS algorithm is run. We expected supervoxels to occupy a relatively spherical space, which would be more desirable for further processing. Accordingly, the values of λ and ε were set to zero, meaning that only spatial distance was considered. A sample result of VCCS and a partial view are shown in Fig. 3D and E, respectively. After the shoot rice model was transformed into a set of supervoxels, coarse supervoxel segmentation was performed to classify each supervoxel as either a panicle supervoxel or a nonpanicle supervoxel. If a supervoxel contained one or more mask points, it was classified as a panicle supervoxel; otherwise, it was classified as a nonpanicle supervoxel. The result of coarse supervoxel segmentation is shown in Fig. 3F. Only external panicle supervoxels (Fig. 3G) can be recognized in this way: because the mask points are a set of surface points, an internal panicle supervoxel has no opportunity to contain a mask point. Accordingly, a simple criterion was adopted to detect the internal panicle supervoxels (Fig. 3H) that were not recognized by coarse supervoxel segmentation. If a supervoxel was adjacent to identified panicle supervoxels in three or more directions, it was also classified as a panicle supervoxel. The ultimate panicle model (Fig. 3I) was the combination of all identified external and internal panicle supervoxels. Derived from the shoot rice model, the panicle model did not contain color information. A textured surface panicle model (Fig. 3J) was also created from the intersection of the panicle model and the surface shoot rice model.
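The coarse classification and the internal-supervoxel rule can be sketched as set operations. The dictionary layout of `supervoxels` and `adjacency` is an assumption for illustration, not the PCL data structures used in the actual implementation:

```python
def classify_supervoxels(supervoxels, mask_points, adjacency):
    """Classify supervoxels as panicle or nonpanicle.
    supervoxels: id -> set of (x, y, z) voxel coordinates
    mask_points: iterable of (x, y, z) mask-point coordinates
    adjacency:   id -> ids of neighbouring supervoxels"""
    mask = {tuple(p) for p in mask_points}
    # coarse step: any supervoxel containing a mask point is a panicle supervoxel
    panicle = {sid for sid, voxels in supervoxels.items()
               if any(tuple(v) in mask for v in voxels)}
    # internal rule: adjacent to panicle supervoxels in >= 3 directions
    internal = {sid for sid in supervoxels if sid not in panicle
                and sum(n in panicle for n in adjacency.get(sid, ())) >= 3}
    return panicle | internal
```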

    2.9.Performance evaluation

    To evaluate the performance of the SegNet-Panicle model, 25 images originating from the 25 ZH11 mutants used for 3D panicle modeling were selected to build the test set. For each mutant, the first image of the multiview image sequence was selected. The ground-truth labels of the 25 test images were obtained by manual segmentation using Photoshop software [47]. Automatic segmentation of the test images was performed with the SegNet-Panicle model. Four indicators (precision, recall, IoU, and F-measure) were calculated on the test images to evaluate 2D panicle segmentation accuracy.

    We also investigated how the number of panicle-segmented images used would affect the efficiency and accuracy of 3D panicle segmentation on the shoot rice model, using the 25 ZH11 mutants. When the algorithm was tested on each plant, all 90 images taken from different angles were used to generate the shoot rice model and the surface shoot rice model. The panicle-segmented images were obtained using the pretrained SegNet-Panicle model. Then, different numbers of panicle-segmented images, in turn from 3 to 90, were used to segment panicle points from the shoot rice model. Manual segmentations of the shoot rice models were conducted using CloudCompare [48] software for comparison with the automatic segmentation. The IoU was adopted to evaluate the performance of 3D panicle segmentation. The formula for IoU at the three-dimensional level is

    IoU = TP / (TP + FP + FN)

    where TP refers to points categorized as panicle points both by the algorithm and by manual segmentation, FP refers to points categorized as panicle points by the algorithm but manually classified as nonpanicle points, and FN refers to points manually classified as panicle points but not recognized by the algorithm. IoU ranges from 0 to 1, and a higher value indicates better 3D panicle segmentation.
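The 3D IoU can be computed directly on the two sets of panicle point coordinates:

```python
def iou_3d(auto_points, manual_points):
    """IoU = TP / (TP + FP + FN) over panicle point coordinates, where
    auto_points come from the algorithm and manual_points from manual
    segmentation."""
    a = {tuple(p) for p in auto_points}
    m = {tuple(p) for p in manual_points}
    tp = len(a & m)              # panicle in both
    fp = len(a - m)              # panicle only in the automatic result
    fn = len(m - a)              # panicle only in the manual result
    return tp / (tp + fp + fn) if tp + fp + fn else 1.0
```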

    VisualSFM software (retrieved from https://ccwu.me/vsfm/), which represents the current state of the art in multiview stereo reconstruction, was tested on the dataset for comparison with the proposed method. To apply VisualSFM, the background of each original image was colored black while the rice pixels remained unchanged, because the images were acquired with a static camera, whereas VisualSFM is designed for moving-camera systems. For a single rice plant, 90 background-removed images were used to generate the surface shoot rice model. Noise points on the surface shoot rice model were then filtered out using color thresholding. Finally, manual segmentation of the surface shoot rice model was performed using CloudCompare software to obtain the surface panicle model.

    2.10.Requirements for data processing

    Computations were performed on Ubuntu 18.04 64-bit and Windows 10 64-bit dual operating systems with an NVIDIA GeForce RTX 2080Ti GPU. The training of SegNet and the 2D panicle segmentation were performed on the Ubuntu system; all other processes were performed on the Windows system. The project for 3D shoot rice reconstruction and 3D panicle segmentation was developed in C++ with the OpenCV and PCL libraries [49]. OpenMP [50] and CUDA [51] were adopted to speed up the calculations. The source code of the algorithm and its implementation are provided in the supplementary files.

    3.Results and discussion

    3.1.Performance of 2D panicle segmentation

    A comparison of the original rice images, ground-truth labels, and segmentation results using the SegNet-Panicle model is shown in Fig. 4. The automatic segmentation results were highly consistent with the ground-truth labels. The precision, recall, IoU, and F-measure on the 25 test images were 0.84, 0.93, 0.79, and 0.88, respectively, indicating that the SegNet-Panicle model could provide reliable 2D panicle segmentation results.
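    These four metrics are mutually consistent: with precision P = TP/(TP+FP) and recall R = TP/(TP+FN), the F-measure is 2PR/(P+R) and the IoU can be rewritten as 1/(1/P + 1/R - 1). A quick check of the reported values (a sketch for verification, not the paper's evaluation code):

```python
def f_measure(p, r):
    # Harmonic mean of precision and recall
    return 2 * p * r / (p + r)

def iou_from_pr(p, r):
    # IoU = TP/(TP+FP+FN); dividing numerator and denominator by TP
    # gives 1 / (1/P + 1/R - 1)
    return 1.0 / (1.0 / p + 1.0 / r - 1.0)

p, r = 0.84, 0.93  # reported precision and recall
print(round(f_measure(p, r), 2), round(iou_from_pr(p, r), 2))  # 0.88 0.79
```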

    Fig. 4. Results of 2D panicle segmentation using the SegNet-Panicle model. The original rice images are shown in the first column, the manual segmentation results using Photoshop software in the second column, and the results using the SegNet-Panicle model in the last column. (A), (B), and (C) are three mutants of ZH11.

    3.2.Performance of 3D shoot rice reconstruction and 3D panicle segmentation

    The panicle modeling algorithm was tested on 50 rice plants. The results for four samples are shown in Fig. 5. For each sample, one original rice image and the corresponding 2D panicle segmentation result are shown in the first and third columns, respectively. The surface shoot rice model (SSRM) and surface panicle model (SPM) of each sample were loaded in CloudCompare software. For comparison, the SSRM and SPM were rotated to the angle closest to the shooting angle of the selected original rice image, and screenshots of the SSRM and SPM from this view (side view) are displayed in the second and fourth columns, respectively. Screenshots of the SPMs from the top view are shown in the last column. For better observation, only panicle regions are shown in the third column. Comparing the third column with the fourth, the SPMs were generally consistent with the panicles in the images, showing that the proposed algorithm was well qualified to recover the 3D shapes of rice panicles from multiview images. For samples A and B, images were taken at the flowering stage, when panicles grew upright and appeared green. For samples C and D, images were taken at the mature stage, when panicles were bent by the weight of the grains and appeared yellow. These results show that the algorithm was readily adaptable to rice plants of different accessions and growth stages. In addition, because the reconstructed models focus on the panicle level, enlarged local views are provided in Fig. 6 for detailed comparison. The original rice images, surface shoot rice models, panicle-segmented images, and surface panicle models are shown from the first to the last column, respectively. The texture of the reconstructed panicles can be easily observed. Videos of the reconstructed SRM, SSRM, PM, and SPM are provided in the supplementary files.

    Fig.5.The results of 3D shoot rice reconstruction and 3D panicle segmentation.The original rice images and the corresponding 2D panicle segmentation results are shown in the first and third columns,respectively.The surface shoot rice models are shown in the second column.The surface panicle models from the side view and top view are shown in the fourth and last columns,respectively.(A,B) Rice samples from 529 O.sativa accessions at the flowering stage.(C,D) Mutants of ZH11 at the mature stage.

    Fig.6.The results of 3D shoot rice reconstruction and 3D panicle segmentation in local view at panicle level.The original rice images and the corresponding 2D panicle segmentation results are shown in the first and third columns,respectively.The surface shoot rice models and the surface panicle models are shown in the second and last columns,respectively.(A,B) Rice samples from 529 O.sativa accessions at the flowering stage.(C,D) Mutants of ZH11 at the mature stage.

    Fig.7.The accuracies and efficiencies of 3D panicle segmentation using different numbers of panicle-segmented images.

    Fig.8.The results of 3D panicle segmentation using different numbers of panicle-segmented images.n is the number of panicle-segmented images used in 3D panicle segmentation.TP,true positive;FN,false negative;FP,false positive.

    3.3.Processing efficiency

    The training of SegNet and 2D panicle segmentation were conducted on the Ubuntu system. Training SegNet for 100 epochs on the training set took approximately 8.6 h. The execution time for 2D panicle segmentation per rice sample using the SegNet-Panicle model was approximately 11.6 min. All other processes were conducted on the Windows system. For a single rice plant with 90 images, reconstructing the shoot rice model took approximately 23.5 s and reconstructing the surface shoot rice model took 8.8 min. The computational time for generating the panicle model and surface panicle model of a single rice plant varied from 0.8 to 5.1 min, depending on the number of panicle-segmented images used in the 3D panicle segmentation procedure. The whole process of panicle modeling for a rice plant was completed within 26 min.

    3.4.Efficiency and accuracy of 3D panicle segmentation

    The mean IoU and mean time cost when different numbers of panicle-segmented images were used in 3D panicle segmentation of the 25 ZH11 mutants are shown in Fig. 7, with the minimum and maximum values of the IoU and time cost marked. As more panicle-segmented images were used, the performance of 3D panicle segmentation improved and the execution time increased. The mean IoU reached its highest value, 0.95, when 90 panicle-segmented images were used. Detailed efficiency and accuracy data are given in Table 1. The segmentation results obtained using 3, 15, 45, and 90 panicle-segmented images are shown in Fig. 8, with TP, FN, and FP marked in different colors; smaller areas of FN and FP indicate higher accuracy. Although a marked improvement is visible between Fig. 8A and Fig. 8B, the results in Fig. 8B-D show little further improvement. This finding is consistent with Fig. 7: while the execution time increases roughly proportionally with the number of images, the accuracy shows diminishing gains. Thus, although there is a tradeoff between the accuracy and efficiency of 3D panicle segmentation, the execution time of this process can be halved with almost no sacrifice of accuracy.

    Table 1 Efficiencies and accuracies of 3D panicle segmentation.

    3.5.Comparison with structure from motion

    The surface shoot rice models (SSRM-SFM) and surface panicle models (SPM-SFM) generated by VisualSFM are shown in the first and third columns of Fig. 9, respectively. The surface shoot rice models (SSRM-SFS) and surface panicle models (SPM-SFS) generated by the proposed method are shown in the second and last columns, respectively. Comparing the last two columns, the surface panicle models generated by the SFM method and the proposed method are similar overall. In detail, as shown in Fig. 10, with the original rice images as references, many points missing in the SPM-SFM were well recovered in the SPM-SFS. This finding indicates that the proposed method gave better performance with respect to shape and texture preservation for rice panicles. In terms of efficiency, for a single rice plant, the processing time for generating the surface shoot rice model from 90 images with VisualSFM varied from 22 to 35 min, and tens of additional minutes were needed for manual segmentation of the SSRM-SFM in CloudCompare to obtain the final SPM-SFM. The execution time of all procedures of the proposed algorithm using 90 images for a single rice plant was no longer than 26 min. The proposed method was thus superior in processing efficiency as well.

    Fig.9.Comparison of the reconstructed results of the proposed method with the SFM method.The surface shoot rice model and the surface panicle model by the SFM method are shown in the first and third columns,respectively,and the surface shoot rice model and the surface panicle model by the proposed method are shown in the second and last columns,respectively.The numbers are the point sizes of the models.(A)and(B)Rice samples from 529 O.sativa accessions at the flowering stage.(C)and(D)Mutants of ZH11 at the mature stage.

    Fig.10.Comparison of the reconstructed results of the proposed method with SFM method in local view at panicle level.The original rice images are shown in the first column.The surface shoot rice model and the surface panicle model by the proposed method are shown in the second and third columns,respectively.The surface shoot rice model and the surface panicle model by SFM method are shown in the fourth and the last column,respectively.(A)and(B)Rice samples from 529 O.sativa accessions at the flowering stage.(C) and (D) Mutants of ZH11 at the mature stage.

    3.6.Advantages and limitations

    To our knowledge, automatic panicle segmentation of a 3D shoot rice model has not been described previously. In the proposed method, this task was addressed by combining a deep convolutional neural network with supervoxel clustering. The method is superior, in terms of texture preservation and computational efficiency, to an SFM-based method that requires subsequent manual processing.

    The image acquisition is nondestructive, given that the panicle modeling algorithm requires multiview images of whole rice canopies rather than excised panicles. The developed low-cost (~US$2000) multiview imaging system could readily support fully automatic high-throughput data acquisition if equipped with electromechanical controllers and an automated conveyor.

    The proposed algorithm was developed specifically for indoor imaging systems, requiring a fixed camera and pure rotation of the rice plant at constant speed. For this reason, it cannot be applied in the field.

    Validity is not guaranteed when the rice canopy is extremely dense. This limitation is common to visible image-based reconstructions and is unlikely to be eliminated unless other techniques, such as computed tomography or magnetic resonance imaging, are adopted. In addition, automatic acquisition of panicle traits such as panicle number, single-panicle length, and kernel number from panicle models or surface panicle models remains a challenge to be addressed in future work.

    4.Conclusions

    This paper described an automatic and nondestructive method for 3D modeling of rice panicles that combines shape from silhouette with a deep convolutional neural network and supervoxel clustering. The outputs of the algorithm for each rice plant are four 3D point clouds: the shoot rice model, surface shoot rice model, panicle model, and surface panicle model. Image acquisition for a single rice plant took ~4 min, and image processing took ~26 min when 90 images were used. The tradeoff between accuracy and efficiency in 3D panicle segmentation was assessed. Compared with the widely used VisualSFM software, the proposed algorithm is superior with respect to texture preservation and processing efficiency. In the future, we expect this method to be applied in high-throughput 3D phenotyping of large rice populations.

    Data availability

    Supplementary files for this article, which include source code, Panicle-3D technical documentation, evaluations of SFS reconstruction accuracy, and four videos of the reconstructed SRM, SSRM, PM, and SPM, can be retrieved from http://plantphenomics.hzau.edu.cn/usercrop/Rice/download.

    Declaration of competing interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    CRediT authorship contribution statement

    Dan Wu: Data curation, Formal analysis, Methodology, Writing - original draft. Lejun Yu: Data curation, Formal analysis, Methodology, Writing - original draft. Junli Ye: Writing - review & editing. Ruifang Zhai: Software, Writing - review & editing. Lingfeng Duan: Software, Writing - review & editing. Lingbo Liu: Writing - review & editing. Nai Wu: Resources. Zedong Geng: Writing - review & editing. Jingbo Fu: Writing - review & editing. Chenglong Huang: Software, Writing - review & editing. Shangbin Chen: Software, Writing - review & editing. Qian Liu: Conceptualization, Funding acquisition, Project administration, Writing - review & editing. Wanneng Yang: Conceptualization, Funding acquisition, Project administration, Writing - review & editing.

    Acknowledgments

    This work was supported by the National Natural Science Foundation of China (U21A20205), Key Projects of Natural Science Foundation of Hubei Province (2021CFA059), Fundamental Research Funds for the Central Universities (2021ZKPY006), and cooperative funding between Huazhong Agricultural University and Shenzhen Institute of Agricultural Genomics (SZYJY2021005, SZYJY2021007).
