
    Spatio-Temporal Context-Guided Algorithm for Lossless Point Cloud Geometry Compression

2024-01-12 14:48:28
ZTE Communications, 2023, No. 4

    ZHANG Huiran , DONG Zhen, WANG Mingsheng

(1. Guangzhou Urban Planning and Design Survey Research Institute, Guangzhou 510060, China; 2. Guangdong Enterprise Key Laboratory for Urban Sensing, Monitoring and Early Warning, Guangzhou 510060, China; 3. State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China)

Abstract: Point cloud compression is critical to deploying 3D representations of the physical world, such as 3D immersive telepresence, autonomous driving, and cultural heritage preservation. However, point cloud data are distributed irregularly and discontinuously in the spatial and temporal domains, where redundant unoccupied voxels and weak correlations in 3D space make efficient compression a challenging problem. In this paper, we propose a spatio-temporal context-guided algorithm for lossless point cloud geometry compression. The proposed scheme starts by dividing the point cloud into sliced layers of unit thickness along the longest axis. It then introduces a prediction method applicable where both intra-frame and inter-frame point clouds are available, determining correspondences between adjacent layers and estimating the shortest path with the travelling salesman algorithm. Finally, the small prediction residuals are efficiently compressed with optimal context-guided and adaptive fast-mode arithmetic coding techniques. Experiments prove that the proposed method can effectively achieve low-bit-rate lossless compression of point cloud geometric information and is suitable for 3D point cloud compression across various types of scenes.

    Keywords: point cloud geometry compression; single-frame point clouds; multi-frame point clouds; predictive coding; arithmetic coding

    1 Introduction

With the improvement of multi-platform and multi-resolution acquisition equipment, light detection and ranging (LiDAR) technology can efficiently capture 3D objects or scenes as massive point sets. Compared with traditional multimedia data, point cloud data contain more physical measurement information, representing objects from free viewpoints and even scenes with complex topological structures. This results in strong interactive and immersive effects that provide users with a vivid and realistic visualization experience. Additionally, point cloud data have stronger anti-noise ability and parallel processing capability, which has gained attention from industry and academia, notably for application domains such as cultural heritage preservation, 3D immersive telepresence and automatic driving[1-2].

However, point cloud data usually contain millions to billions of points in spatial domains, bringing burdens and challenges to storage capacity and network transmission bandwidth. For instance, a common dynamic point cloud utilized for entertainment usually comprises roughly one million points per frame, which, at 30 frames per second, amounts to a total bandwidth of 3.6 Gbit/s if left uncompressed[3]. Therefore, research on high-efficiency geometry compression algorithms for point clouds has important theoretical and practical value.

Prior work tackled this problem by directly building grids or on-demand down-sampling, due to limitations in computing power and point cloud collection efficiency, which resulted in low spatio-temporal compression performance and loss of geometric attribute information. Recent studies have mainly been based on computer graphics and digital signal processing techniques to implement block operations on point cloud data[4-5], or combined video coding technology[6-7] for optimization. In 2017, the Moving Picture Experts Group (MPEG) solicited proposals for point cloud compression and conducted subsequent discussions on how to compress this type of data. As more approaches to point cloud compression became available, two point cloud compression frameworks, TMC13 and TMC2, were issued in 2018. The research above shows that remarkable progress has been made in point cloud compression technology. However, prior work has mostly dealt with the spatial and temporal correlations of point clouds separately, so these correlations have not yet been exploited to their full potential.

To address the aforementioned challenges, we introduce a spatio-temporal context-guided method for lossless point cloud geometry compression. We first divide point clouds into unit layers along the main axis. We then design a prediction mode based on a travelling salesman algorithm that exploits spatio-temporal correlation. Finally, the residuals are written into bitstreams with a context-adaptive arithmetic encoder. Our main contributions are as follows.

1) We design a prediction mode applicable to both intra-frame and inter-frame point clouds, via the extended travelling salesman problem (TSP). By leveraging both the spatial and temporal redundancies of point clouds, the geometry prediction makes better use of spatial correlation and therefore supports various types of scenarios.

    2) We present an adaptive arithmetic encoder with fast context update, which selects the optimal 3D context from the context dictionary, and suppresses the increase of entropy estimation. As a result, it enhances the probability calculation efficiency of entropy encoders and yields significant compression results.

The rest of this paper is structured as follows. Section 2 gives an outline of related work on point cloud geometry compression. Section 3 first presents an overview of the proposed framework and then describes the proposed method in detail. Experimental results and conclusions are presented in Sections 4 and 5, respectively.

    2 Related Work

Many point cloud geometry compression algorithms have been proposed in the literature. CAO et al.[8] and GRAZIOSI et al.[9] investigate and summarize current point cloud compression methods, focusing on spatial-dimension compression technology and MPEG standardization frameworks, respectively. We provide a brief review of recent developments in two categories: single-frame point cloud compression and multi-frame point cloud compression.

    2.1 Single-Frame Point Cloud Compression

Single-frame point clouds are widely used in engineering surveys, cultural heritage preservation, geographic information systems, and other scenarios. The octree is a widely used data structure to efficiently represent point clouds, which can be compressed by recording information through the occupied nodes. HUANG et al.[10] propose an octree-based method that recursively subdivides the point cloud into nodes whose positions are represented by the geometric center of each unit. FAN et al.[11] further improve this method by introducing cluster analysis to generate a level of detail (LOD) hierarchy and encoding it in breadth-first order. However, these methods can cause distortion due to the approximation of the original model during the iterative process.

To address these limitations, scholars have introduced geometric structure features, such as the triangular surface model[12], the planar surface model[13-14], and the clustering algorithm[15], for inter-layer prediction and residual calculation. RENTE et al.[16] propose a progressive layered compression concept that first uses the octree structure for coarse-grained encoding and then uses the graph Fourier transform for compression and reconstruction of point cloud details. In 2019, MPEG released the geometry-based point cloud compression (G-PCC) technology for both static and dynamic point clouds, which is implemented step by step through coordinate transformation, voxelization, geometric structure analysis, and arithmetic coding[17].

Since certain octants within an octree may be sparsely populated or even empty, some methods have been proposed to optimize the tree structure by pruning sub-nodes and thereby conserving memory. For example, DRICOT et al.[18] propose an inferred direct coding mode (IDCM) for terminating the octree partition based on predefined sparsity conditions, which involves pruning the octree structure to save bits allocated to child nodes. ZHANG et al.[19] suggest subdividing the point cloud space along principal components and adapting the partition method among the binary tree, quadtree and octree. Compared with traditional octree partitioning, the hybrid models mentioned above can effectively reduce the number of bits used to represent sparse points, therefore saving nodes that need to be encoded. However, complex hyperparameter conditions and mode determination are required in the process, making it difficult to meet the requirements of self-adaptation and low complexity.

With deep neural networks making significant strides in image and video compression, researchers have explored ways to further reduce bit rates by leveraging hyperprior guidance and the redundancy of latent-space representations during the compression process. QUACH et al.[20] and HUANG et al.[21] propose methods that incorporate these concepts. GUARDA et al. combine convolutional neural networks and autoencoders to exploit redundancy between adjacent points and enhance coding adaptability in Ref. [22]. Recently, WANG et al.[23] proposed a point cloud compression method based on the variational auto-encoder, which improves the compression ratio by learning the hyperprior and reducing the memory consumption of arithmetic coding. The aforementioned methods use neural network encoders to capture high-order latent vectors of the point cloud, whose entropy model fits the marginal probabilities better, thus reducing the memory consumption of arithmetic coding.

Generally speaking, research on single-frame point cloud geometry compression is relatively mature, but two challenges remain. Spatial correlation has not been utilized effectively, and most methods do not exploit the correlation of point cloud data thoroughly and efficiently. Besides, the calculation of the probability model for entropy coding is long and arduous due to the massive number of contexts.

    2.2 Multi-Frame Point Cloud Compression

Multi-frame point clouds are commonly used in scenarios such as real-time 3D immersive telepresence, interactive VR, 3D free-viewpoint broadcasting and automatic driving. Unlike single-frame point cloud compression, multi-frame point cloud compression prioritizes the use of temporal correlation, as well as motion estimation and compensation. The existing methods for multi-frame point cloud compression can be divided into two categories: 2D projection and 3D decorrelation.

The field of image and video compression is extensive and has been well explored over the past few decades. Various algorithms convert point clouds into images and then compress them directly with FFmpeg, H.265 encoders, etc. AINALA et al.[24] introduce a planar-projection approximate encoding mode that encodes both geometry and color attributes through raster scanning on the plane. However, this method changes the target shape during the mapping process, making accurate inter-prediction difficult. Therefore, SCHWARZ et al.[25] and SEVOM et al.[26] suggest rotated planar projection, cube projection, and patch-based projection methods to convert point clouds into 2D videos, respectively. By placing similar projections in adjacent frames at the same location in adjacent images, the video compressor can fully remove temporal correlation. In Ref. [27], inter-geometry prediction is conducted via the TSP, which computes the one-to-one correspondence of adjacent intra-blocks by searching for the block with the closest average value. MPEG released the video-based point cloud compression (V-PCC) technology for dynamic point clouds in 2019[28]. This framework divides the input point cloud into small blocks with similar normal vectors and continuous space, then projects them onto planar surfaces through cubes to record the occupancy image and auxiliary information. All resulting images are compressed by mature video codecs, and all bitstreams are assembled into a single output file. Other attempts have been made to improve the effectiveness of these methods. COSTA et al.[29] exploit several new patch packing strategies, optimizing the packing algorithm, data packing links, related sorting, and positioning indicators. Furthermore, PARK et al.[30] design a data-adaptive packing method that adaptively groups adjacent frames into the same group according to structural similarity without affecting the performance of the V-PCC stream.

Due to the inevitable information loss caused by point cloud projection, scholars have developed effective techniques to compress point cloud sequences of consecutive frames using motion compensation in 3D space. KAMMERL et al.[31] propose an octree-based geometric encoding method, which achieves high compression efficiency by computing exclusive OR (XOR) differences between adjacent frames. This method has not only been adopted in the popular Point Cloud Library (PCL)[32] but also widely used for further algorithm research. Other inter-frame approaches convert the 3D motion estimation problem into a feature matching problem[33] or use reconstructed geometric information[34] to predict motion vectors and accurately identify the corresponding relationship between adjacent frames. A recent surge of studies[35-36] has shown that learned video compression offers better rate-distortion performance than traditional methods, providing a valuable reference for point cloud compression. ZHAO et al.[37] introduce a bi-directional inter-frame prediction network that performs inter-frame prediction and makes effective use of relevant information in the spatial and temporal dimensions. KAYA et al.[38] design a new paradigm for encoding geometric features of dense point cloud sequences, optimizing a CNN that estimates the coding distribution to realize lossless compression of dense point clouds.

Despite progress in multi-frame point cloud compression coding technology, two problems persist. The existing multi-frame point cloud compression approaches mainly rely on video coding and motion compensation, which inevitably involve information loss or distortion caused by mapping and block-edge discontinuity. In addition, predictive coding exhibits low applicability due to the inconsistency of inter-frame point cloud geometry: the apparent offset of points between frames and unavoidable noise increase the difficulty of effectively using predictive coding in inter-frame compression.

    3 Proposed Spatio-Temporal Context-Guided Lossless Geometry Point Cloud Compression Method

    3.1 Overview

The overall pipeline of our spatio-temporal context-guided algorithm is shown in Fig. 1. First, we preprocess the input point cloud by applying voxelization and scale transformation. Then, the point cloud is divided into unit-thickness sliced layers along the main axis. Next, we design a prediction mode that makes full use of the temporal and spatial correlation information within both intra-frame and inter-frame data. We calculate the shortest path through the points of reference layers (R-layers) via travelling salesman algorithms, and the results of the R-layers are then used to spatio-temporally predict and encode the rest of the point clouds, namely predicted layers (P-layers). Finally, the improved entropy coding algorithms are adopted to obtain the compressed binary file.

    ▲Figure 1. Proposed framework for spatio-temporal context-guided lossless point cloud geometry compression

3.2 Slice-Based Hierarchical Division

    1) Pre-processing

The pre-processing module includes voxelization and scale transformation, for better indexing of each point. In voxelization, we divide the space into cubes of size N, which corresponds to the actual resolution of the point cloud. Each point is assigned a unique voxel based on its position. A voxel is recorded as 1 if it is occupied, and 0 otherwise.

Scale transformation can reduce sparsity for better compression by zooming out the point cloud so that the distance between points gets smaller. We aggregate the point cloud coordinates (x, y, z) using a scaling factor s, i.e.,

(x', y', z') = (x/s, y/s, z/s).    (1)
To ensure lossless compression, the scaling factor s must not cause geometry loss, and it needs to be recorded in the header file.
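As a minimal sketch of this pre-processing step (function names and the floor-rounding convention are our assumptions, not the authors' implementation), voxelization and scale transformation can be combined as:

```python
def voxelize(points, s=1):
    """Map raw (x, y, z) coordinates to integer voxel coordinates,
    scaled down by factor s, and return the set of occupied voxels."""
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // s), int(y // s), int(z // s)))
    return occupied


def is_lossless(points, s):
    """The scaling factor must not merge distinct points, otherwise
    geometry is lost; s itself is recorded in the header file."""
    return len(voxelize(points, s)) == len(set(points))
```

For example, `is_lossless([(0, 0, 0), (1, 0, 0)], 2)` is False because the two points would collapse into the same voxel, so s = 2 could not be used for that cloud.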

    2) Sliced-layer division

This module divides the 3D point cloud along one of its axes, creating several unit-sliced layers, carrying occupancy information only, that can be further compressed using a predictive encoder and an arithmetic coder. The function is defined as:

S(a, b) = Slice(G, axis),    (2)

where G refers to the input point cloud coordinate matrix, axis refers to the selected dimension, and S(a, b) is the 2D slice extracted for each layer.

In general, we conduct experiments on a large number of test sequences, and the results suggest that division along the longest axis of point cloud spatial variation yields the lowest bit rate, i.e.,

axis = arg max{range(x), range(y), range(z)}.    (3)
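The longest-axis rule and the unit-thickness division can be sketched as follows (assuming integer voxel coordinates; helper names are ours):

```python
def longest_axis(voxels):
    """Index (0, 1 or 2) of the axis with the largest spatial extent."""
    spans = [max(v[a] for v in voxels) - min(v[a] for v in voxels)
             for a in range(3)]
    return spans.index(max(spans))


def slice_layers(voxels, axis):
    """Group voxels into unit-thickness 2D slices S(a, b) along axis."""
    layers = {}
    for v in voxels:
        key = v[axis]                                  # layer index
        a, b = (v[i] for i in range(3) if i != axis)   # remaining 2D coords
        layers.setdefault(key, set()).add((a, b))
    return layers
```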

    3) Minimum bounding box extraction

In most cases, non-occupied voxels are unavoidable and greatly outnumber occupied voxels. As a result, processing and encoding both types of voxels simultaneously increases the computational complexity and slows the encoding speed of the compression algorithm. Therefore, we adopt the oriented bounding box (OBB)[39] to calculate the minimum bounding box for each sliced layer, ensuring that the directions of the bounding boxes are consistent across layers. In subsequent processing, only the voxels located within the restricted rectangle are compressed.
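A simplified sketch of the bounding-box restriction: the paper computes an oriented bounding box per layer, but the axis-aligned special case below already illustrates how non-occupied voxels outside the box are excluded from encoding:

```python
def bounding_box(slice_points):
    """((a_min, b_min), (a_max, b_max)) of one 2D slice; an axis-aligned
    simplification of the OBB used in the paper."""
    amin = min(a for a, _ in slice_points)
    amax = max(a for a, _ in slice_points)
    bmin = min(b for _, b in slice_points)
    bmax = max(b for _, b in slice_points)
    return (amin, bmin), (amax, bmax)


def restrict(candidates, box):
    """Keep only the voxels inside the box for subsequent compression."""
    (amin, bmin), (amax, bmax) = box
    return {(a, b) for a, b in candidates
            if amin <= a <= amax and bmin <= b <= bmax}
```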

    3.3 Spatial Context-Guided Predictive Encoding

    The goal of spatial context-guided predictive encoding is to encode all the points layer by layer. Inspired by the TSP, we design a prediction mode to explore the potential orders and correlation within each sliced layer. This module consists of partition and the shortest path calculation.

At first, we partition the sliced layers and determine the R-layer and P-layers for each group. We traverse the point cloud layer by layer along the selected axis. When the length of the main direction of the minimum bounding box between adjacent layers differs by no more than a specified unit length, the layers are recorded as the same group. Otherwise, the current layer is used as the reference layer of the next group, and each layer in the following group uses the same shortest path. In this paper, we set the first layer of each group as the R-layer and the others as P-layers. We also carry out experiments on a large number of test sequences and recommend setting this specified parameter to 3 units to obtain the best compression.

Afterwards, we conduct the shortest path calculation on the R-layers and record the residuals of the P-layers. According to the distribution of the point cloud in each sliced layer, we optimally arrange the irregular points of each layer based on the TSP algorithm. This allows us to efficiently compute the shortest path through the point cloud of the R-layers and then record the residuals of the corresponding prediction layers. Algorithm 1 shows the pseudo-code of the prediction procedure.

Firstly, we define the distance calculation rule between points in the local area and initialize the path state with a randomly selected point pc1. In each iteration, whenever a new point pci is added, the permutation is dynamically updated through the state transition equation path(P-i, i) until all added points are recorded in P in the order of the shortest path. This process is refined gradually based on the minimal distance criterion. After all iterations are completed, we calculate min dist(pci, pcj) in each of the R-layers and return the shortest path record table of the point clouds in each of the R-layers. For further compression, we calculate the deviation of the P-layers from the shortest path of the R-layer within the same group and record it as predictive residuals. Finally, the shortest path of the R-layer and the residuals of each group are output and passed to the entropy encoder to compress the prediction residuals further.
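The procedure can be illustrated with a greedy nearest-neighbour stand-in for the TSP ordering (the paper's exact solver and distance rule are not reproduced here; the names and the squared-distance metric are our assumptions):

```python
def shortest_path_order(points):
    """Order R-layer points by repeatedly visiting the nearest
    unvisited point (greedy TSP heuristic)."""
    remaining = list(points)
    path = [remaining.pop(0)]        # the paper starts from a random point
    while remaining:
        last = path[-1]
        nxt = min(remaining,
                  key=lambda p: (p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2)
        remaining.remove(nxt)
        path.append(nxt)
    return path


def prediction_residuals(p_layer, r_path):
    """Deviation of each P-layer point from its closest R-path point;
    these small residuals are what the entropy coder compresses."""
    res = []
    for p in p_layer:
        q = min(r_path,
                key=lambda r: (r[0] - p[0]) ** 2 + (r[1] - p[1]) ** 2)
        res.append((p[0] - q[0], p[1] - q[1]))
    return res
```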

    3.4 Spatio-Temporal Context-Guided Predictive Encoding

The spatial context-guided prediction mode encodes single-frame point clouds individually. However, applying spatial encoding to each single-frame point cloud separately can miss the opportunities exposed by the temporal correlations across multi-frame point clouds. Considering that multi-frame point clouds share large overlapping regions, we focus on using temporal redundancy to further enhance compression efficiency. Hence, based on the proposed spatial context-guided prediction mode, we can compress multi-frame point clouds by identifying correspondences between adjacent layers across frames.

    1) Inter-frame partition

    To enhance the effectiveness of inter-frame prediction mode, it is crucial to ensure adequate similarity between adjacent layers of frames. As a result, we need to partition the groups between adjacent frames and determine the R-layers and P-layers across frames. By estimating the shortest path of the P-layers based on the shortest path of the R-layers, we record the prediction residuals and further compress them through the entropy encoder. Algorithm 2 shows the pseudocode of the inter-frame partition.

Algorithm 2. Inter-frame partition
1: Input: point cloud sliced layers S1, S2, …, Sn with principal axis lengths hi of Si; inter-frame point cloud sliced layers SS1, SS2, …, SSn with principal axis lengths hhi of SSi
2: Output: correspondence and partition of the adjacent layers' relationship
3: Initialization: set S1 and SS1 as corresponding layers
4: for each new Si and SSi do
5:     coarse partition: set Si and SSi as corresponding layers
6:     if |hi - hhi| ≤ 3 then
7:         fine partition: set Si and SSi as corresponding layers
8:     else if
9:         compare |hi - hhi|, |h(i-1) - hhi| and |h(i+1) - hhi|, and pick the minimum
10:        set the slice layer corresponding to the minimum and SSi as corresponding layers
11:    else
12:        set as a single layer
13: end for

Based on sliced-layer orientation alignment, we perform coarse partition and fine partition successively. For coarse partition, we sort the sliced layers of each frame by the coordinates along the division axis, from small to large. As a result, each sliced layer of each frame has a unique layer number, allowing us to coarsely pair the sliced layers with the same number between adjacent frames. Afterward, we compute the difference between the principal axis lengths of the minimum bounding boxes of adjacent layers with the same number. If this value is less than or equal to a specified length unit, the layers are partitioned into the same group. Otherwise, we compare the difference against the layers immediately before and after the corresponding number in the adjacent frame, and the layer with the smallest difference is partitioned into the same group. This ensures a fine partition of the adjacency relationship between frames.
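Under our reading of Algorithm 2, the coarse/fine partition reduces to matching each layer of the new frame against the same-numbered layer and its two neighbours in the previous frame, with the 3-unit threshold recommended above (function and variable names are ours):

```python
def interframe_partition(h, hh, unit=3):
    """h / hh: principal-axis lengths of the minimum bounding boxes of
    the sorted sliced layers in two adjacent frames. Returns, for each
    layer i of the second frame, the matched layer index or None
    (single layer)."""
    match = {}
    for i in range(len(hh)):
        if i >= len(h):
            match[i] = None                      # no coarse counterpart
            continue
        # fine partition: same number first, then the layers before/after
        candidates = [j for j in (i, i - 1, i + 1) if 0 <= j < len(h)]
        best = min(candidates, key=lambda j: abs(h[j] - hh[i]))
        match[i] = best if abs(h[best] - hh[i]) <= unit else None
    return match
```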

    2) Spatio-temporal context-guided prediction mode

Based on the partition, we apply and expand the prediction mode mentioned in Section 3.3. We incorporate inter-frame context in the process, meaning that the first layer of each group, which serves as the R-layer, may not necessarily yield the best prediction result. To fully explore the potential correlation between adjacent layers, we need to identify the optimal prediction mode.

    Firstly, we calculate the prediction residuals for each sliced-layer in the current group when used as the R-layer. By comparing the prediction residuals in all cases, we select the R-layer with the smallest absolute residual value as the best prediction mode. For R-layer shortest path calculation, we use the travelling salesman algorithm to compute the shortest path of the R-layers under the best prediction mode. Moreover, we calculate the prediction residuals for each group under their respective best prediction modes. We also record the occupancy length and R-layer information of each group for further compression in subsequent processing.

    In the follow-up operation, we use arithmetic coding based on the best context selection for the above information to complete the entire process of the multi-frame point cloud geometry compression algorithm.

    3.5 Arithmetic Coding Based on Context Dictionary

The massive amount of context in a point cloud significantly burdens the overall compression scheme in terms of arithmetic coding computational complexity. We improve the arithmetic coding with the following two modules: 1) we set up a context dictionary, and select and update the global optimal value according to the entropy estimate; 2) we adopt adaptive encoders to efficiently calculate the upper and lower bounds of probabilities.

    1) Context dictionary construction

    We construct a context dictionary that represents a triple queue, consisting of coordinates of the point cloud at each sliced-layer and the integer representation of its corresponding non-empty context. Thus, we associate the voxels contained in the point cloud with the minimum bounding box of each layer with its non-empty context. To illustrate the construction of the triple queue array of the context dictionary clearly, we give an intuitive explanation in Fig. 2.

For the two shaded squares in Fig. 2, only the context map positions pc1 and pc2 are considered. The context contributions along the x-axis and the y-axis are recorded in the two queues QX and QY, respectively. Thus the context dictionary consists of QX and QY. Queue elements with the same coordinates are integrated into a triplet, the context integer representation of which is computed as the sum of the context contributions of the merged triplet.

    Therefore, the context of each voxel can be computed as the sum of the independent contributions of occupied voxels in its context dictionary. This structure helps determine whether a voxel should be added to the context dictionary without tedious matrix lookups, resulting in a significant reduction in computational complexity and runtime.
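The triple-queue idea can be sketched as a hash map from voxel coordinates to summed context contributions (the concrete contribution weights for the x and y queues below are illustrative assumptions):

```python
from collections import defaultdict


def build_context_dictionary(occupied):
    """Each occupied voxel contributes to the context integer of its
    x- and y-neighbours; contributions at the same coordinate merge by
    summation, mirroring the triplet integration in Fig. 2."""
    ctx = defaultdict(int)
    for a, b in occupied:
        ctx[(a + 1, b)] += 1     # contribution along the x-axis (queue QX)
        ctx[(a, b + 1)] += 2     # contribution along the y-axis (queue QY)
    return dict(ctx)


def context_of(ctx, voxel):
    """O(1) dictionary lookup instead of a matrix scan."""
    return ctx.get(voxel, 0)
```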

    2) Probability calculation

    To calculate entropy probability, both the length of the sequence and the context of its constituent voxels must be taken into account. In this module, we design an adaptive encoder that first estimates the upper and lower cumulative probability bounds for each group from the context dictionary, and then encodes it subsequently.

First of all, we construct a binary tree based on the Markov chain model. By traversing the occupancy of voxels, we assign values of 1 and 0 to occupied and empty voxels, respectively, and calculate the probability based on the tree structure. Starting from the root node, when a voxel is occupied, we record the left child node as 1. Otherwise, we mark the right child node as 0 and proceed to the next step of judgment and division. The calculation formula for the run probability of occupied voxels can be found in Eq. (4).

where l is the length of the run ending at the occupied voxel.

    ▲Figure 2. Construction of the context dictionary

For run lengths less than or equal to n, there may be 2^n tree nodes representing the occupancy states of voxels. Therefore, the probability of any occupied voxel is represented by the joint probability of independently traversing all states starting at the root and ending at any childless node of the tree.

    Based on Eq. (4), to perform arithmetic encoding on the occupancy of the voxel sequence, we need the cumulative upper and lower probabilities of the sequence, as shown in Eq. (5).

    Employing this approach, we can utilize the adaptive properties of arithmetic coding to adjust the probability estimation value of each symbol based on the optimized probability estimation model and the frequency of each symbol in the current symbol sequence. This allows us to calculate the upper and lower bounds of the cumulative probability of occupied voxels and complete the encoding process.
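A toy version of this adaptive scheme (floating-point intervals for brevity, so only short sequences round-trip; a production coder would use integer renormalization):

```python
def encode(bits):
    """Adaptive binary arithmetic coding of a voxel-occupancy sequence:
    symbol counts give the probability estimate, and the cumulative
    lower/upper bounds narrow the interval at each step."""
    low, high = 0.0, 1.0
    c0, c1 = 1, 1                        # adaptive counts (Laplace smoothing)
    for b in bits:
        p0 = c0 / (c0 + c1)              # current estimate of P(symbol = 0)
        mid = low + (high - low) * p0    # cumulative probability bound
        if b == 0:
            high = mid; c0 += 1
        else:
            low = mid; c1 += 1
    return (low + high) / 2              # any value in the final interval


def decode(code, n):
    """Mirror the encoder's probability updates to recover n symbols."""
    low, high = 0.0, 1.0
    c0, c1 = 1, 1
    out = []
    for _ in range(n):
        p0 = c0 / (c0 + c1)
        mid = low + (high - low) * p0
        if code < mid:
            out.append(0); high = mid; c0 += 1
        else:
            out.append(1); low = mid; c1 += 1
    return out
```

Because the decoder repeats exactly the same probability updates as the encoder, the transmitted value alone suffices to reconstruct the sequence.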

    4 Experiment

    4.1 Implementation Details

1) Dataset. To verify the performance of our proposed method, extensive experiments were conducted over 16 point cloud datasets that can be downloaded from Ref. [40], as shown in Fig. 3, in which Figs. 3(a)-3(l) are portraits with dense points, and Figs. 3(m)-3(p) are architecture with sparse points. Figs. 3(a)-3(h) are voxelized upper-body point cloud data sequences at two spatial resolutions obtained from Microsoft. Figs. 3(i)-3(l) are chosen from the 8i voxelized full-body point cloud data sequences. The remaining large-scale sparse point clouds in Figs. 3(m)-3(p) are static facade and architecture datasets.

2) Evaluation metrics. The performance of the proposed method is evaluated in terms of bits per point (BPP), the number of bits occupied by the encoded coordinate information per point. The lower the value, the better the performance:

BPP = Size_dig / k,

where Size_dig represents the number of bits occupied by the coordinate information of the point cloud data, and k refers to the number of points in the original point cloud.
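Written out under the stated definition (encoded size in bits, k original points), the metric is simply:

```python
def bits_per_point(size_dig, k):
    """BPP = Size_dig / k; lower is better."""
    return size_dig / k
```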

3) Benchmarks. We mainly compare our method with the following baseline algorithms: PCL-PCC, the octree-based compression in PCL; G-PCC (the MPEG intra-coder test model) and interEM (the MPEG inter-coder test model), which target single-frame and multi-frame point cloud compression respectively; and Silhouette 3D (S3D)[41] and Silhouette 4D (S4D)[42], which target single-frame and multi-frame point cloud compression, respectively. For PCL, we use the octree point cloud compression approach in PCL-v1.8.1 for geometry compression only, setting the octree resolution parameters from the point precision and voxel resolution. For G-PCC (TM13-v11.0), we choose the lossless-geometry, lossless-attributes condition in octree-predictive mode, leaving other parameters at their defaults. For interEM (tmc3v3.0), we use the experimental results under lossless-geometry and lossless-attributes conditions as a comparison[43]. For S3D and S4D, we follow the default conditions and parameters.

▲Figure 3. Point cloud sequences used in experiments: (a) Andrew_vox09, (b) Andrew_vox10, (c) David_vox09, (d) David_vox10, (e) Ricardo_vox09, (f) Ricardo_vox10, (g) Sarah_vox09, (h) Sarah_vox10, (i) Longdress_vox10, (j) Loot_vox10, (k) Redandblack_vox10, (l) Soldier_vox10, (m) Facade_00009_vox12, (n) Facade_00015_vox14, (o) Arco_Valentino_Dense_vox12, and (p) Palazzo_Carignano_Dense_vox14

4) Hardware. The proposed algorithm is implemented in Matlab and C++ using some functions of PCL-v1.8.1. All experiments were run on a laptop with an Intel Core i7-8750 CPU @ 2.20 GHz and 8 GB of memory.

    4.2 Results of Single-Frame Point Cloud Compression

    1) Compression results of portraits of dense point cloud data sequences

Table 1 shows the performance of our spatial context-guided lossless point cloud geometry compression algorithm compared with the PCL-PCC, G-PCC and S3D methods on portraits of dense point cloud data sequences.

    It can be seen from Table 1 that for all point clouds of the same sequences, the proposed method achieves the lowest BPP among the compared methods. Our algorithm offers average gains from -1.56% to -0.02% against S3D, and shows a more obvious advantage against G-PCC, with gains ranging from -10.62% to -1.45%. Against PCL-PCC, the proposed algorithm shows a nearly doubled gain on all sequences, ranging from -154.43% to -85.39%.

    2) Compression results of large-scale sparse point cloud data

    Because S3D cannot handle this case, we only compare our spatial context-guided lossless point cloud geometry compression algorithm with the PCL-PCC and G-PCC methods on large-scale sparse point cloud data.

    Again, our algorithm achieves considerable performance gains over G-PCC and PCL-PCC, as shown in Table 1. The results show average BPP gains ranging from -8.84% to -4.35% compared with G-PCC. Against PCL-PCC, our proposed algorithm shows more obvious advantages, with gains ranging from -34.69% to -23.94%.

    3) Summary

    To provide a more comprehensible comparison of the single-frame point cloud compression results, Table 2 presents the average results of our spatial context-guided compression method and other state-of-the-art benchmark methods. Compared with S3D, our proposed method shows average gains ranging from -3.43% to -0.58%. As for G-PCC and PCL-PCC, the average gains reach at least -3.43% and -95.03%, respectively.

    Experimental analysis reveals that our spatial context-guided compression method exceeds the current S3D, G-PCC and PCL-PCC methods by a significant margin. It can thus satisfy the lossless compression requirements of point cloud geometry for various scene types, e.g., dense or sparse distributions, and its effectiveness remains consistent across them.

    4.3 Results of Multi-frame Point Cloud Compression

    We evaluate our proposed spatio-temporal context-guided point cloud geometry compression algorithm against existing compression algorithms, namely S4D, PCL-PCC, G-PCC and interEM. Only portrait dense point cloud data sequences are used in this experiment. The results are illustrated in Table 3. As we can see, after optimizations of the prediction mode and the arithmetic encoder, the proposed algorithm shows superiority on all test sequences. Specifically, compared with interEM and G-PCC, the proposed algorithm shows significant gains ranging from -51.94% to -17.13% and from -46.62% to -5.7%, respectively. Compared with S4D, the proposed algorithm shows a robust improvement ranging from -12.18% to -0.33%. As for PCL-PCC, our proposed algorithm nearly halves the bit rate on all test sequences.

    ▼Table 1. BPP comparisons of our spatial context-guided compression algorithm and the baseline methods

    ▼Table 2. BPP comparison with state-of-the-art algorithms on single-frame point cloud data

    Furthermore, we summarize the compression results and gains of the proposed method on the portrait dense point cloud data sequences in Table 4. On average, it delivers gains between -11.5% and -2.59% compared with the spatial context-guided point cloud geometry compression algorithm proposed above. Moreover, it shows a superior average gain of -19% compared with G-PCC, and achieves an average coding gain of -24.55% compared with interEM. Additionally, compared with S3D and S4D, it gains more than -6.11% and -3.64% on average, respectively.

    The overall experimental analysis shows that the spatio-temporal context-guided point cloud compression method can make full use of both the spatial and temporal correlations of adjacent layers within intra-frames and across inter-frames. We also improve the global context selection and the probability model of the arithmetic encoder to obtain a lower bit rate. The proposed method surpasses the performance of state-of-the-art algorithms and thus meets the requirements of lossless point cloud geometry compression in multimedia application scenarios such as dynamic portraits.
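The context-conditioned probability modeling mentioned above can be illustrated with a minimal adaptive frequency model, as used conceptually by context-based arithmetic coders. This is a simplified sketch: the binary alphabet, the count-based update rule, and all names are assumptions, not the paper's exact coder.

```python
from collections import defaultdict

# Minimal context-conditioned adaptive probability model: each context
# keeps its own symbol counts, so frequently seen symbols in a given
# context receive higher probability (and hence shorter codes).
class ContextModel:
    def __init__(self):
        # Per-context counts for symbols 0 and 1, initialized to 1
        # (Laplace smoothing so no symbol ever has zero probability).
        self.counts = defaultdict(lambda: [1, 1])

    def prob(self, ctx, symbol):
        c0, c1 = self.counts[ctx]
        return (c1 if symbol else c0) / (c0 + c1)

    def update(self, ctx, symbol):
        # Adaptive update: adjust the model after coding each symbol.
        self.counts[ctx][1 if symbol else 0] += 1

model = ContextModel()
model.update("ctx_a", 1)
model.update("ctx_a", 1)
print(model.prob("ctx_a", 1))  # 0.75
```

An arithmetic coder driven by such a model assigns about -log2(p) bits to each symbol, which is why sharper context-conditioned probabilities translate directly into a lower bit rate.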

    ▼Table 3. Bit per point comparisons of our spatio-temporal context-guided compression algorithm and the baseline methods

    ▼Table 4. Bit per point comparison with state-of-the-art algorithms on multi-frame point cloud data

    4.4 Ablation Study

    We perform ablation studies on predictive encoding over the 8i voxelized full-body point cloud data sequences to demonstrate the effectiveness of the partition. It can be seen from Table 5 that the improvement shows a stable gain of -70% on multi-frame point cloud compression and -60% on single-frame point cloud compression against non-partition predictive coding.

    Next, we perform an ablation experiment on arithmetic coding to demonstrate the effectiveness of the context dictionary. As shown in Table 6, our method achieves a robust improvement of -33% on multi-frame point cloud compression and of -41% on single-frame point cloud compression against arithmetic coding without the context dictionary.

    ▼Table 5. Ablation study on predictive encoding

    ▼Table 6. Ablation study on arithmetic coding

    4.5 Time Consumption

    We test the time consumption to evaluate the algorithm complexity and compare the proposed methods with the others. The complexity is analyzed for encoders and decoders independently, as listed in Table 7. As we can see, G-PCC, interEM and PCL-PCC achieve an encoding time of less than 10 s and a decoding time of less than 5 s on portrait dense point cloud data, and they also perform well on large-scale sparse point cloud data. Our proposed algorithms take around 60 s to encode and 15 s to decode the portrait sequences, and even longer on the facade and architecture point cloud data. There is thus a trade-off between bit rate and compression speed. Nevertheless, compared with S3D and S4D, which take hundreds of seconds to encode, our method still shows superiority in running time.

    In summary, the time consumption of our proposed methods is moderate among all the compared algorithms, but further improvement is still needed.

    5 Conclusions

    In this paper, we propose a spatio-temporal context-guided method for lossless point cloud geometry compression. We take sliced point cloud layers of unit thickness as the input unit and adopt a geometry predictive coding mode based on the travelling salesman algorithm, which applies to both intra-frame and inter-frame prediction. Moreover, we make full use of global context information and an adaptive arithmetic encoder based on fast context updates to achieve lossless compression and decompression of point clouds. Experimental results demonstrate the effectiveness of our methods and their superiority over previous studies. For future work, we plan to further study the overall complexity of the algorithm, reducing it to achieve high compression speed together with a low bit rate. A low-bit-rate method with real-time/low-delay support is highly desired in various types of scenes.
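The slicing and path-ordering idea recapped above can be sketched as follows. This is a hedged illustration only: it uses a greedy nearest-neighbor heuristic in place of a full travelling-salesman solver, and the function names and toy data are assumptions, not the paper's implementation.

```python
import math

def slice_layers(points):
    # Partition integer-coordinate points into unit-thickness layers
    # along the axis with the largest extent (the "longest axis").
    axis = max(range(3), key=lambda a: max(p[a] for p in points)
                                       - min(p[a] for p in points))
    layers = {}
    for p in points:
        layers.setdefault(int(p[axis]), []).append(p)
    return [layers[k] for k in sorted(layers)]

def greedy_tour(layer):
    # Nearest-neighbor ordering within a layer: successive points stay
    # close, so predicting each point from its predecessor leaves only
    # small residuals for the entropy coder.
    remaining = list(layer)
    tour = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(tour[-1], p))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

pts = [(0, 0, 0), (2, 0, 5), (0, 1, 0), (1, 0, 0), (1, 1, 5)]
for layer in slice_layers(pts):
    print(greedy_tour(layer))
```

A production coder would replace the greedy tour with a stronger TSP approximation and predict each point (and its inter-frame counterpart) from the ordered neighbors, but the residual-minimizing intent is the same.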

    ▼Table 7. Time consumption comparison with state-of-the-art algorithms in encoding and decoding
