
    Inversion of Oceanic Parameters Represented by CTD Utilizing Seismic Multi-Attributes Based on Convolutional Neural Network

Journal of Ocean University of China, 2020, No. 6

AN Zhenfang 1), 2), ZHANG Jin 1), 2), *, and XING Lei 1), 2)

In recent years, seismic data have been widely used in seismic oceanography for the inversion of oceanic parameters represented by conductivity temperature depth (CTD). Using this technique, researchers can identify the water structure with high horizontal resolution, which compensates for the deficiencies of CTD data. However, conventional inversion methods are model-driven, such as constrained sparse spike inversion (CSSI) and full waveform inversion (FWI), and typically require prior deterministic mapping operators. In this paper, we propose a novel inversion method based on a convolutional neural network (CNN), which is purely data-driven. To solve the problem of multiple solutions, we use stepwise regression to select the optimal attributes and their combination and take two-dimensional images of the selected attributes as input data. To prevent vanishing gradients, we use the rectified linear unit (ReLU) function as the activation function of the hidden layer. Moreover, the Adam and mini-batch algorithms are combined to improve stability and efficiency. The inversion results of field data indicate that the proposed method is a robust tool for accurately predicting oceanic parameters.

    oceanic parameter inversion; seismic multi-attributes; convolutional neural network

    1 Introduction

Oceanic parameters, including temperature, salinity, density, and velocity, can be obtained directly by conductivity temperature depth (CTD) instruments or indirectly by inversion utilizing seismic data. Although the vertical resolution of CTD data is higher than that of seismic data, its horizontal resolution is far lower. Joint CTD-seismic inversion combines the advantages of both to obtain oceanic parameters with high resolution.

However, traditional inversion methods, such as constrained sparse spike inversion (CSSI) and full waveform inversion (FWI), typically assume the existence of prior deterministic mapping operators between the geophysical responses and the geophysical parameters, such as a convolution operator or a wave equation operator. For some oceanic parameters, however, such as temperature and salinity, it is difficult to establish mapping relationships between them and the seismic responses by mathematical modeling.

A recent trend in many scientific fields has been to solve inverse problems using data-driven methods and a revived use of deep neural networks (DNNs). According to the universal approximation theorem (Hornik et al., 1990), a DNN can theoretically approximate any continuous function when there are enough neurons in the hidden layer. Machine learning based on a DNN is usually referred to as deep learning. The convolutional neural network (CNN) is a DNN that has two special characteristics, local connection and shared weights, which improve computational efficiency by reducing the number of weights. Owing to their significant advances in image processing and speech recognition, CNNs have attracted widespread attention and have been successfully applied in the fields of agriculture (Kamilaris and Prenafeta-Boldú, 2018; Qiu et al., 2018; Teimouri et al., 2018), medicine (Liu et al., 2017; Acharya et al., 2018; Wachinger et al., 2018), and transportation (Li and Hu, 2018; Li et al., 2018; Wang et al., 2018a).

Here, we introduce CNNs into marine geophysics to solve the inverse problem described above. Using deep learning, a CNN can automatically search for and gradually approximate the inverse mapping operators from the seismic responses to the oceanic parameters, thus eliminating the need for prior deterministic mapping operators. In other words, the CNN approach is purely data-driven.

In solid-earth geophysics, CNNs are mainly applied to classification problems, such as fault interpretation (Guo et al., 2018; Ma et al., 2018; Wu et al., 2018a), first-break picking (Duan et al., 2018; Hollander et al., 2018; Yuan et al., 2018), seismic facies identification (Dramsch and Lüthje, 2018; Zhao, 2018), and seismic trace editing (Shen et al., 2018). Inversion belongs to the category of regression techniques. Generally, seismic inversion based on a CNN takes seismic records as input data and stratigraphic parameters as output data. In one study, generated normal-incidence synthetic seismograms served as input data, and the acoustic impedance served as output data (Das et al., 2018). In another study, synthetic two-dimensional multi-shot seismic waves were encoded into a feature vector, then the feature vector was decoded into a two-dimensional velocity model (Wu et al., 2018b). In a third study, a modified fully convolutional network was tuned to map pre-stacked multi-shot seismic traces to velocity models (Wang et al., 2018b).

    The inversion methods mentioned above use one- or two-dimensional seismic records as input data and the corresponding subsurface models as output data. To obtain a large number of samples, many subsurface models are built to generate synthetic seismic records. These methods have made some progress, but continue to have a number of disadvantages. First, they require prior deterministic forward operators to synthesize the seismic records. Second, accurate inversion results cannot be obtained based on a small number of samples. Third, the problem of multiple solutions becomes more prominent as only seismic records are used as input data.

The samples for our study were obtained from actual data, including both CTD and seismic data acquired from the East China Sea. We took the CTD curves as output data and the seismic traces near the CTDs as input data. If only seismic records are used as input data, the problem of multiple solutions inevitably arises; to reduce the multiplicity of solutions, we therefore used stepwise regression to select only the optimal attributes and their combination and took the two-dimensional images of the selected attributes as input data. Because of the frequency difference between the CTD and seismic data, we assumed that one sampling point on the CTD curve corresponded to multiple adjacent sampling points on the seismic trace near the CTD.

    2 Theory of Convolutional Neural Networks

CNNs (Venkatesan and Li, 2018) consist of an input layer, hidden layers, and an output layer. The hidden layers between the input and output layers generally consist of convolution layers, pooling layers, and a fully connected layer. Each hidden layer incorporates several feature maps, and each feature map contains one or more neurons. Fig.1 shows the architecture of the CNN designed for this study, the model parameters of which are listed in Table 1.

The CNN workflow can be divided into forward and backward propagation. The model output is obtained after an original image is successively processed by convolution, pooling, and weighted summation during forward propagation. Then, the error between the model output and the desired output propagates backward from the output layer to the first hidden layer. Meanwhile, the convolution kernels, weights, and biases are updated during backward propagation. These updates continue until the error tolerance is reached.
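
As an illustration of this forward/backward cycle, the following minimal sketch trains a single fully connected sigmoid layer on synthetic data; the array shapes (eight samples of 15 features) and the plain gradient-descent update are hypothetical stand-ins for the paper's full CNN, which also contains convolution and pooling layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for (flattened attribute image, normalized CTD value) pairs.
X = rng.random((8, 15))
D = rng.random((8, 1))

W = 0.1 * rng.standard_normal((15, 1))   # weights of a single dense layer
b = np.zeros((1,))
alpha = 0.01                             # learning rate

for epoch in range(1000):
    # Forward propagation: weighted sums followed by the activation function.
    v = X @ W + b
    y = 1.0 / (1.0 + np.exp(-v))
    # Error between model output and desired output.
    e = y - D
    # Backward propagation: delta = error * derivative of the activation,
    # then update the weights and bias by gradient descent.
    delta = e * y * (1.0 - y)
    W -= alpha * X.T @ delta / len(X)
    b -= alpha * delta.mean(axis=0)
```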

    Fig.1 Architecture of the convolutional neural network, wherein the squares represent maps of neurons. The number on the left of @ denotes the number of maps, and the two numbers on the right of @ denote the number of rows and columns in the map, respectively.

    Table 1 Model parameters of the convolutional neural network

    2.1 Forward Propagation

For this process, let the current layer be layer l, and the layer in front of the current layer be layer l−1.

The convolution layer can be obtained as follows:

v^(l) = conv2(y^(l−1), rot90(W^(l)), 'valid') + b^(l),

y^(l) = f(v^(l)),

where y^(l−1) represents the original image of the input layer or the output of the pooling layer; W^(l) and b^(l) are the convolution kernel and the bias of the convolution layer, respectively; v^(l) and y^(l) are the input and output of the convolution layer, respectively; rot90(·) performs the 180-degree counterclockwise rotation of W^(l); conv2(·) performs the two-dimensional convolution of y^(l−1) and W^(l), where 'valid' returns only those parts of the convolution that are computed without zero-padded edges; and f(·) denotes the activation function.
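
A minimal single-channel sketch of this 'valid' convolution step, assuming the conv2/rot90 notation follows the usual MATLAB convention (in which convolving with a 180-degree-rotated kernel reduces to a sliding-window correlation with the kernel itself); the kernel W, bias b, and activation f below are hypothetical.

```python
import numpy as np

def conv_layer_forward(y_prev, W, b, f):
    """'valid' 2-D convolution of the previous layer's output with kernel W,
    plus bias b, followed by the activation function f."""
    kh, kw = W.shape
    rows, cols = y_prev.shape
    v = np.zeros((rows - kh + 1, cols - kw + 1))
    for i in range(v.shape[0]):
        for j in range(v.shape[1]):
            # Sliding-window correlation with W (equivalent to convolution
            # with the 180-degree-rotated kernel).
            v[i, j] = np.sum(y_prev[i:i + kh, j:j + kw] * W) + b
    return f(v)

# Example: a 5x3 attribute image and a 2x2 kernel give a 4x2 feature map.
feature_map = conv_layer_forward(np.random.rand(5, 3), np.random.rand(2, 2),
                                 0.1, lambda v: np.maximum(0.0, v))
```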

The pooling layer can be obtained by the following:

v^(l) = down(y^(l−1)),

y^(l) = v^(l),

where y^(l−1) represents the output of the first convolution layer; v^(l) and y^(l) are the input and output of the pooling layer, respectively; and down(·) performs the down-sampling operation of y^(l−1).

The fully connected layer and the output layer can be obtained as follows:

v^(l) = W^(l) * y^(l−1) + b^(l),

y^(l) = f(v^(l)),

where y^(l−1) represents the output of the second convolution layer or the fully connected layer; W^(l) and b^(l) are the weight and bias of the fully connected layer or the output layer, respectively; v^(l) and y^(l) are the input and output of the fully connected layer or the output layer, respectively; and the symbol * denotes multiplication between the two matrices. In the setting y = y^(L), L is the total number of layers.

    To prevent vanishing gradients, we use a rectified linear unit (ReLU) function (Nair and Hinton, 2010) as the activation function of the hidden layer. Because the desired output is normalized during data preprocessing, we use the sigmoid function as the activation function of the output layer.
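
For reference, the two activation functions mentioned above can be written as follows (a minimal sketch):

```python
import numpy as np

def relu(v):
    """Rectified linear unit used in the hidden layers; its gradient is 1 for
    v > 0, which helps prevent vanishing gradients."""
    return np.maximum(0.0, v)

def sigmoid(v):
    """Logistic sigmoid used in the output layer; maps values into (0, 1),
    matching the normalized desired outputs."""
    return 1.0 / (1.0 + np.exp(-v))
```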

    2.2 Backward Propagation

Learning rules are driven by the cost function, which is defined as follows:

J = (1/2) Σ_{k=1}^{N} (d_k − y_k)²,

where y_k and d_k are the model output and the desired output, respectively, N is the number of data points, and J is the cost function.

According to the gradient descent method, the convolution kernels, weights, and biases are updated as follows:

θ ← θ − α ∂J/∂θ,

where α is the learning rate and θ denotes the convolution kernel, the weight, or the bias.

The partial derivatives of J with respect to W^(l) and b^(l) of the output layer and the fully connected layer can be calculated as follows:

∂J/∂W^(l) = δ^(l) (y^(l−1))^T,

∂J/∂b^(l) = δ^(l),

where the superscript T represents the transpose of y^(l−1) and δ^(l) denotes the delta of the output layer or the fully connected layer. The delta is defined as the element-by-element multiplication of the error and the derivative of the weighted sum:

δ^(l) = e^(l) .* f′(v^(l)),   (11)

where the symbol .* denotes element-by-element multiplication and e^(l) is the error of the output layer:

e^(L) = y − d,   (12)

or the error of the fully connected layer:

e^(l) = (W^(l+1))^T δ^(l+1).   (13)

The partial derivatives of J with respect to W^(l) and b^(l) of the convolution layer can be calculated as follows:

∂J/∂W^(l) = conv2(y^(l−1), rot90(δ^(l)), 'valid'),   (14)

∂J/∂b^(l) = sum(δ^(l)),   (15)

where sum(·) performs summation of the elements of δ^(l). The delta of the convolution layer can be obtained using Eq. (11). The error of the second convolution layer can be obtained using Eq. (13). Because down-sampling is performed from the first convolution layer to the pooling layer during forward propagation, up-sampling should be performed from the pooling layer to the first convolution layer during backward propagation. The error of the first convolution layer can be obtained by the following:

e^(l) = up(δ^(l+1)),   (16)

where up(·) performs the up-sampling operation of δ^(l+1).

The error of the pooling layer can be calculated as follows:

e^(l) = conv2(δ^(l+1), rot90(W^(l+1)), 'full'),   (17)

where 'full' returns the full two-dimensional convolution.

    3 Methodology Used to Perform Inversion

    The CTD data and the seismic data near the CTD are different responses at the same sea-water location, so they have an inherent relationship that can be approximated by the CNN. First, the CTD curves are taken as the desired output data and the seismic traces near the CTDs are taken as the input data. The CNN automatically approximates the inverse mapping operator from the input data to the desired output data. Then, the seismic traces obtained far away from the CTDs are input into the trained network model that serves as the inverse mapping operator. Finally, the inversion results are output from the network model.

    3.1 Input and Output of Convolutional Neural Network

The CTD curves and the seismic traces near the CTDs are taken as the desired output data and the input data, respectively. If only seismic records are taken as input data, the problem of multiple solutions inevitably arises. To solve this problem, stepwise regression is performed to select the optimal attributes and their combination from candidates including the instantaneous frequency, instantaneous amplitude, instantaneous phase, average frequency, dominant frequency, apparent polarity, and the derivative, integral, coordinate, and time attributes. Then, we take the two-dimensional images of the selected attributes as input data instead of the one-dimensional seismic records. The selected attributes are the derivative, time, and x-coordinate, which are located in the left-hand, middle, and right-hand columns of the images, respectively.

Because the frequency of CTD data differs from that of seismic data, their sampling points do not have a one-to-one correspondence but a one-to-many one. We assume that one sampling point in the CTD data corresponds to five adjacent sampling points near the CTD in the seismic data. As shown in Fig.2, we took images with 5×3 pixels as input data and the corresponding target point as the desired output data.
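
A sketch of this input construction, assuming the three selected attribute traces are available as 1-D numpy arrays; the function, array names, and the alignment indices below are hypothetical.

```python
import numpy as np

def build_input_images(derivative, time, x_coord, centres, half_window=2):
    """For each CTD sample, stack 2*half_window+1 = 5 adjacent seismic samples
    of the three selected attributes into a 5x3 image (columns: derivative,
    time, x-coordinate). `centres` holds the seismic sample index assumed to
    correspond to each CTD sample."""
    images = []
    for c in centres:
        rows = slice(c - half_window, c + half_window + 1)
        images.append(np.column_stack([derivative[rows], time[rows], x_coord[rows]]))
    return np.array(images)

# Hypothetical usage: 3 CTD samples aligned with seismic samples 10, 15, 20
# on the trace numbered 471 (4 ms sampling assumed).
trace = np.random.rand(100)
imgs = build_input_images(np.gradient(trace), np.arange(100) * 0.004,
                          np.full(100, 471.0), centres=[10, 15, 20])
# imgs.shape == (3, 5, 3)
```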

    3.2 Optimization of Training for Network Model

We divided the samples into training and validation datasets and used cross validation to determine whether the network model was overfitting the data. The root-mean-square error (RMSE) between the model and desired outputs is defined as follows:

e = [ (1/N) Σ_{k=1}^{N} (y_k − d_k)² ]^{1/2},

where e is the RMSE, y_k and d_k are the model and desired outputs, respectively, and N is the number of data points. If the RMSE of the validation dataset is consistently too large, this indicates that the network model is overfitting.
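
A one-line equivalent of this RMSE definition:

```python
import numpy as np

def rmse(y, d):
    """Root-mean-square error between model output y and desired output d."""
    y, d = np.asarray(y, float), np.asarray(d, float)
    return np.sqrt(np.mean((y - d) ** 2))
```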

The training dataset was evenly divided into several groups, and the convolution kernels, weights, and biases were not updated until all the samples in one group had been trained. This optimization method, known as the mini-batch algorithm, has the high efficiency of the stochastic gradient descent (SGD) algorithm and the stability of the batch algorithm. If the training dataset is kept as just one group, the mini-batch algorithm becomes a batch algorithm. If each group has only one sample, the mini-batch algorithm becomes an SGD algorithm. The increments of the convolution kernels, weights, and biases are calculated by the SGD and batch algorithms shown in Eqs. (19) and (20), respectively:

Δθ = −α ∂J_k/∂θ,   (19)

Δθ = −(α/N) Σ_{k=1}^{N} ∂J_k/∂θ,   (20)

where θ is the convolution kernel, the weight, or the bias, Δθ is the increment of the convolution kernel, weight, or bias, and J_k denotes the cost of the k-th sample.
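
The grouping itself can be sketched as a simple generator; a batch size equal to the dataset size recovers the batch algorithm, and a batch size of one recovers SGD (the array names are hypothetical):

```python
import numpy as np

def minibatches(X, D, batch_size, seed=0):
    """Yield shuffled (inputs, desired outputs) groups; parameters are updated
    only after a whole group has been processed."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        sel = idx[start:start + batch_size]
        yield X[sel], D[sel]
```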

With the mini-batch algorithm, we combined another optimization method, known as Adam (Kingma and Ba, 2015), the name of which is derived from adaptive moment estimation. Adam combines the advantages of the AdaGrad and RMSProp algorithms, and works well with sparse gradients as well as in non-stationary settings. In the Adam algorithm, the convolution kernels, weights, and biases are updated as follows:

θ_t = θ_{t−1} − α m̂_t / (√v̂_t + ε),

where θ is the convolution kernel, the weight, or the bias; the subscript t is the timestep, updated as t = t + 1 with t_0 = 0 as the initialized timestep; α is the learning rate; ε is a very small value; and m̂_t and v̂_t are the bias-corrected first moment estimate and the bias-corrected second raw-moment estimate, respectively:

m̂_t = m_t / (1 − β_1^t),

v̂_t = v_t / (1 − β_2^t),

where β_1 and β_2 are the exponential decay rates for the respective moment estimates, and m_t and v_t are the biased first moment estimate and the biased second raw-moment estimate, respectively:

m_t = β_1 m_{t−1} + (1 − β_1) g_t,

v_t = β_2 v_{t−1} + (1 − β_2) g_t²,

where m_0 = 0 and v_0 = 0 are the initialized first moment vector and the initialized second raw-moment vector, respectively, and g_t is the gradient with respect to the stochastic objective function, namely the vector of the partial derivatives of f(θ) with respect to θ evaluated at timestep t:

g_t = ∇_θ f_t(θ_{t−1}),

where f(θ) denotes the stochastic objective function with parameter θ. Good default settings for the tested machine learning problems are α = 0.001, β_1 = 0.9, β_2 = 0.999, and ε = 10^−8. In this study, however, α = 0.0001, because the model output fluctuates around the true solution when α = 0.001. All operations on vectors are element-wise.
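
A compact sketch of one Adam step as described above, using the smaller learning rate adopted in this study; the function itself is illustrative, not the authors' code.

```python
import numpy as np

def adam_update(theta, grad, m, v, t, alpha=1e-4,
                beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step for a parameter array theta (kernel, weight, or bias)."""
    m = beta1 * m + (1.0 - beta1) * grad        # biased first moment estimate
    v = beta2 * v + (1.0 - beta2) * grad ** 2   # biased second raw-moment estimate
    m_hat = m / (1.0 - beta1 ** t)              # bias-corrected estimates
    v_hat = v / (1.0 - beta2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```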

    4 Field Data

A post-stack seismic profile (Fig.3) with 1000 traces was acquired from the East China Sea, and three CTDs were dropped near the seismic traces numbered 85, 471, and 854. Because of its low vertical resolution, Fig.3 shows only the strong reflection interfaces, and it is difficult to distinguish vertical variations in the ocean water from it. Fig.4 shows the temperature, salinity, density, and velocity curves plotted using data from CTD-1, CTD-2, and CTD-3, in which we can see that temperature and velocity decrease with depth, whereas salinity and density increase with depth.

Figs.5, 6, and 7 show the selected derivative, time, and x-coordinate attributes obtained using stepwise regression. The derivative attribute is a nonlinear transformation of the seismic data, which records the differences in amplitude values between adjacent sampling points. The frequency of seismic data is much lower than that of the CTD data, but the frequency difference between them decreases significantly owing to the increased high-frequency component after derivative processing, which improves the correlation between the seismic and CTD data. Use of the coordinate attribute can reduce the multiplicity of solutions and improve generalizability. Because the CTD data are converted from the depth domain to the time domain, time takes the place of the z-coordinate.

    CNNs tend to overfit data when the number of samples is small. To avoid overfitting, we obtained six virtual CTDs by linear interpolation between the three actual CTDs to supply the CNN with more samples. For testing, we used CTD-2 and the seismic attributes extracted from the seismic trace numbered 471, and used the other eight pairs of inputs and outputs for training.
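
The virtual-CTD augmentation can be sketched as a simple linear blend between neighbouring measured profiles (the arrays and helper name are hypothetical; three virtual profiles between each adjacent pair of the three real CTDs give six in total):

```python
import numpy as np

def virtual_ctds(ctd_a, ctd_b, n_virtual=3):
    """Generate virtual CTD profiles by linear interpolation between two
    measured profiles, assumed to be numpy arrays on the same time axis."""
    weights = np.linspace(0.0, 1.0, n_virtual + 2)[1:-1]   # exclude the end members
    return [(1.0 - w) * ctd_a + w * ctd_b for w in weights]
```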

    4.1 Data Preprocessing

CTD and seismic data are in different domains, so we converted the CTD data to the time domain using the following depth-time conversion equation:

where v_i is the interval velocity, d_i is the depth for each interval velocity, t_0 is the two-way vertical time to d_1, and t_n is the two-way vertical time corresponding to d_n.
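
A hedged sketch of such an interval-based depth-to-time conversion; the cumulative two-way-time formula below is an assumption, not a reproduction of the paper's equation.

```python
import numpy as np

def depth_to_time(d, v, t0=0.0):
    """Convert interval thicknesses d (m) and interval velocities v (m/s) to
    two-way vertical times, starting from t0, the two-way time to the top of
    the first interval. Each interval contributes 2*d/v of two-way time."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    return t0 + np.cumsum(2.0 * d / v)
```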

Due to the differences in the dimensions and units of the seismic attributes and the oceanic parameters, it is difficult for the CNN to converge if these attributes and parameters are directly used for training. Therefore, we normalized the seismic attributes and oceanic parameters using Eqs. (28) and (29), respectively:

    Fig.4 Oceanic parameters measured by CTD-1 ((a) and (d)), CTD-2 ((b) and (e)), and CTD-3 ((c) and (f)), which are located near the seismic traces numbered 85, 471, and 854, respectively. The blue, green, red, and cyan lines denote the temperature, salinity, density, and velocity curves, respectively.

    Fig.5 Derivative attribute profile.

    Fig.6 Time attribute profile.

    Fig.7 x-coordinate attribute profile.

where abs(·) represents the absolute value of the elements of the input, and max(·) and min(·) are the largest and smallest elements of the input, respectively.
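
Since the normalization equations themselves are not reproduced above, the following sketch only assumes the forms suggested by the operators they use: scaling a seismic attribute by its maximum absolute value, min-max normalization of an oceanic parameter to [0, 1] to match the sigmoid output layer, and the corresponding inverse transformation used for post-processing in Section 4.2.

```python
import numpy as np

def normalize_attribute(x):
    """Assumed form of Eq. (28): scale a seismic attribute by its maximum
    absolute value so that it lies in [-1, 1]."""
    x = np.asarray(x, float)
    return x / np.max(np.abs(x))

def normalize_parameter(x):
    """Assumed form of Eq. (29): min-max normalization of an oceanic
    parameter into [0, 1]."""
    x = np.asarray(x, float)
    return (x - np.min(x)) / (np.max(x) - np.min(x))

def denormalize_parameter(y, x_min, x_max):
    """Assumed inverse transformation: map a normalized model output back to
    physical units."""
    return y * (x_max - x_min) + x_min
```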

    4.2 Inversion of Oceanic Parameters Represented by CTD

Four network models were established for inverting temperature, salinity, density, and velocity. After 10000 epochs, the four network models ended training as the error tolerance had been reached, as shown in Fig.8.

A series of images containing 5×3 pixels were then generated from the three attribute profiles shown in Figs.5, 6, and 7. These images were then input into the four trained network models in turn. Finally, the predicted temperature, salinity, density, and velocity profiles were quickly output, as shown in Figs.9–12, respectively.

Figs.3 and 9–12 show different responses to the same sea water, with Fig.3 reflecting the variation of impedance and Figs.9–12 reflecting the respective variations of temperature, salinity, density, and velocity. The vertical resolutions of Figs.9–12 are clearly improved compared with that of Fig.3.

    Fig.8 Error variation with epoch. The blue, green, red, and cyan lines represent the error curves of temperature, salinity, density, and velocity, respectively.

    Since the desired outputs were normalized in data preprocessing, the model outputs were then post-processed using the following inverse transformation equation:

Fig.9 Temperature profile predicted by convolutional neural network.

Fig.10 Salinity profile predicted by convolutional neural network.

    Fig.11 Density profile predicted by convolutional neural network.

    Fig.12 Velocity profile predicted by convolutional neural network.

The accuracy of the inversion results can be determined by comparing the model outputs with the desired outputs at the location of CTD-2. Fig.13 shows fold plots of the model and desired outputs, in which we can see that the predicted and actual oceanic parameters match closely, and the Pearson correlation coefficients of the four oceanic parameters exceed 90%. Therefore, we conclude that the proposed method has the generalizability needed to accurately predict temperature, salinity, density, and velocity.
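
The correlation check can be reproduced with a one-liner (the curve arrays are hypothetical):

```python
import numpy as np

def pearson_r(predicted, actual):
    """Pearson correlation coefficient between a predicted and an actual
    oceanic-parameter curve of equal length."""
    return np.corrcoef(np.asarray(predicted, float), np.asarray(actual, float))[0, 1]
```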

    Fig.13 Fold plots of the predicted and actual oceanic parameters in the location of CTD-2. The blue solid and red dotted lines denote the desired and model outputs, respectively.

    5 Conclusions

It is difficult for traditional seismic inversion methods to establish mapping relationships between oceanic parameters and seismic responses by mathematical modeling. In this paper, we presented a CTD-seismic joint inversion method based on a CNN, which is purely data-driven. Using deep learning, the CNN can automatically approximate the inverse mapping operator from the input data to the desired output data. To reduce the multiplicity of solutions, we utilized stepwise regression to select the best attributes and their combination and took the two-dimensional images of the selected attributes as input data. The derivative attribute was shown to improve the correlation between the seismic and CTD data, and the coordinate attribute to reduce the multiplicity of solutions and improve generalizability. The inversion results of our field data demonstrate that the proposed method can accurately predict oceanic parameters.

Joint CTD-seismic inversion based on a CNN is data-driven, which gives it bright prospects for the future, although a number of problems remain to be solved. Among them, the most prominent is the multiplicity of solutions. Existing approaches to reducing the multiplicity of solutions basically involve increasing the number of samples; however, the resulting trained network model is still only applicable to local areas. Further reducing the multiplicity of solutions and improving generalizability is therefore an urgent problem that must be addressed.

    Acknowledgements

This research is jointly funded by the National Key Research and Development Program of China (No. 2017YFC0307401), the National Natural Science Foundation of China (No. 41230318), the Fundamental Research Funds for the Central Universities (No. 201964017), and the National Science and Technology Major Project of China (No. 2016ZX05024-001-002).

Acharya, U. R., Fujita, H., Oh, S. L., Raghavendra, U., Tan, J. H., Adam, M., Gertych, A., and Hagiwara, Y., 2018. Automated identification of shockable and non-shockable life-threatening ventricular arrhythmias using convolutional neural network., 79: 952-959.

    Das, V., Pollack, A., Wollner, U., and Mukerji, T., 2018. Convolutional neural network for seismic impedance inversion.. Anaheim, 2071-2075.

    Dramsch, J. S., and Lüthje, M., 2018. Deep learning seismic facies on state-of-the-art CNN architectures.. Anaheim, 2036-2040.

    Duan, X. D., Zhang, J., Liu, Z. Y., Liu, S., Chen, Z. B., and Li, W. P., 2018. Integrating seismic first-break picking methods with a machine learning approach.. Anaheim, 2186-2190.

    Guo, B. W., Liu, L., and Luo, Y., 2018. A new method for automatic seismic fault detection using convolutional neural network.. Anaheim, 1951-1955.

    Hollander, Y., Merouane, A., and Yilmaz, O., 2018. Using a deep convolutional neural network to enhance the accuracy of first break picking.. Anaheim, 4628-4632.

    Hornik, K., Stinchcombe, M., and White, H., 1990. Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks., 3 (5): 551-560.

    Kamilaris, A., and Prenafeta-Boldú, F. X., 2018. Deep learning in agriculture: A survey., 147: 70-90.

Kingma, D. P., and Ba, J. L., 2015. Adam: A method for stochastic optimization. San Diego, 1-15.

    Li, B., and Hu, X., 2018. Effective vehicle logo recognition in real-world application using mapreduce based convolutional neural networks with a pre-training strategy., 34 (3): 1985-1994.

    Li, Y., Huang, Y., and Zhang, M., 2018. Short-term load forecasting for electric vehicle charging station based on niche immunity lion algorithm and convolutional neural network., 11 (5): 1253.

Liu, F., Zhou, Z., Jang, H., Samsonov, A., Zhao, G., and Kijowski, R., 2017. Deep convolutional neural network and 3D deformable approach for tissue segmentation in musculoskeletal magnetic resonance imaging., 79 (4): 2379-2391.

    Ma, Y., Ji, X., BenHassan, N. M., and Luo, Y., 2018. A deep learning method for automatic fault detection.. Anaheim, 1941-1945.

Nair, V., and Hinton, G. E., 2010. Rectified linear units improve restricted Boltzmann machines. Omnipress, 807-814.

    Qiu, Z., Chen, J., Zhao, Y., Zhu, S., He, Y., and Zhang, C., 2018. Variety identification of single rice seed using hyperspectral imaging combined with convolutional neural network., 8 (2): 212.

Shen, Y., Sun, M. Y., Zhang, J., Liu, S., Chen, Z. B., and Li, W. P., 2018. Seismic trace editing by applying machine learning.. Anaheim, 2256-2260.

Teimouri, N., Dyrmann, M., Nielsen, P., Mathiassen, S., Somerville, G., and Jørgensen, R., 2018. Weed growth stage estimator using deep convolutional neural networks., 18 (5): 1580.

Venkatesan, R., and Li, B. X., 2018. CRC Press, Boca Raton, 1-183.

    Wachinger, C., Reuter, M., and Klein, T., 2018. DeepNAT: Deep convolutional neural network for segmenting neuroanatomy., 170: 434-445.

    Wang, Q., Gao, J., and Yuan, Y., 2018a. A joint convolutional neural networks and context transfer for street scenes labeling., 19 (5): 1457-1470.

    Wang, W. L., Yang, F. S., and Ma, J. W., 2018b. Velocity model building with a modified fully convolutional network.. Anaheim, 2086-2090.

Wu, X. M., Shi, Y. Z., Fomel, S., and Liang, L. M., 2018a. Convolutional neural networks for fault interpretation in seismic images.. Anaheim, 1946-1950.

    Wu, Y., Lin, Y. Z., and Zhou, Z., 2018b. InversionNet: Accurate and efficient seismic waveform inversion with convolutional neural networks.. Anaheim, 2096-2100.

    Yuan, S. Y., Liu, J. W., Wang, S. X., Wang, T. Y., and Shi, P. D., 2018. Seismic waveform classification and first-break picking using convolution neural networks., 15 (2): 272-276.

    Zhao, T., 2018. Seismic facies classification using different deep convolutional neural networks.. Anaheim, 2046-2050.

* Corresponding author. E-mail: zhjmeteor@163.com

(Received January 30, 2019; revised May 10, 2019; accepted May 6, 2020)

    (Edited by Chen Wenwen)
