
    SegNet-based first-break picking via seismic waveform classification directly from shot gathers with sparsely distributed traces

    Petroleum Science, 2022, Issue 1

    San-Yi Yuan, Yue Zhao, Tao Xie, Jie Qi, Shang-Xu Wang *

    a State Key Laboratory of Petroleum Resources and Prospecting,CNPC Key Laboratory of Geophysical Exploration,China University of Petroleum,Beijing 102249,China

    b China Oilfield Services Ltd.,Tanggu,Tianjin 300452,China

    c University of Oklahoma,ConocoPhillips School of Geology and Geophysics,Norman,OK,USA

    Keywords: First-break picking; Deep learning; Irregular seismic data; Waveform classification

    ABSTRACT Manually picking regularly and densely distributed first breaks (FBs) is critical for shallow velocity-model building in seismic data processing. However, it is time consuming. We employ the fully-convolutional SegNet to address this issue and present a fast automatic seismic waveform classification method to pick densely-sampled FBs directly from common-shot gathers with sparsely distributed traces. By feeding a large number of representative shot gathers with missing traces and the corresponding binary labels segmented by manually interpreted fully-sampled FBs, we can obtain a well-trained SegNet model. When any unseen gather, including one with irregular trace spacing, is input, the SegNet outputs the probability distribution of the different categories for waveform classification. FBs can then be picked by locating the boundaries between one class on post-FBs data and the other on the pre-FBs background. Two land datasets, each with over 2000 shots, are adopted to illustrate that one well-trained 25-layer SegNet can favorably classify waveforms and further pick fully-sampled FBs verified by the manually-derived ones, even when the proportion of randomly missing traces reaches 50%, 21 traces are missing consecutively, or traces are missing regularly.

    1.Introduction

    Seismic data have been commonly used to understand stratigraphic structures and to predict the distribution of oil and gas at depths of thousands of meters. The first arrivals, which correspond to first breaks (FBs), contain important feature information. For instance, they can be adopted to remove the effect of the shallow fluctuating weathered layer, which is the cornerstone of subsequent seismic data processing and interpretation (Yilmaz, 2001). Moreover, they can be utilized to estimate the source wavelet, which is the basis of forward modelling and the seismic inverse problem, and to build the shallow velocity model, which is useful for seismic migration and full waveform inversion (Yilmaz, 2001; Li et al., 2019; Liu et al., 2020; Yao et al., 2020). Consequently, detecting the first arriving waves, or picking FBs, is helpful for geophysicists.

    Manual picking is the most straightforward method. In addition, it can incorporate any prior knowledge, such as the spatial continuity of the first arriving waves. Even for low-quality or missing traces, the human brain can still interpret FBs by making use of the information at neighboring traces. However, due to the increasing number of seismic traces, especially for wide-azimuth and high-density acquisition, manual picking is time-consuming and expensive. Various (semi-)automated first-break picking methods have been proposed and developed to improve picking efficiency and to alleviate the pressure on interpreters. Each method has its own advantages and disadvantages. One traditional automatic first-break picking technique is based on single-trace time- or frequency-domain methods operating on single- or multicomponent recordings from an individual receiver level, such as energy-based methods (Coppens, 1985; Sabbione and Velis, 2010), entropy-based methods (Sabbione and Velis, 2010), fractal-dimension-based methods (Boschetti et al., 1996; Jiao and Moon, 2000), and higher-order-statistics-based methods (Yung and Ikelle, 1997; Saragiotis et al., 2004; Tselentis et al., 2012). In general, such methods can quickly and stably pick FBs on data with a high signal-to-noise ratio (SNR). Compared with the time- or frequency-domain methods, time-frequency-domain methods add one time dimension or one frequency dimension, and thus have more potential to stably pick FBs in the case of relatively low SNR and even to detect weak signals or weak first-break waves (Galiana-Merino et al., 2008; Mousavi et al., 2016).

    In contrast to single-trace (semi-)automated methods, multi-trace ones, such as cross-correlation based methods (Gelchinsky and Shtivelman, 1983; Molyneux and Schmitt, 1999) or template matching methods (Plenkers et al., 2013; Caffagni et al., 2016), make use of information from multiple receivers within the array simultaneously. In essence, multi-trace methods pick FBs by taking the maximum values of the cross-correlation or convolution results from trace(s) to trace(s). Therefore, the original multi-trace time-space amplitude information is generally utilized to pick FBs. Due to the simultaneous usage of multiple traces or high-dimensional (2D or 3D) information, the multi-trace cross-correlation or template matching based methods can, to some extent, identify weak signals or pick FBs at low SNR (Gibbons and Ringdal, 2006; Caffagni et al., 2016). However, they often do not adapt well to situations in which the waveforms change from trace to trace or there are missing or bad traces.

    Most single- or multi-trace methods compute only one attribute for each time sampling point and subsequently select the locations with maximum- or minimum-value attributes as FBs. To improve picking stability, attributes from the time domain, frequency domain, time-frequency domain and/or time-space domain can be combined to classify waveforms and to further pick FBs by artificial rules (Gelchinsky and Shtivelman, 1983; Akram and Eaton, 2016; Khalaf et al., 2018) or by traditional fully-connected artificial neural networks (ANNs) (McCormack et al., 1993; Gentili and Michelini, 2006; Maity et al., 2014). Multi-attribute first-break picking approaches based on artificial rules are straightforward and explicable. However, they increase the number of parameters that need to be tuned when multiple attribute generators are chosen. Multi-attribute approaches based on fully-connected ANNs can automatically analyze and combine multiple attributes extracted from the data at the cost of learning a large number of network parameters. Nevertheless, they usually require extra attribute extraction and attribute optimization steps.

    Convolutional neural networks (CNNs), a developed type of ANNs, have become popular in recent years by breaking previous records for various classification tasks, especially in image and speech processing (LeCun et al., 2015; Russakovsky et al., 2015; Goodfellow et al., 2016; Silver et al., 2016; Esteva et al., 2017; Perol et al., 2018). At the cost of training on big data, CNNs can automatically extract features or attributes and meanwhile classify data. Moreover, they can be designed to be deep, thanks to the properties of local perception and weight sharing. In recent years, CNNs have been successfully introduced into the field of solid Earth geoscience, including for waveform classification (Bergen et al., 2019; Dokht et al., 2019; Pham et al., 2019; Wu et al., 2019). For instance, several 1D-CNNs have been developed to effectively detect or classify earthquake phases or micro-seismic events, and to further pick FBs trace by trace or receiver by receiver (Chen et al., 2019; Wang et al., 2019; Wu et al., 2019; Zhu and Beroza, 2019). Several 2D-CNNs have been presented to effectively detect or classify seismic waveforms and to further pick FBs without involving the cases of missing traces or poor-quality traces (Yuan et al., 2018; Hu et al., 2019; Xie et al., 2019; Zhao et al., 2019; Liao et al., 2021).

    In this paper, we apply the deep fully-convolutional SegNet, consisting of an end-to-end encoder-decoder automatic spatio-temporal feature extractor and a following pixel-wise classifier (Badrinarayanan et al., 2017), to automatically segment post-FBs data and pre-FBs background in common-shot gathers. At the cost of learning various representative shot gathers with sparsely distributed traces and the manually interpreted dense FBs, one well-trained deep SegNet can be yielded to pick FBs of any unseen gather having differently distributed missing traces or regularly distributed traces with a coarse trace interval. The network architecture is designed to be deep, containing a set of convolutional layers and pooling layers, in order to see more information from adjacent seismic traces and to further pick FBs, especially at the positions of missing traces. The presented approach needs neither pre-extracted and pre-chosen attributes nor an additional interpolation step. Interpolation is often required to reconstruct sparsely distributed traces and is useful for subsequent seismic data processing and inversion (Jia and Ma, 2017; Sun et al., 2019; Wang et al., 2019).

    We begin this paper with the introduction of the motivation and general framework, as well as the description of data preparation, network architecture design, network model evaluation, model update and the optimum model application. One seismic dataset including 2251 shot gathers and another including 2273 gathers are then adopted to illustrate the effectiveness of the proposed 25-layer SegNet for classifying waveforms and further picking FBs directly from shot gathers with different distributions of missing traces. Finally, a discussion and some conclusions are given.

    2.Methodology

    2.1.Motivation and general framework

    First breaks (FBs) refer to the earliest arrival travel time of the seismic wave at receivers in the field of P-wave exploration. Therefore, there is only one FB in each seismic trace. Because some reflected waveforms below FBs, such as some separated waves, are similar to first-break ones, and because there exists an unbalanced label data challenge, direct classification into FBs and other non-FBs categories may give rise to some false FBs (Yuan et al., 2018). As is well known, the waveform features above and below FBs are typically different in seismic common-shot gathers. There is only background or noise above FBs, which is usually spatially uncorrelated, while there are dominant signals below FBs, which are more spatially correlated. Consequently, we can divide shot gathers into two classes, pre-FBs background and post-FBs data. In this way, the boundaries between the two classes are the FBs.

    It is important to obtain regularly and densely distributed FBs from sparsely distributed traces. One challenge of first-break picking is that trace intervals in each shot gather are often different, or trace intervals are regularly large. Poor first-break picking for these sparsely distributed traces can degrade the quality of the shallow velocity-model building. Herein, we aim to predict the regularly dense waveform classification from shot gathers with randomly or regularly missing traces and to further pick FBs directly. More concretely, we want to find an optimal function that converts shot gathers with missing traces into mask data, where each trace is an ideal step function. Due to missing traces, the optimal function should have the ability to capture information from adjacent seismic traces. Convolution-based networks have been demonstrated to be suitable for classifying 2D or 3D images and readily achieve more translation invariance for robust classification via series of high-dimensional convolution layers and pooling layers. Recently, fully convolutional networks (FCNs) without fully-connected layers (FCLs), proposed by Long et al. (2015) in the context of image and semantic segmentation, have become a popular architecture, including in the seismic exploration field. FCNs have the advantages of achieving end-to-end pixel-wise classification and of allowing the network architecture to be designed flexibly, in contrast to the typical sliding-window CNNs involving one or several FCLs before classification. Several variants of FCNs, including U-net (Ronneberger et al., 2015), DeepLab (Chen et al., 2018), SegNet (Badrinarayanan et al., 2017) and DeconvNet (Noh et al., 2015), have recently been widely developed. Taking missing traces into account, the network should be designed relatively deep to see a larger context and to utilize more information from adjacent seismic traces. From the viewpoint of memory, we choose the SegNet architecture, which, compared with U-net, does not transfer the entire feature maps extracted in the encoder to the corresponding decoder but instead reuses pooling indices in the corresponding decoder.

    Taking SegNet as the function set,the regularly dense pixel-wise classification directly from a certain shot gather X with missing traces can be expressed as
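    Based on the definitions given below, a form consistent with the text is

    $$ \mathbf{P} = \mathrm{SegNet}(\mathbf{X};\ \mathbf{m}) \qquad (1) $$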

    where P represents the predicted probability with two channels indicating the background class and the signal-dominant class, and m represents the network parameters or variables in the function. SegNet represents a mapping function from X to P, involving some hyper-parameters such as the number of the convolution layers, the number of the (un)pooling layers, and the size of the filters in the convolution layers. As denoted in the SegNet architecture for waveform classification and first-break picking (Fig. 1), SegNet consists of an end-to-end encoder-decoder feature extractor with trainable parameters and a following classifier without any learned parameters. The SegNet input is the preprocessed one-channel shot gather with missing traces, while the output is the two-channel map that is supervised by the given labels with one pre-FBs background class and the other post-FBs data class.

    At the cost of training on a large amount of labeled data, which can be obtained by manually interpreting dense FBs, SegNet can automatically extract statistical spatio-temporal features or attributes rather than relying on several pre-computed attributes sensitive to FBs. The optimal extractor learned from various labeled data yields high-level features, condensed by the preceding series of layers, with the property that there is a large difference between the high-level feature values above FBs and those below FBs, but only a small difference among feature values within the same category, including those at missing traces. As a result, these high-level features can be fed into a classifier and converted into the pixel-wise probability, which can be used to interpret each pixel as either the pre-FBs background class or the post-FBs signal-dominant class.

    Our general framework for waveform classification and first-break picking from shot gathers with sparsely distributed traces is summarized as follows:

    1) Data preparation. Divide data into three subsets, namely the training set, validation set and test set, and preprocess them with the same processing flow.

    2) Network architecture design. Design the network architecture, including the number of the (de)convolution layers, the number of the (un)pooling layers, the size of the filters, and the combination mode of different layers.

    3) Network model evaluation.Define a loss function and several performance metrics to quantitatively evaluate the quality of a large number of models.

    4) The optimum model choice.Find an optimum model by using the gradient descent and an early stopping strategy according to both the training set and the validation set.

    5) The optimum model application.Apply the optimum model to unseen gathers in the test set to automatically extract the features,to meanwhile classify waveforms and to further pick FBs.

    2.2.Data preparation

    To obtain an effective, well-trained and generalizable model, i.e., the SegNet classifier and picker, three subsets including the training, validation and test sets are prepared. The training set is used for training the model or optimizing the network parameters, as well as for designing the learning rate, which is a hyper-parameter in the model update. The validation set is adopted to evaluate whether the learnt network is overfitting, and to determine the network hyper-parameters. The test set is used to investigate the generalization ability of the well-trained model to unseen shot gathers. In both the training and validation sets, various typical and diverse samples, including shot gathers with different kinds of noise, different geometries of FBs, as well as different distributions of missing traces, should be prepared as far as possible and are cropped into the same size along both the time dimension and the space dimension. Accordingly, the labels used for supervising the network model are acquired by manually picking dense FBs and then segmenting gathers into one pre-FBs background class and the other post-FBs data class. Because there are only two classes, the labels are quantified by binary maps. The ideal binary vectors (1 0) and (0 1) are designed to denote the post-FBs data class and the pre-FBs background class, respectively. An example of the input shot gather and the corresponding output label can be seen in Fig. 1. To learn the model stably, fast and effectively, all samples in the different sets should be further preprocessed, mainly including outlier or heavy noise elimination, surface wave attenuation, filtering, cropping and normalization.
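    As an illustration of this labeling step, the NumPy sketch below (hypothetical function and variable names) builds the two-class label map from manually picked FB sample indices and applies one simple trace-wise normalization; the paper states only that normalization is applied, not its exact form, so this is just one plausible choice.

```python
import numpy as np

def make_labels_and_normalize(gather, fb_samples):
    """Illustrative sketch (hypothetical names): build a two-class label map
    from manually picked first-break sample indices and normalize a gather.

    gather     : 2D array (n_time_samples, n_traces) of seismic amplitudes
    fb_samples : 1D array (n_traces,) of manually picked FB sample indices
    """
    n_time_samples, n_traces = gather.shape

    # Pixel-wise class indices split by the manually interpreted FBs:
    # class 0 = post-FBs data (the paper's (1 0) vector),
    # class 1 = pre-FBs background (the paper's (0 1) vector).
    time_axis = np.arange(n_time_samples)[:, None]              # (n_time_samples, 1)
    labels = (time_axis < fb_samples[None, :]).astype(np.int64)

    # One plausible normalization (assumption): trace-wise scaling,
    # keeping zero-valued (missing) traces at zero.
    std = gather.std(axis=0, keepdims=True)
    std[std == 0.0] = 1.0
    normalized = gather / std

    return normalized, labels
```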

    2.3.Network architecture design

    There is only one boundary line in each shot gather, but real missing traces can complicate seismic waveform classification and first-break picking. A shallow model can mainly learn the common features of waveforms, but it is not easy for it to accurately classify some complex waveforms with strong lateral variation and those at the locations of relatively dense missing traces. Therefore, the SegNet model is still required to be relatively deep. On the whole, several down-sampling stages in the encoder and the same number of up-sampling stages in the decoder, as well as a final pixel-wise classification layer, are designed, as shown in Fig. 1. Inside each stage, the (de)convolution layer consists of convolution, batch normalization (BN) and rectified linear unit (ReLU) activation, and (un)pooling is placed between several (de)convolution layers.

    For each convolution layer, as denoted in the blue boxes of Fig. 1, its output can be expressed as
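    A form consistent with the definitions that follow is

    $$ \mathbf{C}_{l} = \max\left(0,\ \mathrm{BN}\left(\mathbf{W}_{l} * \mathbf{X}_{l} + \mathbf{B}_{l}\right)\right) \qquad (2) $$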

    where the symbol * is a 2D convolution or deconvolution operator, W_l is the 2D convolutional filter or the weights at the l-th layer, B_l is an aggregation of the biases that is broadcast to each neuron in the feature or attribute map at the l-th layer, X_l is the input of the convolution layer, and C_l is the output of the convolution layer. When l is 1, X_l is the input shot gather, i.e., X in Eq. (1). BN represents a differentiable batch-normalization transformation that normalizes the convolution output across a mini-batch (Ioffe and Szegedy, 2015), and thus has the advantage of less internal covariate shift from shallow layers to deep layers. The activation function max, or the state-of-the-art ReLU, represents an element-wise maximization operation, which enables the SegNet model to be a universal nonlinear function approximator. It has been demonstrated that ReLU has the advantages of avoiding the notorious vanishing gradient problem and promoting model sparsity (Glorot et al., 2011).

    Fig.1.The SegNet architecture for seismic waveform classification and first-break picking directly from shot gathers with sparsely distributed traces.Blue boxes represent the convolution layers containing a convolution operation,a batch normalization operation and an activation operation,green boxes represent the pooling operators,red rectangles represent the unpooling operators implemented by pooling indices computed in the pooling step,and the yellow box represents the Softmax classifier.

    For the pooling layer, as denoted in the green boxes of Fig. 1, its output can be expressed as
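    A form consistent with the definitions that follow is

    $$ \mathbf{D}_{l} = \mathrm{POOL}\left(\mathbf{C}_{l}\right) \qquad (3) $$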

    where the symbol POOL represents a downsampling or pooling operator, and D_l represents the pooling result at the l-th layer, whose size is smaller than that of the feature map C_l. We use max-pooling as the POOL operator to yield the maximum values by comparing the neighborhood of the feature map in a sliding-window way, and record the locations of the maximum values with indices at the same time. The max-pooling layer typically compresses the feature maps along all dimensions while retaining the most important information. Several stacked max-pooling layers can gradually reduce the data dimension, and thus act as a strong regularization for the network (Goodfellow et al., 2016) to control overfitting. Furthermore, more pooling layers can achieve more translation invariance, which is useful for effectively classifying translated, rotated and deformed spatiotemporal waveforms. Adopting more pooling layers is also significant for seeing a larger input image context (spatial window), which is useful for the classification of shot gathers with sparsely distributed traces due to the usage of more spatial neighborhood statistical information. However, the pooling operator correspondingly leads to a loss of spatial resolution of the feature maps. The increasing loss of boundary details is not beneficial for spatiotemporal waveform segmentation, where boundary delineation corresponding to FBs is vital. Here, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear unpooling. This eliminates the need for learning to upsample.

    Finally, a pixel-wise Softmax classifier is adopted to convert the output of the attribute extractor into a probability distribution over K different possible classes:
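    Assuming the standard Softmax form consistent with the definitions below, this can be written as

    $$ P_{u,v,n} = \frac{\exp\left(Y_{u,v,n}\right)}{\sum_{k=1}^{K}\exp\left(Y_{k,v,n}\right)} \qquad (4) $$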

    where P_{u,v,n} represents the normalized probability that the v-th data point in the n-th shot is classified into the u-th category, Y_{u,v,n} represents the predicted feature representation using an encoder-decoder extractor composed of convolution layers and pooling layers, and K represents the number of classes. Because the Softmax classifies each data point independently, the predicted P also has K channels. In our case, there are only two classes, the noise-dominant background and the signal-dominant waveforms, so K equals 2. The predicted segmentation corresponds to the class with the maximum probability at each pixel.

    2.4.Network model evaluation

    Because the quantity of the pre-FBs background class is comparable to that of the post-FBs data class, the binary cross-entropy criterion in its original form is chosen as the model evaluator or the loss function and is given as follows:
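    Assuming a binary cross-entropy averaged over all shots and pixels, consistent with the definitions below, the loss can be written as

    $$ O(\mathbf{m}) = -\frac{1}{NV}\sum_{n=1}^{N}\sum_{v=1}^{V}\sum_{k=1}^{2} Q_{k,v,n}\,\log P_{k,v,n} \qquad (5) $$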

    where N represents the total shot number, V represents the total number of sampling points or pixels in a shot gather, m represents the model or network parameters, Q_{k,v,n} (k = 1, 2) represents the desired or true binary probability at the v-th data point of the n-th shot, and P_{k,v,n} represents the predicted binary probability via Eq. (1). The closer the predicted probability distribution is to the ideal one, the smaller the cross-entropy value O(m), and therefore the better the built model mapping X to P, as denoted in Eq. (1).

    To quantitatively evaluate the well-trained model obtained by minimizing Eq. (5), i.e., the cross entropy, as well as its generalization performance, the accuracy between the predicted classification results and the manually interpreted ones, or the consistency between these two results, is evaluated by using the following expressions:
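    Consistent with the descriptions below, the per-shot and overall accuracy and recall can be expressed as

    $$ A_{n} = \frac{\mathrm{TP}_{n} + \mathrm{TN}_{n}}{V}, \qquad a = \frac{\sum_{n=1}^{N}\left(\mathrm{TP}_{n} + \mathrm{TN}_{n}\right)}{NV}, $$

    $$ R_{n} = \frac{\mathrm{TP}_{n}}{\mathrm{TP}_{n} + \mathrm{FN}_{n}}, \qquad r = \frac{\sum_{n=1}^{N}\mathrm{TP}_{n}}{\sum_{n=1}^{N}\left(\mathrm{TP}_{n} + \mathrm{FN}_{n}\right)} $$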

    where the true positive TP_n (n = 1, 2, …, N) represents the number of sampling points in the n-th shot that correctly predict the signal-dominant class below FBs, the true negative TN_n represents the number of sampling points in the n-th shot that successfully predict the background above FBs, and the false negative FN_n represents the number of sampling points in the n-th shot that fail to predict the signal-dominant class. The accuracy A_n and a give the percentage of correctly classified sampling points in the n-th shot gather and in all shot gathers of the training, validation or test sets, respectively. The recall rates R_n and r give the percentage of correctly classified signal-dominant sampling points among all signal-dominant sampling points of the n-th shot gather and of all shot gathers in the training, validation or test sets, respectively (Powers, 2011).

    2.5.Model update

    A linear combination of the model update Δm_{t-1} from the previous iteration t-1 and the negative gradient of the loss function is used to update the model m_t at the current iteration t as follows:
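    Consistent with the description below, the momentum-based update can be written as

    $$ \Delta\mathbf{m}_{t} = \mu\,\Delta\mathbf{m}_{t-1} - \eta_{t}\,\frac{\partial O(\mathbf{m}_{t})}{\partial \mathbf{m}_{t}}, \qquad \mathbf{m}_{t+1} = \mathbf{m}_{t} + \Delta\mathbf{m}_{t}, \qquad \frac{\partial O(\mathbf{m}_{t})}{\partial \mathbf{m}_{t}} = \frac{\partial O(\mathbf{m}_{t})}{\partial \mathbf{Y}_{t}}\,\frac{\partial \mathbf{Y}_{t}}{\partial \mathbf{m}_{t}} $$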

    where μ represents the momentum coefficient or the weight of the last update Δm_{t-1}, η_t represents the learning rate or the weight of the negative gradient at iteration t, ∂O(m_t)/∂m_t represents the gradient of the loss function with respect to the model or network parameters, ∂O(m_t)/∂Y_t represents the gradient of the loss function with respect to the input of the Softmax classifier or the output of the encoder-decoder feature extractor, ∂O(m_t)/∂Y_t is equal to the difference between the predicted probability P and the true probability Q, ∂Y_t/∂m_t represents the gradient of the output of the encoder-decoder feature extractor with respect to the model parameters, and ∂Y_t/∂m_t can be explicitly calculated by using the back-propagation algorithm (Rumelhart et al., 1986). Herein, the gradient term ∂O(m_t)/∂m_t is approximately calculated by randomly selecting only a small subset (mini-batch) of the training samples, with improvements in efficiency and convergence, which is the idea of the mini-batch stochastic gradient descent (SGD) algorithm (LeCun et al., 1998).

    When the loss for the training set decreases, whereas the validation loss increases (known as overfitting) or remains stable for a certain number of iterations, the training or the update is stopped. The finally well-trained model SegNet with the optimal network parameters m_opt is saved as the automatic classifier.

    2.6.The optimum model application

    The saved optimum model can be used to convert any unseen shot gather X_test in the test set into a two-channel normalized probability map with the following formula:
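    Consistent with the description below, this prediction step can be written as

    $$ \mathbf{P}_{\mathrm{test}} = \mathrm{SegNet}\left(\mathbf{X}_{\mathrm{test}};\ \mathbf{m}_{\mathrm{opt}}\right) $$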

    When the predicted binary probability at a certain pixel of P is closer to (1 0), we classify the corresponding input sampling point in X into the post-FBs signal-dominant class. On the contrary, when the predicted probability is closer to (0 1), we classify the input sampling point into the pre-FBs background class. In an ideal case, the sampling points above FBs should belong to one class, while those below FBs belong to the other. Consequently, the boundary between the two classes, or the segmentation line in the classification map, can be located, i.e., the FBs are picked. Furthermore, we take the maximum of each position in the two-channel probability map to obtain a single-channel probability map. It can be utilized to statistically and directly evaluate, to some extent, the quality of the waveform classification along with the first-break picking. The smaller the probability in the single-channel probability map, the lower the accuracy of the waveform classification, or the more unreliable the waveform classification. In an ideal case, the single-channel probability curve for each trace, including missing traces, should be an impulse function with only one minimum. The position of the minimum value in each trace can be regarded as the FB. Consequently, we can also pick FBs by locating the minimum values of the single-channel probability map trace by trace.
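    A minimal NumPy sketch of this picking step is given below, assuming the channel ordering stated above (channel 0 for the post-FBs data class, channel 1 for the pre-FBs background class); the function name and array layout are illustrative, not the authors' code.

```python
import numpy as np

def pick_first_breaks(prob):
    """Sketch: pick FBs from a two-channel probability map.

    prob : array (2, n_time_samples, n_traces); channel 0 = post-FBs data,
           channel 1 = pre-FBs background (assumed ordering).
    Returns the FB sample index per trace and a single-channel map for QC.
    """
    # Pixel-wise segmentation: 1 where the post-FBs channel wins.
    post_fb_mask = (prob[0] > prob[1]).astype(np.int8)

    # FB = first sample classified as post-FBs data in each trace, i.e.,
    # the boundary between the two classes (0 if a trace is never post-FBs).
    fb_picks = post_fb_mask.argmax(axis=0)

    # Single-channel map: maximum of the two class probabilities per pixel;
    # low values near the picked FBs indicate less reliable classification.
    confidence = prob.max(axis=0)

    return fb_picks, confidence
```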

    3.Examples

    The proposed method is tested on two real land datasets. The two adjacent datasets are from two monitoring receiver lines approximately 500 m apart. The topography of the real working area is complicated, with a surface elevation difference of up to 500 m. One dataset including 2251 common-shot gathers is used for training and validation, where 1751 gathers are randomly chosen for training while the rest are used for validation. The other, containing 2273 common-shot gathers, is adopted for testing. Each common-shot gather with a sampling interval of 4 ms consists of 168 traces and is cropped to 550 time samples. For the two datasets, all the sources are the same type of explosives, and the distances among all receivers in each monitoring line are approximately the same. All common-shot gathers are preprocessed by implementing outlier or heavy noise elimination, surface wave attenuation, filtering, and normalization. For each processed gather with 92400 sampling points, the labels are made by manually picking FBs and then classifying pre-FBs background and post-FBs data.

    For our examples, 25 layers, including 18 convolution layers, 3 pooling layers, 3 unpooling layers and 1 Softmax layer, are combined into our network architecture by trial and error, as shown in Fig. 1. Except for the final convolution layer of the decoder, which has 2 convolution filters, the number of convolution filters in the other convolution layers is 64. For each convolution kernel, the size is fixed to 3 × 3 with a stride of 1 (i.e., overlapping windows). Following the convolution layers, the max-pooling operator is performed over a 2 × 2 pixel window with a stride of 2 (i.e., non-overlapping windows), and thus the resulting output through the pooling operator indicated by the green box is sub-sampled by a factor of 2 along both the time direction and the space direction.
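    To make these layer counts concrete, the PyTorch sketch below assembles an encoder-decoder with 18 convolution layers (each convolution + BN + ReLU), 3 max-pooling and 3 max-unpooling layers that reuse the pooling indices, and a final pixel-wise softmax. The grouping of three convolution layers per stage is our assumption; the paper gives only the totals.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Convolution layer as described in the text: conv + BN + ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class SegNetFB(nn.Module):
    """Sketch of a SegNet-style picker: 18 conv layers, 3 max-pooling and
    3 max-unpooling layers, and a final softmax, matching the counts in the
    text. Three conv layers per stage is an assumption."""

    def __init__(self, n_classes=2, width=64):
        super().__init__()
        # Encoder: three stages of (3 x conv_block) followed by 2x2 max-pooling
        # that also returns the pooling indices reused by the decoder.
        self.enc = nn.ModuleList([
            nn.Sequential(conv_block(1, width), conv_block(width, width), conv_block(width, width)),
            nn.Sequential(*[conv_block(width, width) for _ in range(3)]),
            nn.Sequential(*[conv_block(width, width) for _ in range(3)]),
        ])
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        # Decoder: three stages of unpooling + convolutions; the very last
        # convolution reduces the 64 feature maps to the 2 output classes.
        self.dec = nn.ModuleList([
            nn.Sequential(*[conv_block(width, width) for _ in range(3)]),
            nn.Sequential(*[conv_block(width, width) for _ in range(3)]),
            nn.Sequential(conv_block(width, width), conv_block(width, width),
                          nn.Conv2d(width, n_classes, kernel_size=3, padding=1)),
        ])

    def forward(self, x):
        indices, sizes = [], []
        for stage in self.enc:
            x = stage(x)
            sizes.append(x.size())
            x, idx = self.pool(x)
            indices.append(idx)
        for stage in self.dec:
            x = self.unpool(x, indices.pop(), output_size=sizes.pop())
            x = stage(x)
        # Pixel-wise softmax over the two classes (pre-FBs vs post-FBs).
        return torch.softmax(x, dim=1)
```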

    In all experiments, the 25-layer network is trained by using SGD with a momentum coefficient of 0.9 and a mini-batch size of 5 shot gathers. The learning rate starts from 0.01 and is reduced by a factor of 0.2 every 5 epochs. A full pass of the training algorithm over the entire training set using mini-batches is defined as an epoch. The maximum number of epochs for training is set to 200, and the target cross-entropy loss error is set to 0.001. For all experiments, the biases are initialized as zero, and the weights are initialized as noise drawn from a normal distribution with a mean of zero. The experimental machine configuration includes an Intel Core i9-9900K CPU (Central Processing Unit) and an Nvidia GeForce RTX 2080Ti GPU (Graphics Processing Unit).
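    A training-loop sketch matching these reported settings is given below (PyTorch, with hypothetical data-loader variables and checkpoint path); the early-stopping bookkeeping shown here is an illustrative assumption, not the authors' code.

```python
import torch
import torch.nn.functional as F

def train(model, train_loader, val_loader, device="cuda", max_epochs=200):
    """Sketch of the reported settings: SGD with momentum 0.9, mini-batches of
    5 gathers, learning rate 0.01 decayed by a factor of 0.2 every 5 epochs,
    up to 200 epochs, cross-entropy target of 0.001."""
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.2)

    best_val = float("inf")
    for epoch in range(max_epochs):
        model.train()
        for gathers, labels in train_loader:          # mini-batches of 5 gathers
            gathers, labels = gathers.to(device), labels.to(device)
            optimizer.zero_grad()
            probs = model(gathers)                    # two-channel softmax output
            # Pixel-wise binary cross-entropy (Eq. (5)) via negative log-likelihood.
            loss = F.nll_loss(torch.log(probs.clamp_min(1e-12)), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()

        # Monitor the validation loss to detect overfitting (early stopping).
        model.eval()
        with torch.no_grad():
            val_loss = sum(
                F.nll_loss(torch.log(model(g.to(device)).clamp_min(1e-12)),
                           l.to(device)).item()
                for g, l in val_loader) / max(1, len(val_loader))
        if val_loss < best_val:
            best_val = val_loss
            torch.save(model.state_dict(), "segnet_fb_best.pt")  # hypothetical path
        if val_loss < 1e-3:                           # reported loss target
            break
    return model
```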

    To investigate the generalization performance of the SegNet classifier and picker well trained on the training and validation sets without missing traces, i.e., with regularly and densely distributed traces, we apply it to three kinds of unseen testing data sets, each including 2273 common-shot gathers with different ratios of missing traces. For this well-trained network, the total classification accuracy of both the 1751 training gathers and the remaining 500 validation gathers is over 99%. It may be noticed that it takes about 170 s to classify the 2273 gathers without parallel computing. Fig. 2a-c shows the accuracy distributions of the 2273 testing common-shot gathers with 0% missing traces (i.e., without missing traces), with 10% randomly missing traces per gather and with 50% randomly missing traces per gather (i.e., 84 of the 168 traces are randomly selected and filled with zero-valued amplitudes) versus the source location. It can be observed from Fig. 2a that the classification accuracy of most gathers in the original 2273 testing gathers without missing traces reaches 98%, while the accuracy of only about 50 gathers is less than 98%, and their corresponding sources are generally close to the black monitoring line. As denoted in Fig. 2b and c, the network trained on the training and validation sets without missing traces yields poor predictions on the 2273 testing gathers with 10% and 50% randomly missing traces, with the accuracy generally below 80%. It is obvious that a network trained on shot gathers without missing traces, or with regularly sampled traces, cannot be generalized well to classify shot gathers with sparsely and randomly distributed traces. It can also be seen that the gathers far from the black monitoring line have lower accuracy than those close to the monitoring line. In addition, the more missing traces, the lower the accuracy.
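    The testing gathers with missing traces described above can be simulated, for example, by zero-filling a chosen fraction of traces; the NumPy sketch below (hypothetical function name) shows one way to generate the random and regular decimation patterns.

```python
import numpy as np

def zero_out_traces(gather, ratio=0.5, mode="random", rng=None):
    """Sketch: simulate missing traces by filling a chosen fraction of traces
    with zero-valued amplitudes (e.g., 84 of 168 traces for the 50% cases)."""
    rng = np.random.default_rng() if rng is None else rng
    n_traces = gather.shape[1]
    n_missing = int(round(ratio * n_traces))
    if mode == "random":
        # Randomly distributed missing traces.
        missing = rng.choice(n_traces, size=n_missing, replace=False)
    else:
        # Regularly distributed missing traces, e.g., every other trace at 50%.
        missing = np.arange(0, n_traces, max(1, int(round(1.0 / ratio))))
    decimated = gather.copy()
    decimated[:, missing] = 0.0
    return decimated, missing
```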

    In order to generalize well to classifying shot gathers with missing traces, we generate an improved well-trained SegNet classifier and picker from the training and validation sets with 50% randomly missing traces, and apply it to various testing common-shot gathers with different distributions of missing traces, including 0% missing traces, to test its performance. For the new well-trained model, the total classification accuracies of the 1751 training gathers and the remaining 500 validation gathers are 99.68% and 99.60%, respectively. Fig. 3a-c shows the accuracy distributions of the 2273 testing gathers without missing traces, with 50% randomly missing traces per gather and with 50% regularly missing traces per gather versus the source location. The testing sets of Fig. 3a and b are the same as those of Fig. 2a and c, respectively. It can be observed from Fig. 3 that the new model trained on shot gathers with missing traces has excellent generalization ability, with the accuracy of almost all gathers over 98%, even in the case of regularly missing traces. The accuracy of only several gathers with sources located near the black monitoring line is relatively low. Fig. 4 displays careful comparisons among the classification accuracies of 500 shot gathers in each testing dataset of Figs. 2a and 3a-c. For all four dataset cases, more than 95% of the gathers in each testing set of 2273 gathers have an accuracy over 98%.

    Fig. 2. The accuracy distributions of the 2273 testing common-shot gathers with 0% (a), 10% (b) and 50% (c) randomly missing traces per gather, based on the SegNet well trained on the training and validation sets without missing traces, versus the explosive source location. Colored dots denote source locations with the classification accuracy of the single gather, and all receivers are located on the black monitoring line. In (a), the hollow pentagram and black hollow circle in the lower left corner, and the blue hollow circle in the upper right corner denote the locations of the 15th shot, the 243rd shot and the 2213rd shot, respectively. The model trained on shot gathers without missing traces cannot be generalized well to classify shot gathers with missing traces.

    Fig. 3. The accuracy distributions of the 2273 testing common-shot gathers with 0% missing traces (a), with 50% randomly missing traces per gather (b) and with 50% regularly missing traces (c), based on the new SegNet well trained on the training and validation sets with 50% randomly missing traces per gather, versus the explosive source location. The well-trained model has favorable generalization ability, with the accuracy of almost all gathers over 98%.

    Fig. 4. Comparisons among the classification accuracies of the first 500 common-shot gathers, arranged from the smallest value to the largest, corresponding to the 2273 testing shot gathers in Figs. 2a and 3a-c. The characters 0%MT/0%MT, 50%MT/0%MT, 50%MT/50%MT and 50%MT/50%regular MT in the legend represent the four dataset cases of Figs. 2a and 3a-c, respectively. More than 95% of the gathers in each testing set have an accuracy over 98%.

    Fig. 5. The recall distributions of the 2273 testing common-shot gathers versus the explosive source location, corresponding to the four dataset cases of Figs. 2a and 3a-c, respectively. The recall of almost all gathers in each testing set reaches 97%.

    To focus on investigating the classification ability for post-FBs signal-dominant waveforms, we calculate the recall of the 2273 testing common-shot gathers for the four well-generalized cases of Figs. 2a and 3a-c, as shown in Fig. 5a-d. For these four dataset cases, the recall of almost all gathers reaches 97%, even in the cases of missing traces with different distributions. Compared with the three other cases, the generalization accuracy of the new SegNet classifier and picker, trained on samples with 50% randomly missing traces and applied to the 2273 testing shot gathers without missing traces, is relatively the most dependent on the geometry related to the locations of sources. The lower the recall, the higher the detection error of the signal-dominant samples. In general, wrong detections happen near sharp top FBs and weak reflection areas.

    To illustrate the effect of seismic waveform classification and first-break picking based on the new SegNet model, well trained on the training and validation sets with 50% randomly missing traces, more clearly and intuitively, we select two representative original common-shot gathers, the 243rd shot gather and the 2213rd one. These two chosen original gathers have different first-break shapes, different types of first-break waves, different distances from the source to the monitoring receiver line, different degrees of weak reflection below FBs, and different kinds of noise. Furthermore, we focus on investigating the generalization ability to gathers with randomly, regularly and several special distributions of missing traces generated from these two representative original common-shot gathers.

    Fig. 6 shows comparisons among the classification results and first-break picking results of the gathers with five distributions of missing traces made from the 243rd original shot gather (Fig. 6f). It can be clearly observed that the classification results of the gathers without missing traces (Fig. 6a), with 50% randomly missing traces (Fig. 6b), with missing traces only on the left (Fig. 6c), with missing traces only on the right (Fig. 6d) and with 50% regularly missing traces (Fig. 6e) are in good accordance with each other, even at the locations of six consecutive missing traces (Fig. 6b and d) between the 135th trace and the 140th trace. In addition, the five segmentation lines (red lateral lines) between the pre-FBs background class (light blue part) and the post-FBs signal-dominant class (dark blue part), or the corresponding picked FBs, are basically consistent and continuous along the space direction. The single-channel probability maps corresponding to Fig. 6a-e are shown in Fig. 7a-e, and local zoomed plots of the low-value probability zones of the middle 84th traces in Fig. 7a-e are shown in Fig. 8. It can be seen that the maximum probability curve at every trace approximates a pulse-like function with a small width of low-probability values, which are related to the locations of FBs, as well as the uncertainty of waveform classification and first-break picking. Fig. 9 shows the differences between the automatically picked FBs and the manually picked FBs versus the trace number. The average error between the automatically picked FBs from the 168 traces in each gather case and the manually picked ones is less than two samples, suggesting that the automatically picked FBs are close to the manually picked ones. It can be inferred from Fig. 9 that the differences of FBs do not strongly depend on the distributions and the locations of missing traces.

    Fig. 6. Comparisons among the classification results and first-break picking results of the gathers with five distributions of missing traces (a-e) generated from the 243rd original common-shot gather (f) based on the new well-trained SegNet. The representative cropped original gather in (f) contains background, industrial electrical interference (red arrows), one very weak post-FBs reflection area (red rectangle) similar to the pre-FBs background, the surface wave (blue rectangle), near-offset direct waves and far-offset refracted waves. The classification results are overlaid by the picked FBs (red lateral lines) and the seismic amplitude in (a-e), where white vertical zones denote the positions of missing traces. The classification accuracy for the five cases is over 99%. Furthermore, the picked FBs are basically consistent with each other and spatially continuous.

    Fig. 7. Comparisons among the single-channel probability maps corresponding to Fig. 6a-e. Black and white represent low- and high-value probability, respectively. Red lateral dashed lines represent the manually picked FBs from the common-shot gather of Fig. 6f, which are regarded as the reference. It can be observed that almost all referenced FBs are located in the low-value zones of the five probability maps.

    Fig. 10 shows comparisons among the classification results and first-break picking results of the gathers with five distributions of missing traces made from the other (2213rd) original common-shot gather (Fig. 10f), based on the new SegNet well trained on the training and validation sets with 50% randomly missing traces per gather. The corresponding source is far from the monitoring receiver line, as denoted by the position of the blue hollow circle in Fig. 2a. One can clearly see that the classification results of the five gathers with different distributions are good. Moreover, the corresponding picked FBs are visually consistent and spatially continuous, even at the locations of 13 consecutive missing traces (Fig. 10b and d) between the 30th trace and the 44th trace, except for the 35th trace and the 37th trace. Fig. 11a-e displays the single-channel probability maps derived by predicting the original gather and the other four generated gathers, and Fig. 12 displays their corresponding local zoomed plots of the low-value probability zones in the middle traces. It can be observed that the probability curve at each trace looks like a pulse function with several continuous low values. The differences between the automatically picked FBs and the manually picked FBs versus the trace number are shown in Fig. 13. The average error between the automatically picked FBs from the 168 traces in each gather case and the manually picked ones is not more than two samples. In contrast to the five cases derived from the 243rd original common-shot gather, the first-break picking of the five gathers with different distributions generated from the 2213rd original gather is relatively unstable and more dependent on the distributions of missing traces, as indicated in Figs. 9 and 13.

    Fig. 13. The differences between the automatically picked FBs (the red lines of Fig. 10a-e) and the manually picked FBs versus the trace number. The errors of the automatically picked FBs from the 168 traces, including the locations of missing traces, are commonly less than 4 samples, and the average errors corresponding to the five cases are 1.1429, 1.5536, 0.9345, 1.6548 and 1.7083 samples, respectively.

    To illustrate why the new well-trained SegNet classifier and picker can directly classify seismic waveforms and subsequently pick FBs at the positions of missing traces, and to further test its generalization ability, we choose another representative common-shot gather in the test set and make a special gather by replacing 21 consecutive traces of the original gather, from the 74th to the 94th trace, with zeros. The chosen representative original gather (Fig. 14c) is the 15th shot, located at the position of the hollow pentagram in Fig. 2a. As the classification results (Fig. 14a and b) of the 15th original shot gather and its corresponding modified gather (Fig. 14d) indicate, the classification result predicted by applying the well-trained SegNet model to a shot gather with 21 consecutive missing traces is similar to that of the original fully-sampled gather, and the classification accuracies for the pair of gathers are 99.78% and 99.52%, respectively. As well, the picked FBs are in suitable agreement even at the locations of missing traces. Furthermore, we extract the 64 features in the 17th convolution layer and the 2 features in the 18th convolution layer obtained by applying the new well-trained SegNet to the two given gathers of Fig. 14c and d, and implement careful comparisons, as shown in Figs. 15 and 16. It can be observed that the 64 deep-layer features automatically extracted from the gather with 21 consecutive missing traces are similar to the corresponding ones extracted from the original gather, even at the corresponding positions of the consecutive missing traces. Moreover, as the depth of the convolution layer increases from the 17th convolution layer to the 18th convolution layer, the features automatically extracted from the gather with missing traces become closer to the corresponding ones extracted from the original gather. It is noticeable that the new well-trained SegNet model can extract features with obvious boundaries between the shallow background part and the deep signal-dominant part after both the 17th and 18th convolution layers, even in the presence of 21 consecutive missing traces.

    Fig. 8. Local zoomed plots of the low-value probability zones of the middle 84th traces in Fig. 7a-e. The characters 0%MT, 50%MT, Left MT, Right MT and Reg MT in the legend represent the five cases of single-shot gathers with 0%, 50% randomly, only left, only right and regularly missing traces, as indicated in Fig. 6a-e, respectively. There are only about three continuous time samples with a classification probability less than 0.9, regardless of the distributions of missing traces. In addition, the differences among the minimum-probability locations for the five cases are obviously not more than one time sample.

    Fig. 9. The differences between the SegNet-based automatically picked FBs (the red lines of Fig. 6a-e) and the manually picked FBs versus the trace number. The errors of the automatically picked FBs from the 168 traces, including the locations of missing traces and even six consecutive missing traces, are commonly less than four time samples, and the average errors corresponding to the five cases are 0.7411, 1.7589, 1.2054, 1.3185 and 1.6756 time samples, respectively.

    4.Discussion

    Fig. 10. Comparisons among the classification results and first-break picking results of the gathers with five distributions of missing traces (a-e) generated from the 2213rd original common-shot gather (f) based on the new well-trained SegNet. The representative cropped original common-shot gather in (f) contains background, industrial electrical interference and asymmetric fluctuant direct waves, which are obviously different from the characteristics of Fig. 6f. The classification accuracy for the five cases reaches 99%. Moreover, the corresponding picked FBs are basically consistent and continuous along the space direction.

    Fig. 11. The single-channel probability maps corresponding to Fig. 10a-e. Black represents low-value probability, and red lateral dashed lines represent the manually picked FBs from the common-shot gather of Fig. 10f. It can be seen that the manually picked FBs are located in or near the low-value zones of all probability maps.

    Fig. 12. Local zoomed plots of the low-value probability zones of the middle 84th traces in Fig. 11a-e. There are about four continuous time samples with a classification probability less than 0.9. In addition, the differences among the minimum-probability locations for the five cases range from 0 to 5 time samples. In contrast to Fig. 8, there exists relatively large uncertainty in the low-value probability zones.

    We adopt a large number of preprocessed seismic common-shot gathers as the network input, and use the corresponding pre-FBs background and post-FBs data labels to supervise the network output to gain a well-trained end-to-end encoder-decoder extractor. The extractor can automatically extract features instead of relying on pre-computed ones. The automated extractor, constructed by stacking a series of artificial neurons, enables FBs to be picked in a manner analogous to how a human would pick them. Although the well-trained automated extractor is built at the cost of learning big data, it is easy to prepare abundant seismic shot gathers and the corresponding labels segmented by FBs, which can be picked manually. Nevertheless, it is important to learn representative and diverse shot gathers, especially including those with different distributions of missing traces. Under the premise that this basic condition is met, our well-trained fully-convolutional SegNet, consisting of an end-to-end encoder-decoder feature extractor and a following pixel-wise classifier, is suited to automatically and quickly picking FBs of various unseen shot gathers, even those acquired in different geometries, such as different trace intervals, different distributions of receivers, or different numbers of receivers.

    We choose data from the same source type, explosives, in our examples in order to focus on investigating the effectiveness of SegNet-based automated first-break picking directly from shot gathers with sparsely distributed traces, including randomly missing traces, consecutively missing traces and regularly missing traces. Furthermore, we attempt to account for the reason why our presented SegNet-based model can classify waveforms and pick FBs at the locations of missing traces. It is also worth investigating whether simultaneously learning a large number of seismic data from various sources, such as explosives, vibroseis or air guns, may enable the well-trained SegNet to achieve a broader generalization.

    The design of labels is important for learning a data-driven waveform classifier and first-break picker. From the viewpoint of balancing the number of labels in different classes, it is appropriate to label seismic data with one class on post-FBs data and the other on pre-FBs background, which are simply segmented by FBs. The design of the network architecture and the performance of the network learning depend on the choice of the input size. We choose various cropped 2D seismic data with a relatively large size in both the time and space dimensions, which can be roughly determined by balancing post-FBs signal-dominant samples and pre-FBs background samples as well as the computer memory, and we input them into the end-to-end SegNet architecture to implement pixel-wise classification. The SegNet architecture can be designed deeper and more flexibly to see more adjacent information, which is useful for a deep SegNet to classify waveforms and pick FBs of shot gathers with consecutive missing traces. However, the end-to-end SegNet architecture, accompanied by cropped gather inputs with a relatively large time-space size, requires more preprocessing steps, such as the removal of some local noise (e.g., outliers or 50-Hz industrial electrical interference). The space dimension of the input determines the maximum spatial extent that the network can see. The spatial size of the receptive field, which can be calculated from the number and size of the pooling layers and convolution layers, determines the real spatial range that the network can see, as illustrated in the short sketch below. The larger the receptive field, the more spatiotemporal information the network can capture. Besides directly learning spatiotemporal amplitude information, some other attributes extracted from seismic data can be considered as the input, independently or simultaneously, to learn other well-trained models for classifying waveforms and further picking FBs. Furthermore, ensemble learning for first-break picking is also worth investigating.
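    As a worked example of the receptive-field calculation mentioned above, the sketch below accumulates the receptive field of the encoder, assuming three stages of three 3 × 3 convolutions followed by 2 × 2 max-pooling (the per-stage grouping is our assumption); the decoder further enlarges the effective context.

```python
def receptive_field(layers):
    """Receptive-field calculation for a stack of (kernel_size, stride) layers."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump   # each layer widens the field by (k-1) * current jump
        jump *= s              # striding multiplies the jump between output samples
    return rf

# Assumed encoder: [3 convs (3x3, stride 1) + 1 pool (2x2, stride 2)] x 3 stages.
encoder = ([(3, 1)] * 3 + [(2, 2)]) * 3
print(receptive_field(encoder))  # 50: a 50 x 50-sample window at the deepest encoder output
```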

    We should consider the distribution of missing traces when designing the number of pooling layers, especially the number of consecutive missing traces. In contrast to convolution layers, the use of more pooling layers increases the receptive field more rapidly and helps the network see a larger range of the waveform, so more local statistics can be utilized to perform waveform classification and first-break picking. Moreover, more pooling layers along with convolution layers can achieve more translation invariance, which is useful for effectively classifying translated, rotated and deformed spatiotemporal waveforms and further picking FBs. However, the usage of more pooling layers may blur some details related to FBs. This may be one main reason why there is low-value probability of a certain width near the FB in each trace of the one-channel probability map, which is derived from the SegNet-based two-channel probability output map. This paper adopts only 2D pooling and 2D convolution filters to gradually include as much spatiotemporal neighborhood statistics as possible from the input preprocessed common-shot gathers. However, the method can be readily extended to higher dimensions to utilize more information. For instance, the 2D pooling and 2D convolutional filters can be directly replaced by 3D or higher-dimensional ones to implement waveform classification and first-break picking across multiple shots and multiple domains, or by adding a frequency or scale dimension.

    Fig. 14. Comparisons between the classification results (a-b) of the 15th original common-shot gather (c) and its corresponding modified gather (d) involving 21 consecutive missing traces, based on the new well-trained SegNet. The original cropped gather in (c) contains weak background, weak industrial electrical interference, a relatively weak post-FBs reflection area (red rectangle), a relatively weak residual surface wave (blue rectangle), near-offset direct waves and far-offset refracted waves. The pixel-wise classification of the shot gather with 21 consecutive missing traces is similar to that of the original gather without missing traces, and the picked FBs are also consistent even at the locations of missing traces.

    We do not impose any constraint based on common-sense factors related to the spatio-temporal relationship of FBs on the learning of SegNet, such as small time delays between adjacent traces and time increasing with offset, yet the picked FBs visually honor these factors, as illustrated in our examples. This suggests that the proposed well-trained 25-layer SegNet model has the ability to learn these common-sense factors from various seismic gathers with the corresponding labels. In turn, whether some common-sense factors or some of the interpreters' experience can be used as a constraint or a guide integrated into the network learning process, and what their roles are in classifying waveforms and picking FBs, is also worth studying.

    Fig. 15. Comparisons between the 64 features (a-b) in the 17th convolution layer automatically extracted by applying the new well-trained 25-layer SegNet to the two given gathers of Fig. 14c and d, as well as the histogram of the absolute difference between b and a. Although there are differences between the corresponding features near the missing traces, these corresponding features appear similar. Furthermore, there are obvious boundaries between the shallow background part and the deep signal-dominant part in each extracted feature, even in the presence of 21 consecutive missing traces.

    Fig. 16. Comparisons between the 2 features (a-b) in the 18th (or final) convolution layer automatically extracted by applying the new well-trained 25-layer SegNet to the two given gathers of Fig. 14c and d, as well as the histogram of the absolute difference between b and a. There are relatively weak differences between the corresponding features near the missing traces. In addition, there are clear boundaries between the shallow background part and the deep signal-dominant part in all extracted features.

    We derive a single-channel probability map by taking the maximum of the binary probabilities at each position in the output two-channel normalized probability map. For any unseen input shot gather, the resulting single-channel probability curve at each trace, including the locations of missing traces, is an impulse-like function with low-value probability of a certain width near the FBs. The locations of the minimum values within the low-value probability zone can be interpreted as FBs, which correspond to the locations of the segmentation lines interpreted by directly using the output two-channel normalized probability map. The single-channel probability map has the properties that continuous small probability values always appear near FBs, and that the minimum probability values are related to the locations of FBs. It can probably be inferred that the smaller the minimum probability value in the derived single-channel probability map, and the wider the range of low-probability values, the more uncertain the first-break picking. This may be adopted, or further quantified, as an additional quality-control factor to evaluate the performance of the already-trained network model from a statistical point of view, and in turn to optimize the network architecture. Furthermore, the continuously low-value probability in the derived single-channel probability map indicates at least a time range for picking FBs. It may provide a new way to combine the SegNet-based first-break picker with a certain traditional picker.

    5.Conclusions

    We present a SegNet-based waveform classification and first-break picking method. The method can pick FBs directly from shot gathers with sparsely distributed traces or missing traces. At the cost of learning various shot gathers with randomly missing traces, our well-trained fully-convolutional SegNet model can automatically extract spatio-temporal features and classify seismic waveforms at the same time. Due to the learning of samples with missing traces and the usage of several stacked pooling layers and convolution layers, the well-trained 25-layer SegNet model can automatically pick FBs of various unseen shot gathers acquired in different geometries, even when the proportion of randomly missing traces reaches 50%, 21 traces are missing consecutively, or traces are missing regularly. As our examples show, even when the proportion of randomly or regularly missing traces of each gather in the testing set reaches up to 50%, the classification accuracy of the 2273 gathers is almost all over 97%, and the average error between the SegNet-based automatically picked FBs and the manually picked ones is about two time samples. Although we do not impose any constraint based on common-sense factors related to the spatio-temporal relationship of FBs on the learning of SegNet, FBs, including those at the locations of missing traces, can be picked with good lateral continuity. Our work can be readily extended to simultaneous multi-shot or multi-domain learning and first-break picking, 3D amplitude learning and first-break picking by adding a frequency or scale dimension, as well as simultaneous multiple-attribute learning and first-break picking, which are also our future work.

    Acknowledgements

    This work was financially supported by the National Key R&D Program of China (2018YFA0702504), the Fundamental Research Funds for the Central Universities (2462019QNXZ03), the National Natural Science Foundation of China (42174152 and 41974140), and the Strategic Cooperation Technology Projects of CNPC and CUPB (ZLZX 2020-03).
