
    DSNNs: learning transfer from deep neural networks to spiking neural networks

    High Technology Letters, 2020, No. 2

    Zhang Lei (張磊), Du Zidong, Li Ling, Chen Yunji

    (*State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, P.R.China) (**Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, P.R.China) (***Cambricon Tech. Ltd, Beijing 100010, P.R.China) (****Institute of Software, Chinese Academy of Sciences, Beijing 100190, P.R.China)

    Abstract

    Key words: deep learning, spiking neural network (SNN), conversion method, spatially folded network

    0 Introduction

    Deep neural networks (DNNs) achieve state-of-the-art results on many tasks, such as image recognition[1-4], speech recognition[5-7] and natural language processing[8,9]. Current state-of-the-art DNNs usually contain many layers of highly abstracted neuron models, which imposes a heavy computational burden. To process DNNs efficiently, many customized architectures have been proposed.

    Besides DNNs, another type of neural network, originating from neuroscience, is also emerging. Spiking neural networks (SNNs) mimic the biological brain more closely and are consequently regarded as the next generation of neural networks[10,11]. The spike, used in SNNs to pass information among neurons, is considered a more hardware-efficient representation, since 1 bit is enough to represent one spike. Several dedicated hardware architectures have been proposed for SNNs[12-14]. However, the bio-inspired, spike-based neuromorphic SNNs still fail to achieve results comparable with DNNs.

    To close the performance gap between DNNs and SNNs, researchers have tried many solutions. IBM[15] showed that the structural and operational differences between neuromorphic computing and deep learning are not fundamental. ConvNets[16] applied a weight-conversion technique, and IBM adopted back propagation (BP) in training. However, these techniques have only been proven feasible with small networks on simple tasks, such as recognition of handwritten digits (MNIST[17]). As a result, the capability of SNNs remains unclear, especially on large and complex tasks.

    This work proposes a simple but effective way to construct deep spiking neural networks (DSNNs) by transferring the learned ability of DNNs to SNNs. During the process, the trained synaptic weights are converted and used in SNNs, and features of SNNs are introduced into the original DNN for further training. Evaluated on large and complex datasets (including ImageNet[18]), DSNNs achieve accuracy comparable with DNNs. Furthermore, to suit hardware design, this work proposes an enhanced SNN computing algorithm, called ‘DSNN-fold’, which also improves the accuracy of the directly converted SNN.

    The overall contributions are as follows:

    (1) An algorithm to convert DNN to SNN is proposed.

    (2) The algorithm is improved for a more hardware-friendly design.

    1 DNN vs. SNN

    In this section, the two models, DNNs and SNNs, are briefly introduced, as depicted in Fig.1. Despite sharing a layer-based architecture, SNNs differ from DNNs in neuron model, input stimuli, result readout and training method.

    Fig.1 DNN vs. SNN

    1.1 Topology

    Both DNNs and SNNs mimic the biological brain, but at different levels of abstraction. DNNs usually contain multiple layers, each with numerous neurons; inputs are passed and processed through layers with different inter-layer connections (i.e., synapses) (Fig.1(a)). Recent development of deep learning has led to increasingly deeper and larger networks, i.e., more layers and more neurons per layer. Meanwhile, connections among layers vary across different types of layers, which leads to different types of layers and neural networks, e.g., the multilayer perceptron (MLP), convolutional neural network (CNN), recurrent neural network (RNN), and long short-term memory (LSTM).

    Similarly, SNNs consist of multiple layers, but with fewer layer types than DNNs. Commonly, each neuron connects not only to all neurons in the previous layer but also to all other neurons in the current layer through an inhibition mechanism. Therefore, the state of each neuron depends on the inputs (i.e., spikes) from the previous layer and the inhibition signals from its own layer (Fig.1(b)). The inhibition mechanism, observed in biological nervous systems, causes the so-called ‘Winner-Take-All’ effect, i.e., only one neuron can fire in a short period (the inhibition period), which has been shown to achieve good results in previous work[19].

    1.2 Neuron model

    A typical neuron in DNNs receives different inputs and generates an output that passes through synapses to the following neurons, as shown in Fig.1(c). Formally, a neuron generates the output N_out as N_out = f(∑_{i∈C} g(I_i, W_ij)), where I_i is an input, W_ij is the synapse weight, C is the set of connected input neurons, and g() and f() are processing operators. g() can be an inner product, as in fully-connected and convolutional layers, or subsampling in pooling layers; f() is the activation function, typically sigmoid or ReLU.
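    The following minimal Python/NumPy sketch illustrates this neuron model for the fully-connected case, where g() is the inner product and f() is ReLU; the function name and sample values are illustrative only.

        import numpy as np

        def dnn_neuron_output(inputs, weights):
            # g(): inner product over the connected input set C (fully-connected case)
            pre_activation = float(np.dot(weights, inputs))
            # f(): ReLU activation clips negative sums to zero
            return max(0.0, pre_activation)

        # Example with 3 connected inputs and their synaptic weights
        print(dnn_neuron_output(np.array([0.5, 1.0, 0.2]), np.array([0.3, 0.6, -0.8])))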

    A neuron in an SNN continuously accumulates input spikes into its potential and fires spikes to the following neurons once its potential reaches the firing threshold; its potential is reset afterwards. Formally, within a time window [T1, T2], the potential P_out(t) of an output neuron integrates the weighted input spikes it receives.
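    A minimal sketch of the integrate-and-fire behavior described above, assuming a discrete-timestep view and a reset-to-zero policy (names are illustrative):

        def if_neuron(spike_trains, weights, threshold):
            # spike_trains: per-timestep lists of 0/1 spikes from the presynaptic neurons
            potential = 0.0
            out_spikes = []
            for spikes_t in spike_trains:
                # accumulate weighted input spikes into the membrane potential
                potential += sum(w * s for w, s in zip(weights, spikes_t))
                if potential >= threshold:
                    out_spikes.append(1)   # fire a spike to the following neurons ...
                    potential = 0.0        # ... and reset the potential
                else:
                    out_spikes.append(0)
            return out_spikes

        # Example: two presynaptic neurons over four timesteps
        print(if_neuron([[1, 0], [1, 1], [0, 1], [1, 0]], weights=[0.6, 0.5], threshold=1.0))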

    1.3 Input stimulus

    For DNNs, typical inputs, e.g., image pixels or audio features, are used directly, with or without preprocessing such as normalization and centralization. Texts are converted to numerical representations through a word-embedding process such as word2vec[20].

    Unlike DNNs, SNNs take spikes as inputs, so an encoding process that converts numeric values into spikes is required. However, SNN encoding has been a controversial topic in neuromorphic computing, with years of debate about better encoding schemes, e.g., rate coding, rank order coding, and temporal coding. Although temporal coding, which uses the precise firing time, is believed to carry more information, there is no clear experimental evidence of its superiority, and it is currently unclear how to leverage that extra information. While all these schemes are biologically plausible, research has shown that SNNs with temporal coding are less accurate than those with rate coding[13], and rate coding is also simpler for hardware. Therefore, rate coding is chosen as the coding scheme in the following sections.
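    The difference between the two families of schemes can be illustrated with a short sketch; the window length and the exact placement of spikes are illustrative assumptions, not the encoder used in this work.

        import numpy as np

        def rate_code(value, n_steps=20):
            # Rate coding: the value is carried by how many spikes occur in the window.
            n_spikes = int(round(float(np.clip(value, 0, 1)) * n_steps))
            train = np.zeros(n_steps, dtype=np.int8)
            if n_spikes > 0:
                train[np.linspace(0, n_steps - 1, n_spikes, dtype=int)] = 1
            return train

        def first_spike_code(value, n_steps=20):
            # Temporal (first-spike) coding: the value is carried by when the spike occurs.
            train = np.zeros(n_steps, dtype=np.int8)
            train[int((1.0 - float(np.clip(value, 0, 1))) * (n_steps - 1))] = 1  # larger value -> earlier spike
            return train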

    1.4 Readout

    The output layer of a DNN is used to classify or recognize the input sample. For example, each output neuron in an MLP corresponds to a label; for CNNs, the softmax function is applied to the output layer to turn output values into a probability distribution. Usually, the winner neuron has the maximum output value, and the input is labeled with its label.

    Readout in SNNs is tightly related to the network topology and training method. All readout schemes aim to find the winner output neuron(s), as in DNNs. The winner can be the neuron with the largest potential, the one firing first, or the one firing most times. Note that there can be many more output neurons in an SNN than labels[13]; the current input sample is then labeled with the label of the winner neuron or neurons. In this work, considering that the network is constructed from a DNN trained with supervised learning, 3 readout strategies that might fit the transferred networks are explored: FS (first spike), MS (maximum spike times) and MP (maximum accumulated potential). In this exploration, FS fails to achieve results as accurate as the other two strategies, while MS and MP perform well on simple tasks such as MNIST or simple networks such as LeNet-5. However, MS fails on larger or deeper topologies, where its accuracy drops drastically. Therefore, the MP readout, which shows consistently good performance, is the first choice.
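    A minimal sketch of the two usable readouts, assuming the output layer's accumulated potentials and spike counts are available at the end of the presentation window (names are illustrative):

        import numpy as np

        def readout_mp(output_potentials):
            # MP readout: pick the output neuron with the largest accumulated potential
            return int(np.argmax(output_potentials))

        def readout_ms(output_spike_counts):
            # MS readout: pick the output neuron that fired most often
            # (ties are common in large networks, which is why MP is preferred here)
            return int(np.argmax(output_spike_counts))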

    1.5 Training

    Training is essential and crucial to DNNs, and several training methods have been proposed. Among them, BP, a supervised learning algorithm, has proven to be the most effective. During training, errors between actual outputs and desired outputs are back-propagated toward the input layer to adjust the network parameters gradually.

    SNN training techniques are far different from those of DNNs. Most SNNs adopt neuromorphic learning models from biology/neuroscience. For example, the well-known STDP (spike-timing-dependent plasticity) mechanism, an unsupervised learning algorithm, achieves accuracy similar to a two-layer MLP on the MNIST dataset[21]. The learning principle of STDP is to detect causality between input and output spikes (i.e., presynaptic and postsynaptic). If a neuron fires soon after receiving an input spike from a given synapse, that synapse likely played an important role in the firing and is reinforced by long-term potentiation (LTP). Conversely, if a neuron fires long after receiving an input spike, or shortly before receiving it, the corresponding synapse is depressed by long-term depression (LTD). Additionally, each neuron adjusts its potential threshold through a homeostasis mechanism to keep its firing rate reasonable, so that all neurons are forced to participate with similar activity. Recently, researchers have begun to explore supervised learning with backward propagation for SNNs, but none of these methods achieves results comparable to BP in DNNs, especially on tasks with larger problem sizes.
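    A minimal sketch of a pair-based STDP update implementing the LTP/LTD rule described above; the learning rates, time constant and weight bounds are illustrative assumptions.

        import math

        def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
            # Pair-based STDP: potentiate (LTP) when the presynaptic spike precedes the
            # postsynaptic one, depress (LTD) otherwise; magnitude decays with the delay.
            dt = t_post - t_pre
            if dt > 0:
                w += a_plus * math.exp(-dt / tau)    # pre before post -> causal -> LTP
            else:
                w -= a_minus * math.exp(dt / tau)    # post before (or at) pre -> LTD
            return max(0.0, min(1.0, w))             # keep the weight within [0, 1]

        print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # reinforced
        print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # depressed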

    2 Constructing DSNN

    In this section, a construction procedure that transfers the learned ability of DNNs to SNNs is proposed. This work focuses on CNNs. As shown in Fig.2, the DSNN construction workflow is divided into 2 stages: from CNN to SNN and from SNN to CNN. In the former stage, the DSNN is constructed with weights and topology directly converted from the CNN; in the latter stage, SNN features are introduced into the original CNN, which is modified for further training. The final DSNN is constructed with the retrained weights.

    Fig.2 Flow of DSNN construction

    2.1 Intrinsic consistency between DNNs and SNNs

    The intrinsic consistency between DNNs and SNNs reveals the possibility of transferring the learned ability of DNNs to SNNs. Despite the differences in neuron models and training algorithms, regarding inference, a DNN can be viewed as a simplified SNN with the timing information removed. Given an SNN and a DNN with the same topology and considering the formulas in Section 1, the SNN essentially converts the original DNN inputs from floating-point or wide fixed-point numbers into low-width integer spike counts once the time window is removed. The remaining question is the accuracy loss caused by that conversion. Previous work shows that spike encoding currently performs worse. However, recent work on low bit-width data representation has been extended to binary neural networks[22-24] that use only 1 bit per value. This suggests that SNNs with rate encoding may not suffer accuracy loss from a moderate discretization of DNN inputs.

    In addition, ReLU, the most popular activation function in deep learning[25,26], may help bridge the gap between DNNs and SNNs. ReLU eliminates negative neuron outputs and preserves the linearity of positive outputs. This behavior is intrinsically consistent with the firing mechanism (IF model) in SNNs, where a neuron fires only when its potential (always ≥ 0) exceeds the threshold. This indicates that an integrate-and-fire (IF) neuron[27] is, to some degree, equivalent to an artificial neuron followed by ReLU.

    2.2 From CNN to SNN

    Topology  To transfer the learned ability, multiple layers are needed in the SNN to reproduce the functions of the different layers in the CNN. Intuitively and directly, this work constructs a new SNN topology with SNN-CONV, SNN-POOL, and SNN-FC layers corresponding to the convolutional (CONV), pooling (POOL), and fully-connected (FC) layers, as shown in Fig.3. In other words, the SNN retains the connections and weights of the trained CNN during the transfer. In particular, for layers without weights such as POOL, SNN-POOL is constructed with fixed weights 1/Size(Kernel), where Size(Kernel) is the number of presynaptic neurons in the kernel.

    Fig.3 DSNN construction from LeNet-5
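    A minimal sketch of this layer-wise conversion; the dictionary-based layer description is an illustrative stand-in for a real framework's layer objects.

        def convert_layer(layer):
            # layer: e.g. {'type': 'CONV', 'weights': ...} or
            #             {'type': 'POOL', 'kernel_h': 2, 'kernel_w': 2}
            if layer['type'] in ('CONV', 'FC'):
                # SNN-CONV / SNN-FC reuse the trained CNN weights unchanged
                return {'type': 'SNN-' + layer['type'], 'weights': layer['weights']}
            if layer['type'] == 'POOL':
                # SNN-POOL: fixed weight 1/Size(Kernel) per presynaptic neuron in the kernel
                k = layer['kernel_h'] * layer['kernel_w']
                return {'type': 'SNN-POOL', 'weights': [1.0 / k] * k}
            raise ValueError('unsupported layer type: %s' % layer['type'])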

    Input  This work explores 2 commonly used methods, uniform coding[12] and Poisson coding[28], to encode CNN input values into spike trains for the SNN. With uniform coding, an input neuron fires periodically at a rate proportional to the input value. With Poisson coding, an input neuron fires spikes following a Poisson process whose time constant is inversely proportional to the input value. Additionally, note that the centralization and normalization techniques used in DNNs accelerate training convergence but inevitably introduce negative input values. Since ordinary input spikes cannot decrease neuron potentials, a ‘negative spike’ is introduced in the converted SNN model.

    For an input neuron firing a ‘negative spike’, the receiving neurons integrate it in the same way as positive spikes but decrease their potentials.
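    A minimal sketch of the two input encodings and of negative spikes, with the maximum firing rate, window length and random seed as illustrative assumptions; negative inputs simply produce spike trains of opposite polarity, which receiving neurons integrate with a negative sign.

        import numpy as np

        def uniform_encode(x, n_steps=100, max_rate=0.5):
            # Uniform coding: fire periodically at a rate proportional to |x|;
            # the sign of x gives the spike polarity (negative spikes for negative inputs).
            rate = min(abs(x), 1.0) * max_rate
            train = np.zeros(n_steps, dtype=np.int8)
            if rate > 0:
                period = 1.0 / rate
                t = period
                while t <= n_steps:
                    train[int(t) - 1] = 1 if x >= 0 else -1
                    t += period
            return train

        def poisson_encode(x, n_steps=100, max_rate=0.5, rng=np.random.default_rng(0)):
            # Poisson coding: each timestep fires independently with probability proportional to |x|.
            p = min(abs(x), 1.0) * max_rate
            spikes = (rng.random(n_steps) < p).astype(np.int8)
            return spikes if x >= 0 else -spikes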

    Parameters  The converted SNN needs 2 types of parameters: the synapse weights and the firing threshold of each neuron. The former are obtained directly from the fully trained CNN produced in the from-SNN-to-CNN stage.

    For the latter, previous methods such as model-based normalization and data-based normalization[16] work only on simple and small datasets/networks, such as MNIST/LeNet-5, but fail on larger datasets and complex networks, such as ImageNet/AlexNet. The model-based method requires a large spike time window and leads to long computation latency in the SNN. The data-based method is worse, since it needs to propagate the entire training set through the whole network and store all the activations, which are then used to compute scaling factors. Instead, this work proposes a greedy-search-based method to decide the firing thresholds, as shown in Algorithm 1, which makes a better trade-off between accuracy and efficiency. Briefly, first find the maximum possible output M_i of each layer based on the current weight model (in Algorithm 1, M_i = input_sum and input_wt is a synapse weight). The threshold of each layer is given by σ × M_i, where σ is a constant to be decided. Search over σ in the set {1, 0.1, 0.01, …} until a satisfactory result is obtained. To approach the optimal thresholds, a greedy search on nearby threshold values is then performed.

    Algorithm 1: Threshold setting algorithm
        for layer in layers do
            max_pos_input = 0
            for neuron in layer.neurons do
                input_sum = 0
                for input_wt in neuron.input_wts do
                    input_sum += max(0, input_wt)
                end for
                max_pos_input = max(max_pos_input, input_sum)
            end for
            layer.threshold = σ × max_pos_input
        end for
        Search over σ in the set {1, 0.1, 0.01, …} until a satisfactory result is obtained.
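    A runnable Python rendering of Algorithm 1, assuming each layer is described by the list of input-weight vectors of its neurons (the data layout is illustrative):

        def set_thresholds(layers, sigma):
            # Per-layer threshold = sigma * maximum possible positive input of the layer
            for layer in layers:
                max_pos_input = 0.0
                for input_weights in layer['neurons']:
                    # only positive weights can raise the potential, so sum those
                    input_sum = sum(max(0.0, w) for w in input_weights)
                    max_pos_input = max(max_pos_input, input_sum)
                layer['threshold'] = sigma * max_pos_input
            return layers

        # Search sigma over {1, 0.1, 0.01, ...} (and nearby values) until accuracy is satisfactory.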

    2.3 From SNN to DNN

    After the first stage, features from the converted SNN are introduced into the original CNN model for further adjustment. The adjusted CNN is then fine-tuned to obtain parameters that better preserve accuracy in the SNN.

    ReLU activations  In the CNN, all CONV and FC layers are made to use ReLU as the activation function, in order to eliminate negative neuron outputs (which could only be transferred as ‘negative spikes’ in the SNN). There is no need to add ReLU after POOL layers, since neither MAX-POOL nor Average-POOL changes the polarity of input spikes. Fortunately, most mainstream CNNs already use ReLU as the activation function, since it yields better accuracy.

    Average pooling  Regarding the pooling layers in the CNN, this work changes them to average pooling (AVG-POOL), as it is easier to simulate with spikes. Previous work has also demonstrated that the choice between MAX-POOL and AVG-POOL does not have a significant impact on network accuracy[29].

    Bias  No suitable method has been found to accurately simulate bias in SNNs.

    The adjusted CNN in this stage is fully retrained to obtain new weights. Together with the SNN architecture from the first stage, a powerful DSNN is constructed.

    The performance of the DSNNs is reported in Section 4.

    3 Spatially folded DSNN

    Considering the contradiction between limited hardware resources and the unbounded size of networks, architects have to design architectures flexible enough to be reused over time, i.e., a time-multiplexed approach. In other words, the algorithm should compute different pieces of the network at different times; that is, the network should be folded spatially. Time-independent CNNs can be divided easily to fit a small hardware footprint[30]. However, this spatially-folded property does not hold for previous SNNs, including the directly converted DSNNs, because the computation of each neuron depends strongly on the firing times of its presynaptic neurons. To handle this, previously proposed architectures usually use an expanded computation that preserves the time causality.

    This work proposes an algorithm to further construct ‘DSNN-fold’ for hardware benefit while maintaining accuracy. The key feature of DSNN-fold is a split, two-phase computation, described in Fig.4 and Fig.5. In the first phase, postsynaptic neurons accumulate into their potentials only the spikes from presynaptic neurons that emit negative spikes. Since negative spikes only reduce the neuron potentials, postsynaptic neurons do not fire in this phase. In the second phase, positive spikes are delivered to postsynaptic neurons. Other parts, such as input encoding, readout and the threshold policy, are unchanged. The 2 phases are independent and do not affect the number of spikes, so in DSNN-fold the computation can be divided into pieces.

    Fig.4 The first phase of DSNN-fold

    Fig.5 The second phase of DSNN-fold
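    A minimal sketch of the two-phase computation for a single postsynaptic neuron; the signed spike-count representation and the reset-to-zero policy are illustrative assumptions.

        def two_phase_neuron(pre_spikes, weights, threshold):
            # pre_spikes[i]: signed spike count of presynaptic neuron i
            # (negative counts stand for 'negative spikes'); weights[i]: synapse weight
            potential = 0.0
            # Phase 1: deliver only the negative spikes; per the DSNN-fold scheme,
            # the neuron cannot fire in this phase.
            for s, w in zip(pre_spikes, weights):
                if s < 0:
                    potential += w * s
            # Phase 2: deliver the positive spikes; fire whenever the threshold is
            # reached and reset the potential.
            out_spikes = 0
            for s, w in zip(pre_spikes, weights):
                for _ in range(max(0, s)):
                    potential += w
                    if potential >= threshold:
                        out_spikes += 1
                        potential = 0.0
            return out_spikes

    Because both phases depend only on spike counts, the result no longer depends on the arrival order of spikes, which is what allows the computation to be split into independent pieces.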

    With the DSNN-fold method, the spatio-temporal coupling of the entire SNN is removed, so network segments of any size can be mapped onto hardware. As shown in Fig.6, the computation of an SNN is split into operations of independent layers. The operation of each layer is divided into 2 phases, and within each phase the polarity of the influence of the operations on the output neurons is fixed. Therefore, the computations in each phase can be split into several fragments, each of which can be easily mapped to any hardware design.

    Fig.6 Folded SNN

    Interestingly, the accuracy of DSNN-fold is actually slightly higher than that of DSNN, mainly because DSNN-fold eliminates accidental firing caused by the random arrival order of input spikes from the previous layer.

    Determining the firing thresholds also becomes much easier. In DSNN, the threshold is sensitive: a neuron fires more times than expected if positive spikes arrive first. In DSNN-fold, the final numbers of spikes depend only on the inputs, regardless of their arrival order.

    Additionally, maximum pooling is also feasible in DSNN-fold, as it can be achieved by selecting the neurons with the maximum number of spikes and inhibiting the other neurons from propagating to the next layer.
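    A minimal sketch of this spike-count-based MAX-POOL, assuming the spike counts of the neurons in a pooling window are available (names are illustrative):

        def max_pool_spikes(spike_counts):
            # propagate only the spikes of the neuron that fired most within the
            # pooling window; inhibit the rest
            winner = max(range(len(spike_counts)), key=lambda i: spike_counts[i])
            return [c if i == winner else 0 for i, c in enumerate(spike_counts)]

        print(max_pool_spikes([3, 7, 2, 5]))  # -> [0, 7, 0, 0]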

    4 Evaluation

    4.1 Methodology

    In this work, 4 representative CNNs are selected as benchmarks and implemented with Caffe[31]: LeNet-5[17], caffe-cifar10-quick[2], AlexNet[1] and VGG-16[3], as shown in Table 1. The 4 CNNs are designed for 3 different datasets: LeNet-5 for MNIST, caffe-cifar10-quick for CIFAR-10, and AlexNet and VGG-16 for ImageNet. MNIST consists of 60 000 grayscale images (28×28) of handwritten digits (0-9) for training and 10 000 digits for testing. CIFAR-10 consists of 60 k color images (32×32) in 10 classes. ImageNet ILSVRC-2012 includes high-resolution (224×224) images in 1 000 classes and is split into 3 sets: training (1.3 M images), validation (50 k images), and testing (100 k images).

    The classification performance is evaluated using 2 measures: the top-1 error and top-5 error. The former reflects the error rate of the classification and the latter is often used as the criterion for final evaluation.

    Table 1 Network depth comparison

    4.2 Accuracy

    Table 2 compares the accuracies achieved by the CNN, the adjusted CNN (with the adjustments of the SNN-to-CNN stage), the DSNN, and the DSNN-fold. The adjusted CNN causes only trivial accuracy loss (0.01% to 2.42%) compared with the CNN; even for the deepest network, VGG-16, the loss is only 2.42%. This shows that CNN training can absorb changes such as removing bias and max-pooling when accuracy is the only consideration and other factors such as convergence speed are ignored.

    For the small networks on MNIST and CIFAR-10, DSNN-fold achieves results comparable with the adjusted CNN, with accuracy decreases of 0.1% and 0.56%, respectively. For the large-scale networks, AlexNet and VGG-16, the increases in top-1 and top-5 error are also kept within a reasonable range (1.03% and 1.838% for AlexNet, 3.42% and 2.09% for VGG-16). Compared with previous work on converting CNNs to SNNs, these results greatly improve the accuracy achieved by SNNs.

    Table 2 Accuracy results

    As the number of network layers increases, the accuracy loss of the SNN grows slowly due to parameter settings. Two key parameters are the image presentation time and the maximum spike frequency. Consistent with the original SNN, the maximum firing frequency is limited to no more than 100 Hz and the image presentation time to less than 500 ms in this work. Under these parameters the SNN can be more efficient in practical tasks, but the limitations can also lead to poor simulation of the output behaviors of CNN neurons; this problem is addressed by the DSNN-fold algorithm. Another crucial parameter is the firing threshold. To reduce the complexity of the SNN, the same threshold value is used for all neurons in a layer, although thresholds could be set independently. This simplified strategy reduces the workload of threshold setting, at the cost of the higher accuracy that could be obtained by setting an independent threshold for each neuron.

    Compared with DSNN, DSNN-fold achieves better accuracy, e.g., 88.39% for VGG-16, slightly higher than the accuracy achieved by DSNN. In the original SNN, positive and negative spikes are interleaved in time, which causes unreasonable firing behaviors of postsynaptic neurons. In DSNN-fold, such behaviors are avoided because negative spikes are processed before positive spikes.

    The pre- and post-conversion accuracies, together with the accuracy of previous SNNs, are presented for typical networks in Fig.7. From left to right, the complexity of the network and the difficulty of the recognition task gradually increase. Although the performance of the DNN, the previous SNN and the proposed method is very similar on the simple tasks, the stability of the proposed method is clearly better than that of the previous SNNs, and on complex tasks its improvement over the previous SNN algorithms is significant. Considering that the best previous SNN results on ImageNet were 51.8% (top-1) and 81.63% (top-5)[32,33], this work improves the accuracy of SNNs on ImageNet by up to 6.76%. It is clear that the proposed SNN is able to achieve practical results on complex recognition tasks.

    Fig.7 Comparison of accuracy results among typical networks

    4.3 Maximum spikes vs. maximum potential

    This work selects the maximum potential (MP) strategy over the maximum spikes (MS) strategy as the readout strategy because of its ability to support large-scale networks. The 2 strategies are evaluated on the benchmarks in Fig.8. They achieve similar performance on small datasets and networks. However, on large datasets and networks the MS strategy performs poorly, as many neurons in the last layer produce the same, maximal number of spikes, which severely hinders the choice of the output label.

    Fig.8 Comparison between 2 readout strategies

    4.4 Robustness

    The performance of the 2 encoding methods (uniform and Poisson coding) is compared in Fig.9. Both methods achieve satisfactory performance under the conversion method. Poisson coding adds randomness to the input stimulus, showing that the converted SNN remains effective under unstable input environments. However, since Poisson encoding is statistically random and increases the computational complexity, it is not recommended for algorithms or hardware designs.

    Fig.9 Comparison between 2 coding schemes

    5 Discussion

    Compared with traditional CNNs, the major advantage of DSNN is that it significantly reduces hardware storage and computation overhead. On the one hand, DSNN converts wide floating-point numbers into spikes with a much smaller data width, thereby reducing storage overhead. On the other hand, DSNN decomposes the dot-product operations of the CNN into additions, which significantly reduces computational power consumption and area in hardware.

    Compared with the similar binary neural network (BNN)[23], the overhead reduction of the SNN is clearly inferior, because BNN operates on 1-bit neurons and weights and needs only one add operation to account for the effect of one input neuron on an output neuron. Although DSNN is weaker than BNN in this respect, it achieves better accuracy; notably, BNN completely fails on ImageNet. In summary, DSNN is well suited to scenarios that require high accuracy at low cost.

    The practice of using ReLU activations to avoid negative neuron outputs has appeared in many articles, but a reasonable treatment of negative weights is still lacking. It is generally accepted that simulating negative weights with biologically plausible inhibitory mechanisms requires many more SNN neurons, so conversion techniques that turn a CNN into a biological SNN are still worth exploring. If SNNs are to be considered in hardware design, the quantization of weights and neuron potentials is also critical, which requires the SNN to remain accurate with low-precision weights or neuron potentials, such as half-precision floating point. The latest CNN techniques such as sparsity and binarization also pose challenges to SNN accuracy, and it remains unknown whether SNNs can successfully adopt them.

    Beyond the classic network algorithms above, DNNs combined with the latest generative models are still making breakthroughs in multiple application scenarios[33-36]. How SNNs can support new network technologies such as GANs is still worth studying.

    6 Conclusion

    This work proposes an effective way to construct deep spiking neural networks through ‘learning transfer’ from DNNs, which makes it possible to build a high-accuracy SNN without complicated training procedures. The resulting SNN matches the accuracy of CNNs on complex tasks, which is a huge improvement over previous SNNs. This work also improves the computing algorithm of the transferred SNN to obtain a spatially-folded version (DSNN-fold). DSNN-fold proves effective in both accuracy and computation, and can serve as a good reference for future hardware designs.
