
    LCF: A Deep Learning-Based Lightweight CSI Feedback Scheme for MIMO Networks

    Computers, Materials & Continua, 2022, Issue 6

    Kyu-haeng Lee

    Dankook University, Yongin-si, Gyeonggi-do, 16890, Korea

    Abstract: Recently, as deep learning technologies have received much attention for their great potential in extracting the principal components of data, there have been many efforts to apply them to the Channel State Information (CSI) feedback overhead problem, which can significantly limit Multi-Input Multi-Output (MIMO) beamforming gains. Unfortunately, since most compression models can quickly become outdated due to channel variation, timely model updates are essential for reflecting the current channel conditions, resulting in frequent additional transmissions for model sharing between transceivers. In particular, the heavy network models employed by most previous studies to achieve high compression gains exacerbate the impact of this overhead, eventually cancelling out the benefits of deep learning-based CSI compression. To address these issues, in this paper, we propose Lightweight CSI Feedback (LCF), a new lightweight CSI feedback scheme. LCF fully utilizes an autoregressive Long Short-Term Memory (LSTM) model to generate CSI predictions and uses them to train the autoencoder, so that the compression model can work effectively even in highly dynamic wireless channels. In addition, 3D convolutional layers are directly adopted in the autoencoder to capture diverse types of channel correlations in three dimensions. Extensive experiments show that LCF achieves a lower CSI compression error in terms of the Mean Squared Error (MSE), using only about 10% of the overhead of existing approaches.

    Keywords: CSI; MIMO; autoencoder

    1 Introduction

    Wireless communication systems have significantly benefited from utilizing Channel State Information (CSI) at the transmitter. As one indicator of CSI, the Signal-to-Interference-plus-Noise Ratio (SINR) has been used to enable intelligent transmission functionalities such as dynamic data rate adaptation, admission control, and load balancing since the early days of wireless communication. With the advent of the Multi-Input Multi-Output (MIMO) method, which has now become a core underlying technology in most current systems, such as 5G and Wi-Fi [1,2], the importance of CSI at the transmitter has been highlighted, since proper MIMO beamforming weights can be calculated only through CSI values that accurately reflect the attenuation of the actual channel between a transmitter and a receiver. For this reason, both cellular and Wi-Fi systems already operate their own CSI feedback protocols to allow the transmitter to acquire channel information a priori.

    A CSI feedback process is essential for modern communication systems, yet it faces a critical overhead issue that could greatly limit the potential of MIMO beamforming gains. The amount of CSI that needs to be sent to the transmitter is basically proportional to the number of transmitting and receiving antennas; in Orthogonal Frequency Division Multiplexing (OFDM) systems, the number of subchannels also contributes to an increase in the feedback size, since CSI for every subchannel is required for OFDM transmission. Moreover, for reliable CSI feedback, CSI is typically transmitted at low data rates (e.g., 6.5 Mbps over Wi-Fi), which further exacerbates the impact of the overhead. According to a previous study, the overhead can reach up to 25× the data transmission for a 4×4 160 MHz Wi-Fi MIMO channel [3], and such substantial overhead will not only limit the network capacity but also prevent the realization of advanced MIMO functionalities, such as massive MIMO, distributed MIMO, and optimal user selection [4–7].

    Thus far, numerous schemes have been proposed to address the CSI feedback overhead problem. One widely accepted idea is to exploit the diverse types of channel correlation that can be readily observed in the temporal, frequency, and spatial domains. The channel coherence time has been used to eliminate unnecessary CSI feedback transmissions in many studies [3,8–10], and similarly, adaptive subcarrier grouping has been employed to reduce the feedback size in OFDM systems [3,10]. A rich body of literature focuses on utilizing the spatial channel correlation of multi-antenna communication for the same purpose [10–12]. Recently, as deep learning technologies have received attention for their superior ability to extract the principal components of data, there have been many efforts to apply them to CSI compression [13–18] and estimation [19–23]. In particular, autoencoders are commonly employed in this field: a receiver compresses CSI data with the encoder of an autoencoder, and the transmitter reconstructs the original CSI using the decoder. A novel CSI compression scheme, CSINet [13], is built on a convolutional autoencoder, where the authors regard the CSI compression problem as a typical 2D image compression task. Along this line, numerous variants of CSINet have been developed for various purposes [14–17].

    Although the aforementioned approaches show that deep learning can be a very effective tool for CSI compression, several critical issues remain to be solved in terms of how the transceivers can practically share the models. Neural network-based CSI compression schemes are fundamentally premised on sharing a model between a transmitter and a receiver, which means that some transmissions for this model sharing, and their accompanying cost, are unavoidable. In this paper, we refer to this as the model sharing overhead. Unfortunately, this overhead has not been thoroughly taken into account in most existing studies, and in many cases, it is assumed that the transceivers already share a model or that model sharing will rarely happen. However, as we will see later, model sharing can occur quite often in practice, since the model cannot guarantee a high degree of generalization to wireless channel data. If the model cannot properly cope with CSI values that it has not experienced during training, then the compression and recovery will fail, which leads to an inevitable process of model re-training and sharing. Of course, in some channel environments where a clear pattern is found, as shown in Fig. 1a, the overhead problem may not be serious. However, this situation cannot always be guaranteed; actual channels may look more like the one in Fig. 1b. In addition, due to the strong randomness of wireless channel state changes, simply increasing the amount of training data does not yield a noticeable generalization enhancement. Rather, it is more important to use a proper set of training data that reflects the pattern and tendency of the current channel status well, and for this, an appropriate channel status prediction can be of great help.

    Figure 1: Examples of channel coefficient changes over time for two scenarios. Only the real parts of the complex channel coefficients of the first path between the first transmitting antenna and three receiving antennas are displayed. (a) Stable channel case (b) Dynamic channel case

    As mentioned earlier, most of the previous approaches focus only on making the model work at higher compression ratios, and thus they prefer deep and wide networks in their designs. For example, CSINet [13] uses five convolutional layers and two fully connected layers, and it successfully compresses CSI with a high compression ratio of up to 1/64, which clearly outperforms conventional compressed sensing-based approaches [24,25]. However, from the model sharing perspective, the model is still too big to share; roughly calculated, for an 8×2 MIMO channel with 8 paths, it needs more than 1,000 decoder parameters in total, which actually makes the model larger than the original CSI (i.e., 256 = 8×8×2×2, where the last number denotes the real and imaginary parts of the complex channel coefficients). In this case, model sharing is practically infeasible, since it cancels out the benefits of compression.

    In order to overcome these limitations, in this paper, we propose a lightweight CSI feedback scheme (LCF). Similar to recent approaches, LCF exploits deep neural networks to achieve better CSI compression, but we focus more on ensuring that the model does not impose a substantial burden on the network when being shared. LCF mainly consists of two parts: CSI prediction and CSI compression. First, for CSI prediction, LCF employs a Long Short-Term Memory (LSTM) structure to infer future CSI values, which are in turn used to train the CSI compression model. In particular, to generate multiple future CSI predictions effectively, we apply an autoregressive approach to our model. The actual channel compression and reconstruction is conducted by a convolutional autoencoder, where 3D convolution layers are adopted to capture channel correlations in the three dimensions of the transmitting antennas, receiving antennas, and path delays. The resulting compression model appears to be simple compared to recent proposals; however, we will show that this lightweight network structure is still sufficient for achieving high CSI compression performance, with a much cheaper model sharing overhead.

    The proposed CSI feedback scheme is developed and evaluated in the TensorFlow [26] deep learning framework. In order to investigate the performance of LCF in various channel environments, we simulate channel coefficients using the WINNER II channel model [27,28]. We provide micro-benchmarks that evaluate the performance of each of the CSI prediction and compression processes, and we compare the overall performance of LCF with those of AFC [3] and CSINet [13] in terms of CSI recovery accuracy and model sharing overhead. Through extensive experiments, we show that LCF obtains more stable and better CSI compression performance in terms of the mean squared error (MSE), with as little as 10% of the model sharing overhead of the existing approaches. We summarize the main contributions of this paper as follows:

    1. We propose a novel deep learning-based CSI feedback scheme, LCF, which effectively reduces the CSI feedback overhead by using CSI prediction based on autoregressive LSTM and CSI compression with a convolutional autoencoder. We propose the use of CSI predictions to train the autoencoder, so that the compression model can be valid even in highly dynamic wireless channels.

    2. We design a CSI feedback algorithm that enables the transmitter and the receiver to effectively share the compression model, an aspect that has not been investigated well in previous studies. The proposed algorithm can be applied to existing deep learning-based CSI compression approaches as well.

    3. The performance of LCF is evaluated for various wireless channel scenarios using the WINNER II model, and it is also compared with those of other approaches through extensive experiments. LCF shows more stable and better CSI compression performance, using only about 10% of the overhead of existing approaches.

    The rest of this paper is organized as follows. In Section 2, we review previous works related to this paper. In Section 3, we provide the preliminaries of this work, and Section 4 describes LCF in detail. Section 5 evaluates the performance of the proposed scheme, and we conclude this paper in Section 6.

    2 Related Work

    Numerous schemes have been proposed to address the CSI feedback overhead problem using diverse types of channel correlations. The channel coherence time, during which the channel state remains highly correlated, has been used as a key metric for eliminating unnecessary CSI feedback transmissions in many studies [3,8–10,12]. Huang et al. [8] analyze the effect of time-domain compression based on a theoretical model of channel correlation over time. Sun et al. [9] simulate 802.11n single-user MIMO (SU-MIMO) performance in time-varying and frequency-selective channel conditions. AFC [3] computes the expected SINR by comparing the previous and current CSI values and then uses it to determine whether to skip a CSI feedback transmission.

    Similar ideas can be applied to compressing frequency-domain CSI values. Since in OFDM systems the channel estimation must be performed on each subcarrier, appropriate subcarrier grouping can reduce the feedback size significantly. In MIMO systems, spatial correlation can also be used for CSI compression. Gao et al. [10] design a channel estimation scheme for a MIMO-OFDM channel using both temporal and spatial correlations. Ozdemir et al. [11] analyze the parameters affecting spatial correlation and its effect on MIMO systems, and Karabulut et al. [12] investigate the spatial and temporal channel characteristics of 5G channel models, considering various user mobility scenarios. These schemes can be further improved with proper quantization schemes that encode the original CSI data with a smaller number of bits. AFC [3] employs an adaptive quantization scheme on top of its integrated time- and frequency-domain compression, and CQNET [14] is designed to optimize codeword quantization using deep learning for massive MIMO wireless transceivers. Among other things, quantization is already used for codebook-based CSI reporting in current cellular and Wi-Fi systems [1,2].

    Recently, as deep learning technologies have received attention for their powerful performance in extracting the principal components of data, there have been many efforts to use this capability for CSI compression [13–18] and estimation [19–23]. The autoencoder model is widely used in this field since it best fits the problem context. A novel CSI compression scheme, CSINet [13], uses a convolutional autoencoder to solve the CSI compression problem by turning it into a typical 2D image compression problem. Along this line, numerous variants of CSINet have been developed [14–18]. RecCsiNet [15] and CSINET-LSTM [16] incorporate LSTM structures into the existing autoencoder model to benefit from the temporal and frequency correlations of wireless channels. The authors of PRVNet [17] employ a variational autoencoder to create generative models for wireless channels. In DualNet [18], channel reciprocity is utilized for CSI feedback in FDD scenarios. Most of these approaches validate the feasibility of deep learning as an effective tool for CSI compression and feedback; however, several practical open issues remain with respect to model sharing and generalization, which will be discussed in the following section.

    3 Preliminaries

    In this section, we describe the channel model and propagation scenarios used in this paper, and we explain the model sharing overhead problem, which motivates this work, in greater detail.

    3.1 Channel Model and Propagation Scenarios

    We consider an SU-MIMO communication scenario in which a receiver equipped with Nr antennas feeds the estimated CSI back to its transmitter, which is equipped with Nt antennas. Uniform Linear Array (ULA) antennas with 2 cm spacing are assumed for both the transmitter and the receiver. For simplicity, moving network scenarios are not considered. In order to simulate channel coefficients for diverse channel environments, we adopt the WINNER II channel model [27–31], which has been widely used in wireless communication research; it was recommended as a baseline for measuring radio communication performance by the ITU-R (International Telecommunication Union - Radiocommunication Sector) [29,30]. According to this model, the channel coefficients are generated based on a Clustered Delay Line (CDL) model (Fig. 2), where the propagation channel is described as being composed of a number of separate clusters with different rays, and each cluster has a number of multipath components that have the same delay values but differ in the Angle-of-Departure (AoD) and Angle-of-Arrival (AoA).

    In this paper, we consider two different propagation scenarios, namely the "stable" and "dynamic" scenarios, which model indoor office environments and bad urban macro-cells, respectively. Here, the former, as the name suggests, has a smaller channel status variation than the latter. Tab. 1 shows the basic statistics of the two channels. We use MATLAB to gather channel coefficient data for each case, and CSI data is sampled every 2 ms at a center frequency of 5.25 GHz. Note that MATLAB provides a toolbox for working with the WINNER model [27], which allows us to freely customize the network configuration, such as the sampling rate, center frequency, number of base stations and mobile stations, and their geometry and location information. In particular, since the main channel parameters for the various radio propagation scenarios defined by the WINNER model are already configured, the corresponding channel coefficient values can easily be obtained. For each scenario, channel coefficient data is expressed as a four-dimensional normalized complex matrix whose shape is (Ns × Nd × Nt × Nr), where Ns is the length of the sampled data and Nd is the number of path delays. Throughout this paper, we use the terms "CSI" and "channel coefficients" interchangeably.
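
    To make the data layout concrete, the following minimal sketch shows how such a complex channel matrix could be prepared for the models used later; the file name and the normalization step are illustrative assumptions, not part of the original setup.

        import numpy as np

        # Hypothetical loader: assumes the WINNER II coefficients were exported
        # from MATLAB as a complex array of shape (Ns, Nd, Nt, Nr).
        csi = np.load("winner2_csi.npy")

        # Split the real and imaginary parts into a trailing axis, since complex
        # numbers cannot be fed directly to the optimizers used in Section 4.
        csi_real = np.stack([csi.real, csi.imag], axis=-1)   # (Ns, Nd, Nt, Nr, 2)

        # Normalize to unit scale so that MSE values are comparable across scenarios.
        csi_real = csi_real / np.max(np.abs(csi_real))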

    Figure 2: Concept of the Clustered Delay Line (CDL) model [28]. Each cluster has a number of multipath components that have the same delay values but differ in the angle-of-departure and angle-of-arrival

    Table 1: Statistics of the two channel models

    To show the difference between the two scenarios, we plot the changes over time of the channel coefficients for each scenario in Fig. 1. These channel coefficients are the values corresponding to the first path between receiving antennas 1–3 and transmitting antenna 1, and only the real parts of these values are displayed in the plot. From the figure, we can observe the spatial and temporal correlation of the channels for both scenarios, even though they differ in degree. In the case of the stable channel scenario (Fig. 1a), similar signal patterns are repeated quite clearly over time and also among the three receiving antennas. In the case of the dynamic channel scenario (Fig. 1b), it is difficult to find a clear pattern like that in the previous case, yet we can still see correlations in the two domains.

    We can see the difference between the two channels in terms of correlation more clearly in Fig. 3. In this figure, we measure the correlation coefficient of any two CSI instances separated by T, using the following formula [3,32]:

    ρ(T) = | Σ_{t=1}^{L-T} h(t) h*(t+T) | / sqrt( Σ_{t=1}^{L-T} |h(t)|² · Σ_{t=1}^{L-T} |h(t+T)|² )

    where L is the total length of the CSI instances and h(t) is the CSI instance at time t. Note that the above equation can also be applied to computing the correlation in the spatial domain (Fig. 3b) by changing the definition of the separation. As expected, the stable scenario has overall higher temporal correlations than the dynamic channel, as shown in Fig. 3a; their coherence times (the channel coherence time is defined as the point when the correlation value drops to 0.5 [33]) are 420 and 40 ms, respectively. Compared to the temporal correlation result, higher spatial channel correlations among receiving antennas are observed in both cases (Fig. 3b), though the degree of correlation in the stable channel is still higher than that in the dynamic channel.
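
    As a concrete illustration, the temporal correlation above can be computed from a recorded CSI series as in the short sketch below, written in the sample-autocorrelation form; the variable names are ours, not taken from the original evaluation code.

        import numpy as np

        def temporal_correlation(h: np.ndarray, T: int) -> float:
            # h: complex CSI time series of length L for one (path, Tx, Rx) combination.
            # Returns the normalized correlation between instances separated by T.
            a, b = h[:-T], h[T:]
            num = np.abs(np.sum(a * np.conj(b)))
            den = np.sqrt(np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2))
            return float(num / den)

        # Coherence time: the smallest separation at which the correlation drops to 0.5.
        # h = csi[:, 0, 0, 0]   # e.g., first path, first Tx/Rx antenna pair
        # T_c = next(T for T in range(1, len(h)) if temporal_correlation(h, T) < 0.5)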

    Figure 3: Temporal and spatial correlation of the two channels used in this paper. (a) Temporal correlation (b) Spatial correlation

    3.2 Model Sharing Overhead

    As we saw earlier, wireless channels are fundamentally diverse, and their characteristics are thus hard to generalize; some channels remain highly correlated over long periods of time, e.g., the stable channel, while others may experience large channel fluctuations, e.g., the dynamic channel. Unfortunately, this aspect has not been fully taken into account in most of the previous deep learning-based CSI compression schemes, although it leads to a substantial model sharing overhead that eventually limits the gains of deep learning. To identify the model sharing overhead in more detail, we revisit the CSI compression performance of CSINet [13] for three different channel environments, including the two channels described in the previous subsection. As a baseline, we additionally consider a purely random channel, where the channel coefficients are sampled from a normal distribution with zero mean and a variance of 0.1. Note that the stable channel used in this paper represents an ideal case in which we can readily predict how the channel will change in the future, while the random channel can be viewed as the other end of the spectrum. The MSE defined in Section 5 is used as the performance metric, and we train the model using Adam optimization [34] with 10,000 CSI data samples and a maximum of 1,000 epochs.

    First, Fig. 4a shows the compression performance for varying compression ratios. What we pay attention to here is the performance in the dynamic channel: it deteriorates rapidly with the compression ratio, and its MSE reaches 0.1 even at the low compression ratio of 1/8, which is too large to be used practically. Throughout this paper, the compression failure criterion, denoted as δthr, is set to an MSE of 0.1. Considering that for any wireless channel the performance of CSINet will lie somewhere between the two curves of the stable (i.e., blue) and random (i.e., black dotted) channels, we can conclude that it is practically usable only at the compression ratio of 1/8. When compression fails, the model has to be retrained and shared between the transceivers again, which causes additional transmissions, i.e., model sharing overhead. Unfortunately, most of the previous approaches focus only on making the model work at higher compression ratios, and thus they prefer deep and wide networks in their designs, hoping that model sharing will not occur frequently. However, as can be seen, it can occur very frequently. In the case of the dynamic channel, at compression ratios greater than 1/16, the model will always produce a recovery error above the threshold for every CSI sample; in this case, its heavy network structure will accelerate the growth of the feedback overhead.

    Figure 4: Performance of an existing deep learning-based CSI compression scheme for various channel environments. (a) MSE vs. compression ratio (b) MSE vs. time (c) MSE vs. amount of training data

    Therefore, we have to consider proper model sharing when designing a deep learning-based CSI compression scheme. Fig. 4b shows the result when 50 consecutive CSI values are compressed with a fixed model. If the channel is quite reliable, as in the stable channel case, we might keep the high compression gains of deep learning, but this is not always guaranteed; as can be seen in the dynamic channel case, high recovery errors can persist over time, thus leading to additional feedback transmissions. One might think that this problem can be solved by training the model with more data and thus strengthening the model's generalization. Unfortunately, this approach may not be very effective when dealing with wireless channel data, which generally exhibit large irregularities over time. As shown in Fig. 4c, even if we increase the number of CSI training samples, there are only slight performance gains. In this case, the amount of training data may not be that important; rather, it is more important to use a proper set of training data that reflects the pattern and tendency of the current channel status well, and for this, appropriate channel status predictions could be very helpful.

    We describe the proposed CSI feedback scheme in detail in the following section.

    4 LCF

    4.1 Overview

    In this section, we provide an overview of the proposed scheme. As mentioned before, LCF is composed of two main processes, as shown in Fig. 5, for which different deep learning models are used: 1) autoregressive LSTM is used for CSI prediction, and 2) a convolutional autoencoder is used for CSI compression and reconstruction. In the first process, as the name suggests, a receiver generates predictions of future channel states using accumulated CSI values, which in turn are used as the training dataset for autoencoder optimization. This process is described in detail in Section 4.2. Next, in the second step, the actual CSI compression and recovery is performed. Using the encoder of the autoencoder, the receiver compresses the estimated CSI into an M-byte codeword and then sends it back to the transmitter. Upon receiving the compressed CSI, the transmitter reconstructs the original CSI with the decoder of the autoencoder, which has already been shared by the receiver. More details on the compression model are given in Section 4.3. Ideally, the feedback process of LCF requires only an M-byte data transmission, and therefore the feedback overhead can be significantly reduced. However, as mentioned earlier, such a gain is not always achievable; in some channel environments, the compression model can quickly become less effective and invalid. To tackle this issue, in LCF, the receiver dynamically updates and shares the compression model depending on the expected CSI recovery error obtained during CSI compression. We explain this in more detail in Section 4.4.

    Figure 5: Overall structure of LCF. It consists of an LSTM-based CSI prediction process and an autoencoder-based CSI compression and reconstruction process. Note that the modules in the dotted box are executed as needed

    4.2 CSI Prediction Using Autoregressive LSTM

    Figure 6: Autoregressive LSTM for CSI prediction. CSI predictions are made for each combination of the path delay, transmitting antenna, and receiving antenna

    The proposed prediction model has two layers, an LSTM layer and a fully connected layer, as shown in Fig. 6. CSI predictions are made for each combination of the path delay, transmitting antenna, and receiving antenna. Additionally, we handle the real and imaginary parts of the complex channel coefficients separately, since complex numbers are inherently not comparable in size and thus cannot be directly used in optimization. That is, we use 2·Nd·Nt·Nr models in total, and each model generates the CSI predictions for its corresponding combination. The input data shape for the models is (batch × Ni × 1), where the last number indicates the number of units (features) in the fully connected layer. One distinct feature of CSI data is that input samples keep arriving at the model sequentially, and relatively old samples can quickly become less useful. In this case, we can use online learning: instead of always training on the entire accumulated dataset, training is performed only on a small dataset containing the newest samples. In particular, the weights obtained in the previous step can be reused for better performance. We use the MSE as the objective for optimization and employ Adam optimization [34].
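
    The sketch below illustrates this design in Keras: a one-step LSTM predictor whose outputs are fed back to generate No future values autoregressively. The unit count is an illustrative assumption; in LCF, one such model is maintained per (path delay, Tx antenna, Rx antenna, real/imaginary) combination.

        import numpy as np
        import tensorflow as tf

        Ni, No, UNITS = 20, 20, 64   # window sizes from Section 4.2; unit count assumed

        def build_predictor() -> tf.keras.Model:
            # Two layers: an LSTM layer followed by a fully connected layer
            # producing a one-step-ahead CSI prediction.
            model = tf.keras.Sequential([
                tf.keras.layers.LSTM(UNITS, input_shape=(Ni, 1)),
                tf.keras.layers.Dense(1),
            ])
            model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
            return model

        def predict_autoregressive(model: tf.keras.Model, history: np.ndarray) -> np.ndarray:
            # Roll the one-step model forward No times, feeding each prediction
            # back into the input window (the autoregressive step).
            window = list(history[-Ni:])
            preds = []
            for _ in range(No):
                x = np.asarray(window[-Ni:], dtype=np.float32).reshape(1, Ni, 1)
                y = float(model.predict(x, verbose=0)[0, 0])
                preds.append(y)
                window.append(y)
            return np.asarray(preds)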

    For better understanding, we illustrate an example of CSI prediction in Fig. 7. The channel coefficients shown in the figure are sampled from the channel of the first path between transmitting antenna 1 and receiving antenna 1 in the stable channel case. We depict two curves, one for the real and one for the imaginary parts of the channel coefficients. This example corresponds to the case of Ni = No = 20, which means the channel coefficients from index 10 to index 29 are used to generate the next 20 CSI values, from index 30 to index 49. As expected, the prediction performance is heavily dependent on the previous prediction results; errors accumulate as predictions are made, and thus the prediction accuracy gradually drops as time advances. Starting from an MSE of 0.0001, the gap between the actual data and the prediction widens steadily, and at the last position it grows to 0.0032. However, we note that this level of error is acceptable for the training data, as we will see later.

    Figure 7: CSI prediction example for the channel of the first path between transmitting antenna 1 and receiving antenna 1

    4.3 Convolutional Autoencoder-Based CSI Compression and Reconstruction

    The proposed CSI compression model has the typical structure of an autoencoder, as shown in Fig. 8. It consists of two parts: an encoder and a decoder. Let f_enc and f_dec be the encoder and the decoder, respectively. The encoder takes the current CSI (i.e., Hi) as input, which is a four-dimensional channel coefficient matrix whose shape is (Nd × Nt × Nr × 2), where the last element denotes the real and imaginary parts. The first layer in the encoder is a 3D convolution layer, where three-dimensional filters are used to capture the channel correlation in both the spatial (for both the transmitting and receiving antennas) and delay domains. By default, we use 16 (3×3×3) filters, and the LeakyReLU activation function with a parameter of 0.3 is applied. Striding is not used. The feature maps acquired from this layer are then transferred to a fully connected layer with M units through average downsampling with a shape of (2×2×2) and flattening, and thus the M-byte compressed CSI, denoted as HM, is obtained as a result, i.e., HM = f_enc(Hi; θenc), where θenc is the set of encoder parameters.

    The decoder is basically the mirror of the encoder. It first passes the encoded data (i.e., HM) to a fully connected layer with Nf units, where Nf is the size of the output of the convolution layer in the encoder, and then transfers the outcome to a convolution layer through an upsampling layer. Like the convolution layer in the encoder, the convolution layer in the decoder also takes three-dimensional filters; however, we employ a transposed convolution layer in the decoder to match the input and output shapes, i.e., (Nd × Nt × Nr × 2). The same LeakyReLU activation function is applied, and L2 regularizers with a parameter of 0.001 are applied to all layers in the model. As a result, the decoder reconstructs the original CSI data from the compressed data HM; that is, Ĥ = f_dec(f_enc(Hi; θenc); θdec) = f_dec(HM; θdec), where θdec is the set of decoder parameters. In the following subsection, we explain how the model parameters (θenc and θdec) are trained and shared between the transceivers.
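
    A minimal Keras sketch of this encoder-decoder pair is given below, following the layer types and hyperparameters stated above (16 filters of size 3×3×3, LeakyReLU with 0.3, (2×2×2) average pooling and upsampling, L2 regularization of 0.001). The concrete values of Nd, Nt, Nr, and M are illustrative assumptions, and even-valued dimensions are assumed so that pooling and upsampling mirror each other exactly.

        import tensorflow as tf
        from tensorflow.keras import layers, regularizers

        Nd, Nt, Nr, M = 16, 4, 4, 32          # example sizes; M is the codeword length
        reg = regularizers.l2(0.001)          # L2 regularizers on all layers

        # --- Encoder: 3D convolution over (delay, Tx antenna, Rx antenna) ---
        enc_in = layers.Input(shape=(Nd, Nt, Nr, 2))        # real/imag as channels
        x = layers.Conv3D(16, (3, 3, 3), padding="same",    # stride 1 (no striding)
                          kernel_regularizer=reg)(enc_in)
        x = layers.LeakyReLU(0.3)(x)
        x = layers.AveragePooling3D((2, 2, 2))(x)           # average downsampling
        x = layers.Flatten()(x)
        code = layers.Dense(M, kernel_regularizer=reg)(x)   # M-byte compressed CSI
        encoder = tf.keras.Model(enc_in, code, name="encoder")

        # --- Decoder: mirror of the encoder ---
        Nf = (Nd // 2) * (Nt // 2) * (Nr // 2) * 16         # pooled feature map size
        dec_in = layers.Input(shape=(M,))
        y = layers.Dense(Nf, kernel_regularizer=reg)(dec_in)
        y = layers.Reshape((Nd // 2, Nt // 2, Nr // 2, 16))(y)
        y = layers.UpSampling3D((2, 2, 2))(y)
        y = layers.Conv3DTranspose(2, (3, 3, 3), padding="same",
                                   kernel_regularizer=reg)(y)
        y = layers.LeakyReLU(0.3)(y)
        decoder = tf.keras.Model(dec_in, y, name="decoder")

        autoencoder = tf.keras.Model(enc_in, decoder(encoder(enc_in)))
        autoencoder.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")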

    Figure 8: The proposed CSI compression and reconstruction model. 3D convolution layers are adopted in both the encoder and the decoder

    4.4 Model Training and Sharing

    In LCF, all of the autoencoder parameters (both θenc and θdec) are trained by the receiver, but only the decoder parameters (θdec) are sent to the transmitter, and only when needed. At the very beginning, the receiver trains the autoencoder with (No + 1) CSI values, consisting of the current CSI, i.e., Hi, and the newly generated No CSI predictions; through this step, it obtains the trained encoder and decoder parameters. Since the decoder model is updated, its parameters need to be sent to the transmitter by the receiver. From then on, the process of compressing a target CSI is conducted by retraining the model on it. Note that in this step, training is performed with the decoder parameters fixed, since the decoder parameters have already been shared with the transmitter. In this process, the θenc parameters are still trainable, so they may have different values before and after training. However, since they are not shared with the transmitter and are used only by the receiver, they do not have a significant impact on the system as a whole.
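
    Continuing the autoencoder sketch above, freezing the shared decoder while keeping the encoder trainable can be expressed as follows; the target CSI tensor H_i and the epoch count are placeholders for illustration.

        import numpy as np

        # H_i: the current target CSI, shaped (1, Nd, Nt, Nr, 2); placeholder data here.
        H_i = np.random.randn(1, Nd, Nt, Nr, 2).astype("float32")

        # Freeze the decoder (already shared with the transmitter) and recompile
        # so that only the encoder parameters remain trainable.
        decoder.trainable = False
        autoencoder.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

        # Retrain on the target CSI; the final loss is the optimization error
        # delta used in the feedback decision of Section 4.4.
        hist = autoencoder.fit(H_i, H_i, epochs=50, verbose=0)
        delta = hist.history["loss"][-1]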

    Every time the receiver compresses CSI, it obtains, as a result of training, an optimization error, denoted as δ, which corresponds to the expected CSI recovery error at the transmitter. Depending on this value, it decides whether to send the decoder parameters to the transmitter. If the δ value is less than a predefined threshold δthr, the receiver sends only the compressed CSI, i.e., HM, as this implies that the decoder parameters are still valid for the current target CSI thanks to the CSI prediction. Otherwise, the receiver discards the previous decoder parameters as obsolete; it then re-trains the entire model and sends the newly trained decoder parameters (i.e., θdec) to the transmitter along with the compressed CSI.
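
    This decision logic can be summarized in the following sketch; the receiver object and its methods are a hypothetical interface standing in for the routines of Sections 4.2 and 4.3, not code from the original implementation.

        DELTA_THR = 0.1   # compression failure criterion from Section 5.1

        def feedback_step(receiver, H_i):
            # Compress the target CSI by retraining the encoder with the decoder
            # fixed; delta is the optimization error, H_M the M-byte codeword.
            delta, H_M = receiver.compress(H_i)
            if delta < DELTA_THR:
                return {"csi": H_M}   # decoder still valid: send the codeword only
            # Otherwise: obsolete the old decoder, retrain the whole autoencoder on
            # the current CSI plus No fresh LSTM predictions, and share the new decoder.
            predictions = receiver.predict_future()          # autoregressive LSTM
            receiver.retrain_autoencoder([H_i, *predictions])
            delta, H_M = receiver.compress(H_i)
            return {"csi": H_M, "decoder": receiver.decoder_weights()}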

    Algorithm 1: Receiver

    Algorithm 2: Transmitter

    To summarize, we present the entire proposed CSI feedback procedure in Algorithms 1 and 2. Basically, the proposed method is designed for two communicating entities, but it can be extended to multi-user scenarios as well. However, in that case, the transmitter may have to maintain different models for different users, which can cause additional operational burdens such as an increased model sharing overhead. Therefore, mixing the proposed deep learning-based approach with traditional approaches should be considered. We leave this issue for our future work.

    5 Performance Evaluation

    5.1 Settings

    In this section, the performance of LCF is evaluated. We use TensorFlow 2 [26] to develop the proposed deep learning models of LCF and conduct extensive experiments on an Intel i7 machine with 16 GB RAM and an NVIDIA RTX 3080 GPU. Using the MATLAB WINNER II model [27], we generate CSI datasets for both scenarios, sampled every 2 ms. When training the models, we use 70% of the total dataset for training, and the remaining 20% and 10% are used for validation and testing, respectively. For model optimization, we use the MSE as the objective and employ Adam optimization [34] with a maximum of 1,000 epochs and a learning rate of 0.001. The default parameters used in the experiments are listed in Tab. 2.

    Table 2: Default parameters

    To compare the performance of LCF with those of other approaches, we additionally implement AFC [3] and CSINet [13]. Unfortunately, since these schemes have different features and feedback policies, we have to make some modifications to them to ensure a fair comparison. The main changes are as follows:

    · AFC: As AFC is not a machine learning-based approach, it does not require a training step and determines the degree of compression by calculating the expected compression error each time it receives CSI. Its adaptive bit-quantization scheme is excluded, since it could be applied to the other schemes as well. In the original AFC, the compression ratio can also be dynamically adjusted depending on the channel status, unlike the other two schemes, which use a fixed compression ratio. In this study, for simplicity, we apply a fixed compression ratio to AFC.

    · CSINet: CSINet considers only single-antenna receiver cases. In order to extend it to multi-antenna receiver cases, we apply it repeatedly to the channel of each receiving antenna. We use the same training configuration (both dataset and optimizer) for both CSINet and LCF.

    · Both: All schemes can skip a CSI transmission or a model (i.e., decoder) parameter transmission if the expected CSI recovery error is less than δthr. Note that even if this condition is satisfied, LCF and CSINet must still send the compressed CSI to the transmitter.

    The compression ratio α is defined as the ratio of the compressed data size to the original CSI data size (i.e., α = M / (2·Nd·Nt·Nr)), and as a key performance metric, we measure the MSE, defined following [13] as MSE = E[ ||Hi − Ĥi||² / ||Hi||² ]. In addition to the MSE, we use the cosine similarity between the original CSI and the reconstructed CSI to determine the value of δthr. Imperfect CSI due to compression causes changes in the resulting beam steering vectors, which can be measured as the cosine similarity between the two CSI values [3,13]. Fig. 9 shows the cosine similarity values as a function of the MSE for Nt = Nr = 4 and Nd = 16. To draw this plot, for each MSE value, we generate two sets of CSI matrices: one is randomly generated from the standard normal distribution, and the other is generated by adding random noise of the given MSE value to the first matrix. After that, we compute the cosine similarity between the two matrices for each MSE value. The result is quite predictable: the cosine similarity decreases as the MSE grows. Based on this result, we set δthr to 0.1, where the cosine similarity is around 0.95.
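
    The two metrics and the Fig. 9 procedure can be sketched as follows; the normalized-error form of the MSE is our reading of the definition in [13], and the matrix sizes are the ones stated above.

        import numpy as np

        def mse(H, H_hat):
            # Normalized reconstruction error, following the CSINet-style definition.
            return np.sum(np.abs(H - H_hat) ** 2) / np.sum(np.abs(H) ** 2)

        def cosine_similarity(H, H_hat):
            # Cosine similarity between the original and reconstructed CSI,
            # used to map an MSE level to beam-steering quality (Fig. 9).
            a, b = H.ravel(), H_hat.ravel()
            return np.abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

        # Fig. 9 procedure: perturb a random CSI matrix with noise scaled to a
        # target MSE level and measure the resulting cosine similarity.
        H = np.random.randn(16, 4, 4, 2)                    # Nd = 16, Nt = Nr = 4
        noise = np.random.randn(*H.shape)
        target_mse = 0.1
        noise *= np.sqrt(target_mse * np.sum(H ** 2) / np.sum(noise ** 2))
        print(cosine_similarity(H, H + noise))              # around 0.95 at MSE 0.1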

    In the following subsections, we first investigate the performance of each model used in LCF through micro-benchmarks, and then we compare the overall performance of LCF with those of the other approaches.

    5.2 Micro-Benchmarks

    5.2.1 CSI Prediction Model

    We investigate the impact of the LSTM parameters on the prediction performance for varying numbers of LSTM units and different Ni and No combinations. Fig. 10 illustrates the plots of the No = Ni cases for each scenario. It can be seen that the prediction accuracy decreases as No grows, except for the case where No = 5 and 256 units are used in the dynamic channel. This result is consistent with the previous observation shown in Fig. 7: as the model predicts CSI values for times farther in the future, the prediction error becomes larger.

    Figure 9: The same cosine similarity defined in CSINet [13] is used. We set the threshold for retraining as the point where the cosine similarity is around 0.95

    Figure 10: CSI prediction performance for the two channel cases, according to different numbers of LSTM units and Ni and No value combinations. In most cases, the prediction accuracy becomes better with more LSTM units and smaller Ni and No values. (a) Stable channel (b) Dynamic channel

    The CSI prediction performance is mainly affected by the No value, but the number of LSTM units also has an effect. In all cases, the prediction becomes more accurate as the number of LSTM units increases, except for the No = 5 case in the dynamic channel. This exception is due to overfitting; using more LSTM units increases the model capacity too much, making it difficult to handle unobserved CSI data. Recall that in this evaluation, the dynamic channel has a relatively high and rapid channel variation compared to the stable one, and thus it is more likely to suffer from overfitting. Overall, better CSI prediction results are observed in the stable channel case than in the dynamic channel case; the worst MSE for the stable case is around 0.02, while for the dynamic case it reaches almost 0.9. However, by taking small Ni and No values, we can improve CSI predictions in the dynamic channel as well; when No is 5 and the number of LSTM units is 32, the prediction error reaches its lowest value of 0.02.

    5.2.2 CSI Compression and Recovery Model

    In this subsection, we evaluate the performance of the CSI compression model. To do this, for each scenario we first generate 20 CSI predictions with Ni = No = 20 and use them to train the autoencoder. After that, we compress and restore the corresponding CSI label data with the trained model, measuring the difference between the two. Recall that the decoder parameters are fixed once they are trained. We repeat this experiment 100 times and take the average value.

    Fig. 11 shows the results in terms of the MSE and the number of decoder parameters, for various numbers of encoder filters and compression ratios. First, from Figs. 11a and 11b, we can observe that the number of filters greatly affects the compression performance. For the same compression ratio, using more filters yields lower recovery errors. Unfortunately, in return for this high performance, using more filters makes the model larger, resulting in a higher model sharing overhead, as shown in Fig. 11c. The number of filters is not the only factor that impacts the compression performance; the compression ratio affects the performance as well, since it directly determines the number of units in the fully connected layers of the autoencoder. As expected, the decoder size decreases with the compression ratio, at the expense of a higher compression error. We also find that the recovery errors in the stable channel are overall lower than those in the dynamic channel, even though this difference is very subtle.

    5.3 Macro-Benchmarks

    In this evaluation, we compare the performance of LCF with those of AFC [3] and CSINet [13]. We run each scheme on 20 consecutive CSI values and measure the average MSE and feedback size at compression ratios from 1/8 to 1/128. Here, the feedback size is defined as the combined size of the model and the compressed CSI sent to the transmitter. We repeat this evaluation for both the stable and dynamic channel scenarios, and Fig. 12 shows the results.

    From the results, we can see that AFC benefits from a small feedback size; for both scenarios, its maximum overhead is only 32, a much smaller number than those of the deep learning-based schemes. However, it suffers from high CSI recovery errors. As shown in Figs. 12a and 12d, its MSE values are all larger than 1, which is practically unusable. We note, however, that the actual AFC can perform better than this modified AFC because of its adaptive compression ratio and quantization, which were excluded in this evaluation. Compared to AFC, CSINet and LCF both achieve lower MSE results. When the compression ratio is 1/8, CSINet obtains minimum MSE values of 0.07 and 0.24 for the two cases, respectively. Although these values are better than those of AFC, they still appear somewhat unstable. In particular, when a higher compression ratio such as 1/128 is used, or in a highly dynamic wireless channel scenario, the CSI recovery error increases significantly. Its MSE values are around 0.5 in the dynamic case, as shown in Fig. 12d, which verifies our hypothesis that CSI compression quickly becomes less effective without proper model updates. To make matters worse, this result eventually incurs a substantial model sharing overhead, as shown in Figs. 12b and 12e. The reason the curves of the two scenarios show different patterns, i.e., one going up while the other goes down, is that the major factor affecting the feedback size differs. In the stable channel case, model sharing rarely occurs at low compression ratios, and thus the feedback overhead decreases with the compression ratio; from Fig. 12c, we can see that only 1–2 model sharing transmissions happen when the compression ratio is lower than 1/32. However, using a higher compression ratio results in more frequent model sharing, causing the decoder size to take up most of the feedback overhead; as a result, when the compression ratio is 1/128, model feedback occurs for every CSI sample (Fig. 12c). Conversely, in the dynamic channel case, model sharing occurs for every CSI sample, as shown in Fig. 12f, so the number of model transmissions no longer has a significant impact on the results. In this case, the higher the compression ratio, the smaller the model size, which in turn reduces the feedback overhead.

    Figure 11: Impact of model parameters on the CSI compression model performance. For (a) and (b), 'FN' denotes the number of filters. In this evaluation, (3×2×3) filters are used, since the number of receiving antennas is two. (a) Recovery error (stable) (b) Recovery error (dynamic) (c) Decoder size

    Overall, LCF outperforms the other approaches in terms of MSE. Even at the highest compression ratio of 1/128, it achieves an MSE of 0.05, which is much lower than those of the other schemes. More surprisingly, LCF obtains this result with a lower feedback overhead; the average feedback overhead values are only around 40 and 120 for the two cases, respectively. From Fig. 12b, we can see that LCF has a higher feedback overhead than CSINet in the stable channel case when the compression ratio is low (e.g., 1/8 and 1/16). This is due to the fact that LCF directly takes three-dimensional channel data as input, and thus the number of units in its fully connected layers is inherently larger than that of CSINet for the same compression ratio. As shown here, the gains of LCF may not be noticeable in such special situations, where model updates are rarely required. However, in most cases, compared to the existing CSI feedback approaches, LCF obtains more stable and higher CSI compression performance, with only 10% of the model sharing overhead of the other approaches.

    Figure 12: LCF outperforms AFC and CSINet in terms of MSE and feedback overhead. Even at the highest compression ratio of 1/128, it obtains much lower MSE values with lower feedback overhead. (a) Recovery error (stable) (b) Feedback size (stable) (c) # Feedback transmissions (stable) (d) Recovery error (dynamic) (e) Feedback size (dynamic) (f) # Feedback transmissions (dynamic)

    6 Conclusion and Future Work

    In this paper, we propose LCF, which addresses a key issue of conventional autoencoder-based CSI feedback schemes: that CSI compression quickly becomes less effective over time and incurs an excessive model sharing overhead. Employing an autoregressive LSTM model, LCF generates CSI predictions and then exploits them to train the autoencoder, so that the compression model remains valid even for highly dynamic wireless channels. In order to fully capture the channel correlations and achieve higher CSI compression, three-dimensional convolutional layers are directly applied in the autoencoder. As a result, compared to the existing CSI feedback approaches, LCF obtains more stable and better CSI compression performance in terms of MSE, with only 10% of the model sharing overhead of the other approaches.

    The LSTM model in LCF performs well for forecasting time-series CSI data, but unfortunately it has the well-known drawback of a long training time. Several approaches can be considered to remedy this issue. First, instead of training the prediction model on all of the data, we can use an ensemble learning strategy that updates the model with the new data and combines it with the existing model [35,36]. We could also consider using a different type of network. Gated Recurrent Units (GRU) [37] could be a good alternative, since they offer lower computation with a smaller number of parameters compared to LSTM. Generally, Convolutional Neural Networks (CNNs) are computationally cheaper than models in the Recurrent Neural Network (RNN) family, and thus they could be used for this task as well. In that case, it would be easier to combine the two currently separate models of LCF into one model, resulting in higher efficiency. These schemes should be carefully evaluated not only with the two channel models currently used in this paper, but also with more realistic and diverse channel environments. We leave these issues for our future work.

    Funding Statement: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1F1A1049778).

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
