
    Machine Learning-based Stable P2P IPTV Overlay

    Computers, Materials & Continua, 2022, Issue 6

    Muhammad Javid Iqbal, Ihsan Ullah, Muhammad Ali, Atiq Ahmed, Waheed Noor and Abdul Basit

    1Directorate of Information Technology, Sardar Bahadur Khan Women’s University, Quetta, Pakistan

    2Department of Computer Science and Information Technology, University of Balochistan, Quetta, Pakistan

    Abstract: Live video streaming is one of the newly emerged services over the Internet that has attracted immense interest from service providers. Since the Internet was not designed for such services at its inception, live streaming poses some serious challenges, including cost and scalability. Peer-to-Peer (P2P) Internet Protocol Television (IPTV) is an application-level distributed paradigm to offer live video contents. In terms of ease of deployment, it has emerged as a serious alternative to client-server, Content Delivery Network (CDN) and IP multicast solutions. Nevertheless, the P2P approach has struggled to provide the desired streaming quality due to a number of issues. Stability of peers in a network is one of the major issues among these. Most of the existing approaches address this issue through the older-stable principle. This paper first extensively investigates the older-stable principle to observe its validity in different scenarios. It is observed that the older-stable principle does not hold in several of them. Then, it utilizes a machine learning approach to predict the stability of peers. This work evaluates the accuracy of several machine learning algorithms over the prediction of stability, where the Gradient Boosting Regressor (GBR) outperforms other algorithms. Finally, this work presents a proof-of-concept simulation to compare the effectiveness of the older-stable rule and machine learning-based predictions for the stabilization of the overlay. The results indicate that machine learning-based stability estimation significantly improves the system.

    Keywords: P2P IPTV; live video streaming; user behavior; overlay networks; stable peers; machine learning

    1 Introduction

    Live video streaming is a challenging service because it has stringent playback deadlines. Centralized client/server, IP multicast and peer-to-peer (P2P) are the major architectures that enable video streaming services. Client/server is a classical approach in which clients send video requests to servers and servers respond to these requests. One major issue of these systems is scalability, which requires enhancing the resources at the server side to accommodate a surge in user requests, resulting in increased cost. A similar model, named CDN [1], brings some improvements to the classical client/server model to offer video streaming. In CDNs, the source pushes contents to various servers that are distributed purposefully over the Internet. Clients get their contents from nearby servers via any-casting instead of contacting the source server, which ultimately cuts down the startup delay and reduces the traffic on the network. IP multicast works at the network layer, enabling routers to maintain and manage groups. In this approach, one stream can be replicated by a router towards multiple receivers. This approach is an efficient solution, but several issues [2] curtailed its deployment at the global scale. However, it is used in smaller domains that are managed by a single administration. On the other hand, the P2P strategy works at the application layer [3]. It organizes peers into a virtual network where they not only receive but also transmit the contents. The major advantage of P2P systems over client-server and IP multicast is their easier deployment at low cost.

    The P2P approach for live streaming has its own issues. One of the major issues of these systems is their reliance on intermittently available nodes, which renders the whole system unstable. Numerous approaches have been proposed to overcome this issue, resulting in several systems which can be classified into multiple categories. Such approaches either focus on resilient designs or consider the peers’ stability for improvements. The former approach focuses on the overlay network structure and content dissemination strategies, which, without stability, are not enough, as both the management overhead and the delay increase.

    Concerning the stability of peers in existing approaches, it is mostly determined through a statistical observation which considers a node to be stable if it has been present in the network for a long time [4–6]. We term this approach the older-stable principle. This principle is based on a general statistical correlation between the elapsed time and the remaining time of all participating peers in the system. However, users have different choices and habits which influence their viewing patterns. One of the major issues with this approach is that the correlation is measured over the sessions of all users together. As a result, a strong correlation between elapsed time and remaining time has been observed, which leads to treating long-lived peers as stable ones [4]. However, it has been observed that users exhibit different behavior patterns while watching television. For instance, a user may leave his/her system on a channel while busy doing something else, while another user may join and leave channels frequently while browsing [7]. This work assesses the older-stable principle extensively by finding the correlation between elapsed time and remaining time in different scenarios. It separates different modes of behavior and analyzes the correlation in each mode. The results show that this relationship differs across scenarios.

    Stability of peers can also be predicted through a machine learning approach; however, we find very few such efforts in the literature. Therefore, this study also considers a machine learning approach for the prediction of stability. To extensively evaluate the accuracy of machine learning algorithms, this work trains and tests several machine learning models. The results show that the Gradient Boosting Regressor performs better than the other algorithms. Furthermore, to compare the older-stable principle and machine learning-based predictions, this work presents a proof-of-concept simulation, and the results indicate that incorporating machine learning-based predictions can significantly improve the performance.

    Since P2P networks are networks of users, the stability of peers plays a crucial role in the performance of these systems. Knowledge of stable peers enables the construction of a stream delivery structure that improves the quality of streaming and provides a better user experience. Several types of P2P IPTV overlays exist which can greatly benefit from an accurate estimation of stability. Therefore, this work provides a baseline for machine learning-based intelligent P2P IPTV overlays.

    The contribution highlights of this work are:

    ■ Analysis of the older-stable principle by finding the correlation between elapsed time and remaining time of a session in different scenarios,

    ■ Assessment of several machine learning algorithms for the prediction of stability, and

    ■ Simulation to analyze the effectiveness of machine learning-based stability estimation in a tree-based system.

    In the rest of the article, we discuss related work in Section 2. The analysis of the older-stable principle through estimation of correlation and the obtained results are presented in Section 3. Section 4 evaluates the efficiency of machine learning algorithms for the prediction of stability. Simulations to compare the integration of the older-stable principle and machine learning-based predictions into a tree-based overlay are given in Section 5. Finally, Sections 6 and 7 present the conclusion and future work.

    2 Related Work

    Tang et al. [4] analyze logs of user behavior and observe a positive correlation between the elapsed duration of the current user session and the remaining duration. This motivates them to propose the construction and maintenance of a tree. The tree is built and maintained in such a way that peers with a longer presence in the current session are placed near the root of the tree, while nodes with a shorter presence are placed as leaves. Several other approaches are based on the same older-stable principle. For instance, Wang et al. [5] propose a hybrid push-pull overlay that creates a tree-based backbone, called the tree-bone, from stable nodes, while the unstable nodes are connected to the tree-bone. The stream is pushed over the tree-bone, while the other nodes, outside the tree-bone, operate in a mesh network to pull the content. Here, the selection of stable nodes for the tree-bone is based on the older-stable principle. Similarly, Tan et al. [6] combine both stability and bandwidth to construct a stable tree, where a peer’s stability is determined from the online age of the peer.

    Tian et al. [7] perform a measurement study of an existing system and observe that nodes’ lifetime follows a Pareto distribution. In their parent selection strategy, they propose to choose an older node as a parent node because it is more likely to be a stable one. A two-tiered overlay is presented in [8], where tier 1 consists of stable nodes in the form of a labeled tree. The identification of stable nodes for tier 1 is based on the peers’ ages, which is akin to the older-stable principle. Budhkar et al. [9] propose an overlay management strategy to improve the stability of the system. A similar strategy has been used in a hybrid P2P-CDN overlay [10]. A Fuzzy Logic based Hybrid Overlay (FLHyO) is proposed in [11], which utilizes a fuzzy system to determine the priority of all the peers during overlay construction by considering the upload bandwidth, geographical location, age and utilization of a peer. Ma et al. [12] analyze a P2P-CDN based system and propose a peer selection strategy involving machine learning instead of localization. Miguel et al. [13] propose a classification of peers based on their video chunk contribution. To create partnerships, they restrict each peer to choose partners from high-contributing groups as well as low-contributing groups to keep the system running. Nonetheless, they do not consider the stability of peers in their approach.

    On the other hand, prediction of a user’s current session is proposed in [14], which models the influencing parameters of user behavior through a Bayesian network to predict the length of the current session. Based on the predicted session, they propose to maintain a tree structure that minimizes the impact of the abrupt departure of an upstream peer. According to this approach, a peer estimates its own session length upon arrival to the network and schedules a maintenance operation just before the end of the session. During this stage, it swaps its position with its most stable child, while the other child nodes are notified to connect to the same or any other stable node. An improvement over the prediction accuracy of the Bayesian network through a Support Vector Regressor (SVR) model is presented in [15].

    Most of the existing works utilize the older-stable principle, which considers a peer’s age as the stability indicator. Furthermore, very few approaches use machine learning to determine the stability, and those that do, do not explore potentially more efficient and accurate algorithms. Therefore, this work first extensively evaluates the older-stable principle and then assesses several machine learning algorithms for their accuracy, which have not been studied before for this problem.

    3 Analysis of Older Stable Principle

    To analyze the older-stable mechanism, real traces of user activities are required. This study utilizes IPTV traces from Lancaster University [16], where an IPTV service provider (Lancaster living lab) offers video streams both for television sets and for PCs installed with P2P clients. These traces contain users’ logs collected over a period of six months.

    3.1 Data Preparation

    The data contain the start and stop time of a channel by a user, which represent the join and departure time. This information can be used to prepare a dataset based on the elapsed and remaining time of a user, where the whole session time is split into one-minute intervals. This provides the data to analyze the correlation between the elapsed and remaining time of a user over each minute, as sketched below. An initial analysis of the traces revealed that they contain either a large portion of very short sessions or very long sessions spreading over multiple days. Therefore, it is crucial to also analyze the impact of these extreme sessions on the overall correlation. To do so, this work calculates the correlation on the whole dataset as well as on sub-datasets where such sessions are separated from the others and analyzed on their own. Next, a discussion of the different groups of sub-datasets that are separated for analysis is presented.
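    To make the preparation step concrete, the following is a minimal sketch (not the authors’ code) of how each session can be expanded into per-minute (elapsed, remaining) samples with pandas; the column names user, start_min and stop_min are assumed for illustration.

        # Sketch: expand session logs into per-minute elapsed/remaining samples.
        # Column names are assumed; real traces would first need their start/stop
        # timestamps converted to minutes.
        import pandas as pd

        def expand_sessions(sessions: pd.DataFrame) -> pd.DataFrame:
            rows = []
            for _, s in sessions.iterrows():
                duration = int(s["stop_min"] - s["start_min"])  # session length in minutes
                for elapsed in range(1, duration + 1):
                    rows.append({"user": s["user"],
                                 "elapsed": elapsed,
                                 "remaining": duration - elapsed})
            return pd.DataFrame(rows)

        # Example: one 5-minute session yields 5 (elapsed, remaining) samples.
        demo = pd.DataFrame([{"user": "u1", "start_min": 100, "stop_min": 105}])
        print(expand_sessions(demo))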

    3.1.1 Surfing Users

    Users exhibit different behaviors while watching programs. A user is considered to be in surfing mode when he/she is browsing channels and switches channels quickly. In this mode, the user checks the current programs of different channels to choose one for watching. Usually, in this mode, a user may hold a channel for up to two minutes [17].

    3.1.2 Viewing Users

    A user in viewing mode actually watches the channel and is interested in the content being transmitted. Sessions whose durations lie between the limits of surfing users and idle users are considered to be in the viewing mode [18]. Additionally, this work also analyzes the correlation over different times of a day, as user behavior measurements show that behavior changes over the course of the day.

    3.1.3 Idle Users

    The presence of very long sessions (spanning days) in the dataset demonstrates a peculiar phenomenon, from which we conclude that these sessions most likely result from a television set that remained on while the user was not actually watching the channel. Therefore, this study analyzes these sessions separately to understand their correlation. In this work, sessions longer than 4 h are considered to be those of an idle user.

    This study calculates the Pearson coefficient of correlation as given in Eq. (1). Here, $x$ is the elapsed time while $y$ is the remaining time; $\bar{x}$ and $\bar{y}$ are the means of the elapsed time and the remaining time, respectively.

    $r=\dfrac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^{2}}\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^{2}}}$    (1)
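    As an illustration, a per-group correlation can be computed with SciPy as in the following sketch; it is an assumed reconstruction of the analysis rather than the authors’ code, and the sample data are illustrative only.

        # Sketch: Pearson correlation between elapsed and remaining time for one
        # behavior group (surfing <= 2 min, viewing, or idle > 4 h sessions).
        import pandas as pd
        from scipy.stats import pearsonr

        def group_correlation(samples: pd.DataFrame):
            # Returns (r, p-value) for the elapsed/remaining columns of the group.
            return pearsonr(samples["elapsed"], samples["remaining"])

        samples = pd.DataFrame({"elapsed": [1, 2, 3, 4, 5],
                                "remaining": [9, 7, 6, 4, 1]})  # illustrative only
        r, p = group_correlation(samples)
        print(round(r, 2), round(p, 3))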

    3.2 Results

    This sub-section presents the analysis of the older-stable principle by finding the correlation between elapsed time and remaining time for the different behavior groups discussed earlier. In addition to existing works, this work studies the impact of the time of day on the correlation as well as the impact of each group on the rest of the traces. The Pearson coefficient of correlation is r = 0.71 at p = 0.05, a statistically significant positive correlation between elapsed time and remaining time for all the traces, confirming previous results as presented in [4]. Similarly, for these same combined traces, the correlation values for each hour are shown in Fig. 1a.

    Figure 1: Correlation of elapsed and remaining time. (a) Group-wise correlation (b) Impact of a group on overall correlation

    Here, the correlation remains strong for most of the day; however, it remains quite low during the early hours of the day. This shows a slight deviation from the existing work when sessions are split across different times. Concerning the analysis of traces from different user groups, traces belonging to the surfing behavior show a correlation coefficient of r = 0.07 at p = 0.05, which means that the elapsed time has no significant correlation with the remaining time. Furthermore, concerning the hourly values, the figure again shows that no significant correlation is found at any hour of the day for the surfing behavior.

    The value of the correlation for viewing users is r = 0.08 at p = 0.05, which is similar to the surfing group. Furthermore, the hourly results do not show any strong correlation. On the other hand, the correlation of traces for idle users is r = 0.5 at p = 0.05, which shows that a correlation between elapsed time and remaining time exists. However, it is evident from the hourly results given in the figure that there is positive as well as negative correlation at different hours of the day. This is indeed an interesting observation which contradicts the older-stable principle. To dig further into this, this work combines different behavior groups and presents the results in Fig. 1b. This figure analyzes the effect of each group on the overall result. It can be seen that excluding the surfing behavior traces has no effect whatsoever on the overall result. Moreover, the idle behavior has the most impact on the overall result. Therefore, the lack of idle sessions in the traces reduces the value of the correlation coefficient.

    To understand the variations in the correlation coefficient during different times of the day, this work analyzes the sessions generated at different times. Fig. 2 presents the histograms of the frequency distributions of all the combined sessions at four different times of the day. It is evident from Fig. 2 that the numbers of sessions at 04:00, 05:00, and 11:00 are low, whereas at 23:00 this number is a lot higher. It means that the number of sessions also has an impact on the correlation coefficient value.

    Figure 2: Sessions’ frequency distributions at different times of the day. (a) Number of sessions at 04:00 (b) Number of sessions at 05:00 (c) Number of sessions at 11:00 (d) Number of sessions at 23:00

    Since the older-stable principle is applied to individual users while it is deduced from combined behaviors in existing works, this work also considers the individual case. To perform an analysis of individual behaviors, this study extracts the traces of the two users with the most appearances in the system. Figs. 3a and 3b present the correlations of the overall sessions for each of these two users over different times of the day. These results show both very weak positive and negative correlations, which means that the general observation of all users together cannot be localized to individual users.

    Figure 3: Correlations for individual users. (a) User 1 (b) User 2

    To conclude the analysis of the older-stable principle, this study observes through extensive investigation that this principle does not always hold. Furthermore, it is dependent on the time of the day and the overall population of users. The older-stable rule holds when all the traces of all users are analyzed together, which is how it was established in similar analyses. This study observed that the strong correlation is mainly the result of idle sessions, which may be present in a dataset under investigation.

    4 Contextual Approach

    In contrast to the older-stable principle, the contextual approach estimates the length of the current session of a user from previous sessions as well as contextual information. This section presents an extensive study of the contextual approach to evaluate the efficiency of several machine learning models over the estimation of stability. First, it discusses the machine learning models used in this work; then it presents the methodology adopted in this approach as well as the results.

    4.1 Predictive Models for User Behavior

    Our goal is to estimate the current session of a user, which is a continuous variable; therefore, it becomes a regression-type problem. In the literature, only a Bayesian Network (BN) and a Support Vector Regressor (SVR) have been used for the prediction of the current session from contextual information [15]. This work includes other machine learning models to get a comparison and enhance the accuracy further. Since this is a regression problem, this study applies several regression models and selects those which produce comparable results. The newer models chosen for this study are the K-Nearest Neighbors Regressor (KNNR), the Random Forest Regressor (RFR) and the Gradient Boosting Regressor (GBR). This work also includes the previously used Bayesian network and SVR models for comparison. First, we discuss all these models.

    4.1.1 K-Nearest Neighbor Regression

    K-Nearest Neighbors Regression is an adapted form of the K-Nearest Neighbor (KNN) algorithm for continuous variables. KNN is a supervised learning mechanism for the prediction of discrete variables. The core of this technique is the use of the nearest neighbors for the estimation of an unknown or missing variable. To determine the nearest neighbors, it uses a distance metric, such as the Euclidean distance from the missing point. KNNR estimates the output label by taking an average of the labels of its k nearest neighbors.

    This work uses the Python machine learning library to build the KNNR model. This library provides a KNNR implementation that can be configured with different settings according to the nature of the problem. Different parameters are set for this model to find the best settings for this problem. A few major parameters and their settings as chosen for this model are described below; a configuration sketch follows the list.

    · n_neighbors (K):

    K is the number of nearest points, called neighbors, that the algorithm considers when estimating a missing point. The value of K has a direct impact on the query processing time of a few algorithms such as the ball tree and the KD tree. A larger value of K increases the query time for two reasons: firstly, a larger value of K requires searching a bigger part of the neighbors’ space, leading to more time; secondly, using K > 1 requires queuing of internal results during tree traversal. A value of K = 9 produces better results in this work.

    · algorithm: There are different algorithms available, which include the brute, ball_tree, kd_tree and auto values. One can set any one of these algorithms, or the auto option lets the library choose the most appropriate one for the problem. This work chooses the latter option.

    · p: This parameter specifies the method of computing the distance between points. A value of 1 invokes the Manhattan distance, 2 provides a method equivalent to the Euclidean distance, and any other value sets the distance metric to the corresponding Minkowski distance.
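    The following sketch shows how such a configuration might look with scikit-learn’s KNeighborsRegressor; the training and test arrays are placeholders and the values simply mirror the settings reported above.

        # Sketch of the KNNR configuration described above (assumed, not the
        # authors' code); X_train/y_train are placeholders.
        from sklearn.neighbors import KNeighborsRegressor

        knnr = KNeighborsRegressor(
            n_neighbors=9,     # K = 9 produced the best results in this work
            algorithm="auto",  # let the library pick brute force, ball tree or KD tree
            p=2,               # p = 2 corresponds to the Euclidean distance
        )
        # knnr.fit(X_train, y_train); y_pred = knnr.predict(X_test)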

    4.1.2 Random Forest Regressor

    The Random Forest Regressor combines multiple decision trees by fitting them on randomly selected sub-samples of the dataset. Further, it averages the results of these trees to control over-fitting and enhance accuracy. This work utilizes the model provided by the Python machine learning library [19] to use RFR. This model provides a number of configurable parameters that can be fine-tuned according to the problem. The selected parameters are presented next, followed by a configuration sketch.

    · n_estimators: This parameter specifies the number of trees to be used by RFR. This work evaluates the performance of the model with different values and chooses a value of 50 for this parameter, as it provides better results.

    · criterion: RFR uses this parameter for the measurement of the quality of a split. This study selects the Mean Squared Error (MSE) as the value of this parameter.

    · max_depth: This parameter specifies the maximum depth of each tree. This work does not impose any such limitation and uses the default value of None for this parameter.

    · bootstrap: This parameter is a boolean; if it is false, the model uses the entire dataset to construct each tree. This study uses the value True for this parameter.

    · max_samples: In case the bootstrap value is true, this parameter can be used to limit the size of the sub-samples. This study uses the default value, which is None.
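    A corresponding configuration sketch with scikit-learn’s RandomForestRegressor is given below; note that recent library versions name the MSE criterion "squared_error" (older releases used "mse"), and the data arrays are placeholders.

        # Sketch of the RFR configuration described above (assumed, not the
        # authors' code).
        from sklearn.ensemble import RandomForestRegressor

        rfr = RandomForestRegressor(
            n_estimators=50,            # 50 trees gave better results in this work
            criterion="squared_error",  # split quality measured by the MSE
            max_depth=None,             # no limit on tree depth
            bootstrap=True,             # fit each tree on a bootstrap sample
            max_samples=None,           # default: bootstrap samples of full size
        )
        # rfr.fit(X_train, y_train); y_pred = rfr.predict(X_test)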

    4.1.3 Gradient Boosting Regressor

    Gradient Boosting builds on the idea of AdaBoost, which starts by training a weak learner such as a decision tree, assigning an equal weight to each observation. Once the first tree is evaluated, it changes the weights of the observations in such a way that observations which are difficult to classify get increased weights and observations which are easy to classify get reduced weights. In the next iteration, another tree is grown on the newly weighted observations. The goal is to improve upon the predictions of the first tree. At the end of the second iteration, the model is an ensemble of the two trees. This process is continued until a specified number of iterations is reached.

    Gradient Boosting works in a similar way by training several models gradually, additively, and sequentially. It differs from AdaBoost in how the limitations of the weak learners are identified: in contrast to the AdaBoost model, gradient boosting identifies these limitations by using the gradients of a loss function such as the squared error. The loss function is a measure of how well the model’s coefficients fit the underlying data. Since this work attempts to predict the session of a user, the loss function is based on the error between the actual and the predicted session.
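    A minimal sketch of such a model with scikit-learn’s GradientBoostingRegressor follows; the paper does not report the GBR hyperparameters, so the values below are illustrative assumptions only.

        # Sketch of a GBR for session prediction (hyperparameters are assumed).
        from sklearn.ensemble import GradientBoostingRegressor

        gbr = GradientBoostingRegressor(
            loss="squared_error",  # gradients of the squared-error loss guide each stage
            n_estimators=100,      # number of sequential weak learners (illustrative)
            learning_rate=0.1,     # contribution of each new tree (illustrative)
        )
        # gbr.fit(X_train, y_train); y_pred = gbr.predict(X_test)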

    4.1.4 Support Vector Regressor

    The Support Vector Regressor (SVR) is a generalization of the Support Vector Machine (SVM) to regression problems. SVM models the binary classification problem as a convex optimization problem by finding the hyperplane that separates the classes with maximum margin; the support vectors define this hyperplane. To generalize SVM to SVR, an ε-insensitive region around the function is introduced. This region is termed the ε-tube. The tube turns the optimization problem for a continuous variable into finding the tube that best approximates the continuous variable while balancing the complexity of the model and the prediction error. In brief, it first finds the flattest tube that contains most of the training instances; it then solves the convex optimization problem, which has a unique solution. The training samples lying outside the tube boundary become the support vectors; these are the most influential instances impacting the shape of the tube [20].

    This study utilizes the SVR model provided by the Python machine learning library. This model provides several parameters that can be set to fine-tune the model for customized solutions. To describe the configuration of the model used in this work, its parameters are given below, followed by a configuration sketch.

    · Kernel: The Python machine learning library provides several kernel functions such as Linear, Polynomial, Sigmoid, and Radial Basis. Since the kernel function performs the optimization, its selection is not straightforward. Therefore, this work analyzed the performance of each kernel empirically on the dataset and chose the Radial Basis Function (RBF) kernel provided by [19].

    · Gamma: The gamma parameter determines the influence of the spread of a training sample. This influence is far-reaching for small values and close for large values. This work experiments with different values of this parameter and chooses 0.1 as its value in the developed model.

    · Epsilon (ε): The parameter ε controls the width of the epsilon-tube. This value influences the number of support vectors which are used to construct the regression function. A greater value of ε means fewer support vectors; nonetheless, greater values also mean flatter estimates. This work experiments with several values and chooses 0.1 as a suitable one.

    · C: This is called the regularization parameter; it manages the trade-off between achieving a low training error and a low testing error, and thereby generalizes the model to unseen data. Again, this work chooses the value of this parameter through experimentation, which is 100.
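    The settings above correspond to a scikit-learn SVR configured roughly as in the following sketch; the data arrays are placeholders.

        # Sketch of the SVR configuration described above (assumed, not the
        # authors' code).
        from sklearn.svm import SVR

        svr = SVR(
            kernel="rbf",  # radial basis function kernel, chosen empirically
            gamma=0.1,     # spread of each training sample's influence
            epsilon=0.1,   # width of the epsilon-tube
            C=100,         # regularization: training error vs. generalization trade-off
        )
        # svr.fit(X_train, y_train); y_pred = svr.predict(X_test)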

    4.1.5 Bayesian Network Model

    The joint distribution of a set of variables having possible mutual causal relationships is represented by a Bayesian network through a directed acyclic graph and conditional probability distributions. The acyclic graph consists of nodes and edges. Nodes represent variables that contain conditional probability distributions, while edges represent their causal relationships. The goal of a Bayesian network is to estimate a posterior conditional probability distribution of a target variable after observing the evidence. Each variable in a Bayesian network is conditionally independent of its non-descendants given the states of its parents. This property sometimes significantly reduces the number of parameters that are required to characterize the Joint Probability Distribution (JPD) of the variables. For a given Bayesian network over a set of variables $U=\{X_1,\ldots,X_n\}$, the joint probability distribution over $U$ can be computed as shown in Eq. (2).

    $P(U)=\prod_{i=1}^{n}P\left(X_i\mid \mathrm{parents}(X_i)\right)$    (2)

    Building a Bayesian network model has two parts, namely the structure and the parameters. The structure of the model places variables in relationships, while the parameters are set as prior probabilities of each node. Both of them can be built manually or learned from data [21,22]. This work builds the structure manually as well as learns it from data; however, no significant difference in the overall results of the two models is observed. Concerning the parameters, they are learned from data.

    4.2 Methodology

    The methodology this work adopts for the contextual approach is depicted in Fig. 4. It starts with the generation of the data and ends with the prediction of the session duration. The main stages are data generation and preparation of the dataset, model building, and validation. We discuss these major parts one by one.

    Figure 4:Contextual approach for stability estimation

    4.2.1 Data Generation and Preparation

    The dataset used earlier in the correlation analysis is limited in size; therefore, this work uses a synthetic dataset. This dataset is generated through a semi-Markovian model presented in [23]. The total number of instances generated from this model is 462,825. The dataset contains the following major features.

    · Join Time Tj: This field shows the time of the day at which a user joins a channel. The time is represented in minutes and its value ranges from 1 to 1440.

    · Departure Time Td: This feature again shows the time of the day at which a user quits a channel. The value of the departure time has the same range as that of the join time.

    · Session Duration: The time duration in minutes between the join time and the departure time is the session duration in our dataset. This variable shows the stability of a user, and the models are tested for accuracy over the estimation of this variable.

    · Arrival Rate: The arrival rate or joining rate represents the number of users joining during a particular minute of the day. This field contains the value of the arrival rate for each minute of the day.

    · Departure Rate: The departure rate represents the number of users leaving/quitting a channel during a specific minute of a day. The departure rate for each minute is recorded in this field.

    · Population: Just like the arrival and departure rates, the population is also a collective variable that contains the count of online users at a specific minute of the day.

    · Content Type: The model contains three generic kinds of contents, which are fiction, sports and real. The dataset records the type of the content being broadcast at each minute.

    In the feature extraction phase, this work selects the variables of interest in light of the work presented in [18]. It calculates the session duration variable from the join time and the departure time and selects the variables shown in Fig. 5. Here, the session duration is the target variable while the other variables have influencing relationships with it.

    Figure 5:Session duration and related variables

    The dataset is split into a training set and a test set (validation set) using the holdout method through the train_test_split function of the Python library [19]. The random_state parameter is set to 5, which ensures the same random number sequence at each run. The parameter test_size is set to 0.3, which randomly places 30% of the instances of the actual dataset into the test set and the remaining into the training set.
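    A sketch of this holdout split with scikit-learn is shown below; the feature matrix and target vector here are synthetic placeholders standing in for the influencing variables and the session duration.

        # Sketch of the holdout split described above (placeholder data).
        import numpy as np
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.random((1000, 6))       # placeholder features (join time, rates, ...)
        y = rng.integers(1, 240, 1000)  # placeholder session durations in minutes

        X_train, X_test, y_train, y_test = train_test_split(
            X, y,
            test_size=0.3,   # 30% of the instances form the test (validation) set
            random_state=5,  # fixed seed reproduces the same split on every run
        )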

    4.2.2 Model Building and Validation

    This work configures each machine learning model according to the specifications given in the description of the models above. Then, it builds every model using the training subset. To test (validate) the models, this work uses the instances of the test set. Here, the session duration (target variable) is dropped and is predicted by each model for its accuracy assessment during the validation step.

    4.3 Experiments and Results

    As discussed above, each of these models is assessed over the prediction of the session duration. To evaluate the accuracy of each model, this work uses three error measures, namely the Mean Absolute Error (MAE), the Mean Squared Error (MSE) and the Root Mean Squared Error (RMSE).

    · MAE: It is the average of the absolute differences between the actual session and the predicted session. The MAE is a linear score, which means that all the individual differences are weighted equally in the average. It can be computed through Eq. (3).

    $MAE=\dfrac{1}{n}\sum_{i=1}^{n}\left|y_i-\hat{y}_i\right|$    (3)

    where $n$ denotes the number of records in the dataset, $y$ is the actual session and $\hat{y}$ is the session predicted by the model; $y_i$ and $\hat{y}_i$ are the actual and predicted values, respectively, for the i-th session.

    · MSE: It calculates the squared difference between the session predicted by the model and the actual session for each point, and then averages these values. The MSE can be computed through Eq. (4).

    $MSE=\dfrac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^{2}$    (4)

    · RMSE: The RMSE is calculated in two stages. First, the MSE is computed; then, its square root is taken so that the error is expressed in the same units as the target. The RMSE can be computed as in Eq. (5).

    $RMSE=\sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^{2}}$    (5)
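    For reference, the three measures can be computed with scikit-learn as in the sketch below; the session values are illustrative only.

        # Sketch: MAE, MSE and RMSE over actual vs. predicted session durations.
        import numpy as np
        from sklearn.metrics import mean_absolute_error, mean_squared_error

        y_true = np.array([12, 45, 3, 120])  # actual sessions in minutes (illustrative)
        y_pred = np.array([10, 50, 5, 110])  # predicted sessions (illustrative)

        mae = mean_absolute_error(y_true, y_pred)
        mse = mean_squared_error(y_true, y_pred)
        rmse = np.sqrt(mse)  # square root puts the error back on the target scale
        print(mae, mse, rmse)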

    To compare the results of the newly tested models (GBR, KNNR and RFR) to the previously used models (BN and SVR), the MAE, MSE and RMSE of all these models are depicted in Fig. 6. It can easily be noted from the figure that GBR, KNNR and RFR produce lower errors than SVR and BN, hence showing more accuracy.

    Concerning the MAE, the RFR model produces the lowest error, which is 0.83; this means that RFR performs better with respect to outliers. However, being an absolute error, this metric is not considered a good choice for comparing models. Furthermore, the MAE does not assign weights to errors, such as a higher weight to larger errors. Therefore, to compare the accuracy of the models, this study focuses on the MSE and RMSE. It can be noted that GBR produces the lowest MSE and RMSE among all the tested models. Therefore, based on these two metrics, GBR emerges as the best model for this problem.

    Figure 6:Models’accuracy

    To have a look at the individual predictions, a scatter plot of actual vs. predicted values for each of GBR, KNNR, RFR, SVR and BN is shown in Fig. 7. In each plot, the actual sessions are shown on the x-axis and the predicted sessions on the y-axis. For an ideally perfect model, the predicted session would be exactly the actual session, and therefore the points would lie on the diagonal. Hence, all the over-estimations appear above the diagonal and the under-estimations appear below it. Similarly, more points near the diagonal with a low spread indicate an overall low error.


    Figure 7: Actual vs. predicted session durations (in minutes). (a) GBR (b) KNNR (c) RFR (d) SVR (e) BN

    Comparing these plots affirms the overall performance of each model discussed earlier. Here, all three newly proposed models show better accuracy than the previous models. Among the three, GBR produces the least spread of points around the diagonal and therefore shows better accuracy than all the compared models.

    5 Stabilization of P2P Overlays

    The predictions of the machine learning models discussed above can be used to stabilize the P2P overlay. Since there are several stream distribution strategies in P2P IPTV networks, each driving a different overlay structure, stabilization can be applied in multiple ways. For instance, a tree can be constructed in such a way that the stable peers are placed near the root while the unstable ones are joined at the leaves. Furthermore, algorithms to maintain a stable tree during frequent arrivals and departures are required. Similarly, in a hybrid system, a stable peer may be chosen for the push operation in order to minimize the pull operations.

    This work presents a proof-of-concept simulation which considers a single-tree system. The data are taken from one day of logs of the same 10,000 users that were used as the dataset in the evaluation of the machine learning models, and three cases of tree construction are compared. In the first case, a tree is built randomly in which peers arrive and leave the system according to their actual join time and departure time; it does not involve any stabilization mechanism and is termed random in this work. The second case involves the older-stable principle, which constructs and maintains the tree based on the elapsed time of the user; the peer that has been present for longer is considered the stable one and is placed higher in the tree. Finally, we construct the tree based on the sessions predicted by the machine learning model. The results of GBR are utilized in this case, as GBR was the most accurate among all tested models.

    The simulation starts at minute 1 and simulates the on and off time of each user as given in the dataset. At each minute it checks the join time of each user, and if it matches the current time, it lets the user join an existing peer as a child. Each peer has an outdegree between 1 and 5, which is randomly assigned. Similarly, the departure time of each user is checked at each minute, and if it matches the current time, the peer exits. At the same time, the number of descendant peers of the departing peer is recorded, which shows the impact of the departure. The tree is periodically re-stabilized every 3 minutes according to the applied stabilization strategy, as sketched below.
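    The following is a much simplified sketch of such a simulation loop, not the authors’ simulator: it assumes a fixed outdegree limit, re-attaches orphaned subtrees at the root, and leaves the periodic stabilization step as a placeholder where peers would be re-ranked by the chosen score (random, elapsed time, or predicted session length).

        # Sketch of the single-tree experiment (simplified assumptions, see lead-in).
        import random

        def descendants(tree, node):
            # Count every peer below `node`, i.e. the peers disrupted if it leaves.
            return sum(1 + descendants(tree, child) for child in tree.get(node, []))

        def simulate(sessions, score, minutes=1440, period=3, max_out=5):
            # sessions: {peer: (join_minute, leave_minute)}; score: stability estimate.
            tree, disruptions = {"root": []}, 0
            for t in range(1, minutes + 1):
                for peer, (join, leave) in sessions.items():
                    if join == t:  # attach under a peer with spare outdegree
                        parent = random.choice(
                            [p for p in tree if len(tree[p]) < max_out] or ["root"])
                        tree[parent].append(peer)
                        tree[peer] = []
                    if leave == t and peer in tree:
                        disruptions += descendants(tree, peer)  # impacted descendants
                        orphans = tree.pop(peer)
                        for children in tree.values():
                            if peer in children:
                                children.remove(peer)
                        tree["root"].extend(orphans)  # orphans rejoin at the root
                if t % period == 0:
                    pass  # stabilization placeholder: move high-score peers upward
            return disruptions

        # Toy run with three peers and hypothetical predicted session lengths.
        demo = {"A": (2, 50), "B": (3, 10), "C": (4, 30)}
        print(simulate(demo, score={"A": 48, "B": 7, "C": 26}))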

    As discussed earlier, the intermittent presence of peers is a major issue in P2P IPTV systems; this experiment therefore analyzes the impact of abrupt departures. The sudden departure of an upstream peer discontinues the stream to all its descendant peers. Therefore, on each departure, this number is counted to show the resilience of a particular tree against such departures.

    The simulation is run for each case and the number of impacted peers at every minute is recorded. Fig. 8a depicts the number of impacted peers in each case. To note the variations in the number of online users, the population is also shown in Fig. 8b. It is obvious from Fig. 8a that stabilization based on machine learning predictions reduces the impact of departures to a negligible level. On the other hand, stabilization based on the older-stable rule shows poor performance compared to the other approaches. This might be due to the fact that this dataset does not contain idle sessions; therefore, correlation-based prediction produces poor results. It also emphasizes the need for a more robust mechanism such as machine learning.

    Figure 8: Stabilization comparison. (a) Disruptions (b) Population

    6 Conclusion

    This work focuses on the stability of peers in a P2P IPTV architecture. It first extensively analyzed the older-stable principle, the widely used approach for the determination of stable nodes. For this purpose, traces of IPTV systems were analyzed for the correlation of elapsed time and remaining time in several scenarios. This analysis revealed that the older-stable principle holds when the correlation is calculated over the sessions of all users together. However, when different behavior groups are separated, this principle does not remain true, as very weak or no correlation is observed. Moreover, in the case of individual users, again no correlation was found.

    Secondly, machine learning algorithms were evaluated for their accuracy in predicting the stability. This study considered GBR, KNNR and RFR and compared their performance with the previously used models, namely SVR and the Bayesian network. Over the same training and testing data, all these algorithms performed better than SVR and the Bayesian network. Furthermore, GBR produced the best accuracy among the newly tested models.

    Finally, a proof-of-concept simulation was conducted which considered the construction of a tree structure based on three methods. In one method, the tree was randomly constructed without any stability mechanism, while in the other two methods the tree was built and stabilized based on the older-stable principle and machine learning-based predictions, respectively. The simulation results revealed that machine learning-based stabilization can bring significant improvement to the system.

    7 Future Work

    This work presents a small-scale simulation which only considers a single tree. Therefore, future work needs to consider machine learning-based hybrid topologies. Furthermore, concrete experiments on a real network are needed to validate the system for implementation in deployed systems. Machine learning can also be utilized to include other parameters such as the available bandwidth and delay. The same approach can be extended to other user-oriented networks.

    Funding Statement: The authors received no specific funding for this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
