
    Fifth Paradigm in Science: A Case Study of an Intelligence-Driven Material Design

    2023-11-14 08:01:54

    Engineering 2023, Issue 5

    Can Leng, Zhuo Tang*, Yi-Ge Zhou, Zean Tian, Wei-Qing Huang, Jie Liu, Keqin Li, Kenli Li*

    a Science and Technology on Parallel and Distributed Processing Laboratory, National University of Defense Technology, Changsha 410073, China

    b Laboratory of Software Engineering for Complex Systems, National University of Defense Technology, Changsha 410073, China

    c National Supercomputing Center in Changsha, Changsha 410082, China

    d College of Computer Science and Electronic Engineering, Hunan University, Changsha 410082, China

    e Institute of Chemical Biology and Nanomedicine, State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering,Hunan University, Changsha 410082, China

    f Department of Applied Physics, School of Physics and Electronics, Hunan University, Changsha 410082, China

    g Department of Computer Science, State University of New York, New Paltz, NY 12561, USA

    Keywords: Catalytic materials; Fifth paradigm; Intelligence-driven; Machine learning; Synergy of interdisciplinary experts

    Science is entering a new era, the fifth paradigm, in which knowledge from different fields is integrated into intelligence-driven work in the computational community on the basis of omnipresent machine-learning systems. Here, we illuminate the nature of the fifth paradigm through a typical platform case, designed for catalytic materials and built on the Tianhe-1 supercomputer system, with the aim of promoting the cultivation of the fifth paradigm in other fields. This fifth paradigm platform mainly encompasses automatic model construction (raw data extraction), automatic fingerprint construction (neural network feature selection), and repeated iterations concatenated by interdisciplinary knowledge (the "volcano plot"). Along with this dissection, we evaluate the performance of the architecture as implemented across iterations. As the discussion shows, the intelligence-driven platform of the fifth paradigm can greatly simplify and improve extremely cumbersome and challenging research work, and can realize mutual feedback between numerical calculation and machine learning: the former compensates for the lack of samples in machine learning, while the latter replaces some numerical calculations that would otherwise be limited by insufficient computing resources, thereby accelerating the exploration process. Challenges remain in the synergy of interdisciplinary experts and in the dramatic rise in demand for on-the-fly data in data-driven disciplines. We believe that this glimpse of a fifth paradigm platform can pave the way for its application in other fields.

    1. Introduction

    The earth-shaking changes in human society are inseparable from the exploration of nature. Such transformative changes have evolved from a focus on natural observation to realization through various tools and cutting-edge methods [1,2]. In this process, different normative development paradigms, covering the overall and interrelated assumptions of various disciplines, have been formed [3,4]. Each paradigm shift results from changes in the basic assumptions of the ruling theory of a certain era to meet subsequent requirements, thereby creating a new paradigm [5]. The fifth paradigm has now been characterized as the intelligence-driven, knowledge-centric research paradigm following the shift from the data-intensive fourth paradigm, which itself came on the heels of the experimentation, theory, and computer-simulation paradigms (the first to the third) [6–10].

    In the fifth-paradigm world view, the exploration of the physical universe is not merely projected through the mathematically probable realm of intensive data driven by intelligence; the entire research process also involves the conscious integration of human expert knowledge. Based on these features, an application of the fifth paradigm can be regarded as a cognitive system or cognitive application [9,10]. Taking the development of materials science as an example, the cognitive system of the fifth paradigm evolved from the primitive early paradigms via a classic evolutionary spiral: materials such as metals and ceramics were discovered and used in ancient times, before the emergence of Newton's laws and the advent of the theory of relativity. The emergence of relativity and quantum mechanics then made it possible to simulate the electronic structure of molecules [11–13]. In recent years, the meteoric rise of artificial intelligence (AI) and machine learning has transformed data-driven materials design [14–18]. By applying these innovative technologies to ever-larger datasets, the hidden properties of new materials, such as metals and ceramics, can be revealed [19–22]. Cognitive materials design has since taken the relay baton, forming a new ecosystem through the intellectual collaboration of interdisciplinary experts and thus greatly accelerating the exploration process.

    At present, the fifth paradigm is in its emergent period and still has a long way to go. Unlike the mature fourth paradigm of data-intensive science, which has exploded rapidly across application domains and is used in industrial and scientific fields such as self-driving cars, computer vision, and brain modeling [23–27], the intelligence-driven, knowledge-centric fifth paradigm is still in a stage of vigorous development, because it must break through the boundaries of the computational and data-intensive paradigms to form a new ecosystem by merging and extending existing technologies. Fortunately, scientists are now on the road to researching and solving these problems. For example, the Spark–message passing interface (MPI) integrated platform proposed by Malitsky et al. [10] can be used to promote the transformation of the fourth-paradigm processing pipeline, represented by data-intensive applications, into the fifth paradigm of knowledge-centric applications. Cognitive computing capabilities, such as natural language processing, knowledge representation, and automatic reasoning, are exactly what Zubarev and Pitera [9] suggested the fifth paradigm should possess. Furthermore, common aspects among diverse computing applications can be inferred in the fifth paradigm by integrating expert knowledge from different fields with the intensive data from experimental observation and theoretical simulation, steering the development of complementary solutions to meet emerging and future challenges. Therefore, although the task of developing the fifth paradigm is arduous, the prospects for its application are broad.

    The strategic transition from data-intensive science toward the fifth paradigm of composite cognitive computing applications is a long-term journey with many unknowns. This paper addresses the fifth paradigm platform by dissecting a framework called generalized adsorption simulations in Python (GASpy; https://github.com/ulissigroup/GASpy) in catalytic materials [28], aiming to bring together human wisdom, algorithms in high-performance scientific computing, and deep-learning approaches for tackling new frontiers of data-driven discovery applications. The remainder of the paper is organized as follows. Section 2 provides a brief overview and discussion of the fifth paradigm platform. Section 3 elaborates on the performance evaluation of the platform. Finally, Section 4 concludes with a summary.

    2. A platform of the fifth paradigm

    In materials research, processing the synergy among experimental data, theoretical models, and machine learning requires experts in different fields to collaboratively analyze and process data; that is, a great deal of human wisdom is needed. An intelligence-driven function with knowledge-centric characteristics, one that combines each link with versatility and operates in a platform-like manner, is therefore particularly important. Here, we introduce a platform of the fifth paradigm used in catalytic materials, as shown in Fig. 1. The platform couples the third and fourth paradigms, and these in turn include the processes of the first and second paradigms. The original data come from experimental observations in the first paradigm, theoretical guidance in the second paradigm, and numerical calculations in the third paradigm; these data can then be intelligence-driven by machine learning in the fourth paradigm. By integrating the knowledge of experimental and theoretical experts, the materials selected by machine learning can be screened a second time, and the screening results are fed back into the numerical simulation of the third paradigm. The results obtained in the third paradigm can again be driven by the data in the fourth paradigm, and the prediction results can again be filtered through the integrated knowledge of experimental and theoretical experts and fed back to the third paradigm for numerical simulation. These approaches produce the fifth paradigm platform, which continuously provides samples for machine learning by intelligently controlling the calculation of high-throughput physical models, compensating for the lack of machine-learning samples. Moreover, by using knowledge integrated from different fields, machine learning can replace part of the numerical calculation, addressing the time cost of massive models under insufficient computing resources.

    The comprehensive work of the fifth paradigm platform stems from the framework designed by Tran and Ulissi [28] for bimetallic catalyst research in materials science, which uses machine learning to accelerate numerical calculations based on density functional theory (DFT), conducted with the Vienna ab initio simulation package (VASP) [29], and can intelligently drive the discovery of high-performance electrocatalysts. The platform can classify the active sites of each stable low-index surface of bimetallic crystals, resulting in hundreds of thousands of possible active sites. At the same time, a surrogate model based on artificial neural networks is used to predict the catalytic activity of these sites [30]. The discovered sites with high activity can then be used for future DFT calculations.

    2.1. Automatic model construction and verification

    Fig. 1. The paradigms in science. The scientific paradigm has evolved from the simple first paradigm to the complex fifth paradigm. The core of the fifth paradigm is knowledge-centric and intelligence-driven, encompassing the successive first to fourth paradigms, marked by experiment, theory, simulation, and data-driven processes, respectively.

    In the fifth paradigm platform, the ability to drive raw data extraction by intelligence is reflected in automatic model construction and verification. Ever-larger structures, with and without adsorbates, can be automatically constructed and verified by DFT calculations. Because the adsorption of surface species is an indispensable process in heterogeneous catalysis, constructing many structures in experiments and DFT calculations can be time-intensive before the catalytic activity is determined by evaluating the adsorption energy. Automated model construction and verification are therefore essential to solving this problem.

    As shown in Fig. 2, the entire task calculation includes the preparation of raw data for standard simulation, followed by the numerical calculation. All the raw data used for the theoretical simulation come from the Materials Project website. Data retrieval is realized by the gas/bulk generation module through the Generate_Gas/Generate_Bulk functions; the data are processed into a list form with items for user information, task location, calculation status, and other attributes, and are stored in the database by the update_atom_collection function, with collections named "Firework," "Atoms," "Catalog," and "Adsorption."
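The bookkeeping above can be sketched in a few lines. This is a schematic stand-in, not GASpy's actual implementation: the record fields, the `mp-30` identifier, and the in-memory dictionary standing in for MongoDB are illustrative assumptions; only the collection names follow the text.

```python
# Schematic sketch of the raw-data bookkeeping described above. The in-memory
# DATABASE dict stands in for the MongoDB collections named in the text.
from datetime import datetime, timezone

DATABASE = {"Firework": [], "Atoms": [], "Catalog": [], "Adsorption": []}

def generate_bulk(mpid, user):
    """Build a task record for one bulk structure (hypothetical record layout)."""
    return {
        "mpid": mpid,                 # e.g., a Materials Project ID (illustrative)
        "user": user,                 # user information
        "task_location": f"/tasks/{mpid}",
        "status": "READY",            # calculation status
        "created": datetime.now(timezone.utc).isoformat(),
    }

def update_atom_collection(record, collection="Atoms"):
    """Append a task record to the named 'collection' (MongoDB stand-in)."""
    DATABASE[collection].append(record)
    return len(DATABASE[collection])

n = update_atom_collection(generate_bulk("mp-30", "alice"))
print(n, DATABASE["Atoms"][0]["status"])  # 1 READY
```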

    Fig. 2. The framework of this fifth paradigm case. The intelligence-driven raw data extraction in the GASpy framework is realized through modules of atomic operation, generation, and calculation. (a) This module automatically calculates the adsorption energy from the gas and slab phases in the fifth paradigm platform. (b) This module automatically creates high-throughput tasks for the optimization of gas, bulk, and adslab structures with/without adsorbates through FireWorks. (c, d) These modules represent the (c) slab generation and (d) gas generation and structural relaxation described in part (a).

    The relaxation calculation of the task in Fig. 2(a) can then be generated by the FireWorks workflow manager for submission in Fig. 2(b). The results attribute in FireWorks contains "gas phase optimization" in list format for gas relaxation, as well as "unit cell optimization" for bulk optimization (bulk_relaxation). The "status" attribute records the calculation status: "COMPLETED," "RUNNING," "READY," and other statuses such as "FIZZLED." It is inspected by the Find_Bulk/Find_Gas functions, which either store a completed calculation in the Atoms collection or generate a FireWorks task workflow for a calculation that has not yet started.

    If the status determined by Find_Bulk/Find_Gas is "COMPLETED," the calculated result is stored in the database. In addition, the irreducible crystal-face indices are enumerated (by the EnumerateDistinctFacets function) from the optimized crystal structure retrieved from the Atoms collection; the crystal is then cut to generate slabs (by the GenerateSlabs function) according to the given Miller index; and all adsorption sites on each slab are found (by the GenerateAdsorptionSites function) by extending primitive units (the Atom_operates function), enumerating crystal slabs, and adding adsorbates, as shown in Figs. 2(c) and (d). For all adsorption sites on all bulk materials, the GenerateAllSitesFromBulks function, composed of the EnumerateDistinctFacets and GenerateAdsorptionSites functions, can enumerate the irreducible Miller indices of each slab and generate all adsorption sites. All such information is written into the Catalog collection by the update_catalog_collection function.

    Furthermore, for each slab whose adsorption sites have been found, adsorbates are added to the adsorption sites by the GenerateAdslabs function to generate a "slab + adsorbate optimization" calculation model (adslab_relaxation); the adsorbates can also be removed by the GenerateAdslabs function to generate a "bare slab optimization" calculation model (bare_slab_relaxation). These calculation models can then be submitted for calculation through the FireWorks workflow manager.

    When completed, all calculated results are stored in the database collections by the update_atom_collection function. The Find_Adslab function determines whether a relaxation task should be started by checking whether a corresponding calculated result already exists in the Atoms collection. For the adsorption energy (E_ad) calculation, the CalculateAdsorptionEnergy function extracts the gas energy E_adsorbates, the adsorbate_slab energy E_adsorbate_slab, and the bare_slab energy E_bare_slab from the Atoms collection: E_ad = E_adsorbate_slab − E_bare_slab − E_adsorbates. The E_ad and the associated initial and final structure information can then be added to the Adsorption collection by the update_adsorption_collection function, from which the neural network features discussed next are extracted as the input of machine learning. Thus, the process of intelligence-driven model construction and verification is realized.
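The adsorption-energy arithmetic above is simple enough to show directly. The energies below are made-up illustrative numbers (in eV), not DFT results, and the function is a simplified stand-in for GASpy's CalculateAdsorptionEnergy.

```python
# E_ad = E_adsorbate_slab - E_bare_slab - E_adsorbates, with illustrative
# placeholder energies in eV (not real DFT outputs).
def calculate_adsorption_energy(e_adsorbate_slab, e_bare_slab, e_adsorbates):
    """Adsorption energy from the three relaxed-system total energies."""
    return e_adsorbate_slab - e_bare_slab - e_adsorbates

e_ad = calculate_adsorption_energy(-245.60, -241.95, -3.38)
print(round(e_ad, 2))  # -0.27
```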

    2.2. Automated fingerprint construction

    The intelligence-driven quality of neural network feature selection is reflected in the automatic fingerprint construction of the fifth paradigm platform. In this framework, the automatically constructed fingerprint converts the full atomic structure of each material adsorption model into a graphical representation that serves as the numerical input of a convolutional neural network (CNN) [31]. Three types of features are considered in the atomic structure information, as shown in Fig. 3: the atomic feature (FN1), the neighbor feature (FN2), and the connection distance (FN3). The basic atomic properties in the atomic feature are atomic number, electronegativity, coordination number/covalent radius, group, period, valence electrons, first ionization energy, electron affinity, block, and atomic volume. The neighbor feature comprises the coordination numbers between adjacent atoms near the adsorption site, calculated by the Voronoi polyhedron algorithm [32]. The connection distances are the distances from the adsorbate to all atoms. The target of the fingerprint is the adsorption energy (E_adN).
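Assembling the {FN1, FN2, FN3} fingerprint for one adsorption model can be sketched as follows. The property values, coordination numbers, and coordinates are hypothetical; in the real pipeline, FN2 comes from a Voronoi analysis of a relaxed structure.

```python
# Illustrative fingerprint assembly: FN1 per-atom properties, FN2 coordination
# numbers, FN3 adsorbate-atom distances. All values are placeholders.
import math

def build_fingerprint(atoms, adsorbate_xyz):
    fn1 = [(a["Z"], a["electronegativity"], a["group"], a["period"])
           for a in atoms]                       # FN1: basic atomic properties
    fn2 = [a["coordination"] for a in atoms]     # FN2: neighbor coordination
    fn3 = [math.dist(adsorbate_xyz, a["xyz"])    # FN3: connection distances
           for a in atoms]
    return {"FN1": fn1, "FN2": fn2, "FN3": fn3}

slab = [
    {"Z": 29, "electronegativity": 1.90, "group": 11, "period": 4,
     "coordination": 9, "xyz": (0.0, 0.0, 0.0)},    # surface-like Cu atom
    {"Z": 29, "electronegativity": 1.90, "group": 11, "period": 4,
     "coordination": 12, "xyz": (2.55, 0.0, 0.0)},  # bulk-like Cu atom
]
fp = build_fingerprint(slab, adsorbate_xyz=(0.0, 0.0, 1.8))
print(fp["FN2"], [round(d, 2) for d in fp["FN3"]])  # [9, 12] [1.8, 3.12]
```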

    Fig. 3. The intelligence-driven neural network feature selection in the fifth paradigm platform, realized by automatic fingerprint construction in the GASpy framework. (a) The DFT calculation is schematically viewed as an example dataset (N is the number of training examples); (b) automatic fingerprint construction is achieved by a predictive model through the fingerprinting and learning steps; (c) the learning problem is stated, after which some materials are discarded from the learning results through the scaling relationship, and further DFT calculation screening is carried out.

    The process of automatic fingerprint construction includes extracting the final structures and adsorption energies from DFT calculations, fingerprint generation, machine learning, and the statement of the learning problem. The fingerprint constructed in GASpy comes from both the original model without DFT calculation and the DFT calculation result. After DFT calculation, the initial targets E_adN are obtained, as shown in Fig. 3(a); these DFT-relaxed structures are then used to extract fingerprints {FN1, FN2, FN3} for learning and prediction, as shown in Fig. 3(b). These features are used as a cross-validation dataset in machine learning, and the function f is found by the learning process for the next prediction. In the prediction process, fingerprints obtained from the initial structure, without any DFT calculation, are used to predict the adsorption energy of material X, as shown in Fig. 3(c); the DFT calculation candidates required for the next cycle are then screened through the learning problem. This learning problem is determined by the famous scaling relationship [33,34], as shown in Fig. 3. The scaling relationship is the adsorption energy–catalytic activity (also known as binding energy–catalytic activity) curve, which rises first and then declines like a volcano and is thus known as the "volcano plot."

    The adsorption energy and catalytic activity data in the scaling relationship come from many attempts by theoretical and experimental scientists and are further used by AI experts to screen the results of machine learning. Hence, the knowledge-centric collaboration of these interdisciplinary experts forms this fifth paradigm platform. With the help of the knowledge-centric module, the predicted materials described in Fig. 3(c) are further exploited: materials whose predicted adsorption energies do not match the "volcano plot" are discarded, and only those that match it are further quantified by DFT calculation. In the next cycle, the exploited candidates are calculated again by DFT, and the dataset grows through exploration. As the types of materials calculated by DFT increase, the dataset size also increases. This automated exploration-and-exploitation process constantly updates the number of fingerprints.
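The volcano-plot screening step amounts to an interval test on predicted adsorption energies. A minimal sketch, assuming the HER near-optimal window [−0.37 eV, −0.17 eV] quoted later in the paper; the material names and predicted values are hypothetical.

```python
# Keep only candidates whose predicted E_ad (eV) falls in the near-optimal
# window around the volcano apex. Predictions below are made-up examples.
def screen_by_volcano(predictions, low=-0.37, high=-0.17):
    """Return materials whose predicted adsorption energy lies in [low, high]."""
    return [m for m, e_ad in predictions.items() if low <= e_ad <= high]

predicted = {"Cu3Pt(111)": -0.21, "Ag(111)": 0.45,
             "NiGa(110)": -0.30, "Au(111)": 0.30}
print(screen_by_volcano(predicted))  # ['Cu3Pt(111)', 'NiGa(110)']
```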

    2.3. The theoretical model for both DFT calculation and machine learning

    In the fifth paradigm platform, the Kohn–Sham theory and a method that integrates a CNN with a Gaussian process (GP) [31,35–37] are the core theoretical models of the DFT and machine-learning processes, respectively. We therefore briefly introduce these theoretical models.

    2.3.1. The theoretical model for DFT calculation

    In the numerical calculation, namely the DFT calculation, the adsorption-energy calculation mainly involves optimizing each slab by continuously adjusting the atomic and electronic structure to reach the most energy-stable state. This can be achieved by approximately solving the many-body Schrödinger equation of quantum mechanics, and solving the Kohn–Sham equations of DFT is one of the main methods for this approximate solution.

    The Kohn–Sham total-energy functional is

    $$E[n(\mathbf{r})] = T[n(\mathbf{r})] + \int v(\mathbf{r})\,n(\mathbf{r})\,\mathrm{d}^3 r + \frac{1}{2}\iint \frac{n(\mathbf{r})\,n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}^3 r\,\mathrm{d}^3 r' + E_{\mathrm{xc}}[n(\mathbf{r})] \qquad (1)$$

    with the kinetic energy of the occupied orbitals and the electron density given by

    $$T[n(\mathbf{r})] = -\frac{\hbar^2}{2m}\sum_{i=1}^{K}\int \psi_i^{*}(\mathbf{r})\,\nabla^2\psi_i(\mathbf{r})\,\mathrm{d}^3 r, \qquad n(\mathbf{r}) = \sum_{i=1}^{K}\left|\psi_i(\mathbf{r})\right|^2$$

    Given a system that contains K ions, namely K occupied orbitals in three-dimensional coordinate space r, ψi(r) refers to the wave function of orbital i at coordinate r, while ψi*(r) is its conjugate. n(r) is the local electron density, namely the probability of finding an electron at r. E[n(r)] is the energy of the total system, ℏ is the reduced Planck constant, and m is the particle's mass. εxc[n(r)] is the exchange–correlation energy density of a homogeneous electron gas with the local electron density n(r), and Exc[n(r)] refers to the exchange and correlation energies; for example, the local density approximation, one of the exchange–correlation functionals, takes only the uniform electron-gas density as a variable, while the generalized gradient approximation considers both the electron density and the gradient of the density as variables. v(r) is the potential energy at position r. Hence, the first term T[n(r)] in Eq. (1) refers to the kinetic energy, the second term ∫v(r)n(r)d3r is the external potential energy, and the third term refers to the Hartree energy (electron–electron repulsion), where r′ is a coordinate displaced relative to r and bold r represents the position vector. ∇ is the vector differential operator, and ∇2 is the Laplacian.

    A self-consistent iterative procedure is described as follows. Given an initial electron density n(r) obtained from all occupied orbitals of an arbitrary initial wave function ψ0(r), each iteration n constructs the effective potential from the current density,

    $$v_{\mathrm{eff}}(\mathbf{r}) = v(\mathbf{r}) + \int \frac{n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}^3 r' + \frac{\delta E_{\mathrm{xc}}[n(\mathbf{r})]}{\delta n(\mathbf{r})}$$

    solves the single-particle Kohn–Sham equations,

    $$\left[-\frac{\hbar^2}{2m}\nabla^2 + v_{\mathrm{eff}}(\mathbf{r})\right]\psi_i^{n+1}(\mathbf{r}) = \varepsilon_i\,\psi_i^{n+1}(\mathbf{r})$$

    and rebuilds the density from the updated orbitals. The iterative procedure exits when ψn+1(r) − ψn(r) reaches the required minimum convergence standard, and Ead can then be calculated as the energy gap between Eadsorbate_slab and Ebare_slab + Eadsorbates.
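The control flow of such a self-consistent loop can be sketched independently of any real electronic-structure solver. In this toy illustration, the "solve" step is a made-up fixed-point map standing in for the build-potential/solve/rebuild-density cycle; only the iterate-until-converged structure is the point.

```python
# Toy self-consistent-field (SCF) loop: iterate density -> new density until
# the change falls below a tolerance. The solve_step here is a stand-in map,
# not a real Kohn-Sham solve.
def scf_loop(initial_density, solve_step, tol=1e-8, max_iter=200):
    """Iterate until |n_new - n| < tol, with simple linear density mixing."""
    n = initial_density
    for iteration in range(1, max_iter + 1):
        n_new = solve_step(n)            # build v_eff, solve KS, rebuild density
        if abs(n_new - n) < tol:         # convergence criterion
            return n_new, iteration
        n = 0.5 * n + 0.5 * n_new        # mixing stabilizes the iteration
    raise RuntimeError("SCF did not converge")

# Stand-in map with fixed point at n = 2.0 (purely illustrative).
density, iters = scf_loop(1.0, lambda n: 0.5 * n + 1.0)
print(round(density, 6))  # 2.0
```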

    2.3.2. The theoretical model for machine learning

    The convolution-fed Gaussian process (CFGP) [37] is a method in which the pooled outputs of the convolutional layers of a network supply features to a GP regressor [38], which is then trained to produce both mean and uncertainty predictions of the adsorption energies. The underlying CNN was applied by Chen et al. [39] and Xie and Grossman [40] on top of a graph representation of bulk crystals to predict various properties and was further modified by Back et al. [31] to collect neighbor information using Voronoi polyhedra [32] for predicting binding energies (for example, the adsorption energy) on heterogeneous catalyst surfaces. In the CFGP method, a complete CNN is first trained to fix the final network weights. All the pooled outputs of the convolutional layers are then used as features in a new GP, which is trained to produce both mean and uncertainty predictions of the adsorption energies.

    In the CFGP method, the crystal structure is represented by a crystal graph G, in which nodes encode atoms (with atomic-feature and neighbor-feature information) and edges represent connections between atoms in the crystal; a CNN is then constructed on top of this undirected multigraph [40]. Owing to the periodicity of crystal graphs, multiple edges are allowed between the same pair of end nodes. Each node is indexed by i and represented by a feature vector vi; similarly, each edge (i, j)k is represented by a feature vector u(i,j)k, corresponding to the kth bond connecting atom i and atom j. Considering the differences in interaction between each atom and its neighbors, the first convolutional layers iteratively update the atom features by

    $$z_{(i,j)_k} = v_i \oplus v_j \oplus u_{(i,j)_k}$$

    where z(i,j)k is the concatenated feature of atom i and atom j connected by the kth bond in crystal graph G, and ⊕ denotes the concatenation of atom and bond features. A nonlinear graph convolution function is then defined as follows:

    $$v_i' = v_i + \sum_{j,k} \sigma\!\left(z_{(i,j)_k} W_{\mathrm{f}} + b_{\mathrm{f}}\right) \odot g\!\left(z_{(i,j)_k} W_{\mathrm{s}} + b_{\mathrm{s}}\right)$$

    where ⊙ denotes element-wise multiplication, σ is a sigmoid function, and g is a nonlinear activation function (for example, "Leaky ReLU" or "Softplus"); W and b denote the weights and biases of the neural networks, respectively. The σ(·) term acts as a learned weight matrix over the different interactions between neighbors; f and s abbreviate "first" and "self," respectively. After R convolutional layers, the resulting vectors are fully connected via K hidden layers, followed by a linear transformation to scalar values. Distance filters built from the connection distances are then applied to exclude contributions of atoms that are too far from the adsorbates. A mean pooling layer is then used to produce an overall feature vector vc, which can be represented by a pooling function,

    $$v_c = \mathrm{Pool}\left(v_1, \ldots, v_N\right) = \frac{1}{N}\sum_{i=1}^{N} v_i$$

    The GP is then specified by a prior mean and a covariance function,

    $$f(v) \sim \mathcal{GP}\left(P(v),\, k(v, v')\right)$$

    where P(v) is the constant mean of the prior function and k(v, v′) is the Matérn kernel, with the length scale trained by maximum likelihood estimation; v and v′ refer to different feature vectors. All training and predictions were done with Tesla P100-PCIE GPU acceleration, as implemented in GPyTorch [41].
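The gated convolution update above can be demonstrated on tiny vectors in plain Python. This is a minimal sketch of the formula, not the CGCNN or CFGP implementation: the two-dimensional atom features, one-dimensional bond feature, and the shared weight matrix are arbitrary illustrative choices, with Softplus as the nonlinearity g.

```python
# Pure-Python sketch of the gated update v_i' = v_i + sum_j sigma(zW_f + b_f)
# * softplus(zW_s + b_s), where z is the concatenation v_i + v_j + u_ijk.
# All feature values and weights are tiny hypothetical examples.
import math

def sigmoid(x): return 1.0 / (1.0 + math.exp(-x))
def softplus(x): return math.log(1.0 + math.exp(x))   # the nonlinearity g

def matvec(W, z): return [sum(w * zi for w, zi in zip(row, z)) for row in W]

def conv_update(v_i, neighbors, W_f, b_f, W_s, b_s):
    """One gated graph-convolution update of atom feature v_i."""
    out = list(v_i)
    for v_j, u_ijk in neighbors:
        z = v_i + v_j + u_ijk                         # concatenation (list +)
        gate = [sigmoid(a + b) for a, b in zip(matvec(W_f, z), b_f)]
        core = [softplus(a + b) for a, b in zip(matvec(W_s, z), b_s)]
        out = [o + gk * ck for o, gk, ck in zip(out, gate, core)]
    return out

# 2-dim atom features and a 1-dim bond feature, so z has length 5.
W = [[0.1] * 5, [-0.1] * 5]
v_new = conv_update([1.0, 0.0], [([0.5, 0.5], [1.0])], W, [0.0, 0.0], W, [0.0, 0.0])
print([round(x, 3) for x in v_new])  # approximately [1.491, 0.236]
```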

    2.4. Iteration between machine learning and numerical calculations

    The intelligence-driven, knowledge-centric nature of the fifth paradigm platform is well depicted by the iterations between machine learning and numerical calculation, concatenated by the interdisciplinary knowledge of the "volcano plot." This breaks through the bottleneck of manually screening new materials between machine learning and numerical calculation and realizes the mutual promotion of scientific experiments and AI, as shown in Fig. 4(a). The process begins by fetching the primitive crystals (or primitive cells) from the Materials Project website and storing them in the database, together with the "volcano plot" information. The model is then automatically reconstructed to create a bulk of adsorption-energy calculation models. Through numerical calculation (i.e., ab initio DFT calculation), the optimized models and adsorption-energy data are stored in the database, and fingerprints are extracted from them to train a suitable machine-learning model. The trained model can then use fingerprints extracted from bulk materials that have not yet been theoretically calculated to predict their adsorption energies, which are stored in the database again. The adsorption-energy predictions are intelligently analyzed through the "volcano plot" to screen models that require further DFT calculation. The entire loop is thus ①②③④⑤⑥⑦⑧⑨⑩, ④⑤⑥⑦⑧⑨⑩, ..., ④⑤⑥⑦⑧⑨⑩.

    The cycle stops only when all the materials delivered to the framework have been calculated in the machine-learning or DFT processes. The characteristics of the fifth paradigm platform are well reflected in these steps. Step ⑤ indicates that the dataset obtained by numerical calculation remedies the absence or scarcity of datasets in the machine-learning process. Step ⑩ indicates that a bulk of numerical calculations can be abandoned with the help of machine-learning prediction and the "volcano plot," accelerating the entire DFT calculation. Moreover, the results of machine learning can be intelligently analyzed through the "volcano plot," which integrates the knowledge of experimental and theoretical scientists (the synergy of interdisciplinary experts), forming a knowledge-centric fifth paradigm driven by intelligence.
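The iteration loop above can be sketched as a small active-learning driver. This is a schematic stand-in: the "DFT" call simply reveals a stored true value, and the "trained model" is a perfect surrogate, whereas the real platform uses VASP and CFGP; material names and energies are hypothetical.

```python
# Schematic DFT <-> machine-learning iteration: train, predict, screen by the
# volcano window, run "DFT" on the hits, and repeat until no hits remain.
def iterate(candidates, true_e_ad, low=-0.37, high=-0.17, max_cycles=5):
    dft_data = {}                                   # materials already "calculated"
    pool = set(candidates)
    for cycle in range(max_cycles):
        if not pool:
            break
        # (train/predict) stand-in model: returns the stored true value
        predictions = {m: true_e_ad[m] for m in pool}
        # (screen) volcano-plot filter selects the next DFT batch
        hits = [m for m, e in predictions.items() if low <= e <= high]
        if not hits:
            break
        for m in hits:                              # (calculate) "DFT" on hits
            dft_data[m] = true_e_ad[m]
            pool.discard(m)
    return dft_data

true_e = {"A": -0.25, "B": 0.4, "C": -0.30, "D": -1.1}
result = iterate(list(true_e), true_e)
print(sorted(result))  # ['A', 'C']
```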

    2.5. Information science tools

    The framework of the fifth paradigm is built from various Python packages, for example, Python Materials Genomics (pymatgen), the atomic simulation environment (ASE), FireWorks, Luigi, and MongoDB [42–45]. Pymatgen is a powerful Python package for high-throughput materials calculations; it standardizes the initialization settings required before running high-throughput calculations and provides process analysis of the data they generate. ASE aims to set up, steer, and analyze atomistic simulations. FireWorks performs job management for high-throughput computing workflows running on high-performance computing clusters. Luigi can be used to build complex batch-job pipelines, handle dependency resolution, and conduct workflow management. MongoDB is written in C++, is used for real-time data storage, and supports the JavaScript Object Notation (JSON) data-exchange format.
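The dependency resolution that Luigi provides can be illustrated with a minimal pure-Python stand-in. This is not the Luigi API: the task names and the requires/run pattern are simplified hypothetical analogues showing how prerequisites run before the tasks that need them.

```python
# Minimal Luigi-style dependency resolution: each task declares prerequisites,
# and a depth-first resolver runs them in dependency order. Task names are
# hypothetical stand-ins for pipeline stages.
class Task:
    def requires(self): return []
    def run(self, log): log.append(type(self).__name__)

class RelaxBulk(Task):
    pass

class CutSlabs(Task):
    def requires(self): return [RelaxBulk()]

class AddAdsorbates(Task):
    def requires(self): return [CutSlabs()]

def build(task, log, seen=None):
    """Run all prerequisites of `task`, then the task itself, each at most once."""
    seen = seen if seen is not None else set()
    name = type(task).__name__
    if name in seen:
        return
    for dep in task.requires():
        build(dep, log, seen)
    task.run(log)
    seen.add(name)

log = []
build(AddAdsorbates(), log)
print(log)  # ['RelaxBulk', 'CutSlabs', 'AddAdsorbates']
```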

    As shown in Fig. 4(b), the data-intensive DFT calculations can be done on the Tianhe-1 supercomputer using Lustre as the file-storage system [46]. High-throughput computing jobs are realized by running the security-monitoring system deployed on the cluster. Luigi is used to build the various physical models through dependency resolution (function dependencies, running, and output targets), which are then configured and calculated via task management through FireWorks and batch processing through the Slurm resource manager on the supercomputer [47]. These two task-management systems can automatically correct errors, re-run individual jobs, and simultaneously visualize the data through the installed visualization tools.

    3. Performance evaluation

    To illustrate the performance of the fifth paradigm platform in catalytic materials screening, we conducted a comparison test to explain how the machine-learning process accelerates numerical calculations and how the numerical calculations provide trainable samples for machine-learning iterations. In this article, we do not use a dataset updated online by the DFT calculation process within each model's learning cycle; instead, we use the pre-calculated DFT dataset to extract the corresponding fingerprints for this research. Because the target prediction is related not to the DFT-relaxed structure but to the fingerprint extracted from the initial structure, without any simulation process, we believe this does not affect the evaluation of the platform.

    The dataset we prepared to test the cross-validation process comes from GitHub (https://github.com/ulissigroup/uncertainty_benchmarking). It consists of five adsorbates, H, CO, OH, O, and N, of which the first two contribute most of the data (21 269 and 18 437 samples, respectively). The CFGP method is used to create a model to compare the impact of different machine-learning models and total dataset size on the accuracy of catalyst screening, using the performance metrics of the correlation coefficient (R2), the mean absolute error (MAE), and the root-mean-square error (RMSE). The hyperparameters for this dataset were tuned by Back et al. [31] and Tran et al. [37]; because the research in this paper focuses on the performance of different models under the same method, these hyperparameters remain applicable. In our work, the statement of the learning problem is determined by the famous "volcano plot," which evaluates the size and activity level of the adsorption energy. Taking the H adsorbate as an example, the hydrogen evolution reaction (HER) is a reaction for which adsorption energies are used to predict catalytic performance. The optimal adsorption energy ΔEH is −0.27 eV [48], and the near-optimal range of the "volcano plot" is defined as [−0.37 eV, −0.17 eV]. Therefore, if the result of a cycle falls within this near-optimal range (a hit), the material is selected as a candidate for further DFT calculation before the next cycle begins.
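The 64/16/20 train/validate/test split used in the cross-validation below can be sketched as follows; the samples here are synthetic indices, and the fixed seed is an illustrative choice for reproducibility.

```python
# Shuffle once, then split into train/validate/test at a 64/16/20 ratio.
import random

def split_64_16_20(samples, seed=0):
    rng = random.Random(seed)              # fixed seed for reproducibility
    s = list(samples)
    rng.shuffle(s)
    n = len(s)
    n_train, n_val = int(0.64 * n), int(0.16 * n)
    return s[:n_train], s[n_train:n_train + n_val], s[n_train + n_val:]

train, val, test = split_64_16_20(range(1000))
print(len(train), len(val), len(test))  # 640 160 200
```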

    One realization of the mutual feedback between machine learning and numerical calculation is that the trainable samples provided by DFT calculations can supplement machine-learning iterations. In this platform, once an iteration occurs, the dataset containing the target features is determined, which means the machine-learning model for the corresponding iteration is determined. In addition, as a typical case of the fifth paradigm platform, the performance comparison of each iterative process is derived from model comparison under the same data-generation conditions. As shown in Table 1, the entire dataset is first randomly shuffled and split into ten models: 10% of the total dataset is taken as the first model dataset, with further 10% increments until 100% of the total dataset forms the tenth model. The dataset of each model is thus contained in the dataset of the next model. For the cross-validation process, the train/validate/test ratio of each model is 64/16/20, and all the monometallic slabs are added to the training set, as described by Tran et al. [37]. The cross-validation and its results are listed in Table 1 and Fig. 5. Each violin in Fig. 5(a) represents the R2 of the training and testing samples: the greater the difference between the two values, the more slender the violin; the smaller the difference, the stubbier it becomes; if the two are equal, it collapses to a line. Therefore, the slender violins of models 1, 2, 5, 6, and 9 indicate overfitting or underfitting, followed by models 3, 4, 7, and 10, while model 8 performs best. As the dataset increases, the MAE and RMSE in Table 1 gradually decrease, while the R2 of the validation and testing process in Fig. 5(a) gradually increases, indicating that each training model is more accurate than the previous ones. In addition, the hit numbers of H adsorbates verified by DFT calculation (NDFT) and predicted by machine learning (NML) are also listed; their trend also increases with the expansion of the dataset, as shown in Fig. S1 (in Appendix A). The hit count of model 1 is set as the baseline. To quantify the effect of increasing the trainable samples provided by numerical calculation on the machine-learning iterations, a formula is defined as follows:

    where η represents the increment of NDFT compared with NML, and Dn and Mn refer to NDFT and NML of model n in the near-optimal range (namely, the hit number). With the expansion of the dataset, η grows larger and approaches 1, indicating that the hit number NML slowly approaches the hit number NDFT; that is, the larger the training sample from numerical calculation, the higher the accuracy of the machine-learning model. Furthermore, η fits well linearly in Fig. 5(b), even though some points lie outside the linear range. For example, η of model 4 is very small compared with the other points, which we attribute to compensation by the larger values of models 5 and 6.
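The ten-model incremental dataset construction described above (cumulative 10% slices of the shuffled pool, each containing the previous one, then a 64/16/20 train/validate/test split) can be sketched as follows. The helper names are ours, and the sketch omits the detail of forcing all monometallic slabs into the training set:

```python
import random

def build_incremental_datasets(pool, n_models=10, seed=0):
    """Shuffle once, then take cumulative 10%, 20%, ..., 100% slices,
    so each model's dataset contains the previous model's dataset."""
    data = list(pool)
    random.Random(seed).shuffle(data)
    step = len(data) / n_models
    return [data[: round(step * (i + 1))] for i in range(n_models)]

def split_64_16_20(dataset):
    """The 64/16/20 train/validate/test split used for cross-validation."""
    n = len(dataset)
    n_train, n_val = round(0.64 * n), round(0.16 * n)
    return (dataset[:n_train],
            dataset[n_train:n_train + n_val],
            dataset[n_train + n_val:])

# Toy pool sized like the H-adsorbate dataset (21 269 samples).
models = build_incremental_datasets(range(21269))
train, val, test = split_64_16_20(models[7])   # model 8, the best fit above
```

Because the slices are cumulative prefixes of a single shuffled list, every sample seen by model n is also seen by model n + 1, matching the nesting described for Table 1.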

    Fig. 5. Performance metrics evaluation of the learning model in the fifth paradigm platform. (a) The R2 correlation coefficient of the validation and testing process in the ten models; (b) the linear fit of η for all the models.

    The datasets with multiple adsorbates including the H adsorbate are used for train/validation/test, and MAE and RMSE are used to evaluate the performance of the machine-learning model. The numbers of surfaces whose low-coverage H adsorption energies fall in the near-optimal activity range of the ‘‘volcano plot,” as verified by DFT calculation and by machine-learning prediction, are represented by NDFT and NML, respectively. η is used to evaluate the trend of model-performance changes.
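The three performance metrics quoted in Table 1 (R2, MAE, and RMSE) can be computed as in the following plain-Python sketch; the toy energy values are our own illustration, not data from the platform:

```python
import math

def regression_metrics(y_true, y_pred):
    """R^2, MAE, and RMSE as used in Table 1 (plain-Python sketch)."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot          # coefficient of determination
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(ss_res / n)
    return r2, mae, rmse

# Toy adsorption energies (eV): DFT reference vs. model prediction.
y_dft = [-0.30, -0.25, -0.10, 0.05]
y_ml  = [-0.28, -0.27, -0.12, 0.02]
r2, mae, rmse = regression_metrics(y_dft, y_ml)
```

In practice a library such as scikit-learn provides equivalent functions; the point here is only to make the Table 1 columns concrete.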

    To illustrate the realization of mutual feedback between machine learning and numerical calculation (machine learning alleviates the time-consuming problem of massive models caused by insufficient computing resources in numerical calculations, while the numerical calculation process provides machine-learning training samples), we prepared three types of prediction cases to assess the performance of the model trained and validated as described above. The dataset used in the prediction process is from the work of Tran and Ulissi [28], which encompasses 22 675 H-adsorbate DFT results. Admittedly, it covers most of the 21 269-sample H dataset mentioned above. However, this overlap does not matter, because our goal is to compare the performance of machine-learning models generated on samples of different sizes and to characterize the acceleration behavior of machine learning under prediction samples of different sizes. Moreover, the material structure corresponding to the dataset to be predicted does not depend on whether simulation calculations have been performed. Therefore, the decision to take this machine-learning prediction dataset from the DFT calculation dataset does not affect the overall evaluation of the intelligence-driven process.

    Table 1. Ten models constructed from the entire dataset to evaluate the performance of the fifth paradigm platform.

    In terms of the characteristics of the platform, the DFT calculations performed in each cycle (except the first) are derived from the machine-learning results. Three types of methods are designed for prediction in Table 2: Hit_no_split, No_hit_with_split, and No_hit_no_split. The No_hit_with_split method refers to the incremental dataset from 10% to 100% of the total prediction dataset, corresponding to the machine-learning models from model 1 to model 10 formed above. Alternatively, the entire prediction dataset can be kept the same in each cycle, as in the No_hit_no_split method. As for the Hit_no_split method, the samples predicted by machine learning to lie in the near-optimal range are discarded before the next model's prediction. The process is as follows: starting from model 1, 4960 of the entire 22 675 predicted samples are found to be hits. When model 2 makes its predictions, these 4960 samples are removed, leaving only 17 715 (22 675 − 4960 = 17 715) samples. Model 2 then finds 860 hits and provides a further simplified sample of 16 855 (17 715 − 860 = 16 855) for the model 3 prediction. This hit-and-drop process continues until the predictions of all ten models are completed. Note that NHits should equal NML, but certain materials in the samples must be excluded from the near-optimal activity process.
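The Hit_no_split bookkeeping described above can be sketched as follows; `hit_no_split` and `predict_hits` are illustrative names of ours, and the toy predictor simply reproduces the first two steps quoted in the text:

```python
def hit_no_split(pool, models, predict_hits):
    """Run each model over the remaining pool; drop its predicted hits
    before the next model runs (the Hit_no_split method)."""
    remaining = set(pool)
    history = []
    for model in models:
        hits = predict_hits(model, remaining)   # N_ML for this model
        remaining -= hits                       # hits are never predicted again
        history.append((len(hits), len(remaining)))
    return history

# Toy run mirroring the text's first two steps (22 675 -> 17 715 -> 16 855).
pool = range(22675)
fake_hits = iter([set(range(4960)), set(range(4960, 5820))])
history = hit_no_split(pool, ["model1", "model2"],
                       lambda m, rem: next(fake_hits))
# history == [(4960, 17715), (860, 16855)]
```

The shrinking `remaining` set is what distinguishes this method from No_hit_with_split and No_hit_no_split, where previously predicted samples re-enter every prediction round.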

    Table 2 lists the results of the three methods in the near-optimal range. In the Hit_no_split method, because the NML predicted by the previous model is deducted from the prediction samples of the next model (except for model 1), NDFT, NML, and NHits decrease accordingly from model 1 to model 10. In the No_hit_with_split method, NDFT and NML expand gradually as the prediction samples increase. In the No_hit_no_split method, NML fluctuates between 4177 and 4556, while NDFT remains unchanged; we infer that this is caused by the differing accuracies of the machine-learning models. Meanwhile, the more data a model is trained on, the more NML hits it produces. From an acceleration point of view, the Hit_no_split method ensures that samples predicted to be reasonable will not be predicted again (provided, of course, that the prediction is reasonable), while the other two methods repeatedly predict already-predicted samples. Therefore, ideally, the Hit_no_split method should make optimal use of all samples that must be predicted, accelerating predictions and providing a faster machine-learning process for accelerating numerical calculations.

    To evaluate the difference of these methods in accelerating DFT calculations, we compared the number of NDFT replaced by NML, as well as the value of NML/NDFT, in Fig. 6. The number of DFT calculations replaced by machine learning is defined as follows:

    where RE and Tn are the number of DFT calculations replaced by machine learning and the size of the prediction dataset of each model, respectively. As shown in Fig. 6(a), the replacement amount of every Hit_no_split model exceeds 15 000; it decreases slightly from model 1 to model 10, but compared with the other methods, Hit_no_split has the largest NDFT replacement. For the No_hit_with_split method, the number of replacements increases linearly from 1800 until it matches the other methods at model 10. For the No_hit_no_split method, except for model 1, the number of replacements for all models is approximately 14 000, with a slight downward trend. As for the large number of replacements of model 1 in the No_hit_no_split method and the subsequent sudden decrease, we believe it is caused by underfitting, because model 1 uses a small dataset to train a model that predicts a much larger dataset. Among these methods, Hit_no_split replaces the maximum NDFT, as we expected.

    We compare the value of NML/NDFT in Fig. 6 because it reflects the performance of each model from another perspective; the ideal NML/NDFT values should all equal 1. In the No_hit_with_split and No_hit_no_split methods, NML/NDFT increases slightly toward 1, which indicates that the prediction behavior of the two methods is similar and suitable for accelerating DFT calculation. In the Hit_no_split method, except for model 1 set as the baseline, the NML/NDFT value gradually decreases from model 2 to model 7 and then gradually increases in the remaining models, all staying below 0.5. On one hand, we infer that these smaller values are caused by changes in the accuracy of the machine-learning model, since smaller datasets lead to underfitting. On the other hand, as hits are removed from the prediction samples, the NML that can still be hit by the next model gradually decreases. In addition, since the Hit_no_split method removes the hits of the previous model in each iteration, so that no hit material is predicted again in later iterations, its advantage in terms of speed is more obvious.

    Table 2. Three types of prediction methods in the near-optimal range and their performance for all models constructed in the fifth paradigm platform.

    Fig. 6. The predictive performance of all models constructed in the fifth paradigm platform. (a) The number of DFT calculations (NDFT) replaced by the number of machine learning predictions (NML); (b) the change of NML/NDFT in the near-optimal range for different models within the prediction process. In the Hit_no_split method, model 1 is omitted because it serves as the baseline for the other models.

    In addition, since the machine-learning model gradually reduces its poor fitting as the cross-validation samples expand from small sizes, a certain degree of accuracy loss occurs in the prediction process from model 2 to model 10. For example, a predicted sample that should have been a hit may be missed, or one that should not be a hit may be hit, leading to hit data missing from, or non-hit data entering, the dataset of the next model. Moreover, the sample size may not be large enough, resulting in underfitting or overfitting of the machine-learning model. Therefore, the Hit_no_split method has the advantage of replacing more DFT calculations, although its accuracy is not suitably evaluated by the NML/NDFT indicator. However, this by no means indicates that the Hit_no_split method is inapplicable to the fifth paradigm platform. We infer that, when the prediction model is good enough and the dataset is large enough, it can avoid repeated prediction of data while maintaining the reliability of the results, thereby retaining the advantage of machine learning in accelerating numerical calculations.

    Based on the results of the three types of methods, the accuracy loss of machine-learning prediction relative to DFT calculation is used to evaluate the performance of the fifth paradigm platform. The accuracy loss is defined as follows:

    where L is the accuracy loss. Given that the No_hit_with_split and No_hit_no_split methods have relatively suitable predictive performance, we only consider the accuracy loss of these two methods. As shown in Fig. 7, for No_hit_with_split, although model 1 has the lowest accuracy loss, its dataset is small, so we exclude it and consider model 9 to have the lowest accuracy loss. For the No_hit_no_split method, model 5 has the lowest accuracy loss. Therefore, we believe that, as the dataset expands, machine learning will continue to replace DFT calculations with varying degrees of accuracy loss, and the model with the smallest accuracy loss is most conducive to using machine learning to accelerate the DFT calculation process.

    Fig. 7. The accuracy of the fifth paradigm. The mutual verification of scientific experiment, theoretical calculation, and machine learning in exploring the unknown world represents the accuracy of the fifth paradigm. Shown is the accuracy loss (L) between machine learning and DFT calculation for the No_hit_no_split and No_hit_with_split methods of all models constructed in the fifth paradigm platform.

    We believe that the accuracy loss of this fifth paradigm case is related to the size of the sample involving machine learning, theoretical calculations, and experiments fed back from the ‘‘volcano plot,” which is exactly the knowledge-centric characteristic of the fifth paradigm in terms of precision. As shown in Fig. 7, an accurate fifth paradigm should bring machine learning, theoretical calculation, and scientific experiment to a single, consistent result in the exploration of the unknown world. Although this standard is very demanding, it remains the ultimate goal of exploring the unknown world.

    4.Discussion of the fifth paradigm platform

    Automated model construction, automated fingerprint extraction, and the intelligent coupling of intensive data with DFT calculation and machine learning through the ‘‘volcano plot” compose the architecture of the fifth paradigm platform. In this intelligence-driven framework, the workload of traditional model construction and calculation is effectively reduced by making full use of current information tools and methods, greatly simplifying and improving the extremely cumbersome and challenging work of materials research.

    One challenge this framework faces is the limited number of application areas in which the fifth paradigm has been implemented. This is because the most typical feature of the fifth paradigm is being intelligence-driven, which entails the synergy of interdisciplinary experts to carry out in-depth research. For example, in the materials science introduced in this work, it is necessary to intelligently drive the efficient synergy of experimental and theoretical experts, which can be achieved by filtering the machine-learning results through the ‘‘volcano plot.” For some high-throughput interdisciplinary work, before designing a similar fifth paradigm framework, it is best to first consider appropriate methods of quantifying the collaborative work between experts in the different application fields.

    In addition, in the absence of an ever-larger dataset, samples inevitably become insufficient during dataset expansion, resulting in poor generalization ability of the training model. Therefore, more data must be accumulated to achieve a high-precision machine-learning process. Fortunately, for this fifth paradigm platform, the Open Catalyst project, jointly developed by Facebook AI Research and the Department of Chemical Engineering at Carnegie Mellon University, has produced the Open Catalyst 2020 [49] dataset, which contains a dramatically increased number of DFT calculation results and is still constantly updated online. Finally, the accuracy with which the fifth paradigm explores the unknown world is affected by machine learning, theoretical calculation, and scientific experiment. A high-precision fifth paradigm tends to explore the same objective phenomenon of the unknown world through the cooperation of the three within the scope of their reasonable discovery, derivation, and judgment. We believe that the dissection of this fifth paradigm case can greatly promote the development of the fifth paradigm of materials science in the future.

    5.Conclusions

    In this work, we discuss the scientific explanation of the newest paradigm emerging from the prosperity engendered by AI. A detailed discussion is then carried out using a fifth paradigm platform as a typical case, which conforms to a specific and well-defined framework capable of promoting the development of materials science. Interdisciplinary knowledge and intelligence-driven characteristics are the keys to the fifth paradigm, and they are addressed in work encompassing automatic model construction and verification, automated fingerprint construction, the theoretical model, and repeated iteration between machine learning and theoretical calculations. The informatics tools needed to architect the framework are also discussed in detail. Finally, tests and comparisons show how the interaction between AI and numerical calculation in this fifth paradigm framework allows each to meaningfully promote the other, reducing numerical calculation and creating more trainable samples in the mutual feedback process. The curation of the numerical-calculation and machine-learning models, as well as the techniques involved, makes the fifth paradigm platform more interpretable.

    With the expansion of the dataset, on one hand, the more machine learning replaces DFT calculation, the faster the screening of materials becomes. On the other hand, the more consistent the number of candidate materials predicted by the final machine learning is with the number calculated by DFT, the more accurate the machine-learning prediction is. Under these two criteria, machine learning will continue to replace DFT calculation with different degrees of accuracy loss, and the model with the smallest accuracy loss is most conducive to accelerating the DFT calculation process with machine learning. This minimum-accuracy-loss discrimination represents the precise exploration premise of materials research under the scientific fifth paradigm, which requires consistent results when machine learning, theoretical calculation, and scientific experiment jointly explore the unknown world.

    Although this article provides a scientific explanation for the fifth paradigm platform as represented in the field of catalytic materials, much more remains to be discussed. The overall development of the fifth paradigm across various fields still faces challenges in terms of the synergy between interdisciplinary experts and the dramatic rise in demand for data in data-driven disciplines. Despite these challenges, an ongoing endeavor in tandem with all the relevant parties can be envisioned to deepen the combination of AI technology and traditional disciplines, so that each simulation and calculation link gains higher intelligence and automation, and finally runs as a platform that improves the efficiency of traditional scientific computing and promotes the development of materials research in a more intelligent and high-precision direction. We believe that a glimpse of the fifth paradigm platform can pave the way for its application in other fields.

    Acknowledgments

    We thank Prof. Zachary W. Ulissi and Prof. Pari Palizahti at Carnegie Mellon University for providing advice on the platform. This study was supported by the National Key Research and Development Program of China (2021ZD40303), the National Natural Science Foundation of China (62225205 and 92055213), the Natural Science Foundation of Hunan Province of China (2021JJ10023), and the Shenzhen Basic Research Project (Natural Science Foundation) (JCYJ20210324140002006).

    Compliance with ethics guidelines

    Can Leng, Zhuo Tang, Yi-Ge Zhou, Zean Tian, Wei-Qing Huang,Jie Liu, Keqin Li, and Kenli Li declare that they have no conflict of interest or financial conflicts to disclose.

    Appendix A.Supplementary data

    Supplementary data to this article can be found online at https://doi.org/10.1016/j.eng.2022.06.027.
