
    Ethical Principles and Governance Technology Development of AI in China

Engineering, 2020, Issue 3

    Wenjun Wu*, Tiejun Huang, Ke Gong

    a State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China

    b School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China

c Chinese Institute of New Generation Artificial Intelligence Development Strategies, Nankai University, Tianjin 300071, China

Keywords: AI ethical principles, AI governance technology, Machine learning, Privacy, Safety, Fairness

A B S T R A C T

Ethics and governance are vital to the healthy and sustainable development of artificial intelligence (AI). With the long-term goal of keeping AI beneficial to human society, governments, research organizations, and companies in China have published ethical guidelines and principles for AI, and have launched projects to develop AI governance technologies. This paper presents a survey of these efforts and highlights the preliminary outcomes in China. It also describes the major research challenges in AI governance research and discusses future research directions.

    1. Introduction

With the rapid development and deployment of a new generation of artificial intelligence (AI) algorithms and products, AI is playing an increasingly important role in everyday life, and is having a significant impact on the very fabric of modern society. In particular, AI models and algorithms have been widely adopted in a variety of decision-making scenarios, such as criminal justice, traffic control, financial loans, and medical diagnosis. This proliferation of AI-based automatic decision-making systems introduces potential risks in many aspects, including safety and fairness.

For example, there are many concerns about the safety of automated driving systems. In 2015, a Tesla vehicle in China was involved in a fatal accident in which the autopilot system failed to identify a road sweeper truck and did not perform the correct maneuver to avoid it. Another example comes from intelligent justice, in which AI algorithms are adopted to decide whether to grant parole to a prisoner based on his/her behavioral characteristics. There have been complaints that such an algorithm could make biased and unfair decisions based on ethnicity and cultural background. In the financial arena, an AI-based digital lending algorithm might reject loan applications based on biased judgment. Government agencies, the academic community, and industry have all realized that the safety and governance of AI applications are an increasingly important issue, and that effective measures must be taken to mitigate potential AI-related risks.

Today, governments from many countries, research organizations, and companies have announced their ethical guidelines, principles, and recommendations for AI. To enforce these principles in current AI systems and products, it is vital to develop governance technology for AI, including federated learning, AI interpretation, rigorous AI safety testing and verification, and AI ethical evaluation. These techniques are still under intense development and are not yet mature enough for widespread commercial adoption. Major technical obstacles are deeply rooted in fundamental challenges for modern AI research, such as human-level moral cognition, commonsense ethical reasoning, and multidisciplinary AI ethics engineering. In this paper, we aim to present a general survey on AI ethical principles and ongoing research efforts from the perspective of China.

The rest of the paper is organized as follows: Section 2 introduces the ethical principles that have been published by government agencies and organizations, and highlights the major research efforts of Chinese researchers on AI governance. Section 3 compares China with other countries in terms of AI ethical principles and governance technology development. Section 4 discusses the grand challenges in AI governance research and suggests possible research directions.

    2. Ethical principles and emerging governance technology in China

The Development plan of the new generation artificial intelligence, released in 2017, stresses that the dual technical and social attributes of AI must be carefully managed to ensure that AI is trustable and reliable. In 2019, the Ministry of Science and Technology of the People's Republic of China (MOST) established the National Governance Committee for the New Generation Artificial Intelligence and released the Governance principles for the new generation artificial intelligence—Developing responsible artificial intelligence [1]. The Beijing Academy of Artificial Intelligence (BAAI) also published the Beijing AI principles [2], proposing an initiative for the research, development, use, governance, and long-term planning of AI in order to support the realization of beneficial AI for humankind and the natural environment. In Ref. [3], researchers from the BAAI collected more than 20 well-known proposals of ethical principles for AI and performed a topic analysis on the texts of these proposals. They identified the following keywords that are commonly mentioned in the proposals: security and privacy, safety and reliability, transparency, accountability, and fairness.

    (1) Security and privacy: AI systems should be secure and respect privacy.

    (2) Safety and reliability: AI systems should perform reliably and safely.

    (3) Transparency: AI systems should be understandable.

    (4) Accountability: AI systems should have accountability.

    (5) Fairness: AI systems should treat all people fairly.

These common principles have been widely agreed upon by researchers, practitioners, and regulators in the field of AI across the world. They not only reflect society's goodwill and moral beliefs, but also demand feasible and comprehensive technical frameworks and solutions to implement ethical constraints in AI models, algorithms, and products. Table 1 lists emerging techniques that hold great potential to support effective governance in accordance with AI ethical principles.

    2.1. Data security and privacy

Data security is the most basic and common requirement of ethical principles for AI. Many governments, including those of the European Union (EU), the United States, and China, are establishing legislation to protect data security and privacy. For example, the EU began enforcing the General Data Protection Regulation (GDPR) in 2018, and China enacted the Cybersecurity Law of the People's Republic of China in 2017. The establishment of such regulations aims to protect users' personal privacy, and poses new challenges to the data-driven AI development commonly adopted today.

In the paradigm of data-driven AI, developers often need to collect massive amounts of user data in a central repository and carry out subsequent data processing, including data cleaning, fusing, and annotation, to prepare datasets for training deep neural network (DNN) models. However, the newly announced regulations hamper companies from directly collecting and preserving user data on their cloud servers.

Table 1 Major AI ethical principles and supporting governance technologies.

Ethical principle | Supporting governance technologies
Security and privacy | Federated learning
Safety and reliability | Adversarial testing; formal verification
Transparency | AI interpretation; visual analytics
Accountability | AI provenance; forensics
Fairness | Fairness evaluation and debiasing tools

Federated learning, which can train machine learning models across decentralized institutions, presents a promising solution that allows AI companies to address the serious problem of data fragmentation and isolation in a legal way. Researchers from the Hong Kong University of Science and Technology and other institutes [4] have identified three kinds of federated learning modes: horizontal federated learning, vertical federated learning, and federated transfer learning. Horizontal federated learning is applicable when the participating parties have non-overlapping datasets but share the same feature space in data samples. Vertical federated learning is applicable when the datasets from participants refer to the same group of entities but differ in their feature attributes. When the datasets meet neither condition (i.e., they have different data samples and feature spaces), federated transfer learning is a reasonable choice. Using these modes, AI companies are able to establish a united model for multiple enterprises without sharing their local data in a centralized place.
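As an illustration of the horizontal mode, the following minimal sketch (in Python with NumPy; the linear model, party data, and all names are illustrative assumptions, not WeBank's FATE implementation) shows a FedAvg-style round: each party trains on its own private samples over a shared feature space, and only model weights are exchanged and averaged.

```python
# Minimal sketch of horizontal federated averaging (FedAvg-style), NumPy only.
# Each party trains a local linear model on its private samples over a shared
# feature space; only model weights (never raw data) leave each party.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One party refines the global weights on its private data (least squares)."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w = w - lr * grad
    return w

def federated_round(w_global, parties):
    """Server broadcasts weights, parties train locally, server averages.

    Updates are weighted by each party's sample count, as in FedAvg."""
    updates, sizes = [], []
    for X, y in parties:
        updates.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
# Two parties with disjoint samples but the same two features (horizontal split).
parties = []
for _ in range(2):
    X = rng.normal(size=(100, 2))
    parties.append((X, X @ w_true + rng.normal(scale=0.1, size=100)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, parties)
print(w)  # approaches w_true without any party sharing raw samples
```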

Federated learning not only presents a technical solution for privacy protection in the collaborative development of distributed machine learning models among institutions, but also indicates a new business model for developing a trusted digital ecosystem for the sustainable development of an AI society. By running federated learning on blockchain infrastructure, it may be possible to motivate members of the digital ecosystem, via smart contracts and trusted profit exchanges, to actively share their data and create federated machine learning models.

Federated learning is increasingly being adopted by online financial institutions in China. WeBank established an open-source project on federated learning and contributed the Federated AI Technology Enabler (FATE) framework to the Linux Foundation. WeBank's AI team [5] also launched an Institute of Electrical and Electronics Engineers (IEEE) standardization effort for federated learning and has started to draft an architectural framework definition and application guidelines.

    2.2. Safety, transparency, and trustworthiness of AI

Decades of research in computer science and software engineering have been devoted to ensuring the safety and trustworthiness of large-scale complex information systems. As the scale and complexity of systems increase, it is a grand challenge to design and implement a reliable and trustable system in a cost-efficient and error-free manner. The AI components deployed in today's autonomous systems inevitably aggravate this problem when they interact with uncertain and dynamic environments. Because a state-of-the-art AI model adopts very complex DNNs and end-to-end training approaches, it acts as a black box, which not only hampers developers from fully understanding its structure and behavior, but also introduces implicit and potential vulnerabilities to the model from malicious inputs. Therefore, an AI governance framework must encompass multiple techniques that enable AI engineers to perform a systematic evaluation of AI behaviors and to present evidence that can build the public's trust toward AI systems. Fig. 1 displays the major building blocks of an AI behavior analysis and assessment framework, including testing, verification, interpretation, and provenance.

These emerging AI governance technologies all examine and assess AI behavior and inner working mechanisms from different aspects. AI testing often focuses on evaluating the relationship between inputs and outputs to make sure the AI's functions and behavior conform to the desired goals and moral requirements. AI verification adopts rigorous mathematical models to prove the soundness of AI algorithms. AI interpretation aims at developing novel techniques to analyze and reveal how complex DNN models work internally. AI provenance can track the lineage of the training data, model, algorithm, and decision process to support auditing and accountability determination.

    Fig. 1. Testing, verification, interpretation, and provenance for trustworthy AI.

The integration of these AI governance technologies is very important because it brings all the stakeholders together to understand, examine, and audit an autonomous and intelligent system. Users who are affected by decisions from an AI system have the right to know and comprehend the rationales behind the algorithmic decisions. Engineers who are in charge of AI development and maintenance must rely upon AI testing, verification, and interpretation tools to diagnose potential problems with AI algorithms and enact the necessary remedies and improvements. Managers who oversee AI engineering processes and the quality of AI products should utilize these tools to query procedural data, guide the enforcement of moral standards, and minimize the ethical and quality risks of the system. Government auditors who investigate the responsibility of AI systems in accidents or legal cases must exploit AI provenance to track the lineage of the system evolution and to collect relevant evidence.

    2.2.1. Safety and robustness of AI

Adversarial examples for DNNs have recently become a very popular topic in the machine learning community. DNN models are vulnerable to adversarial examples: inputs with imperceptible perturbations that mislead DNNs into producing incorrect results. For example, hackers can maliciously add small-magnitude perturbations to an image of a street crossing with pedestrians walking on a road, thus generating adversarial examples that can fool DNNs into ignoring the pedestrians in the scene. Adversarial examples might therefore lead to fatal accidents or pecuniary losses by severely impairing practical deep learning applications such as automated driving and facial recognition systems. Two major approaches are used to address the AI safety issue and ensure the robustness of AI systems under perturbations: adversarial testing and formal verification.
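To make the threat concrete, here is a minimal sketch of the well-known fast gradient sign method (FGSM) on a toy logistic classifier (Python with NumPy; the weights and input are illustrative assumptions): a bounded perturbation aligned with the sign of the loss gradient flips the predicted class.

```python
# Minimal fast-gradient-sign (FGSM-style) sketch in NumPy for a logistic
# classifier: a perturbation bounded by eps in each coordinate pushes the
# model's output across the decision boundary.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_example(x, y, w, b, eps):
    """Perturb input x along the sign of the loss gradient w.r.t. x."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.1, 0.4])    # correctly classified as class 1
print(sigmoid(w @ x + b))          # ~0.69 -> predicted class 1
x_adv = fgsm_example(x, y=1.0, w=w, b=b, eps=0.3)
print(sigmoid(w @ x_adv + b))      # ~0.40 -> flips to class 0 despite a tiny change
```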

(1) Adversarial testing for AI safety. Many studies have investigated how to generate adversarial examples as test cases for testing DNN models. A straightforward way to generate test cases is to directly perturb the original inputs without affecting the overall visual appearance of the scene. However, this approach does not apply when attackers have no access to the input sources and therefore cannot add perturbations to the input images. Thus, researchers have started to explore the generative adversarial network (GAN)-based generation of adversarial examples that consist of a tiny image patch that can easily be posted on physical objects such as light poles or a person's hat [6].

Researchers at Beihang University [7] proposed a perceptual-sensitive GAN that can enhance the visual fidelity of adversarial patches and generate more realistic testing samples for neural networks under safety testing. At the 2018 Conference on Neural Information Processing Systems (NIPS), researchers at Tsinghua University [8,9] published two papers on defense algorithms for DNNs: One paper proposed a new adversarial perturbation-based regularization method named deep defense for training DNNs against possible adversarial attacks, and the other suggested minimizing the reverse cross-entropy in the training process in order to detect adversarial examples. Researchers at Zhejiang University and at Alibaba [10] have implemented a DNN-testing platform named DEEPSEC, which incorporates more than a dozen state-of-the-art attack and defense algorithms. This platform enables researchers and practitioners to evaluate the safety of DNN models and to assess the effectiveness of attack and defense algorithms.

(2) Formal verification of DNN models. Adversarial testing is unable to enumerate all possible outputs for a given set of inputs due to the astronomical number of choices for the input perturbation. As a complementary method to adversarial testing, formal verification has been introduced to rigorously prove that the outputs of a DNN model are strictly consistent with a specification of interest for all possible inputs. However, verifying neural networks is a difficult problem, and it has been demonstrated that validating even simple properties about their behavior is a non-deterministic polynomial (NP)-complete problem [11].

The difficulties encountered in verification mainly arise from the presence of activation functions and the complex structure of a neural network. To circumvent the difficulties brought by the nonlinearities present in neural networks, most recent results focus on piecewise-linear activation functions. Researchers are working on efficient and scalable verification approaches by focusing on geometric bounds on the set of outputs. There are basically two kinds of formal verifiers for DNN models: complete verifiers and incomplete verifiers. Complete verifiers can guarantee no false positives but have limited scalability, as they adopt computationally expensive methods such as satisfiability modulo theory (SMT) solvers [12]. Incomplete verifiers may produce false positives, but their scalability is better than that of complete verifiers. Researchers at ETH Zurich [13,14] proposed an incomplete verifier based on abstract interpretation, in which shape-based abstract domains are expressed as the geometric bounds of nonlinear activation functions' outputs to approximate infinite sets of behaviors of DNNs. Researchers from East China Normal University, the Chinese Academy of Sciences, and other institutes [15,16] have also introduced verification frameworks based on linear programming or symbolic propagation.
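As a toy instance of an incomplete verifier based on geometric bounds, the sketch below (Python with NumPy; the two-layer ReLU network and its weights are illustrative assumptions, not any cited system) propagates an interval box around an input through the network and certifies the classification margin whenever the resulting bound excludes zero.

```python
# Toy incomplete verifier via interval bound propagation for a small ReLU
# network in NumPy: computes sound output bounds over an eps-ball of inputs.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an axis-aligned box through x -> Wx + b (exact for affine)."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def interval_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

def verify_margin(x, eps, layers):
    """Return certified bounds on output[0] - output[1] over the eps-ball."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:          # ReLU on all hidden layers
            lo, hi = interval_relu(lo, hi)
    return lo[0] - hi[1], hi[0] - lo[1]  # worst-case and best-case margin

W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 0.2], [-0.3, 1.0]]), np.zeros(2)
lo_m, hi_m = verify_margin(np.array([1.0, 0.2]), 0.05, [(W1, b1), (W2, b2)])
# If lo_m > 0, class 0 is certified for every input in the ball; if the
# interval straddles 0, this (incomplete) verifier cannot decide.
print(lo_m, hi_m)
```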

These research efforts are still in their early stages and have not been generalized to different kinds of activation functions and neural network structures. Despite decades of effort in the field of formal verification, scalable verification methods are neither available nor mature for processing modern deep learning systems because of the complexity of deep learning.

    2.2.2. Transparency and accountability of AI

AI transparency is critical for the public to be able to understand and trust AI in many decision-making applications, such as medical diagnosis, loan management, and law enforcement. AI interpretation helps to decipher the complicated inner workings of deep learning models and to generate human-understandable explanations of such models' reasoning and inference. With improved AI transparency, people are more confident in utilizing AI tools to make decisions and in assessing the legitimacy and accountability of autonomous systems.

Research efforts are underway on how to build explainable DNN frameworks and analysis tools. In this research direction, multiple approaches have been proposed to support model understanding. Some researchers have devised companion neural networks to generate natural language explanations in the process of DNN inference. Another popular approach, called local interpretable model-agnostic explanation (LIME), constructs a proxy model from a simple model class (e.g., sparse linear models or decision trees) in order to approximate the behaviors of the original complex model [17]. Researchers from Shanghai Jiao Tong University and other institutes [18] introduced a decision-tree-based LIME method to quantitatively explain the rationales of each prediction made by a pre-trained convolutional neural network (CNN) at the semantic level.
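The core idea behind LIME can be sketched in a few lines (Python with NumPy; the black-box function, kernel width, and sample count are illustrative assumptions, not the LIME library's API): sample perturbations around one input, weight them by proximity, and read local feature importance off a weighted linear surrogate.

```python
# Bare-bones sketch of LIME's core idea in NumPy: fit a local linear
# surrogate to a black-box model around a single input of interest.
import numpy as np

def black_box(X):
    """Stand-in for a complex model (here a hidden nonlinear scorer)."""
    return np.tanh(2.0 * X[:, 0] - 0.5 * X[:, 1] ** 2)

def lime_explain(x, n_samples=500, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))   # local samples
    y = black_box(Z)
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma**2))  # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])                 # add intercept
    Aw = A * w[:, None]
    # Weighted least squares: coefficients of the local linear surrogate.
    coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

print(lime_explain(np.array([0.1, 1.0])))
# Feature 0 gets a large positive weight; feature 1's weight reflects the
# local slope of the quadratic term near x.
```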

Information visualization is also widely regarded as an effective way to implement explainable DNN models. Researchers at Tsinghua University [19] presented an interactive DNN visualization and analysis tool to support model understanding and diagnosis. With the right knowledge representation of moral values, such visual analytics may enable AI engineers to intuitively verify whether their DNN models correctly follow human ethical rules.

The other important research field that is closely related to AI interpretation is AI provenance, which emphasizes the recording, presenting, and querying of all kinds of lineage information relevant to data, models, and algorithms for future audits and forensic analysis. Although there are mature data and information provenance frameworks, few investigations have been performed on AI provenance. A joint research paper from Nanjing University and Purdue University [20] designed a provenance computation system for AI algorithms by tracking inner derivative computing steps. This method can assist algorithm designers in diagnosing potential problems.
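A minimal provenance-record sketch follows (Python standard library only; the file names and log format are illustrative assumptions): each training run appends content hashes of its dataset, code, and model to an append-only audit log, so auditors can later verify exactly which lineage produced a deployed model.

```python
# Minimal provenance-record sketch: hash the artifacts of a training run and
# append them to an audit trail for later accountability determination.
import hashlib, json, time

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def record_lineage(log_path, dataset, code, model, metrics):
    entry = {
        "timestamp": time.time(),
        "dataset_sha256": sha256_of(dataset),
        "code_sha256": sha256_of(code),
        "model_sha256": sha256_of(model),
        "metrics": metrics,            # e.g. accuracy, fairness gaps
    }
    with open(log_path, "a") as log:   # append-only audit trail
        log.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage, assuming these artifact files exist:
# record_lineage("audit.log", "train.csv", "train.py", "model.bin",
#                {"accuracy": 0.93})
```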

In addition to facilitating the development of AI models, provenance can play an important role in emerging AI forensic research. The recent well-known misuse of DeepFake technology, which utilizes a GAN to generate false facial images and videos, is posing a significant threat to social norms and security. Many researchers are developing new classification methods to detect these fake images and to ensure the credibility of visual content. For example, researchers at the Institute of Automation, Chinese Academy of Sciences [21] attempted to improve the generalization of DeepFake detection algorithms and proposed a new forensic CNN model. Nevertheless, these efforts alone are insufficient to defeat DeepFake, because malicious designers can always conceive better algorithms to fool known detection algorithms. Such efforts should perhaps be complemented with reliable provenance information for the original images, which would provide the necessary clues to verify the legitimacy of an image's origin. In particular, a blockchain-based provenance management system may help to establish a reliable and trustworthy digital ecosystem in which the authentic identity of digital resources can be tracked and verified in order to completely unmask fraudulent images and videos.

    2.3. Fairness evaluation of AI algorithms

Fairness has recently emerged as an important nonfunctional characteristic for the evaluation of AI algorithms. Efforts in AI fairness research mostly focus on measuring and discovering differences in AI outputs across different groups or individuals. Many fairness evaluation criteria have been proposed by researchers. Gajane and Pechenizkiy [22] surveyed how fairness is defined and formalized in the literature for the task of prediction. The major types of definitions of AI fairness are listed below:

(1) Fairness through unawareness. According to this type of definition, an AI algorithm is fair as long as the protected attributes are not explicitly used in the AI-based decision-making process. For example, an intelligent fraud-detection system should exclude sensitive attributes such as race and gender from its feature set for risk estimation. Although this simple and blind approach might work in some cases, it has a very serious limitation, because excluding attributes can degrade predictive performance and, in the long run, yield less effective outcomes than an attribute-conscious approach.

(2) Group fairness. This requires the decisions made by an AI algorithm to exhibit equal probability across user groups divided by a specific attribute. There are several types of group fairness, including demographic parity, equalized odds, and equal opportunity; a sketch of two such metrics follows this list. This family of fairness definitions is attractive because it does not assume any special features of the training data and can be verified easily.

(3) Individual fairness. According to this type of definition, an AI algorithm should present similar decisions for a pair of individuals with similar attributes.

(4) Counterfactual fairness. In many decision scenarios, protected attributes such as race and gender may have a causal influence upon the predicted outcome. As a result, the "fairness through unawareness" metric may in fact lead to the group disparity that the metric is intended to avoid. To mitigate such an inherent bias, Kusner et al. [23] formulated a counterfactual fairness definition by leveraging the causal framework to describe the relationship between protected attributes and data. This measurement of fairness also provides a mechanism to interpret the causes of bias.
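As referenced in the group fairness item above, the following sketch (Python with NumPy; the toy arrays are illustrative assumptions) computes two group-fairness metrics, the demographic-parity gap and the equal-opportunity (true-positive-rate) gap, for binary decisions and a binary protected attribute.

```python
# Sketch of two group-fairness checks, assuming binary decisions (y_pred),
# binary ground truth (y_true), and a binary protected attribute (group).
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(decision=1 | group 0) - P(decision=1 | group 1)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """True-positive-rate gap between groups (equal opportunity)."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))        # 0.0: equal decision rates
print(equal_opportunity_gap(y_true, y_pred, group)) # ~0.33: unequal TPRs
```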

At present, there is no consensus regarding which fairness definitions are most suitable; in some cases, these definitions are not even compatible with each other. The question of how to choose appropriate fairness criteria for machine learning under specific circumstances and design a fair, intelligent decision algorithm with full consideration of social context remains an open research problem.

In addition to the multitude of fairness definitions, researchers have introduced different bias-handling algorithms to address the problem of AI fairness at different stages of an AI model's life-cycle. For example, Bolukbasi et al. [24] devised a method to remove gender bias from word embeddings that are commonly used for natural language processing. Researchers at Shanghai Jiao Tong University [25] proposed the use of social welfare functions that encode fairness in the reward mechanism, and suggested that the fairness-of-resource-allocation problem be addressed in the framework of deep reinforcement learning.

Large AI companies are active in developing fairness evaluation and debiasing tools to promote the implementation of AI fairness in real intelligent systems. Google released an interactive visualization tool named What-If that enables data scientists to examine complex machine learning models in an intuitive way. The tool integrates a few fairness metrics, including group unawareness, equal opportunity, and demographic parity, to assess and diagnose the fairness of machine learning models. IBM created AI Fairness 360 [26], an extensible open-source toolkit for handling algorithmic biases. The package integrates a comprehensive set of fairness criteria and debiasing algorithms for datasets and models.

    3. A comparison between China and other countries regarding AI ethical principles and governance technology development

In this section, we compare the ongoing efforts of China with those of other countries regarding the development of ethical principles and governance technology for AI. From the governmental and institutional perspective, both governmental agencies and the private sector in China have taken active initiatives in building ethical guidelines for AI and in promoting an awareness of the beneficial use of AI. From the perspective of academic research and industrial development, Chinese researchers and practitioners have been actively developing governance technologies for AI along with their international peers.

    3.1. Governmental and institutional perspective

The world's major economic powers have released their ethical guidelines and governance regulations for AI. In 2018, the EU brought the GDPR into force; in April 2019, the EU's High-Level Expert Group on AI presented the Ethics guidelines for trustworthy AI [27]. In 2019, the White House issued the Executive order on maintaining American leadership in artificial intelligence, and demanded that the National Institute of Standards and Technology (NIST) devise a plan to develop technical standards for reliable, robust, and trustworthy AI systems [28]. Along with the EU and the United States, China is among the major governments that have launched nationwide AI governance and ethics initiatives. The United Nations (UN) is also promoting AI ethics, and declared its humanistic attitude toward AI at the United Nations Educational, Scientific and Cultural Organization (UNESCO) AI conference in March 2019, which stressed artificial intelligence with human values for sustainable development. However, no joint multinational action has yet been taken by governments.

In addition, big tech corporations such as Google, Amazon, and Microsoft, as well as their Chinese counterparts Baidu, Alibaba, and Tencent, have been actively involved in AI ethics and governance initiatives, both domestically and internationally. Tencent announced its "available, reliable, comprehensible, controllable" (ARCC) principles for AI in 2018, and released a report on AI ethics in a digital society in 2019 [29]. Baidu joined the Partnership on AI [30], an international consortium consisting of major players in the AI industry, whose mission is to establish best practices for AI systems for socially beneficial purposes.

    3.2. Academic research and industrial development perspective

In Section 2, we highlighted the development efforts of Chinese researchers in AI ethical principles and emerging governance technologies. In most of the four major areas relevant to AI ethical principles and governance, Chinese researchers have been promptly developing new models, algorithms, and tools in parallel with their international peers.

In the area of data security and privacy (Section 2.1), WeBank's FATE is one of the major open-source projects for federated learning. According to Ref. [31], among these open-source projects, FATE is the only framework that supports distributed federated learning, in comparison with Google's TensorFlow federated learning framework.

In the area of the safety and robustness of AI, since the vulnerability of DNNs was revealed by Szegedy et al. [32], many studies have been carried out worldwide to address this issue. Among these efforts, new algorithms developed by Chinese researchers have demonstrated excellent performance in adversarial testing and defense. At the 2017 Conference on NIPS, Google Brain organized an international competition on adversarial attack and defense methods, in which the team from Tsinghua University won first place in both the attack and the defense tracks [33]. As an example of international cooperation, Baidu has worked with researchers from the University of Michigan and the University of Illinois at Urbana-Champaign to discover the vulnerabilities of DNNs adopted in LiDAR-based autonomous driving detection systems [34].

In the area of the transparency and accountability of AI, Chinese researchers from both the academic community and the private sector, including Alibaba and Baidu, have actively proposed new interpretation methods and visualization tools. Large international companies such as IBM, Facebook, and Microsoft have released their AI explainability tools, which implement general frameworks for AI interpretation. For example, IBM introduced AI Explainability 360, an open-source software toolkit that integrates eight AI interpretation methods and two evaluation metrics [35]. In comparison, Chinese companies should make additional efforts to integrate new algorithms and prototypes into open-source tools and make them widely available to the world.

Although the concept of AI fairness is relatively new, it has received a considerable amount of attention in the AI academic community. As mentioned in Section 2.3, investigation into AI fairness issues often requires an interdisciplinary approach. In 2018, the Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (ACM FAT) was launched, with a focus on ethical issues pertaining to AI such as algorithmic transparency, fairness in machine learning, and bias. This conference attracted more than 500 attendees, including AI academics and scholars in social sciences such as ethics, philosophy, law, and public policy. Although this conference has become one of the major venues for research on AI fairness, it is not well known among Chinese AI researchers. It is necessary to encourage more multidisciplinary research on this emerging field in the AI academic community in China.

In summary, governments, academics, and industries across the world have recognized the significance of AI ethical principles and have taken the initiative to develop AI governance technologies. Among the world's major governments, China has launched nationwide AI governance and ethics initiatives. We believe that it is necessary to foster international collaboration in this new field for the sake of the global community and our shared future. It is unfortunate that such joint efforts have not been emphasized at all levels; they must be further extended and strengthened.

    4. Grand challenges in AI governance research

In fulfilling the fundamental principles of AI for the good of society, numerous research challenges remain in the task of bringing ethical values and regulations into the current AI governance framework. In this section, we elaborate on the major challenges from the following aspects: the AI ethical decision framework, the AI engineering process, and interdisciplinary research.

    4.1. The ethical decision framework

The concept of an ethical decision framework is a major topic in AI governance research. Researchers at the Hong Kong University of Science and Technology and Nanyang Technological University [36] reviewed publications on existing ethical decision frameworks from leading AI conferences and proposed a taxonomy dividing the field into four areas: exploring ethical dilemmas, individual ethical decision frameworks, collective ethical decision frameworks, and ethics in human-AI interactions. Other researchers [37] presented a survey on artificial general intelligence (AGI) safety, in which the ethical decision problem is often formulated in a reinforcement learning framework. They assumed that rational intelligent agents can learn human moral preferences and rules through their experiences interacting with social environments. Thus, in the framework of reinforcement learning, AI designers can specify ethical values as reward functions in order to align the goal of a rational agent with that of its human partners and to stimulate the agent to behave according to human moral norms. It should be noted that in this nascent research area, scientists must overcome the main bottlenecks of current data-driven DNNs to achieve human-level automated moral decision-making, and must extensively evaluate these frameworks after their deployment in real and complicated moral circumstances.

4.1.1. How to model moral rules and values

In most cases, it is difficult to directly devise mathematical functions to model ethical values—especially moral dilemmas, where people must make difficult decisions among negative choices. It is viable to take a data-driven, learning-based approach to enable autonomous agents to learn appropriate ethical representations from human demonstrations. For example, researchers from the Massachusetts Institute of Technology (MIT) launched the Moral Machine project [38] to collect datasets about various ethical dilemmas in a crowdsourcing way. However, such crowdsourced self-reported preferences on moral dilemmas can unavoidably deviate from actual decision behaviors, because there is no mechanism to ensure genuine user choices.

    4.1.2. Common sense and context awareness in ethical decision-making

Despite the rapid progress of modern AI technologies, DNN-based AI agents are mostly good at recognizing latent patterns, and are not very effective in supporting general cognitive intelligence within an open and unstructured environment. In situations with complicated moral dilemmas, state-of-the-art AI agents do not have sufficient cognitive ability to perceive the correct moral context and successfully resolve the dilemmas through commonsense reasoning. Recent efforts in this field have explored game-theoretical moral models or Bayesian-based utility functions. Researchers at Duke University [39] adopted the game-theoretical approach to model ethical dilemmas and align an AI's ethical values with human values. Researchers at MIT [40] developed a computational model to describe moral dilemmas as a utility function, and introduced a hierarchical Bayesian model to represent social structure and group norms. These early attempts may not be general enough to support common moral scenarios, but they suggest new research directions on combining the powers of both DNNs and interpretable Bayesian reasoning models in the field of AI ethics.

    4.1.3. Safe reinforcement learning

Many researchers have adopted deep reinforcement learning to model moral constraints as reward functions, and have used the Markov decision process to implement sequential decisions. However, deep reinforcement learning is far from mature, and has a long way to go before it becomes available for applications other than gaming. One of the major problems with this method relates to the safety of the reinforcement learning process. A malicious agent has many options to bypass regulatory ethical constraints by tricking the reward mechanism. For example, it can use reward hacking to obtain more reward than intended by exploiting loopholes in the process of determining the reward.
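The reward-hacking problem can be illustrated with a deliberately tiny example (Python; the actions, reward values, and penalty weight are illustrative assumptions in the spirit of constrained reinforcement learning, not any cited method): a loophole action maximizes the raw reward, and only an explicit violation penalty restores the intended behavior.

```python
# Toy illustration of reward hacking in a two-action setting: the raw reward
# scores a loophole action highest, so an agent maximizing it "hacks" the
# objective; pricing in an ethical-constraint penalty (a crude constrained-RL
# surrogate) restores the intended optimum.
raw_reward = {"comply": 1.0, "exploit_loophole": 1.5}   # loophole pays more
violation  = {"comply": 0.0, "exploit_loophole": 1.0}   # constraint-violation cost

def best_action(penalty):
    shaped = {a: raw_reward[a] - penalty * violation[a] for a in raw_reward}
    return max(shaped, key=shaped.get)

print(best_action(penalty=0.0))  # 'exploit_loophole': reward hacking
print(best_action(penalty=2.0))  # 'comply' once violations are priced in
```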

    4.2. Integrating ethical principles in the AI engineering process

Ethical principles should be transformed into software specifications guiding the design and implementation of AI systems. From the perspective of software engineering, the development of AI models and the operation of AI systems are often organized in the clearly defined life-cycle shown in Fig. 2, which includes AI task definition, data collection and preparation, model design and training, model testing and verification, and model deployment and application. Software specifications of AI security, safety, and fairness should be implemented throughout the entire AI development and operations (DevOps) life-cycle.

At the beginning, AI tasks need to be defined and analyzed during the requirement analysis phase. Designers can adopt different kinds of ethical specifications and evaluation metrics for the customized requirements of different application scenarios. During the data collection and preparation phase, engineers must ensure the validity of the training dataset by eliminating corrupted data samples and reducing the potential bias of the dataset. With a balanced and correct dataset, engineers can design appropriate model structures and perform model training according to the ethical specifications. After the model design and training phase, the preliminary model must be tested and verified in accordance with the moral specifications describing the constraints in terms of fairness, robustness, transparency, and task performance. If the model cannot pass the model testing and verification phase, the engineers must redesign the model, recheck the data, and retrain the model. Otherwise, the model can be integrated with other software components and deployed in the intelligent system. While the system is running, its runtime behaviors must be constantly examined and must conform to the ethical principles. If any violations of the moral constraints occur, the engineers must make further improvements to the AI models and launch a new DevOps life-cycle.
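One possible shape for the testing-and-verification gate in this life-cycle is sketched below (Python; the metric names and threshold values are project-specific assumptions, not a standard): a model is promoted to deployment only if task performance, adversarial robustness, and fairness checks all pass; otherwise it is sent back for redesign and retraining.

```python
# Sketch of an ethics gate in an AI DevOps pipeline. Thresholds and metric
# names are illustrative; a real project would derive them from its own
# ethical specifications.
THRESHOLDS = {
    "accuracy": 0.90,            # task performance
    "robust_accuracy": 0.70,     # accuracy under adversarial perturbation
    "fairness_gap_max": 0.05,    # e.g. demographic-parity gap upper bound
}

def ethics_gate(metrics):
    """Return (passed, reasons); failures send the model back to redesign."""
    reasons = []
    if metrics["accuracy"] < THRESHOLDS["accuracy"]:
        reasons.append("task performance below threshold")
    if metrics["robust_accuracy"] < THRESHOLDS["robust_accuracy"]:
        reasons.append("insufficient adversarial robustness")
    if metrics["fairness_gap"] > THRESHOLDS["fairness_gap_max"]:
        reasons.append("fairness gap exceeds limit")
    return (not reasons), reasons

ok, why = ethics_gate({"accuracy": 0.93, "robust_accuracy": 0.65,
                       "fairness_gap": 0.02})
print(ok, why)  # fails: insufficient adversarial robustness -> retrain
```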

To streamline such an ethically aware AI DevOps life-cycle, many AI engineering tools need to be developed and integrated into a comprehensive and flexible environment for AI model designers and system developers. As discussed in the previous sections, these tools must implement core techniques such as federated learning, adversarial testing, formal verification, fairness evaluation, interpretation, provenance, and runtime sandboxing, in addition to safety monitoring. At present, tools such as AI Fairness 360 are still under development; thus, major AI DevOps platforms have not yet encapsulated these tools as the main functions required by AI ethical principles. More research and engineering endeavors are essential in order to promote an open AI DevOps environment with built-in ethical support, where researchers and practitioners can conveniently explore novel AI ethical techniques, systematically evaluate different assessment metrics, and conceive new solutions to different moral situations in various application domains.

With the progress of AI governance technologies, it can be expected that regulations and standards on the ethical aspects of AI will be put in place at the corporate/enterprise, group, national, and international levels to enforce the compliance of AI systems and products. In fact, worldwide AI standardization research efforts have been underway for years. For example, the International Organization for Standardization (ISO) launched a working group on AI trustworthiness under its subcommittee on artificial intelligence (SC 42) in 2018, and the National Artificial Intelligence Standardization Steering Committee released a white paper analyzing AI ethical risks in 2019 [41]. Hopefully, combined efforts in AI engineering and standardization will further promote awareness of the ethical problems of AI and will accelerate the integration of ethical values into AI systems and products within the AI industry and community.

    4.3. Interdisciplinary research on AI governance

AI systems are complex and advanced social-technical systems, which often involve machine learning models, supporting software components, and social organizations. Researchers from multiple disciplines must conduct social-systems analysis of AI [42] in order to understand the impact of AI under different social, cultural, and political settings. Such a social-systems analysis demands an interdisciplinary research approach that leverages relevant studies in philosophy, law, and sociology, among other disciplines. Through multidisciplinary studies, AI designers and developers can work collaboratively with experts in law and sociology to conduct holistic modeling and analysis of the ethical aspects of intelligent systems by assessing the possible effects on all parties and by dealing with moral issues during every phase and state of AI DevOps.

There is no doubt that such an interdisciplinary and holistic approach to socio-technical engineering demands deep collaboration among AI developers and their partners with expertise in other relevant domains. Despite the increasing awareness of AI ethical principles among researchers in computer science, philosophy, law, and sociology in China, most of these scholars' research efforts are still being carried out on separate tracks and have not been fully synergized to address the grand challenges discussed above. Thus, we believe that it is critical to bring together experts from all relevant disciplines to work on the ethical problems of AI with clearly specified goals. First, based on the commonly accepted ethical principles of AI, we need to identify critical and typical ethical scenarios in applications such as autonomous driving, intelligent courts, and financial loan decisions, and call for novel research ideas and solutions from multidisciplinary teams. In these cases, complicated social contexts can be properly abstracted and described as AI ethical specifications. Second, an open and universal platform should be made available to foster interdisciplinary research on AI ethical principles. Such a platform would greatly enable researchers from different backgrounds to share their insights and contributions, and to compare different frameworks and ethical criteria in building intelligent machines with ethical values and rules.

    5. Conclusion

The rapid development and deployment of AI indicate an upcoming fundamental transformation of our society. This transformation can be a great opportunity to construct a human community with a shared future, and to promote the sustainable development of society and the natural environment. But without sufficient and effective governance and regulation, its implications might be unprecedented and negative. In order to ensure that these changes are beneficial before they are completely embedded into the infrastructure of our daily life, we need to build a solid and feasible AI governance framework to regulate the development of AI according to the ethics and values of humanity. In this way, we can make AI accountable and trustworthy, and foster the public's trust toward AI technology and systems.

This paper introduced the ongoing efforts to develop AI governance theories and technologies from the perspective of China. Many Chinese researchers have been motivated to address the ethical problems of current AI technologies. To overcome the security problem of data-driven AI, research teams from companies and universities in China have endeavored to develop federated learning technology. To ensure the safety and robustness of DNN models, researchers have proposed new algorithms in adversarial testing and formal verification. Furthermore, research teams are investigating effective frameworks in the areas of AI interpretation, provenance, and forensics. These efforts are mostly in their preliminary stages and need further strengthening in order to deliver mature solutions for widespread adoption and practice.

We suggest the following actions to push forward current initiatives on AI governance: Firstly, governments, foundations, and corporations should conduct cross-disciplinary, cross-sector, and multinational collaborations to establish a consensus on AI ethical principles. Secondly, they must intensify the collaborative research and development of AI governance technologies in order to keep pace with the rapid progress of AI. Thirdly, open AI DevOps platforms with built-in ethics-relevant tools should be developed to support all the stakeholders of different AI systems in evaluating the functional and regulatory compliance of those systems. Fourthly, clearly defined AI moral scenarios with significant social impact should be identified so that experts from different disciplines can work collaboratively to address the ethical challenges of AI. Lastly, we must actively promote ethical education for every stakeholder in AI research and development, application, and management, so as to significantly enhance their awareness of ethics and promote general practices of responsible conduct with AI.

    Compliance with ethics guidelines

Wenjun Wu, Tiejun Huang, and Ke Gong declare that they have no conflicts of interest or financial conflicts to disclose.
