
    When Does Sora Show: The Beginning of TAO to Imaginative Intelligence and Scenarios Engineering

    2024-04-15 09:36:30
    IEEE/CAA Journal of Automatica Sinica, 2024, Issue 4

    By Fei-Yue Wang, Qinghai Miao, Lingxi Li, Qinghua Ni, Xuan Li, Juanjuan Li, Lili Fan, Yonglin Tian, and Qing-Long Han

    DURING our discussions at workshops for writing "What Does ChatGPT Say: The DAO from Algorithmic Intelligence to Linguistic Intelligence" [1], we expected that the next milestone for Artificial Intelligence (AI) would be in the direction of Imaginative Intelligence (II), i.e., something similar to automatic words-to-videos generation or intelligent digital movie/theater technology that could be used for conducting new "Artificiofactual Experiments" [2] to replace conventional "Counterfactual Experiments" in scientific research and technical development for both natural and social studies [2]–[6]. Now we have OpenAI's Sora, so soon; yet this is far from the final milestone, and it is just the beginning.

    As illustrated in [1], [7], there are three levels of intelligence, i.e., Algorithmic Intelligence, Linguistic Intelligence, and Imaginative Intelligence, and according to "The Generalized Gödel Theorem" [1], they are bounded by the following relationship:

    AI ⊂ LI ⊂ II.

    Here AlphaGo was the first milestone of Algorithmic Intelligence, while ChatGPT was that of Linguistic Intelligence. Now, with Sora emerging as the first milestone of Imaginative Intelligence, the triad forms the initial technical version of the decision-making process outlined in the Chinese classic I Ching (or Book of Changes, see Fig. 1): Hexagrams (Rule and Composition), Judgements and Lines (Hexagram Statements and Line Statements, or Question and Answer), and Ten Wings (Commentaries, or Imagination and Illustration).

    Fig. 1. I Ching: The Book of Changes for Decision Intelligence.

    What should we expect for the next milestone in intelligent science and technology? What are their impacts on our life and society? Based on our previous reports in [8], [9] and recent developments in blockchain and smart-contract-based DeSci and DAO for decentralized autonomous organizations and operations [10], [11], several workshops [12]–[16] have been organized to address these important issues. The main results are summarized in this perspective.

    Historic Perspective

    Text-to-Image (T2I) and Text-to-Video (T2V) are two of the most representative applications of Imaginative Intelligence (II). In terms of T2I, traditional methods such as VAE and GAN were unsatisfactory, prompting OpenAI to explore new avenues with the release of DALL-E in early 2021. DALL-E draws inspiration from the success of language models in the NLP field, treating T2I generation as a sequence-to-sequence translation problem using a discrete variational autoencoder (VQ-VAE) and a Transformer. By the end of 2021, OpenAI's GLIDE introduced Denoising Diffusion Probabilistic Models (DDPMs) into T2I generation, proposing classifier-free guidance to improve text faithfulness and image quality. The diffusion model, with its advantages in high resolution and fidelity, began to dominate the field of image generation. In April 2022, the release of DALL-E 2 showcased stunning image generation performance globally, a giant leap made possible by the capabilities of the diffusion model. Subsequently, the T2I field saw a surge, with a series of T2I models developed, such as Google's Imagen in May, Parti in June, Midjourney in July, and Stable Diffusion in August, all beginning to commercialize and forming a scalable market.
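    As a concrete illustration of the classifier-free guidance idea introduced by GLIDE, the sketch below blends conditional and unconditional noise predictions with a guidance weight; the array values are toy stand-ins, not outputs of any real model.

```python
import numpy as np

def cfg_noise(eps_cond, eps_uncond, w):
    """Classifier-free guidance: blend the conditional and unconditional
    noise predictions. w = 0 ignores the text condition; w > 1 pushes
    samples toward the condition at some cost in diversity."""
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy stand-in predictions (real models output one value per latent pixel).
eps_c = np.array([1.0, 0.5])
eps_u = np.array([0.2, 0.1])
guided = cfg_noise(eps_c, eps_u, w=3.0)  # amplifies the conditional direction
```

    A larger `w` trades sample diversity for stronger adherence to the text prompt, which is why it is reported to improve text faithfulness.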

    Compared to T2I, T2V is a more important but more challenging task. On one hand, it is considered important because the model needs to learn the structure and patterns hidden in video, similar to how humans understand the world through their eyes. Video generation is therefore a task close to human intelligence and is considered a key path toward artificial general intelligence. On the other hand, it is considered difficult because video generation must learn not only the appearance and spatial distribution of objects but also the dynamic evolution of the world in the temporal domain. In addition, the lack of high-quality video data (especially text-video paired data) and the huge demand for computing power pose great challenges. Therefore, compared to the success of T2I, progress in T2V has been slower. Similar to early T2I, T2V in its initial stages was also based on methods such as GAN and VAE, resulting in low-resolution, short-duration, and minimally dynamic videos that did not reach practical levels.

    Nevertheless, the field of video generation has evolved rapidly over the last two years, especially since late 2023, when a large number of new methods emerged. As shown in Fig. 2, these models can be classified according to their underlying backbones. The breakthrough began with language models (Transformers), which fully utilize the attention mechanism and scalability of the Transformer; later, the diffusion model family became more prosperous, with high definition and controllability as its advantages. Recently, the strengths of both Transformer and diffusion models have been combined to form the backbone of DiT [17].

    Fig. 2. Brief history of video generation and representative models. Sora indicates the beginning of the new era of Imaginative Intelligence.

    The families based on language models are shown on the left side of Fig. 2. VideoGPT [18] utilizes VQ-VAE to learn discrete latent representations of raw videos, employing 3D convolutions and axial self-attention; a GPT-like architecture then models these latents with spatiotemporal position encodings. NUWA [19], an autoregressive encoder-decoder Transformer, introduces 3DNA to reduce computational complexity, addressing the characteristics of visual data. CogVideo [20] features a dual-channel attention Transformer backbone, with a multi-frame-rate hierarchical training strategy to better align text and video clips. MaskViT [21] shows that good video prediction models can be created by pre-training Transformers via Masked Visual Modeling (MVM); it introduces both spatial and spatiotemporal window attention, as well as a variable token masking ratio. TATS [22] focuses on generating longer videos; based on 3D-VQGAN and Transformers, it introduces a technique that extends generation to videos of thousands of frames. Phenaki [23] is a bidirectional masked Transformer conditioned on pre-computed text tokens; it also introduces a tokenizer for learning video representations that compresses the video into discrete tokens, and by using causal attention in time it can handle variable-length videos. MAGVIT [24] proposes an efficient video generation model through masked token modeling and multi-task learning: it first learns a 3D Vector-Quantized (VQ) autoencoder to quantize videos into discrete tokens, and then learns a video Transformer through multi-task masked token modeling. MAGVIT-v2 is a video tokenizer designed to generate concise and expressive tokens for both video and image generation using a universal approach; with this new tokenizer, the authors demonstrated that LLMs outperform diffusion models on standard image and video generation benchmarks, including ImageNet and Kinetics. VideoPoet [25] adopts a multi-modal Transformer architecture with a decoder-only structure. It uses the MAGVIT-v2 tokenizer to convert images and videos of arbitrary length into tokens, along with audio tokens and text embeddings, unifying all modalities into the token space. Subsequent operations are carried out in the token space, enabling the generation of coherent, high-action videos up to 10 seconds in length at once.
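    To make the tokenization pipeline shared by this family more concrete, here is a minimal NumPy sketch of its two core operations: vector-quantizing latent vectors into discrete tokens (as a VQ autoencoder such as MAGVIT's would) and randomly masking tokens for masked-token training. The shapes, codebook size, and masking ratio are illustrative assumptions, not values taken from any of the papers above.

```python
import numpy as np

rng = np.random.default_rng(0)

def vq_tokenize(latents, codebook):
    """Map each latent vector to the index of its nearest codebook entry,
    i.e., the discrete token a VQ autoencoder would emit."""
    # (N, 1, D) - (1, K, D) -> squared distances of shape (N, K)
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

def mask_tokens(tokens, mask_ratio, mask_id):
    """Replace a random fraction of tokens with a [MASK] id, as in
    masked visual / masked token modeling."""
    out = tokens.copy()
    n_mask = int(round(mask_ratio * len(tokens)))
    idx = rng.choice(len(tokens), size=n_mask, replace=False)
    out[idx] = mask_id
    return out

codebook = rng.normal(size=(8, 4))   # 8 codes of dimension 4 (toy sizes)
latents = rng.normal(size=(16, 4))   # 16 encoder outputs to quantize
tokens = vq_tokenize(latents, codebook)                   # shape (16,)
masked = mask_tokens(tokens, mask_ratio=0.5, mask_id=8)   # 8 tokens masked
```

    During training, a Transformer is then asked to predict the original tokens at the masked positions, which is what lets it learn video structure without autoregressive frame-by-frame decoding.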

    The families based on diffusion models are shown on the right side of Fig. 2. Video Diffusion Models (VDM) presented the first results on video generation using diffusion models by extending the image diffusion architecture. VDM employs a space-time factorized U-Net, jointly training on image and video data; it also introduces a conditional sampling technique for extending videos spatially and temporally to longer durations and higher resolutions. Make-A-Video extends a T2I model to T2V with a spatiotemporally factorized diffusion model, removing the need for text-video pairs; it fine-tunes the T2I model for video generation, benefiting from effective model weight adaptation and improved temporal information fusion compared to VDM. Imagen Video [26] is a text-conditional video generation system that uses a cascade of video diffusion models; it incorporates fully convolutional temporal and spatial super-resolution models and a v-parameterization of diffusion models, enabling the generation of high-fidelity videos with a high degree of controllability and world knowledge. Runway Gen-1 [27] extends latent diffusion models to video generation by introducing temporal layers into a pre-trained image model and training jointly on images and videos. PYoCo [28] explores fine-tuning a pre-trained image diffusion model with video data, achieving substantially better photorealism and temporal consistency. VideoCrafter [29], [30] introduces two diffusion models: the T2V model generates realistic and cinematic-quality videos, while the I2V model transforms an image into a video clip while preserving content constraints. EMU VIDEO [31] first generates an image conditioned on the text and then generates a video conditioned on both the text and the generated image, using adjusted noise schedules and multi-stage training for high-quality, high-resolution video generation without a deep cascade of models. Stable Video Diffusion [32] is a latent video diffusion model that emphasizes the importance of a well-curated pre-training dataset, providing a strong multi-view 3D prior for fine-tuning multi-view diffusion models that generate multiple views of objects. Lumiere [33] is a T2V diffusion model with a Space-Time U-Net architecture that generates the entire temporal duration of a video at once, leveraging spatial and temporal down- and up-sampling and a pre-trained text-to-image diffusion model to generate full-frame-rate, low-resolution videos on multiple space-time scales.
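    All of the models above build on the DDPM machinery; a minimal sketch of its two halves, closed-form forward noising and one reverse denoising step, follows. The linear noise schedule and step count are toy assumptions; production systems tune these carefully.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear noise schedule over T diffusion steps (real systems tune this).
T = 10
betas = np.linspace(1e-4, 0.2, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def q_sample(x0, t):
    """Forward diffusion: noise clean data x0 to step t in closed form."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

def ddpm_step(xt, eps_hat, t):
    """One reverse denoising step, given a noise prediction eps_hat
    (normally produced by the trained U-Net or Transformer backbone)."""
    mean = (xt - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    return mean + np.sqrt(betas[t]) * rng.normal(size=xt.shape)

x0 = rng.normal(size=(4,))       # a toy "clean" sample
xt, eps = q_sample(x0, t=5)      # noised version plus the noise used
x_prev = ddpm_step(xt, eps, t=5) # one step back toward the data
```

    A video diffusion model differs mainly in what the backbone that predicts `eps_hat` looks like, which is exactly the design axis the models in this section vary.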

    The center of Fig. 2 shows the fusion of the language model and the diffusion model [17], which is believed to be the way leading T2V to the state of the art. The Video Diffusion Transformer (VDT) [34] pioneered the fusion of Transformer and diffusion models, demonstrating its enormous potential in the field of video generation. VDT's strength lies in its outstanding ability to capture temporal dependencies, enabling it to generate temporally coherent video frames, including simulating the physical dynamics of three-dimensional objects over time. Its unified spatiotemporal masking mechanism allows VDT to handle various video generation tasks, achieving wide applicability, and its flexible handling of conditional information, such as simple token-space concatenation, effectively unifies information of different lengths and modalities. Unlike the U-Net, which is primarily designed for images, the Transformer can better handle the time dimension by leveraging its powerful tokenization and attention mechanisms to capture long-term or irregular temporal dependencies. Only when a model learns (or memorizes) world knowledge, such as spatial-temporal relationships and physical laws, can it generate videos that match the real world; therefore, model capacity becomes a key component of video diffusion. The Transformer has proven to be highly scalable, making it more suitable than the 3D U-Net for addressing the challenges of video generation. In December 2023, Stanford and Google introduced W.A.L.T [35], a Transformer-based approach for latent video diffusion models (LVDMs), featuring two main design choices. First, it employs a causal encoder to compress images and videos into a single latent space, facilitating cross-modality training and generation. Second, it utilizes a window attention architecture specifically designed for joint spatial and spatiotemporal generative modeling. This study represents the first successful empirical validation of a Transformer-based framework for concurrently training image and video latent diffusion models.

    Sora's highlight is just the beginning of a new era in video generation, and it is foreseeable that this track will become very crowded. IT giants including Google, Microsoft, Meta, and Baidu; startups such as Runway, Pika, Midjourney, and Stability.ai; and universities such as Stanford, Berkeley, and Tsinghua are all powerful competitors.

    Fig. 3. Brief principle diagram of Sora.

    Looking into Sora: A Parallel Intelligence Viewpoint

    Upon its release, Sora sparked a huge wave of excitement, with its accompanying demos showcasing impressive results. Sora produces videos with high fidelity, rich details, significant object changes, and smooth transitions between multiple perspectives. While most video generation models can only produce videos lasting 3 to 5 seconds, Sora can create videos up to one minute in length while maintaining narrative coherence, consistency, and common sense. Sora represents a milestone advancement in AI following ChatGPT.

    What underpins Sora’s powerful video generation capabilities? From Sora’s technical report and the development history of video generation models, several key points can be summarized.

    The first is the model architecture. Sora adopts the Diffusion Transformer (DiT), as shown in the upper-left corner of Fig. 3. Transformers have demonstrated powerful capabilities in large language models, with their attention mechanism effectively modeling long-range dependencies in spatiotemporal sequential data. Unlike earlier methods that perform windowed attention calculations, or the Video Diffusion Transformer (VDT), which computes attention in the temporal and spatial dimensions separately, Sora merges the time and space dimensions and processes them through a single attention mechanism. Moreover, Transformers exhibit high computational efficiency and scalability, forming the basis for the scaling laws of large models. The diffusion model, on the other hand, with its solid foundation in probability theory, offers high resolution and good generation quality, as well as flexibility and controllability in video generation processes conditioned on text or images. DiT combines the advantages of both the Transformer and the diffusion model.
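    The difference between factorized and merged attention can be sketched in a few lines of NumPy. The shapes are toy assumptions: in the factorized (VDT-style) variant, attention runs over space and time separately; in the merged variant, which matches the design Sora's technical report describes, time and space are flattened into one sequence of spacetime tokens attended over jointly.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention over the last two axes (..., L, D)."""
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

T, S, D = 4, 6, 8                       # frames, patches per frame, channels
x = np.random.default_rng(1).normal(size=(T, S, D))

# Factorized (VDT-style): attend within each frame, then along each patch track.
spatial = attention(x, x, x)            # S tokens interact per frame
xt_ = x.swapaxes(0, 1)                  # (S, T, D)
temporal = attention(xt_, xt_, xt_)     # T tokens interact per patch position

# Merged: flatten time and space into one sequence of T*S spacetime tokens
# and attend over all of them at once.
flat = x.reshape(T * S, D)
joint = attention(flat, flat, flat)     # every token attends to every other
```

    The merged form lets a patch in one frame attend directly to any patch in any other frame, at the cost of a quadratic sequence length of T*S, which is where the Transformer's scalability matters.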

    The second is data processing. As shown on the right side of Fig. 3, Sora leverages existing tools, such as the captioner used in DALL-E 3, to generate high-quality captions for raw videos, addressing the lack of video-text pairs. Additionally, through GPT, it expands users' short prompts to provide more precise conditions for video generation over long periods.

    The third is feature representation. During training, Sora first compresses videos into a low-dimensional latent space (shown in the dashed rectangle on the left of Fig. 3) in both the spatial and temporal dimensions. Corresponding to the tokenization of text, Sora patchifies the low-dimensional representation in latent space into spacetime patches, which are input into DiT for processing, ultimately generating new videos. From the perspective of parallel intelligence [36]–[44], the original videos come from the real system, while the latent space is the virtual system; operations on the virtual system make it more convenient to take advantage of the Transformer and the diffusion model.
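    A plausible sketch of the patchify step, assuming non-overlapping patches (Sora's exact patching scheme is not public): a latent video of shape (T, H, W, C) is cut into spacetime blocks, each flattened into one token.

```python
import numpy as np

def patchify(latent, pt, ph, pw):
    """Cut a latent video (T, H, W, C) into non-overlapping spacetime
    patches of size (pt, ph, pw), flattening each patch into one token."""
    T, H, W, C = latent.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0
    x = latent.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)      # group the patch axes together
    return x.reshape(-1, pt * ph * pw * C)    # (num_tokens, token_dim)

latent = np.zeros((8, 32, 32, 4))             # toy latent video (assumed sizes)
tokens = patchify(latent, pt=2, ph=4, pw=4)
print(tokens.shape)  # → (256, 128): 4*8*8 tokens of dimension 2*4*4*4
```

    The token count scales with video duration and resolution, which is how one architecture can handle videos of varying length and aspect ratio.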

    Since OpenAI has not publicly disclosed the technical details of Sora, there may be other undisclosed technologies that have contributed to Sora's breakthrough in video generation capabilities. It should be noted that Sora's technical roadmap is far from mature. A large number of institutions are actively exploring and collaborating with each other; Microsoft, Google, Runway, Pika, Stanford, and others have all iterated through multiple versions and are still moving forward. The era of Imaginative Intelligence is just beginning.

    Is Sora a World Model?

    Although the released video clips from Sora have attracted a lot of attention, OpenAI's claim that Sora is essentially a world simulator or a world model has sparked considerable controversy. Among the critiques, LeCun's is the most noteworthy.

    A world model is a system that comprehends the real world and its dynamics. By using various types of data, it can be trained in an unsupervised manner to learn a spatial and temporal representation of the environment, in which a wide range of situations and interactions encountered in the real world can be simulated. To create these models, researchers face several open challenges, such as keeping consistent maps of the environment and the ability to navigate and interact within it. A world model must also capture not just the dynamics of the world but also the dynamics of its inhabitants, including machines and humans.

    Thus, can Sora be called a world model? We analyze this from two perspectives.

    Firstly, has Sora learned a world model? Judging from the outputs, most video clips are smooth and clear, without strange or jumpy scenes, and they align well with common sense. Sora can generate videos with dynamic camera movements: as the camera moves and rotates, characters and scene elements move consistently in a 3D environment. This implies that Sora already has the potential to understand and create in spatial-temporal space. Through these official demos, some have exclaimed that Sora has blurred the boundary between reality and the virtual for the first time in history. Therefore, we can say that Sora has learned some rules of real-world dynamics. However, upon closer observation of these videos, there are still scenes that violate the laws of reality: for example, the process of a cup breaking, the incorrect direction of a treadmill, a puppy suddenly appearing and disappearing, and ants with only four legs. This indicates that Sora still has serious knowledge flaws in complex scenes, across time scales, and so on. There is still a significant gap compared to a sophisticated physics engine.

    Secondly, does Sora represent the direction of world model development? From a technical perspective, Sora combines the advantages of large language models and diffusion models, representing the highest level of generative models. Scaling video generation models like Sora seems to be a promising approach to building a universal simulator of the physical world, which is a key step toward AGI. However, Yann LeCun holds a different view. He believes that generative models need to learn the details of every pixel, making them too inefficient and doomed to fail. As an advocate for world models, he led Meta's team to propose the Joint Embedding Predictive Architecture (JEPA) [45], believing that predictive learning in a joint embedding space is more efficient and closer to the way humans learn. The latest release of V-JEPA also demonstrates the preliminary results of this approach.

    In summary, Sora has gained a certain understanding of real-world dynamics. However, its functionality is still very limited, and it struggles with complex scenarios. Whether Sora ultimately succeeds or fails, it represents a meaningful attempt on the road to exploring world models. Other diverse technical paths should also be encouraged.

    Impacts

    Sora and other video generation models have opened up new horizons for Imaginative Intelligence. PGC (Professional Generated Content) will widely adopt AI tools for production, while UGC (User Generated Content) will gradually be replaced by AI tools. The commercialization of AI-generated video tools will accelerate, profoundly impacting various social domains. In fields like advertising, social media, and short videos, AI-generated videos are expected to lower the barrier to short-video creation and improve efficiency. Sora also has the potential to change traditional film production processes by reducing reliance on physical shooting, scene construction, and special effects, thereby lowering film production costs. Additionally, in the field of autonomous driving [46], [47], Sora's video generation capabilities can provide training data, addressing issues such as long-tailed data distributions and the difficulty of obtaining corner cases [12].

    On the other hand, Sora has also brought about social controversies. For example, Sora has raised concerns about the spread of false information. Its powerful image and video generation capabilities reach a level of realism that can deceive people, changing the traditional belief that "seeing is believing" and making it harder to verify the authenticity of video evidence. The use of AI to forge videos for fraud and to spread false information can challenge government regulation and lead to social unrest. Furthermore, Sora may lead to copyright disputes, as there could be potential infringement risks even in the materials used during the training process. Some also worry that generated videos could exacerbate religious and racial issues, intensifying conflicts between different religious groups, ethnicities, and social classes.

    TAO to the Future of Imaginative Intelligence

    Imaginative Intelligence. On the path to achieving imaginative intelligence, Sora represents a significant leap forward in AI's ability to visualize human imagination on a plausible basis. Imaginative intelligence, the highest of the three levels of intelligence, goes beyond learning data, understanding texts, and reasoning; it deals with high-fidelity visual expressions and intuitive representations of imaginary worlds. After ChatGPT made advances in linguistic intelligence through superior text comprehension and logical reasoning, Sora excels at transforming latent creative thoughts into visualized scenes, giving AI the ability to understand and reproduce human imagination. This achievement not only provides individual creators with a quick way to visualize imaginary worlds, but also creates a conducive environment for collective creativity to collide and merge. It overcomes language barriers and makes it possible to merge ideas from different origins and cultures on a single canvas and ignite new creative inspiration. Sora has the potential to be a groundbreaking tool for humanity, allowing exploration of unknown territories and prediction of future trends in virtual environments. As technology continues to advance and its applications expand, the development of Sora and analogous technologies signals the beginning of a new era in which human and machine intelligence reinforce each other and explore the boundaries of the imaginary world together.

    Scenarios Engineering plays a crucial role in promoting the smooth and secure operation of artificial intelligence systems. It encompasses various processes aimed at optimizing the environment and conditions in which artificial intelligence operates, thereby maximizing its efficiency and safety [48]–[51]. With the emergence of advanced models like Sora, which specialize in converting text inputs into video outputs, not only are new pathways for generating dynamic visual content provided, but the capabilities of Scenarios Engineering are also significantly enhanced [52]–[54]. This, in turn, contributes to the improvement of intelligent algorithms through enhanced calibration, validation, analysis, and other fundamental tasks.

    Blockchain and Federated Intelligence. In its very essence, blockchain technology serves to underpin and uphold the "TRUE" characteristics, standing for trustable, reliable, usable, and effective/efficient [55]. Federated control is achieved on the basis of blockchain technology, supporting federated security, federated consensus, federated incentives, and federated contracts [56]. Federated security comes from the security mechanisms in the blockchain, playing a crucial role in the encryption, transmission, and verification of federated data [57]. Federated consensus ensures distributed consensus among all federated nodes on strategies, states, and updates. Federated incentives in a federated blockchain are established for maintenance and management [58]; designing fast, stable, and positive incentives can balance the interests of federated nodes, stimulate their activity, and improve the efficiency of the federated control system. Federated contracts [59] are based on smart-contract algorithms that automatically and securely implement federated control; they mainly function in access control, non-private federated data exchange, local and global data updates, and incident handling.

    DeSci and DAO/TAO. The emergence of new ideas and technologies presents great opportunities for paradigm innovation. For example, the wave of decentralized science (DeSci) is changing the way scientific research is organized. As AI research enters rapid iteration, there are calls to establish new research mechanisms to overcome challenges such as the lack of transparency and trust in traditional scientific cooperation, and to achieve more efficient and effective scientific discoveries. DeSci aims to create a decentralized, transparent, and secure network for scientists to share data, information, and research findings; its decentralized nature enables scientists to collaborate more fairly and democratically. DAO, as a means of implementing DeSci, provides a new organizational form for AI innovation and application [60], [61]. A DAO is a digitally native entity that autonomously executes its operations and governance on a blockchain network via smart contracts, operating independently without reliance on any centralized authority or external intervention [62]–[64]. The attributes of decentralization, transparency, and autonomy inherent in DAOs provide an ideal ecosystemic foundation for developing imaginative intelligence. However, practical implementation has also shed light on certain inherent limitations of DAOs, such as power concentration, high decision-making barriers, and unstable value systems [65]. As such, TRUE autonomous organizations and operations (TAO) were proposed to address these issues, highlighting the fundamental essence of being "TRUE" instead of emphasizing the decentralized attribute of DAOs [66]. Within the TAO framework, decision-making processes hinge upon community consensus, and resource allocation follows transparent and equitable rules, thereby encouraging multidisciplinary experts and developers to actively engage in complex and cutting-edge AI development. Supported by blockchain intelligence [67], TAO stimulates worldwide interest and sustained investment in intelligent technologies by devising innovative incentive mechanisms, reducing collaboration costs, and enhancing the flexibility and responsiveness of community management. As such, TAO provides an ideal ecosystem for nurturing, maturing, and scaling up the development of groundbreaking technologies of imaginative intelligence.

    When will Sora or Sora-like AI technology show us the real road, or TAO, to Imaginative Intelligence that could be practically used for constructing a sustainable and smart society with intelligent industries for a better humanity? We are still expecting it, but now more enthusiastically.

    ACKNOWLEDGMENT

    This work was partially supported by the National Natural Science Foundation of China (62271485, 61903363, U1811463, 62103411, 62203250) and the Science and Technology Development Fund of Macau SAR (0093/2023/RIA2, 0050/2020/A1).
