
    Scene Visual Perception and AR Navigation Applications

ZTE Communications, 2023, Issue 1

LU Ping, SHENG Bin, SHI Wenzhe

(1. State Key Laboratory of Mobile Network and Mobile Multimedia Technology, Shenzhen 518055, China; 2. ZTE Corporation, Shenzhen 518057, China; 3. Shanghai Jiao Tong University, Shanghai 200240, China)

Abstract: With the rapid popularization of mobile devices and the wide application of various sensors, scene perception methods applied to mobile devices occupy an important position in location-based services such as navigation and augmented reality (AR). The development of deep learning technologies has greatly improved machines' ability to visually perceive scenes. This paper introduces the basic framework of scene visual perception and related technologies, describes the specific process of applying them to AR navigation, and discusses future technology development. An application (APP) is designed to improve the practical effect of AR navigation. The APP includes three modules: navigation map generation, a cloud navigation algorithm, and client design. The navigation map generation tool works offline. The cloud stores the navigation map and provides navigation algorithms for the terminal. The terminal realizes local real-time positioning and AR path rendering.

Keywords: 3D reconstruction; image matching; visual localization; AR navigation; deep learning

    1 Introduction

Navigation services applied to mobile devices are an indispensable part of modern society. At present, outdoor positioning and navigation technology has matured, and the Global Positioning System (GPS) can provide relatively accurate position information and supporting navigation services for outdoor pedestrians. For example, the navigation products of Baidu, Amap, Tencent and other companies can meet outdoor pedestrians' needs for location information and navigation services. However, once pedestrians go indoors, e.g., into shopping malls, airports, underground parking lots and other sheltered places, the positioning signal is greatly attenuated by factors like walls, and GPS-based outdoor navigation technology becomes insufficient. Existing indoor localization methods have many constraints in localization accuracy, deployment overhead, and resource consumption, which limits their adoption in real-world navigation applications.

In recent years, researchers have designed a variety of indoor and outdoor positioning solutions based on various types of information such as visible light communication (VLC), built-in sensors, QR codes, and Wi-Fi. However, these solutions have many shortcomings in terms of localization accuracy, deployment difficulty, and equipment overhead. For example, VLC-based methods require indoor LED lights to be upgraded on a large scale, which greatly increases deployment costs. Meanwhile, Wi-Fi-based methods cannot provide accurate direction information, which makes it difficult to meet the needs of precise localization.

In contrast, visual scene perception methods perform target recognition and position calculation by means of image processing, so they can provide relatively high positioning precision without requiring the deployment of additional devices; for this reason, they have been widely researched and applied in recent years.

The main application of scene perception is visual localization, which determines a six-degree-of-freedom (6-DoF) pose from an image. The initialization conditions of visual localization usually require a sparse model of the scene and an estimated pose of the query image. Augmented reality (AR) navigation is an important application scenario of visual localization technologies, as it uses localization to make virtual content interact with the real world. AR navigation technologies have great prospects. Shopping malls have the strongest demand for localization and navigation technologies, and users are very interested in store discount information, personalized advertisements, store ratings, store locations, and indoor route guidance. Scene visual perception and AR navigation can address most of the above needs well and have vast potential for expanding added value in future development.

This paper introduces the design and implementation of AR navigation applications (APPs) and the cloud algorithm in detail, covering three aspects: navigation map generation, the cloud navigation algorithm, and the client design. Combined with specific cases, this paper details the process of panoramic data acquisition and processing and the alignment of the point cloud map [1] with the computer-aided design (CAD) map in the navigation map generation tool, and introduces the path planning algorithm and path correction algorithm in the cloud navigation algorithm. In terms of localization and AR path rendering, the client design method is introduced in detail, and finally, a running example of an AR navigation APP is given.

    2 Basic Framework of Scene Visual Perception

Similar to humans, machines perceive and understand the environment mostly through visual information. In recent years, the development of 3D visual perception methods has provided great help for building models of the real physical world. For various application scenarios, there are currently vision algorithms with commercial application capabilities, including face recognition, liveness detection, 3D reconstruction, simultaneous localization and mapping (SLAM), gesture recognition, behavior analysis, augmented reality, and virtual reality.

Scene visual perception applied to navigation mainly includes 3D reconstruction and SLAM. These steps can be regarded as the process of building a visual map. Visual map-based localization usually includes steps such as visual map construction and update, image retrieval, and fine localization, among which the visual map is the core of the method. Depending on whether the image frames have accurate prior pose information, the process of constructing a visual map can be divided into prior pose-based construction methods and non-prior pose methods. In prior pose-based construction methods, the prior pose of an image frame can be derived from high-precision LiDAR data synchronized and calibrated with the camera, which is common in high-precision acquisition vehicles in the field of autonomous driving. In small-scale scenes, especially indoors, the prior pose can also be obtained from visual motion capture systems such as Vicon and OptiTrack. Non-prior pose methods adopt offline extraction of feature points and offline optimization of poses and scene structures, similar to structure-from-motion (SfM). The constructed geometric visual map generally includes image frames, feature points and descriptors, 3D points, the correspondence between image frames, and the correspondence between 2D points and 3D points. Because the real scene changes over time, the constructed visual map also needs to be updated synchronously to detect new and expired content in time and apply the corresponding changes to the visual map. Once the prior visual map is obtained, the image retrieval and fine localization steps can be performed on newly acquired image frames to complete localization. In the visual map-based localization framework, sensor information such as the inertial measurement unit (IMU), GPS, and wheel odometer can also be fused.
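To make the contents of such a geometric visual map concrete, the sketch below models them as plain Python data structures. All class and field names are illustrative assumptions, not taken from any particular system.

```python
# Minimal sketch of a geometric visual map: frames, features/descriptors,
# 3D points, frame-to-frame covisibility, and 2D-3D correspondences.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Frame:
    frame_id: int
    pose: np.ndarray          # 4x4 camera-to-world transform (prior or optimized)
    keypoints: np.ndarray     # Nx2 pixel coordinates of feature points
    descriptors: np.ndarray   # NxD feature descriptors

@dataclass
class VisualMap:
    frames: dict = field(default_factory=dict)     # frame_id -> Frame
    points3d: dict = field(default_factory=dict)   # point_id -> (3,) xyz
    covisible: dict = field(default_factory=dict)  # frame_id -> set of frame_ids
    obs: dict = field(default_factory=dict)        # (frame_id, kp_idx) -> point_id
```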

    3 Introduction to Key Technologies of Scene Visual Perception

    3.1 3D Reconstruction

Accurate and robust 3D reconstruction methods are crucial to visual localization. The purpose of 3D reconstruction is to obtain the geometry and structure of an object or a scene from a set of images. SfM is one way to achieve 3D reconstruction and is mainly used in the stage of building a sparse point cloud. A complete 3D reconstruction process usually also includes a multi-view stereo (MVS) step to achieve dense reconstruction. SfM is mainly used for mapping and recovering the structure of the scene. According to differences in the image data processing flow, SfM can be divided into four categories: incremental SfM, global SfM, distributed SfM, and hybrid SfM. Among them, distributed SfM and hybrid SfM are usually used for large-scale reconstruction and are built on incremental SfM and global SfM. Incremental SfM mainly includes two steps. The first step is to find initial correspondences, and the second step is incremental reconstruction. The former aims to extract robust and well-distributed features to match image pairs, and the latter estimates image poses and 3D structure through image registration, triangulation, bundle adjustment (BA), and outlier removal. Outliers among the initial correspondences usually need to be removed by geometric verification methods. Generally, when the number of newly recovered image frames reaches a certain proportion, global BA is required. Thanks to incremental BA processing, incremental SfM usually has higher accuracy and better robustness. However, as the number of images increases, the scale of BA processing grows, leading to disadvantages such as low efficiency and large memory usage. Additionally, incremental SfM suffers from cumulative drift as images are incrementally added. Typical SfM frameworks include Bundler and COLMAP.
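As a concrete illustration of the two steps above (initial correspondence search with geometric verification, followed by the two-view seed of incremental reconstruction), the following is a minimal sketch using OpenCV. The OpenCV calls are real APIs; the overall pipeline, the 0.75 ratio-test threshold, and the assumption of a known intrinsic matrix K are illustrative choices, not the method of any cited framework.

```python
import cv2
import numpy as np

def init_two_view(img1, img2, K):
    # Step 1: initial correspondence search with SIFT + ratio-test matching.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    pairs = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.75 * n.distance]          # Lowe ratio test
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Geometric verification: RANSAC on the essential matrix removes outliers.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Step 2: triangulate inlier matches to bootstrap the sparse 3D structure.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl = mask.ravel().astype(bool)
    X = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    return R, t, (X[:3] / X[3]).T       # relative pose and Nx3 sparse points
```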

CAO et al. [2] proposed a fast and robust feature tracking method for 3D reconstruction using SfM. First, to save computational costs, a feature clustering method was used to cluster a large set of images into small ones to avoid some wrong feature matching. Second, a joint search set method was used to achieve fast feature matching, which could further save the computational time of feature tracking. Third, a geometric constraint method was proposed to remove outliers in the trajectories produced by feature tracking. The method can cope with the effects of image distortion, scale changes, and illumination changes. LINDENBERGER et al. [3] directly aligned low-level image information from multiple views, optimized feature point locations using deep feature metrics after feature matching, and performed BA with similar deep feature metrics during incremental reconstruction. In this process, a convolutional network is used to extract a dense feature map from each image; the positions of the feature points are then adjusted according to the sparse feature matches to obtain consistent 2D observations of the same 3D point in different images, and the SfM reconstruction is completed according to these adjustments. The BA optimization residual in the reconstruction process changes from the reprojection error to a feature-metric error. This improvement is robust to large detection noise and appearance changes, as it optimizes feature-metric errors based on dense features predicted by neural networks.

The cumulative drift problem can be addressed by global SfM. From the fundamental and essential matrices obtained in the image matching process, the relative rotations and relative translations can be recovered through decomposition. Using the relative rotations as constraints, the global rotations can be recovered, and the global translations can then be recovered using the global rotation and relative translation constraints. Since the global BA does not require multiple rounds of optimization, global SfM is more efficient. However, since the relative translation constraints only constrain the translation direction and the scale is unknown, translation averaging is difficult to solve. In addition, the translation averaging process is sensitive to outliers, so global SfM is limited in practical applications.

    3.2 Image Matching

How to extract robust, accurate, and sufficient image correspondences is a key issue in 3D reconstruction. With the development of deep learning, learning-based image matching methods have achieved excellent performance. A typical image matching process usually includes three steps: feature extraction, feature description, and feature matching.

Detection methods based on deep convolutional networks search for interest points by constructing response maps, including supervised methods [4–5], self-supervised methods [6–7], and unsupervised methods [8–9]. Supervised methods use anchors to guide the training process of the model, but the performance of the model is likely to be limited by the anchor construction method. Self-supervised and unsupervised methods do not require human-annotated data and instead focus on geometric constraints between image pairs. Feature descriptors use local information around interest points to establish correct correspondences between image features. Owing to their information extraction and representation capabilities, deep learning techniques have also achieved good performance in feature description. The deep learning-based feature description problem is usually a supervised learning problem, that is, learning a representation such that matched features are as close as possible in the measurement space and unmatched features are as far apart as possible [10]. Learning-based descriptors largely avoid the need for human experience and prior knowledge. Existing learning-based feature description methods fall into two categories, namely metric learning [11–12] and descriptor learning [13–14]; the difference lies in the output of the descriptor. Metric learning methods learn metric discriminants for similarity measurement, while descriptor learning generates descriptor representations from raw images or image patches.
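The "matched close, unmatched far" objective above is commonly instantiated as a triplet margin loss. Below is a minimal NumPy sketch; the margin value and batch layout are illustrative assumptions.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """anchor/positive/negative: (B, D) batches of descriptors."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)   # matched-pair distance
    d_neg = np.linalg.norm(anchor - negative, axis=1)   # unmatched-pair distance
    # Pull matched descriptors together, push unmatched apart by >= margin.
    return np.maximum(0.0, d_pos - d_neg + margin).mean()
```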

Among these methods, SuperGlue [14] proposed a network capable of performing feature matching and filtering outliers simultaneously; its feature matching is achieved by solving a differentiable optimal transport problem. The loss function is constructed with a graph neural network, and a flexible content aggregation mechanism based on attention enables SuperGlue to simultaneously perceive the underlying 3D scene and perform feature matching. LoFTR [15] uses a transformer module with self-attention and cross-attention layers to process dense local features extracted by a convolutional network. Dense matches are first extracted at a low feature resolution (1/8 of the image dimension), from which high-confidence matches are selected and refined to sub-pixel precision at high resolution using correlation-based methods. In this way, the large receptive field of the model enables the transformed features to reflect context and location information, and prior matching is achieved through multiple self-attention and cross-attention layers. Many methods integrate feature detection, feature description, and feature matching into a single pipeline in an end-to-end manner, which is beneficial for improving matching performance.

    3.3 Visual Localization

Visual localization is the problem of estimating the 6-DoF camera pose from which a given image was taken, relative to a reference scene representation. Classical approaches to visual localization are structure-based: they rely on 3D reconstructions of the environment (e.g., point clouds) and use local feature matching to establish correspondences between query images and the 3D map. Image retrieval can be used to reduce the search space by considering only the most similar reference images instead of all possibilities. Another approach is to directly interpolate the pose from the reference images or estimate the relative pose between the query and a retrieved reference image, which does not rely on 3D reconstruction results. Scene point regression methods directly obtain the correspondence between 2D pixel positions and 3D points using a deep neural network (DNN) and compute camera poses in a way similar to structure-based methods. Modern scene point regression methods benefit from 3D reconstruction during training but do not rely on it. Absolute pose regression methods use a DNN to estimate poses end-to-end. These methods differ in generalization ability and localization accuracy. Furthermore, some methods rely on 3D reconstruction, while others only require pose-labeled reference images. The advantage of using 3D reconstructions is that the resulting poses can be very accurate; the disadvantage is that these 3D reconstructions are sometimes difficult to obtain and even more difficult to maintain. For example, if the environment changes, they need to be updated.
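In the structure-based approach, once 2D-3D correspondences between a query image and the map have been established by feature matching, the 6-DoF pose is typically estimated with PnP inside a RANSAC loop. A minimal sketch with OpenCV follows; the reprojection-error threshold and iteration count are illustrative values.

```python
import cv2
import numpy as np

def localize_query(pts3d, pts2d, K):
    """pts3d: Nx3 map points matched to pts2d: Nx2 query keypoints; K: intrinsics."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None,
        iterationsCount=1000, reprojectionError=3.0)
    if not ok:
        return None                      # localization failed
    R, _ = cv2.Rodrigues(rvec)           # rotation vector -> 3x3 matrix
    center = (-R.T @ tvec).ravel()       # camera position in world coordinates
    return R, tvec, center, inliers
```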

A typical structure-based method is the general visual localization pipeline proposed in Ref. [17]. Through a hierarchical localization approach, the pipeline simultaneously predicts local features and global descriptors for accurate 6-DoF localization. It adopts a coarse-to-fine paradigm, first performing global retrieval to obtain location hypotheses and then matching local features within these candidate locations. This hierarchical approach saves runtime for real-time operation; the authors also propose a hierarchical feature network (HF-Net) that jointly estimates local and global features, thereby maximizing shared computation, and compress the model through multi-task distillation.

4 AR Navigation Based on Scene Visual Perception

AR navigation usually works as follows: 1) The real-world view is obtained from the user's point of view; 2) location information is obtained and used to track the user; 3) virtual-world information is generated based on the real-world view and the location information; 4) the generated virtual-world information is registered into the real-world view and displayed to the user, creating augmented reality. The main challenges of AR navigation are how to integrate the virtual and real worlds and how to design and present the navigation interface. Registration is the process of correctly aligning virtual information with the real world, which gives the user the illusion that the virtual and the real coexist. For AR navigation, accurate registration is critical: registration errors can cause confusion when the orientation changes rapidly, so even small offsets in registering virtual information can be harmful. In an AR navigation system, the display should not interfere with the user's movement. The display technology used in augmented reality is known as video see-through. A video see-through display places a digital screen between the real world and the user, through which the user can see both the real world and augmented information: a camera captures the real-world view, which is then combined with the augmented information and displayed on the screen. Typical examples include head-mounted displays with cameras and smartphone displays.

On the basis of scene visual perception, this paper designs an AR navigation APP developed with Unity and ARCore. Its overall framework is shown in Fig. 1. The system consists of three parts: the navigation map generation tool, the cloud navigation algorithm, and the terminal navigation APP design.

The navigation map generation tool works offline and includes scene panoramic video capture, dense point cloud generation, alignment of the point cloud with the plane CAD map, navigation map management, and other functions. The map generated by the tool is stored in the cloud. In addition, the cloud is responsible for providing navigation algorithms to the terminal, including visual localization methods, path planning algorithms, path correction algorithms, floor judgment algorithms, and cross-floor guidance algorithms. When users request a navigation activity with the terminal APP, they first select the map of their current location, and the cloud issues the corresponding navigation map according to the user's selection. After selecting the starting point and the ending point, the user requests the navigation service from the cloud; local real-time localization, display of the global path and current position, and AR path rendering are realized in the local APP.

    4.1 Panoramic Data Collection and Processing

▲Figure 1. Overall framework of an AR navigation application (APP)

This paper uses a panoramic camera to capture video for collecting mapping data. Rather than rotating a camera around its optical center, the panoramic camera can capture multiple images of a scene from different viewpoints, from which stereoscopic information about the scene can be calculated. The stereo information is then used to create a 3D model of the scene, from which arbitrary views can be computed. This approach is beneficial for the 3D reconstruction of large-scale scenes. The dense reconstruction results of the proposed approach on the building dataset are shown in Fig. 2.

Taking a large shopping mall as an example, the processing and 3D reconstruction of the data collected from the panoramic video goes through the following steps (a minimal code sketch of step 2 follows the list):

1) Shoot a panoramic video of the scene, covering the shooting area as completely as possible;

2) Extract frames from the panoramic video to obtain panoramic images, and segment each panoramic image according to the field of view (FOV);

3) Perform sparse point cloud reconstruction for each floor, and output all camera parameters and the sparse 3D point cloud;

4) Complete the single-layer dense point cloud reconstruction;

    5) Integrate multiple layers of dense point clouds to obtain a complete 3D structure of the scene.
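As referenced above, the following is a minimal sketch of step 2, assuming an equirectangular panoramic video readable by OpenCV. The sampling stride, the 90° FOV, and the simple horizontal cropping (rather than a proper pinhole reprojection) are illustrative simplifications.

```python
# Sketch of step 2: sample frames from the panoramic video, then cut each
# equirectangular frame into horizontal slices by field of view.
import cv2

def split_panorama_video(video_path, every_n=30, fov_deg=90):
    cap = cv2.VideoCapture(video_path)
    views, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            h, w = frame.shape[:2]
            # Equirectangular frames span 360 degrees across their width.
            step = int(w * fov_deg / 360.0)
            for x in range(0, w - step + 1, step):
                views.append(frame[:, x:x + step])
        idx += 1
    cap.release()
    return views
```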

    4.2 Alignment of Point Cloud Map and CAD

The point cloud obtained in Section 4.1 is expressed in the camera coordinate system and must be aligned with the world coordinate system if it is to be used for navigation tasks. This paper takes the CAD map as the world coordinate system, because CAD drawings provide accurate position and scale information. The problem is thus transformed into the alignment of the point cloud map and the plane CAD map. The specific process is as follows:

1) The point cloud is dimensionally reduced and projected onto the XoY plane to form a plane point cloud map, as shown in Fig. 3.

2) Marker points (such as walls and other points that are easy to distinguish) and their corresponding points are found on the plane point cloud map and the CAD map, respectively.

3) Alignment is completed using the scale information provided by the CAD map, and the rotation and displacement matrices are output.

▲Figure 2. Result of dense reconstruction: (a) photometric depth map, (b) photometric normal map, (c) geometric depth map, (d) geometric normal map, and (e) dense reconstruction effect

Once the point cloud X is sampled, it can be mapped to a 2D plane by simply removing the z coordinates. The problem is transformed into finding the mapping between (Xx, Xy) and pixels (u, v), where (Xx, Xy) is the set of 2D coordinates (x, y) extracted from the point cloud X. It is worth noting that (x, y) are usually floating-point values, while pixel coordinates (u, v) are non-negative integers. Therefore, (x, y) needs to go through a scale, rotation, and rounding transformation.
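A minimal sketch of this projection is shown below, assuming a metric point cloud; the pixels-per-meter resolution parameter is a hypothetical knob, and the rotation component is omitted for brevity.

```python
# Drop z, then shift, scale, and round float (x, y) into integer pixels (u, v),
# rasterizing the occupied cells into a 2D plane point cloud map.
import numpy as np

def cloud_to_image(points_xyz, pixels_per_meter=20):
    xy = points_xyz[:, :2]                   # dimensional reduction to XoY
    origin = xy.min(axis=0)                  # shift so all coordinates >= 0
    uv = np.floor((xy - origin) * pixels_per_meter).astype(int)
    h, w = uv[:, 1].max() + 1, uv[:, 0].max() + 1
    img = np.zeros((h, w), dtype=np.uint8)
    img[uv[:, 1], uv[:, 0]] = 255            # mark occupied cells
    return img
```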

Once the plane point cloud map is obtained, it can be aligned with the CAD map through an affine transformation. To determine the affine matrix, at least three pairs of corresponding points are required. To reduce errors, this paper selects multiple pairs of corresponding points in the point cloud map and the CAD map and uses the least squares method to achieve alignment. It is worth noting that corresponding points should be selected on parts that are easy to identify, such as walls and other fixed objects with clear structural characteristics. Fig. 5 shows the result of aligning a point cloud map with a CAD map. After the alignment, the position coordinates of the point cloud in the world coordinate system can be obtained, which benefits the subsequent localization and navigation tasks. The results can be saved separately for each scene; the saved content includes the scene pose, corresponding geographic information, the camera model, and other information, forming a navigation digital map.
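The least-squares fit over multiple correspondence pairs can be written directly as a linear system, as sketched below. This is an illustrative formulation; OpenCV's cv2.estimateAffine2D would be a robust, RANSAC-based alternative.

```python
# Least-squares estimate of the 2D affine transform from N >= 3 manually
# picked correspondences, mapping the point-cloud map onto the CAD map.
import numpy as np

def fit_affine(src, dst):
    # src, dst: Nx2 corresponding points. Each pair contributes two equations:
    #   u = a*x + b*y + tx,   v = c*x + d*y + ty
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src; A[0::2, 4] = 1
    A[1::2, 2:4] = src; A[1::2, 5] = 1
    b = dst.reshape(-1)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[p[0], p[1], p[4]],
                     [p[2], p[3], p[5]]])    # 2x3 affine matrix M

# Usage: transformed = src @ M[:, :2].T + M[:, 2]
```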

▲Figure 3. An example of 2D point cloud map generation

    4.3 Cloud Navigation Algorithm

When a user requests a navigation activity with the terminal APP, they first select the map corresponding to their current location, and the cloud issues the corresponding navigation map according to the selection. After selecting the destination, the user requests the navigation service from the cloud and at the same time uploads an image of the current scene. The cloud then invokes the visual localization algorithm to determine the user's current position, which serves as the starting point. After obtaining the coordinates of the starting point and the ending point, the cloud calls the path planning algorithm to obtain the sequence of navigation path points and sends it to the terminal APP for AR rendering. During travel, the user is positioned locally through ARCore. However, this method accumulates errors after the user travels a certain distance, and since the user may deviate from the recommended path, a path correction algorithm needs to be implemented through the cloud to direct the user back to the correct path.

Following common practice in the industry, the path planning algorithm designed in this paper does not need to provide a path from any point to any other point. The path planning involved in this paper only needs to provide a path from any point (the user's location or a user-selected location) to a specific point (in a specified end-point set). Therefore, the path planning problem in this paper can be regarded as a shortest path problem between the vertices of a directed graph. The basic flow of the proposed path planning algorithm is as follows:

1) The passable area is determined from the point cloud map, and waypoints are selected within the passable area.

2) The waypoints and the destination points (the selected end-points) form a graph structure.

3) The shortest paths among all vertices in the graph are found through a search algorithm.

Building the waypoints and destination points into a graph structure forms a road network. In this process, it is necessary to determine the world coordinates of the waypoints and destination points and to mark the connection relationships between points, forming a graph structure of the road network that is stored as an adjacency list. Since the goal is to find the shortest paths among all vertices in the graph, this constitutes an all-pairs shortest paths (APSP) problem, whose standard solution is the Floyd-Warshall algorithm. After the shortest paths among all points are obtained, the results are saved in the cloud for each scene, so that in practical applications there is no need to compute the planned path online, which would be time-consuming; only a retrieval operation is performed.
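A minimal sketch of this offline APSP step follows: Floyd-Warshall over the adjacency-list road network, with next-hop bookkeeping so that the waypoint sequence can later be retrieved by a simple lookup. The graph encoding is an assumption for illustration.

```python
# Floyd-Warshall all-pairs shortest paths over an adjacency list:
# adj maps node -> {neighbor: edge_length}. Every node, including pure
# destinations, must appear as a key (possibly with an empty dict).
def floyd_warshall(adj):
    nodes = list(adj)
    INF = float("inf")
    dist = {u: {v: (0 if u == v else INF) for v in nodes} for u in nodes}
    nxt = {u: {v: None for v in nodes} for u in nodes}
    for u, nbrs in adj.items():
        for v, w in nbrs.items():
            dist[u][v], nxt[u][v] = w, v
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]      # first hop of the i -> j path
    return dist, nxt

def shortest_path(nxt, u, v):
    # Reconstruct the waypoint sequence stored in the cloud for retrieval.
    if nxt[u][v] is None:
        return []
    path = [u]
    while u != v:
        u = nxt[u][v]
        path.append(u)
    return path
```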

During the user's journey, the local positioning provided by ARCore gradually accumulates error as the distance traveled increases. At the same time, the user may deviate from the recommended navigation path for internal or external reasons. Therefore, the cloud needs to provide a path correction algorithm to guide the user back to the navigation path (the correct path). The specific workflow of the path correction algorithm is as follows:

    1) The user uploads the current scene image while traveling.

2) The cloud determines, according to the positioning algorithm, whether the user has deviated from the recommended navigation path.

3) If the user's deviation is small, the user is guided back to the recommended navigation path through the navigation arrows of the terminal APP. If the deviation is too large, the path is re-planned based on the user's current position.

The path correction process is essentially a verification of the real-time local positioning information fed back by the terminal. When the error exceeds a distance threshold τ, the path correction function is activated. In practical applications, the distance threshold τ is usually set between 50 cm and 200 cm. If the threshold is too small, it amplifies the influence of visual positioning errors; if it is too large, it not only reduces navigation accuracy but also inconveniences users.
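A minimal sketch of the deviation check is given below, assuming 2D positions in meters and at least two waypoints. The default τ of 1.0 m lies within the 50–200 cm range stated above, while the re-planning factor is an illustrative assumption, not a value from this paper.

```python
# Distance from the user's estimated position to the nearest segment of the
# planned path, compared against the threshold tau.
import numpy as np

def point_to_segment(p, a, b):
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def check_deviation(position, waypoints, tau=1.0, replan_factor=5.0):
    d = min(point_to_segment(position, waypoints[i], waypoints[i + 1])
            for i in range(len(waypoints) - 1))
    if d <= tau:
        return "on_path"                       # within tolerance, no action
    return "replan" if d > replan_factor * tau else "guide_back"
```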

    4.4 AR Systems

AR systems have three basic features: the combination of real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects. In this way, AR changes people's continuous perception of the real environment and provides an immersive experience by integrating virtual content into that perception. In AR navigation APPs specifically, users obtain real-world information from their smartphones (through the phone camera), and AR technology adds virtual navigation paths to the smartphone interface, enhancing the user's perception of the real environment for a better navigation experience. From the user's point of view, a complete AR navigation session includes the following process: 1) The user selects the current scene and obtains the navigation map delivered by the cloud; 2) the user selects the destination according to the navigation map and requests the cloud navigation service; 3) the user follows the AR path rendered on the terminal interface to the destination. Due to network bandwidth limitations, users cannot obtain real-time localization by sending the current scene image to the cloud in real time. Therefore, an ARCore-based method is used to provide real-time localization. However, this method accumulates errors after the user travels a certain distance, and since users may deviate from the recommended path, path correction needs to be implemented through the correction algorithm to guide users back to the correct path. Fig. 4 shows the flow of the AR navigation APP and the AR rendering.

ARCore is an AR application platform provided by Google, which can easily be combined with 3D engines such as Unreal and Unity. ARCore provides three main capabilities: motion tracking, environment understanding, and light estimation. Motion tracking enables the phone to know and track its position relative to the world; environment understanding enables the phone to perceive the environment, such as the size and location of detectable surfaces; and light estimation allows the phone to obtain the current lighting conditions of the environment. Localization can be achieved using ARCore's motion-tracking capability.

The motion-tracking function of ARCore is realized by visual-inertial odometry (VIO). VIO includes two parts: a visual tracking system and an inertial navigation system. The visual tracking system matches pixels across camera frames to track the user's pose. The inertial navigation system realizes position and attitude tracking through an IMU, which usually consists of an accelerometer and a gyroscope. The outputs of the two systems are combined through a Kalman filter to determine the final pose of the user. The local positioning function provided by ARCore can track the user's position in real time, but the error in ARCore's inertial navigation system accumulates over time; as the distance traveled and the elapsed time increase, the tracked position drifts. In practice, we find that after a user travels about 50 m, the localization provided by ARCore begins to deviate. At this point, it is necessary to relocalize through the visual localization algorithm and correct the path.
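The predict/correct structure of such a filter can be illustrated in one dimension: IMU dead reckoning grows the uncertainty (drift), and a visual fix shrinks it again. This is a deliberately simplified sketch of the fusion principle, not ARCore's actual multi-state estimator.

```python
# 1-D Kalman-filter illustration of VIO-style fusion.
class Kalman1D:
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.25):
        self.x, self.p, self.q, self.r = x0, p0, q, r  # state, variance, noises

    def predict(self, imu_delta):
        self.x += imu_delta        # dead-reckoned motion from the IMU
        self.p += self.q           # uncertainty grows -> accumulated drift

    def update(self, visual_fix):
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (visual_fix - self.x)   # visual relocalization pulls back
        self.p *= (1.0 - k)                   # uncertainty shrinks after the fix
```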

▲Figure 4. Augmented reality (AR) navigation application (APP) and AR rendering result

On the basis of the previous work, the AR navigation APP can obtain the user's current position and the path point sequence of the planned path from the cloud. The next question is how to render the path point sequence in AR on the mobile phone interface. From the perspective of user experience, the AR markers must not block the user's line of sight and must provide obvious guidance. Therefore, in the actual rendering process, this paper chooses to render the AR markers close to the ground. The environment understanding component of ARCore provides plane detection capabilities; in fact, ARCore requires all virtual objects to be anchored to planes for rendering. After ARCore performs plane detection, the AR markers can be placed on the ground. The placement of AR markers is achieved by ray casting (hit testing). The principle of ray casting is to determine whether a ray emitted from the camera position toward a position in the 3D world collides with an object; in this way, the collision object and its position can be detected. By performing collision detection against the planes in the scene, the planes can be identified and AR markers can be placed. This paper adopts two kinds of AR markers: a navigation guidance arrow, which indicates the forward direction, and an end prompt sign, which reminds the user that the end-point has been reached. Fig. 4 shows the actual workflow of the AR navigation APP and the rendering effect of the AR markers. In the figure, from left to right: the user selects the destination (an elevator entrance), the navigation guidance arrow is rendered, the user follows the arrow, and the navigation ends at the end prompt sign.
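The geometry behind this ray test is a ray-plane intersection. The sketch below shows the underlying calculation with illustrative names; it is not the ARCore/Unity hit-testing API itself.

```python
# Intersect a camera ray with a detected ground plane to find where to place
# an AR marker. The plane is given by a point on it and its normal vector.
import numpy as np

def ray_plane_hit(ray_origin, ray_dir, plane_point, plane_normal):
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-6:
        return None                      # ray parallel to the plane, no hit
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    if t < 0:
        return None                      # plane is behind the camera
    return ray_origin + t * ray_dir      # world position for the AR marker
```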

    5 Conclusions and Outlook

▲Figure 5. An example of a 2D point cloud map aligned with a CAD map

This paper analyzes and introduces related technologies in the field of scene visual perception, based on which we implement AR navigation. In practical applications, there are still problems to be solved [18–19]. For example, this paper adopts a structure-based localization framework, whose advantage is that it can effectively handle large-scale scenes with high localization accuracy; however, if the environment changes, the 3D structure needs to be re-adjusted to achieve re-registration of the point clouds. The alignment of the point cloud map and the plane CAD map shown in Fig. 5 still requires manual selection of corresponding points, which is not conducive to large-scale applications, so follow-up work should study how to automate this process. The localization method proposed in this paper adopts a pure vision solution; in the future, other sensor data such as IMU, depth camera, or LiDAR data could be fused to further improve localization and navigation performance. In addition, most current visual localization algorithms are not scene-independent and usually need to train different models on different datasets (such as separate models for indoor and outdoor data), which brings difficulties to practical applications. For example, in the AR navigation process, image feature matching is usually performed in the cloud. Given the diversity of user scenes, a scene-dependent localization algorithm will lack generalization ability, leading to poor localization performance. Therefore, for AR navigation, it is particularly important to enhance the generalization ability of localization algorithms and achieve scene-independent visual localization.
