Title

Research on Active Scene Flow Estimation Techniques for Autonomous Navigation in Complex Scenes

Alternative Title
Active Scene Flow Estimation for Autonomous Navigation in Complex Scenes
Name
王帅军
Name in Pinyin
WANG Shuaijun
Student ID
11849555
Degree Type
Doctoral
Degree Discipline
0812 Computer Science and Technology
Subject Category / Professional Degree Category
08 Engineering
Supervisor
郝祁
Supervisor's Affiliation
Department of Computer Science and Engineering
Thesis Defense Date
2024-06
Thesis Submission Date
2024-06-19
Degree-Granting Institution
Harbin Institute of Technology
Degree-Granting Place
Harbin
Abstract

Autonomous navigation systems comprise three main modules: global planning, environment perception, and local planning. They take a global goal, perception signals, and a navigation map as input, and output the control commands that drive the autonomous vehicle. The global planning module produces a coarse-grained motion path; the environment perception module estimates the real-time ego-vehicle pose, the motion of surrounding objects, and the drivable area; the local planning module computes the ego-vehicle pose and a reasonable fine-grained motion trajectory, and generates the corresponding motion control commands. Compared with other perception techniques, scene flow estimation computes the motion of every 3D point in the scene, which helps localize the ego-vehicle and detect moving targets precisely in real time, estimate the drivable area and the trajectories of other objects accurately, generate reasonable motion control commands, and improve system safety and decision-making efficiency. However, complex environments and point cloud down-sampling cause missing corresponding points in the point cloud data, which increases scene flow estimation error; improper up-sampling operations within the estimation pipeline add further error. The challenges currently facing scene flow estimation are therefore missing corresponding points in data from complex environments and algorithms that insufficiently account for spatiotemporal correlations and point cloud data quality. Improving estimation accuracy requires taking these factors into account and improving the algorithms accordingly.
Active scene flow estimation addresses this by actively selecting, in real time, reasonable ego-vehicle observation poses as local targets and reaching them with motion planning algorithms, thereby improving observation data quality, reducing missing correspondences between adjacent point cloud frames, and, combined with scene flow estimation algorithms, improving real-time estimation accuracy. Research on active scene flow estimation, however, faces the following challenges: (1) the lack of an integrated framework for dataset construction and for algorithm training and validation; (2) how to optimize scene flow estimation in large-scale environments; (3) how to optimize scene flow estimation in small-scale environments; and (4) how to improve point cloud acquisition quality through intelligent local target selection and ego-vehicle motion planning. To solve these problems, active scene flow estimation must be improved from two perspectives: active observation of point cloud data and real-time scene flow estimation. This thesis accordingly studies four topics: the dataset and algorithm validation framework required for active scene flow estimation, large-scale scene flow estimation algorithm design, small-scale scene flow estimation algorithm design, and active observation of point cloud data.
First, we propose a highly adaptable integrated framework, built on an autonomous driving simulator, for constructing and annotating active scene flow datasets and for training and validating scene flow estimation algorithms. The framework automatically collects diverse data through user-defined traffic scenarios, meeting the training and validation needs of active scene flow estimation algorithms; it resolves the inability of active point cloud observation techniques to be validated interactively, satisfying their requirement for real-time data collection and algorithm validation on reproducible scenes; and its automatic annotation method, based on object-label consistency, provides accurate ground truth for algorithm evaluation. Experimental results show that the proposed dataset construction and validation framework satisfies the requirements for data diversity, annotation accuracy, and estimation algorithm validation.
Second, we propose a scene flow estimation method based on spatiotemporal feature consistency to counter the effects of missing corresponding points, complex motion patterns, and large motion scales in large-scale complex scenes. First, after a spatial transformation of the point clouds, a cross-attention mechanism captures feature consistency along the temporal dimension, and contextual point cloud information improves the accuracy of bidirectional feature matching. Second, feature consistency of multi-scale scene flow along the spatial dimension extracts the spatial relations among points, ensuring robust scene flow features. Finally, a neural network with a spatiotemporally consistent bidirectional attention module, together with the parallel computing capability of the Transformer attention mechanism, improves estimation accuracy while saving computation time. Experimental results show that the proposed method not only resolves the factors that degrade accuracy in large-scale scene flow estimation but also reduces computation time.
Third, we propose a scene flow estimation method based on local feature consistency to address missing corresponding points in small-scale scenes, neglected up-sampling errors, and the insensitivity of large-scale scene flow estimation to small local motions. First, introducing the MLP-Mixer operation and a multi-scale, bidirectional, symmetric feature-similarity matching-cost estimation strategy reduces the feature association problems caused by missing corresponding points. Second, fusing the semantic and geometric information of scene flow features, and considering local neighborhood feature correlations both between adjacent frames and within the same frame, improves the quality of scene flow up-sampling. Finally, a network structure that alternately updates matching-cost estimation and scene flow estimation improves overall estimation accuracy. Experimental results show that the proposed method improves scene flow estimation accuracy and feature quality, as well as the accuracy of the up-sampled scene flow.
Fourth, we propose an active observation method based on active local target selection and intelligent motion planning to mitigate the effect of poor-quality point clouds on scene flow estimation. First, using future point cloud prediction and hidden point removal, combined with road information, we build a legality-checking scheme for ego-vehicle poses and design a reliable drivable area estimation method. Second, we design a reasonable scene similarity metric that comprehensively evaluates the difference between two point cloud frames and selects the optimal local target. Finally, with an adaptive perception confidence network and reward function, we propose a reinforcement learning-based motion planning method that ensures the ego-vehicle safely reaches the local target through the drivable area. Experimental results show that the proposed active local target selection technique effectively improves point cloud quality and scene flow estimation performance, and that the proposed motion planning method achieves a high obstacle avoidance success rate, ensuring the autonomous navigation system reaches local targets safely.
In summary, this research covers dataset construction and estimation algorithm validation, scene flow estimation algorithm design, and active point cloud observation, ensuring both the theoretical feasibility and the practical reliability of the proposed algorithms.

Other Abstract

Autonomous navigation systems usually consist of three modules: global planning, environment perception, and local planning. The global planning module formulates a coarse-grained motion path for the autonomous vehicle. The environment perception module estimates the ego-vehicle's position in real time, the motion of surrounding objects, and the drivable area. The local planning module focuses on selecting appropriate local waypoints, refining the local motion path, and generating the corresponding motion control commands. In contrast to other environment perception methods, the scene flow (SF) estimation technique estimates a 3D point-wise motion vector from point cloud measurements of the environment. This technique facilitates precise ego-vehicle localization, real-time detection of moving targets, accurate estimation of the drivable area and the ego-vehicle's motion trajectory, output of reasonable motion control commands, and enhanced system safety and decision-making efficiency. However, in the presence of hazard factors such as object occlusions, fast-moving entities, and the down-sampling of point cloud data within perception methods, many corresponding points between successive point cloud frames may be lost, leading to errors in SF estimation. Improper up-sampling operations in SF estimation can also cause estimation errors.
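As a concrete illustration of the quantity being estimated and evaluated here, the following sketch computes per-point scene flow error with the end-point error (EPE) metric commonly used in the SF literature. This is a minimal sketch under assumed conventions: the array shapes, the 0.05 m / 5% accuracy thresholds, and the function names are illustrative, not taken from the thesis.

```python
import numpy as np

def scene_flow_epe(flow_pred, flow_gt):
    """Mean 3D end-point error (EPE) between predicted and ground-truth flow.

    flow_pred, flow_gt: (N, 3) arrays of per-point motion vectors, i.e. the
    displacement of each 3D point between two consecutive point cloud frames.
    """
    err = np.linalg.norm(flow_pred - flow_gt, axis=1)  # per-point error in meters
    return err.mean()

def flow_accuracy(flow_pred, flow_gt, abs_th=0.05, rel_th=0.05):
    """Fraction of points with error below abs_th meters or rel_th relative error."""
    err = np.linalg.norm(flow_pred - flow_gt, axis=1)
    mag = np.linalg.norm(flow_gt, axis=1) + 1e-8       # avoid division by zero
    return np.mean((err < abs_th) | (err / mag < rel_th))

# Toy usage: 1000 points whose predicted flow is the ground truth plus small noise.
rng = np.random.default_rng(0)
gt = rng.normal(size=(1000, 3)) * 0.5
pred = gt + rng.normal(size=(1000, 3)) * 0.02
print(scene_flow_epe(pred, gt), flow_accuracy(pred, gt))
```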
Many existing methodologies inadequately exploit the spatiotemporal correlations inherent in point clouds and neglect the impact of data quality on the precision of SF estimation. Consequently, improving existing methods with these factors in mind is imperative for increasing the accuracy and robustness of SF estimation in complex traffic environments. Active SF estimation can enhance point cloud quality and reduce the occurrence of missing corresponding points between adjacent frames of point cloud data; it achieves this by judiciously selecting reasonable ego-vehicle poses as local targets and employing motion planning methods, thereby improving the accuracy of real-time SF estimation. However, developing a robust active SF estimation method for dynamic environments must address the following technical challenges: (1) how to build an active SF dataset for verification, (2) how to optimize real-time SF estimation within a large-scale view, (3) how to optimize real-time SF estimation within a limited view, and (4) how to enhance SF quality through active sensing techniques.
To address the above issues, the active SF estimation technique must be improved from two perspectives: active observation of point cloud data and real-time SF estimation. This thesis undertakes research in four key areas related to active SF estimation: (1) the framework for dataset construction and method validation, (2) the large-scale SF estimation method, (3) the small-scale SF estimation method, and (4) active observation techniques for improving point cloud data quality.
First, we propose an integrated framework for constructing and annotating active SF datasets, tailored to algorithm training and validation in autonomous driving simulations. The framework uses self-defined traffic scenarios to autonomously collect data at any scale, meeting the requirements for validating active SF estimation algorithms. It also addresses the lack of interactive validation for point cloud observation techniques, meeting the requirements of active observation methods for data collection and algorithm verification on reproducible scenes. Furthermore, it employs instance-level object-matching methods to automatically annotate SF data, providing accurate ground truth for algorithm evaluation. Experimental results demonstrate that the proposed method for constructing and annotating SF datasets satisfies the demands for data diversity, annotation precision, and validation of active SF estimation methods.
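The sketch below shows how instance-level object matching can produce ground-truth flow in a simulator, where each rigid object's pose is known at consecutive timestamps: a point attached to an object is warped by the pose change, and its displacement is the flow. The dictionary-of-poses interface and function names are hypothetical; the framework's actual annotation pipeline may differ.

```python
import numpy as np

def rigid_flow(points, T_src, T_dst):
    """Ground-truth flow for points on one rigid object.

    points: (N, 3) object points observed at time t (world frame).
    T_src, T_dst: (4, 4) object poses at times t and t+1 (world frame).
    """
    pts_h = np.c_[points, np.ones(len(points))]               # homogeneous coords
    warped = (T_dst @ np.linalg.inv(T_src) @ pts_h.T).T[:, :3]
    return warped - points

def annotate_frame(points, labels, poses_t, poses_t1):
    """Per-point flow for a whole frame, matching objects by instance label."""
    flow = np.zeros_like(points)
    for obj_id, T_src in poses_t.items():
        mask = labels == obj_id
        if mask.any() and obj_id in poses_t1:
            flow[mask] = rigid_flow(points[mask], T_src, poses_t1[obj_id])
    return flow

# Toy usage: one object translated by (1, 0, 0) between frames.
pts = np.random.rand(50, 3)
T0, T1 = np.eye(4), np.eye(4)
T1[:3, 3] = [1.0, 0.0, 0.0]
print(annotate_frame(pts, np.zeros(50, dtype=int), {0: T0}, {0: T1})[:2])
```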
Second, we present an SF estimation approach for large-scale complex environments, addressing the challenges posed by missing corresponding points, complicated motion patterns, and large motion scales. First, we develop a temporal feature bidirectional matching strategy for accuracy improvement, realized through point cloud spatial transformation, a cross-attention mechanism, and contextual point cloud information. Then, we develop a spatial multi-scale SF feature consistency module to extract spatially corresponding points, enhancing the stability of SF estimation features. Finally, we develop a neural network with a spatiotemporally consistent bidirectional attention module to reduce the harmful impact of missing points and refine SF estimation accuracy, leveraging the parallel computing capability of the Transformer attention mechanism. Experimental results validate the efficacy of our method, which not only addresses the challenges stemming from data noise and missing data in large-scale SF estimation but also effectively distinguishes complex scene features.
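A minimal sketch of the cross-frame attention described above is given below, assuming PyTorch: queries come from one frame and keys/values from the other, so each point aggregates temporally consistent features from the opposite frame, and running the module in both directions yields the bidirectional matching. The class name, feature width, and head count are illustrative assumptions, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn

class CrossFrameAttention(nn.Module):
    """Cross-attention from one frame's point features to the other frame's."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_q, feat_kv):
        # feat_q: (B, N1, C) query-frame features; feat_kv: (B, N2, C) other frame
        out, _ = self.attn(query=feat_q, key=feat_kv, value=feat_kv)
        return self.norm(feat_q + out)    # residual + norm, Transformer-style

# Toy usage: bidirectional matching between two frames of 256 points each.
f1, f2 = torch.randn(1, 256, 128), torch.randn(1, 256, 128)
attn = CrossFrameAttention()
fwd = attn(f1, f2)   # frame 1 attending to frame 2
bwd = attn(f2, f1)   # reverse direction for bidirectional consistency
```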
Third, we propose a local feature consistency-based SF estimation method to address the harmful impact of missing corresponding points and of up-sampling operations on SF estimation in small-scale complex scenes. First, we develop a robust multi-scale bidirectional symmetric feature-similarity strategy together with the MLP-Mixer operation to address the point cloud feature association problem caused by missing corresponding points. Then, we develop an up-sampling strategy that fuses the semantic and geometric information of SF features and considers local neighborhood feature correlations between neighboring frames and within the same frame, improving SF up-sampling quality. Finally, we develop a network structure featuring two alternating modules for cost volume updating and SF estimation to improve estimation accuracy. Experimental results demonstrate that the proposed method improves SF estimation accuracy and SF feature quality, and enhances the fidelity of the up-sampled SF.
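To make the up-sampling step concrete, the following sketch shows the purely geometric baseline it refines: coarse flow is interpolated onto the full-resolution cloud with inverse-distance weights over the k nearest sparse neighbors. The thesis's strategy additionally fuses semantic and geometric features across and within frames; the k value and helper names here are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def upsample_flow(sparse_xyz, sparse_flow, dense_xyz, k=3):
    """Interpolate coarse flow onto the full-resolution point cloud.

    Each dense point takes an inverse-distance-weighted average of the flow
    of its k nearest sparse neighbors.
    """
    dist, idx = cKDTree(sparse_xyz).query(dense_xyz, k=k)   # both (M, k)
    w = 1.0 / (dist + 1e-8)
    w /= w.sum(axis=1, keepdims=True)                       # normalize weights
    return (sparse_flow[idx] * w[..., None]).sum(axis=1)    # (M, 3)

# Toy usage: up-sample flow estimated on 128 points to a 1024-point cloud.
rng = np.random.default_rng(1)
sparse_xyz, sparse_flow = rng.normal(size=(128, 3)), rng.normal(size=(128, 3))
dense_xyz = rng.normal(size=(1024, 3))
print(upsample_flow(sparse_xyz, sparse_flow, dense_xyz).shape)  # (1024, 3)
```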
Fourth, we propose an active observation method based on intelligent local target selection and reinforcement learning-based motion planning to mitigate the impact of low-quality point clouds on SF estimation. First, we develop a reliable reachable-area estimation strategy using scene prediction with hidden point removal and a legality-checking scheme between the ego-vehicle and the road, as well as between the ego-vehicle and the predicted scene. Then, we develop an efficient local target decision strategy that comprehensively evaluates the disparity between two point cloud frames with a suitable scene similarity measure. Finally, we develop a reinforcement learning-based motion planning approach with an adaptive perception confidence network and reward function to ensure the ego-vehicle safely reaches the local target through the drivable area. Experimental results demonstrate that the proposed active local target selection technique effectively enhances point cloud data quality and improves SF estimation performance. Additionally, the motion planning method yields a high success rate in obstacle avoidance, ensuring secure arrival at local targets.
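As one way to realize the scene similarity measure used for local target selection, the sketch below scores each candidate observation pose by the symmetric Chamfer distance between the current cloud and the cloud predicted at that pose, and picks the candidate that preserves the most correspondences. Chamfer distance is a stand-in assumption here; the thesis's own metric may be defined differently.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pc1, pc2):
    """Symmetric Chamfer distance between two clouds (lower = more similar)."""
    d12, _ = cKDTree(pc2).query(pc1)    # nearest-neighbor distances pc1 -> pc2
    d21, _ = cKDTree(pc1).query(pc2)    # and pc2 -> pc1
    return d12.mean() + d21.mean()

def select_local_target(current_pc, predicted_pcs, candidate_poses):
    """Pick the candidate pose whose predicted cloud best matches the current one."""
    scores = [chamfer_distance(current_pc, pc) for pc in predicted_pcs]
    best = int(np.argmin(scores))
    return candidate_poses[best], scores[best]

# Toy usage: candidate A barely perturbs the scene, candidate B shifts it away.
rng = np.random.default_rng(2)
cur = rng.normal(size=(500, 3))
preds = [cur + rng.normal(size=(500, 3)) * 0.01, cur + 1.0]
print(select_local_target(cur, preds, ["pose_A", "pose_B"])[0])  # pose_A
```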
In summary, this thesis revolves around four main topics: the development of an active SF dataset and estimation method validation framework, an SF estimation technique for large-scale scenes, an SF estimation technique for small-scale scenes, and a point cloud active observation strategy. These efforts aim to ensure both the theoretical soundness and the practical applicability of the proposed methodology.

Keywords
Other Keywords
Language
Chinese
Training Category
Joint Training Program
Year of Enrollment
2018
Year of Degree Conferral
2024-06

Degree Assessment Subcommittee
Electronic Science and Technology
Chinese Library Classification Number
TP391.4
Source Repository
Manually Submitted
Document Type
Dissertation
Identifier
//www.snoollab.com/handle/2SGJ60CL/765725
Collections
College of Engineering
College of Engineering_Department of Computer Science and Engineering
Recommended Citation
GB/T 7714
WANG Shuaijun. Research on Active Scene Flow Estimation Techniques for Autonomous Navigation in Complex Scenes[D]. Harbin: Harbin Institute of Technology, 2024.