Journal of the Korean Society of Manufacturing Technology Engineers, Vol. 30, No. 5, pp. 372-381
Abbreviation: J. Korean Soc. Manuf. Technol. Eng.
ISSN: 2508-5107 (Online)
Print publication date: 15 Oct 2021
Received 14 Jul 2021; Revised 31 Aug 2021; Accepted 29 Sep 2021
DOI: https://doi.org/10.7735/ksmte.2021.30.5.372
Neural-network-based Object Recognition and RANSAC-based Three-dimensional Posture Estimation for Solving the Pickup Problem of Occluded Parts
Kyuho Sim a; Guihyung Lee a,*
a Department of Mechanical Design Robot Engineering, Seoul National University of Science and Technology
Correspondence to: *Guihyung Lee. Tel.: +82-2-970-6325, E-mail: ghlee@seoultech.ac.kr
For a manipulator to pick up parts for assembly accurately, the position and pose of the parts must be recognized precisely. To operate in a complex work environment (e.g., a smart factory), the six degrees of freedom of each part, given by x, y, z, yaw, pitch, and roll, must be estimated in three dimensions. A RealSense depth camera readily provides three-dimensional (3D) information from a single RGB-D image. In this study, we first applied an artificial neural network (YOLACT) to separate objects from the background. We then estimated the 3D pose of each object by applying a RANSAC-based template-matching method to the 3D point cloud. From the computed roll, pitch, and yaw values, the gripper angle for picking up the object is obtained.
Keywords: Object detection, Occlusion, Artificial neural network, Instance segmentation, Point cloud, RANSAC
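The pipeline in the abstract has two geometric steps: prune the point cloud, then recover a 6-DoF pose whose roll, pitch, and yaw set the gripper angle. A minimal NumPy sketch of both steps is shown below; the function names and the ZYX Euler convention are illustrative assumptions, not the paper's implementation. `ransac_plane` shows the RANSAC sampling loop (here fitting the work-surface plane so it can be stripped before template matching), and `rpy_from_transform` extracts roll, pitch, and yaw from the 4x4 homogeneous transform that template matching would return.

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.005, rng=None):
    """RANSAC plane fit on an (N, 3) point cloud.

    Returns the best plane (normal, d), with normal . x + d = 0,
    and a boolean inlier mask. Illustrative of the RANSAC paradigm
    (sample minimal set, score by inlier count, keep the best model).
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        # Minimal sample: three points define a candidate plane.
        i = rng.choice(len(points), 3, replace=False)
        p0, p1, p2 = points[i]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        # Score: points within `threshold` of the plane are inliers.
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

def rpy_from_transform(T):
    """Roll, pitch, yaw (ZYX convention, R = Rz(yaw) Ry(pitch) Rx(roll))
    from a 4x4 homogeneous transform such as an estimated object pose."""
    R = T[:3, :3]
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw
```

The plane removal keeps only object points for matching; the angle extraction assumes the pose is away from the pitch = ±90° singularity, where roll and yaw become coupled.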
This study was supported by a research grant from Seoul National University of Science and Technology.
1. Kim, D. Y., 2019, Robotic Handling Parts Using Distributed Processing of Multi-CNN Algorithms, Master Thesis, Seoul National University of Science and Technology, Republic of Korea.
2. Kim, D. Y., Sim, K. H., Lee, G. H., 2019, Object Detection by Combining Two Different CNN Algorithms and Robotic Grasping Control, J. Inst. Control. Robot. Syst., 25:9, 811-817.
3. Lee, G. H., 2020, A Study on Robotic Grasp based on Instance Segmentation and Reinforcement Learning using 6-DoF Robotic Manipulator, Master Thesis, Seoul National University of Science and Technology, Republic of Korea.
4. Kuo, H. Y., Su, H. R., Lai, S. H., Wu, C. C., 2014, 3D Object Detection and Pose Estimation from Depth Image for Robotic Bin Picking, 2014 IEEE International Conference on Automation Science and Engineering (CASE), 1264-1269.
5. Girshick, R., Donahue, J., Darrell, T., Malik, J., 2014, Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 580-587.
6. Girshick, R., 2015, Fast R-CNN, 2015 IEEE International Conference on Computer Vision (ICCV), 1440-1448.
7. Ren, S., He, K., Girshick, R., Sun, J., 2015, Faster R-CNN: Towards Real-time Object Detection with Region Proposal Networks, Advances in Neural Information Processing Systems 28 (NIPS 2015), 91-99.
8. Redmon, J., Divvala, S., Girshick, R., Farhadi, A., 2016, You Only Look Once: Unified, Real-time Object Detection, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 779-788.
9. He, K., Gkioxari, G., Dollár, P., Girshick, R., 2017, Mask R-CNN, 2017 IEEE International Conference on Computer Vision (ICCV), 2980-2988.
10. Bolya, D., Zhou, C., Xiao, F., Lee, Y. J., 2019, YOLACT: Real-Time Instance Segmentation, Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 9157-9166.
11. Fischler, M. A., Bolles, R. C., 1981, Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography, Commun. ACM, 24:6, 381-395.
12. Rusu, R. B., Blodow, N., Beetz, M., 2009, Fast Point Feature Histograms (FPFH) for 3D Registration, 2009 IEEE International Conference on Robotics and Automation (ICRA), 3212-3217.
13. Choi, S., Zhou, Q.-Y., Koltun, V., 2015, Robust Reconstruction of Indoor Scenes, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5556-5565.
14. Buchholz, D., Winkelbach, S., Wahl, F. M., 2010, RANSAM for Industrial Bin-Picking, ISR 2010 (41st International Symposium on Robotics) and ROBOTIK 2010 (6th German Conference on Robotics), 1-6.
15. Bolya, D., Zhou, C., Xiao, F., Lee, Y. J., 2020, YOLACT++: Better Real-time Instance Segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence.
Kyuho Sim is a master's student in the Department of Mechanical Design Robot Engineering, Seoul National University of Science and Technology. His research interests are computer vision and deep learning.
E-mail: kgkg920@naver.com
Guihyung Lee is a professor in the Department of Mechanical System Design Engineering, Seoul National University of Science and Technology. His research interests are control and intelligent robotics.
E-mail: ghlee@seoultech.ac.kr