In this study, we propose a deep learning-based large-area inspection method for high-speed measurement of micro hole diameters. Micro holes are detected in large images and stored using YOLOv8, an object detection model. A super-resolution technique based on ESRGAN, a generative adversarial network, is applied to the small micro hole images, enhancing them to high resolution before their diameters are measured through image processing. When the diameters measured after 8x super-resolution are compared with the results from existing inspection equipment, the average error rate is remarkably low at 0.504%. Measuring an image of one micro hole takes 0.470 seconds, ten times faster than previous inspection methods. These results can contribute significantly to high-speed measurement and quality improvement through deep learning.
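As an illustrative sketch of the final measurement step only, assuming the detected hole has already been cropped, super-resolved, and binarized (none of these implementation details are given in the abstract), the diameter can be estimated from the foreground area as the equivalent-circle diameter:

```python
import numpy as np

def estimate_hole_diameter_px(binary_hole: np.ndarray) -> float:
    """Estimate a hole's diameter in pixels from a binary mask.

    Assumes the hole appears as nonzero (foreground) pixels. The
    equivalent-circle diameter d = 2 * sqrt(area / pi) is one common
    image-processing estimate; the paper's exact measurement step is
    not specified in the abstract.
    """
    area = np.count_nonzero(binary_hole)
    return 2.0 * np.sqrt(area / np.pi)

# Synthetic example: a filled circle of radius 20 px on a 64x64 image.
yy, xx = np.mgrid[:64, :64]
mask = ((yy - 32) ** 2 + (xx - 32) ** 2) <= 20 ** 2
diameter = estimate_hole_diameter_px(mask)  # close to 40 px
```

With 8x super-resolution, a result in super-resolved pixels would be divided by the scale factor (and multiplied by the camera's pixel pitch) to recover a physical diameter.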
Citations to this article as recorded by
Joo Sung Yoon, Il-ha Park, Dong Yoon Lee. A Review of Intelligent Machining Process in CNC Machine Tool Systems. International Journal of Precision Engineering and Manufacturing. 2025; 26(9): 2243. CrossRef
This study presents a method for inspecting ship block wall painting using a collaborative robot, a representative example of a human-collaborative robot system. The end-effector of the robot is equipped with a depth camera in an eye-in-hand configuration. The camera is used to measure and evaluate the thickness of the paint applied to an iron plate, simulating the conditions of ship block wall painting. To improve recognition accuracy, an object detection algorithm with rapid computation and high accuracy was utilized, and the paint areas were identified and outlined using the Canny edge detector. The proposed method demonstrated precise paint-area recognition by clearly identifying the center point and outline of each area. Comparing the paint thickness measurements with laser distance measurements confirmed the effectiveness of the proposed method.
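A minimal stand-in for the center-point and outline step, operating on a plain binary region rather than the paper's Canny-based pipeline (the mask, the 4-neighbour boundary rule, and the image sizes below are assumptions for illustration):

```python
import numpy as np

def region_center_and_outline(mask: np.ndarray):
    """Return the centroid and boundary mask of a binary paint region.

    Stand-in for the Canny-based outline step described above: here the
    outline is taken as foreground pixels with at least one background
    4-neighbour (the abstract gives no implementation details).
    """
    ys, xs = np.nonzero(mask)
    center = (float(ys.mean()), float(xs.mean()))
    padded = np.pad(mask.astype(bool), 1)
    # A pixel is interior when all four 4-neighbours are foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    outline = mask.astype(bool) & ~interior
    return center, outline

# Synthetic square "paint area" covering rows/cols 10..29.
m = np.zeros((40, 40), dtype=bool)
m[10:30, 10:30] = True
center, outline = region_center_and_outline(m)  # center is (19.5, 19.5)
```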
This study developed a defect-detection system for automotive wheel nuts. We proposed an image processing method using OpenCV for efficient defect detection, focusing on noise removal, ratio adjustment, binarization, polar coordinate system formation, and orthogonal coordinate system conversion. Through data collection, preprocessing, object detection model training, and testing, we established a system capable of accurately classifying defects and tracking their positions. There are four defect types: Type 1 and 2 defects are products completely broken along the circumference, while Type 3 and 4 defects are small circumferential dents and scratches. We utilized Faster R-CNN and YOLOv8 models to detect the defect types, and enhanced accuracy through effective preprocessing and post-processing steps. For Faster R-CNN, the AP values were 0.92, 0.93, 0.76, and 0.49 for Type 1, 2, 3, and 4 defects, respectively, with an mAP of 0.77. For YOLOv8, the AP values were 0.78, 0.96, 0.8, and 0.51 for Type 1, 2, 3, and 4 defects, respectively, with an mAP of 0.76. These results could contribute to defect detection and quality improvement in the automotive manufacturing sector.
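The polar/orthogonal coordinate steps can be sketched as a nearest-neighbour unwrap, under the assumption (not stated in the abstract) that the nut is roughly centred in the crop; unwrapping turns circumferential features such as dents into localized blobs in a rectangular image. The sampling resolution and interpolation are assumptions; the paper presumably uses OpenCV's own remapping:

```python
import numpy as np

def to_polar(img: np.ndarray, center, n_radii=32, n_angles=64):
    """Unwrap an image around `center` into (radius, angle) coordinates.

    Nearest-neighbour sketch of the polar-coordinate formation step:
    each output row is one radius, each column one angle.
    """
    cy, cx = center
    max_r = min(cy, cx, img.shape[0] - 1 - cy, img.shape[1] - 1 - cx)
    radii = np.linspace(0.0, max_r, n_radii)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    ys = np.round(cy + radii[:, None] * np.sin(angles)).astype(int)
    xs = np.round(cx + radii[:, None] * np.cos(angles)).astype(int)
    return img[ys, xs]

# Synthetic nut face: a filled disk of radius 12 centred at (20, 20).
yy, xx = np.mgrid[:41, :41]
disk = ((yy - 20) ** 2 + (xx - 20) ** 2) <= 12 ** 2
polar = to_polar(disk, (20, 20))  # inner rows all True, outer rows all False
```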
Citations to this article as recorded by
Jonghyeok Chae, Dongkyu Lee, Seunghun Oh, Yoojeong Noh. Large-area Inspection Method for Machined Micro Hole Dimension Measurement Using Deep Learning in Silicon Cathodes. Journal of the Korean Society for Precision Engineering. 2025; 42(2): 139. CrossRef
Recently, in-depth studies on sensors for autonomous vehicles have been conducted, and in particular there is a growing trend toward camera-only autonomous driving. Studies on object detection using IR (infrared) cameras are essential for overcoming the limitations of the VIS (visible) camera environment. Deep learning-based object detection requires sufficient data, and data augmentation can make the detection network more robust and improve performance. In this paper, we study a method to increase object detection performance by generating and training on high-resolution infrared images, using a data augmentation method based on a Generative Adversarial Network (GAN). We collected data from VIS and IR cameras under severe conditions such as snowfall, fog, and heavy rain, and used the KAIST infrared image dataset for training and verification. We confirmed that the proposed data augmentation method improved object detection performance by applying the generated dataset to various object detection networks. Based on these results, we plan to develop camera-only object detection technology by creating IR datasets from the large volume of VIS camera data to be collected in the future and by fusing them with VIS cameras.
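The augmentation scheme can be sketched at the dataset level; the GAN itself is out of scope here, so `generator` below is a hypothetical stand-in for a trained model that maps a frame to a synthetic IR frame, and the mixing ratio is an assumption:

```python
import random

def augment_with_generated(real_images, generator, ratio=0.5, seed=0):
    """Mix generator outputs into a training set.

    `generator` stands in for a trained GAN producing synthetic
    high-resolution IR frames; `ratio` controls how many synthetic
    samples are added relative to the real set (an assumed parameter).
    """
    rng = random.Random(seed)
    n_fake = int(len(real_images) * ratio)
    fakes = [generator(rng.choice(real_images)) for _ in range(n_fake)]
    return real_images + fakes

# Toy stand-in generator: tags its input as synthetic.
gen = lambda img: ("synthetic", img)
augmented = augment_with_generated(["ir_0", "ir_1", "ir_2", "ir_3"], gen)
```

The augmented list then feeds the detection network's training loop unchanged, which is what lets the same scheme be applied "to various object detection networks".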
Hands perform many functions, and life without the use of hands brings many inconveniences; people without the use of hands wear prostheses. Recently, there have been many developments and studies on robotic prosthetic hands that perform hand functions. Grasping motions are integral to these functions, and they require recognition of the grasping target, which can be achieved from images. In this study, object recognition for grasping motions is performed using deep learning-based object detection. A model suitable for the grasping motion was examined by comparing three object detection models. We also present a method for selecting a grasping target when several objects are recognized. This work will be used for grasping control of robotic prosthetic hands and may enable their automatic control in the future.
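A hedged sketch of the target-selection step when several objects are recognized; the abstract does not specify the paper's rule, so the heuristic below (prefer high confidence, penalize distance from the image centre) and the detection tuple format are assumptions for illustration:

```python
def select_grasp_target(detections, image_size):
    """Pick one detection as the grasping target.

    Heuristic stand-in for the selection method mentioned above.
    A detection is (label, confidence, (cx, cy)); confidence dominates,
    with a small penalty for objects far from the image centre.
    """
    w, h = image_size
    def score(det):
        _, conf, (cx, cy) = det
        dist = ((cx - w / 2) ** 2 + (cy - h / 2) ** 2) ** 0.5
        return conf - 0.001 * dist  # assumed trade-off weight
    return max(detections, key=score)

dets = [("cup", 0.91, (320, 250)),
        ("phone", 0.90, (60, 40)),
        ("pen", 0.55, (330, 240))]
target = select_grasp_target(dets, (640, 480))  # picks the centred cup
```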
Citations to this article as recorded by
Song Yeon Lee, Yong Jeong Huh. A Study on Defect Detection Model of Bone Plates Using Multiple Filter CNN of Parallel Structure. Journal of the Korean Society for Precision Engineering. 2023; 40(9): 677. CrossRef