Novel pose estimation algorithm for mobile augmented reality based on inertial sensor fusion
Main Author:
Format: Thesis Book
Language: English
Published: Kuala Lumpur : Kulliyyah of Engineering, International Islamic University Malaysia, 2022
Subjects:
Online Access: http://studentrepo.iium.edu.my/handle/123456789/11331
Summary: Augmented Reality (AR) applications have become increasingly ubiquitous, as they integrate virtual information such as images, 3D objects and video into the real world, thereby enhancing the real environment. AR functions as an interface by superimposing virtual-world information on top of the real environment. Moreover, image registration is critical in computer vision; it is widely employed in applications including image matching, change detection, 3D reconstruction and mobile robots. The main concern in augmented reality, however, is the registration of virtual information, that is, how to overlay computer-generated virtual content onto the real environment based on its surroundings. Vision-based pose estimation for augmented reality applications has been widely investigated. However, many of the earlier techniques relied on markers, and vision-based registration on mobile devices remains computationally expensive and imprecise. The accuracy of vision-based pose estimation also degrades under illumination changes across the frame-to-frame image sequence, which may cause jitter in the estimated pose. Many researchers have investigated the pose estimation and augmentation of 3D virtual objects in the physical environment, yet shortcomings in existing systems make the estimated object pose inaccurate for Mobile Augmented Reality (MAR) applications. This study proposes to estimate the pose of an object by blending a vision-based technique with a MEMS sensor (gyroscope) to minimise the jitter problem in MAR. The algorithm used for feature detection and description is Oriented FAST and Rotated BRIEF (ORB), while Random Sample Consensus (RANSAC) is used to estimate the homography for pose estimation. The performance of augmenting the 3D object was evaluated using both approaches: vision data only, and vision data combined with sensor data. After extensive experiments, the proposed method proved superior to existing vision-based pose estimation algorithms and successfully overcame the jitter problem of the existing system. The proposed algorithm was benchmarked against two other algorithms and addressed some of their shortcomings, such as computational cost; however, gyroscope issues such as drift remain a limitation.
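The abstract pairs ORB feature matching with RANSAC homography estimation and a gyroscope-based correction. As a rough illustration only (not the thesis's actual implementation), the OpenCV sketch below shows the vision-only step, ORB keypoints matched between a planar reference image and a camera frame with RANSAC rejecting outlier matches, followed by a simple complementary-filter style blend of a vision-derived angle with an integrated gyroscope rate; the function names, filter constant and fusion rule are assumptions made for this sketch.

```python
import cv2
import numpy as np

def vision_homography(reference_gray, frame_gray):
    """ORB + RANSAC sketch (illustrative; not the thesis's exact pipeline)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(reference_gray, None)
    kp_frm, des_frm = orb.detectAndCompute(frame_gray, None)

    # Hamming-distance brute-force matching is the usual pairing for ORB's
    # binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_frm), key=lambda m: m.distance)

    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC discards outlier correspondences while fitting the homography
    # that maps the planar reference into the current frame.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def fuse_yaw(yaw_vision, yaw_prev, gyro_rate_z, dt, alpha=0.98):
    """Complementary-filter style blend (an assumed fusion rule): the
    integrated gyro term keeps the estimate smooth between frames, while
    the vision term bounds long-term drift."""
    yaw_gyro = yaw_prev + gyro_rate_z * dt
    return alpha * yaw_gyro + (1.0 - alpha) * yaw_vision
```

Decomposing the returned homography with the camera intrinsics (for example via cv2.decomposeHomographyMat) would then yield candidate rotations and translations for placing the 3D object; the per-frame orientation could be smoothed with the fusion step before rendering.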
Item Description: Abstracts in English and Arabic.
"A dissertation submitted in fulfilment of the requirement for the degree of Master of Science (Computer and Information Engineering)." --On title page.
Physical Description: xiv, 112 leaves : illustrations ; 30 cm.
Bibliography: Includes bibliographical references (leaves 95-107).