REALITY SCAPE “REAL-TIME OBJECT DETECTION WITH AR OBJECT ANNOTATION”

dc.contributor.author Ahmer Akhtar Mughal, 01-132182-003
dc.contributor.author Madiha Talpur, 01-132182-046
dc.date.accessioned 2022-10-24T05:35:07Z
dc.date.available 2022-10-24T05:35:07Z
dc.date.issued 2022
dc.identifier.uri http://hdl.handle.net/123456789/13737
dc.description Supervised by Engr. Ammar Ajmal en_US
dc.description.abstract The central objective of the application is to provide a swift way of finding information about any product in one’s environment, such as its specifications, reviews, and where to buy it. With an AR display, a user need only point their phone camera at an object to receive interactive information in real time. To develop this application, we combine CoreML machine learning with ARKit augmented reality in the Swift language. Predictions are produced by an ML model based on a convolutional neural network (CNN) and displayed in the mobile app using AR whenever the user performs an action. AR annotation is handled by ARKit. The selection criteria for the ML model are explained further in the report, with test results and comparisons. Many CNN architectures are available; choosing the most suitable one requires a diligent review of the existing research against the requirements of our application and the conditions under which it can perform efficiently. Ideally, our mobile application should produce the most accurate results while minimizing computational complexity. Our application, RealityScape, comprises two parts, as mentioned above: object detection and AR annotation. For detection, the user points the camera at an object so that the frame is sent to the object detector, which returns the object’s name and bounding box. The deep learning model is stored on the device itself, allowing detection without internet connectivity and providing the speed we require. The model has been implemented using Swift 5 and CoreML. For annotation, once object detection has completed, the bounding box coordinates are passed to ARKit, which draws an AR element containing the detected object’s information (a minimal Swift sketch of this pipeline follows this record). To select the deep learning model, a series of controlled experiments was conducted to collect quantitative data; Tiny-YOLO proved to be the most suitable for the constraints and requirements of this project. en_US
dc.language.iso en en_US
dc.publisher Computer Engineering, Bahria University Engineering School Islamabad en_US
dc.relation.ispartofseries BCE;P-1663
dc.subject Computer Engineering en_US
dc.title REALITY SCAPE “REAL-TIME OBJECT DETECTION WITH AR OBJECT ANNOTATION” en_US
dc.type Project Reports en_US
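
The report record itself does not include source code, but the detection-to-annotation pipeline described in the abstract (ARKit camera frame → CoreML/Vision object detector → bounding box → AR label) can be sketched in Swift as below. This is a minimal, illustrative sketch, not the project’s actual implementation: the TinyYOLO class name is a hypothetical Xcode-generated wrapper for a bundled Tiny-YOLO .mlmodel, an ARSCNView-based session is assumed, and the mapping from Vision’s normalised bounding box to view coordinates is simplified (it ignores the camera display transform and assumes portrait orientation).

import ARKit
import CoreML
import SceneKit
import Vision

final class DetectionAnnotator {

    private let visionModel: VNCoreMLModel
    private weak var sceneView: ARSCNView?

    init?(sceneView: ARSCNView) {
        // "TinyYOLO" is a placeholder for the Xcode-generated class of the
        // bundled Tiny-YOLO .mlmodel; the project's real class name may differ.
        guard let coreMLModel = try? TinyYOLO(configuration: MLModelConfiguration()).model,
              let model = try? VNCoreMLModel(for: coreMLModel) else { return nil }
        self.visionModel = model
        self.sceneView = sceneView
    }

    // Run the on-device detector on a single ARKit camera frame.
    func detect(in frame: ARFrame) {
        let request = VNCoreMLRequest(model: visionModel) { [weak self] request, _ in
            guard let observation = request.results?
                    .compactMap({ $0 as? VNRecognizedObjectObservation })
                    .first,
                  let label = observation.labels.first else { return }
            DispatchQueue.main.async {
                self?.annotate(observation, with: label.identifier)
            }
        }
        request.imageCropAndScaleOption = .scaleFill
        // .right assumes the device is held in portrait orientation.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right)
        try? handler.perform([request])
    }

    // Project the bounding-box centre into the scene and pin a text node there.
    private func annotate(_ observation: VNRecognizedObjectObservation, with label: String) {
        guard let sceneView = sceneView else { return }
        // Vision's boundingBox is normalised with its origin at the bottom-left;
        // this conversion to view points is a simplification.
        let box = observation.boundingBox
        let center = CGPoint(x: box.midX * sceneView.bounds.width,
                             y: (1 - box.midY) * sceneView.bounds.height)
        guard let query = sceneView.raycastQuery(from: center,
                                                 allowing: .estimatedPlane,
                                                 alignment: .any),
              let result = sceneView.session.raycast(query).first else { return }

        let textNode = SCNNode(geometry: SCNText(string: label, extrusionDepth: 0.5))
        textNode.simdTransform = result.worldTransform
        textNode.scale = SCNVector3(0.005, 0.005, 0.005)
        sceneView.scene.rootNode.addChildNode(textNode)
    }
}

Because the model runs entirely on the device, as the abstract notes, the VNCoreMLRequest needs no network access; the raycast step is what pins the annotation to a real-world surface rather than to a 2D screen position.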

