Vision-based Indoor Navigation of a Semi-autonomous and Interactive Unmanned Aerial System

Advisor:
McCrink, Matthew; Whitfield, Clifford A.
Issue Date:
2017-05
Publisher:
The Ohio State University
Series/Report no.:
The Ohio State University. Department of Mechanical and Aerospace Engineering Undergraduate Research Theses; 2017
Abstract:
This paper considers the development and implementation of a vision-based onboard flight management computer (FMC) and inertial navigation system (INS) that can estimate and control the attitude and relative position of the vehicle while interacting with an operator in an indoor environment. A hexacopter is developed that carries an onboard inertial measurement unit (IMU), a small onboard computer, and a Microsoft Kinect sensor. Because directional maneuvering of the hexacopter is achieved by changing the vehicle's angular position, the project centers on the fusion of inertial and vision sensors.
To achieve stable control of the multicopter's orientation, a complementary filter is implemented on the IMU. Because the raw sensors do not by themselves provide an accurate angular position of the vehicle, the complementary filter fuses the raw sensor data into a low-noise, low-drift estimate of the Euler angles. Autonomous indoor flight requires position sensors other than GPS, whose accuracy is only about 2 to 3 meters and whose signals are typically unavailable indoors. In this project, a computer vision algorithm provides the position estimate. The vision algorithm uses the onboard Microsoft Kinect and computer to execute the EmguCV and Kinect SDK libraries. Because the Kinect provides both color and depth data, the system can detect an object and report its local coordinates in real time, enabling autonomous indoor flight; these coordinates are used to correct the position of the vehicle. In addition, to issue in-flight commands such as takeoff, landing, proceed, and retreat, an artificial neural network classifies human gestures so that the gestures themselves serve as commands.
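As a concrete illustration of the attitude estimator described above, the following is a minimal Python sketch of a first-order complementary filter for one Euler angle. The blend constant ALPHA and the helper accel_roll_pitch are illustrative assumptions; the thesis does not state its filter gains or implementation language.

```python
import math

# Blend constant: how much to trust the integrated gyro angle versus the
# accelerometer-derived angle each update. 0.98 is a common illustrative
# choice, not a value taken from the thesis.
ALPHA = 0.98

def accel_roll_pitch(ax, ay, az):
    """Roll and pitch (rad) from the gravity vector seen by the accelerometer.

    Noisy but drift-free: it anchors the long-term estimate.
    """
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt):
    """One filter update for a single Euler angle.

    The integrated gyro rate (rad/s) is smooth but drifts; blending it with
    the accelerometer angle yields the low-noise, low-drift estimate the
    abstract describes.
    """
    return ALPHA * (angle_prev + gyro_rate * dt) + (1.0 - ALPHA) * accel_angle

# Example update at 100 Hz with hypothetical sensor readings.
roll_acc, _ = accel_roll_pitch(ax=0.1, ay=0.3, az=9.7)
roll_est = complementary_filter(angle_prev=0.02, gyro_rate=0.5,
                                accel_angle=roll_acc, dt=0.01)
```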
The system is expected to maintain vehicle position to within 1 meter, a better accuracy than low-cost GPS receivers provide, and to control the corresponding angular orientation required for position control. The system is also expected to interact with an operator within a reasonable interaction range while maintaining the given flight control tasks.
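The position correction described above depends on converting a detected object's pixel location and Kinect depth reading into local 3D coordinates. The sketch below shows one common way to do this with a pinhole camera model; the intrinsic values are nominal Kinect v1 figures assumed for illustration, not calibration data from the thesis, whose implementation uses EmguCV and the Kinect SDK.

```python
# Nominal intrinsics for a Kinect v1 depth camera on a 640x480 image.
# Approximate, assumed values -- the thesis does not list its calibration.
FX, FY = 585.0, 585.0   # focal lengths, pixels
CX, CY = 320.0, 240.0   # principal point, pixels

def pixel_to_local(u, v, depth_m):
    """Back-project a detected object's pixel (u, v) and its Kinect depth
    reading (meters) into local camera-frame coordinates (X right, Y down,
    Z forward) using the standard pinhole model."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return x, y, depth_m

# Example: an object detected at pixel (400, 220) at 2.5 m of depth.
print(pixel_to_local(400, 220, 2.5))
```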
Academic Major:
Aeronautical and Astronautical Engineering
Sponsors:
OSU College of Engineering
Embargo:
No embargo
Type:
Thesis