COT5930 – Digital Image Processing Fall ‘10 Term Project Final Report

Professor: Oge Marques-Filho



Non-distributed Object Recognition Potential on the Android Platform

Charles Norona



Florida Atlantic University

cnorona1@fau.edu



Abstract
In recent years, mobile devices such as smart phones have become more powerful in terms of performance and battery life, despite development practices that call for application developers to be mindful of limited computational resources and to offload extensive computational work to the cloud. In addition, nearly all mobile devices considered smart phones come equipped with a camera and face the possibility of losing data connectivity. Together, these factors make the smart phone a tool for a wide variety of applications, from entertainment and education to surveillance and productivity. This warrants an investigation into the practical feasibility of running computationally expensive algorithms on computationally restricted mobile hardware platforms without distributed computing.

1. Introduction

There are a few concepts one needs to be familiar with in this domain. The first is Speeded Up Robust Features (SURF), the mechanism used for object recognition in this project. The second is Android, the software platform used to implement and experiment with the SURF functionality. The third is distributed computing, which one of the referenced projects uses to delegate some of the image processing computations.

Speeded Up Robust Features (SURF) is a mechanism for extracting interest points from an image and characterizing them with descriptors, much like David Lowe's Scale Invariant Feature Transform (SIFT) [3][4]. Each descriptor encodes orientation and gradient information gathered horizontally and vertically using Haar wavelet responses, as exemplified in Figure 1. Given the descriptors of one image and those of a second image, a comparison between the two can find matches that suggest an object or interest point has been recognized; a sketch of such a matching step follows Figure 1.


Figure 1: Summation of Haar Wavelets in Image Information. Source: Bay et al. [3]
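
To make the matching step concrete, below is a minimal sketch in Java (not the project's or OpenSURF's actual code) of comparing two sets of 64-dimensional SURF descriptors with a nearest-neighbor distance-ratio test; the class name and the ratio threshold are assumptions for illustration.

    // Minimal sketch of SURF descriptor matching; class and method names
    // are illustrative, not part of OpenSURF or OpenASURF.
    public final class SurfMatcher {

        // Each row of a and b holds one 64-dimensional SURF descriptor.
        // ratio is the acceptance threshold, e.g. 0.65.
        public static int countMatches(float[][] a, float[][] b, float ratio) {
            int matches = 0;
            for (float[] da : a) {
                double best = Double.MAX_VALUE;   // distance to nearest neighbor
                double second = Double.MAX_VALUE; // distance to runner-up
                for (float[] db : b) {
                    double dist = 0.0;
                    for (int i = 0; i < da.length; i++) {
                        double diff = da[i] - db[i];
                        dist += diff * diff;      // squared Euclidean distance
                    }
                    if (dist < best) {
                        second = best;
                        best = dist;
                    } else if (dist < second) {
                        second = dist;
                    }
                }
                // Accept only if the nearest neighbor clearly beats the
                // runner-up; distances are squared, so the ratio is squared too.
                if (best < ratio * ratio * second) {
                    matches++;
                }
            }
            return matches;
        }
    }

A low match count then indicates that the captured image likely does not contain the reference object.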
The other major component of this project is Android. This software platform is spreading across many mobile devices that have cameras and enough processing power to mimic most functionality desktop computers are capable of today. Furthermore, Android's application development framework made it ideal for implementing and experimenting with image processing techniques, thanks to the ability to implement some functionality natively through the Java Native Interface (JNI). Another advantage of using Android is the ability to create a quick, simple interface for demonstrating experimental results.

One practice this project aims to avoid is distributed computing. In Olsson and Åkesson's work, the mobile application relied on a server they had set up: the application delegated the computation of descriptors to this server, which returned the descriptor information.



2. Related Work

Plenty of work has already been done on object recognition in general and on the Android platform in particular.

In August of 2010, Vivek Tyagi presented his Master's thesis at Florida Atlantic University on Android object recognition using distributed computing. His investigation focused on the efficiency and reliability of the SURF mechanism on Android devices [5].

One of the inspirations for Tyagi's work, as well as for the project this paper is based on, is Sebastian Olsson's and Philip Åkesson's thesis work on creating a custom SURF library and implementing a distributed processing mechanism between servers and an Android device acting as a client [1].

Although Olsson and Åkesson promised a release of their SURF library, an alternative had to be used. The well-regarded OpenSURF library proved to be readily available thanks to its creator, Christopher Evans. Furthermore, an Android counterpart was created by Ethan Rublee [6][7].

3. Methods

This project was conducted per the requirements of the Digital Image Processing course taught by Dr. Oge Marques-Filho and involved several task stages: integrated development environment (IDE) setup, design and code modifications, Google Code project management and history, and testing.

Developing Android applications requires setting up the Eclipse IDE with the Android Development Tools plug-in, combined with the Subclipse Subversion client plug-in for use with the Google Code repository. Instructions on setting up the Eclipse IDE for Android can be found at developer.android.com [8], the Subclipse Subversion client is available at subclipse.tigris.org [9], and the Google Code repository can be found at code.google.com/p/androidobjectrecognition/ [10].

The first step in validating whether handheld devices can feasibly conduct object recognition in real time was to create a prototype that achieved one-time object recognition. The design takes a reference image and an image captured from the camera and compares their SURF descriptors. The next step would have been a real-time counterpart, which did not happen due to project management complications as well as feasibility issues; the feasibility is further explained in the discussion section.
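
As a rough illustration of that design, the following Java sketch outlines the one-time recognition flow; describeSurf() is a hypothetical stand-in for the OpenASURF descriptor calls, SurfMatcher refers to the matching sketch in the introduction, and the ratio and match-count thresholds are assumptions rather than measured values.

    import android.graphics.Bitmap;

    // Sketch of the one-time recognition flow under assumed names.
    public class OneTimeRecognition {

        public boolean recognize(Bitmap reference, Bitmap captured) {
            float[][] refDesc = describeSurf(reference); // reference image
            float[][] capDesc = describeSurf(captured);  // camera capture

            // Count descriptor pairs that pass the distance-ratio test;
            // 0.65 and the minimum of 10 matches are assumed thresholds.
            int matches = SurfMatcher.countMatches(refDesc, capDesc, 0.65f);
            return matches >= 10;
        }

        // Placeholder for the library's descriptor extraction step.
        private float[][] describeSurf(Bitmap image) {
            throw new UnsupportedOperationException("stub: delegate to the SURF library");
        }
    }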

At the beginning of this project, an open source Google Code repository was established, and it was maintained throughout the project. More information about this project is available at code.google.com/p/androidobjectrecognition/ [10].

Once a sustainable implementation was achieved, a few images were taken and compared as part of the testing procedure. The first trial image was taken of the target image at close proximity, head-on, with no rotation. The second image was of an entirely different scene with little or no resemblance to the target image. The third image was of the target image but taken from an angle and from approximately thirty centimeters away. All of the tests were conducted on a T-Mobile G1, whose hardware is comparable to that of the Android Dev Phone 1 [11]. The results of these tests can be found in the results section and are further explained in the discussion section.



4. Results

The target image, which is available through the Gallery activity, is depicted in Figure 2. All subsequent images taken under the Camera activity have their acquired descriptors compared with those of the target image.




Figure 2: Gallery activity screenshot with annotated SURF descriptors.
Figures 3-5 depict the results of the object recognition functionality and the matching feature.



Figure 3: Head-on capture image with some matching descriptors.


Figure 4: Captured image of a whiteboard on which the implementation was designed. No matching descriptors.


Figure 5: Captured image of the target image from an angled, more distant point of view.

5. Discussion

The originally proposed work had three goals:




  1. Investigate the feasibility of object recognition on the Android platform during real-time image or video capture.

  2. Investigate the performance in terms of speed and power consumption.

  3. Implement basic shape (e.g. squares, circles, triangles) recognition for real-time video capture.

Additionally, the proposal made two hypotheses:




  1. In first generation phones (e.g. Android Dev Phone 1, T-Mobile G1), the performance of real-time object recognition algorithms will be inadequate.

  2. Prolonged use or computation of image processing algorithms will drastically reduce battery lifetime.

The implementation of the proposed work only partially achieved goals one and three, since real-time object recognition was never implemented, and the second goal was never addressed. Nonetheless, a basic implementation with one-time object recognition is enough to draw conclusions about the first hypothesis and to comment on the effectiveness of the Android SURF implementation.

The most costly and time-consuming part of this implementation is the acquisition of SURF descriptors, for two reasons. One is the nature of the mathematical computations needed to find interest points in an image, as explained by Bay et al. in the paper that introduces SURF [3]. The other is the implementation of the library.

All functions of the library are implemented at the C/C++ level through the Java Native Interface. On Android, making native calls is costly and should be done only if the cost of the call is outweighed by the advantage of using the lower-level language for optimization purposes. As of now, the library requires three native calls. This can be reduced by passing all of the necessary information to a single entry point at the C/C++ level, requiring only one call from the Java level and avoiding repeated switches between the Java and C levels. Doing so should reduce the overall turnaround time to acquire SURF information for an image, but it may still not be enough to make real-time object recognition feasible on first generation, lower performing mobile devices.
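
A minimal sketch of that consolidation is shown below; the class, method, and library names are hypothetical rather than OpenASURF's actual API, and the octave and threshold parameters are assumed defaults.

    // Sketch of consolidating several JNI crossings into a single call.
    public final class SurfNative {

        static {
            // Assumed native library name (loads libsurfjni.so on Android).
            System.loadLibrary("surfjni");
        }

        // Single native entry point: grayscale pixels in, packed descriptor
        // rows out (x, y, scale, orientation, then 64 descriptor values).
        private static native float[] detectAndDescribe(
                byte[] grayPixels, int width, int height,
                int octaves, float hessianThreshold);

        // One Java-to-C crossing replaces separate detect/orient/describe
        // calls, cutting the per-image JNI overhead.
        public static float[] describe(byte[] grayPixels, int width, int height) {
            return detectAndDescribe(grayPixels, width, height, 4, 0.0004f);
        }
    }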

Despite real-time object recognition not being achieved in time for the project deadline, the one-time process shows the potential for mobile devices to conduct object recognition. The SURF mechanism is claimed to capture consistent interest points regardless of scale and some rotation effects [3]. However, the tests conducted in Vivek Tyagi's trials, as well as those done in this project, indicate reduced effectiveness when scaling and rotation are introduced [5].

Because no experiments were run on higher performing devices (e.g. Nexus One, HTC Evo 4G, Samsung Galaxy), this project provides no empirical data on whether such devices can achieve real-time object recognition. However, given the optimization suggested earlier in this section and improved hardware specifications, the prospect remains promising.



6. Conclusion

The experiments conducted throughout this project indicate that mobile devices in the near future, if not now, will be capable of conducting real-time image processing even without distributed computing. With some optimizations to the SURF library and improved hardware, this prospect should become a reality.



7. References

[1] Olsson, S. and Åkesson, P. "Distributed Mobile Computer Vision and Applications on the Android Platform." Master's Thesis. Lund University. 2009.
[2] Henze, N., Schinke, T., and Boll, S. "What is That? Object Recognition from Natural Features on a Mobile Phone." Research Paper. University of Oldenburg. 2009.
[3] Bay et al. "Speeded Up Robust Features (SURF)." Research Paper. Computer Vision Laboratory, ETH: ftp://ftp.vision.ee.ethz.ch/publications/articles/eth_biwi_00517.pdf. [Accessed: Nov. 26, 2010].
[4] Lowe, David G. "Distinctive Image Features from Scale-Invariant Keypoints." Research Paper. University of British Columbia. 2004. http://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf. [Accessed: Nov. 26, 2010].
[5] Tyagi, Vivek Kumar. "Object Recognition on the Android Platform Using Speeded Up Robust Features." Master's Thesis. Florida Atlantic University. 2010. http://digitool.fcla.edu/R/1J7CTJ79EBPN7DNGG89VIV4RXY5E2RIAMVINDYBXILYE7LU623-00343?func=dbin-jump-full&object_id=2683531&local_base=GEN01&pds_handle=GUEST. [Accessed: Nov. 26, 2010].
[6] Evans, Chris. OpenSURF home page: http://www.chrisevansdev.com/computer-vision-opensurf.html. [Accessed: Nov. 26, 2010].
[7] Rublee, Ethan. OpenASURF, an Android counterpart to the SURF library: https://github.com/chbfiv/OpenASURF. [Accessed: Oct. 28, 2010].
[8] Android SDK Quick Start Tutorial: http://developer.android.com/sdk/index.html. [Accessed: Nov. 26, 2010].
[9] Subclipse Subversion client download: http://subclipse.tigris.org/servlets/ProjectProcess;jsessionid=0F089D01ADED834C9D2FF382E1E719F3?pageID=p4wYuA. [Accessed: Nov. 26, 2010].
[10] Norona, C. "Android Object Recognition." Google Code project and repository: http://code.google.com/p/androidobjectrecognition/. [Accessed: Nov. 26, 2010].
[11] "Android Dev Phone 1 Hardware Specifications." Wikipedia: http://en.wikipedia.org/wiki/Android_Dev_Phone#Hardware_specifications. [Accessed: Nov. 28, 2010].
[12] Faculty homepage of Dr. Oge Marques-Filho: http://faculty.eng.fau.edu/omarques/. [Accessed: Nov. 28, 2010].

8. Appendix

This appendix contains information relevant to the project as well as related documents.


8.1. Project Proposal


Term Project Proposal:

Object Recognition for Android Mobile Devices


By: Charles Norona
COT5930 – Digital Image Processing

Dr. Oge Marques-Filho

Fall 2010

Topic and Justification

In recent years, mobile devices such as smart phones have become more powerful in terms of performance and battery life, despite development practices that call for application developers to be mindful of limited computational resources. In addition, nearly all mobile devices considered smart phones come equipped with a camera and face the possibility of losing data connectivity. Together, these factors make the smart phone a tool for a wide variety of applications, from entertainment and education to surveillance and productivity. This warrants an investigation into the practical feasibility of running computationally expensive algorithms on computationally restricted mobile hardware platforms without distributed computing.



Project Goals

Since this project will not be part of a prolonged effort toward a master's thesis, the scope has been reduced to the following goals:




  1. Investigate the feasibility of object recognition on the Android platform during real-time image or video capture.

  2. Investigate the performance in terms of speed and power consumption.

  3. Implement basic shape (e.g. squares, circles, triangles) recognition for real-time video capture.



Research Hypotheses

The proposed effort aims to validate or invalidate the following hypotheses:




  1. In first generation phones (e.g. Android Dev Phone 1, T-Mobile G1), the performance of real-time object recognition algorithms will be inadequate.

  2. Prolonged use or computation of image processing algorithms will drastically reduce battery lifetime.



Relevant Literature

Olsson, S. and Åkesson, P. “Distributed Mobile Computer Vision and Applications on the Android Platform.” Master’s Thesis Paper. Lund University. 2009.

Henze, N., Schinke, T., and Boll, S. “What is That? Object Recognition from Natural Features on a Mobile Phone.” Research Paper. University of Oldenburg. 2009.

Methodology

The proposed effort will involve the following major tasks and their tentative deadlines:




  1. Research previous implementations and involved concepts. Due October 14th, 2010.

  2. Design research prototype. Due October 21st, 2010.

  3. Implement basic prototype (image processing functions without real-time OR). Due October 28th, 2010.

  4. Testing and evaluation. Due November 4th, 2010.

  5. Implement advanced features (real-time OR with annotations). Due November 18th, 2010.

  6. Testing and evaluation. Due November 24th, 2010.


Test Images

Due to the reduced scope, the prototype will only be able to identify simple shapes such as triangles, squares, and circles. Below are a couple of test images for the basic implementation. Note that real-time object recognition will require identifying arbitrary shapes in the mobile device's physical environment.

Source: http://www.montessoriedutoys.com/upload/20090326002338_946.jpg
Size: 60 KB
Format: JPEG


Source: http://stephenkyledesign.com/about/shapes.png
Size: 19 KB
Format: PNG

Expected Results and Deliverables

This project should yield a working prototype of real-time object recognition on the Android platform, as well as a comprehensive scientific report on the potential of smart phones to perform strenuous image processing techniques on the fly. The project implementation and related documents will be available online via Google Code open source project hosting and will also be delivered via standard assignment submission procedures in a single zip file.





8.2. Final Design Algorithm Flow


The following is an activity diagram showing the relationship between the Gallery and Camera activities and the overall algorithmic flow.


