Objectives
In technical terms, the list of objectives was not short. The camera system would need to represent the field of fire accurately, with precise alignment to minimize aiming error and the need for correction in the control system. Its resolution would need to be high enough to be accurate at a distance, but not so high that it would bog down the central processing portion of the system (in the user interface) with too much data. Its frame rate would need to be fast enough to track quickly moving targets, such as an erratic attacker, a realistic scenario if the attacker had military training in evasive maneuvers. The transmission range of the camera would need to represent a defender in a room adjacent to the defense area, protected enough to keep the user safe, and therefore likely a difficult medium through which to propagate signals.
The user interface tablet would then need to process this visual information rapidly and present it to the user in a meaningful way, all while accepting the user's touch-screen input quickly and accurately to minimize latency between target acquisition and firing. Here, the code would need to be lean and efficient to minimize strain on the system's central processor, and the user interface would need to be robust enough to tolerate a range of non-ideal or unexpected inputs from untrained users.
Following this in the system loop, the control system would also have to receive signals across distances and through media similar to those facing the front-end camera. The final firing control would be one of the most difficult elements: it would need to be calibrated so that its aim accurately matched the visual field presented to the user on the user-interface tablet, and it would control aim via a PID controller without any further feedback into the system.
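To make the aiming step concrete, the following is a minimal sketch of a single-axis PID update of the kind the firing control might use. The class name, gain values, and control interval are illustrative assumptions, not the project's actual tuning or implementation.

```python
# Minimal single-axis PID sketch (e.g., for pan). Gains and the 20 ms
# control interval below are illustrative assumptions only.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        """Return a control output driving 'measured' toward 'setpoint'."""
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: command the servo toward a 30-degree target angle each tick.
pan = PID(kp=2.0, ki=0.1, kd=0.05)
command = pan.update(setpoint=30.0, measured=27.5, dt=0.02)
```

Because the system provides no further feedback downstream of the controller, the quality of the initial calibration between the camera's visual field and the armature's commanded angles carries the full burden of aiming accuracy.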
Around all of this would need to be an enclosure that left the inner workings visible to professors (or any other evaluators interested in them) and accessible for the frequent changes the group knew a prototype system would probably require. The casing would need to be light, with wheels or casters for easy transport, yet sturdy enough to tolerate multiple teardowns and rebuilds. The control armature would need to be strong enough to accept heavier firing systems while remaining nimble enough not to make the system sluggish or overloaded.
Project Requirements and Specifications
In this section, the requirements and specifications for the main systems that will be used in the project are detailed. These requirements were used as a basis for determining which specific components to select for more thorough analysis. This process is explored in the next section, titled Research Related to Project Definition.
User Interface
It was decided that the user interface for the project must be intuitive: the user should be comfortable firing the system upon first use without extensive instruction (though a brief on-screen note will guide him or her), and the interface should feel completely seamless and integrated with the user's thought process after only a few firings.
Because the system may need to be operated by personnel untrained in projectile combat, who may simply lack the modern "video game dexterity" required for a manually aimed system, it was decided early on that the system would trade some control for ease of operation: aiming would be handled primarily by the system, with the user operating chiefly in a target-selection role.
Noting Apple's expertise in designing intuitive interfaces, the group took a cue from its designs and began with a layout similar to the one shown in Figure . Potential targets would be outlined by blob-detection software in high-contrast, distinct colors, readily indicating to the user which target corresponds to which colored button at the bottom of the screen. When the user chooses an auto-tracked target, he or she presses the button of the corresponding color, and the system automatically calculates the centroid of that target and fires upon that location. Alternatively, in a manual-firing mode, the user selects a target by touching its location on the screen; that location is then outlined in a color that does not correspond to any of the automatically detected targets, and with a second touch on the corresponding button, the system initiates the firing sequence against this manually selected location.
Figure : Apple-inspired user interface showing buttons and target outline in red
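As an illustration of the blob-detection and centroid step described above, the following is a minimal sketch using OpenCV in Python. The threshold value, grayscale pre-processing, placeholder frame file, and outline colors are all assumptions made for the sake of example, not the project's chosen parameters.

```python
import cv2

# "frame.png" stands in for one captured camera frame (an assumption for
# this sketch; the real system would receive frames over the wireless link).
frame = cv2.imread("frame.png")

# Threshold the grayscale image to separate candidate targets (blobs)
# from the background; the threshold value of 60 is illustrative.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)

# Treat each external contour as one potential target.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

outline_colors = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]  # BGR: red, green, blue
for contour, color in zip(contours, outline_colors):
    # Outline the target in a distinct color matching its on-screen button.
    cv2.drawContours(frame, [contour], -1, color, 2)

    # Centroid from the contour's spatial moments: the point to fire upon.
    m = cv2.moments(contour)
    if m["m00"] > 0:
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (cx, cy), 4, color, -1)
```

Computing the centroid from the contour's moments gives a single aim point even when the detected blob is irregularly shaped, which matches the goal of firing on the center of the selected target.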
Tablet
The tablet will be the main computational resource for image processing. It will initially receive input from the camera in the form of captured frames. It will then analyze the images and perform the programmed operations to detect and track multiple moving targets. In addition, it will be able to recognize manually chosen stationary targets. The specifics of this process are detailed further in section 4.5.2, titled Target Acquisition. To accomplish these tasks, the tablet must meet certain requirements. First, it must be capable of interfacing wirelessly with both the camera and the system processor, to which it will send the objects' locations so the servos can be properly oriented. The tablet must also be easily programmable, since an application will need to be created to serve as the user interface for selecting targets. A touch screen is another necessity, since one of the primary project goals is to create a touch-operated system. Additional considerations include computing power and processing speed, which are important for processing the images with minimal delay.
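To illustrate how the tablet might hand a selected target's location off to the system processor, the following is a hypothetical sketch using a UDP datagram. The processor's address, the port, and the JSON message shape are assumptions made purely for illustration; the project's actual wireless link and protocol are specified elsewhere.

```python
import json
import socket

# The processor's address/port and the JSON payload format below are
# assumptions for this sketch, not the project's defined protocol.
PROCESSOR_ADDR = ("192.168.1.50", 5005)

def send_target(sock, cx, cy):
    """Send one target centroid (pixel coordinates) as a small JSON datagram."""
    payload = json.dumps({"x": cx, "y": cy}).encode("utf-8")
    sock.sendto(payload, PROCESSOR_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_target(sock, 320, 240)  # e.g., a centroid from the blob-detection step
```

A small fixed-format message like this keeps the per-frame transmission overhead low, which matters given the project's emphasis on minimizing latency between target selection and firing.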