Air Force 14.1 Small Business Innovation Research (SBIR) Proposal Submission Instructions



OBJECTIVE: Develop a DIS/HLA gateway to permit a variety of non-simulation standardized trainers to integrate with and operate on the Distributed Mission Operations Network (DMON) for training exercises.

DESCRIPTION: The Air Force uses a variety of training systems and simulators for space Command and Control (C2) operator training. These systems and simulators are “stove-piped,” in that they use different hardware, operating systems, networking capabilities, and proprietary software. In addition, none of these systems are Distributed Interactive Simulation (DIS) or High Level Architecture (HLA) standard system configurations, and each differs considerably in its capabilities to support training. These differences drive high development and maintenance costs and also prevent these systems from integrating with and operating on the Distributed Mission Operations Network (DMON), the USAF standard distributed simulation network. Further, these systems cannot be brought into the broader USAF and Air Education and Training Command (AETC) training enterprise architecture, which requires a standard approach for operator training that employs an extensible architecture and commercial off-the-shelf (COTS) personal computer (PC) hardware and operating systems.
A current effort to address some of these differences and move toward a common training system structure is the Standard Space Trainer (SST). The SST has been designed to facilitate the development and delivery of training services across an enterprise, to support the coordination and conduct of training, and to facilitate content and system integration as additional services through a software development kit and a published application interface specification. However, even the SST is not capable of participating in distributed training and rehearsal events for different space operations communities. Because the cost of modifying legacy training systems is untenable, a substantial opportunity exists to develop a DIS- and HLA-compliant gateway that permits these systems, including the SST, to continue to operate natively while passing data to and from the gateway, which performs the translation to and from the larger DMON enterprise. The developed gateway and software must be compliant with the Joint Federation Object Model (FOM) specifications, IEEE Distributed Interactive Simulation (DIS) / High Level Architecture (HLA) standards, DoD network, connectivity, and interoperability standards (IAW AFI 36-2251, Air Force Training System Management, 5 Jun 2009), and the current DMON standards.
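To illustrate the translation role such a gateway would play, the minimal sketch below (Python; the trainer message fields, entity-ID mapping, and simplified PDU structure are illustrative assumptions, not the IEEE 1278.1 wire format or any fielded SST interface) converts a native trainer state update into a DIS-style entity state record that could then be published toward the DMON side.

```python
import math
from dataclasses import dataclass

# Hypothetical native message from a stove-piped trainer (field names are illustrative).
@dataclass
class TrainerStateMsg:
    asset_id: str
    lat_deg: float
    lon_deg: float
    alt_m: float
    heading_deg: float

# Simplified stand-in for a DIS Entity State PDU; a real gateway would populate the
# full IEEE 1278.1 PDU (geocentric coordinates, entity type enumerations, dead reckoning, etc.).
@dataclass
class EntityStatePdu:
    site: int
    application: int
    entity: int
    location: tuple          # placeholder for geocentric X/Y/Z
    orientation_psi: float   # heading, radians

def translate(msg: TrainerStateMsg, entity_map: dict) -> EntityStatePdu:
    """Map a native trainer update onto the DIS-style record used on the DMON side."""
    site, app, ent = entity_map[msg.asset_id]
    # A real implementation would convert geodetic lat/lon/alt to ECEF here.
    return EntityStatePdu(
        site=site,
        application=app,
        entity=ent,
        location=(msg.lat_deg, msg.lon_deg, msg.alt_m),
        orientation_psi=math.radians(msg.heading_deg),
    )

if __name__ == "__main__":
    table = {"SAT-OPS-01": (42, 7, 1)}          # illustrative entity-ID mapping
    pdu = translate(TrainerStateMsg("SAT-OPS-01", 38.8, -104.7, 0.0, 90.0), table)
    print(pdu)
```

A production gateway would perform the full geodetic-to-geocentric conversion, emit complete DIS PDUs or HLA FOM objects, and handle the reverse translation from DMON traffic back into each trainer's native format.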

PHASE I: Conduct a training systems and content analysis of candidates for DMO integration. For these candidates, conduct a training mission analysis to characterize data interfaces needed and alignment of training system functions with DMO events of merit. Develop a data specification for the types of data translation the gateway will need to accomplish to support accredited training across the DMON.

PHASE II: Develop and demonstrate a gateway that allows the candidate training systems to interoperate in DMO training events. The gateway should permit crews to interact with the combined trainer/DMO environment in real time, reacting to stimulus and also providing stimulus to others as part of the larger training event. Phase II will also demonstrate student performance tracking across training environments in DMO events.

PHASE III DUAL USE APPLICATIONS: Military: The gateway permits a broader range of legacy training systems to participate in larger training enterprises. Commercial: Common data exchanges allow a variety of non-industry-standard environments to support distributed training and gaming across virtual and constructive boundaries.

REFERENCES:

1. Air Force Space Command (AFSPC) Instruction 36-283, Space Training System Management, 2 Aug 2004.
2. Air Force Instruction (AFI) 36-2251, Management of Air Force Training Systems, 5 Jun 2009.
3. CJCSI 3500.01G, Joint Training Policy and Guidance For the Armed Forces of the United States, 15 Mar 2012.
4. CDRUSSTRATCOM Memo, Space Modeling and Simulation (M&S) Capability, 28 Jul 2010.
KEYWORDS: Bridge, integrate, SST, DMO, training, integrated live, virtual, and constructive training and exercise

AF141-027 TITLE: Operator Interface for Flexible Control of Automated Sensor Functions


KEY TECHNOLOGY AREA(S): Human systems

OBJECTIVE: Develop/evaluate a multiple unmanned air vehicle interface prototype that increases transparency of automated sensor systems and enables intuitive operator interactions to direct/tailor sensor operations in response to dynamic mission requirements.



DESCRIPTION: Automation is becoming a critical element of intelligence, surveillance, and reconnaissance (ISR) operations. Automatic target recognition in unmanned robotic combat vehicles has been particularly successful. Reliance on automated sensor features will become even more critical with the vision of one operator (or crew) simultaneously supervising multiple unmanned air vehicles (UAVs). Existing interfaces are inadequate for controlling multiple UAV/sensor systems. Moreover, research to date has focused on interfaces for supervisory control of the flight of multiple vehicles, not the associated sensor operations. Advanced sensors support a variety of missions such as real-time identification of forces, finding targets in cluttered environments, and aiding battle damage assessment. The interface to the sensor systems needs to enable the operator to efficiently and flexibly interact with the automation in order to refine the sensor’s automation level, change sensor processing algorithms/parameters, and delegate new tasks and constraints based on the current situation. Specific examples include: a) an operator contending with target signature variations by refining the target/sensor acquisition parameters (e.g., aspect, depression, etc.) and b) an operator needing to rapidly insert a target identification request on-the-fly. The design approach needs to support such interactions in envisioned multi-UAV/sensor applications. In sum, the present effort focuses on developing a flexible, intuitive interface between a UAV operator and electro-optical/infrared (EO/IR) sensors, rather than developing sensor algorithms or hardware. Successful completion of Phase I will require delivery of a technical report that describes the selected representative UAV sensor automation system(s), the associated display/control features that have been designed and evaluated, and anticipated shortfalls, limitations, and tradeoffs of each solution. A feasibility demonstration is desirable, but not required.
Completion of this effort will involve identifying which parameters of a representative automated sensor algorithm/system can be adjusted, as well as which are the strongest contributors to improved UAV mission effectiveness across a variety of conditions. The effort should address, at a minimum, EO/IR sensors for whatever UAV platform/task/mission(s) the proposer chooses as the focus of the interface design/evaluation. (Any simulated or representative sensor system employed in support of this effort should maintain data at an unclassified level. The proposer should not require any government materials, equipment, data, or facilities.) There are also other factors useful to consider in the design of an operator/sensor interface. First, there will be unique challenges in supervising multi-target prosecution and aggregating the findings of sensors across multiple UAVs performing a collaborative mission. Since supervisory control of multiple UAVs will be cognitively demanding, the interface will need to provide an intuitive and rapid means of tailoring the sensors’ functionality, as well as adequate visualization into the sensors’ processing to maintain operator mode awareness of automation state, in addition to general situation awareness. Support tools that assist the operator in authoring and validating sensor modifications may also be useful. Any decision support aid needs to provide information when required by the operator (e.g., projecting the outcome of candidate adjustments, allowing evaluation of alternate courses of action, and identifying potential problems). Moreover, the interface needs to mitigate excessive workload in interacting with the sensor and avoid permitting unsafe or ineffective modifications. Constraints on bandwidth, as well as issues associated with highly automated systems (e.g., sensor reliability and operator complacency), are also important to consider. Another objective is to determine how best to calibrate operator expectations and tolerances for error.
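As one minimal sketch of how operator-directed tailoring with built-in guardrails might be structured (Python; the parameter names, bounds, and automation-level scale are hypothetical, not drawn from any fielded sensor system), the example below accepts operator change requests and applies them only when they fall within predefined constraints, logging rejected modifications for after-action review.

```python
from dataclasses import dataclass, field

# Illustrative acquisition parameters an operator might tailor on the fly.
@dataclass
class SensorParams:
    aspect_deg: float = 0.0
    depression_deg: float = 15.0
    automation_level: int = 3   # e.g., 1 = manual cueing, 5 = fully automatic

# Hypothetical bounds used to block unsafe or ineffective modifications.
CONSTRAINTS = {
    "aspect_deg": (0.0, 360.0),
    "depression_deg": (5.0, 60.0),
    "automation_level": (1, 5),
}

@dataclass
class DelegationInterface:
    params: SensorParams = field(default_factory=SensorParams)
    log: list = field(default_factory=list)

    def request_change(self, name: str, value) -> bool:
        """Apply an operator-requested change only if it stays within constraints."""
        lo, hi = CONSTRAINTS[name]
        if not (lo <= value <= hi):
            self.log.append(f"REJECTED {name}={value} (allowed {lo}-{hi})")
            return False
        setattr(self.params, name, value)
        self.log.append(f"APPLIED {name}={value}")
        return True

if __name__ == "__main__":
    ui = DelegationInterface()
    ui.request_change("depression_deg", 30.0)   # accepted
    ui.request_change("automation_level", 9)    # rejected: outside allowed range
    print(ui.log)
```

In a fuller prototype, the same constraint-checking layer could also project the expected effect of a candidate adjustment before it is committed, supporting the decision-aiding functions described above.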

PHASE I: Design and evaluate candidate display and control features for an operator/sensor interface appropriate for single operator control of multi-unmanned air vehicles/sensor systems. Generate a final report that describes the interface solution(s), evaluation results, and an experimental plan to establish usability improvements in Phase II. A feasibility demonstration is desirable, but not required.

PHASE II: For the best approach identified in Phase I, develop a prototype and perform iterative testing and refinement cycles, culminating in a proof-of-concept interface for multi-UAV/sensor applications. Conduct validation studies of the interface in high-fidelity simulations or operational tests to demonstrate payoffs in interaction flexibility, interaction speed, error reduction, workload management, etc.

PHASE III DUAL USE APPLICATIONS: Military applications include intelligence, surveillance and reconnaissance exploitation; mission planning; and sensor applications in unmanned air, ground and sea systems. Commercial applications include surveillance for homeland security, law enforcement, and industrial security.

REFERENCES:

1. Chen, J.Y.C., Barnes, M.J., and Harper-Sciarini, M. (2009). Supervisory Control of Multiple Robots: Human-Performance Issues and User-Interface Design. IEEE Transactions on Systems, Man, and Cybernetics-Part C: Applications and Reviews, 41(4), 435-454.


2. Miller, C.A., and Parasuraman, R. (2007). Designing for Flexible Interaction between Humans and Automation: Delegation Interfaces for Supervisory Control. Human Factors, 49(1), 57-75.
3. Parasuraman, R., and Wickens, C.D. (2008). Humans: Still Vital After All These Years of Automation. Human Factors, 50(3), 511-520.
4. USAF Chief Scientist (AF/ST) (15 May 2010). Report on Technology Horizons: A Vision for Air Force Science & Technology During 2010-2030, Vol. 1, AF/ST-TR-10-01-PR. Available at: http://www.af.mil/shared/media/document/AFD-100727-053.pdf.
KEYWORDS: unmanned air vehicle, UAV, automation, operator interface, sensor, human factors, situation awareness, target recognition

AF141-028 TITLE: Multimodal-Multidimensional Image Fusion for Morphological and Functional Evaluation of the Retina

KEY TECHNOLOGY AREA(S): Human systems

OBJECTIVE: Develop a software platform capable of integrating information collected over repeated experiments and from disparate sensors to facilitate the measurement of the physiological response of ocular tissue to damaging levels of light.

DESCRIPTION: The technological fields of ocular imaging and visual function testing are rapidly evolving as independent approaches for the investigation of ocular pathophysiology and can be applied to the study of laser-damaged eyes. Imaging modalities such as the fundus camera, scanning laser ophthalmoscope, optical coherence tomography, and hyperspectral, speckle, and fluorescence imagers each reveal different pieces of information that need to be evaluated together in order to form a complete picture of the physiological processes that are modulated within the eye as a result of exposure to noxious levels of light. Furthermore, physiological changes, such as changes in oxygen consumption that can be revealed through combined imaging, need to be spatially and temporally related to loss of visual function in order to fully appreciate the biochemical cascades and neurological consequences of light damage (Muqit, 2011). Additionally, information from visual function testing techniques, such as multifocal pattern electroretinography, needs to be correlated with image data with a high degree of spatial precision. Therefore, the Air Force seeks the development of a software platform that facilitates integration of data from all relevant retinal imaging modalities used in the investigation of retinal laser damage and associated visual function testing. The software application should recognize and automatically import data presented to it in industry-standard formats and also provide a framework for the importation of custom image and functional-test data files.
Customized software that accomplishes some of the desired integration detailed above has been published. For example, images of the macula collected with spectral-domain optical coherence tomography and confocal scanning laser ophthalmoscopy have been used in conjunction with fundus-controlled microperimetry to create functional maps of the macula (Charbel Issa et al., 2010; Troeger et al., 2010). However, greater functionality is needed, including advanced, customizable preprocessing techniques for noise and motion artifact suppression, edge enhancement techniques, and tunable match criteria for image fusion based on retinal features as well as gradients in illumination and reflectance. Furthermore, difference imaging that highlights subtle changes from baseline images should be enabled to allow automated retinal layer segmentation and characterization of laser lesions.
Merging and manipulating image sets of multiple scales, dimensions, magnification factors and total field of view will be very computationally intensive; therefore, the use of parallel processing schemes which leverage CUDA and/or other scalable multiprocessor approaches will be required.
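As a small illustration of the registration and difference-imaging steps described above, the sketch below (Python with NumPy; it assumes two grayscale frames of equal size already resampled to a common scale, a simplification relative to true multi-scale, multi-modal fusion) estimates an integer-pixel translation between a follow-up frame and a baseline frame by phase correlation and then forms a baseline-difference image.

```python
import numpy as np

def estimate_shift(baseline: np.ndarray, moving: np.ndarray):
    """Integer-pixel translation of `moving` relative to `baseline` via phase correlation."""
    f_base = np.fft.fft2(baseline)
    f_move = np.fft.fft2(moving)
    cross = f_move * np.conj(f_base)
    cross /= np.abs(cross) + 1e-12            # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap large indices around to negative shifts.
    if dy > baseline.shape[0] // 2:
        dy -= baseline.shape[0]
    if dx > baseline.shape[1] // 2:
        dx -= baseline.shape[1]
    return dy, dx

def difference_image(baseline: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Align `moving` to `baseline`, then highlight changes from the baseline."""
    dy, dx = estimate_shift(baseline, moving)
    aligned = np.roll(moving, shift=(-dy, -dx), axis=(0, 1))
    return aligned - baseline

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((256, 256))
    followup = np.roll(base, shift=(4, -7), axis=(0, 1))   # simulated follow-up frame
    print(estimate_shift(base, followup))                  # expect approximately (4, -7)
```

A full implementation would extend this to sub-pixel, feature-based, and deformable registration across modalities, and would move the FFT and resampling kernels onto CUDA or another parallel back end to handle the large image sets described above.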

PHASE I: Create a software development plan that identifies I/O formats associated with 2D and 3D imaging modalities and complementary visual function tests; delineates desirable preprocessing capabilities for noise reduction in native images; details technical approaches for image registration and data fusion; defines the graphical user interface for data management; and outlines verification procedures.

PHASE II: This effort will include creation of a complete software specification, beta code generation, execution on a relevant parallel processor platform and documentation with a detailed design description and compilation dependencies for all algorithms and data structures. Finally, a verification test plan will be executed with representative retinal image data (government provided or approved) to ensure all the requirements identified in the software specification have been adequately addressed.

PHASE III DUAL USE APPLICATIONS: Beyond the study of changes in physiological function of the retina resulting from light induced damage, this technology will have widespread application to ophthalmic medical practice in general.

REFERENCES:

1. Issa, Peter Charbel, Eric Troeger, Robert Finger, Frank G. Holz, Robert Wilke, and Hendrik PN Scholl. "Structure-function correlation of the human central retina." PLoS One 5, no. 9 (2010): e12864.


2. Muqit, Mahiul MK, Jonathan Denniss, Vincent Nourrit, George R. Marcellino, David B. Henson, Ingo Schiessl, and Paulo E. Stanga. "Spatial and spectral imaging of retinal laser photocoagulation burns." Investigative Ophthalmology & Visual Science 52, no. 2 (2011): 994-1002.
3. Troeger, E., I. Sliesoraityte, P. Charbel Issa, H. P. N. Scholl, E. Zrenner, and R. Wilke. "An integrated software solution for multi-modal mapping of morphological and functional ocular data." In Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE, pp. 6280-6283. IEEE, 2010.
KEYWORDS: image fusion, image registration, functional imaging, ophthalmic imaging, feature extraction, visual function

AF141-029 TITLE: Mobile Motion Capture for Human Skeletal Modeling in Natural Environments


KEY TECHNOLOGY AREA(S): Human systems

OBJECTIVE: Develop hardware and/or software tools to accurately determine full body segment positions and orientations of a person performing various activities in natural indoor and outdoor environments.

DESCRIPTION: Full-body human motion capture has a variety of important applications within the Air Force and Department of Defense, as well as in numerous commercial industries, such as athletics, health care, and entertainment. AFRL applications include the creation of biofidelic avatar-based training scenarios and the collection of "ground-truth" data for research on human surveillance and tracking methods. Natural settings, including varying terrain, backgrounds, and clothing, are important to AFRL applications, where multiple sensor modalities are used coincidently (i.e., “sensor fusion”). For example, motion capture can serve as the ground truth for synchronized radar and video collections, where the background and clothing worn are critical to replicating in-field video feeds and the outdoor terrain is critical to replicating in-field radar returns.
Current full-body motion capture technology has limitations that inhibit its use in natural or real-world settings. The current gold standard in accuracy is optical motion capture, which relies on line of sight between multiple cameras with light-emitting strobes and retro-reflective markers placed on the subject. Optical systems are, however, cumbersome to move and cannot be used with typical attire. Other motion capture technologies exist, each with its own limitations. Electromagnetic sensors provide accurate orientation and position but are greatly limited by the range of the generated magnetic field. Inertial measurement units (IMUs) increase portability but are limited to orientation measurements only. Markerless motion capture methods focus on fitting a model to a silhouette extracted from 2-D video but are often inaccurate for precise motion analysis. Additional information, such as from a depth sensor (e.g., Microsoft Kinect), can be used to provide some 3-D information.
While advances in these areas have shown promise as a replacement for optical motion capture, to date commercial products that provide sufficient accuracy are still unavailable. AFRL is seeking innovative hardware and software tools that will result in the development of a motion capture technology that is: 1) mobile (can be relatively easily moved to various locations), 2) compatible with a variety of clothing, and 3) not restrictive of natural motion (e.g., untethered/wireless). In particular, we are interested in tools that fuse different hardware modalities. For example, an IMU-based motion capture system might be augmented with markerless motion capture techniques or a local positioning system to create a single motion capture system capable of accurate orientation and position tracking.
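To make the fusion idea concrete, below is a minimal sketch (Python with NumPy; the constant IMU bias, fix rate, and filter gain are illustrative assumptions) of a complementary filter that dead-reckons a body-segment position from IMU-derived velocity between samples and blends in lower-rate, drift-free position fixes from a hypothetical local positioning or markerless tracking source.

```python
import numpy as np

def fuse_position(imu_velocity, external_fix, dt=0.01, alpha=0.8):
    """
    Complementary filter over one body-segment axis.
    imu_velocity : per-sample velocity derived from the IMU (drifts over time)
    external_fix : per-sample absolute position from an external source (drift-free);
                   NaN where no fix is available.
    """
    est = np.zeros(len(imu_velocity))
    pos = external_fix[0] if not np.isnan(external_fix[0]) else 0.0
    for i, v in enumerate(imu_velocity):
        pos += v * dt                        # dead-reckon with the IMU between fixes
        if not np.isnan(external_fix[i]):    # blend in the absolute fix when present
            pos = alpha * pos + (1 - alpha) * external_fix[i]
        est[i] = pos
    return est

if __name__ == "__main__":
    dt = 0.01
    t = np.arange(0, 5, dt)
    true_pos = np.sin(t)                              # simulated segment trajectory
    imu_vel = np.gradient(true_pos, dt) + 0.05        # IMU velocity with a constant bias
    fixes = np.full_like(true_pos, np.nan)
    fixes[::10] = true_pos[::10]                      # 10 Hz drift-free position fixes
    fused = fuse_position(imu_vel, fixes, dt=dt)
    dead_reckoned = true_pos[0] + np.cumsum(imu_vel) * dt
    print(abs(dead_reckoned[-1] - true_pos[-1]), abs(fused[-1] - true_pos[-1]))
```

A practical system would run an orientation-and-position filter (e.g., an extended Kalman filter) per segment in three dimensions and enforce skeletal joint constraints, but the same drift-correction principle applies.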

PHASE I: Develop an initial hardware and software concept design to accurately determine human body segment positions and orientations under clothing in a natural environment. Demonstrate the ability to design and implement the technology through a proof of concept.

PHASE II: Develop and demonstrate a fully functional prototype of the hardware/software system. Integrate all hardware so that it can be controlled from a single software interface. Validate the system’s accuracy through laboratory experiments.

PHASE III DUAL USE APPLICATIONS: The technology will allow the military to collect ground-truth human motion data in realistic operational environments to assist in human threat detection. The technology will provide researchers and doctors with a motion capture system that can record motion in realistic settings.

REFERENCES:

1. Lu TW, O’Connor JJ. Bone position estimation from skin marker co-ordinates using global optimisation with joint constraints. Journal of Biomechanics 1999; 32(2):129-134.


2. Cutti A, Ferrari A, Garofalo P, Raggi M, Cappello A, Ferrari A."Outwalk": a protocol for clinical gait analysis based on inertial and magnetic sensors. Med Biol Eng Comput 2010; 48(1):17-25.
3. Krigslund R, Dosen S, Popovski P, Dideriksen J, Pedersen GF, Farina D. A Novel Technology for Motion Capture Using Passive UHF RFID Tags. IEEE Trans Biomed Eng 2012 (Epub ahead of print).
KEYWORDS: motion capture, motion analysis, human modeling, pose estimation, inverse kinematics, inertial measurement units, electromagnetic tracking, markerless motion capture

AF141-030 TITLE: Synthetic Task Environment for Primary & Secondary Assessment in Trauma Care


KEY TECHNOLOGY AREA(S): Human systems

OBJECTIVE: Develop and demonstrate a synthetic task environment for primary and secondary assessment in trauma care. This includes the capacity for creating/editing scenarios, recording performance and simulation data, and interoperating with external simulations.



DESCRIPTION: Trauma care is an essential skill for medical professionals in the Department of Defense. The most critical aspect of trauma care is primary and secondary assessment of patients at first encounter. These assessments are made during every trauma case, and represent the first opportunity for medical professionals to impact long-term patient prognosis. Patient outcomes depend not only on the effectiveness of these assessments (doing them right), but also on the efficiency (doing them quickly). Providing opportunities to rehearse and hone these skills in a virtual environment will improve outcomes for warfighters on the battlefield by ensuring that the underlying competencies are well-learned and routinized to maximize the likelihood of a positive outcome for patients.
Importantly, it has been shown that medical professionals who are experts in trauma care perform this initial assessment more quickly and accurately than novices, and that training leads to better performance (Holcomb et al., 2002). At the same time, training opportunities are limited for many of the injuries that may be sustained on the battlefield, creating a need that can be partially addressed with simulation (Bruce, Bridges, & Holcomb, 2003). High-fidelity medical mannequins provide a valuable opportunity, but lower-fidelity training options may provide value for rehearsing many of the fundamental skills necessary to perform assessments quickly and accurately. Currently, such lower-fidelity environments are lacking. Lower-fidelity simulators have the potential to increase the efficiency of training by allowing for the rehearsal of critical skills that do not require a high-fidelity environment. That is, such technologies allow for the evaluation and efficient leveraging of a family of resources that provide the right level of fidelity for the training requirements.
The Air Force Research Laboratory is interested in virtual environments that can present medical professionals with trauma scenarios where the critical steps of primary and secondary assessment can be rehearsed. A useful simulation environment must be of sufficient fidelity to provide opportunities for meaningful rehearsal of critical skills and decision making.
To be of value in assessing the development and maintenance of skills, the virtual environment must also support the collection and recording of critical simulation and performance data and events. These data must be of high enough resolution and detail to permit after-action review and assessment, as well as replay of critical events and sequences. The simulation must also have the capacity to interoperate with external software to allow bi-directional communication of data. That is, the simulation must be able both to communicate state/event information to external components and to accept inputs (e.g., actions). The critical aspect of this is to identify a standard communication protocol for passing this information into and out of the environment, preferably using an open-source standard. Finally, the environment should support authoring through an interface that medical subject matter experts can use to create and modify scenarios.
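To illustrate the kind of bi-directional message exchange described above, the sketch below (Python; the JSON envelope, field names, and event types are hypothetical illustrations, not an existing standard) shows how the environment might publish state/event messages and accept external action messages in the same open, text-based format.

```python
import json
import time
import uuid

def make_event(event_type: str, payload: dict) -> str:
    """Encode a simulation state/event message in a simple, open JSON envelope."""
    return json.dumps({
        "msg_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "direction": "sim_to_external",
        "event_type": event_type,      # e.g., "vitals_update", "assessment_step_completed"
        "payload": payload,
    })

def handle_action(raw: str, scenario_state: dict) -> dict:
    """Accept an external action message and apply it to the running scenario."""
    msg = json.loads(raw)
    if msg.get("direction") != "external_to_sim":
        raise ValueError("unexpected message direction")
    # Hypothetical action: an external tutor or test harness injects an intervention.
    if msg["event_type"] == "apply_intervention":
        scenario_state.setdefault("interventions", []).append(msg["payload"])
    return scenario_state

if __name__ == "__main__":
    out = make_event("vitals_update", {"heart_rate": 128, "spo2": 91})
    state = handle_action(json.dumps({
        "direction": "external_to_sim",
        "event_type": "apply_intervention",
        "payload": {"action": "needle_decompression", "site": "left_chest"},
    }), {})
    print(out)
    print(state)
```

Using a self-describing, open format such as JSON over a standard transport keeps the protocol inspectable for after-action review tooling and easy for external components (tutors, assessment engines, other simulations) to adopt.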

PHASE I: The Phase I deliverable will be a prototype system demonstrating the feasibility of a virtual environment that supports primary and secondary trauma assessment. It should include a plan for practical development and deployment and demonstrate appropriate data capture and interoperability.


