DESCRIPTION: As tactical decision making is pushed down to lower echelons, tools to support the development and operations of ground force small unit decision makers remain an open challenge [1]. For example, ground-based warfighters are required to make quick decisions about any number of situations encountered on the battlefield. To inform these decisions, warfighters must learn about these situations and associated skills (e.g. call-for-fire training) and then access and process data during operations. User interfaces and data sources (e.g. tablets) that require taking “eyes off” training or operations limit the warfighter’s ability to learn and respond to changing conditions. Head-mounted displays (HMDs) coupled with emerging Augmented Reality (AR) technologies [2] offer hands-free user interfaces that can provide training aids and situational awareness (SA) in contextual formats that could minimize cognitive load without losing sight of the battlefield.
An AR-based HMD for ground forces is conceptually similar to existing technology used by aviators. For example, Synthetic Vision Systems (SVS) have been shown to improve terrain awareness over existing SA cockpit technologies and to potentially reduce controlled-flight-into-terrain accidents. Notwithstanding those benefits, challenges remain with the use of synthetic vision displays in aviation, particularly in managing the allocation of attention [3,4]. Innovation is needed to take the lessons learned from aviation and apply them to the development of ground-based AR HMDs in a cost-effective (less than $1,500) and Infantry Marine-friendly configuration – unobtrusive and not frustrating for the end user to wear and operate. The focus of the proposed effort is on defining synthetic vision requirement specifications and functional prototypes for next-generation AR-based HMD technologies that provide operational training aids and SA decision support for ground-based forces. The effort seeks advancements in visualizations that mitigate attention and perception limitations (e.g. attention tunneling) that have potential adverse effects on cognitive load. Visualization designs and prototypes should focus on two types of display configurations – static and dynamic. Static information displays are persistent and do not change often regardless of context. Dynamic displays are non-persistent and present information aligned with specific contexts and tasks. Resulting specifications and proofs of concept for more advanced AR-based HMD technologies will contribute to improvements in SVS design guidelines and recommendations.
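To make the static/dynamic distinction concrete, the following minimal sketch (a hypothetical Python data model, chosen only for illustration; an actual effort would likely implement this inside a simulation engine such as Unity) gates each HUD element on the current task context, so static elements persist and dynamic elements appear only when relevant:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class HudElement:
    """A single piece of information rendered on the AR display (hypothetical model)."""
    name: str
    render: Callable[[], str]                           # produces the element's display string
    contexts: List[str] = field(default_factory=list)   # empty list -> static (always shown)

    def visible_in(self, context: str) -> bool:
        # Static elements are persistent; dynamic ones appear only in matching contexts.
        return not self.contexts or context in self.contexts

def compose_hud(elements, context):
    """Return display strings for everything visible in the current context."""
    return [e.render() for e in elements if e.visible_in(context)]

elements = [
    HudElement("heading", lambda: "HDG 045"),                               # static
    HudElement("range_ring", lambda: "RISK RING 600m", ["call_for_fire"]),  # dynamic
]
print(compose_hud(elements, "patrol"))         # ['HDG 045']
print(compose_hud(elements, "call_for_fire"))  # ['HDG 045', 'RISK RING 600m']
```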
Proposals must describe how information visualizations will address psychological and cognitive principles [5] and provide AR examples regarding representation of information [6]. Proposals, however, do not need to develop a complete AR system [2], but must clearly describe how they will investigate and evaluate the proposed visualizations. All developments and experiments should be done with simulation engines that have no or minimal licensing fees for development or run-time execution (e.g. Unity). Training aids and operational SA tools should focus on supporting Marine Corps call-for-fire training and missions. Examples of information to be investigated and visualized include: user heading, bearing, range, target designation information (i.e. symbols, designation box, attack geometry, risk/area effect size (such as range rings)), airspace control measures (i.e. holding areas, battle positions, initial points), and fire support control measures (i.e. no fire areas or restricted fire areas).
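As a concrete example of the listed information, bearing and range from the user to a designated point reduce to simple geodesic arithmetic that an HMD overlay would recompute each frame. The sketch below is illustrative only (hypothetical function name and coordinates; an equirectangular flat-earth approximation is assumed adequate at small-unit engagement distances):

```python
import math

def bearing_and_range(user_lat, user_lon, tgt_lat, tgt_lon):
    """Approximate true bearing (degrees) and ground range (meters) from the
    user to a target, using an equirectangular approximation that is adequate
    over typical call-for-fire distances."""
    lat0 = math.radians((user_lat + tgt_lat) / 2.0)
    dx = math.radians(tgt_lon - user_lon) * math.cos(lat0) * 6371000.0  # east-west meters
    dy = math.radians(tgt_lat - user_lat) * 6371000.0                   # north-south meters
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0                  # azimuth from true north
    return bearing, math.hypot(dx, dy)

# Hypothetical user and target positions for illustration.
brg, rng = bearing_and_range(34.0000, -117.0000, 34.0090, -116.9900)
print(f"target bearing {brg:05.1f} deg, range {rng:.0f} m")
```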
PHASE I: Define requirements and develop mock-ups and/or very early prototypes for advanced SVS information/data visualizations that enhance warfighter decision making and situational awareness as they relate to call-for-fire and close air support activities. Requirements definitions and mock-ups/prototypes must include: a description of the domain and tasks; a determination of the fundamental cognitive theories and principles that will be used to define the SVS visualizations; associated Augmented Reality (AR) approaches or properties (e.g. temporal, physical, and perceptual); a detailed discussion of the design trade-offs as they relate to hardware and software capabilities (e.g. 2D vs. 3D visualization, egocentric vs. allocentric registration); and a description of proposed methods, metrics, and analyses for designing and evaluating the proposed visualizations. In addition, Phase II plans should be provided, to include a list of potential hardware and software that will be used to demonstrate proof-of-concept visualizations, critical technical milestones, and plans for testing and validating the proposed data visualizations. Finally, Phase I should also include the processing and submission of any necessary human subjects research protocols for Phase II research.
PHASE II: Develop, demonstrate, and evaluate proof-of-concept SVS information/data visualizations based on the preliminary design requirements generated in Phase I. Appropriate engineering testing will be performed, along with a critical design review, and the design of the proposed visualizations will be finalized. Phase II deliverables will include: working proof-of-concept visualizations; specifications for their development; and demonstration, validation, and a report of results showing the capability of the visualizations to support warfighter decision making and situational awareness as they relate to call for fire and close air support.
PHASE III DUAL USE APPLICATIONS: The performer will be expected to support the Marine Corps in transitioning the requirements and associated software products to support the development of Synthetic Vision System (SVS) training aids and situational awareness (SA) visualizations. The software products are expected to integrate with and/or support Marine Corps training simulation systems (e.g. Augmented Immersive Team Trainer), which will require certifying and qualifying the system for Marine Corps use, delivering a Marine Corps design manual for the product, and providing Marine Corps system specification materials. Private Sector Commercial Potential: From a commercial perspective, the resulting design methods, principles, and proof-of-concept visualizations will be applicable to high-risk/high-demand work domains with large amounts of integrated information demands, such as law enforcement, emergency response, healthcare, and manufacturing. It is anticipated that the general findings of this effort will contribute broadly to our understanding of the design of AR information and data visualizations and will have broad implications for the implementation of AR interfaces outside of the military.
REFERENCES:
1. Naval Research Advisory Committee (2009). Immersive Simulation for Marine Corps Small Unit Training. Retrieved 6 June 2016 from http://www.nrac.navy.mil/docs/2009_rpt_Immersive_Sim.pdf
2. Schaffer, R., Cullen, S., Cerritelli, L., Kumar, R., Samarasekera, S., Sizintsev, M., Oskiper, T., & Branzoi, V. (2015). Mobile Augmented Reality for Force-on-Force Training. In Proceedings of the Interservice/Industry Training, Simulation & Education Conference. Arlington, VA: National Training and Simulation Association.
3. Bailey, R. E. (2012). Awareness and detection of traffic and obstacles using synthetic and enhanced vision systems. Retrieved 6 June 2016 from http://ntrs.nasa.gov/search.jsp?R=20120001338
4. Wickens, C. D., & Alexander, A. L. (2009). Attentional tunneling and task management in synthetic vision displays. The International Journal of Aviation Psychology, 19(2), 182-199.
5. Bennett, K. B., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors: The Journal of the Human Factors and Ergonomics Society, 34(5), 513-533.
6. Tönnis, M., Plecher, D. A., & Klinker, G. (2013). Representing information – Classifying the Augmented Reality presentation space. Computers & Graphics, 37(8), 997-1011.
KEYWORDS: Augmented Reality (AR); Heads-up display (HUD); Helmet-mounted display (HMD); Decision Making; Synthetic Vision System (SVS); Attention
Questions may also be submitted through the DoD SBIR/STTR SITIS website.
N171-092
TITLE: Pedagogy Models for Training in Mixed Reality Learning Environments
TECHNOLOGY AREA(S): Human Systems
ACQUISITION PROGRAM: Air Warfare Training Development Command; Live Virtual Constructive (CM-FNC)
OBJECTIVE: Develop, demonstrate, and validate mixed reality (MR) technology to improve training of maintenance procedures and troubleshooting skills for measured improvements in learning and transfer to workplace performance.
DESCRIPTION: Naval experts in ship maintenance are retiring, with no mechanism in place for the transfer of knowledge or interactive training. Currently, the expertise for maintenance is held by subject matter experts (SMEs) and delivered aboard ship as on-the-job training (OJT), and it will be lost once these SMEs retire. This knowledge and these skills will be captured in the cognitive task analysis (CTA) that is part of the design process for mixed reality applications, and codified in software via a functional description of the architecture for the mixed reality application. The result of the CTA will be a report containing the functional description of the software that drives the mixed reality application. Mixed reality offers an interactive way to capture, store, and disseminate knowledge and to train new skills without intensive instructor interaction or expensive on-the-job training. Successful use of MR technology, however, requires a pedagogy comprising strategies that direct and optimize MR and virtual reality (VR) in training. To provide self-directed learning and training in interactive MR and VR environments, new pedagogical capabilities are needed: (1) to assess what students learn in these environments; (2) to offer feedback and practice of needed skills; (3) to assess rates of skill and knowledge acquisition; (4) to motivate students’ self-directed interaction behaviors; and (5) to obtain valid measures of training effectiveness expressed in terms of job performance.
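Capabilities (1) and (3), assessing what a student knows and the rate at which knowledge is acquired, are often modeled in the training literature with Bayesian Knowledge Tracing. The sketch below is one such illustrative formulation, offered only as an example of the kind of pedagogical model sought, not a required approach; the parameter values are hypothetical:

```python
def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One Bayesian Knowledge Tracing step: condition the mastery estimate on
    the observed response, then apply the chance of learning on this attempt.
    Parameter values here are illustrative, not calibrated."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    return posterior + (1 - posterior) * p_learn

p = 0.1  # prior probability the trainee has already mastered the skill
for outcome in [False, True, True, True]:  # one observed practice sequence
    p = bkt_update(p, outcome)
    print(f"estimated mastery: {p:.2f}")   # rising estimates reflect skill acquisition rate
```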
Full implementation of mixed reality for maintenance and troubleshooting applications could be used to introduce trainees to critical systems and tasks in a number of domains, and to provide interactive job-aiding in the field. The Naval Education and Training Command, as part of Sailor 2025, envisions interactive training with a large number of ship systems, freed from the expense of realistic mock-ups. This SBIR would provide much-needed pedagogical models for mixed reality training that could alter how naval maintenance training is conducted now and in the future.
PHASE I: Describe a plan to develop and validate pedagogical models for training maintenance procedures and troubleshooting in mixed reality (MR) environments. Maintenance and troubleshooting procedures are contained in the technical manuals for various Navy equipment, which will be provided as GFE to the performer. The end product of this phase is a report that describes the approach to be used to develop and validate these pedagogical models, including the metrics that will be used to determine their effectiveness and the guidelines and heuristics that will be used to design the models. The report will also describe a Phase II approach to achieve the desired result/product and key component technological milestones. The Phase I option should also include a plan for the processing and submission of any necessary human subjects research protocols for Phase II research.
PHASE II: Develop a prototype based on the Phase I effort and demonstrate a proof of concept for training maintenance procedures and troubleshooting in mixed reality (MR) environments. The demonstration will be used to select the best approach from the Phase I effort for incorporation into the software tool. The performer shall provide a data collection plan that includes the number and type of subjects, control conditions, assessment instruments, and the analysis plan.
PHASE III DUAL USE APPLICATIONS: The small business will be expected to support the Navy in transitioning a set of software tools for its intended use. The small business will be expected to develop a plan to transition and commercialize the software and its associated guidelines and principles. Private Sector Commercial Potential: This SBIR would provide much-needed pedagogical models for mixed reality training that could alter how naval maintenance training is conducted now and in the future. In addition to the military market, the technology could have broad applicability in technical training and education, consumer products, and for developers of augmented and virtual reality systems.
REFERENCES:
1. Kirkpatrick, D. L. (1994). Evaluating Training Programs: The Four Levels. Berrett-Koehler, San Francisco.
2. Phillips, J. J. (2003). Return on Investment and Performance Improvement Programs, 2nd Edition. Butterworth-Heinemann, Burlington, MA.
3. Stanney, K., Samman, S., Reeves, L., Hale, K., Buff, W., Bowers, C., Goldiez, B., Nicholson, D., & Lackey, S. (2004). A paradigm shift in interactive computing: deriving multimodal design principles from behavioral and neurological foundations. International Journal of Human-Computer Interaction, 17(2), 229-257.
4. Perez, R. S. (2013). Foreword. In Special Issue of Military Medicine: International Journal of AMSUS (Guest Editors: Harold F. O'Neil, Kevin Kunkler, Karl E. Friedl, & R. S. Perez), 178(10), 16-36.
5. Skinner et al. (2010). Chapter in Special Issue of Military Medicine: International Journal of AMSUS (Guest Editors: Harold F. O'Neil, Kevin Kunkler, Karl E. Friedl, & R. S. Perez), 178(10), 16-36.
KEYWORDS: Mixed Reality; Immersive Environments; Training; Augmented Reality; Virtual Reality; Design Guidelines
Questions may also be submitted through the DoD SBIR/STTR SITIS website.
N171-093
TITLE: Theater Anti-Submarine Warfare Contextual Reasoning
TECHNOLOGY AREA(S): Battlespace, Information Systems, Sensors
ACQUISITION PROGRAM: Theater Anti-Submarine Warfare Battle Management Tool FNC
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with section 5.4.c.(8) of the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.
OBJECTIVE: Develop and demonstrate an expert system capable of applying contextual clues from theater anti-submarine warfare data sources to evaluate the probable current and near-term actions of threat submarines.
DESCRIPTION: In order to effectively plan theater anti-submarine warfare missions, it is necessary to infer future actions and intentions from sparse contact data informed by the current state of the world. Factors such as weather, acoustic conditions, geographic or oceanographic features, recent hostilities or tensions, or third-party activities can color the conclusions that can be drawn from a set of contact observations. These issues are complicated by occasional false or poorly defined detections that can reduce confidence or create ambiguity in the data reports. The ideal solution is a decision engine that can be used to inform Bayesian tracking solutions in a theater Anti-Submarine Warfare (ASW) geo-situational picture. No automatic tools currently exist for assessing theater-wide ASW situations; however, the state of the art in information fusion includes numerous expert systems that predict future conditions based on context clues and historical trends. Applications of these techniques include computer virus detection, medical diagnostics, and motion prediction for driverless automobiles. Developers should apply or invent similar technology that results in an expert system software solution capable of using contextual clues to refine or constrain the estimated state of a submarine target. This software will be delivered as part of a larger multi-platform search planning decision aid, but should be capable of being developed and tested independently prior to integration.
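As a minimal sketch of how contextual clues might inform a Bayesian tracking solution (a particle-filter formulation with an invented bathymetry model and weighting scheme; not the required decision engine), the fragment below suppresses target-state hypotheses that violate a depth constraint, shrinking the effective area of uncertainty between sparse contact reports:

```python
import numpy as np

rng = np.random.default_rng(0)

# Particle cloud of hypothesized target states: columns are x (km), y (km).
particles = rng.uniform(0, 100, size=(5000, 2))
weights = np.full(len(particles), 1.0 / len(particles))

def depth_at(xy):
    """Stand-in bathymetry model for illustration: depth (m) increases eastward."""
    return 20.0 + 8.0 * xy[:, 0]

# Contextual constraint: a submerged transit is implausible in water shallower
# than 50 m, so down-weight those hypotheses (threshold is notional).
plausible = depth_at(particles) > 50.0
weights *= np.where(plausible, 1.0, 1e-6)
weights /= weights.sum()

# Effective area of uncertainty shrinks as implausible regions lose probability mass.
ess = 1.0 / np.sum(weights**2)  # effective sample size
print(f"effective sample size after bathymetric gating: {ess:.0f} of {len(particles)}")
```

Analogous reweighting could, in principle, encode weather, historical movement patterns, or mission-objective priors between contact reports.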
PHASE I: Develop a concept for a context-sensitive expert system software application that places meaningful bounds on expected target states over wide areas, and define the characteristic inputs required to achieve it. Phase I should demonstrate that the approach can reduce the area of uncertainty of sparsely observed targets, i.e. incomplete target states measured four or fewer times per day. Contextual information including (but not limited to) weather predictions, bathymetric limitations, historical movements, expected mission objectives, and the state of hostilities should be considered. A fictional theater of war should be assumed, but with environmental conditions consistent with some real-world area. The Phase I report should fully describe the approach, its method of verification and validation, and the expected effects on a target track solution. Required Phase I deliverables will include, at a minimum, mid-term and final progress reports and a final brief for FNC stakeholders.
PHASE II: Develop, demonstrate, and test a prototype expert system for the theater anti-submarine warfare conditions resulting from the Phase I effort. The prototype should provide proof of concept for the use of contextual information for theater ASW. The prototype should be designed such that real-world theater characteristics can be easily provided as model inputs once the demonstration using a fictional theater problem is complete. Required Phase II deliverables will include, at a minimum, mid-term and final progress reports, a Phase II brief for FNC stakeholders, and the prototype software, scripts, and source code.
PHASE III DUAL USE APPLICATIONS: The results of a successful Phase II effort will be offered to the FNC program for inclusion in the Theater ASW Battle Management Tool development effort. The final state of the software application will be an integral component of the FNC software's Theater ASW Situation Assessment functions and should be implemented using Software-as-a-Service (SaaS) development protocols for the Consolidated Afloat Networks and Enterprise Services (CANES) environment. Integration and testing will be the responsibility of the FNC program, with the assistance of the developer. Additionally, the results of Phase III work may be offered to an acquisition program office by means of pre-planned product improvement (P3I) mechanisms such as the Advanced Capability Build process. Private Sector Commercial Potential: Technology developed under this topic will be broadly applicable to area search problems such as maritime domain awareness (homeland defense), air traffic control, and vehicle tracking.
REFERENCES:
1. White, “What Role can a Theater Anti-submarine Warfare Commander Serve in the New Maritime Strategy?,” Naval War College, Joint Military Operations Dept., 23 October 2006.
2. Burgess, “Awfully Slow Warfare,” Sea Power 48, no. 4 (April 2005), pp. 12-14.
3. Benedict, “Third World Submarine Developments.” The Submarine Review, October 1990, 53-54.
4. Shesham, “Integrating Expert System and Geographic Information Systems for Spatial Decision Making,” Western Kentucky University, Masters Thesis, December 2012.
5. Eldrandaly, “Expert Systems, GIS and Spatial Decision Making: Current Practices and New Trends,” Chapter 8, Expert Systems Research Trends, Editor: A.R. Tyler, pp. 207-228.
6. Clark, “The Emerging Era in Undersea Warfare,” Center for Strategic and Budgetary Analysis, January 22, 2015, http://csbaonline.org/publications/2015/01/undersea-warfare/
7. Glynn, “Information Management in Next Generation Anti-Submarine Warfare,” Center for International Maritime Security, June 1 2016, http://cimsec.org/information-management-next-generation-anti-submarine-warfar/25614
8. Northrop Grumman, “CANES: An Open Systems C4I Networks Design,” http://www.northropgrumman.com/Capabilities/CANES/Documents/Canes_Supplement_Defense_Daily.pdf
KEYWORDS: Theater Anti-Submarine Warfare, Ontology, Planning, Tactical Decision Aid
Questions may also be submitted through the DoD SBIR/STTR SITIS website.
TECHNOLOGY AREA(S): Air Platform, Sensors
ACQUISITION PROGRAM: MQ-8C, MQ-4C and P-8A. Directly supports FNCs STK-FY13-01 Long Range RF Find, Fix, and Identify, and STK-FY18-03 Spectrum Maneuverable Radar Technology.
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with section 5.4.c.(8) of the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.
OBJECTIVE: Develop innovative algorithmic approaches, and ultimately radar processing software, to generate 3-D inverse synthetic aperture radar (ISAR) imagery, and develop feature extraction algorithms that operate on the 3-D ISAR imagery formed to support vessel classification.
DESCRIPTION: Currently, naval maritime surveillance operations in congested littoral environments present airborne sensor operators with hundreds to possibly thousands of vessels under radar track. Classifying, identifying, and determining the intent of vessels in these environments quickly overloads sensor operators, and situational awareness suffers. Current state-of-the-art ISAR-based classification tools utilize ISAR image enhancement, automatic image selection, adaptive clutter suppression, and segmentation to facilitate the dimensional feature extraction necessary for classification against a comprehensive database of combatants, non-combatants, and selected non-naval vessels. However, classification performance suffers when classes of vessels have physical dimensions very similar to each other. In such cases additional features are needed to improve classification performance. Additional features not available in conventional plan- and profile-view ISAR may be available in 3-D ISAR imagery. One promising approach to generating 3-D imagery is to employ interferometric ISAR (InISAR). Typically, two displaced receivers are used to retrieve information from the phase difference of a pair of ISAR images. If the host radar is suitable, interferometry makes it possible to obtain a 3-D reconstruction of a target with simple and realistic motion. The first step in the InISAR signal processing chain is to generate an ISAR image for each of the receiving antennas. Those images are then combined to obtain an interferogram from which the interferometric phase is retrieved. The last step is to convert the interferometric phase into a representative 3-D ISAR image.
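The phase-to-height step of the chain described above can be illustrated numerically. The sketch below uses a synthetic single-scatterer image pair and notional geometry (the wavelength, baseline, and range values are assumed for illustration; real InISAR processing must also handle image co-registration, phase unwrapping, and motion compensation):

```python
import numpy as np

wavelength = 0.03   # m (X-band, assumed)
baseline = 1.0      # m vertical separation between the two receive antennas (assumed)
range_m = 20000.0   # m slant range to the target (assumed)

# Synthetic complex ISAR images: one dominant scatterer at a true height of 12 m.
true_height = 12.0
img1 = np.zeros((64, 64), dtype=complex)
img2 = np.zeros_like(img1)
img1[32, 40] = 1.0
# The second receiver sees an extra phase proportional to scatterer height.
img2[32, 40] = np.exp(1j * 2 * np.pi * baseline * true_height / (wavelength * range_m))

# Interferogram: conjugate product of the co-registered image pair.
interferogram = img1 * np.conj(img2)
phase = np.angle(interferogram[32, 40])

# Invert the phase-to-height relationship to recover the third dimension.
height = -phase * wavelength * range_m / (2 * np.pi * baseline)
print(f"recovered scatterer height: {height:.1f} m")
```

Repeating this inversion per resolution cell yields the height map that, combined with the plan and profile views, forms the representative 3-D ISAR image.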