Air Force 14.1 Small Business Innovation Research (SBIR) Proposal Submission Instructions




PHASE II: Accomplish software development. Using techniques analyzed in Phase I, further develop and mature the research code into a standalone software module that is easily integrable with other systems. Integrate and test the software at the Maui Space Surveillance Site.

PHASE III DUAL USE APPLICATIONS: Military Application: Software to support the Joint Space Operations Center (JSpOC). Commercial Application: Similar tasking support for government and commercial space operations.

REFERENCES:

1. “Sensor-Scheduling Simulation of Disparate Sensors for Space Situational Awareness,” Hobson, T.A., and Clarkson, I.V.L., AMOS Conference Proceedings (2011).


2. “Covariance-Based Network Tasking of Optical Sensors,” Hill, K., Sydney, P., Hamada, K., Cortez, R., Luu, K., Jah, M., Schumacher, P.W., Coulman, M., Houchard, J., and Naho’olewa, D., Proc. AAS/AIAA Space Flight Mechanics Meeting (February 2010).
3. “Covariance analysis for deep-space satellites with radar and optical tracking data,” Miller, J.G., AAS 05-314, AAS/AIAA Astrodynamics Specialists Conference, Lake Tahoe, CA (August 2005).
KEYWORDS: tasking, space surveillance, search, tasked, catalog maintenance, threat detection, persistent tactical monitoring

AF141-016 TITLE: Persistent Wide Field Space Surveillance


KEY TECHNOLOGY AREA(S): Sensors
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with section 5.4.c.(8) of the solicitation and within the AF Component-specific instructions. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws. Please direct questions to the AF SBIR/STTR Contracting Officer, Ms. Kristina Croake, kristina.croake@us.af.mil.

OBJECTIVE: Develop and demonstrate an innovative, scalable approach to space object detection that permits the detection of dim orbiting objects using a very wide field of view, non-articulated sensor system architecture.



DESCRIPTION: Conceive an approach to this challenging field of dim-object detection/wide field of view (FOV) surveillance that exploits recent developments in sensors and data processing. Preliminary mathematical models of the proposed techniques indicate significant performance improvements. Trade studies will highlight the cost of various FOVs and of system detection and track-formation sensitivity. Special consideration will be given to systems that eliminate the need to mechanically point and track and that are highly scalable in the following areas: increasing the system’s limiting magnitude for detection and track formation (detecting smaller and dimmer objects) and increasing the area searched in a given time. Systems that exploit the rapidly growing “off-the-shelf” technology of data bus architectures, networking, and electro-optic sensors may be especially attractive for affordably realizable systems. A state-of-the-art improvement will cover greater than 1,000 square degrees per hour, detect objects of at least 18th visual magnitude, and operate at AFRL’s Starfire Optical Range (SOR), observing GEO or deep-space objects above 45 degrees elevation under typical sky conditions. The system should be designed to a cost goal of less than $1 million for a single system and should be easily installable at the SOR.
Start by providing a mathematical model of the system design, with technology-related assumptions examined in depth and validated. Develop simulation tools based on this mathematical model that provide performance predictions for various candidate sub-configurations. These simulation tools will predict performance for a large range of object optical cross sections, search patterns, and site "seeing" conditions in order to demonstrate the candidate system's range of performance.
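As an illustration of the kind of performance prediction such a simulation tool would produce, the following minimal Python sketch estimates limiting visual magnitude and step-stare search rate from assumed aperture, throughput, sky-background, seeing, and read-noise values. All numbers are placeholder assumptions for a trade study, not topic requirements.

    import math

    # Back-of-envelope performance model for a wide-field space surveillance
    # sensor. Every parameter below is an illustrative assumption.
    APERTURE_CM = 25.0        # entrance aperture diameter
    THROUGHPUT_QE = 0.5       # optical throughput x detector quantum efficiency
    FOV_DEG = 6.0             # square field of view, degrees on a side
    EXPOSURE_S = 2.0          # single-frame exposure
    OVERHEAD_S = 0.5          # readout/processing overhead per frame
    SKY_MAG_ARCSEC2 = 21.0    # V-band sky brightness, mag/arcsec^2
    SEEING_ARCSEC = 2.0       # blur diameter; sets pixels under the PSF
    PIXEL_ARCSEC = 2.0        # pixel scale
    READ_NOISE_E = 5.0        # read noise, electrons rms
    PHOTONS_V0 = 1.0e6        # approx. photons/s/cm^2 from a V=0 source

    def signal_e(mag):
        """Detected electrons from a point source of visual magnitude mag."""
        area_cm2 = math.pi * (APERTURE_CM / 2.0) ** 2
        return PHOTONS_V0 * 10 ** (-0.4 * mag) * area_cm2 * THROUGHPUT_QE * EXPOSURE_S

    def snr(mag):
        """Point-source SNR against sky background and read noise."""
        npix = max(1.0, (SEEING_ARCSEC / PIXEL_ARCSEC) ** 2)
        sky_e = signal_e(SKY_MAG_ARCSEC2) * PIXEL_ARCSEC ** 2 * npix
        s = signal_e(mag)
        return s / math.sqrt(s + sky_e + npix * READ_NOISE_E ** 2)

    mag = 10.0
    while snr(mag) >= 5.0:          # scan for the faintest SNR>=5 detection
        mag += 0.1
    print(f"limiting magnitude ~ {mag - 0.1:.1f} (goal: >= 18)")

    rate = FOV_DEG ** 2 / (EXPOSURE_S + OVERHEAD_S) * 3600.0
    print(f"step-stare coverage ~ {rate:.0f} deg^2/hr (goal: > 1000)")

With these assumed values the model lands near the topic's 18th-magnitude goal; a real trade study would sweep aperture, FOV, and frame time against cost.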
Perform small-scale laboratory tests of the concept to demonstrate key required architecture capabilities: the ability to acquire, process, store, and manipulate data. Laboratory testing will be followed by actual field testing of a small-scale system architecture. Field experiments will be structured to obtain data to validate the system architecture simulation/model.
The Phase I work plan should outline the proposed design and performance, impacts to nominal operations, scalability, estimated cost, and implementation schedule. Provide monthly reviews.
Phase II will utilize data obtained during Phase I to further refine the models and simulation tools of the system concept. Additional simulation studies will be performed to refine performance predictions for the candidate system. The Phase I small-scale system architecture will be enlarged to more closely represent a potential deployable system. Field testing will be performed to verify scaling predictions. Following successful demonstration of system performance within predictions, the system will be deployed to the SOR for simultaneous collection with existing SOR narrow field of view sensor systems. The goal of this testing will be to demonstrate the ability to use the wide field of view system to detect dim objects and immediately cue SOR high-resolution systems for further, time-critical detailed observations. Phase II work will also include the development and demonstration of prototype user controls and reporting systems, and the development of modifications and procedures to support a deployable operational system.
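To make the cueing handoff concrete, here is a minimal, hypothetical Python sketch of converting a wide-FOV detection into an azimuth/elevation cue for a narrow-FOV sensor. The message fields and plate-scale value are illustrative assumptions, not an interface specification.

    import math
    import time
    from dataclasses import dataclass

    PLATE_SCALE_DEG_PER_PX = 6.0 / 4096.0   # assumed: 6-deg FOV across a 4k focal plane

    @dataclass
    class Detection:
        px: float            # centroid offset from boresight, pixels
        py: float
        epoch_unix: float    # detection time

    @dataclass
    class Cue:
        az_deg: float
        el_deg: float
        epoch_unix: float
        priority: str

    def detection_to_cue(det, boresight_az_deg, boresight_el_deg):
        """Small-angle conversion of a pixel detection into an az/el cue."""
        el_off = det.py * PLATE_SCALE_DEG_PER_PX
        # cross-elevation offset maps to azimuth with a 1/cos(el) factor
        az_off = det.px * PLATE_SCALE_DEG_PER_PX / math.cos(math.radians(boresight_el_deg))
        return Cue(boresight_az_deg + az_off, boresight_el_deg + el_off,
                   det.epoch_unix, "time-critical")

    cue = detection_to_cue(Detection(812.0, -143.0, time.time()), 180.0, 60.0)
    print(cue)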

PHASE I: Deliver a mathematical model of the system design, with technology-related assumptions examined in depth and validated. Develop simulation tools based on this mathematical model that provide performance predictions for various candidate sub-configurations.

PHASE II: Demonstrate the ability to use the wide field of view system to detect dim objects and immediately cue SOR high-resolution systems for further, time-critical detailed observations. Phase II work will also include the development and demonstration of prototype user controls and reporting systems, and the development of modifications and procedures to support a deployable operational system.

PHASE III DUAL USE APPLICATIONS: Phase III work will build on "lessons learned" from Phases I and II to fully develop an operational scaled system prototype that can be easily deployed to one of USSTRATCOM's electro-optical surveillance sites.

REFERENCES:

1. Tony Hallas, "Chasing The Curve," Astronomy, Kalmbach Publishing Co., Apr 2012.


2. Whiteley, "Study of Potential Spacecraft Target Near-Earth Asteroids," AFRL-SR-AR-TR-06-0042.
3. Hawkes (ed.), Advances in Imaging and Electron Physics, Vol. 145, CEMES-CNRS, Elsevier.
KEYWORDS: wide field space surveillance, space situational awareness, unconventional telescope, persistent, timely
AF141-019 TITLE: Battlefield Airmen (BA) Mission Recorder
KEY TECHNOLOGY AREA(S): Infosystems
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with section 5.4.c.(8) of the solicitation and within the AF Component-specific instructions. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws. Please direct questions to the AF SBIR/STTR Contracting Officer, Ms. Kristina Croake, kristina.croake@us.af.mil.

OBJECTIVE: Develop and demonstrate a device that can record essential information from special operations missions in order to perform mission analysis/debriefing, enhance operational procedures, and improve BA training and mission rehearsals.



DESCRIPTION: Battlefield Airmen missions range from Close Air Support, to field surveys, to direct combat. These special operations are typically high stress and rely on the individual operator's memory of actual events for follow-on actions, like mission debriefing, tactics improvements, training, and future mission rehearsals. Unfortunately, operator memory can’t always be relied upon due to detrimental factors such as the fast-paced nature and stressful dynamics of special ops missions; war casualties; the vast amount of detailed information to be remembered (time, location, who said what to whom, etc.); time lag after conclusion of operations (resulting in inability to remember key details); and individual operators’ mental capacity to accurately recollect events.
The purpose of this effort is to overcome those deficiencies by developing a robust capability that captures critical elements of voice (radio communications) and location of each BA action. The mission recording technology would serve a very similar function to that of an aircraft “black box.”
Mission recording technology is used regularly in aircraft and other systems to record vital communications and other system information. This recorded data is valuable for post-mission debriefing, to develop training scenarios, and to review missions and improve operational tactics, techniques, and procedures (TTP). BA warfighters have been at the forefront of recent operations in Iraq and Afghanistan. Those operations revealed a need for a capability to record key aspects of their missions, yet that capability does not exist.
There are many different types of recorders in industry, but none has the functionality needed for BA missions. Current state-of-the-art (SoA) recorders do have the capability to record audio through a variety of input methods and recording features; however, they lack integration with operational radios, as well as GPS time and location stamping.
What is needed is a capability which can interface with the equipment carried by BA operators in order to capture incoming and outgoing radio communications. If the technology solution is carried by the operator, it must plug in-line with the operator's radio and headset through the standard jack. In other words, such a solution would connect to the radio and provide the same standard jack as an output for communication to a push-to-talk and headset. If the technology solution is not carried by the operator, it must function from a secure environment and be easily accessible to operators when needed. Audio must be recorded in a common format, such as WAV or MP3, for easy compatibility with existing playback equipment. In addition to radio voice communications, the device should record the geographic position (GPS coordinates) and time of individual radio voice segments.
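One way to picture the per-segment metadata requirement is a sidecar index written alongside a standard WAV file. The Python sketch below is purely illustrative; the field names and file layout are assumptions, not a specified format.

    import json
    import wave
    from dataclasses import dataclass, asdict

    @dataclass
    class VoiceSegment:
        start_s: float       # offset into the WAV file, seconds
        end_s: float
        direction: str       # "incoming" or "outgoing"
        lat: float           # GPS fix at segment start
        lon: float
        utc: str             # ISO-8601 time stamp

    def write_recording(path_wav, path_idx, pcm_bytes, segments,
                        rate=8000, width=2, channels=1):
        """Write mono PCM audio as a WAV file plus a JSON segment index."""
        with wave.open(path_wav, "wb") as w:
            w.setnchannels(channels)
            w.setsampwidth(width)
            w.setframerate(rate)
            w.writeframes(pcm_bytes)
        with open(path_idx, "w") as f:
            json.dump([asdict(s) for s in segments], f, indent=2)

    segments = [VoiceSegment(0.0, 4.2, "outgoing", 34.906, -117.883,
                             "2014-01-15T18:03:22Z")]
    write_recording("mission.wav", "mission.json", b"\x00\x00" * 8000, segments)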
The recorder must be radio and headset agnostic (if carried by the operator), be simple to use, encrypt the data, include a zeroize function (to quickly and permanently erase data), and must not require external processing in order to operate. It must also function under a variety of environmental conditions such as rain, snow, high humidity, and ambient temperatures ranging from 0 degrees F to 110 degrees F. The recorder should require low power and, if carried by the operator, draw it from the radio, i.e., no separate batteries for the recorder. Currently, operational radios have the ability to support power draw by external devices. Playback/mission analysis would not occur on the recorder, but on a computer with the proper software. Data encryption shall use DoD-approved methods to protect SECRET and below information. Use of a commercial standard (like AES) will be acceptable for any Phase I prototypes developed. Future versions of the Mission Recorder should allow the option of recording two radios through one technology solution.
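Since the description permits a commercial standard such as AES for Phase I prototypes, the following sketch shows AES-based encryption plus a zeroize step using the open-source Python `cryptography` package (its Fernet recipe wraps AES-128-CBC with an HMAC). The key handling shown is illustrative only and is not a DoD-approved method.

    import os
    from cryptography.fernet import Fernet

    def encrypt_file(path_plain, path_enc, key):
        """Encrypt a recording; Fernet wraps AES-128-CBC with an HMAC check."""
        with open(path_plain, "rb") as f:
            token = Fernet(key).encrypt(f.read())
        with open(path_enc, "wb") as f:
            f.write(token)

    def zeroize(path):
        """Overwrite with zeros, then delete (best-effort on flash media)."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())
        os.remove(path)

    # Demo with a stand-in recording; a real device would provision the key
    # rather than generating it locally.
    with open("mission.wav", "wb") as f:
        f.write(b"\x00" * 1024)
    key = Fernet.generate_key()
    encrypt_file("mission.wav", "mission.wav.enc", key)
    zeroize("mission.wav")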

PHASE I: Define the system requirements. Identify appropriate components to create a system design. Analyze the software necessary to enable the system to work. Propose a design to be built and demonstrated during Phase II. Demonstration of laboratory breadboard prototype hardware during Phase I is highly desired, but not required.

PHASE II: Build and demonstrate the recorder in a relevant environment. Recorder must meet requirements as stated in description above. Additionally, design should show significant consideration for human factors, including, but not limited to: size, weight, power, minimal cable management, and ambidexterity. Expected Technology Readiness Level of the recorder by the end of Phase II is TRL 6, and preferably TRL 7.

PHASE III DUAL USE APPLICATIONS: Military Application: Special tactics missions; anti-terrorist actions; urban warfare; team reconnaissance. Commercial Application: Law enforcement; homeland security; fire-fighting; hostage-rescue; fast-paced team activities that could benefit from forensic analysis of what transpired.
REFERENCES:

1. Battlefield Air Operations Kit, Increment II Capabilities Development Document, 19 November 2009.


2. Guardian Angels, Initial Capabilities Document, March 2010.
3. Battlefield Airmen, ISR Journal, August 2004.

KEYWORDS: Battlefield Airmen (BA), Battlefield Air Operations (BAO), BAO Kit, recorder

AF141-020 TITLE: Improved Computerized Ground Forces for Close Air Support Training
KEY TECHNOLOGY AREA(S): Human systems
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with section 5.4.c.(8) of the solicitation and within the AF Component-specific instructions. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws. Please direct questions to the AF SBIR/STTR Contracting Officer, Ms. Kristina Croake, kristina.croake@us.af.mil.
OBJECTIVE: Develop and demonstrate intelligent agents that interact within a computer generated forces (CGF) suite for use as a training aid in a simulated Joint Terminal Attack Controller (JTAC) training environment.

DESCRIPTION: In current military operations, our missions have become more complex and dynamic than in the past, and the requirements for training and simulation have accelerated accordingly. Our warfighters work with many different assets and therefore should have the opportunity to train with the same assets they go to war with. Close Air Support (CAS) is one of the most challenging pursuits in combat today. Training for CAS in a simulator is difficult due to the number of personnel required to attain a quality training experience. CAS players may include fast jets, unmanned aircraft, ground controllers, and operations centers. Given the current ops tempo and the availability of joint and distributed training opportunities, intelligent agents that work within CGFs would be tremendously valuable in CAS training scenarios. Even if the personnel resources were available for all trainees to partake in an exercise, it is difficult to attain quality training for all players.


The current state-of-the-art simulator available for JTAC training is the Indirect Fire Forward Observer Trainer (I-FACT™). To use this simulation system, a very large non-training audience is required so that a robust and dynamic training environment can be replicated. This presents a technology gap that, if filled, will make training both more efficient and effective. To fill this gap, intelligent agents that work within a simulated environment using published simulation standards need to be developed.
Presently, the state of the art for agents and intelligent role players exists in domains outside of military simulation. These agents are not programmed to interact using simulation standards. Furthermore, present state-of-the-art intelligent agents do not behave appropriately for the military mission set; agents need to provide feedback using doctrinally correct responses. The desired end state for this effort is the development of a prototype intelligent agent that uses the appropriate protocols to work within a CGF software suite. This agent will in turn be seamlessly integrated as a white force asset into a government or commercial simulated training environment.
This effort aims to limit the white force, or training aid, assets required to provide immersive and realistic training for warfighters who support a CAS mission, and for this effort specifically the JTAC. The primary focus of this work is to demonstrate the injection of white-force role players into a CGF that communicate and behave in a realistic manner to provide robust and cost-efficient training opportunities. The technical work associated with this effort should ensure that interoperability training standards are used. The modeled entities should be constructed synthetically and formatted so that both government and COTS systems may be used in parallel.
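As a sketch of what a doctrinally scripted white-force role player might look like, the Python state machine below answers a JTAC trainee's radio calls in the order prescribed by a simplified CAS exchange (check-in, 9-line readback, attack clearance, per JP 3-09.3 in the references). The message parsing and radio interface are hypothetical stand-ins; a fielded agent would exchange these messages with the CGF via published interoperability standards such as DIS or HLA.

    from enum import Enum, auto

    class State(Enum):
        AWAITING_CHECKIN = auto()
        AWAITING_NINE_LINE = auto()
        INBOUND = auto()

    class AircrewAgent:
        """White-force aircrew role player with scripted, doctrinally ordered replies."""

        def __init__(self, callsign="HOG 11"):
            self.callsign = callsign
            self.state = State.AWAITING_CHECKIN

        def on_radio(self, msg):
            """Return the agent's scripted radio response to a trainee call."""
            text = msg.lower()
            if self.state is State.AWAITING_CHECKIN and "check in" in text:
                self.state = State.AWAITING_NINE_LINE
                return (f"{self.callsign}, 2 x A-10, 30 minutes playtime, "
                        "2 x GBU-12, ready for 9-line")
            if self.state is State.AWAITING_NINE_LINE and "9-line" in text:
                self.state = State.INBOUND
                # Readback of mandatory items (illustrative values).
                return (f"{self.callsign} copies: elevation 3,200 feet, "
                        "target T-72 in the open, no restrictions")
            if self.state is State.INBOUND and "cleared hot" in text:
                return f"{self.callsign}, in from the south, cleared hot"
            return f"{self.callsign}, say again"

    agent = AircrewAgent()
    print(agent.on_radio("Hog 11, check in"))
    print(agent.on_radio("Hog 11, 9-line follows ..."))
    print(agent.on_radio("Hog 11, cleared hot"))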

PHASE I: Identify and document CAS-related missions that the JTAC warfighters are expected to engage in with U.S. and allied partners. Identify and document possible white-force role players to be injected into a CGF. Develop and demonstrate a prototype of a functioning white force injected into a CGF for a JTAC-related CAS mission.

PHASE II: Upon successful demonstration in the Phase I effort, the white force agent will be refined, fully developed, and tested in a simulated environment. In addition, a second agent will be developed to support an additional white force role. Both agents will be demonstrated in a simulated training environment. Three CAS test scenarios will be developed that utilize the white force agents injected into a CGF.

PHASE III DUAL USE APPLICATIONS: This effort will provide an array of intelligent role players injected into CGFs to stimulate training environments for ground forces, to include Tactical Air Control Party (TACP), Air Battle Managers (ABM), Air Support Operations Center, and Air Operations Center warfighters.

REFERENCES:

1. Bradley, D. R., & Abelson, S. B. (1995). Desktop flight simulators: Simulation fidelity and pilot performance. Behavior Research Methods, Instruments & Computers, 27(2), 152-159.
2. Doyle, M. J., & Portrey, A. M. (2011). Are Current Modeling Architectures Viable for Rapid Human Behavior Modeling? Proceedings of the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC), Orlando, FL. National Training Systems Association.
3. Feickert, A. (2013). The Unified Command Plan and Combatant Commands: Background and Issues for Congress. Washington D.C.: Congressional Research Service.
4. Freier, N., Bilko, D., Driscoll, M., Iyer, A., Rugen, W., Smith, T., & Trollinger, M. (2011). U.S. Ground Force Capabilities through 2020. Washington D.C.: Center for Strategic & International Studies.
5. Myers, C. W., Gluck, K. A., Gunzelmann, G., & Krusmark, M. (2010). Validating computational cognitive process models across multiple timescales. Journal of Artificial General Intelligence, 2(1), 108-127.
6. Neubauer, P., & Watz, E. (2011). Network Protocol Extensions for Automated Human Performance Assessment in Distributed Training Simulation (11S-SIW-053). Paper presented to 2011 Spring Simulation Interoperability Workshop, Boston, MA.
7. Rodgers, S., Myers, C., Ball, J. & Freiman, M. (2012). Toward a Situation Model in a Cognitive Architecture. Computational and Mathematical Organization Theory.
8. Rosenberg, B., Furtak, M., Guarino, S., Harper, K., Metzger, M., Neal Reilly, S., Niehaus, J., and Weyhrauch, P. (2011). "Easing Behavior Authoring of Intelligent Entities for Training," Proceedings of the 20th Conference on Behavior Representation in Modeling and Simulation (BRIMS), Sundance, UT.
9. Staff, J. C. (2009). Joint Publication 3-09.3: Close Air Support. Washington, DC.
10. Taggart, B. T. (2009). An Argument for the Keyhole Template for Close Air Support on the Urban Battlefield. Quantico: Defense Technical Information Center.
11. Winner, J. L., Nelson, S. F., Burditt, R. L., & Pohl, A. J. (2011). Evaluating games engines for incorporation in military simulation and training. Proceedings from GameOn North America. Troy, NY.
KEYWORDS: Command and Control Training, Computer Generated Forces, Close Air Support, Joint Terminal Attack Controller Training, Intelligent Agent, Cognitive Modeling

AF141-021 TITLE: Holographic Lightfield 3D Display Metrology (HL3DM)
KEY TECHNOLOGY AREA(S): Human systems

OBJECTIVE: Develop test and evaluation methodology for holographic lightfield 3D displays with an automated measurement system to support comparisons of prototypes emerging from research, to enable robust calibration, and to perform product acceptance testing.



DESCRIPTION: Advanced field-of-light display (FoLD) 3D visualization systems, which enable multi-user, full-parallax viewing of complex 3D data without eyewear, are being developed with the aim of increasing the productivity of operators and analysts in C2 Operations Centers. FoLD approaches have many potential advantages over the more common stereoscopic 3D (S3D) displays, including improved comfort and perception. FoLD systems achieve these user-acceptance improvements by (a) correcting S3D's incongruous accommodative, vergence, and motion-parallax depth cues and (b) eliminating the need for 3D spectacles (enabling eye contact and non-verbal gesture communication). Full-parallax, multi-perspective lightfield 3D displays could also enhance collaboration and shared understanding of multi-layer 3D data sets in other application areas, including military intelligence, medical training, molecular research, mineral geology, and similar civil big-data environments.
Research toward these emerging eye-strain- and nausea-free FoLD systems involves novel holographic, volumetric, multi-planar depthcube, integral-imaging, and other lightfield display types, which have been demonstrated as laboratory prototypes. Further hardware and software maturation is necessary for successful technology transition and commercialization. However, progress is currently constrained by a lack of validated metrics (physical and perceptual) and by a lack of testing protocols based on realistic user content and scenarios. Rapid, inexpensive physical measurement methodologies are required to guide research spirals, display calibration, and product acceptance. An automated measurement approach is needed to reduce the costs associated with making the large number of measurements required to describe display depth and lateral image quality from all pupil-pair positions within a reasonably large (30x30-deg to 90x90-deg) image viewing zone. Proposed approaches to FoLD metrology should address the multidisciplinary nature of display test and evaluation (T&E). Human visual perception needs to be convincingly addressed in all new physical (instrumental) measurement procedures.
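To illustrate why an automated approach is needed, the short Python sketch below counts the camera (pupil) positions implied by sampling a viewing zone at roughly eye-pupil spacing; the working distance and grid spacing are assumed values, not topic requirements.

    import math

    VIEW_DISTANCE_M = 0.6       # assumed nominal viewing distance
    PUPIL_SPACING_M = 0.005     # sample every ~5 mm (adult pupil diameter scale)

    def pupil_positions(zone_deg):
        """Positions on a square grid spanning a zone_deg x zone_deg viewing zone."""
        half_width = VIEW_DISTANCE_M * math.tan(math.radians(zone_deg / 2.0))
        n_side = int(2 * half_width / PUPIL_SPACING_M) + 1
        return n_side ** 2

    for zone in (30, 60, 90):
        print(f"{zone}x{zone}-deg zone: ~{pupil_positions(zone):,} pupil positions")

Even at this coarse sampling, a 90x90-deg zone implies tens of thousands of measurement positions, each requiring depth and lateral image-quality characterization, which is impractical to collect manually.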

