This effort will develop a proof-of-concept test bed demonstrating that a low-cost imager is capable of sensing through heavy dust, fog, and smoke. This imager is intended to complement a high-resolution LWIR camera for the detection of common obstacles and targets encountered while driving ground vehicles. Minimum requirements are the ability to detect an obstacle as small as 56 cm in diameter at a distance of 25 m (50 m objective), a refresh rate of 15 Hz (30 Hz objective), a horizontal field of view (HFOV) of 20° (60° objective), and a vertical field of view (VFOV) of at least 6° (15° objective). Measurement of the range to and the velocity of targets is also desirable. The proposed solution should be scalable, enabling development of either higher or lower resolution imagers based on the proposed concept.
Due to the low-cost requirement, preference will be given to designs that include commercial off-the-shelf (COTS) components. Passive approaches are preferred, but active methods will also be considered.
Previous research suggests that RF and millimeter-wave based solutions are likely candidates, but other methods will be considered.
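To make the detection requirement concrete, the following minimal sketch works out the instantaneous field of view (IFOV) implied by a 56 cm obstacle at the threshold and objective ranges, assuming a two-pixels-on-target detection criterion (an illustrative assumption; the topic does not prescribe one):

```python
import math

def required_ifov_deg(target_m: float, range_m: float, pixels_on_target: int = 2) -> float:
    """Angular subtense of the target divided by the number of pixels
    required across it (two pixels is a common detection heuristic)."""
    subtense_deg = math.degrees(2 * math.atan(target_m / (2 * range_m)))
    return subtense_deg / pixels_on_target

# Threshold: 56 cm obstacle at 25 m; objective: the same obstacle at 50 m.
for rng_m in (25.0, 50.0):
    ifov = required_ifov_deg(0.56, rng_m)
    pixels = math.ceil(20.0 / ifov)  # columns needed across the 20 deg threshold HFOV
    print(f"{rng_m:4.0f} m: IFOV <= {ifov:.3f} deg/pixel -> >= {pixels} columns")
```

Under these assumptions, the threshold case requires roughly 0.6° per pixel (about 32 columns across a 20° HFOV) and the objective case roughly 0.3° per pixel (about 63 columns), which suggests even a modest array could satisfy the requirement if adequate sensitivity is achieved.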
PHASE I: Design the imager and validate, using analytical models, that the system can fulfill the requirements. Build a simple prototype to validate the design and assumptions. Provide a cost estimate for prototyping the designed system.
PHASE II: Based on the results and analysis of Phase I, build a fully functional test bed that can be mounted on a ground vehicle and tested in a relevant environment. Demonstrate imagery fused with LWIR video and quantify performance. The Government will provide the test vehicle and LWIR sensor.
PHASE III DUAL USE APPLICATIONS: Integrate low-cost imager with LWIR sensor into a single enclosure and achieve a Technology Readiness Level 6 (TRL 6).
REFERENCES:
1. T. E. Dillon, C. A. Schuetz, R. D. Martin, D. G. Mackrides, S. Shi, P. Yao, K. Shreve, et al., "Passive, real-time millimeter wave imaging for degraded visual environment mitigation," Proc. SPIE 9471, Degraded Visual Environments: Enhanced, Synthetic, and External Vision Solutions, Baltimore, MD, 2015, 947103.
2. C. A. Schuetz, R. D. Martin, C. Harrity, and D. W. Prather, "Progress towards a 'FLASH' imaging RADAR using RF photonics," 2016 IEEE Avionics and Vehicle Fiber-Optics and Photonics Conference (AVFOP), 2016, pp. 187-188.
3. C. A. Martin, J. A. Lovberg, and V. G. Kolinko, "Expanding the spectrum: 20 years of advances in MMW imagery," Proc. SPIE 10189, Passive and Active Millimeter-Wave Imaging XX, Anaheim, CA, 2017, 1018903.
KEYWORDS: DVE, Degraded Visual Environments, RF, MMW, Millimeter-Wave, Imaging Radar, LWIR, Multi-spectral imaging, Sensors, Acoustics, Seismic, non-traditional sensing modalities
A18-041
TITLE: On-the-Move Spatio-Temporal Processing and Exploitation for Full Motion EO/IR Sensor
TECHNOLOGY AREA(S): Electronics
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), which controls the export and import of defense-related material and services. Offerors must disclose any proposed use of foreign nationals, their country of origin, and what tasks each would accomplish in the statement of work in accordance with section 5.4.c.(8) of the Announcement.
OBJECTIVE: Develop spatiotemporal processing and exploitation for a full-motion Electro-Optic/Infrared (EO/IR) sensor for on-the-move, real-time detection of in-road and road-side explosive hazard and threat indicators for the route clearance application.
DESCRIPTION: Traditionally, EO/IR sensor processing and exploitation of full-motion video has approached the automated target detection problem as a cascade of image processing tasks that detect regions of interest (ROIs), followed by tracking of these ROIs over a sequence of images to build confidence before a decision. While such an approach is reasonable for sensors operating at low frame rates (such as hyperspectral sensors), there is an opportunity and a need for more integrated spatial and temporal exploitation of data for EO/IR sensors that readily provide full-motion video at 30 frames per second and higher. There is rich target-specific (structural and spectral) information in the temporal evolution of the signature in full-motion video captured over multiple frames from a gradually changing perspective. The traditional approach, centered on spatial image exploitation and temporal tracking of detections, cannot fully exploit these spatiotemporal characteristics of the threat signature. Image processing and machine vision approaches such as super-resolution imaging and structure from motion have sought to exploit this temporal content in full-motion video to tease out additional information and improve the quality/content of the image frame. However, such pre-processing steps are generally computationally expensive and still require traditional image-based detection methods for automated exploitation. Highly varying imaging conditions, an ever-changing clutter environment, and uncertain threat scenarios further limit the suitability of such approaches for the challenging task of on-the-move, real-time detection of in-road and road-side explosive hazard and threat indicators for the route clearance application, in both rural and urban scenarios on improved or unimproved roads.
On-the-move, real-time detection will require tools, techniques, and a video processing architecture to identify and efficiently capture robust spatiotemporal features and feature-flow characteristics that may facilitate reliable detection of threats and threat signatures. Further, these threat signatures may occur at different (and often a priori unknown) spatial and temporal scales. While physics-based features and feature-flow characteristics are particularly valuable for gaining insight into and evaluating a technique, they are often hard to come by for unstructured tasks. More recent advances in learning algorithms, flux-tensor processing, and deep-learning networks may provide an opportunity to investigate the viability and suitability of such spatiotemporal detection and exploitation for the route clearance application. While on-the-move detection of in-road and road-side threats from ground-based and low-flying airborne EO/IR sensors is of primary interest, person, object, and vehicle detection and tracking, as well as human activity detection and characterization, will also be of interest, where applicable, from the perspective of threat indicators.
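As one illustration of the flux-tensor processing cited above, the following minimal sketch computes the flux-tensor trace (locally averaged squared derivatives of the temporal gradient) over a short stack of grayscale frames; the frame-stack depth, filter choices, and threshold are assumptions made for illustration, not values taken from this topic:

```python
import numpy as np
from scipy import ndimage

def flux_tensor_trace(frames: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Trace of the flux tensor over a (T, H, W) grayscale frame stack:
    locally averaged squared spatial/temporal derivatives of the temporal
    gradient. High values flag moving structure; static clutter stays low."""
    f = frames.astype(np.float32)
    d_t = np.gradient(f, axis=0)          # dI/dt at every pixel
    d_xt = ndimage.sobel(d_t, axis=2)     # d2I/(dx dt)
    d_yt = ndimage.sobel(d_t, axis=1)     # d2I/(dy dt)
    d_tt = np.gradient(d_t, axis=0)       # d2I/dt2
    trace = d_xt**2 + d_yt**2 + d_tt**2
    # Gaussian smoothing stands in for the tensor's local window integral.
    return ndimage.gaussian_filter(trace[len(f) // 2], sigma)

# Usage: threshold the central frame's trace to obtain motion candidates.
stack = np.random.rand(7, 480, 640)       # stand-in for seven IR frames
motion_mask = flux_tensor_trace(stack) > 0.1
```

Unlike a detect-then-track cascade, this spatiotemporal operator consumes the frame stack jointly, which is the integration this topic calls for; a practical pipeline would feed such features into a learned classifier rather than a fixed threshold.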
PHASE I: The Phase I goal under this effort is to evaluate the current state of the art, identify processing tools/algorithms, develop a design of the exploitation architecture/pipeline, and scope the processing hardware that will allow real-time, on-the-move, integrated spatiotemporal processing of full-motion video data from EO/IR sensors for detection of in-road and road-side explosive hazard and threat indicators for the route clearance application. A representative set of ground-truthed data for in-road and road-side threats from ground-based EO/IR sensors will be provided to evaluate the feasibility of critical technologies/algorithms. The Phase I final report must summarize the current state of the art in spatiotemporal processing of full-motion video and provide details of the technical approach/algorithms, the conceptual processing architecture/pipeline, the rationale for the selected processing/exploitation architecture, system-level capabilities and limitations, and critical technology/performance risks for the proposed processing and exploitation approach.
PHASE II: The Phase II goal under this effort is to implement and evaluate the viability, utility, and expected performance of spatiotemporal features, processing, and exploitation techniques for real-time, on-the-move detection of in-road and road-side explosive hazard and threat indicators for the route clearance application. The proposed algorithms are expected to be operated and demonstrated in real time (at a specified frame rate that the proposer may identify based on processing/computation needs), on the move (at a suitable, useful rate of advance), running on processing hardware installed and integrated on a ground vehicle for a representative mission scenario in a test environment. The Phase II final report will include the detailed system (software and hardware) design, hardware-software interfaces, system capabilities and limitations, a detailed summary of testing and results, lessons learned, and critical technology/performance risks.
PHASE III DUAL USE APPLICATIONS: The Phase III goal is to develop an end-to-end demonstration prototype (including a suitable sensor, processing hardware, detection software, and user interface) for on-the-move, real-time detection of in-road and road-side explosive hazard and threat indicators for the route clearance application. The sensor system may be mounted on a ground vehicle or an airborne platform and operated and demonstrated in a relevant variable environment (including mission-relevant variability such as terrain, time of day, or climate conditions). The sensor system technology developed under this effort will have high potential for other commercial applications, including law enforcement, border security and surveillance, autonomous robotics, and self-driving cars.
REFERENCES:
1. K. K. Green, C. Geyer, C. Burnette, S. Agarwal, B. Swett, C. Phan and D. Deterline, "Near real-time, on-the-move software PED using VPEF," in SPIE DSS, Baltimore, MD, 2015.
2. C. Burnette, M. Schneider, S. Agarwal, D. Deterline, C. Geyer, C. Phan, R. M. Lydic, K. Green, and B. Swett, "Near real-time, on-the-move multi-sensor integration and computing framework," in SPIE DSS, Baltimore, MD, 2015.
3. B. Ling, S. Agarwal, S. Olivera, Z. Vasilkoski, C. Phan, and C. Geyer, "Real-Time Buried Threat Detection and Cueing Capability in VPEF Environment," in SPIE DSS, Baltimore, MD, 2015.
KEYWORDS: Spatiotemporal processing, full-motion video exploitation, automated target detection, deep-learning networks, feature-flow, route clearance, improvised explosive devices
A18-042
TITLE: Helmet-Mounted Microbolometer Hostile Fire Sensor
TECHNOLOGY AREA(S): Electronics
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), which controls the export and import of defense-related material and services. Offerors must disclose any proposed use of foreign nationals, their country of origin, and what tasks each would accomplish in the statement of work in accordance with section 5.4.c.(8) of the Announcement.
OBJECTIVE: Develop and deliver an uncooled, microbolometer-based small and medium arms hostile fire detection (HFD) sensor to be mounted on a helmet or a small semi-autonomous or autonomous ground system. Appropriate algorithms that provide, at a minimum, the angular direction to the origin of hostile fire events are required.
DESCRIPTION: Especially when first engaged, it is often difficult for a soldier or autonomous system to quickly ascertain where hostile fire has originated. This confusion prevents a quick and effective response to counter and eliminate the threat. This topic seeks to provide the soldier and autonomous system with a means to eliminate this confusion and to allow well-informed, timely actions to be taken in response to hostile fire. Acoustic systems have been developed, but their performance is severely degraded in environments that are prone to multi-path acoustic reflections, such as urban or forest environments [1].
Because the system is meant to be mounted on a helmet or small platform, it must be extremely lightweight and low power and possess an appropriate form factor: this is of primary importance in gaining user acceptance. Additionally, it should be compatible with, and not interfere with, other commonly helmet-mounted systems such as night vision goggles. The final production system must also be inexpensive enough to justify equipping ground troops and small robotic platforms, and it must run >12 hours minimum on batteries, ideally >24 hours.
The sensor need not be imaging, but it must provide at least the angular direction to the origin of the hostile fire event. To give the user the best chance of quickly identifying and engaging the threat, the system should minimally be capable of identifying the angle to the threat with <30° resolution and <±15° error, and ideally <5° resolution with <±2.5° error. However, this must be balanced against SWAP-C; horizontal angular (azimuth) resolution is more important than vertical (zenith). The time lag between the shot and display to the user should be minimal, ideally <50 ms.
The probability of detection at tactically relevant ranges for small arms (500–600 m), such as common assault rifles and carbines, and for medium arms (1–1.5 km), such as large rifles and machine guns, should be maximized (>90% minimum, ideally >95%), with false alarms close to zero. Other features, such as weapon-type identification, the ability to squelch alerts generated by friendly fire, and range to target, are desirable. The system must minimally operate at a brisk walking speed (>6.5 kph) and ideally at a sprint (≥25 kph).
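As a simple illustration of the angular-direction requirement, the sketch below maps a detection's column index on a staring microbolometer array to an azimuth relative to boresight; the array width and HFOV are assumed values chosen for illustration, not requirements from this topic:

```python
def pixel_to_azimuth_deg(col: int, n_cols: int, hfov_deg: float) -> float:
    """Map a detection's column index to azimuth relative to boresight,
    assuming a simple linear pixel-to-angle mapping across the array."""
    frac = (col + 0.5) / n_cols - 0.5     # -0.5 .. +0.5 across the columns
    return frac * hfov_deg

# An assumed 640-column array spanning a 90 deg HFOV gives ~0.14 deg per
# column, comfortably inside the ideal <5 deg resolution / <±2.5 deg error.
azimuth = pixel_to_azimuth_deg(col=480, n_cols=640, hfov_deg=90.0)
print(f"azimuth: {azimuth:+.2f} deg")     # ~ +22.57 deg right of boresight
```

This suggests that, for a staring design, the angular resolution goals are driven less by pixel count than by the detection algorithm's ability to localize the muzzle-flash signature against clutter.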
PHASE I: The proposer shall provide a complete helmet-mounted sensor design using only components which are COTS (commercial off-the-shelf) or those that could reasonably be designed and fabricated within the time and budget constraints. The sensor design need not be optimized for SWAP-C at this stage, but it must show extensibility to a usable and wearable system.
A complete and thorough understanding of the algorithms necessary to make the sensor successful shall be demonstrated. Rigorous modeling shall be performed to estimate system performance, including at least the probability of detection versus range, angular resolution and error, time to detect, and any other features. Sources of false alarms and potential mitigations should be well thought out and incorporated into the design.
PHASE II: Using the results of Phase I, fabricate and deliver a prototype helmet-mounted HFD system. The prototype should meet the requirements for TRL 4: component and/or breadboard validation in a laboratory environment. All required sensors must be mounted to the helmet, but processing and power may be external at this stage, so long as a detailed design path is provided to show that everything can be integrated onto the helmet (full integration is preferred).
Probability of detection, angular resolution and error, and time to detect shall be measured through live-fire laboratory testing at close to moderate distances, at least 50–100 m. False alarm mitigation techniques should also be laboratory or field tested where possible.
PHASE III DUAL USE APPLICATIONS: Transition applicable techniques and processes to a production environment with the support of an industry partner. Finalize a sensor design with appropriate SWAP-C and form factor based on human factors testing. Determine the best integration path as a capability upgrade to existing or future systems, including firmware and interfaces required to meet sensor interoperability protocols for integration into candidate systems as identified by the Army.
REFERENCES:
1. G. Tidhar, "Hostile fire detection using dual-band optics," SPIE Newsroom (2013).
2. AMRDEC Public Affairs, "Serenity payload detects hostile fire," https://www.army.mil/article/140459/Serenity_payload_detects_hostile_fire/ (2014).
3. "Uncooled Multi-Spectral (UMiS) Hostile Fire Detection and Discrimination System for Airborne Platforms," https://www.sbir.gov/sbirsearch/detail/824645 (2015).
4. E. Madden, "Small Arms Fire Location for the Dismounted Marine," Navy SBIR 2015.3, http://www.navysbir.com/n15_3/N153-125.htm (2015).
5. L. Zhang, F. Pantuso, G. Jin, A. Mazurenko, M. Erdtmann, S. Radhakrishnan, and J. Salerno, "High-speed uncooled MWIR hostile fire indication sensor," Proc. SPIE, Vol. 8012 (2011).
6. S. Nadav, G. Brodetzki, M. Danino, and M. Zahler, "Uncooled infrared sensor technology for hostile fire indication systems," Opt. Eng., Vol. 50, No. 6 (2011).
7. M. Pauli, W. Seisler, J. Price, A. Williams, C. Maraviglia, R. Evans, S. Moroz, M. Ertem, E. Heidhausen, and D. Burchick, "Infrared Detection and Geolocation of Gunfire and Ordnance Events from Ground and Air Platforms," www.dtic.mil/get-tr-doc/pdf?AD=ADA460225 (2004).
KEYWORDS: hostile fire, HFD, HFI, uncooled, bolometer, helmet
A18-043
TITLE: Real-time Scene Labeling and Passive Obstacle Avoidance in Infrared Video
TECHNOLOGY AREA(S): Electronics
OBJECTIVE: Develop and demonstrate techniques for labeling frames of infrared video in real time and for using those labels and other information to identify obstacles and threats.
DESCRIPTION: Great progress has been made in the automated identification of targets (objects of interest) in single-frame infrared imagery. However, less success has been achieved in multi-class characterization of entire images (scene labeling), with difficulties presented both by the correct classification of many categories of objects and by the computational time needed to process an entire image. Real-time capability is essential for obstacle avoidance, threat detection, and navigation in moving vehicles. What is needed is a set of algorithms that exploit spatial and temporal context for computationally efficient scene labeling of video sequences, enabling the military operator to respond to obstacles and threats in real time. The problem of threat detection and obstacle avoidance in full-motion passive infrared (IR) video is of critical interest to the Army. Vehicle drivers and sensor operators are inundated with many terabytes of video. Human operators are subject to fatigue, boredom, and information overload. To maintain the necessary situational awareness, it is vital to automate the video understanding process as much as possible. The problem presents immense computational complexity and is unsolved. Novel deep learning methods have been developed that promise a qualitative breakthrough in machine learning and aided target recognition (AITR) for object detection and classification in video. The approach in this effort should expand these successes to include full-motion video understanding and threat detection.
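For concreteness, the following minimal sketch (assuming PyTorch, with a placeholder class count and frame size) shows the per-pixel labeling interface such a scene-labeling component would expose; it is an illustrative stub, not a performant architecture, and a real system would add the temporal context this topic emphasizes:

```python
import torch
import torch.nn as nn

class TinySceneLabeler(nn.Module):
    """Minimal fully convolutional labeler producing per-pixel class
    scores for single-channel IR frames. A fielded system would add
    temporal context (e.g., stacked frames or recurrence)."""
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),   # 1x1 conv -> per-pixel logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)                 # (B, n_classes, H, W)

model = TinySceneLabeler()
frame = torch.randn(1, 1, 240, 320)        # stand-in single-channel IR frame
label_map = model(frame).argmax(dim=1)     # (1, 240, 320) per-pixel class map
```

The fully convolutional form matters for the real-time requirement: one forward pass labels every pixel at once, rather than running a classifier per candidate region.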
PHASE I: Show proof of concept for scene labeling algorithms for obstacle avoidance, navigation, and threat detection in full-motion IR video. Show proof of concept for algorithms that greatly increase threat classification effectiveness (high probability of correct classification with minimal false alarms). Integrate the algorithms into a comprehensive algorithm suite. Test the algorithms on existing data. Demonstrate the feasibility of the technique on infrared (IR) video sequences. Distribute demonstration code to the Government for independent verification. Successful testing at the end of Phase I must show a level of algorithmic achievement such that potential Phase II development demands few fundamental breakthroughs and would be a natural continuation and development of the Phase I activity.
PHASE II: Complete the primary algorithmic development. Complete the implementation of the algorithms. Test the completed algorithms on Government-controlled data. The system must achieve a 90% classification rate with less than 5% false alarms. The principal deliverables are the algorithms. Documented algorithms will be fully deliverable to the Government in order to demonstrate and further test system capability. Successful testing at the end of Phase II must show a level of algorithmic achievement such that potential Phase III algorithmic development demands no major breakthroughs and would be a natural continuation and development of the Phase II activity.
PHASE III DUAL USE APPLICATIONS: Complete the final algorithmic development. Complete the final software system implementation of the algorithms. Test the completed algorithms on Government-controlled data. The system must achieve a 90% classification rate with less than 5% false alarms. Documented algorithms (along with system software) will be fully deliverable to the Government in order to demonstrate and further test system capability. Applications of the system will be in the NVESD Multi-Function Display Program and vehicle navigation packages. Civilian applications will be in crowd monitoring, navigation aids, and self-driving cars.
REFERENCES:
1. Farabet, C., Couprie, C., Najman, L., and LeCun, Y., “Learning Hierarchical Features for Scene Labeling”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 35 Issue 8, August 2013, pp. 1915-1929
2. Albalooshi, F. and Asari, V.K., "A Self-Organizing Lattice Boltzmann Active Contour (SOLBAC) Approach For Fast And Robust Object Region Segmentation," Proceedings IEEE International Conference on Image Processing - ICIP 2015, pp. 1329-1333, Quebec City, Canada, 27-30 September 2015.
3. I.-H. Jhuo and D. T. Lee, "Video Event Detection via Multi-modality Deep Learning," in Proceedings of the 22nd International Conference on Pattern Recognition (ICPR), pp. 666-671, 24-28 Aug. 2014
KEYWORDS: Aided Target Recognition, Deep Learning, Neural Networks, Scene Labeling, Threat Detection
A18-044
TITLE: Resource-Aware, Metadata-Based Information Sharing: Achieving Scalability and VoI in Future Autonomous Networks
TECHNOLOGY AREA(S): Information Systems
OBJECTIVE: The objective of this topic is to develop resource efficient methods and techniques that generate and annotate metadata based on information that has been retrieved from Army tactical networks that deploy artificial autonomous agents. The goal is to improve the accuracy of information queries, with this accuracy determined by quantitative criteria that reflect the risk in misidentifying what information is relevant for the Army mission at hand.
DESCRIPTION: The Army vision of artificial autonomous agents collaborating with mounted and dismounted forces in order to perform a wide range of mission operations will require scalable and robust networking solutions. Artificial agents of different types and complexity will consequently form a heterogeneous network that has variable resources and capabilities, and will need to coordinate and interact with each other to allow the completion of the required mission tasks while respecting the limited resources available in a tactical network.
Due to the limitations of the communication bandwidth, storage, and processing capabilities of tactical edge networks, it is impossible to disseminate all of the generated information (e.g., images and videos) to the agents that need it. For example, autonomous aerial drones with mounted cameras can generate images to aid in mission planning by uploading all the images/videos they record to a server, which then utilizes content-based techniques to resolve user queries over the images and videos. However, this approach can consume an appreciable amount of network bandwidth, and the storing and processing of images and videos can consume a significant amount of disk space and processing power on the server. Realistically, only a small percentage of the uploaded images or videos may actually be useful to any participating agent in the network; therefore, the resources utilized to upload, store, and process the remaining images and videos will be wasted.
These considerations call for the design of an information access methodology on the network that will be aware of network, computational, and relevance constraints. Such a methodology will significantly enhance the network’s ability to transfer relevant information, and will therefore increase the likelihood of mission effectiveness.
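To illustrate the kind of metadata-first query resolution described above, here is a minimal sketch in which agents publish compact annotations and a query is resolved against metadata alone, so that only matching items trigger the expensive transfer of full imagery; all field names and values are hypothetical, not a defined Army schema:

```python
from dataclasses import dataclass, field

@dataclass
class ImageMetadata:
    """Compact annotations an agent would publish instead of raw imagery.
    Every field here is illustrative, not a defined Army schema."""
    source_id: str
    lat: float
    lon: float
    timestamp: float
    tags: set = field(default_factory=set)    # e.g., {"vehicle", "road"}

def matches(meta: ImageMetadata, query_tags: set, bbox: tuple) -> bool:
    """Resolve a query against metadata alone; only matching entries
    trigger the expensive transfer of the full image or video."""
    lat_min, lat_max, lon_min, lon_max = bbox
    in_area = lat_min <= meta.lat <= lat_max and lon_min <= meta.lon <= lon_max
    return in_area and query_tags <= meta.tags  # tag subset test

catalog = [ImageMetadata("uav-07", 34.01, -117.32, 1700000000.0, {"vehicle", "road"})]
hits = [m for m in catalog if matches(m, {"vehicle"}, (33.9, 34.1, -117.4, -117.2))]
```

The design point is that the catalog of metadata is small enough to disseminate and query within tactical bandwidth budgets, while the bulky imagery moves only when its value to the mission has been established.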