Air Force 17.1 Small Business Innovation Research (SBIR) Phase I Proposal Submission Instructions





KEYWORDS: carbon-carbon structure instrumentation, temperature sensor, pressure sensor, hypersonic flow, arc jet facility


AF171-021

TITLE: Ultradense Plasmonic Integrated Devices and Circuits

TECHNOLOGY AREA(S): Battlespace, Electronics, Sensors

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with section 5.4.c.(8) of the solicitation and within the AF Component-specific instructions. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws. Please direct questions to the AF SBIR/STTR Contracting Officer, Ms. Gail Nyikon, gail.nyikon@us.af.mil.

OBJECTIVE: Develop ultradense, low-power plasmonic integrated components and devices for future battlefield sensors and systems.

DESCRIPTION: The use of surface plasmons guided by metallic structures is one of the most promising approaches for overcoming the diffraction limit of light and significantly increasing the integration and miniaturization of optical devices and components. The unique properties of plasmonic waveguides (such as very high mode confinement and ultrasharp bends) promise a level of integration of optical components previously unachievable with any other technology. However, significant advancements in the design and fabrication of both passive and electro-optically active plasmonic structures are required to transition this technology to practical applications. For example, because plasmonic devices demand very tight dimensional tolerances, electron-beam lithography is typically used to pattern plasmonic waveguides; this is a slow and expensive process, incompatible with the large-scale, low-cost production required by military and commercial applications. On the other hand, the significant advances in nanofabrication techniques made over the last decade may provide the means to solve this problem. For these reasons, advancements in both the theoretical understanding of plasmonic integrated circuits and the associated fabrication techniques are required.
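As a rough illustration of why plasmonic guides can confine light below the diffraction limit, the standard dispersion relation for a surface plasmon polariton (SPP) at a single metal-dielectric interface, k_spp = k0*sqrt(eps_m*eps_d/(eps_m + eps_d)), gives a guided wavelength shorter than the free-space wavelength and a finite propagation length set by metal loss. The sketch below is a minimal calculation of both quantities; the gold permittivity is an assumed representative value for illustration, not a number from this solicitation.

```python
import numpy as np

# Assumed illustrative values (not from the solicitation):
# gold near the 1550 nm telecom band, with air as the dielectric.
wavelength = 1550e-9          # free-space wavelength [m]
eps_metal = -115.0 + 11.3j    # approximate gold permittivity at 1550 nm
eps_diel = 1.0                # air

k0 = 2 * np.pi / wavelength
# SPP dispersion relation for a single metal-dielectric interface
k_spp = k0 * np.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))

spp_wavelength = 2 * np.pi / k_spp.real       # guided wavelength [m]
propagation_length = 1 / (2 * k_spp.imag)     # 1/e intensity decay [m]

print(f"SPP wavelength:     {spp_wavelength * 1e9:.0f} nm")
print(f"Propagation length: {propagation_length * 1e6:.0f} um")
```

For these values the guided wavelength shrinks only slightly, but tighter confinement (and shorter propagation) follows as the mode is squeezed into gaps or channels, which is the confinement-versus-loss trade that plasmonic circuit design must manage.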

Future battlefield systems will exploit highly sophisticated optical communication and signal processing systems with very high component integration density, low power consumption, and fast, low-cost data transfer. Successful completion of this program will lead to the development of high-quality, robust photonic circuits that will serve as an integrating medium for optical components and networks and that will perform basic on-chip functions such as signal conditioning and signal processing.

PHASE I: Demonstrate the feasibility of ultradense plasmonic components and circuits by modeling. Design an active integrated photonic structure and select the materials and fabrication methods. Demonstrate the validity of key processes. Identify the target application, develop a documented set of performance parameters, and propose a viable transition-to-manufacturing strategy.

PHASE II: Build upon the Phase I work to fabricate functional passive plasmonic devices. Design, fabricate, and demonstrate the functionality of active plasmonic devices. Address the integration of the developed plasmonic circuits into existing or future communication systems. Demonstrate the scalability of the fabrication process to large-scale, low-cost production in Phase III.

PHASE III DUAL USE APPLICATIONS: Apply the ultradense integrated photonic components in sensors, signal processing, and communications. Commercial application: enable the design and fabrication of ultradense functional optoelectronic (OE) circuits for optical communications and optical signal processing.

REFERENCES:

1. T. Nikolajsen, K. Leosson, and S. I. Bozhevolnyi, "Surface plasmon polariton based modulators and switches operating at telecom wavelengths," Appl. Phys. Lett. 85, 5833-5835 (2004).

2. S. I. Bozhevolnyi, V. S. Volkov, E. Devaux, J.-Y. Laluet, and T. W. Ebbesen, "Channel plasmon subwavelength waveguide components including interferometers and ring resonators," Nature 440, 508-511 (2006).

KEYWORDS: signal processing, optical components, optical subcomponents, photonic integration, optical processors, nanotechnology, plasmonics, meta-materials, nanofabrication


AF171-022

TITLE: Advanced Back-Illuminated CMOS Image Sensors for Adaptive Optics Applications

TECHNOLOGY AREA(S): Battlespace, Electronics, Sensors

OBJECTIVE: Develop a high-speed, low-noise scientific camera using a back-illuminated silicon CMOS image sensor capable of operating at high quantum efficiency (>80%) without the use of microlenses for use in advanced adaptive optics applications.

DESCRIPTION: Low-noise, high-speed scientific cameras are the backbone of adaptive optics (AO) systems and are used in a wide range of applications, from wavefront sensing to tilt control. Currently, charge coupled device (CCD) image sensors are the technology of choice for these and other demanding imaging applications. The technology is well established and enables high quantum efficiency (QE) over the wavelengths of interest. In contrast, complementary metal-oxide-semiconductor (CMOS) image sensors offer potential performance advantages over CCDs. These advantages include lower read noise, the possibility of integrating the imaging system on chip, the provision of digital outputs, and a reduction in overall processing cost by leveraging existing CMOS digital processing technology. However, CMOS image sensors have yet to enter the high-specification scientific imaging market on a large scale, especially for AO applications operating in extremely low-light conditions. Standard front-side illuminated CMOS image sensors suffer from reduced QE because a light mask covers the electronic circuitry contained in each pixel, ultimately reducing the sensitivity of the device. In some applications, the QE can be increased through the use of microlenses that direct photons to the light-sensitive areas of each pixel. However, microlenses are not suitable for wavefront sensing and other high-performance science applications because photons are not absorbed uniformly across the device. Thus, a back-side illuminated CMOS image sensor with high QE and sensitivity is desirable for high-specification, low-light AO applications. In principle, it should be possible to back-thin existing CMOS image sensors to obtain the same, or better, performance as the CCD.

This solicitation seeks exploration into the design, development, and demonstration of a high-speed, low-noise, scientific camera using a back-side illuminated 128 × 128 (minimum) silicon CMOS image sensor for use in advanced AO applications. The sensor is expected to demonstrate QE above 80% (at 570 nm) without the use of microlenses.

Additional performance objectives for the sensor are (1) full frame rates above 5 kHz, (2) read noise below 3 e- at a 5 kHz frame rate, (3) true on-chip binning, (4) efficient windowing geometry for high frame rates, (5) oversampling through non-destructive reads to reduce read noise at lower frame rates, (6) high linearity over the dynamic range of the imager for each pixel, and (7) use of the camera-link communication protocol.
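To make objective (5) concrete: averaging N non-destructive reads of the same pixel reduces the effective read noise roughly as 1/sqrt(N), assuming the noise of each read is uncorrelated (an idealization; correlated 1/f noise limits the gain in practice). A minimal sketch, using the 3 e- figure from objective (2) and assumed read counts:

```python
import numpy as np

read_noise = 3.0  # e- rms per single read, from performance objective (2)

# Effective read noise after averaging N non-destructive reads,
# under the idealized assumption of uncorrelated read noise.
for n_reads in (1, 4, 16, 64):
    effective = read_noise / np.sqrt(n_reads)
    print(f"{n_reads:3d} reads -> {effective:.2f} e- rms")
```

The trade-off is frame rate: each extra read consumes readout time, which is why the objective ties oversampling to the lower-frame-rate regime.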

PHASE I: Perform preliminary analysis and conduct trade studies to identify a suitable silicon CMOS image sensor. Work with foundry partners to implement back-thinning, anti-reflection coating, and packaging processes to solve the fill-factor and QE issues associated with the front-side illuminated sensor.

PHASE II: Integrate the packaged back-illuminated CMOS sensor into a controller that uses a standard camera-link interface protocol to meet the stated performance requirements. Test and validate device performance. Further work is desired to complete specialized chip fabrication runs that incorporate a skipping throw-away-guard-column (TAGC) mode to double the frame rate (or halve the latency), and to optimize the read-out electronics for more specialized wavefront sensor (WFS) applications.

PHASE III DUAL USE APPLICATIONS: Develop technology applicable to high-specification science-imaging applications.

REFERENCES:

1. Paul Jerram, David Burt, Neil Guyatt, Vincent Hibon, Joel Vaillant, et al., "Back-thinned CMOS sensor optimization," Proc. SPIE 7598, Optical Components and Materials VII, 759813 (February 25, 2010); doi:10.1117/12.852389.

2. James R. Janesick, Tom Elliott, Stewart Collins, Morley M. Blouke, and Jack Freeman, "Scientific Charge-Coupled Devices," Opt. Eng. 26(8), 268692 (Aug 01, 1987).

3. Daniel Durini, David Arutinov, Martin Lesser, et al., "High Performance Silicon Imaging: Fundamentals and Applications of CMOS and CCD Sensors," Woodhead Publishing Ltd., 1st edition (May 28, 2014).

KEYWORDS: Image Sensor, Science Camera, CCD, CMOS, Back-Illuminated, Back-Thinned, Adaptive Optics




AF171-023

TITLE: Adaptive Optics Prototype for a Meter-Class Telescope Used for Space Surveillance

TECHNOLOGY AREA(S): Space Platforms

OBJECTIVE: Build and demonstrate an adaptive optics prototype that can be deployed at 1 m class telescopes in the Space Surveillance Network (SSN) to characterize Low Earth Orbit (LEO) objects and perform area inspection around high-value assets at Geosynchronous Earth Orbit (GEO).

DESCRIPTION: The goal of this project is to design, build, and test an adaptive optics (AO) system that can be deployed to 1 m class telescopes at an AFRL site or at a Ground-based Electro-Optical Deep Space Surveillance (GEODSS) site. An affordable AO system would enhance the value of future meter-class telescopes developed for the Space Surveillance Network (SSN). The AO system shall be flexible enough to be integrated with telescopes ranging from 0.5 m to 1.5 m in aperture. The system field of view (FOV) should allow for imaging Low Earth Orbit (LEO) objects and spatially separating closely-spaced Geosynchronous Earth Orbit (GEO) objects.

An effective AO system is expected to include a tracker; a low-order mode corrector (steering mirror, SM); a high-order mode corrector (deformable mirror, DM, or micro-electro-mechanical system, MEMS); an established wavefront sensor; an auxiliary path to split the visible light to a second, novel WFS; an imaging camera; a reconstruction algorithm; software; and an interface mechanism that allows closed-loop communication between the corrector elements, the WFS, and the reconstruction algorithm. The AO system should be able to correct a range of seeing between 0.7 and 2 arcseconds at 500 nm. The servo loop shall be able to run at kilohertz speeds. For the imaging camera, the number of pixels selected and the pixel pitch shall be able to sample D/r0 = 20 (see the worked sizing sketch after the list below). The system shall record reconstructed wavefronts from the visible WFS, and I-band or J-band PSFs from the imaging camera. A mechanism is needed to either send all the light to the baseline WFS or send different percentages (25%, 50%, 75%, 100%) of the light to the baseline WFS while directing the remainder to the auxiliary WFS. The design, construction, and integration of the second WFS are independent of this SBIR. Successful bidders will, to the greatest extent possible, demonstrate:

1. Ability to design and build an AO system adhering as much as possible to the previous description and based as much as possible on COTS parts. Provide a conceptual optical design with FOV specifications. Include a conceptual electrical design.
2. Ability to design, code, and test a reconstruction algorithm that is linear, has a high dynamic range, and allows user modification.
3. Ability to integrate rapidly with a telescope for on-sky testing (target < 4 weeks). The baseline is a 1 m telescope with f/100 and a connected coude room operated by AFRL at the Starfire Optical Range.
4. Ability to build a compact AO system which can be transported to potential GEODSS sites.
5. Ability to switch rapidly (minutes) between the baseline WFS and the auxiliary novel WFS without affecting alignment.
6. Ability to deliver specifications and performance metrics for the scoring, WFS, and imaging cameras.
7. Ability to accurately predict and model telescope hardware and AO system performance, as well as photometric and radiometric performance, for objects of visual magnitude mv = 7 - 14. Derive a metric for observing closely-spaced objects.
8. Ability to estimate the costs of the hardware, the person-hours required to assemble and interface the AO system, and the software development.
9. Ability to provide follow-on use of the AO system and the software by the Air Force under a cooperative agreement to be arranged in the future.
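
The seeing and sampling figures above pin down the corrector and camera scales. Using the standard relation that the seeing FWHM is approximately 0.98*lambda/r0, 2-arcsecond seeing at 500 nm corresponds to a Fried parameter r0 of about 5 cm, which for a 1 m aperture (the baseline telescope in item 3) gives D/r0 = 20, matching the stated sampling requirement. The sketch below works through these numbers; only the formula is imported, every input comes from the description.

```python
import numpy as np

ARCSEC_PER_RAD = 180 / np.pi * 3600   # ~206265

wavelength = 500e-9   # sensing wavelength [m], from the description
aperture = 1.0        # baseline telescope aperture [m], from item 3

for seeing_arcsec in (0.7, 2.0):      # seeing range from the description
    seeing_rad = seeing_arcsec / ARCSEC_PER_RAD
    r0 = 0.98 * wavelength / seeing_rad       # Fried parameter [m]
    d_over_r0 = aperture / r0                 # turbulence strength across pupil
    # Nyquist pixel scale for a diffraction-limited imaging camera
    nyquist = 0.5 * wavelength / aperture * ARCSEC_PER_RAD
    print(f"seeing {seeing_arcsec:.1f}\": r0 = {r0 * 100:.1f} cm, "
          f"D/r0 = {d_over_r0:.0f}, Nyquist pixel = {nyquist:.3f}\"")
```

At the worst-case 2-arcsecond seeing the D/r0 = 20 requirement is just met, and the ~0.05 arcsecond Nyquist scale sets the imaging-camera pixel pitch.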

PHASE I: Carry out simulations to test the performance of the AO system against the listed metrics. Design an AO system that can be integrated with a 1 m class telescope using the metrics given above, especially the ability to correct seeing between 0.7 and 2 arcseconds at 500 nm, operate at kilohertz speeds, and sample D/r0 = 20. The system should be built for a coude room but be compact enough to be transportable. Provide a cost estimate for the hardware, software, and person-hours required to design, build, and integrate the system.

PHASE II: Work in consultation with the government to build and install the AO system on a 1 m telescope. Carry out on-sky observations with the AO testbed using sources of different magnitudes (mv = 7 - 14) as well as different separations (3 lambda/D - 4 lambda/D). From the observations, record reconstructed wavefronts from the WFS and corrected PSFs from the imaging camera. Provide a report on system performance based on the metrics and goals outlined in the description.

PHASE III DUAL USE APPLICATIONS: Integrate and demonstrate one or more AO systems. Work with the government to operate the prototype to conduct an SSA mission. Analyze the performance of the AO system using the metrics in the description. Prepare a final report on the AO prototype.

REFERENCES:

1. J. W. Hardy. Adaptive Optics for Astronomical Telescopes. Oxford University Press, July 1998.

2. Fugate et al. Experimental Demonstration of Real Time Atmospheric Compensation with Adaptive Optics Employing Laser Guide Stars. BAAS, Vol. 23, p. 88, March 1991.

3. O. Guyon. Limits of Adaptive Optics for High-Contrast Imaging. ApJ, 629:592–614, August 2005.

KEYWORDS: Adaptive Optics, AO, wavefront sensors, WFS, Shack Hartmann wavefront sensor, SHWFS, Ground-based Electro-optical Deep Space Surveillance, GEODSS, Space Surveillance Network, SSN, Space Situational Awareness, SSA


AF171-024

TITLE: Daylight Glare Reduction Using a Light-field Camera

TECHNOLOGY AREA(S): Battlespace, Electronics, Sensors

OBJECTIVE: Improve the reduction/rejection of glare caused by close proximity to the Sun during daylight satellite tracking operations, thereby reducing the exclusion angle around the Sun.

DESCRIPTION: The USAF requires the ability to track space objects in daylight. At present, Air Force daylight tracking telescopes are excluded from the area within about 15 degrees of the Sun due to glare that baffles are not able to block. Current methods of reducing glare involve modifying the telescope and the optics, e.g., baffles and optics coatings. The Directed Energy Directorate is interested in identifying and investigating technologies that could be applied to reduce or reject solar glare inexpensively. We are looking for innovative ways to reduce this exclusion zone to less than 15 degrees. Two concepts of particular interest are coded apertures and modulo cameras. Coded apertures have been shown to be effective in removing glare in terrestrial photography, and we believe they may be useful in reducing solar glare in daylight observations. The modulo camera uses a technology that removes the effects of glare/overexposure at the chip level while leaving the unsaturated areas of the chip as they are; the overexposed areas are then computationally re-inserted, with the result being an image showing no overexposed pixels. Both of these technologies show promise that they and/or other new techniques could be applied to meet the Air Force need to minimize the exclusion zone around the Sun.
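
To make the modulo-camera idea concrete: instead of saturating, the sensor wraps each pixel's accumulated signal modulo its full-well value, and post-processing unwraps the result by exploiting spatial smoothness. The 1-D sketch below illustrates the principle only; the 8-bit wrap value and the neighbor-difference unwrapping rule are illustrative assumptions, not the published modulo-camera algorithm.

```python
import numpy as np

WRAP = 256  # assumed full-well / ADC wrap value (8-bit, illustrative)

def unwrap_1d(measured):
    """Naive modulo unwrapping: assume the true signal changes by less
    than WRAP/2 between neighboring pixels, and add or subtract
    multiples of WRAP to keep successive differences small."""
    out = measured.astype(float)
    offset = 0.0
    for i in range(1, len(out)):
        diff = measured[i] - measured[i - 1]
        if diff > WRAP / 2:      # apparent jump up -> signal wrapped down
            offset -= WRAP
        elif diff < -WRAP / 2:   # apparent jump down -> signal wrapped up
            offset += WRAP
        out[i] = measured[i] + offset
    return out

# A bright, smooth glare profile that would saturate a normal 8-bit pixel
true = 600 * np.exp(-np.linspace(-2, 2, 50) ** 2)
measured = true % WRAP                 # what the modulo sensor records
recovered = unwrap_1d(measured)

print("max abs error:", np.max(np.abs(recovered - true)))
```

Real modulo-camera reconstruction operates in 2-D and must handle noise and steep gradients, but the same wrap-then-unwrap structure is why no pixel is ever lost to overexposure.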

Modeling and simulation are highly desired to guide the development of this technology. Software designed to improve image processing is also of interest. Development of a prototype device in Phase II that can be used to demonstrate the glare reduction is the principal goal of this SBIR.

We believe any technology that meets the Air Force's needs could also be used in military or non-military applications for tracking airborne missiles, projectiles, aircraft, and even birds in close proximity to the Sun using optical systems.

There is no requirement for the use of government materials, equipment, data, or facilities in the execution of this SBIR. The developed technology concept shall be demonstrated on commercial-off-the-shelf equipment to show its effectiveness in a generic telescope system.

The technologies cited are just examples of potential technologies for this program and are not meant to limit exploration of other innovative technological solutions.

PHASE I: Identify technologies for reducing solar glare while tracking orbiting objects that pass visually close to the Sun. Perform modeling and simulation to determine which technologies reduce glare effects. Develop an initial concept design for the most promising technological solution. Produce a detailed analysis of the predicted performance of this concept in reducing the exclusion angle by 25%, with a comparison to current glare reduction methods.

PHASE II: Develop, test, and demonstrate a prototype glare reducing sensor system, including any software and processing. This sensor system must be readily adaptable to work on standard, commercial-off-the-shelf telescopes (Meade, Celestron, etc.). Assess the utility and skill level required for implementation to carry out observations in the field under both ideal and non-ideal conditions (e.g., extreme temperatures or high humidity).

PHASE III DUAL USE APPLICATIONS: Manufacture and market sensors for customers who will be tracking objects that pass visually close to the Sun. Since the envisioned concept would be optics-agnostic, it should open a wider field of customers who could benefit from this technology using standard lens assemblies.

REFERENCES:

1. http://www.cs.cmu.edu/~ILIM/projects/IM/aagrawal/sig08/index.html.

2. http://web.media.mit.edu/~hangzhao/modulo.html.

KEYWORDS: glare reduction, daylight observation, satellite tracking, solar exclusion angle


AF171-025

TITLE: Realtime Multiframe Blind Deconvolution (MFBD) for Imaging through Turbulence

TECHNOLOGY AREA(S): Battlespace, Electronics, Sensors


The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with section 5.4.c.(8) of the solicitation and within the AF Component-specific instructions. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws. Please direct questions to the AF SBIR/STTR Contracting Officer, Ms. Gail Nyikon, gail.nyikon@us.af.mil.

OBJECTIVE: Develop a standalone processing unit that applies a multiframe blind deconvolution (MFBD) algorithm to an input stream of live, turbulence-degraded satellite imagery. Corrected image estimates should be displayed to the user in real-time.

DESCRIPTION: Applications such as ground-based space imaging, remote sensing, and target identification must contend with real-time atmospheric fluctuations that distort optical imagery. Adaptive optics (AO) can be used to mitigate atmospheric and local optical distortions; however, even with AO, residual distortions generally prevent an optical system from achieving diffraction-limited resolution for a variety of reasons. Moreover, some applications may not support the beacon requirements or additional size, weight, power, and cost (SWaP-C) of adding AO to an optical system.

The aim of this topic is to develop the capability to apply MFBD image reconstruction to both AO-compensated and uncompensated camera imagery in near-real-time. Generally, MFBD is used as a post-processing technique to remove residual imaging distortions over a set of images. It is assumed that within an image set, the object of interest does not change significantly and that each frame contains a different point spread function (PSF) due to the turbulent media. Depending on the particular MFBD algorithm, further assumptions may be made. For the Phase I and II efforts, proposals should focus on algorithms tailored to turbulence-degraded imagery of resolved space objects. Reference [1] presents an overview of MFBD techniques and some variations. More recent research on alternate MFBD techniques can be found in the literature. Proposals should motivate the specific algorithm(s) chosen.
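
For concreteness, one classical MFBD variant alternates Richardson-Lucy updates between the shared object estimate and a per-frame PSF estimate. The sketch below is a minimal, unoptimized illustration of that scheme under periodic boundary conditions; it is one possible algorithm, not a prescription of what a proposal should choose.

```python
import numpy as np

def fftconv(a, b):
    # Circular convolution via FFT (assumes periodic boundaries).
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def fftcorr(a, b):
    # Circular cross-correlation via FFT (the adjoint of fftconv).
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

def mfbd_rl(frames, n_iter=20, eps=1e-12):
    """Alternating Richardson-Lucy multiframe blind deconvolution.

    frames: sequence of same-sized 2-D arrays showing one unchanging
    object through a different atmospheric PSF in each frame."""
    obj = np.mean(frames, axis=0)  # initial object estimate: frame average
    psfs = [np.full(f.shape, 1.0 / f.size) for f in frames]  # flat PSF guesses
    for _ in range(n_iter):
        for k, frame in enumerate(frames):
            # RL update of this frame's PSF, with the object held fixed
            ratio = frame / (fftconv(obj, psfs[k]) + eps)
            psfs[k] *= fftcorr(ratio, obj) / (obj.sum() + eps)
            psfs[k] = np.clip(psfs[k], 0, None)
            psfs[k] /= psfs[k].sum() + eps  # PSF stays nonnegative, unit-sum
            # RL update of the shared object, with the PSF held fixed
            ratio = frame / (fftconv(obj, psfs[k]) + eps)
            obj *= fftcorr(ratio, psfs[k])
            obj = np.clip(obj, 0, None)
    return obj, psfs
```

The per-iteration cost is dominated by FFTs of the frame size, which is what makes the real-time objective a natural fit for massively parallel hardware.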

Currently, MFBD requires significant computational time and/or supercomputer resources to process image sets. This effort aims to leverage emerging massively parallel computing hardware to provide a real-time MFBD capability in a standalone computing system. Bidders are not restricted to any particular hardware scheme, leaving the door open for a range (or mix) of computing technologies. Image frames up to 512 × 512 pixels may be provided at up to 200 Hz, and reconstructed images should be displayed to the user at a minimum of 2 Hz. The system should adjust accordingly as frame rates or frame sizes change, without significant effort from the user. A number of frame-queuing strategies could be envisioned (one possibility is sketched below), and bidders will be expected to motivate a particular strategy in their Phase I efforts. It is expected that some level of interfacing will be needed to provide the system with the most recent camera dark captures corresponding to specific camera settings.
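
As one hypothetical queuing strategy (purely illustrative; the solicitation does not prescribe one): keep a sliding window of the most recent frames in a bounded ring buffer so the 200 Hz acquisition thread never blocks, and let the reconstruction worker snapshot the window at its own cadence of 2 Hz or better. The window length of 32 below is an assumed tuning knob.

```python
import collections
import threading

class FrameWindow:
    """Sliding window of the most recent camera frames.

    The camera thread pushes at up to 200 Hz; the MFBD worker takes a
    snapshot of the latest `maxlen` frames at its own (>= 2 Hz) cadence.
    Older frames are silently dropped, so the producer never blocks."""

    def __init__(self, maxlen=32):  # assumed window size, for illustration
        self._frames = collections.deque(maxlen=maxlen)
        self._lock = threading.Lock()

    def push(self, frame):          # called from the camera acquisition thread
        with self._lock:
            self._frames.append(frame)

    def snapshot(self):             # called from the reconstruction thread
        with self._lock:
            return list(self._frames)
```

Each snapshot would feed an MFBD routine such as the sketch above; dropping stale frames bounds display latency whenever reconstruction cannot keep pace with the camera.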

Proposed designs will be evaluated based upon the following criteria (in this order):

1) Achievable frame rates
2) Reconstructed image quality
