PHASE III DUAL USE APPLICATIONS: Applications include all industrial computed tomography (CT) inspection applications, both military and commercial. CT applications include airport baggage inspection systems and medical diagnostic equipment. Artifacts are the bane of all CT systems, whether x-ray, neutron, acoustic, ultra-sound, or seismic. One or more algorithms which prevent artifacts from forming will have a marketplace in and for all such systems. For example, the government is currently purchasing thousands of CT systems for baggage inspection in airports. All of these systems suffer from such artifacts.
REFERENCES:
1) Smith, B. D., Cone-Beam Tomography: Recent Advances and a Tutorial, Opt. Eng., 1990, vol. 29, no. 5.
2) Axelsson-Jacobson, C., Guillemaud, R., Danielsson, P.-E., Grangeat, P., Defrise, M., and Clark, R., Comparison of 3D Reconstruction Methods from Cone-Beam Data, in Three-Dimensional Image Reconstruction in Radiation and Nuclear Medicine, Grangeat, P. and Amans, J.-L., Eds., Dordrecht: Kluwer, 1996.
3) Herman, G., Image Reconstruction from Projections. The Encyclopedia of Computerized Tomography , New York: Academic, 1980.
4) Tuy, H. K., An Inversion Formula for Cone-Beam Reconstruction, SIAM J. Appl. Math., 1983, vol. 43, no. 3, pp. 546–552.
5) Badazhkov, D. V., Some Algorithms for Tomographic 3D Reconstruction in Problems with a Helical Trajectory of a Radiation Source, in INPRIM-2000, Novosibirsk, 2000.
6) Trofimov, O., Kasjanova, S., and Badazhkov, D., Algorithms of 3D Cone Beam Tomography for Incomplete Data, Proc. 1st World Congress on Industrial Process Tomography, Buxton, 1999, pp. 181–183.
7) Badazhkov, D. V., Some of the Algorithms for the Analysis of a Photoplethysmogram, Pattern Recognit. Image Anal., 1999, vol. 9, no. 2, pp. 341–343.
KEYWORDS: Computed tomography, radiography, x-ray, nondestructive inspection, baggage inspection, tomographic artifacts.
A03-020 TITLE: 3-D HyperSpectral Microbolometer
TECHNOLOGY AREAS: Materials/Processes, Sensors
ACQUISITION PROGRAM: PEO Ammunition
OBJECTIVE: Develop a 3-D microbolometer in which each layer of the single MEMS device measures the IR energy within a different narrow spectral band.
DESCRIPTION: Uncooled microbolometers are used for infrared imaging. Current microbolometer pixels are mechanically situated in a single plane, with each pixel absorbing a broad band of infrared energy. This solicitation is for the development, design, and fabrication of stacked layers of microbolometer pixels in a single chip, in which each layer absorbs energy in a narrow infrared band and passes the remaining energy on to the layer below. The structure, as a whole, will measure the intensity of the infrared energy in narrow spectral bands for each picture element in the scene; i.e., the 3-D microbolometer structure will simultaneously measure the entire data cube, as it is called in the HyperSpectral industry. The number of spectral bands should exceed 63; the spectral resolution should be finer than one tenth of a micron; the broad spectral region should be approximately 8 to 25 microns; the number of spatial pixels should be 256 by 256 or more; and the response speed of the device should be greater than 20 data cubes per second.
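For scale, the requirements above imply a substantial raw data rate for the device's read-out electronics. A rough sketch of the arithmetic (the 16-bit sample depth is an illustrative assumption, not a requirement from this topic):

```python
# Back-of-envelope data-rate estimate for the solicited 3-D microbolometer.
# Band count, spatial size, and cube rate come from the topic text;
# the 16-bit (2-byte) sample depth is an assumption for illustration.

def cube_rate_bytes_per_s(bands=64, rows=256, cols=256,
                          bytes_per_sample=2, cubes_per_s=20):
    """Raw data rate, in bytes per second, of a hyperspectral cube stream."""
    samples_per_cube = bands * rows * cols
    return samples_per_cube * bytes_per_sample * cubes_per_s

rate = cube_rate_bytes_per_s()
print(f"{rate / 1e6:.1f} MB/s")  # → 167.8 MB/s
```

Even at the minimum required band count and frame rate, the device must therefore sustain well over 100 MB/s of raw output.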
PHASE I: Design the 3-D microbolometer. Provide convincing evidence that the device can be realistically fabricated using available manufacturing technology. Provide convincing evidence that the device can acquire a HyperSpectral infrared image. Compare the device design to that of the best, most efficient, and fastest microbolometer designs then available. Provide evidence that the developer can produce at least a fully operational prototype device within the Phase II budget.
PHASE II: Fabricate a self-contained, fully operational 3-D microbolometer that includes all the electronics, power supplies, computational hardware, software, etc., to acquire HyperSpectral infrared images.
PHASE III DUAL USE APPLICATIONS: HyperSpectral imaging has already been shown to have a huge military and commercial market. Applications abound in target acquisition, battlefield assessment, LADAR, missile guidance, non-destructive inspection, surveillance, medical diagnostics, chemical analysis, process control and many other fields. A device of the nature being solicited could be used wherever infrared hyperspectral imaging is appropriate. But such a device could outperform all others in terms of robustness, acquisition speed, and field hardening. These attributes are essential to many of the aforementioned applications.
REFERENCES:
1) http://weewave.mer.utexas.edu/MED_files/MED_research/microbolometers/bolo_paper/IRMMW_bolo_paper.html
2) http://www.aticourses.com/hyperspectral_imaging.htm
3) http://www.techexpo.com/WWW/opto-knowledge/IS_resources.html
KEYWORDS: HyperSpectral imaging, thermal imaging, microbolometers
A03-021 TITLE: Innovative Automatic Warhead Optimization and Modeling
TECHNOLOGY AREAS: Materials/Processes, Weapons
ACQUISITION PROGRAM: Multi-Role ATD Manager, ARDEC
OBJECTIVE: Develop an innovative, semi-automatic warhead optimization and modeling system using hydrocode simulation with design sensitivity analysis and stochastic methods.
DESCRIPTION: There is a need to quickly develop and field new lightweight warheads for Future Combat Systems (FCS). Automatic optimization and modeling software is needed, but current optimization methods assume smooth, continuous response functions. Warhead simulations, however, are based on complex hydrocode simulations that are subject to considerable numerical noise. This noise introduces errors into the simulations that are not physics based, including errors in the calculated velocities, stresses, and strains; it may also cause the simulations to terminate prematurely. Additionally, simulations must be calibrated against experiments that are themselves subject to many noise sources, ranging from manufacturing imperfections to test measurement uncertainty. Numerical noise can be greatly reduced by analytical differentiation of the equations of motion through the process of Design Sensitivity Analysis (DSA), improving the accuracy of the velocity, stress, and strain predictions to within 5% of experiment. In simulations where the noise would cause the calculation to go unstable, these new techniques would enable the simulations to run to completion. The optimization process must be suitable for large numbers of design variables. Stochastic methods should also be used in this optimization procedure to account for simulation and experimental noise; one approach might be an extension of Kriging methods, which are used for fitting data subject to large amounts of noise. The resulting software system should be able to take a set of design requirements, search an existing database for similar experimental results from previous tests, and produce a candidate design with a minimum of user intervention (reduced from user manipulation every 100 calculation cycles to near-autonomous completion).
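As one illustration of the Kriging-style fitting mentioned above, the sketch below fits noisy samples of a smooth response surface with a Gaussian-process (simple Kriging) predictor whose noise ("nugget") term keeps the fit from chasing the noise. The kernel, length scale, and noise level are illustrative assumptions, not values from this topic:

```python
import numpy as np

def kriging_predict(x_train, y_train, x_query, length_scale=1.0, noise_var=0.1):
    """Simple Kriging (zero-mean Gaussian-process) prediction with a
    'nugget' noise term, so the fit smooths over numerical/experimental
    noise rather than interpolating it exactly."""
    def k(a, b):
        # Squared-exponential covariance between two 1-D point sets.
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)

    K = k(x_train, x_train) + noise_var * np.eye(len(x_train))
    weights = np.linalg.solve(K, y_train)
    return k(x_query, x_train) @ weights

# Noisy samples of a smooth response (a stand-in for hydrocode output).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 25)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)
xq = np.array([1.0, 2.0, 3.0])
print(kriging_predict(x, y, xq))  # close to sin(xq) despite the noise
```

An extension along the lines the topic suggests would estimate the nugget term from replicate simulations so that the optimizer treats noise-level differences between designs as uncertainty rather than signal.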
PHASE I: Design an automatic warhead optimization and modeling software. Demonstrate the ability to compute analytical derivatives of major warhead performance variables based on typical input design variables using DSA in hydrocode simulations. Use this procedure to automatically iterate for an optimal warhead design.
PHASE II: Produce a robust system for automatic warhead design.
PHASE III DUAL USE APPLICATIONS: In addition to military applications, design optimization using hydrocode simulation is performed in many different industries, with applications ranging from car crashworthiness to metal forming to bird ingestion in jet engines.
REFERENCES:
1) Stillman, D., Design Sensitivity Analysis for Structures Using Explicit Time Integration, AIAA 2000-4906.
2) Stillman, D., "Development of a Design Sensitivity Analysis Technique for Explicit Finite Element Software with Applications in Crashworthiness," ASME 2002 Symposium on Design Automation for Vehicle Crashworthiness and Occupant Protection.
3) http://www.easi.com/software/storm (current tools used with explicit FEA, illustrating brute-force stochastic simulation)
4) http://www.geomatics.ucalgary.ca/~nel-shei/lecture.htm (Kriging theory)
5) http://www.ccad.uiowa.edu/focus/designopt/dsa.html (Design Sensitivity Analysis)
KEYWORDS: hydrocode, simulation, design sensitivity analysis, numerical noise
A03-022 TITLE: HyperSpectral Data Cube Processor
TECHNOLOGY AREAS: Information Systems, Materials/Processes, Sensors
ACQUISITION PROGRAM: PEO Ammunition
OBJECTIVE: Fabricate a computer processor capable of processing HyperSpectral images at rates of 30 or more data cubes per second.
DESCRIPTION: Hyperspectral images can easily exceed 100 MB in size, consisting of more than one hundred spectral bands and more than one million pixels. The images need to be calibrated, corrected, and spectrally matched to known spectra. The processing algorithms will include (a) large convolutions, which may run in parallel on all of the pixels; (b) possible spatial transformations; (c) scalar and vector products and differences; (d) table lookup; etc. The results need to be output at video frame rates to common display devices as images. Current processors do not come close to such high processing rates. This solicitation is for the design and fabrication of such a device. Consideration will be given only to those proposals where it is clear that the Phase II effort will actually fabricate, test, and implement the processor in a hyperspectral system.
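The per-pixel spectral matching step is essentially dense linear algebra, which is what makes it a candidate for a parallel processor. A minimal sketch using the spectral-angle (normalized vector product) measure; the tiny cube and spectral library below are hypothetical:

```python
import numpy as np

def spectral_angle_match(cube, library):
    """Match each pixel spectrum in a (rows, cols, bands) data cube to the
    closest library spectrum by spectral angle -- a normalized vector
    product, one of the per-pixel operations the topic lists."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    # Normalize pixel and library spectra to unit length.
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    lib_n = library / np.linalg.norm(library, axis=1, keepdims=True)
    cos_angle = p @ lib_n.T                  # (n_pixels, n_library)
    return cos_angle.argmax(axis=1).reshape(rows, cols)

# Hypothetical example: a 2x2 cube and two 4-band library spectra.
lib = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 1.0]])
cube = np.array([[[0.9, 0.1, 0.0, 0.0], [0.0, 0.1, 0.8, 0.9]],
                 [[1.0, 0.0, 0.1, 0.0], [0.1, 0.0, 1.0, 1.1]]])
print(spectral_angle_match(cube, lib))
```

Because every pixel's match is independent, this inner product sweep parallelizes trivially across pixels, which is the property the solicited processor would exploit.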
PHASE I: Design the hyperspectral data cube processor. In order to meet the Phase II requirements, the design must build on prior techniques or technology. It is expected that the commitment for the fabrication of the ASIC will be completed in the first six months of Phase II. The contractor's design must be evaluated by at least one external expert in the processor field, and the evaluation results must be reported to the government.
PHASE II: Fabricate a self-contained, working system which can be directly attached to the HyperSpectral VIS-NIR Imager, Model 700, fabricated by Surface Optics Corporation; acquire hyperspectral images; and display the analysis results.
PHASE III DUAL USE APPLICATIONS: HyperSpectral imaging has already been shown to have a huge military and commercial market. Applications abound in target acquisition, battlefield assessment, LADAR, missile guidance, non-destructive inspection, surveillance, medical diagnostics, chemical analysis, process control and many other fields. A device of the nature being solicited could be used wherever infrared hyperspectral imaging is appropriate. The device should outperform all others in terms of robustness, acquisition speed, and field hardening. These attributes are essential to many of the aforementioned applications.
REFERENCES:
1) http://www.aticourses.com/hyperspectral_imaging.htm
2) http://www.techexpo.com/WWW/opto-knowledge/IS_resources.html
3) http://www.surfaceoptics.com/
KEYWORDS: HyperSpectral imaging, parallel processors, data cube
A03-023 TITLE: Measurement of Career Leadership Performance
TECHNOLOGY AREAS: Human Systems
ACQUISITION PROGRAM: Center for Army Leadership, CGSC
OBJECTIVE: To develop a measurement system for leadership performance of Objective Force leaders that accounts for cumulative experiences in applicable career areas. Assessment approaches should use objective, unbiased measures of leadership and should be specific to a leader’s experience in positions of responsibility and the types of career opportunities open to him or her. The product will be used to aid leader development in institutional, operational, and self-development domains. The product, when widely applied, will have secondary applications: to screen for appropriate development tracks, to inform Army educational institutions about appropriate course timing and duration, and to set optimal assignment paths.
DESCRIPTION: Long-range measures of leadership performance are needed to understand the impact of leadership over time and at various points in one’s career. Measures will lead to feedback to guide the development of leaders. Measures will be applicable for self-development and for institutionally-directed education. The Army Training and Leader Development Panel Officer Study found that 'junior officers are not receiving adequate leader development experiences,' [junior officers] 'do not believe they are being afforded sufficient opportunity to learn from the results of their own decisions and actions,' and 'personnel management requirements drive operational assignments at the expense of quality developmental experiences' (ATLDP, 2000).
Leaders develop over time based on unintentional experience and intentional attention to needs and goals. Development occurs because of an integration of trial and error experience, education and thoughtful reflection, and observation of the good and bad examples of others. Change can occur suddenly and rapidly or continuously and steadily (Weick & Quinn, 1999). The pattern of change over time can be revealing about a leader. But the Army has no theory-based system of leader development based on change over time. More importantly, there is no adopted system to measure the types, frequencies, and qualities of experiences that influence leader development. The Army is different from most civilian organizations because it grows its leaders “from the ground up” and moves leaders through a series of assignments in which they can develop for subsequent positions of higher responsibility.
People will remain the centerpiece of the Army, and growing leaders will be one of its most essential missions. Leaders will be relied on to outthink and dominate adversaries through speed and decisive action. Objective Force leaders will require a collection of interpersonal, conceptual, technical, tactical, mental, physical, and emotional competencies and the ability to learn, be self-aware, and adapt. These requirements need to mature earlier in leaders’ careers. Leaders must grow with the positions they assume to fully anticipate the higher order effects of their actions (Objective Force, 2002).
Standard industrial/organizational psychology methods could link measures of leadership performance to job requirements. However, traditional job analysis would be complicated in this application by the number of different positions and skill classifications. Costs for a thorough job analysis of Army leader positions that constantly change are prohibitive. Rigorous job analyses that represent a bottom-up approach are impractical. In the Army, as with many organizations, great variability of job demands exists within positions of supposedly the same level of authority and responsibility.
In the absence of a rigorous job analysis, leader assessment tools are oftentimes based on leader attributes rather than objective measures of behaviors and outcomes. These subjective measures are often one-shot, self-report measures that attempt to tap job-related constructs. Measures that capture leader performance at only one point in time are insufficient to support leader development. These snapshots of attributes are limited in scope and do not provide the objective behavioral information needed for leader development. In addition, self-report measures may be subject to social desirability bias and faking (Paulhus, 1986).
On the other hand, biodata measures (activities, accomplishments, experiences) may be sufficiently free of social desirability bias and faking to be applicable to a career-oriented measurement framework. Biodata measures might focus on unit ratings, awards, climate, morale, and retention, or on individual experiences considered to be significant learning opportunities. Measures that address growth or development over a period of time or in stages are reasonable candidates to consider. Records of work assignments (McCauley, Eastman & Ohlott, 1995) and the development that results can provide a basis for identifying appropriate measures. Retrospective measures confirmed by outside sources may be suitable substitutes for longitudinal measures taken at time intervals; longitudinal measures are not practical for validation research and may be too limiting for career development. Analysis of actual leadership challenges and situations for an individual would provide insight into leadership beliefs, styles, and capabilities. Systems of measurement should be explored to consider the merits of alternate approaches, e.g., modification of receiver operating characteristic (ROC) curves from signal detection theory (Swets, 1964), personal strategies for leveraging talents and compensating for weaknesses, biodata (Mael & Schwartz, 1991), and measures of variability such as Weiss & Shanteau’s index of consistent discrimination variability (CWS).
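One of the candidate approaches above, the ROC curve from signal detection theory, can be summarized by its area under the curve: the probability that a randomly chosen effective leader scores above a randomly chosen ineffective one. A minimal sketch; the scores and effectiveness labels are hypothetical:

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the pairwise rank-sum: the fraction of
    (positive, negative) pairs in which the positive score is higher, with
    ties counting one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical assessment scores for leaders judged effective vs. not.
effective = [0.9, 0.8, 0.7, 0.6]
not_effective = [0.5, 0.6, 0.3, 0.2]
print(roc_auc(effective, not_effective))
```

An AUC near 1.0 would indicate a measure that discriminates effective from ineffective leaders almost perfectly; an AUC near 0.5 would indicate a measure no better than chance.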
PHASE I: Phase I will produce a framework for measurement and theory building or application, selection of measurement concepts, and demonstration of proof of concept. Models of leader work experience and event-based situations shall be a fundamental aspect of the measurement framework. Measurement or characterization of multiple leadership performance instances shall be intrinsic to the concept. Profiles of measured leader instances shall be explored to characterize, from measures of performance, a leader's potential for positions of greater responsibility.
PHASE II: Phase II will involve enhanced tool and measurement system development, evaluation, and validation. Validation shall be done at various ranks and for various leader positions within the Army. The goal will be to achieve face, construct and predictive validity.
PHASE III DUAL USE COMMERCIALIZATION: Phase III will involve tailoring aspects of the measurement system to leadership domains beyond the US Army. Sister service and joint assignments would be prime candidates for immediate extension of the measurement system. Aspects of the framework will need to be tailored to make it applicable to organizations that hire into positions at all levels.
REFERENCES:
1) ATLDP (2000). The Army Training and Leader Development Panel Officer Study Report to the Army. http://www.army.mil/features/ATLD/report.pdf
2) McCauley, C. D., Eastman, L. J., & Ohlott, P. J. (1995). Linking management selection and development through stretch assignments. Human Resource Management, 34, 93-115.
3) Mael, F. A. & Schwartz, A. C. (1991). Capturing temperament constructs with objective biodata. ARI Technical Report 939. Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences. ADA 245 119
4) Paulhus, D. L. (1986). Self-deception and impression management in test responses. In A. Angleitner & J. S. Wiggins (Eds.), Personality Assessment via Questionnaires (pp. 142-165). New York: Springer.
5) Objective Force Task Force (December 2002). Objective Force in 2015 White Paper.
6) Swets, J. (1964). Signal detection and recognition by human observers. New York: Wiley and Sons.
7) Weick, K. E. & Quinn, R. E. (1999). Organizational change and development. Annual Review of Psychology, 50, 361-386.
8) Weiss, D. J. & Shanteau, J. CWS: A User’s Guide. http://www.ksu.edu/psych/cws/pdf/using_cws.pdf
KEYWORDS: Leadership, leader development, career, self-development, measurement, assessment
A03-024 TITLE: Semi-Automated Question Accumulation and Response System
TECHNOLOGY AREAS: Information Systems, Human Systems
ACQUISITION PROGRAM: TRADOC- Training Developments and Analysis Dir.
OBJECTIVE: To create and empirically validate a semi-automated system that uses desktop computer technology to answer questions posed on user-specified topics. The system would allow subject matter experts (SMEs) without high-level computer skills to load topic-specific text information into the system and manage the system. An artificial intelligence component would allow the system to refine answers automatically based on SME input and questioner feedback.
DESCRIPTION: The Objective Force will be a networked system of systems with soldiers who update their knowledge and skills through reachback capabilities and lifelong learning. To fully meet the training needs of this networked force, greater use of distributed learning and embedded training is required (TRADOC). SMEs and instructors spend a great deal of time responding to questions, many of which are redundant. A system that can appropriately answer questions would reduce the workload of SMEs and instructors, allowing them to spend more time on other crucial aspects of teaching and supporting lifelong learning. A successful system would also assist Objective Force soldiers when they have on-the-job questions that need immediate answers, are engaged in embedded training, or are performing en route mission rehearsal.
While automated question answering systems have been developed previously, many have attempted to answer questions in far-reaching content areas, like “Ask Jeeves”. This approach leads to vague answers and enormous databases that grow to an unmanageable size for a single administrator using a desktop computer. Question answering systems that cover limited content areas, like the Answer Wizard in Microsoft Help, have tended to be more successful in providing on-target responses; however, these limited content systems lack administrator control and provide “canned” responses.
While the average computer user can ask questions of these systems, both types of systems require high-level technical skills to develop the content, maintain the knowledge-base, and manage the output. Presently, there is no automated question management system that runs on a standard PC administered by a person without high-level computer skills.
The proposed system would be a shell program with the capability of parsing information provided by an administrator to answer questions in natural language. For example, an SME could load text files that come from a particular book, topic notes, and other sources. The system would then use a combination of statistical (e.g., frequency of word usage, correlation of terms) and linguistic (e.g., latent semantic analysis, natural language processing, and knowledge-base) methods to locate information and generate appropriate responses to questions acquired from natural communication media (e.g., e-mail, text messaging, threaded discussions). In addition, based on input by the SME and answer feedback from the questioners, the system should refine successive responses. The SME would have control over the output of the system to ensure that quality answers are generated, and as the system produced a higher percentage of quality answers the SME could allow the system to respond directly to the questioner. The SME would also have the ability to modify topic content. For example, a graphical file might be linked to a particular answer or set of terms, so that the graphics would be included in subsequent responses to related questions. In addition, the system should code the questions/answers in a common metadata format, so that they may be repurposed for use with a SCORM (Sharable Content Object Reference Model) compliant learning management system.
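The simplest of the statistical methods named above, word-frequency matching, can be sketched as follows. The SME-loaded passages and the question are hypothetical, and a real system would layer stemming, latent semantic analysis, and knowledge-base methods on top of this baseline:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_answer(question, passages):
    """Return the SME-loaded passage whose word-frequency profile is
    closest to the question's."""
    q = Counter(question.lower().split())
    return max(passages, key=lambda p: cosine(q, Counter(p.lower().split())))

# Hypothetical topic passages loaded by an SME.
passages = [
    "clean the rifle barrel after every firing exercise",
    "submit leave requests through the unit administrative office",
]
print(best_answer("how do i clean my rifle", passages))
```

Questioner feedback could then reweight terms or demote passages whose answers were rated unhelpful, giving the refinement loop the topic describes.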
PHASE I: Phase I should determine the feasibility of producing a question answering system that runs on a desktop computer and is administered by a person without high-level technical skills. A feasibility study with specific recommendations for the system to be developed during the Phase II effort is required by the end of Phase I.