To construct the value hierarchy, the criteria have been placed into three buckets: time, benefits, and operational suitability. Time speaks to how long it will take for specific aspects of the alternative to be available. Benefits speaks to the key benefits that are a priority for the agency to realize with the transition to NextGen. Operational suitability is the degree to which a system can be placed satisfactorily in field use.
When determining the criteria, we worked to ensure that all measures are independent, with little to no overlap. The criteria were vetted through three rounds with designated subject matter experts, including the sponsor. Below are the agreed-upon criteria and a description of each measure. At the end of each description is a numerical explanation of what is meant when placing the measure on a scale from 1 to 10. These numerical values were also vetted and approved by subject matter experts.
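As a sketch of how the vetted 1-to-10 measures might be combined, a simple weighted additive value model is shown below. The machine-readable criterion names mirror the measures described in this section, but the weights and example scores are hypothetical assumptions for illustration, not values from the actual analysis.

```python
# Illustrative weighted additive value model over the criteria in this
# section. The weights below are hypothetical placeholders, not the
# weights used in the analysis.
CRITERIA_WEIGHTS = {
    "time_to_mature_capability": 0.15,
    "time_to_adapt_train": 0.10,
    "maintain_increase_throughput": 0.15,
    "rnp_utilization_predictability": 0.15,
    "fuel_emissions": 0.10,
    "reliability": 0.10,
    "controller_acceptability": 0.10,
    "system_use": 0.05,
    "target_accuracy": 0.05,
    "collision_risk": 0.05,
}

def total_value(scores: dict) -> float:
    """Combine per-criterion scores (each on the 1-10 scale) into a
    single weighted value for one alternative."""
    assert abs(sum(CRITERIA_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(CRITERIA_WEIGHTS[name] * s for name, s in scores.items())

# Example: an alternative scoring 7 on every criterion totals ~7.0.
example = {name: 7 for name in CRITERIA_WEIGHTS}
print(total_value(example))
```

Because the weights sum to 1, the total stays on the same 1-to-10 scale as the individual measures, which keeps the combined scores for TSS and RPI directly comparable.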
Time to Mature Capability: This metric represents how mature the capability is at this point in time. It is a quantitative metric, as both tools underwent a maturity assessment as recently as September 30, 2013. In terms of the analysis, 1 = TRL 1 or 2, 5 = TRL 4, 10 = TRL 9, where TRL is the Technology Readiness Level of the capability. We assume that each capability would be brought to a maximum of TRL 9 before the next stage in the lifecycle. The figure below describes each level in the TRL framework.
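The TRL anchors above (score 1 at TRL 1 or 2, 5 at TRL 4, 10 at TRL 9) can be turned into a continuous value function by piecewise-linear interpolation. The interpolation between anchors is an assumption; the document fixes only the anchor points themselves.

```python
import numpy as np

# Anchor points from the text: TRL 1 or 2 -> score 1, TRL 4 -> score 5,
# TRL 9 -> score 10. Linear interpolation between anchors is assumed.
TRL_ANCHORS = [1, 2, 4, 9]
SCORE_ANCHORS = [1, 1, 5, 10]

def trl_score(trl: float) -> float:
    """Map a Technology Readiness Level (1-9) to the 1-10 value scale."""
    return float(np.interp(trl, TRL_ANCHORS, SCORE_ANCHORS))

print(trl_score(4))    # 5.0
print(trl_score(9))    # 10.0
print(trl_score(6.5))  # 7.5 (halfway between the TRL 4 and TRL 9 anchors)
```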
Figure : Technology Readiness Levels
Time to Adapt/Train: This metric is based upon research and development performed to date. Because RPI is incrementally more mature, it requires a much shorter timeframe than TSS; TSS will therefore take longer for site adaptation and training. For the purpose of this analysis, we recognize this time as a recurring measure, as this step will need to take place at each site. This number is quantitative, based upon analysis. In terms of the analysis, 1 = one year or more, 5 = five months, and 10 = one month.
Maintain/Increase Throughput: Throughput is a measure of the number of landings per hour on a given runway. This metric is a qualitative relationship based upon data derived from both TSS and RPI simulations. In terms of the analysis, 1 = 0% increase in throughput, 5 = 5% increase, 10 = 10% or greater increase in throughput.
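The throughput anchors (0% → 1, 5% → 5, 10% or more → 10) can likewise be expressed as a small scoring helper. The linear interpolation between anchors and the clamping at the ends of the scale are assumptions for illustration.

```python
def throughput_score(pct_increase: float) -> float:
    """Map percent throughput increase to the 1-10 scale using the
    anchors in the text: 0% -> 1, 5% -> 5, 10%+ -> 10. Linear
    interpolation between anchors is an assumption."""
    if pct_increase <= 0:
        return 1.0
    if pct_increase >= 10:
        return 10.0
    if pct_increase <= 5:
        return 1.0 + pct_increase * (4.0 / 5.0)  # 0-5% spans scores 1-5
    return 5.0 + (pct_increase - 5.0)            # 5-10% spans scores 5-10

print(throughput_score(2.5))  # 3.0
print(throughput_score(7.5))  # 7.5
```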
RNP Utilization/Predictability: This metric represents a key objective: making arrivals as efficient as possible using PBN procedures. TSS provides a toolset that maximizes efficiency because it is based upon an absolute schedule. RPI does provide greater efficiency than baseline operations but is not as efficient as TSS, as it is a relative tool. Also included in this metric is the ability of controllers to keep aircraft on RNP approaches. TSS has proved extremely effective at keeping airplanes on their RNP curved-path approaches. While RPI has also proven effective at allowing controllers to keep aircraft on PBN procedures, an evaluation of how many aircraft have been taken off their RNP curved-path approach has not been conducted. Nonetheless, TSS demonstrates a clear gain in efficiency, with controllers keeping aircraft on their approaches 95% of the time. In terms of the analysis, 1 = 50% of aircraft stay on approach, 5 = 75% of aircraft stay on approach, 10 = 100% of aircraft stay on approach.
Fuel/Emissions: This metric is based on both qualitative and quantitative data. While an apples-to-apples comparison of the two capabilities cannot be performed, data and subject matter expert opinion indicate that TSS will provide greater fuel and emissions savings than RPI. In terms of the analysis, 1 = 5% savings on fuel/emissions, 5 = 10% savings, 10 = 15% savings.
Reliability: This is the ability of the system to perform and maintain its functions in routine as well as unexpected circumstances. This includes off-nominal situations, in which controllers face difficult conditions that test the system's sensitivity and flexibility. This is a qualitative assessment based upon subject matter expertise. In terms of the analysis, 1 = reliable 10% of the time, 5 = reliable 75% of the time, 10 = reliable 100% of the time.
Controller Acceptability: This metric represents the degree of buy-in controllers have provided regarding both capabilities, including human factors elements (e.g., reduced workload). It is based upon controller involvement in both RPI and TSS simulations and their subsequent feedback, which has been documented in simulation result reports. In terms of the analysis, 1 = no buy-in, 5 = partial buy-in, 10 = full buy-in.
System Use: This metric represents how many facilities will be able to use the capability. TSS depends on the facility having TBFM, whereas RPI has no similar constraint; both capabilities depend on STARS. Which facilities would gain benefit from either capability is also taken into account. The values assigned for this metric are qualitative, based upon subject matter expertise regarding all the factors listed above. In terms of the analysis, 1 = 0 facilities able to use the capability, 5 = 35 facilities, 10 = 70 or more facilities.
Target Accuracy: In specific terms, accuracy is the degree of closeness to the actual value. For this analysis, we focus on the accuracy of the information the system displays to the controllers: the more accurate the information, the more precisely controllers can deliver aircraft to the runway. This is also a qualitative assessment based upon subject matter expertise. In terms of the analysis, 1 = not accurate, 5 = somewhat accurate, 10 = very accurate.
Collision Risk: This metric was included to show that none of these capabilities truly carries a collision risk. All of these tools are decision support tools for the controllers, and controllers remain ultimately responsible for separation of aircraft. In terms of the analysis, 1 = 0.001% risk, 5 = 0.0001% risk, 10 = 0.00001% risk.
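Because the collision risk anchors sit one order of magnitude apart (0.001% → 1, 0.0001% → 5, 0.00001% → 10), an intermediate risk estimate is most naturally scored by interpolating on a logarithmic scale. That choice, and the helper below, are assumptions for illustration; the document specifies only the three anchors.

```python
import math
import numpy as np

# Anchors from the text: 0.001% risk -> 1, 0.0001% -> 5, 0.00001% -> 10.
# Interpolating in log10(risk) space is an assumption.
RISK_ANCHORS_PCT = [1e-3, 1e-4, 1e-5]
RISK_SCORES = [1, 5, 10]

def collision_risk_score(risk_pct: float) -> float:
    """Map a collision-risk percentage to the 1-10 scale, log-linearly
    between the anchors (clamped at the ends)."""
    # np.interp requires increasing x, so interpolate on -log10(risk).
    x = [-math.log10(r) for r in RISK_ANCHORS_PCT]  # [3.0, 4.0, 5.0]
    return float(np.interp(-math.log10(risk_pct), x, RISK_SCORES))

print(collision_risk_score(1e-4))  # ~5.0
```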