Doctor of Philosophy in Public Affairs
Environmental and Public Affairs
The Government Performance and Results Act (GPRA) of 1993 requires government agencies to conduct performance measurements of their contractors for purposes of evaluation and comparison. To be most meaningful, performance comparisons need to consider all characteristics of importance to the agency. Yet bounded rationality theory holds that managers of complex programs may have insufficient time and resources to consider all potentially relevant factors. Metrics used for decision making therefore need to incorporate all relevant factors before the information reaches decision makers.
Over the last several decades, government agencies have increasingly identified quality assurance compliance as a characteristic of concern for government contractors. Nevertheless, government agencies, such as the United States Department of Energy (DOE), infrequently conduct quantitative performance comparisons of their contractors with respect to quality assurance compliance. When they do conduct such comparisons, they generally rely on results from quality assurance audits. However, while audit results are quantitative and readily available, they generally do not address all relevant factors. Providing these incomplete data to decision makers increases the risk of making less than optimal decisions.
This research investigated the feasibility of using statistical regression techniques to transform raw audit results into more meaningful data that government decision makers could use to meet the intent of the GPRA's performance comparison requirements. The research used existing data from 398 DOE audits of 60 government contractors to develop fixed-effects models of quality assurance compliance.
The research results show that using raw audit results for contractor performance comparisons may lead to inappropriate ranking of contractors. To rank contractors more accurately, comparison metrics that use audit results must account for audit-specific variables that increase the depth of the audit. Audit-specific variables such as audit duration, audit team size, number of audit modules, and the time between successive audits contribute to the number of issues found during an audit and need to be accounted for in relative performance metrics.
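The fixed-effects idea described above can be sketched with simulated data. Everything in this example is invented for illustration, not drawn from the study's 398 DOE audits: five hypothetical contractors, made-up coefficients, and an assumed scenario in which higher-numbered contractors simply receive longer (deeper) audits. The sketch shows how raw issue counts can rank the most deeply audited contractor worst, while a model with contractor dummies and the four audit-specific covariates recovers the genuinely weakest performer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_contractors, audits_per = 5, 20

# Hypothetical "true" compliance effects: contractor 0 is genuinely the
# weakest performer (produces the most issues, all else equal).
true_effect = np.array([5.0, 0.0, 0.0, 0.0, 0.0])

rows = []
for c in range(n_contractors):
    for _ in range(audits_per):
        # Assumed scenario: audit depth varies by contractor, so raw
        # issue counts conflate depth with performance.
        duration = rng.uniform(2.0, 10.0) + 4.0 * c   # audit days
        team = rng.uniform(2.0, 6.0)                  # auditors on the team
        modules = rng.uniform(1.0, 8.0)               # audit modules reviewed
        interval = rng.uniform(6.0, 36.0)             # months since last audit
        issues = (true_effect[c] + 1.5 * duration + 2.0 * team
                  + 1.0 * modules + 0.2 * interval + rng.normal(0.0, 2.0))
        rows.append((c, duration, team, modules, interval, issues))

data = np.array(rows)
c_idx = data[:, 0].astype(int)
y = data[:, 5]

# Design matrix: one dummy per contractor (the fixed effects) plus the
# four audit-specific covariates named in the abstract.
X = np.hstack([np.eye(n_contractors)[c_idx], data[:, 1:5]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

fixed_effects = beta[:n_contractors]   # depth-adjusted contractor effects
raw_means = np.array([y[c_idx == k].mean() for k in range(n_contractors)])
```

In this simulation, `raw_means` flags the contractor that merely received the longest audits, whereas `fixed_effects` isolates the per-contractor component after the depth-of-audit variables are accounted for.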
Assessment; Benchmarking (Management); Government Performance and Results Act of 1993 (United States); Public administration; Quality assurance--Auditing
Public Administration | Public Affairs, Public Policy and Public Administration | Public Policy
Keeler, Raymond E., "Interorganizational Performance Comparisons Using Quality Assurance Audit Results" (2014). UNLV Theses, Dissertations, Professional Papers, and Capstones. 2105.