How Can Managers Design ‘good’ Performance Measures in Organisations?
Essay by pjteys • October 31, 2017 • Essay • 1,006 Words (5 Pages) • 1,107 Views
How can managers design ‘good’ performance measures in organisations?
Performance measurement is the systematic collection of quantitatively measured outputs intended to assess whether processes are being performed at the desired level of efficiency (HRSA 2015). Effective performance measurement is firstly contingent on construct validity: the effectiveness of a measure in capturing what it is intended to capture (Michael 2013). Construct validity certifies the degree to which the measure validly captures information in a manner consistent with its intended or theoretical purpose. Organisational examples such as Criteria Corp, College Board and the Hopkins Verbal Learning Test demonstrate how content and criterion-related validity testing is used by organisations to ensure that performance measures hold construct validity. By ensuring construct validity is present in all measures, management ensures that its decisions are grounded in measures that accurately reflect what they are intended to capture.
Construct validity certifies the degree to which a measure captures information in a manner consistent with its intended or theoretical purpose (Carmines & Zeller 1987). Classical test theory disaggregates construct validity into content validity, the effectiveness of the measure in capturing a complete, representative sample of the intended behaviour, and criterion validity, the statistical strength of the relationship between the measure and the outcome it is intended to measure (Haynes 2013). By design, performance measures should display both content and criterion validity; however, managers' failure to appropriately select the size, composition and units of measurement of input data can render them invalid. When this occurs, performance measures fail to provide accurate information on their intended purpose, which invariably leads to less than optimal decision making within the organisation. It is therefore paramount for organisations to implement content and criterion validation testing to ensure the effective design and upkeep of performance measurement (Nkwake 2000).
Content validation testing is the continuous process of ensuring that performance measures sufficiently capture the scope of information they are intended to (Cronbach 1990). Evidence of content validity arises from the judgement of people with specialised knowledge of the subject matter (College Board 2015). College Board, a not-for-profit organisation specialising in researching the validity of assessments within the American university system, assesses content validity through a process known as curricular validity testing. This entails a group of experts assessing the extent to which the content of a test matches the objectives of the curriculum as it is formally described (College Board 2015). Michels et al. (2016) performed a similar assessment on the competency frameworks used to assess medical students during a practical placement. To assess the content validity of these frameworks, two subject experts evaluated each item on all 120 competency frameworks and produced a descriptive analysis of whether each item provided sufficient information to enable assessment. The complexity of Michels et al.'s (2016) study highlights the primary issues of content validity testing: it is somewhat subject to the bias of the expert reviewing the measure, and it is expensive and time-consuming to implement at scale. Although it is an essential validation technique, content validity is generally costly to assess and subject to a degree of individual bias. Organisations whose goals underlying performance measures evolve rapidly are likely to need to invest heavily in ongoing content validity testing.
Criterion-related validation refers to the process of assessing a measure's ability to predict what it is theoretically intended to predict (NAP 2007). Unlike content validity, this is predominantly measured by the strength of a statistical relationship, as seen in predictive and concurrent validity testing. Predictive validity testing is the process of measuring the correlation between the scores of a predictive measure and future performance. Criteria Corporation, a supplier of pre-employment assessments, calculates the correlation coefficient, or strength of correlation, between its surveys and employees' future performance (Criteria Corp 2007). By assessing the correlation between pre-employment assessments and the employees' later performance metrics, it is able to validate whether its employment surveys have strong predictive capabilities. The principal drawback of predictive validity testing is that a large data set is required to produce an accurate correlation coefficient. Predictive validity testing is therefore likely to aid managers in assessing 'good' performance measures only when sufficient data is available.
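As a minimal sketch of the calculation described above, the predictive-validity check amounts to computing a Pearson correlation coefficient between pre-employment assessment scores and later performance ratings. The data and function names below are illustrative assumptions, not Criteria Corp's actual method or figures:

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    assert len(xs) == len(ys) and len(xs) > 1
    mx, my = mean(xs), mean(ys)
    # Sample covariance divided by the product of sample standard deviations.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical example: pre-hire assessment scores and first-year
# performance ratings for ten employees.
assessment = [62, 71, 55, 80, 68, 74, 59, 85, 66, 77]
performance = [3.1, 3.8, 2.9, 4.2, 3.4, 3.9, 3.0, 4.5, 3.3, 4.0]

r = pearson_r(assessment, performance)
print(f"predictive validity (r) = {r:.2f}")
```

A coefficient near +1 would indicate strong predictive validity; the essay's caveat about sample size applies here too, since a correlation computed on ten employees would carry a very wide confidence interval.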
...
...