About the Author(s)


Xander van Lill
Department of Industrial Psychology and People Management, College of Business and Economics, University of Johannesburg, Johannesburg, South Africa

Department of Product and Research, JVR Africa Group, Johannesburg, South Africa

Nicola Taylor
Department of Industrial Psychology and People Management, College of Business and Economics, University of Johannesburg, Johannesburg, South Africa

Department of Data Enablement, JVR Africa Group, Johannesburg, South Africa

Citation


Van Lill, X., & Taylor, N. (2022). The validity of five broad generic dimensions of performance in South Africa. SA Journal of Human Resource Management/SA Tydskrif vir Menslikehulpbronbestuur, 20(0), a1844. https://doi.org/10.4102/sajhrm.v20i0.1844

Original Research

The validity of five broad generic dimensions of performance in South Africa

Xander van Lill, Nicola Taylor

Received: 26 Nov. 2021; Accepted: 23 Mar. 2022; Published: 15 June 2022

Copyright: © 2022. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Orientation: Disconnected scholarly work on the theoretical and empirical structure of individual work performance negatively impacts predictive studies in human resource management. Greater standardisation in the conceptualisation and measurement of performance is required to enhance the scientific rigour with which research is conducted in human resource management in South Africa.

Research purpose: The present study aimed to conceptualise and empirically validate the structural validity of five broad generic dimensions of individual work performance, based on 20 narrow dimensions of performance.

Motivation for the study: A generic model and standardised measurement of individual work performance, measuring performance at the appropriate level of breadth and depth, may help human resource professionals to make accurate decisions about important work-based criteria and their related predictors. A validated generic model of performance could further increase the replicability of science around performance measurement in South Africa.

Research approach/design and method: A cross-sectional design was implemented by asking 448 managers across several organisations to rate the performance of their subordinates on the Individual Work Performance Review (IWPR). The quantitative data were analysed by means of hierarchical confirmatory factor analyses.

Main findings: An inspection of the discriminant validity of the 20 narrow performance dimensions supported the multidimensionality of performance to a fair degree. The bifactor statistical indices, in turn, suggested that the five broad factors explained a substantial amount of common variance amongst the manifest variables and could therefore be interpreted as essentially unidimensional.

Practical/managerial implications: Practitioners can interpret the broader performance dimensions in the IWPR as total scores, especially when high-stakes decisions are made about promoting or rewarding employees. The interpretation of the narrow performance dimensions might be more useful in low-stakes development situations. Cross-scale interpretations are encouraged to enable a holistic understanding of employees’ performance, as the narrow performance dimensions covary.

Keywords: individual work performance; generic performance; performance measurement; hierarchical factor analysis; five-factor model.

Introduction

According to Campbell and Wiernik (2015), individual work performance is:

[T]he basic building block on which the entire economy is based. Without individual performance there is no team performance, no unit performance, no organisational performance, no economic sector performance, no GDP. (p. 48)

Although individual work performance (hereafter referred to as performance) is one of the most important criteria used in predictive analytics in human resource management and industrial psychology, more is known about the theoretical and empirical structure of antecedents to performance, such as personality, than about performance itself (Campbell & Wiernik, 2015; Schepers, 2008). The lack of clarity and disconnected literature (Campbell & Wiernik, 2015; Carpini, Parker, & Griffin, 2017) appear to have spillover effects on research in the South African context. Based on a meta-analysis of the predictive validity of personality, Van Aarde, Meiring and Wiernik (2017) revealed shortcomings in the measurement of performance in South Africa in terms of the conceptualisation of performance models and the relevance of performance dimensions utilised in predictive studies. If human resource management and industrial psychology are to be taken seriously as scientific fields in South Africa, related scientists and practitioners must ensure the development and use of carefully conceptualised measures of performance (Van Aarde et al., 2017).

A problem often associated with the validation and prediction of performance is the difficulty in obtaining sufficiently large data sets to validate or draw predictive inferences based on job-specific performance criteria. Consequently, practitioners are often left making desk-based judgements about the nature of job criteria without empirically validating the performance measurements implemented. Available samples may be even smaller if there is a limited number of specific positions for a local validation (Myburgh, 2013). The identification and standardised measurement of generic dimensions of performance might be the first step in obtaining sufficiently large data sets to build valid and replicable science around the rating and prediction of performance in South Africa. Generic performance dimensions reflect actions, independent of specific jobs (Harari & Viswesvaran, 2018), that enable or thwart organisations in achieving their goals (Campbell & Wiernik, 2015). Replicative studies on generic performance dimensions come with the added advantage that organisations are more likely to measure the universal behaviours most likely to contribute to team and organisational effectiveness (Carpini et al., 2017; Hunt, 1996). Noteworthy research in South Africa on the conceptualisation and measurement of generic performance includes Schepers’s (2008) development of the Work Performance Questionnaire (WPQ), Myburgh’s (2013) Generic Performance Questionnaire (GPQ) and Van der Vaart’s (2021) validation of the internationally developed Individual Work Performance Questionnaire (IWPQ). Dimensions measured by the WPQ, GPQ and IWPQ are listed here:

  1. Broad, empirically derived performance dimensions in the WPQ (Schepers, 2008) include work performance, initiative or creativity and managerial abilities.

  2. Narrow theoretically derived dimensions identified and empirically validated by Myburgh (2013) include task performance, effort, adaptability, innovation, leadership potential, communication, interpersonal relations, management, analysing and problem solving, counterproductive work behaviours, organisational citizenship behaviours and self-development.

  3. Van der Vaart (2021) validated three broad dimensions of performance proposed by Koopmans et al. (2012), namely task performance, contextual performance and counterproductive behaviours.

Cronbach and Gleser (1965) captured an important problem in the measurement of human behaviour, namely the bandwidth–fidelity dilemma. They observed that ‘there is some ideal compromise between a variety of information (bandwidth) and thoroughness of testing to obtain more certain information (fidelity)’ (p. 100). The dilemma is intensified by the dissimilar approaches to the number and type of performance dimensions, as well as the level (either broad or narrow) at which generic performance is currently conceptualised and measured in South Africa. The aim of this study was to address the bandwidth–fidelity problem by proposing a variety of narrow dimensions of performance that still enable the in-depth measurement of broader performance dimensions. This was done by inspecting preliminary evidence for a simplified model of performance consisting of five broad dimensions, divided into 20 narrower performance dimensions, to extend construct coverage of performance measurement in South Africa. Instead of employing standalone unidimensional constructs, as in Schepers (2008), Myburgh (2013) and Van der Vaart (2021), hierarchical models of performance are proposed.

Literature review

The most recent published study of generic performance in South Africa supports the three-dimensional structure of the IWPQ, namely task performance, contextual performance and counterproductive work behaviours (Van der Vaart, 2021). Koopmans et al.’s (2011) systematic review served as a thorough and contemporary point of departure for conceptualising generic performance in the present study. A more recent review conducted by Carpini et al. (2017) excluded counterproductive performance, which is considered an important part of generic performance (Rotundo & Sackett, 2002). The broader structure of Carpini et al.’s (2017) performance model was considered less comprehensive than Koopmans et al.’s (2011) four-factor model of performance. However, Carpini et al.’s (2017) theoretical propositions still proved useful in the conceptualisation of some broad dimensions of the Individual Work Performance Review (IWPR).

Koopmans et al.’s (2011) initial conceptualisation of performance included adaptive performance, which was removed after an empirical study revealed statistical overlap with contextual performance (Koopmans et al., 2012). This revised model was adopted by Van der Vaart (2021) with the local validation of the IWPQ. Aguinis and Burgi-Tian (2021) argued that crises, like the COVID-19 pandemic, bring about rapid change that requires employees to cope with and respond to unfolding complexity. Myburgh (2013) further reasoned that long-term systemic change in the internal and external environment is inevitable, which requires that employees demonstrate the flexibility required to adjust. Adaptive performance was retained as a broad dimension of performance to ensure construct coverage (Aguinis, 2019; Carpini et al., 2017; Myburgh, 2013). Leadership performance was a fifth dimension included by Myburgh (2013) as a standalone narrow dimension in the GPQ but was not included in the WPQ or IWPQ. Leadership plays a central role in assisting organisations to achieve goals (Campbell, 2012; Hogan & Sherman, 2020). Demonstrating the ability to progress to higher levels of leadership is an important consideration in talent management and can therefore not be excluded as a dimension of performance (Myburgh, 2013).

Five broad dimensions of performance are proposed in the present study, namely in-role performance (task performance), extra-role performance (contextual performance or organisational citizenship behaviour), adaptive performance, leadership performance and counterproductive performance (counterproductive work behaviours). Each broad dimension has four narrow dimensions and 16 associated items (four items per narrow dimension), to ensure sufficient coverage of behaviours (Carpini et al., 2017). The restriction on dimensions was aimed at reducing the length of the measure, to ease the process of administering the review for performance feedback (e.g. 80 items × 10 employees to be rated by one manager) or to include it in predictive studies along with other predictive measures. Narrow dimensions of performance were derived from existing generic models of individual work performance (Myburgh, 2013). Narrow dimensions were considered for inclusion if they were:

  1. included in published (reputable books and journals) or unpublished (high-quality dissertations and theses) scholarly works (Viswesvaran, Schmidt, & Ones, 2005)

  2. methodologically sound, in the case where empirical findings were shared

  3. associated with one of the five broad performance dimensions

  4. defined in terms of observable behaviour (Koopmans et al., 2011)

  5. relevant to the achievement of organisational goals (Koopmans et al., 2011)

  6. generic and thus applicable to a wide variety of jobs (Myburgh, 2013; Schepers, 2008)

  7. representative of individuals’ actions (Koopmans et al., 2011).

As recommended by Campbell and Wiernik (2015) and Carpini et al. (2017), the performance model in the present study was purged of terms that are theorised to have conceptual overlap with job knowledge and skills (antecedents to performance), such as analysing and problem solving. Communication as a skill that facilitates performance is difficult to disentangle from performance itself and was therefore excluded. However, it should be observed that communication has been highlighted as a narrow dimension of generic performance in previous models by, for example, Myburgh (2013), Viswesvaran et al. (2005) and Campbell, McCloy, Oppler and Sager (1993).

In-role performance

In-role performance refers to actions that are official or known requirements for employees (Carpini et al., 2017; Motowidlo & Van Scotter, 1994). These behaviours could be viewed as the technical core (Borman & Motowidlo, 1997) that employees must demonstrate to be perceived as proficient and able to contribute to the achievement of organisational goals (Carpini et al., 2017). Unlike in the performance models proposed by Schepers (2008), Koopmans et al. (2012) and Myburgh (2013), quality and quantity of work were divided into two separate, narrow performance dimensions, an approach that aligns more closely with Viswesvaran et al.’s (2005) conceptualisation of performance. Technical performance was included under in-role performance, based on Borman and Motowidlo’s (1997) definition of task performance, namely the technical core tasks that employees perform, and was conceptualised in accordance with Campbell et al.’s (1993) definition. Finally, whereas following rules and organisational procedures was categorised by Borman and Motowidlo (1997) under contextual performance, it was included under in-role performance as an official (minimum and less voluntary) requirement of acceptable behaviour, as it is expected that employees will comply with organisational rules and procedures (Carpini et al., 2017; Katz, 1964). Definitions of the narrow dimensions comprising in-role performance are provided here:

  1. Quality of work: The thoroughness with which employees perform work tasks, evident in the degree to which employees pay attention to detail and minimise errors.

    Conceptual overlap with thoroughness (Hunt, 1996), quality concern (Tett, Guterman, Bleier, & Murphy 2000), work quality (Renn & Fedor, 2001), maintaining quality processes (Bartram, 2005), quality (Viswesvaran et al., 2005) and quantity and quality of work (Schepers, 2008).

  2. Quantity of work: How productive employees are in meeting challenging work goals in terms of both the volume of output and meeting the required time frame.

    Conceptual overlap with quantity concern (Tett et al., 2000), work quantity (Renn & Fedor, 2001), maintaining productivity levels (Bartram, 2005), productivity (Viswesvaran et al., 2005) and quantity and quality of work (Schepers, 2008).

  3. Rule adherence: Employees’ tendency to comply with informal and formal rules and regulations of the organisation.

    Conceptual overlap with following organisational rules and procedures (Borman & Motowidlo, 1997), rule orientation (Tett et al., 2000), following procedures (Bartram, 2005) and compliance or acceptance of authority (Viswesvaran et al., 2005).

  4. Technical performance: The degree to which employees perform well at tasks that are differentiated, complicated and require a certain level of expertise.

    Conceptual overlap with technical proficiency (Borman & Motowidlo, 1997; Tett et al., 2000) and technical performance (Campbell, 2012).

The following hypothesis was formulated based on the conceptualisation of in-role performance as a hierarchical model:

H1: The broad in-role performance dimension explains covariance between the 16 in-role performance items, independent of the covariance that the narrow dimensions quality of work, quantity of work, rule adherence and technical performance explain in the same set of items.
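The partition of covariance that H1 describes can be illustrated with a small simulation. In the sketch below (illustrative only; the loadings, noise level and sample size are arbitrary values chosen for the example, not estimates from the study), every item loads on one broad general factor and, in addition, on one of four orthogonal narrow factors. Items sharing a narrow factor then correlate more strongly than items sharing only the broad factor, which is precisely the pattern a bifactor model separates into general and specific variance.

```python
# Illustrative bifactor data-generating process (not the study's data).
# Each of 16 items = (broad loading x general factor)
#                  + (narrow loading x its narrow factor) + noise.
import numpy as np

rng = np.random.default_rng(0)
n_raters, n_narrow, items_per = 5000, 4, 4

g = rng.normal(size=n_raters)               # broad (general) factor
s = rng.normal(size=(n_raters, n_narrow))   # narrow factors, orthogonal to g

lam_g, lam_s = 0.6, 0.5                     # assumed loadings for the sketch
items = np.empty((n_raters, n_narrow * items_per))
for j in range(n_narrow):
    for k in range(items_per):
        noise = rng.normal(scale=0.6, size=n_raters)
        items[:, j * items_per + k] = lam_g * g + lam_s * s[:, j] + noise

r = np.corrcoef(items, rowvar=False)
within = r[0, 1]    # items 0 and 1 share the same narrow factor
between = r[0, 4]   # items 0 and 4 share only the broad factor
print(round(within, 2), round(between, 2))
```

Items within a narrow dimension correlate through both factors, while items across narrow dimensions correlate only through the broad factor, so `within` exceeds `between`; a bifactor confirmatory model attributes exactly this difference to the specific factors.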

Extra-role performance

Extra-role performance refers to acts orientated towards the future or change (Carpini et al., 2017), aimed at benefitting coworkers and the team (Organ, 1997), which are discretionary or not part of the employee’s existing work responsibilities (Borman & Motowidlo, 1997; Van Dyne, Cummings, & McLean, 1995). These behaviours shape the team in a way that creates a work setting that contributes to the achievement of organisational goals (Borman & Motowidlo, 1997). In the present study, altruism and initiative were untangled to form two separate, narrow dimensions, instead of the unidimensional construct measured by the IWPQ (Koopmans et al., 2012) and GPQ (Myburgh, 2013). Two narrow dimensions outlined by George and Brief (1992) and Podsakoff, MacKenzie, Paine and Bachrach (2000) as extra-role behaviours and applied as narrow, standalone dimensions in the GPQ (Myburgh, 2013) were further included to extend current conceptualisations of extra-role performance in South Africa, namely self-development and innovation. Self-development and innovation share conceptual overlap with adaptive performance (Pulakos, Arad, Donovan, & Plamondon, 2000), but were retained because of their discretionary and future-orientated nature (Carpini et al., 2017; George & Brief, 1992). The narrow dimensions of extra-role performance are defined here:

  1. Helpful behaviours: Employees’ acts of kindness towards coworkers.

    Conceptual overlap with altruism (Organ, 1988), interpersonal performance (Murphy, 1990), helping coworkers (George & Brief, 1992), supporting others (Borman & Motowidlo, 1997), working with people (Bartram, 2005), interpersonal competence (Viswesvaran et al., 2005) and peer or team member leadership performance (Campbell, 2012).

  2. Taking initiative: Demonstrated by employees showing self-starting behaviour and doing more than is expected of them.

    Conceptual overlap with persisting with enthusiasm and extra effort (Borman & Motowidlo, 1997), volunteering to carry out task activities (Borman & Motowidlo, 1997), personal initiative (Frese, Fay, Hilburger, Leng, & Tag, 1997), initiative (Tett et al., 2000), acting on own initiative (Bartram, 2005), effort (Viswesvaran et al., 2005), initiative and creativity (Schepers, 2008) and initiative, persistence, and effort (Campbell, 2012).

  3. Self-development: Reflected in employees’ initiatives to enhance their competence by actively gaining knowledge and learning new skills that could benefit the team.

    Conceptual overlap with developing oneself (George & Brief, 1992), self-development (Tett et al., 2000), embracing personal and professional development (Hedge, Borman, Bruskiewicz, & Bourne, 2004), pursuing self-development (Bartram, 2005), job knowledge (Viswesvaran et al., 2005) and self-development (Myburgh, 2013).

  4. Innovative behaviour: Employees exploring or generating new opportunities and implementing new and creative ideas.

    Conceptual overlap with making constructive suggestions (George & Brief, 1992), innovative behaviour (Scott & Bruce, 1994), innovating (Bartram, 2005), initiative and creativity (Schepers, 2008), innovating (Myburgh, 2013) and creative and innovative performance (Harari, Reaves, & Viswesvaran, 2016).

The conceptualisation of extra-role performance as a hierarchical model gave rise to the following hypothesis:

H2: The broad extra-role performance dimension explains covariance between the 16 extra-role performance items, independent of the covariance that the narrow dimensions helpful behaviours, taking initiative, self-development and innovative behaviours explain in the same set of items.

Adaptive performance

Adaptive performance relates to employees’ demonstration of the ability to cope with and effectively respond to crises or uncertainty (Carpini et al., 2017; Pulakos et al., 2000). It is also reflected in employees’ flexibility when dealing with novelty or working with coworkers who have different views (Pulakos et al., 2000). A broad multidimensional conceptualisation of adaptive performance (Pulakos et al., 2000), instead of a narrow dimension (Koopmans et al., 2012; Myburgh, 2013), was used as a primary orientation, but was adjusted to fit the performance model of this study. Firstly, dealing with uncertainty was reformulated as dealing with complexity to enlarge the scope of the construct and increase its relevance to important psychological predictors of performance, for example, cognitive ability (Sackett, Zhang, Berry, & Lievens, 2021). Secondly, interpersonal and cultural adaptability were collapsed into a single dimension for brevity of measurement. Thirdly, two narrow dimensions of adaptive performance identified by Pulakos et al. (2000), namely solving problems creatively and learning to extend existing knowledge on the job, were considered better suited to extra-role performance and were therefore not categorised under adaptive performance in this study. Finally, physical adaptability was conceived to be less generalisable across office and non-office settings and was thus excluded. Definitions of the narrow dimensions of adaptive performance are as follows:

  1. Emotional resilience: Demonstrated when employees maintain their composure when they have to work under high pressure.

    Conceptual overlap with handling crises and stress (Borman & Brush, 1993); handling work stress (Pulakos et al., 2000); resilience (Tett et al., 2000); stress management (Tett et al., 2000); and coping with pressure and setbacks (Bartram, 2005).

  2. Dealing with complexity: Demonstrated when employees think, decide and act sensibly under uncertain and unusual situations when there are no clear guidelines.

    Conceptual overlap with dealing with uncertain and unpredictable situations (Pulakos et al., 2000) and adapting and responding to change (Bartram, 2005).

  3. Adapting to crises: The degree to which employees remain objective, make swift decisions and react with appropriate urgency to a crisis.

    Conceptual overlap with handling crises and stress (Borman & Brush, 1993) and handling emergencies or crisis situations (Pulakos et al., 2000).

  4. Interpersonal flexibility: Reflected in how comfortable employees are with situations in which people with diverse views do not agree with each other. It is also represented by employees’ open-mindedness in interaction with coworkers from different backgrounds.

    Conceptual overlap with demonstrating interpersonal adaptability (Pulakos et al., 2000), demonstrating cultural adaptability (Pulakos et al., 2000), tolerance (Tett et al., 2000), cultural appreciation (Tett et al., 2000), adapting interpersonal style (Bartram, 2005) and showing cross-cultural awareness (Bartram, 2005).

The following hypothesis was formulated to test the tenability of the hierarchical model of adaptive performance:

H3: The broad adaptive performance dimension explains covariance between the 16 adaptive performance items, independent of the covariance that the narrow dimensions emotional resilience, dealing with complexity, adapting to crises and interpersonal flexibility explain in the same set of items.

Leadership performance

Leadership performance refers to the effectiveness with which an employee can influence co-workers to achieve collective goals (Campbell & Wiernik, 2015; Hogan & Sherman, 2020; Yukl, 2012). Leadership does not necessarily have to be tied to a position of authority (Campbell & Wiernik, 2015; Myburgh, 2013) and could be portrayed by anyone who supports, directs and connects coworkers and changes coworkers’ views or approaches to doing things (Hedge et al., 2004; Yukl, 2012). Yukl’s (2012) taxonomy of leadership effectiveness was used as the primary orientation in the conceptualisation of leadership performance. External leadership was changed to network-orientated leadership, to reflect recent developments in complexity leadership (Uhl-Bien & Arena, 2017). The narrow dimensions of leadership performance are defined as follows:

  1. Task leadership: Demonstrated by employees when they direct the efforts of coworkers towards the achievement of team goals.

    Conceptual overlap with providing direction and coordinating action (Bartram, 2005); initiating structure, guiding and directing (Campbell, 2012); goal emphasis (Campbell, 2012); task-orientated leadership behaviours (Yukl, 2012); and management (Myburgh, 2013).

  2. Relations leadership: Demonstrated when consideration is used to empower and motivate coworkers to achieve team goals.

    Conceptual overlap with seeking input (Tett et al., 2000); consideration, support and person-centredness (Campbell, 2012); relations-orientated leadership behaviours (Yukl, 2012); and leadership (Myburgh, 2013).

  3. Change leadership: Reflects the degree to which employees inspire their coworkers to effect required changes to the way they do their work.

    Conceptual overlap with creative thinking (Tett et al., 2000), seeking and introducing change (Bartram, 2005) and change-orientated leadership behaviours (Yukl, 2012).

  4. Network leadership: The degree to which networking is used to connect coworkers with key role players inside and outside the organisation.

    Conceptual overlap with networking (Bartram, 2005), external leadership behaviours (Yukl, 2012) and leveraging network structures (Uhl-Bien & Arena, 2017).

The following hypothesis was formulated based on the conceptualisation of leadership performance as a hierarchical model:

H4: The broad leadership performance dimension explains covariance between the 16 leadership performance items, independent of the covariance that the narrow dimensions of task-orientated, relations-orientated, change-orientated and network-orientated leadership explain in the same set of items.

Counterproductive performance

Counterproductive performance reflects intentional or unintentional acts (Spector & Fox, 2005) by an employee that negatively affect the effectiveness with which an organisation achieves its goals and cause harm to its stakeholders (Campbell & Wiernik, 2015; Marcus et al., 2016). The present researchers deliberately chose forms of counterproductive performance of lesser severity (Bennett & Robinson, 2000). Less severe behaviours (e.g. incivility) were chosen as narrow dimensions because they are more frequently observable, less situation-dependent and easier to report on and utilise in performance feedback. Examples of more severe behaviours excluded are wilful damage to the property of the employer, wilful endangering of the safety of others, physical assault and sexual harassment (Gruys & Sackett, 2003; Venter et al., 2014). These acts are viewed as forms of gross misconduct in South Africa – and warrant summary dismissal – which might place them outside the purview of performance feedback and more in the realm of disciplinary action, which is subject to confidentiality (Venter et al., 2014). However, it must be acknowledged that more severe counterproductive behaviours, such as physical assault, have serious consequences for organisations, and there should be mechanisms in place to deal with such acts. Severe behaviours are less frequently endorsed in performance questionnaires and appear to increase the statistical multidimensionality of the construct. The deliberate exclusion of severe forms of counterproductive performance in this study might therefore have increased the unidimensionality of the construct (Marcus et al., 2016).

In contrast to the unidimensional conceptualisation of counterproductive performance in the IWPQ (Koopmans et al., 2012) and GPQ (Myburgh, 2013), intrapersonal-focused and interpersonal-focused dimensions of counterproductive performance were differentiated in this study. Intrapersonal-focused counterproductive performance reflects avoidant behaviours, such as withholding effort, aimed at escaping work situations. In contrast, interpersonal-focused counterproductive performance reflects approach behaviours, such as interpersonal rudeness, which involve dysfunctional engagement in work situations (Spector et al., 2006). Stagnation and stubborn resistance (Tepper et al., 1998) are seldom categorised under counterproductive performance (Marcus et al., 2016) but were included as narrow dimensions to extend the intrapersonal and interpersonal focus of counterproductive performance. Definitions of the narrow dimensions of counterproductive performance are as follows:

  1. Interpersonal rudeness: Disrespectful acts that reflect a lack of regard for others.

    Conceptual overlap with interpersonal deviance (Bennett & Robinson, 2000), workplace incivility (Cortina, Magley, Williams, & Langhout, 2001), hostility (Martin & Hine, 2005), gossiping (Martin & Hine, 2005) and abuse (Spector et al., 2006).

  2. Withholding effort: Demonstrated when employees show a lack of enthusiasm in their work by exerting less effort than is expected for the position they hold.

    Conceptual overlap with downtime behaviours (Murphy, 1990), off-task behaviour (Hunt, 1996), poor-quality work (Gruys & Sackett, 2003) and withdrawal (Spector et al., 2006).

  3. Stagnation: Demonstrated when an employee displays an unwillingness to learn new skills, thereby affecting team effectiveness. This is a newly created dimension.

  4. Stubborn resistance: Reflected in an employee’s unreasonable opposition to change or an unwillingness to support initiatives at work and suggests a destructive form of opposition to team goals.

    Conceptual overlap with unruliness (Hunt, 1996); dysfunctional resistance (Tepper et al., 1998); short-term focus (Oreg, 2003); and cognitive rigidity (Oreg, 2003).

The following hypothesis was formulated to test the tenability of the hierarchical model of counterproductive performance:

H5: The broad counterproductive performance dimension explains covariance between the 16 counterproductive performance items, independent of the covariance that the narrow dimensions interpersonal rudeness, withholding effort, stagnation and stubborn resistance explain in the same set of items.
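Taken together, hypotheses H1–H5 fix the hierarchical structure of the IWPR. The sketch below (a plain Python illustration, using the dimension names defined above and the stated four items per narrow dimension) simply makes the design arithmetic explicit: five broad dimensions, 20 narrow dimensions and 80 items.

```python
# The hierarchical structure proposed for the IWPR, as described in the
# text: five broad dimensions, each with four narrow dimensions and
# four items per narrow dimension.
IWPR_STRUCTURE = {
    "in-role performance": [
        "quality of work", "quantity of work",
        "rule adherence", "technical performance",
    ],
    "extra-role performance": [
        "helpful behaviours", "taking initiative",
        "self-development", "innovative behaviour",
    ],
    "adaptive performance": [
        "emotional resilience", "dealing with complexity",
        "adapting to crises", "interpersonal flexibility",
    ],
    "leadership performance": [
        "task leadership", "relations leadership",
        "change leadership", "network leadership",
    ],
    "counterproductive performance": [
        "interpersonal rudeness", "withholding effort",
        "stagnation", "stubborn resistance",
    ],
}

ITEMS_PER_NARROW_DIMENSION = 4

narrow_dimensions = [d for dims in IWPR_STRUCTURE.values() for d in dims]
total_items = len(narrow_dimensions) * ITEMS_PER_NARROW_DIMENSION
print(len(IWPR_STRUCTURE), len(narrow_dimensions), total_items)  # 5 20 80
```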

Research design

Research approach

A cross-sectional, quantitative research design was utilised in this study. The selection of a cross-sectional design ensured a composite view of the multifaceted nature of managers’ ratings of their subordinates’ performance at one point in time, as well as an efficient quantitative exploration of commonalities between a large set of variables across different organisational contexts (Spector, 2019).

Research method
Participants

Aguinis and Edwards’ (2014) recommendation of sampling from organisations in different economic sectors was implemented to increase the external validity (generalisability) of the results (Holtom, Baruch, Aguinis, & Ballinger, 2022). A total of 15 organisations across different sectors in South Africa were invited to participate in the study, and a sample of 448 employees from 6 organisations was drawn via a census (or stratified) sampling strategy, with the final sample representing the industrial, agricultural, financial, professional services and information technology sectors. The ratio (9:1) of the number of observations (n = 448) to the number of estimated parameters of the primary statistical test (bifactor) models (q = 48) was larger than or equal to the corresponding ratios of other generic performance models validated in South Africa (WPQ: n = 278 and q = 79; GPQ: n = 205 and q = 158; IWPQ: n = 296 and q = 32). The number of observations also exceeded the typical number of observations (n = 200) reported in studies in which structural equation modelling was used (Kline, 2011). A calculation of statistical power, based on computer software developed by Preacher and Coffman (2006), computed a power value of unity, which suggested that an incorrect model with 88 degrees of freedom would be correctly rejected (α = 0.05; null RMSEA = 0.05; alternative RMSEA = 0.08).
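The reported power value can be re-derived from the stated inputs. The sketch below assumes the MacCallum–Browne–Sugawara noncentral chi-square framework that RMSEA-based power calculators such as Preacher and Coffman’s (2006) typically implement (the exact noncentrality convention, n versus n − 1, varies across implementations, but both yield power of effectively 1 here).

```python
# Hedged re-computation of the reported RMSEA-based power analysis,
# assuming the MacCallum-Browne-Sugawara noncentral chi-square approach.
from scipy.stats import ncx2

n, df, alpha = 448, 88, 0.05
rmsea_null, rmsea_alt = 0.05, 0.08

# Noncentrality parameters implied by each RMSEA value
ncp_null = (n - 1) * df * rmsea_null ** 2
ncp_alt = (n - 1) * df * rmsea_alt ** 2

critical = ncx2.ppf(1 - alpha, df, ncp_null)  # rejection threshold under H0
power = 1 - ncx2.cdf(critical, df, ncp_alt)   # P(reject H0 | alternative true)
print(round(power, 4))
```

With these inputs the computed power exceeds 0.999, consistent with the ‘power value of unity’ reported above.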

The mean age of employees was 38.77 years (s.d. = 7.02 years). Most of the employees self-identified as white people (n = 201; 45%), followed by black African (n = 136; 30%), Indian (n = 81; 18%), mixed race (mixed ancestry; n = 27; 6%) and Asian (n = 3; 1%). The sample comprised more women (n = 249; 56%) than men (n = 199; 44%). Most of the employees were registered professionals (n = 142; 32%), followed by mid-level managers (n = 106; 24%), skilled employees (n = 103; 23%), low-level managers (n = 84; 19%), semi-skilled employees (n = 9; 2%) and top-level managers (n = 4; 1%).

Measurement instrument

Hinkin’s (1998) guideline for scale construction was used to develop a measure of individual work performance, the IWPR. Items were deductively generated by two senior researchers with due consideration of the theoretical definitions highlighted in the literature review. The items were carefully formulated to avoid double-barrelled, leading and negatively worded items. Myburgh (2013) stressed the need to simplify the language used in the GPQ to improve the measurement. Therefore, items in the IWPR were shortened and simplified as much as possible, and qualitative feedback, received up until the write-up of this report, suggested that raters had few or no issues in understanding the items.

The items were subjected to an item-sort exercise with 13 registered psychologists who utilise psychometrics in the work context, and the results were used to calculate substantive validity coefficients. Anderson and Gerbing’s (1991) guidelines were used to structure the item-sort exercises. Thresholds, based on guidelines from Howard and Melloy (2016), were used to adjust or, in severe cases, remove items that appeared to have low substantive validity. The final IWPR consisted of 80 items (4 items for each of the 20 narrow performance dimensions) that covered five factors, namely in-role performance, extra-role performance, adaptive performance, leadership performance and counterproductive performance. Per the guidelines of Aguinis (2019), each item was measured using a five-point behavioural frequency scale. Word anchors defined the extreme points of each scale, namely (1) never demonstrated and (5) always demonstrated. The guidelines of Casper, Edwards, Wallace, Landis and Fife (2020) were used to guide the qualitative interpretation of numeric values between the extreme points, to better approximate an interval rating scale, namely (2) rather infrequently demonstrated, (3) demonstrated some of the time and (4) quite often demonstrated. The internal consistency reliability of all the narrow dimensions of the IWPR was satisfactory (α and ω ≥ 0.83). An example of a performance review item is provided in Box 1.

BOX 1: Example item from quality of work.
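The substantive validity coefficient used in the item-sort exercise has a simple closed form (Anderson & Gerbing, 1991): c_sv = (n_c − n_o) / N, where n_c is the number of judges assigning an item to its intended construct, n_o the highest number assigning it to any other construct, and N the total number of judges. The snippet below is a hypothetical illustration; the vote counts are invented, not the study's data.

```python
def substantive_validity(assignments, intended):
    """Anderson and Gerbing's (1991) substantive validity coefficient.

    assignments: list of construct labels chosen by each judge for one item.
    intended: the construct the item was written to measure.
    Returns c_sv = (n_c - n_o) / N, ranging from -1 to 1.
    """
    n = len(assignments)
    n_correct = assignments.count(intended)
    others = [a for a in assignments if a != intended]
    n_other = max((others.count(a) for a in set(others)), default=0)
    return (n_correct - n_other) / n

# Hypothetical item-sort result from 13 judges (as in the panel above)
votes = ["quality of work"] * 11 + ["quantity of work"] * 2
print(round(substantive_validity(votes, "quality of work"), 3))  # (11 - 2) / 13
```

Items falling below a chosen threshold (e.g. the significance-based cut-offs of Howard & Melloy, 2016) would then be revised or removed, as described above.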
Research procedure and ethical considerations

Direct managers of the participants completed the review via an e-mail link. At the outset of the review, the direct managers and participants received information on the developmental purpose of the study, the nature of the measurement, voluntary participation, the benefits of participation and the anonymity of the data; they were informed that their data would be used for research purposes. The Department of Industrial Psychology and People Management’s Research Ethics Committee at the University of Johannesburg granted ethical clearance for the study (ref. no. IPPM-2020-455, 06 October 2020).

Statistical analysis

The objective of the study was to determine the plausibility of five hierarchical dimensions (factors) of performance in the South African context. Confirmatory factor analysis (CFA) was performed, using version 0.6–8 of the lavaan package (Rosseel, 2012; Rosseel & Jorgensen, 2021) in R (R Core Team, 2016), to first inspect the interfactor correlations between all the narrow performance factors, whereafter the hierarchical factor structure of the broad performance factors was investigated. The guidelines recommended by Credé and Harms (2015) were followed in exploring the plausibility of alternative models (including hierarchical factor models), based on the five performance factors. The bifactor models were, in turn, subjected to analysis of bifactor statistical indices to determine whether unidimensional models better represented the structure of the five factors (Reise, Bonifay, & Haviland, 2013; Rodriguez et al., 2016).

Mardia’s multivariate skewness and kurtosis coefficients were 146297.60 (p < 0.001) and 105.56 (p < 0.001), which indicated that the data had a non-normal multivariate distribution. Given the medium (n = 448) sample size (Rhemtulla, Brosseau-Liard, & Savalei, 2012), the employment of rating scales with five numerical categories (Rhemtulla et al., 2012) and violation of multivariate normality (Satorra & Bentler, 1994; Yuan & Bentler, 1998), CFA results with robust maximum likelihood (MLM) estimation were deemed appropriate (Bandalos, 2014). Model–data fit of the CFA models was evaluated using the comparative fit index (CFI), the Tucker–Lewis Index (TLI), standardised root mean-square residual (SRMR) and root mean square error of approximation (RMSEA) (Brown, 2015; Hu & Bentler, 1999). The fit was considered suitable if the RMSEA and SRMR were ≤ 0.08 (Brown, 2015; Browne & Cudeck, 1992) and the CFI and TLI were ≥ 0.90 (Brown, 2015; Hu & Bentler, 1999). Even if comparative fit indices display a marginally good fit to the data (CFI and TLI in the range of 0.90 to 0.95), models might still be considered to display acceptable fit if other indices (SRMR and RMSEA) in tandem are in the acceptable range (Brown, 2015).
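Mardia's coefficients can be computed directly from the raw data matrix. The sketch below is a minimal NumPy implementation for illustration (the study's values were presumably produced by standard R tooling); it uses the biased sample covariance, per Mardia's original definitions.

```python
import numpy as np

def mardia(data):
    """Mardia's multivariate skewness (b1p) and kurtosis (b2p) coefficients.

    data: (n, p) array of observations. For a multivariate normal sample,
    b1p is near 0 and b2p is near p * (p + 2); large departures, as in the
    study, indicate multivariate non-normality.
    """
    x = np.asarray(data, dtype=float)
    n = x.shape[0]
    centred = x - x.mean(axis=0)
    s_inv = np.linalg.inv(np.cov(x, rowvar=False, bias=True))
    d = centred @ s_inv @ centred.T          # Mahalanobis cross-products
    b1p = (d ** 3).sum() / n ** 2            # multivariate skewness
    b2p = (np.diag(d) ** 2).mean()           # multivariate kurtosis
    return b1p, b2p

# Sanity check on simulated normal data: kurtosis should be near 3 * 5 = 15
rng = np.random.default_rng(1)
skew, kurt = mardia(rng.standard_normal((1000, 3)))
print(round(skew, 3), round(kurt, 3))
```

Significant values on either coefficient justify the robust (MLM) estimation adopted above rather than ordinary maximum likelihood.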

Results

Descriptive statistics of the IWPR

Table 1 provides the mean item score and standard deviation for each scale of the IWPR, along with the alpha and omega reliability estimates and standardised interfactor correlations of the 20 narrow performance factors. The interfactor correlations were obtained by conducting an oblique lower-order confirmatory factor model. The fit statistics for the oblique lower-order confirmatory factor model of the entire IWPR (χ2 [df] = 4769.72 [2890]; CFI = 0.94; TLI = 0.93; SRMR = 0.05; RMSEA = 0.04 [0.04; 0.05]) were satisfactory (Brown, 2015; Hu & Bentler, 1999).

TABLE 1: Descriptive statistics for narrow performance factors on the Individual Work Performance Questionnaire.

The interfactor correlations below the diagonal in Table 1 suggest that the narrow factors under in-role, extra-role, adaptive and leadership performance, respectively, covaried as expected. Narrow performance factors also covaried with narrow factors outside their related broader performance dimensions, suggesting that alternative theoretical configurations could be explored in future. The size of the interfactor correlations between all the narrow factors suggested that a general performance factor may exist across these factors in the IWPR (Viswesvaran et al., 2005). Evidence forwarded by Schepers (2008), Myburgh (2013) and Van der Vaart (2021) also suggested that performance factors tend to covary.

The upper limits of 87% of the interfactor correlations in Table 1 were below the cut-off (UL < 0.80, as proposed by Rönkkö & Cho, 2020), and therefore the majority of the narrow dimensions of performance displayed sufficient discriminant validity. Rönkkö and Cho (2020) considered interfactor correlations of 0.80 ≤ UL < 0.90 as marginally problematic and 0.90 ≤ UL < 1.00 as moderately problematic. According to this guideline, 13% of the upper-limit correlations in Table 1 indicate lower discriminant validity. However, 6% of the upper-limit correlations with marginally to moderately problematic discriminant validity are between narrow performance dimensions of the same broad dimension. The remaining 7% indicate marginally to moderately problematic discriminant validity between narrow performance dimensions that fall under different broader performance dimensions, which could be areas for future theoretical exploration and empirical replication.
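The classification rule applied here can be stated compactly; the helper below is an illustrative encoding of the Rönkkö and Cho (2020) upper-limit cut-offs (the function name and the "severe" label for UL ≥ 1.00 are our own reading of their guideline).

```python
def discriminant_validity_flag(ci_upper):
    """Classify the upper limit (UL) of an interfactor correlation's
    confidence interval using the Rönkkö and Cho (2020) cut-offs
    as applied in this study."""
    if ci_upper < 0.80:
        return "acceptable"
    if ci_upper < 0.90:
        return "marginal problem"
    if ci_upper < 1.00:
        return "moderate problem"
    return "severe problem"

print(discriminant_validity_flag(0.74))  # acceptable
print(discriminant_validity_flag(0.85))  # marginal problem
```

Applying this rule across all pairs of the 20 narrow factors yields the 87% / 13% split reported above.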

Hierarchical confirmatory factor analysis

Hierarchical CFA was performed to analyse the data gathered on employees using the IWPR. The purpose of hierarchical factor analysis was to provide a more parsimonious account, based on a pre-defined theory, of a latent variable that consists of various underlying narrow factors that have something in common (Brown, 2015). Credé and Harms (2015) argued that five sequential models should be tested before claiming a hierarchical structure amongst latent variables, namely orthogonal first-order, single-factor, higher-order, oblique lower-order and bifactor models. Figure 1 provides an example, using in-role performance, of how each of the factor models was specified. Not all the items of in-role performance are displayed in Figure 1 (the scale of each factor consists of 16 items).

FIGURE 1: Factor structures of in-role performance based on the guidelines of Credé and Harms (2015).

As portrayed in Figure 1, both higher-order and bifactor models are hierarchical factor models. In the higher-order model, the narrow in-role performance factors (quality of work, quantity of work, rule adherence and technical performance) mediated the relationship between the manifest variables and the broad in-role performance factor (Beaujean, 2014). The broad performance factor therefore did not explain unique variance in the manifest variables beyond the narrow factors (Beaujean, 2014; McAbee, Oswald, & Connelly, 2014). Bifactor models, in contrast, accounted for the unique variance explained in the manifest variables by the orthogonal broad performance factor, beyond the variance explained by the orthogonal narrow factors (Beaujean, 2014; McAbee et al., 2014). The model–data fit of the different factor models outlined in Figure 1 is specified in Table 2.
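The bifactor structure can be made concrete with implied covariances: under orthogonal factors, the model-implied covariance matrix is ΛΛ′ + Θ, where each item loads on the general factor and exactly one group factor. The loading values below are invented for illustration and are not estimates from the study.

```python
import numpy as np

# Toy bifactor structure: 8 items, one orthogonal general factor,
# two orthogonal group (narrow) factors of 4 items each.
general = np.full(8, 0.6)
group = np.zeros((8, 2))
group[:4, 0] = 0.4   # e.g. quality-of-work items (hypothetical)
group[4:, 1] = 0.4   # e.g. quantity-of-work items (hypothetical)
lam = np.column_stack([general, group])      # 8 x 3 loading matrix

# Implied covariance under orthogonal factors: Sigma = Lambda Lambda' + Theta
theta = np.diag(1 - (lam ** 2).sum(axis=1))  # unique variances
sigma = lam @ lam.T + theta

# Items in the same group share general + group variance (0.36 + 0.16);
# items in different groups share only general-factor variance (0.36).
print(sigma[0, 1], sigma[0, 4])
```

This is precisely what distinguishes the bifactor specification from the higher-order one: the general factor contributes variance to every item directly, over and above the narrow factors.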

TABLE 2: Fit statistics of different performance factor models.

The CFI, TLI, SRMR and RMSEA reported in Table 2 convey that the orthogonal first-order and single-factor models fit the data poorly, whereas the acceptable fit of the oblique lower-order, higher-order and bifactor models provided evidence of the existence of a hierarchical structure in the data. Overall, the bifactor models seemed to provide the best fit to the data, which supported H1 to H5.

Bonifay, Lane and Reise (2017) indicated that the superiority of bifactor models’ fit indices, relative to those of other confirmatory factor models, could be a symptom of overfitting. Rodriguez et al. (2016) recommended that bifactor statistical indices be calculated to determine the practical meaningfulness of group factors, such as the explained common variance (ECV), coefficient omega hierarchical (ωh), construct replicability (H), factor determinacy (FD), percentage of uncontaminated correlations (PUC) and average relative parameter bias (ARPB). Group factors were considered plausible when ωh, H and FD² were > 0.50, 0.70 and 0.70, respectively (Dueber, 2017; Reise et al., 2013). An ECV for the general factor > 0.70 and a PUC > 0.80 were indicative of unidimensionality (Reise et al., 2013). An ARPB below 10% to 15% indicated little difference in the factor loadings between a single-factor model and the general factor in a bifactor model (Rodriguez et al., 2016). Bifactor statistical indices were calculated using version 0.2.0 of the BifactorIndicesCalculator package (Dueber, 2020) in R (R Core Team, 2016). The bifactor statistical indices are reported in Table 3.
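Two of these indices have transparent formulas. ECV is the general factor's share of common variance, and PUC depends only on how items are grouped. The sketch below is illustrative (function names are ours); notably, for a 16-item scale with four group factors of four items each, as in each broad IWPR dimension, PUC lands exactly at the 0.80 benchmark.

```python
import numpy as np

def ecv(general_loadings, group_loadings):
    """Explained common variance of the general factor in a bifactor model:
    share of common variance attributable to the general factor
    (Reise, Bonifay & Haviland, 2013)."""
    g2 = np.sum(np.square(general_loadings))
    s2 = np.sum(np.square(group_loadings))
    return g2 / (g2 + s2)

def puc(group_sizes):
    """Percentage of uncontaminated correlations: the proportion of item
    pairs whose correlation reflects only the general factor."""
    p = sum(group_sizes)
    total_pairs = p * (p - 1) / 2
    within = sum(k * (k - 1) / 2 for k in group_sizes)
    return 1 - within / total_pairs

# 16 items in four group factors of 4 items each (the IWPR broad-scale
# layout): 96 of 120 item pairs are uncontaminated.
print(puc([4, 4, 4, 4]))
```

High ECV and PUC together indicate that a total score would be relatively undistorted by the group factors, which is the pattern the hypotheses' evaluation below relies on.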

TABLE 3: Bifactor statistical indices for the broad performance dimensions.

The bifactor statistical indices in Table 3 provide evidence of:

(H1) A strong general factor for in-role performance and a small group factor for technical performance. An interpretation of in-role performance as a total score, instead of a hierarchical factor with subscores for quality of work, quantity of work, rule adherence and technical performance may be a more appropriate representation of the data when used to differentiate high-performing employees.

(H2) A strong general factor for extra-role performance and a small group factor for helpful behaviours. An interpretation of extra-role performance as a total score, instead of a hierarchical factor with subscores for helpful behaviours, taking initiative, self-development and innovative behaviours may be a more appropriate representation of the data when used to differentiate high-performing employees.

(H3) A strong general factor for adaptive performance and a small group factor for interpersonal flexibility. An interpretation of adaptive performance as a total score, instead of a hierarchical factor with subscores for emotional resilience, dealing with complexity, adapting to crises and interpersonal flexibility, may be a more appropriate representation of the data when differentiating high-performing employees.

(H4) A strong general factor for leadership performance. An interpretation of leadership performance as a total score, instead of a hierarchical factor with subscores for task-orientated, relations-orientated, change-orientated and network-orientated leadership, may be a more appropriate representation of the data when used to differentiate high-performing employees.

(H5) A strong general factor for counterproductive performance and a small group factor for interpersonal rudeness. An interpretation of counterproductive performance as a total score, instead of a hierarchical factor with subscores for interpersonal rudeness, withholding effort, stagnation and stubborn resistance may be a more appropriate representation of the data with which to differentiate risk-prone employees.

Discussion

Outline of results

The IWPR is an attempt to address the bandwidth–fidelity problem in the measurement of performance by enabling researchers to inspect the validity of predictors and measure performance using a variety of narrow dimensions that still enable the in-depth measurement of the broader performance dimensions (Hunt, 1996). The broader dimensions may be of benefit when predictions need to be made about broader criteria in predictive studies. For example, in-role performance may be a more appropriate criterion when determining the predictive validity of the personality trait conscientiousness (Judge et al., 2013). The narrow performance dimensions quality and quantity of work may, however, be more appropriate when detailed predictions are required. For example, the personality aspects of industriousness (effort exerted in performing work) and orderliness (meticulousness with which tasks are performed) may explain unique variance in the quantity and quality of work produced by employees (Van Lill & Taylor, 2021). Evidence for the hierarchical structure of the IWPR could therefore enable practitioners to run predictive analytics at their preferred level of criterion, without compromising either accuracy or specificity.

The overall oblique lower-order model for the entire IWPR provides preliminary evidence of a fair degree of discriminant validity (87% of interfactor correlations) between the 20 narrow performance dimensions. The 20 narrow dimensions can be interpreted simultaneously with the five broader dimensions of in-role, extra-role, adaptive, leadership and counterproductive performance, based on the fit of the bifactor models. However, caution should be taken when interpreting each dimension as a hierarchical factor, as the bifactor indices suggested that each of the broader factors could be more parsimoniously interpreted as a unidimensional factor.

Practical implications

Contrasting evidence on the hierarchical structure of individual work performance merits two positions regarding the interpretation and use of scales based on the IWPR. Firstly, it may be more prudent to interpret the broader performance dimensions as total scores, especially when high-stakes decisions are made, for example, to identify, reward and promote star performers in organisations. By contrast, the fair degree of discriminant validity indicated that the narrow dimensions could capture differences between individuals, which could be useful in low-stakes situations, for example, tailoring performance feedback, setting performance goals and establishing new work habits to achieve performance goals. Carpini et al. (2017) argued that even in the presence of unidimensionality, narrow interpretations might still add context when interpreting scores based on general factors, which can, at times, read as vaguer formulations. Cross-scale interpretations are further encouraged to enable a holistic understanding of employees’ performance.

The practical benefits of a carefully constructed measure of generic individual work performance for industrial psychologists and human resource professionals, such as the IWPR, include:

  1. Creating a shared language for performance development in organisations.

  2. Drawing comparisons between employees, independent of occupation-specific tasks, to identify, reward, promote and retain star performers across an organisation. The value of effectively and efficiently identifying star performers is amplified by the proposition that performance is non-normally distributed (following power laws) in organisations.

  3. Aggregating performance results across different units of analysis (i.e. the individual, team or organisation) to, for example, create context-specific performance benchmarks for organisations or industries.

  4. Integrating psychology and human resource functions by enabling human resource professionals to manage organisational effectiveness based on reliable and valid performance metrics. Simultaneously, it can assist industrial psychologists in building larger databases against which criterion validity studies can be run, based on psychological predictors. Fragmenting predictive studies across a diverse number of job-specific criteria reduces the available sample sizes (and thus statistical power), which may make such studies unfeasible (Myburgh, 2013).

  5. Utilising more scientific performance data to calculate the return on investment when using psychological assessments in selection or implementing performance development programmes, based on, for example, the Brogden-Cronbach-Gleser formula (Cascio & Boudreau, 2011). Demonstrating the monetary value of using psychological assessments in selection to predict future performance, or evaluating increases in performance attributable to development initiatives, could further bolster human resource professionals’ and industrial psychologists’ strategic position in organisations.
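The Brogden-Cronbach-Gleser utility estimate mentioned in point 5 can be sketched in a few lines. The function name and all parameter values below are hypothetical illustrations, not figures from the study or from Cascio and Boudreau (2011).

```python
def bcg_utility(n_selected, tenure_years, validity, sd_y,
                mean_z_selected, n_applicants, cost_per_applicant):
    """Brogden-Cronbach-Gleser utility estimate: the monetary gain from
    selecting with a valid predictor, less total assessment costs.

    Gain = Ns * T * rxy * SDy * mean standardised predictor score of
    those selected; costs = number of applicants assessed * cost each.
    """
    gain = n_selected * tenure_years * validity * sd_y * mean_z_selected
    costs = n_applicants * cost_per_applicant
    return gain - costs

# Illustrative only: hire 10 of 100 applicants expected to stay 2 years,
# r = 0.30, SDy = R200 000, mean predictor z of those hired = 1.5,
# R500 per assessment.
print(bcg_utility(10, 2, 0.30, 200_000, 1.5, 100, 500))
```

Even rough inputs of this kind let practitioners express the value of valid performance prediction in monetary terms, which is the strategic argument made above.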

Limitations and recommendations

Statistical power did not enable an inspection of the interfactor correlations between the broad dimensions. A bifactor model could be specified in the future, one that extracts the five general factors whilst freeing in-role, extra-role, adaptive, leadership and counterproductive performance to covary, to determine discriminant validity at the broad dimensional level. A calculation of statistical power for such a bifactor CFA, based on computer software developed by Preacher and Coffman (2006), suggested that 18.65 participants per variable (k = 80) would be required to ensure a 0.80 probability that an incorrect model with 2990 degrees of freedom is correctly rejected (α = 0.05; null RMSEA = 0.05; alternative RMSEA = 0.08). The required sample size would therefore amount to 1492 performance ratings (MacCallum, Browne, & Sugawara, 1996).

The scope of the present study did not extend to a hierarchical level above the five performance factors. Future studies could investigate the presence of a general factor amongst the in-role, extra-role, adaptive, leadership and counterproductive performance factors as a general pro-organisation component of performance in the IWPR (Viswesvaran et al., 2005). A general pro-organisation factor may provide an even more reliable score of individual work performance and may be used to run validity studies at the corresponding level of predictors. For example, the general performance factor could be used when the predictive validity of the metatrait integrity (a composite of conscientiousness, agreeableness and neuroticism) has to be determined (Ones, Dilchert, Viswesvaran, & Judge, 2007).

The present study alluded to constructs that could help build the nomological network of antecedents of performance, such as personality. Other predictors of performance that could be inspected in the future include cognitive ability and interest congruence (Sackett et al., 2021). A further important part of the nomological network, which is currently neglected in the performance literature, is the consequences of individual work performance. Future studies could expand the relevance of the IWPR by considering factors related to individual and unit-level effectiveness, such as salaries and profitability, respectively (Campbell & Wiernik, 2015; Carpini et al., 2017).

Similar to the approach used by Schepers (2008) and Myburgh (2013), performance reviews based on the IWPR were limited to direct managers, to obtain a conservative estimate of performance. There is considerable evidence that rating source affects the psychometric properties of performance ratings (Conway & Huffcutt, 1997; Heidemeier & Moser, 2009). Collecting data with just one combination of contextual features can thus serve only as preliminary evidence in establishing the structure of the instrument. Based on the recommendations of Scullen, Mount and Judge (2003), future studies could inspect the inter-rater reliability and measurement invariance of the factor model when assessments are completed by different raters, including the self, subordinates and peers. As more data are collected, the invariance of the model across job families could also be inspected as an additional contextual feature that should be taken into consideration.

Acknowledgements

The authors would like to thank their colleagues at JVR Africa Group and the University of Johannesburg for thoughtfully engaging them in stimulating conversations on the conceptualisation and measurement of individual work performance.

Competing interests

Both authors are employees of JVR Africa Group, which is the company for which this instrument was developed.

Authors’ contributions

X.v.L. and N.T. developed the conceptual framework and devised the method. X.v.L. analysed the data.

Funding information

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Data availability

Coefficients based on the bifactor confirmatory factor analysis are available on request.

Disclaimer

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any affiliated agency of the authors.

References

Aguinis, H. (2019). Performance management (4th ed.). Chicago, IL: Chicago Business Press.

Aguinis, H., & Burgi-Tian, J. (2021). Talent management challenges during COVID-19 and beyond: Performance management to the rescue. BRQ Business Research Quarterly, 24(3), 233–240. https://doi.org/10.1177/23409444211009528

Aguinis, H., & Edwards, J.R. (2014). Methodological wishes for the next decade and how to make wishes come true. Journal of Management Studies, 51(1), 143–174. https://doi.org/10.1111/joms.12058

Anderson, J.C., & Gerbing, D.W. (1991). Predicting the performance of measures in a confirmatory factor analysis with a pretest assessment of their substantive validities. Journal of Applied Psychology, 76(5), 732–740. https://doi.org/10.1037/0021-9010.76.5.732

Bandalos, D.L. (2014). Relative performance of categorical diagonally weighted least squares and robust maximum likelihood estimation. Structural Equation Modeling, 21(1), 102–116. https://doi.org/10.1080/10705511.2014.859510

Bartram, D. (2005). The great eight competencies: A criterion-centric approach to validation. Journal of Applied Psychology, 90(6), 1185–1203. https://doi.org/10.1037/0021-9010.90.6.1185

Beaujean, A.A. (2014). Latent variable modeling using R: A step-by-step guide. New York, NY: Routledge.

Bennett, R.J., & Robinson, S.L. (2000). Development of a measure of workplace deviance. Journal of Applied Psychology, 85(3), 349–360. https://doi.org/10.1037//0021-9010.85.3.349

Bonifay, W., Lane, S.P., & Reise, S.P. (2017). Three concerns with applying a bifactor model as a structure of psychopathology. Clinical Psychological Science, 5(1), 184–186. https://doi.org/10.1177/2167702616657069

Borman, W.C., & Brush, D.H. (1993). More progress toward a taxonomy of managerial performance requirements. Human Performance, 6, 1–21. https://doi.org/10.1207/s15327043hup0601_1

Borman, W.C., & Motowidlo, S.J. (1997). Task performance and contextual performance: The meaning for personnel selection research. Human Performance, 10(2), 99–109. https://doi.org/10.1207/s15327043hup1002_3

Brown, T.A. (2015). Confirmatory factor analysis for applied research. New York, NY: The Guilford Press.

Browne, M.W., & Cudeck, R. (1992). Alternative ways of assessing model fit. Sociological Methods & Research, 21(2), 230–258. https://doi.org/10.1177/0049124192021002005

Campbell, J.P. (2012). Behavior, performance, and effectiveness in the twenty-first century. In S.W.J. Kozlowski (Ed.), The Oxford handbook of organizational psychology (pp. 159–194). New York, NY: Oxford University Press.

Campbell, J.P., McCloy, R.A., Oppler, S.H., & Sager, C.E. (1993). A theory of performance. In N.W. Schmitt & W.C. Borman (Eds.), Personnel selection in organizations (pp. 35–70). San Francisco, CA: Jossey-Bass Publishers.

Carpini, J.A., Parker, S.K., & Griffin, M.A. (2017). A look back and a leap forward: A review and synthesis of the individual work performance literature. Academy of Management Annals, 11(2), 825–885. https://doi.org/10.5465/annals.2015.0151

Cascio, W.F., & Boudreau, J. (2011). Investing in people: Financial impact of human resource initiatives (2nd ed.). Upper Saddle River, NJ: Pearson Education.

Casper, W.C., Edwards, B.D., Wallace, J.C., Landis, R.S., & Fife, D.A. (2020). Selecting response anchors with equal intervals for summated rating scales. Journal of Applied Psychology, 105(4), 390–409. https://doi.org/10.1037/apl0000444

Conway, J.M., & Huffcutt, A.I. (1997). Psychometric properties of multisource performance ratings: A meta-analysis of subordinate, supervisor, peer, and self-ratings. Human Performance, 10(4), 331–360. https://doi.org/10.1207/s15327043hup1004_2

Cortina, L.M., Magley, V.J., Williams, J.H., & Langhout, R.D. (2001). Incivility in the workplace: Incidence and impact. Journal of Occupational Health Psychology, 6(1), 64–80. https://doi.org/10.1037/1076-8998.6.1.64

Credé, M., & Harms, P.D. (2015). 25 years of higher-order confirmatory factor analysis in the organizational sciences: A critical review and development of reporting recommendations. Journal of Organizational Behavior, 36(6), 845–872. https://doi.org/10.1002/job.2008

Cronbach, L.J., & Gleser, G.C. (1965). Psychological tests and personnel decisions (2nd ed.). Urbana, IL: University of Illinois Press.

Dueber, D.M. (2017). Bifactor indices calculator: A Microsoft Excel-based tool to calculate various indices relevant to bifactor CFA models. Lexington, KY: UKnowledge, University of Kentucky. https://doi.org/10.13023/edp.tool.01

Dueber, D.M. (2020). Bifactor indices calculator. Retrieved from https://cran.r-project.org/web/packages/BifactorIndicesCalculator/BifactorIndicesCalculator.pdf

Frese, M., Fay, D., Hilburger, T., Leng, K., & Tag, A. (1997). The concept of personal initiative: Operationalization, reliability and validity in two German samples. Journal of Occupational and Organizational Psychology, 70(2), 139–161. https://doi.org/10.1111/j.2044-8325.1997.tb00639.x

George, J.M., & Brief, A.P. (1992). Feeling good-doing good: A conceptual analysis of the mood at work–organizational spontaneity relationship. Psychological Bulletin, 112(2), 310–329. https://doi.org/10.1037/0033-2909.112.2.310

Gruys, M.L., & Sackett, P.R. (2003). Investigating the dimensionality of counterproductive work behavior. International Journal of Selection and Assessment, 11(1), 30–42. https://doi.org/10.1111/1468-2389.00224

Harari, M.B., Reaves, A.C., & Viswesvaran, C. (2016). Creative and innovative performance: A meta-analysis of relationships with task, citizenship, and counterproductive job performance dimensions. European Journal of Work and Organizational Psychology, 25(4), 495–511. https://doi.org/10.1080/1359432X.2015.1134491

Harari, M.B., & Viswesvaran, C. (2018). Individual job performance. In D.S. Ones, N. Anderson, C. Viswesvaran, & H.K. Sinangil (Eds.), The Sage handbook of industrial, work, and organizational psychology: Personnel psychology and employee performance (pp. 55–72). Thousand Oaks, CA: Sage.

Hedge, J.W., Borman, W.C., Bruskiewicz, K.T., & Bourne, M.J. (2004). The development of an integrated performance category system for supervisory jobs in the U.S. Navy. Military Psychology, 16(4), 231–243. https://doi.org/10.1207/s15327876mp1604_2

Heidemeier, H., & Moser, K. (2009). Self-other agreement in job performance ratings: A meta-analytic test of a process model. Journal of Applied Psychology, 94(2), 353–370. https://doi.org/10.1037/0021-9010.94.2.353

Hinkin, T.R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods, 1(1), 104–121. https://doi.org/10.1177/109442819800100106

Hogan, R., & Sherman, R.A. (2020). Personality theory and the nature of human nature. Personality and Individual Differences, 152, 1–5. https://doi.org/10.1016/j.paid.2019.109561

Holtom, B., Baruch, Y., Aguinis, H., & Ballinger, G.A. (2022). Survey response rates: Trends and a validity assessment framework. Human Relations, 00(0), 1–25. https://doi.org/10.1177/00187267211070769

Howard, M.C., & Melloy, R.C. (2016). Evaluating item-sort task methods: The presentation of a new statistical significance formula and methodological best practices. Journal of Business and Psychology, 31(1), 173–186. https://doi.org/10.1007/s10869-015-9404-y

Hu, L., & Bentler, P.M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. https://doi.org/10.1080/10705519909540118

Hunt, S.T. (1996). Generic work behavior: An investigation into the dimensions of entry-level, hourly job performance. Personnel Psychology, 49(1), 51–83. https://doi.org/10.1111/j.1744-6570.1996.tb01791.x

Judge, T.A., Rodell, J.B., Klinger, R.L., Simon, L.S., & Crawford, E.R. (2013). Hierarchical representations of the five-factor model of personality in predicting job performance: Integrating three organizing frameworks with two theoretical perspectives. Journal of Applied Psychology, 98(6), 875–925. https://doi.org/10.1037/a0033901

Katz, D. (1964). The motivational basis of organizational behavior. Behavioral Science, 9(2), 131–146. https://doi.org/10.1002/bs.3830090206

Kline, R.B. (2011). Principles and practice of structural equation modeling. New York, NY: The Guilford Press.

Koopmans, L., Bernaards, C., Hildebrandt, V., Van Buuren, S., Van Der Beek, A.J., & De Vet, H.C.W. (2012). Development of an individual work performance questionnaire. International Journal of Productivity and Performance Management, 62(1), 6–28. https://doi.org/10.1108/17410401311285273

Koopmans, L., Bernaards, C.M., Hildebrandt, V.H., Schaufeli, W.B., De Vet, H.C.W., & Van Der Beek, A.J. (2011). Conceptual frameworks of individual work performance: A systematic review. Journal of Occupational and Environmental Medicine, 53(8), 856–866. https://doi.org/10.1097/JOM.0b013e318226a763

MacCallum, R.C., Browne, M.W., & Sugawara, H.M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1(2), 130–149. https://doi.org/10.1037/1082-989X.1.2.130

Marcus, B., Taylor, O.A., Hastings, S.E., Sturm, A., & Weigelt, O. (2016). The structure of counterproductive work behavior: A review, a structural meta-analysis, and a primary study. Journal of Management, 42(1), 203–233. https://doi.org/10.1177/0149206313503019

Martin, R.J., & Hine, D.W. (2005). Development and validation of the Uncivil Workplace Behavior Questionnaire. Journal of Occupational Health Psychology, 10(4), 477–490. https://doi.org/10.1037/1076-8998.10.4.477

McAbee, S.T., Oswald, F.L., & Connelly, B.S. (2014). Bifactor models of personality and college student performance. European Journal of Personality, 28, 604–619. https://doi.org/10.1002/per.1975

Motowidlo, S.J., & Van Scotter, J.R. (1994). Evidence that task performance should be distinguished from contextual performance. Journal of Applied Psychology, 79(4), 475–480. https://doi.org/10.1037/0021-9010.79.4.475

Murphy, K.J. (1990). Performance measurement and appraisal: Motivating managers to identify and reward performance. Rochester, NY: Business – Managerial Economics Research Center.

Myburgh, H.M. (2013). The development and evaluation of a generic individual non-managerial performance measure (Unpublished Magister Dissertation). Stellenbosch University, Stellenbosch. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.985.2008&rep=rep1&type=pdf

Ones, D.S., Dilchert, S., Viswesvaran, C., & Judge, T.A. (2007). In support of personality assessment in organizational settings. Personnel Psychology, 60(4), 995–1027. https://doi.org/10.1111/j.1744-6570.2007.00099.x

Oreg, S. (2003). Resistance to change: Developing an individual differences measure. Journal of Applied Psychology, 88(4), 680–693. https://doi.org/10.1037/0021-9010.88.4.680

Organ, D.W. (1988). Organizational citizenship behavior: The good soldier syndrome. Lexington, MA: Lexington Books.

Organ, D.W. (1997). Organizational citizenship behavior: It’s construct clean-up time. Human Performance, 10(2), 85–97. https://doi.org/10.1207/s15327043hup1002_2

Podsakoff, P.M., MacKenzie, S.B., Paine, J.B., & Bachrach, D.G. (2000). Organizational citizenship behaviors: A critical review of the theoretical and empirical literature and suggestions for future research. Journal of Management, 26(3), 513–563. https://doi.org/10.1177/014920630002600307

Preacher, K.J., & Coffman, D.L. (2006). Computing power and minimum sample size for RMSEA [Computer software]. Retrieved from http://quantpsy.org/

Pulakos, E.D., Arad, S., Donovan, M.A., & Plamondon, K.E. (2000). Adaptability in the workplace: Development of a taxonomy of adaptive performance. Journal of Applied Psychology, 85(4), 612–624. https://doi.org/10.1037/0021-9010.85.4.612

R Core Team. (2016). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from https://cran.r-project.org/doc/manuals/r-release/fullrefman.pdf

Reise, S.P., Bonifay, W.E., & Haviland, M.G. (2013). Scoring and modeling psychological measures in the presence of multidimensionality. Journal of Personality Assessment, 95(2), 129–140. https://doi.org/10.1080/00223891.2012.725437

Renn, R.W., & Fedor, D.B. (2001). Development and field test of a feedback seeking, self-efficacy, and goal setting model of work performance. Journal of Management, 27(5), 563–583. https://doi.org/10.1016/S0149-2063(01)00108-8

Rhemtulla, M., Brosseau-Liard, P.É., & Savalei, V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological Methods, 17(3), 354–373. https://doi.org/10.1037/a0029315

Rodriguez, A., Reise, S.P., & Haviland, M.G. (2016). Applying bifactor statistical indices in the evaluation of psychological measures. Journal of Personality Assessment, 98(3), 223–237. https://doi.org/10.1080/00223891.2015.1089249

Rönkkö, M., & Cho, E. (2020). An updated guideline for assessing discriminant validity. Organizational Research Methods, 25(1), 6–14. https://doi.org/10.1177/1094428120968614

Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. https://doi.org/10.18637/jss.v048.i02

Rosseel, Y., & Jorgensen, T.D. (2021). Latent variable analysis. Package ‘lavaan’. Retrieved from https://cran.r-project.org/web/packages/lavaan/lavaan.pdf

Rotundo, M., & Sackett, P.R. (2002). The relative importance of task, citizenship, and counterproductive performance to global ratings of job performance: A policy-capturing approach. Journal of Applied Psychology, 87(1), 66–80. https://doi.org/10.1037/0021-9010.87.1.66

Sackett, P.R., Zhang, C., Berry, C.M., & Lievens, F. (2021). Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range. Journal of Applied Psychology. Advance online publication. https://doi.org/10.1037/apl0000994

Satorra, A., & Bentler, P.M. (1994). Corrections to test statistics and standard errors in covariance structure analysis. In A. Von Eye & C.C. Clogg (Eds.), Latent variables analysis: Applications for developmental research (pp. 399–419). Thousand Oaks, CA: Sage.

Schepers, J.M. (2008). The construction and evaluation of a generic work performance questionnaire for use with administrative and operational staff. SA Journal of Industrial Psychology, 34(1), 10–22. https://doi.org/10.4102/sajip.v34i1.414

Scott, S.G., & Bruce, R.A. (1994). Determinants of innovative behavior: A path model of individual innovation in the workplace. Academy of Management Journal, 37(3), 580–607. https://doi.org/10.5465/256701

Scullen, S.E., Mount, M.K., & Judge, T.A. (2003). Evidence of the construct validity of developmental ratings of managerial performance. Journal of Applied Psychology, 88(1), 50–66. https://doi.org/10.1037/0021-9010.88.1.50

Spector, P.E. (2019). Do not cross me: Optimizing the use of cross-sectional designs. Journal of Business and Psychology, 34(2), 125–137. https://doi.org/10.1007/s10869-018-09613-8

Spector, P.E., & Fox, S. (2005). The stressor–emotion model of counterproductive work behavior. In P.E. Spector & S. Fox (Eds.), Counterproductive work behavior (pp. 151–174). Washington, DC: American Psychological Association.

Spector, P.E., Fox, S., Penney, L.M., Bruursema, K., Goh, A., & Kessler, S. (2006). The dimensionality of counterproductivity: Are all counterproductive behaviors created equal? Journal of Vocational Behavior, 68(3), 446–460. https://doi.org/10.1016/j.jvb.2005.10.005

Tepper, B.J., Schriesheim, C.A., Nehring, D., Nelson, R.J., Taylor, E.C., & Eisenbach, R.J. (1998). The multi-dimensionality and multifunctionality of subordinates’ resistance to downward influence attempts. Paper presented at the Annual Meeting of the Academy of Management, San Diego, CA.

Tett, R.P., Guterman, H.A., Bleier, A., & Murphy, P.J. (2000). Development and content validation of a ‘hyperdimensional’ taxonomy of managerial competence. Human Performance, 13(3), 205–251. https://doi.org/10.1207/S15327043HUP1303_1

Uhl-Bien, M., & Arena, M. (2017). Complexity leadership: Enabling people and organizations for adaptability. Organizational Dynamics, 46(1), 9–20. https://doi.org/10.1016/j.orgdyn.2016.12.001

Van Aarde, N., Meiring, D., & Wiernik, B.M. (2017). The validity of the big five personality traits for job performance: Meta-analyses of South African studies. International Journal of Selection and Assessment, 25(3), 223–239. https://doi.org/10.1111/ijsa.12175

Van der Vaart, L. (2021). The performance measurement conundrum: Construct validity of the individual work performance questionnaire in South Africa. South African Journal of Economic and Management Sciences, 24(1), 1–11. https://doi.org/10.4102/sajems.v24i1.3581

Van Dyne, L., Cummings, L.L., & McLean Parks, J. (1995). Extra-role behaviors: In pursuit of construct and definitional clarity (a bridge over muddied waters). In L.L. Cummings & B.M. Staw (Eds.), Research in organizational behavior (pp. 215–285). Greenwich, CT: JAI Press.

Van Lill, X., & Taylor, N. (2021). The manifestation of the 10 personality aspects amongst the facets of the basic traits inventory. African Journal of Psychological Assessment, 3, 1–10. https://doi.org/10.4102/ajopa.v3i0.31

Venter, R., Levy, A., Holtzhausen, M., Conradie, M., Bendeman, H., & Dworzanowski-Venter, B. (2014). Labour relations in South Africa. Cape Town, South Africa: Oxford University Press.

Viswesvaran, C., Schmidt, F.L., & Ones, D.S. (2005). Is there a general factor in ratings of job performance? A meta-analytic framework for disentangling substantive and error influences. Journal of Applied Psychology, 90(1), 108–131. https://doi.org/10.1037/0021-9010.90.1.108

Yuan, K.H., & Bentler, P.M. (1998). Normal theory based test statistics in structural equation modeling. British Journal of Mathematical and Statistical Psychology, 51, 289–309. https://doi.org/10.1111/j.2044-8317.1998.tb00682.x

Yukl, G. (2012). Effective leadership behavior: What we know and what questions need more attention. Academy of Management Perspectives, 26(4), 66–85. https://doi.org/10.5465/amp.2012.0088


