Book summary of the 6th edition of Applied Psychology in Human Resource Management by Cascio


1. Introduction

Job security = the belief that one will retain employment with the same organization until retirement → has become less important to workers than:
Employment security = having the kinds of skills that employers in the labor market are willing to pay for.

Organization = a collection of people working together in a division of labor to achieve a common purpose/ a system of inputs, throughputs and outputs.
Personnel Psychology = individual differences in behavior and job performance and methods for measuring and predicting such differences.

  1. Variability in jobs and their requirements
  2. People and their individual patterns of values, interests and abilities
    → 1 + 2 = programs for the efficient use of human resources are essential

Differences in individual performance: difference in ability, difference in motivation or both.

Major areas of interest to personnel psychologists:

  • job analysis and job evaluation
  • recruitment, screening and selection
  • training and development
  • performance management

Psychologists have already made substantial contributions to the field of HRM: in fact, most of the empirical knowledge available in such areas as motivation, leadership, and staffing is due to their work. Over the past decade, dramatic changes in markets, technology, organizational design, and the respective roles of managers and workers have inspired renewed emphasis on and interest in personnel psychology.

Markets
The free (labor) markets that the United States has enjoyed throughout its history have now become a global passion.
Some jobs aren’t being lost temporarily because of a recession; rather, they are being wiped out permanently as a result of new technology, improved machinery, and new ways of organizing work → dramatic effects on organizations and their people.
Psychological contract = an unwritten agreement in which the employee and employer develop expectations about their mutual relationship. Job mobility (instead of stability and predictability) is now the norm, rather than the exception.
 

Technology
Digital revolution: distance means nothing if you have a digital infrastructure.
Dark side: junk mail (spam), hackers.

Organizational design
Organizations must adapt to management via the Web; they must be predicated on constant change, not stability. Organizations are becoming smaller, with better-trained ‘multispecialists’ (those who have in-depth knowledge about a number of different aspects of the business).

Role of the Manager
Traditional:         stability, predictability and efficiency.

Now:       to survive, organizations have to be able to respond quickly to shifting market
               conditions. Task for managers is to articulate a vision, what they are trying to
               accomplish and how they compete for business in the marketplace.
Workers are acting more like managers and managers are acting more like workers.

Workers
Diverse. Paternalism is out, self-reliance is in. Do more with less: empowerment, cross-training, personal flexibility, self-managed work teams, continuous learning.

Human Resources can be sources of sustained competitive advantage as long as they meet three basic requirements:
1. They add positive economic benefits to the process of producing goods or delivering services;
2. The skills of the workforce are distinguishable from those of competitors;
3. Such skills are not easily duplicated.

A diverse workforce is not something a company has to have; it’s something all companies do have or soon will have.

Problems could surface: insecurity, uncertainty, stress, social friction
Compensations: challenge, creativity, flexibility, control, interrelatedness

The future world of work will not be a place for the timid, the insecure, or the low skilled. The need for competent HR professionals with broad training in a variety of areas has never been greater.

2. Bases of Personnel Decisions

Utility Theory
A framework for making decisions by forcing the decision maker to define clearly his or her goal, to enumerate the expected consequences (and costs) of possible outcomes of the decision, and to attach differing utilities or values to each.
The unique feature of decision theory/utility theory is that it specifies evaluations by means of a payoff matrix or by conversion of the criterion to utility units (see tables p. 40/41).
Utility theory = an extremely useful tool for the I/O psychologist or HR professional.
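The payoff-matrix logic above can be sketched in a few lines: define the decision options, attach a payoff to each decision-outcome pair, and choose the option with the highest expected payoff. All numbers here are hypothetical, purely for illustration.

```python
# Hypothetical payoff matrix for a hiring decision. Rows = decisions,
# columns = applicant outcomes. Utility theory's prescription: pick the
# action with the highest expected payoff.

payoffs = {
    "hire":   {"succeeds": 100, "fails": -60},
    "reject": {"succeeds": -20, "fails":   0},  # -20 = opportunity cost of a miss
}
p_success = 0.7  # assumed probability the applicant would succeed

def expected_payoff(decision):
    row = payoffs[decision]
    return p_success * row["succeeds"] + (1 - p_success) * row["fails"]

best = max(payoffs, key=expected_payoff)
print(best, expected_payoff("hire"), expected_payoff("reject"))
```

Changing `p_success` or the payoff values shows how the optimal decision shifts with the decision maker's stated utilities, which is exactly the point of making them explicit.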

Systems theory = everything is one big system with infinite, interconnected, interdependent subsystems. System = a collection of interrelated parts, unified by design and created to attain one or more objectives.
- Closed-system approach = concentrated primarily on the internal operation of the organization and tended to ignore the outside environment.
- Open system = the modern view of organizations. In continual interaction with multiple, dynamic environments, providing for a continuous import of inputs and a transformation of these into outputs, which are then exported back into these various environments to be consumed by clients or customers.
Systems thinking is a discipline for seeing wholes. A system comprises subsystems of a lower order and is also part of a supersystem. (see figure 3-1, p.42)
People → groups → departments → divisions → companies → industry → economy.
Managers need to understand system theory:
- ability to scan and sense changes in the outside environment
- ability to bridge and manage critical boundaries and areas of interdependence
- ability to develop appropriate strategic responses.

Employment process (figure 3-2 p.44)
- different recruitment, selection and training strategies are used for different jobs
- the various phases in the process are highly interdependent, as the feedback loops indicate

Job Analysis & Job Evaluation → Workforce Planning → Recruitment → Initial Screening → Selection → Training & Development → Performance Management → Organizational Exit.

Job Analysis & Job Evaluation
Job Analysis is the fundamental building block on which all later decisions in the employment process must rest. Detailed specification by the organization of the work to be performed, the skills needed and the training required in order to perform the job satisfactorily.
Both job analysis and job evaluation are performed independently of the particular individuals who currently happen to be performing the jobs.

Workforce Planning
Anticipating future staffing requirements and formulating action plans to ensure that enough qualified individuals are available to meet specific staffing needs at some future time.
1. The organization devises an inventory of available knowledge, abilities, skills, and experiences of present employees
2. Forecasts of the internal and external human resource supply and demand
3. Various action plans and programs can be formulated in order to meet predicted staffing needs (training, promotions or recruitment)
4. Control and evaluation procedures.
Adequate and accurate WP is essential.
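Steps 1–3 above reduce to a gap analysis: compare forecast demand with projected internal supply per job category and attach an action plan to each gap. A minimal sketch, with fabricated headcounts and job names:

```python
# Workforce-planning gap analysis (illustrative numbers and job titles).
# demand = forecast staffing needs; supply = projected internal supply
# after expected attrition and promotions.

demand = {"analyst": 12, "technician": 8, "manager": 3}
supply = {"analyst": 9, "technician": 10, "manager": 3}

def plan(demand, supply):
    actions = {}
    for job in demand:
        gap = demand[job] - supply.get(job, 0)
        if gap > 0:
            actions[job] = f"recruit or train {gap}"    # shortage
        elif gap < 0:
            actions[job] = f"redeploy {-gap}"           # surplus
        else:
            actions[job] = "no action"
    return actions

print(plan(demand, supply))
```

Step 4 (control and evaluation) would then compare these planned actions against actual staffing outcomes at the future date.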

Recruitment
= Attracting potentially acceptable candidates to apply for the various jobs.
Two decisions:
1. Costs of recruiting
2. Selection ratio (the number hired relative to the number that apply)
Recruitment is critically important in the overall selection-placement process.
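The two recruitment quantities above are simple ratios; fabricated numbers make the trade-off concrete: a lower selection ratio means more selectivity, but also more applicants to attract and screen per hire.

```python
# Selection ratio (number hired / number applied) and recruiting cost
# per hire. All figures are hypothetical.

def selection_ratio(n_hired, n_applied):
    return n_hired / n_applied

def cost_per_hire(total_recruiting_cost, n_hired):
    return total_recruiting_cost / n_hired

sr = selection_ratio(10, 200)       # 0.05: only 1 in 20 applicants is hired
cph = cost_per_hire(50_000, 10)     # recruiting budget spread over 10 hires
print(sr, cph)
```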

Initial Screening
Decision errors:
- erroneous acceptance (individual who is passed on from preceding stage, but who fails at the following stage)
- erroneous rejection (individual who is rejected at one stage, but who could succeed at the following stage if allowed to continue).
Different costs are attached to each of these errors; while the costs of an erroneous acceptance are immediately apparent, the costs of erroneous rejections are much less obvious.
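The two error types can be counted directly from screening records by cross-tabulating the screening decision against the later outcome. The applicant records below are fabricated for illustration:

```python
# Each record pairs a screening decision ("pass"/"fail") with the
# eventual outcome at the next stage ("succeed"/"fail"). Data fabricated.

applicants = [
    ("pass", "succeed"), ("pass", "fail"),     # second one: erroneous acceptance
    ("fail", "succeed"), ("fail", "fail"),     # first one: erroneous rejection
    ("pass", "succeed"), ("fail", "succeed"),  # another erroneous rejection
]

erroneous_acceptances = sum(1 for d, o in applicants if d == "pass" and o == "fail")
erroneous_rejections  = sum(1 for d, o in applicants if d == "fail" and o == "succeed")
print(erroneous_acceptances, erroneous_rejections)
```

Note that erroneous rejections are only countable at all when rejected candidates are somehow allowed to continue (or their later success is observed elsewhere), which is precisely why their cost is much less obvious in practice.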

Selection
Matching individual and job. During this phase, information is collected judgmentally (interviews), mechanically (written tests) or both. The resulting combination is the basis for hiring or rejecting.
There is not a systematic or one-to-one relationship between the cost of a selection procedure and its subsequent utility.

Training & Development
Most individuals have a need to feel competent. Competency = a cluster of interrelated knowledge, abilities, skills, attitudes, or personal characteristics that are presumed to be important for successful performance on a job. Personnel selection and placement strategies relate closely to training and development strategies. In lower-level jobs, training objectives can be specified rather rigidly and defined carefully.

Performance Management
It is only after employees have been performing their jobs for a reasonable length of time that we can evaluate their performance and our predictions. In observing, evaluating, and documenting on-the-job behavior and providing timely feedback about it to individuals or teams, we are evaluating the degree of success of the individual or team in reaching organizational objectives. Success can be assessed by objective indices (dollar volume of sales, number of errors), but in most cases judgments about performance play a significant role.

Organizational Exit
Involuntary:
Psychological process → anticipatory job loss; shock, relief, and relaxation; concerted effort; vacillation, self-doubt, and anger; resignation and withdrawal.
Organizational process → communication, participation, control, planning, and support.
Individual → depression, hostility, anxiety, and loss of self-esteem.
Reactions of survivors to layoffs → stress in response to uncertainty about their ability.
Retirement is voluntary.
Organizational exit influences, and is influenced by, prior phases in the employment process.

 

3. Criteria

Criteria = operational statements of goals or desired outcomes. Predictive or evaluative purpose! Evaluative standard that can be used to measure a person’s performance, attitude, motivation, etc.

“Criterion Problem” = the difficulties involved in the process of conceptualizing and measuring performance constructs that are multidimensional, dynamic, and appropriate for different purposes.

Ultimate criterion = the full domain of performance. Includes everything that defines success on the job. A concept that is strictly conceptual and, therefore, cannot be measured or observed. Long-term effectiveness.

Operational measures of the conceptual criterion may vary along several dimensions:

  • Static
    Multidimensionality refers to two issues:
    - Task performance (activities) & Contextual performance (behaviors: also dark side: “workplace deviance” = voluntary behavior that violates organizational norms and thus threatens the well-being of the organizations/its members);
    - Maximum (peak level) & Typical performance (average level): correlated slightly.
     
  • Temporal (Dynamic)
    Criterion measurements are not independent of time. Substitute criteria (instead of ultimate), immediate (near-term) or intermediate (obtained later), must be used.
    Criteria may be dynamic in 3 ways:
    1. changes over time in average levels of group performance
    2. changes over time in validity coefficients (relation predictor/criterion scores)
        → two hypotheses:
                   a). Changing task model
                   b). Changing subjects model
    Initial performance tends to show some decay in its correlation with later performance!
    3. changes over time in the rank ordering of scores on the criterion.
     
  • Individual
    Determining whether the same job, as performed by different people, is psychologically the same or different. Some perform well due to quality X and others due to quality Y. 

Four challenges before human performance can be studied and better understood:

  • Job Performance (un)Reliability
    = the consistency or stability of job performance over time.
    - Intrinsic unreliability: personal inconsistency in job performance.
    - Extrinsic unreliability: due to factors external to job demands or the individual’s personality.
    The rank order of individuals based on performance scores is likely to fluctuate over time. Several factors explain this phenomenon. Solutions: aggregating (averaging) scores over time and paying more attention to the factors that produce this phenomenon.
  • Job Performance Observation
    Different methods of observing performance may lead to different conclusions. Reliability of performance becomes possible only when the reliability of judging performance is adequate.
  • Dimensionality of Job Performance
  • Modification of Performance by Situational Characteristics
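The first remedy listed for job-performance unreliability, aggregating scores over several occasions, can be quantified with the Spearman-Brown prophecy formula, a standard psychometric result (the reliability values below are illustrative, not from the book):

```python
# Spearman-Brown prophecy formula: reliability of a score averaged over
# k measurement occasions, given the reliability of a single occasion.
# Illustrates why averaging performance scores over time is a standard
# remedy for intrinsic/extrinsic unreliability.

def spearman_brown(r_single, k):
    return (k * r_single) / (1 + (k - 1) * r_single)

print(spearman_brown(0.40, 1))  # single occasion: 0.40
print(spearman_brown(0.40, 4))  # average of four occasions: ~0.73
```

The gain flattens as k grows, so in practice there is a point beyond which collecting more occasions buys little additional stability.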

“In situ performance” = the specification of the broad range of effects (situational, contextual, strategic, and environmental) that may affect individual, team, or organizational performance. Six possible extraindividual influences on performance:

  1. Environmental and Organizational Characteristics
  2. Environmental Safety
  3. Lifespace Variables (task challenge on first job assignment, life stability, supervisor-subordinate personality match, immediate supervisor’s success, job stress).
  4. Job & Location (origin of the agency, type of agency, number of agents, number of supervisors, tenure of the agency manager).
  5. Extraindividual Differences & Sales Performance
  6. Leadership

Steps in Criterion Development:

  1. Analyze jobs and needs of organizations;
  2. Develop measures that are related to required behavior in the job;
  3. Identify dimensions in performance measures using factor analysis/ cluster analysis;
  4. Develop reliable measures for each of the identified dimensions;
  5. Determine predictive validity for each of the criterion measures.

Utility → researchers seek the highest and most useful validity coefficient.
Understanding → researchers advocate construct validity.

How can we evaluate the usefulness of a given criterion?

  • Relevance
    It must be logically related to the performance domain in question. Objective and subjective measures are not interchangeable.
  • Discriminating (or Sensitivity)
    The use of a particular criterion measure is warranted only if it serves to reveal discriminable differences in job performance.
  • Practical
    It is important to keep criterion measurement practical.

Criteria measuring problems:

  • Criterion Deficiency
    The measure does not cover the whole criterion domain.
     
  • Criterion Contamination
    Occurs when the operational or actual criterion includes variance that is unrelated to the ultimate criterion. 2 parts:
    Error = random variation
    Bias = systematic criterion contamination, can correlate with predictor measures.
    Three important and likely sources of bias:
    1. Bias due to knowledge of predictor information
    2. Bias due to group membership
    3. Bias in ratings
     
  • Criterion Equivalence
    Two criteria are equivalent if it makes no difference which one is used. Equivalence: the two criterion measures correlate highly.
     
  • Different dimensions: Composite Criterion (= overall measure of success) vs. Multiple Criterion (= more meaningful, for research purposes)
    Adequate measurement of job performance requires multidimensional criteria. Although the combination of multiple criteria into a composite is often done subjectively, a quantitative weighting scheme makes explicit the importance placed on each of the criteria used to form the composite. Forming a composite requires careful consideration of the relative importance of each criterion measure.
    When used for managerial decision-making purposes, criterion dimensions must be combined into a composite representing overall worth to the organization.
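Forming a composite with an explicit quantitative weighting scheme amounts to a weighted sum of the criterion measures. The dimensions, weights, and scores below are fabricated for illustration:

```python
# Composite criterion = weighted sum of multiple criterion measures.
# Making the weights explicit is what turns a subjective combination
# into a documented, defensible one. All values are hypothetical.

weights = {"sales_volume": 0.5, "customer_satisfaction": 0.3, "teamwork": 0.2}

def composite(scores, weights):
    # Weights should reflect relative importance and sum to 1.
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[dim] * scores[dim] for dim in weights)

employee = {"sales_volume": 80, "customer_satisfaction": 90, "teamwork": 70}
print(composite(employee, weights))
```

For research purposes the three dimension scores would be kept separate (multiple criteria); the single composite is what supports a managerial decision about overall worth.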
     

Research Design and Criterion Theory
See figure 4-3, p. 70. Criterion-related →
- a predictor (BF score) should be related to an operational criterion (sales made) measure;
- the operational criterion (sales made) measure should be related to the performance domain it represents (selling furniture).
Composite criteria models focus on outcomes, multiple criteria focus on behaviors, together they form a performance domain.

4. Performance

Performance management = a continuous process of identifying, measuring, and developing the performance of individuals and teams and aligning performance with strategic goals of the organization. Not a one-time event.
Performance appraisal = the systematic description of job-relevant strengths and weaknesses within and between employees or groups.

There are technical and human issues involved. In addition, performance management needs to be placed within the broader context of the organization’s vision, mission, and strategic priorities.

Purposes for performance management

  • Strategic → link employee activities with the organization’s mission and goals.
  • Communication → they allow employees to know how they are doing and what the organizational expectations are regarding their performance.
  • Employment decisions → information gathered by the performance management system can serve as predictors and as key input for administering a formal organizational reward and punishment system, including promotional decisions.
  • Criteria → in HR research (e.g., in test validation).
  • Developmental → establish objectives for training programs.
  • Feedback → to employees for personal development.
  • Organizational diagnosis, maintenance, and development
  • Records → to keep proper records to document HR decisions and legal requirements.

Realities of Performance Management Systems

  1. Expected in all organizations. They need to know if individuals are performing competently, etc.
  2. Appraisal is fraught with consequences for individuals (rewards/punishments) and organizations.
  3. As job complexity increases, it becomes progressively more difficult to assign accurate, merit-based performance ratings.
  4. When sitting in judgment on coworkers, there is an ever-present danger of the parties being influenced by the political consequences of their actions.
  5. Implementation takes time and effort, and participants must be convinced the system is useful and fair.

Barriers to implementing effective Performance Management Systems:

  • Organizational → result when workers are held responsible for errors that may be the result of built-in organizational systems.
  • Political → stem from deliberate attempts by raters to enhance or to protect their self-interests when conflicting courses of action are possible.
  • Interpersonal → arise from the actual face-to-face encounter between subordinate and superior.

Fundamental requirements of successful Performance Management Systems:

  1. Congruence with Strategy
  2. Thoroughness
  3. Practicality: available, plausible, acceptable, easy to use.
  4. Meaningfulness
  5. Specificity
  6. Discriminability
  7. Reliability and Validity
  8. Inclusiveness
  9. Fairness and Acceptability

These nine key requirements indicate that performance appraisal should be embedded in the broader performance management system and that a lack of understanding of the context surrounding the appraisal is likely to result in a failed system.

Performance appraisal →
1). Observation:  detection, perception, and recall or recognition of specific behavioral events.
2). Judgment:         categorization, integration, and evaluation of information.

In practice, observation and judgment represent the last elements of a three-part sequence:
* Job analysis (requirements for a job)
* Performance standards (translate requirements in acceptable/unacceptable)
* Performance appraisal (job-relevant strengths and weaknesses of each individual)

Our goal in performance appraisal is not to make distinctions among jobs, but rather to make distinctions among people, especially those performing the same job.
Standards are usually constant across individuals in a given job, while goals are often determined individually or by a group. It is often the case that charges of unequal treatment and unfair discrimination arise in jobs where no clear performance standards exist.

 

Who shall rate?
In addition to being cooperative and trained in the techniques of rating, raters must have direct experience with, or firsthand knowledge of, the individual to be rated.

  • Immediate Supervisor
    Responsible for managing the overall appraisal process. He/she is also responsible for reward and punishment.
    360-degree feedback = including input from peers, subordinates, and customers, which certainly increases the types and amount of information about performance that is available. If a supervisor has recently received a positive evaluation regarding his or her own performance, he or she is also likely to provide a positive evaluation regarding his or her subordinates.
  • Peers
    Peer nominations: identifying persons with extremely high or low levels of KSAOs.
    Peer rating: providing feedback.
    Peer ranking: discriminating various levels of performance on each dimension.
    * One possible solution that might simultaneously increase feedback value and decrease the perception of friendship bias is to specify clearly the performance criteria on which peer assessments are based.
    * A second problem with peer assessments is that they seem to include more common method variance than assessments provided by other sources. Two types of remedies to address this problem:
    - Procedural remedies
    - Statistical remedies
  • Subordinates
    Subordinates offer a somewhat different perspective on a manager’s performance. Subordinate ratings have been found to be valid predictors of subsequent supervisory ratings over two-, four-, and seven-year periods. Averaging has several advantages:
    1. Averaged ratings are more reliable than single ratings
    2. Averaging helps to ensure the anonymity of the subordinate raters.
    Subordinate ratings are of significantly better quality when used for developmental purposes rather than administrative purposes.
  • Self
    Positive → especially when combined with goal setting, self-appraisal can improve the individual’s motivation and reduce his or her defensiveness during an appraisal interview.
    Negative → self-ratings show more leniency, less variability, more bias, and less agreement with the judgments of others.
    Self- and supervisor ratings agree much more closely when both parties have a thorough knowledge of the appraisal system or process.
    To improve the validity of self-appraisals, consider four research-based suggestions:
    1. Relative scale (“below average”, etc.) instead of absolute scale (“poor”)
    2. Provide multiple opportunities
    3. Reassurance of confidentiality (will not be publicized)
    4. Focus on the future
  • Clients / Consumers
    In jobs that require a high degree of interaction with the public or with particular individuals. They can provide useful information.

In practice, appraising performance is not strictly an individual task. Supervisors often use information from outside sources in making performance judgments.
Groups can be a useful mechanism for improving the accuracy of performance appraisals under two conditions:
1. the task needs to have a necessarily correct answer.
2. the magnitude of the performance cue should not be too large.

Table 5-1 (p. 83).
 

Multiple raters for the same individual may be drawn from different organizational levels, and they probably observe different facets of a ratee’s job performance.
In general, it does not make sense to assess the extent of interrater agreement without first establishing measurement equivalence (= measurement invariance), because a lack of agreement may be due to a lack of measurement equivalence.
- Measurement equivalence needs to be established before ratings can be assumed to be directly comparable.
- Raters should use a hybrid multitrait-multirater analysis: raters make evaluations only on those dimensions that they are in a good position to rate and that reflect measurement equivalence → improved conceptual fit for analyzing performance ratings; the probability of obtaining convergent and discriminant validity is probably higher than for the traditional multitrait-multirater analysis.
- Confirmatory Factor Analysis → treats each performance dimension as a latent factor and assesses the extent to which these factors are correlated with each other.
 

Most commonly observed judgmental biases:

  • Leniency (rating people extremely easily) and Severity (rating people extremely harshly).
    Can be controlled or eliminated in several ways:
    1). Allocating ratings into a forced distribution;
    2). Requiring supervisors to rank order their subordinates;
    3). Encouraging raters to provide feedback on a regular basis, thereby reducing rater and ratee discomfort with the process;
    4). Increasing raters’ motivation to be accurate by holding them accountable for their ratings.
     
  • Central Tendency
    Raters avoid using the high and low extremes of rating scales and tend to cluster all ratings around the center of all scales: “everybody is average”. Solution: make clear anchors.
     
  • Halo
    Judgment based on an overall impression: the presence of a certain quality suggests to the observer that other qualities are present as well.
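The first remedy for leniency/severity listed above, allocating ratings into a forced distribution, can be sketched as follows: rank ratees by raw rating, then assign them to categories in fixed proportions. Names, ratings, and the 20/60/20 split are illustrative:

```python
# Forced distribution: even if a lenient rater gives everyone high raw
# scores, fixed proportions of ratees must land in each category.
# Ties are broken by the order ratees appear in the input (sort is stable).

def forced_distribution(ratings, proportions=(0.2, 0.6, 0.2)):
    ranked = sorted(ratings, key=ratings.get)  # lowest rated first
    n = len(ranked)
    n_low = round(n * proportions[0])
    n_high = round(n * proportions[2])
    buckets = {}
    for i, name in enumerate(ranked):
        if i < n_low:
            buckets[name] = "low"
        elif i >= n - n_high:
            buckets[name] = "high"
        else:
            buckets[name] = "middle"
    return buckets

ratings = {"Ann": 9, "Bo": 9, "Cy": 8, "Di": 7, "Ed": 6}  # lenient rater
print(forced_distribution(ratings))
```

The trade-off is visible in the tie between Ann and Bo: the forced split must separate ratees whose raw ratings are identical, which is one common objection to this remedy.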

Types of performance measures:
Correlations between objective and subjective measures are often low.

  • Objective = Production data (dollar volume of sales, number of errors) as well as employment data (accidents, turnover, absences). Theoretical and practical limitations often make them unsuitable.
     
  • Subjective = judgmental supervisor about quality of work. Relative (compared with others) or absolute.

Rating Systems:

  • Relative
    1). Helpful in employment decisions,
    2). Control leniency, severity and central tendency
    3). No indication of relative distance between individuals
    4). Less reliable for rankings in the middle

    - Rank Ordering → Simple ranking (from highest to lowest) or Alternation ranking (first choose the best and the worst, then the second best and second worst, etc.).
    - Paired Comparisons → choose the better of each pair.
    - Forced Distribution
     

  • Absolute
    Describe a ratee without making direct reference to other ratees.

    - (Narrative) Essay → can provide detailed feedback, but unstructured.
    - Behavioral Checklist
    - Forced-Choice System → a special type of behavioral checklist, designed to reduce leniency errors and establish objective standards of comparison between individuals (discriminability and preference).
    - Critical Incidents → things employees did that were especially (in)effective.
    - Graphic Rating Scale (figure 5-4, p. 92) → In terms of the amount of structure provided, scales differ in three ways:
      1). the degree to which the meaning of the response categories is defined;
      2). the degree to which the individual who is interpreting the ratings can tell clearly what response was intended;
      3). the degree to which the performance dimension being rated is defined for the rater.
      Anchoring = in order to make meaningful distinctions in performance within dimensions, scale points must be defined clearly for the rater.
      Graphic rating scales may not yield the depth of information that narrative essays or critical incidents do, but they: 1). are less time-consuming to develop and administer; 2). permit quantitative results to be determined; 3). promote consideration of more than one performance dimension; 4). are standardized and therefore comparable across individuals.
    - Behaviorally Anchored Rating Scale (BARS) → behavioral anchors help raters rate; developed through retranslation. BARS development is a long, painstaking process that may require many individuals and may not be practical for many organizations.
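The paired-comparison method described above can be sketched directly: enumerate every pair of ratees, record which member of each pair the rater judged better, and score each ratee by comparisons won. The names and the rater's choices are fabricated:

```python
# Paired comparisons: the rater chooses the better of each pair; a
# ratee's rank is determined by the number of comparisons won.
from itertools import combinations

ratees = ["Ann", "Bo", "Cy"]
# Hypothetical judgments, one per unordered pair:
choices = {("Ann", "Bo"): "Ann", ("Ann", "Cy"): "Ann", ("Bo", "Cy"): "Cy"}

wins = {r: 0 for r in ratees}
for pair in combinations(ratees, 2):
    wins[choices[pair]] += 1

ranking = sorted(ratees, key=lambda r: wins[r], reverse=True)
print(wins, ranking)
```

A practical limit of the method is also visible here: the number of judgments grows as n(n-1)/2, so 20 ratees already require 190 comparisons.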

1). Build in as much structure as possible in order to minimize the amount of discretion exercised by a rater.
2). Don’t require raters to make judgments that they are not competent to make.
3). Recognize that the process of performance appraisal (not just the mechanics), determines the overall effectiveness of this essential component of all performance management systems.

Performance of teams
Given the popularity of teams, it makes sense for performance management systems to target not only individual performance but also an individual’s contribution to the performance of his/her team(s), as well as the performance of teams as a whole. The assessment of team performance does not imply that individual contributions should be ignored. Three different types of teams:
1). Work or service teams;
2). Project teams;
3). Network teams.
Interpersonal relationships among the team members play a central role in the resulting ratings. 

Rater Training
Three objectives:

  1. Improve the observational skills of raters by teaching them what to attend to;
  2. Reduce or eliminate judgmental biases;
  3. Improve the ability of raters to communicate performance information to ratees in an objective and constructive manner.

Regarding unintentional errors, rater error training (RET) exposes raters to the different errors and their causes.
Frame-of-Reference (FOR) Training is most effective in improving the accuracy of performance appraisals. FOR training provides trainees with a “theory of performance” that allows them to understand the various performance dimensions, how to match these performance dimensions to ratee behaviors, how to judge the effectiveness of various ratee behaviors, and how to integrate these judgments into an overall rating of performance.

Recommendations regarding issues that should be explored further:

  1. Social power, influence and leadership;
  2. Trust;
  3. Social exchange;
  4. Group dynamics and close interpersonal relationships.

Feedback generally has a positive effect on performance, but 38% of the feedback interventions studied had a negative effect on performance. Information regarding performance is usually gathered from more than one source. Ideally, a continuous feedback process should exist between superior and subordinate so that both may be guided. In many organizations, electronic performance monitoring (EPM) is common practice.

Raters should engage in several activities before, during, and after appraisal interviews:

  • Communicate frequently
  • Get training in appraisal
  • Judge your own performance first
  • Encourage subordinate preparation
  • Use “priming” information
  • Warm up and encourage participation
  • Judge performance, not personality or self-concept
  • Be specific
  • Be an active listener
  • Avoid destructive criticism and threats to the employee’s ego
  • Set mutually agreeable and formal goals
  • Continue to communicate and assess progress toward goals regularly
  • Make organizational rewards contingent on performance

5. Measuring and Reliability

Tests can be classified according to three criteria → content, administration, and scoring.

Reliable measures are dependable, consistent, and relatively free from unsystematic errors of measurement.
Measurement = the assignment of numerals to objects or events according to rules → answers “how much?”. The definition says nothing about the quality of the measurement procedure.

Psychological measurement: individual differences in psychological traits.

 

Scales of measurement: qualitative & quantitative.
 

Scale      Operations                               Formula
Nominal*   Equality                                 (a = b) or (a ≠ b), but not both
Ordinal*   Equality;                                If [(a > b) and (b > c)], then (a > c);
           Ranking (transitivity)                   if [(a = b) and (b = c)], then (a = c)
Interval   Equality; Ranking;                       (d – a) = (c – a) + (d – c);
           Equal-sized units (additivity)           X’ = a + bX, where X’ = transformed score,
                                                    a & b = constants, X = original score
Ratio      Equality; Ranking; Equal-sized units;
           True (absolute) zero

Psychological measures are mostly nominal- or ordinal-level scales*. Intelligence, aptitude, and personality scales are ordinal-level measures (not amounts, rather ranks). Yet, we can often assume an equal-interval scale.
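The interval-scale property in the table above, that any linear transformation X' = a + bX preserves the meaning of score differences, can be demonstrated with the classic Celsius-to-Fahrenheit case:

```python
# Interval scales permit linear transformations X' = a + bX (b > 0):
# the zero point is arbitrary, but differences between scores keep
# their meaning. Celsius -> Fahrenheit uses a = 32, b = 9/5.

def linear_transform(x, a, b):
    return a + b * x

celsius = [0, 10, 20, 30]
fahrenheit = [linear_transform(c, a=32, b=9 / 5) for c in celsius]
print(fahrenheit)

# Equal intervals are preserved: equal 10-degree gaps map to equal gaps.
gaps = [fahrenheit[i + 1] - fahrenheit[i] for i in range(len(fahrenheit) - 1)]
print(gaps)
```

Ratios, by contrast, are not preserved (20°C is not "twice as hot" as 10°C after the transform), which is exactly why ratio statements require a true zero.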

Physical measurements are evaluated in terms of the degree to which they satisfy the requirements of order, equality, and addition.

The most important purpose of psychological measures is decision making.
In personnel selection → accept or reject;
in placement → which alternative course of action to pursue;
in diagnosis → which remedial treatment is called for;
in hypothesis testing → the accuracy of the theoretical formulation;
in evaluation → what score to assign to an individual or procedure.

HR specialists are confronted with the tasks of selecting and using psychological measurement procedures, interpreting results, and communicating the results to others.
Test = any psychological measurement instrument, technique or procedure. Testing is systematic in 3 areas: content, administration, and scoring.

Steps for selecting and creating new tests/ measures:

  1. Determining a measure’s purpose
  2. Defining the attribute
  3. Developing a measure plan
  4. Writing items
  5. Conducting a pilot study and traditional item analysis
    - distractor analysis (evaluate multiple-choice items by the frequency with which each incorrect choice is selected)
    - item difficulty (evaluate how difficult it is to answer each item correctly)
    - item discrimination (evaluate whether the response to a particular item is related to responses on the other items included in the measure).
  6. Conducting an item analysis using Item Response theory (IRT)
    IRT explains how individual differences on a particular attribute affect the behavior of an individual when he or she is responding to an item. This specific relationship between the latent construct and the response to each item can be assessed graphically through an item-characteristic curve (Figure 6-1, p. 117). IRT can also be used to assess bias at the item level, because it allows a researcher to determine whether a given item is more difficult for examinees from one group than for examinees from another when they all have the same ability.
  7. Selecting items
  8. Determining reliability and gathering evidence for validity
  9. Revising and updating items.
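The traditional item statistics in step 5 can be sketched in a few lines of code. This is a minimal illustration with made-up 0/1-scored responses (rows = examinees, columns = items), not a production item-analysis routine:

```python
# Hypothetical 0/1-scored response matrix: 5 examinees x 4 items.
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
]

def item_difficulty(matrix):
    """Proportion of examinees answering each item correctly (p-value per item)."""
    n = len(matrix)
    k = len(matrix[0])
    return [sum(row[j] for row in matrix) / n for j in range(k)]

def item_discrimination(matrix, j):
    """Corrected item-total correlation: item j vs. the total of the
    remaining items (a simple discrimination index)."""
    item = [row[j] for row in matrix]
    rest = [sum(row) - row[j] for row in matrix]
    n = len(item)
    mi, mr = sum(item) / n, sum(rest) / n
    cov = sum((a - mi) * (b - mr) for a, b in zip(item, rest)) / n
    vi = sum((a - mi) ** 2 for a in item) / n
    vr = sum((b - mr) ** 2 for b in rest) / n
    if vi == 0 or vr == 0:
        return 0.0  # no variance -> index undefined; return 0 by convention
    return cov / (vi ** 0.5 * vr ** 0.5)
```

A high p-value means an easy item; a discrimination index near zero or negative flags an item that does not hang together with the rest of the measure.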

Methods of classifying tests:

  • CONTENT
    * Task:            - Verbal
                           - Non-verbal
                           - Performance

    * Process:      - Cognitive (tests)
                           - Affective (inventories)
     

  • ADMINISTRATION
    * Efficiency:    - Individual
                           - Group

    * Time:           - Speed (many easy items, strict time limit; e.g., 'number checking')
                           - Power (items vary in difficulty; ample time to attempt all items)
     

  • STANDARDIZATION
    * Standardized
    * Non-standardized
     

  • SCORING
    * Objective
    * Nonobjective

In addition to content, administration, standardization, and scoring, several additional factors need to be considered in selecting a test →

  • Cost
    Direct costs → price of software or test booklets, answer sheets, etc.
    Indirect costs → time to prepare the test materials, interviewer time, etc.
  • Interpretation
    Thorough awareness of the strengths and limitations of the measurement procedure, background of the examinee, the situation and the consequences for the examinee.
  • Face validity
    Whether the measurement procedure looks like it is measuring the trait in question.
     

Reliability and validity information should be gathered not only for newly created measures, but for any measure before it is put to use. Reliability matters because each applicant usually gets only one shot; to make that shot count, the measure should present the 'truest' possible picture of the person's abilities or personal characteristics. Reliability = freedom from unsystematic errors of measurement. Such errors reduce reliability and, therefore, the generalizability of a person's score from a single measurement.

The correlation/reliability coefficient is a particularly appropriate measure of such agreement.
2 purposes:
1). To estimate the precision of a particular procedure as a measuring instrument;
2). To estimate the consistency of performance on the procedure by the examinees.
! 2 includes 1 → it is possible to have unreliable performance on a reliable test, but reliable performance on an unreliable test is impossible.
The reliability coefficient may be interpreted directly as the percentage of total variance attributable to different sources (coefficient of determination, r²).
X = T + e → X = observed (raw) score, T = true score (free of measurement error), e = error.
 

  • Test-retest
    Coefficient of stability. Errors: administration (light, loud noises) or personal (mood).
    TEST/FORM A--------- RETEST/FORM A (TIME > 0)
     
  • Parallel (Alternate) Forms
    Coefficient of equivalence.
    * Random assignment → creating a large pool of items, with the only requirement being that they tap the same domain;
    * Incidental isomorphism → change surface characteristics of items that do not determine item difficulty, while leaving unchanged the structural features that do determine difficulty;
    * Item isomorphism → creating pairs of items.
    FORM A --------- FORM B (TIME = 0)
     
  • Internal Consistency (more questions to measure one characteristic)
    * Kuder-Richardson reliability estimates. Coefficient alpha.
    * Split-half reliability estimates. Select the items randomly for the two halves.
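    As a rough illustration of an internal-consistency estimate, coefficient alpha can be computed from an examinee-by-item score matrix (made-up data; a real analysis would use a statistics package):

```python
def cronbach_alpha(matrix):
    """Coefficient alpha. matrix: rows = examinees, columns = items.
    Uses sample variances (n - 1 in the denominator)."""
    k = len(matrix[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in matrix]) for j in range(k)]
    total_var = var([sum(row) for row in matrix])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

    When items are perfectly consistent, alpha approaches 1; weakly related items push it toward 0.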
     
  • Stability and Equivalence
    Coefficient of stability and equivalence. Combination of the test-retest and equivalence methods.
    FORM A --------------------FORM B (TIME > 0)
    3 types of errors → random-response errors, specific-factor errors, transient errors.
    - The coefficient of equivalence assesses the magnitude of measurement error produced by specific-factor and random-response processes, but not transient-error processes;
    - the coefficient of stability and equivalence assesses the impact of all three types of errors.
     
  • Interrater Reliability
    Can be estimated using 3 methods:
    1. Interrater agreement → % of rater agreement and Cohen's kappa
    2. Interclass correlation → used when 2 raters are rating multiple objects/individuals
    3. Intraclass correlation → how much of the difference among ratings is due to true differences among the individuals rated, and how much is due to errors of measurement.
    It is not a ‘real’ reliability coefficient, because it provides no information about the measurement procedure itself.
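    Interrater agreement (method 1) can be illustrated as percent agreement plus Cohen's kappa, which corrects agreement for chance; the rating data in the test are invented:

```python
def percent_agreement(r1, r2):
    """Proportion of cases on which two raters assign the same category."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    po = percent_agreement(r1, r2)                      # observed agreement
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)
```

    Kappa is lower than raw percent agreement whenever some agreement would be expected by chance alone.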

    If a procedure is to be used to compare one individual with another, reliability should be above .90.
    Although the accuracy of measurement may remain unchanged, the size of the reliability estimate will vary with the range of individual differences in the group → as the variability of the scores increases (decreases), the correlation between them also increases (decreases).
    Sample must be large and representative (figure 6-10, p. 131).

    Standard error of measurement = a statistic expressed in test-score (standard deviation) units, but derived directly from the reliability coefficient. Useful because it enables us to talk about an individual's true and error scores (figure 6-11, p. 131).
    Useful in 3 ways, to determine whether:
    1. the measures describing individuals differ significantly;
    2. an individual measure is significantly different from some hypothetical true score;
    3. a test discriminates differently in different groups.
    A final advantage is that it forces us to think of test scores not as exact points, but rather as bands or ranges of scores.
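    The standard error of measurement follows directly from the standard deviation and the reliability coefficient (SEM = SD·√(1 − rxx)). A small sketch, using the conventional 1.96 multiplier for an approximate 95% band around an observed score:

```python
import math

def sem(sd, rxx):
    """Standard error of measurement: SEM = SD * sqrt(1 - rxx)."""
    return sd * math.sqrt(1 - rxx)

def score_band(observed, sd, rxx, z=1.96):
    """Approximate 95% band around an observed score, treating the
    score as a range rather than an exact point."""
    e = sem(sd, rxx)
    return (observed - z * e, observed + z * e)
```

    With SD = 10 and rxx = .91, SEM = 3.0, so an observed score of 100 is better read as roughly 94–106 than as exactly 100.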

    Scale coarseness: regardless of whether the scale includes one or multiple items, information is lost due to scale coarseness; two individuals with true scores of 4.4 and 3.6 will appear to have an identical score of 4.0. Likert-type and ordinal scales are coarse. In contrast to the effects of measurement error, the error caused by scale coarseness is systematic and the same for each item. Effect → the relationship between constructs appears weaker than it actually is. Solutions →
    1. Use a continuous graphic-rating scale instead of Likert-type scales.
    2. Use a statistical correction procedure after data are collected.

    Generalizability theory = frames the reliability of a test score as the precision with which that score, as a sample, represents the more generalized universe value of the score.
    An examinee’s universe score is defined as the expected value of his or her observed scores over all admissible observations.
    The use of generalizability theory involves conducting two types of research studies: a generalizability (G) study and a decision (D) study. A test has not one generalizability coefficient, but many. The application of generalizability theory revealed that subordinate ratings were of significantly better quality when made for developmental rather than administrative purposes, but the same was not true for peer ratings.

    A raw score (e.g., "48") is meaningless by itself, because psychological measurement is relative rather than absolute → we need to compare raw scores with those of a norm group:
    - percentile ranks (ordinal)
    - standard scores/z-scores (interval) → disadvantage = decimals and negative numbers, so → transform to T-scores (T = 50 + 10z; figure 6-12, p. 138).
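    The z- and T-score transformations can be sketched as follows (hypothetical raw scores; T = 50 + 10z removes the decimals and negative values of z):

```python
def z_scores(raw):
    """Standardize raw scores using the group mean and (population) SD."""
    n = len(raw)
    m = sum(raw) / n
    sd = (sum((x - m) ** 2 for x in raw) / n) ** 0.5
    return [(x - m) / sd for x in raw]

def t_scores(raw):
    """T-scores: mean 50, standard deviation 10."""
    return [50 + 10 * z for z in z_scores(raw)]
```

    A raw score at the group mean becomes z = 0 and T = 50; scores below the mean get T-scores below 50 but stay positive.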
     

6. Validity

 

Reliability is a necessary, but not sufficient, condition for validity. The lower the reliability, the lower the maximum possible validity → Rxy ≤ √Rxx.
Rxy = validity coefficient (the correlation between scores on procedure X and external criterion Y).
Rxx = reliability coefficient.

Correction for attenuation in the criterion variable:

Rxt = Rxy / √Ryy → Rxt = the correlation between scores on the procedure and the criterion "true score".

Using different reliability estimates is likely to lead to different conclusions regarding validity. An underestimation of Ryy produces an overestimation of the validity coefficient.
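The correction is a one-liner; the numbers below are illustrative only:

```python
import math

def correct_for_attenuation(rxy, ryy):
    """Rxt = Rxy / sqrt(Ryy): the estimated correlation between the
    predictor and an error-free ('true') criterion."""
    return rxy / math.sqrt(ryy)
```

With Rxy = .30 and criterion reliability Ryy = .64, the corrected validity is .30 / .80 = .375; note how an understated Ryy would inflate the result.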

* Traditionally, validity was viewed as the extent to which a measurement procedure actually measures what it is designed to measure.
* Validation = the investigative process of gathering or evaluating the necessary data:
1). WHAT a test or procedure measures;
2). HOW WELL it measures.
* Validity is not a dichotomous variable (valid or not valid), but a matter of degree.
* No different “kinds” of validity, but only different kinds of evidence for analyzing validity.
* It is the inferences regarding the specific uses of a test or other measurement procedure that are validated, not the test itself.
* Validity is an evolving property and validation is a continuing process.
 

  1. Content-related evidence
    The extent to which items cover the intended domain.

    Whether or not a measurement procedure contains a fair sample of the universe of situations it is supposed to represent (adequacy of the sampling). 3 assumptions:
    1. the area of concern to the user can be conceived as a meaningful, definable universe of responses;
    2. a sample can be drawn from the universe in some purposeful, meaningful fashion;
    3. the sample and the sampling process can be defined with sufficient precision to enable the user to judge how adequately the sample represents performance in the universe.
    * If a selection procedure focuses on work products (e.g., typing), then content-related evidence is appropriate. If the focus is on work processes (e.g., reasoning ability), then content-related evidence is not appropriate (because such processes are not directly observable).
    * The distinction between a content-related strategy and a construct-related strategy is a matter of degree, because constructs underlie all psychological measurement. Content-related is a precondition for construct-related validity evidence.

    * Steps:
    1. Conduct a job analysis;
    2. Share the list of KSAO’s with Subject Matter Experts (SME’s);
    3. They should think of an individual who is a newly appointed job incumbent;
    4. Think about alternative items;
    5. Keep minimum qualifications straightforward and express them using the same format;
    6. Ask SME’s to rate the list of potential items independently;
    7. Link each of the potential items back to the KSAO’s;
    8. Group potential items in a thoughtful manner.
    * Content-related evidence may be evaluated in terms of the extent to which members of a content-evaluation panel perceive overlap between the test and the job-performance domain.
    *              - Content-Validity Index (CVI) → content-validity ratio (CVR)
                   - Substantive-Validity Index
                   - Content-Adequacy procedure
                   - Analysis-of-Variance Approach.
    These procedures illustrate that content-related evidence is concerned primarily with inferences about test construction rather than with inferences about the test scores.

    * Although it does have its limitations, it has made a positive contribution by directing attention toward (1) improved domain sampling and job analysis procedures, (2) better behavior measurement, (3) the role of expert judgment in confirming the fairness of sampling and scoring procedures and in determining the degree of overlap between separately derived content domains.
     

  2. Criterion-related evidence
    Empirical relationship between predictor and criterion scores.

    Predictive study = oriented toward the future and involves a time interval during which events take place → "Is it likely that Laura will be able to do the job?"
    *              1. Measure candidates for the job;
                   2. Select candidates without using the results of the measurement procedure;
                   3. Obtain measurements of criterion performance at some later date;
                   4. Assess the strengths of the relationship between predictor and criterion.
    * Statistical power: the probability of rejecting a null hypothesis when it is false:
    Parameters:       1. The power of the test (1–ß)
                               2. Type I error (α)
                               3. Sample size, N (power increases as N increases)
                               4. Effect size (power increases, as effect size increases).
    * A power analysis should be conducted before a study is conducted. 
    * "small" (.10), "medium" (.30), or "large" (.50) effects.
    * Assuming that multiple predictors are used in a validity study and that each predictor accounts for some unique criterion variance, the effect size of a linear combination of the predictors is likely to be higher than the effect size of any single predictor.
    * When has an employee been on the job long enough to appraise his or her performance? When there is evidence that the initial learning period has passed (typically after about six months).

    Concurrent study = oriented toward the present and reflects only the status quo at a particular time → "Can Laura do the job now?"
    * Criterion measures usually are substitutes for other, more important, costly, complex performance measures. Valuable only if:
    1). There is a relationship between the convenient/accessible measure and the costly/complex measure;
    2). The use of the substitute measure is more effective.
    * With cognitive ability tests, concurrent studies often are used as substitutes for predictive studies.
    * This design ignores the effects of motivation and job experience on ability.
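A back-of-the-envelope power calculation for a validity coefficient can use the Fisher z approximation. This sketch assumes a two-tailed test at α = .05 (z-critical ≈ 1.96); it is an approximation, not the exact power function:

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_correlation(r, n, z_crit=1.96):
    """Approximate power for testing H0: rho = 0 (two-tailed, alpha = .05)
    via the Fisher z transformation; n must exceed 3."""
    z = math.atanh(r) * math.sqrt(n - 3)
    return normal_cdf(z - z_crit) + normal_cdf(-z - z_crit)
```

For a "medium" effect (r = .30), roughly 85 examinees yield power near .80, which is why small validation samples so often fail to detect real validity.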
     

Factors affecting the size of obtained validity coefficients:

  • Range Enhancement
  • Range Restriction
    Because the size of the validity coefficient is a function of two variables, restricting the range (truncating or censoring) of either the predictor or the criterion will lower the validity coefficient (figure 7-1, p. 150). Selection effects on validity coefficients result from changes in the variance(s) of the variable(s).
    → Direct range restriction & indirect/incidental range restriction (when an experimental predictor is administered to applicants, but is not used as a basis for selection decisions).
    * The range of scores also may be narrowed by preselection, when a predictive validity study is undertaken after a group of individuals has been hired, but before criterion data become available for them.
    * Selection at the hiring point reduces the range of the predictor variable(s), and selection on the job or during training reduces the range of the criterion variable(s).
     

To correct for range restriction, 3 scenarios apply (formulas → figures 7-4, 7-5, 7-6, p. 151):
1). Direct restriction on the predictor, no third variable involved, and the unrestricted variance is known (see formula);
2). Selection takes place on one variable (either the predictor or the criterion), but the unrestricted variance is not known;
3). Incidental restriction takes place on a third variable z, and the unrestricted variance on z is known.
→ In practice, there may be range-restriction scenarios that are more difficult to address with corrections. These include:
1. those where the unrestricted variance on the predictor, the criterion, or the third variable is unknown; and
2. those where there is simultaneous or sequential restriction on multiple variables.

Multivariate-correction formula = can be used when direct restriction (on one or two variables) and incidental restriction take place simultaneously. The equation can also be used repeatedly when restriction occurs on a sample that is already restricted → the computer program RANGEJ makes this correction easy to implement.
There are several correction procedures available. Criterion-related validation efforts focusing on a multiple-hurdle process should consider appropriate corrections that take into account that range restriction, or missing data, takes place after each test is administered. Corrections are appropriate only when they are justified based on the target population (the population to which one wishes to generalize the obtained corrected validity coefficient).
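One widely used correction for direct range restriction (often called Thorndike's Case 2) needs only the restricted validity and the two predictor standard deviations; a minimal sketch with illustrative numbers:

```python
import math

def correct_range_restriction(r, sd_restricted, sd_unrestricted):
    """Thorndike Case 2: correct a validity coefficient observed in a
    range-restricted sample back to the unrestricted (applicant) group.
    u = ratio of unrestricted to restricted predictor SDs."""
    u = sd_unrestricted / sd_restricted
    return (r * u) / math.sqrt(1 + r * r * (u * u - 1))
```

If the applicant-pool SD is twice the incumbent SD, an observed r of .30 corrects to about .53; with no restriction (u = 1) the coefficient is unchanged.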

Homoscedasticity = the computation of the Pearson product-moment correlation coefficient assumes that both variables are normally distributed, that the relationship is linear, and that, when the bivariate distribution of scores (from low to high) is divided into segments, the column variances are equal = the data points are evenly distributed around the regression line, and the measure predicts as well at high score ranges as at low score ranges (in other words, the variance of the residuals is independent of the value of the variables).

  3. Construct-related evidence
    The understanding of a trait or construct that a test measures.

    A conceptual framework specifies the meaning of the construct, distinguishes it from other constructs, and indicates how measures of the construct should relate to other variables. Provides the evidential basis for the interpretation of scores.
    Construct validation is not accomplished in a single study; it requires an accumulation of evidence derived from many different sources to determine the meaning of the test scores and an appraisal of their social consequences → both a logical and an empirical process.

    The construct is defined not by an isolated event, but, rather, by a nomological network. Information relevant either to the construct or to the theory surrounding the construct may be gathered from a wide variety of sources.

     

Convergent validation = agreement between scores on the measure and scores on other measures of the same or a similar construct.
Discriminant validation = low agreement between scores on the measure and measures of different constructs.

Multitrait-multimethod matrix (MTMM) = an approach to examining construct validity → figure 7-2, p. 155. In this approach, reliability is estimated by two measures of the same trait using the same method, while validity is defined as the extent of agreement between two measures of the same trait using different methods. This shows that the concepts of reliability and validity are intrinsically connected, and a good understanding of both is needed to gather construct-related validity evidence.
Limitations:
1). The lack of quantifiable criteria;
2). The inability to account for differential reliability;
3). The implicit assumptions underlying the procedure.

Cross-validity = the degree to which the weights derived from one sample can predict outcomes to the same degree in the population as a whole or in other samples drawn from the same population. There are procedures available to compute cross-validity:
- Empirical cross-validation → fitting a regression model in one sample and applying the resulting regression weights to a second, independent cross-validation sample.
- Statistical cross-validation → adjusting the sample-based multiple correlation coefficient (R) by a function of sample size (N) and the number of predictors (k).
Cross-validation, including rescaling and reweighting of items if necessary, should be continual (recommended annually), for as values change, jobs change, and people change, so also do the appropriateness and usefulness of inferences made from test scores.
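Statistical cross-validation adjusts R for capitalization on chance. Formulas differ; as one common illustration (a Wherry-type adjustment of R², not necessarily the exact formula the book presents):

```python
def adjusted_r_squared(r2, n, k):
    """Shrink a sample R**2 as a function of sample size n and the
    number of predictors k (Wherry-type adjustment)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)
```

With R² = .25 from n = 100 cases and k = 5 predictors, the adjusted value drops to about .21; the smaller the sample and the more predictors, the larger the shrinkage.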

In many cases, local validation may not be feasible due to logistical or practical constraints. Several strategies are available to gather validity evidence in such situations:

  • Synthetic Validity
    Process of inferring validity in a specific situation from a systematic analysis of jobs into their elements, a determination of test validity for these elements, and a combination or synthesis of the elemental validities into a whole.
  • Test Transportability
    To use locally a test that has been validated elsewhere, without conducting a local validation study, evidence must be provided on several points.
  • Validity Generalization (VG)
    Meta-analyses conducted with the goal of testing the situational-specificity hypothesis have been labeled psychometric meta-analyses or VG studies. VG evidence allows small organizations to implement tests that have been used elsewhere without the need to collect data locally. The use of VG evidence alone, however, is not recommended.
     

Empirical Bayes analysis = this approach involves first calculating the average inaccuracy of a meta-analysis and of a local validity study under a wide variety of conditions, and then computing an empirical Bayesian estimate, which is a weighted average of the meta-analytically derived and local study estimates.

7. Fairness

Unfair discrimination = when persons with equal probabilities of success on the job have unequal probabilities of being hired for the job.
Although fairness is a socially constructed concept and is defined in different ways, test bias is a psychometric concept and it has been defined quite clearly.

The concept of differential validity is distinct from differential prediction.
Equal correlations do not necessarily imply equal standard errors of estimate, nor do they necessarily imply equal slopes or intercepts of group regression equations.

Differential validity = when the scores of a test have significantly different degrees of validity for different groups of people (e.g., males vs. females).
Figure 8-1 and 8-2, p. 169: no differential validity.

An important consideration in assessing differential validity is whether the test in question produces adverse impact.
Adverse impact
= members of one group are selected at substantially greater rates than members of another group. To understand whether that is the case, compare selection ratios across groups.

SR1 (selection ratio, group 1) / SR2 (selection ratio, group 2) = adverse-impact ratio. 80% rule: if the adverse-impact ratio < .80 → adverse impact.
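The 80% (four-fifths) rule can be computed directly from the two selection ratios; the hiring counts below are hypothetical:

```python
def selection_ratio(n_hired, n_applicants):
    """Proportion of applicants in a group who are selected."""
    return n_hired / n_applicants

def adverse_impact_ratio(sr_focal, sr_reference):
    """Ratio of the focal group's selection ratio to the reference
    group's selection ratio."""
    return sr_focal / sr_reference

# Hypothetical example: 10 of 50 minority vs. 20 of 50 majority applicants hired.
ai = adverse_impact_ratio(selection_ratio(10, 50), selection_ratio(20, 50))
flagged = ai < 0.80  # 80% rule: ratios below .80 indicate adverse impact
```

Here the ratio is .20 / .40 = .50, well below the .80 threshold, so adverse impact would be flagged.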

Figure 8-3, p. 170: Adverse Impact is justifiable, since minority groups score lower.
Figure 8-4, p. 170: Valid predictor for whole group, invalid for each group separately.
Figure 8-5, p. 171: Equal validity, unequal predictor means. As long as the expectancy of success on the job is equal for the two groups, the use of separate cut off scores is justified.
Figure 8-6, p. 171: Similar validity, dissimilar criterion means.
Figure 8-7, p. 172: Only valid for majority.
Figure 8-8, p. 172: Dissimilar criterion means, valid only for majority.
→ There are numerous possibilities when heterogeneous groups are combined in making predictions. When there is differential validity, the use of a single regression line, cut score, or decision rule can lead to serious errors in prediction.

Evidence of differential validity provides information only on whether a selection device should be used to make comparisons within groups. A selection procedure may be fair and yet predict performance inaccurately and vice versa.

Differential Validity exists when:
1). There is a significant difference between the validity coefficients obtained for two subgroups;
2). The correlations found in one or both of these groups are significantly different from zero.

Single-group validity = in which a given predictor shows validity significantly different from zero for one group only, and there is no significant difference between the two validity coefficients. 

Using tests with adverse impact can lead to negative organizational and societal consequences and perceptions of test unfairness. It is recommended that, when adverse impact is found, HR specialists strive to use alternative tests with similar levels of validity, but less adverse impact. It would be more efficient to reduce adverse impact by using available testing procedures. Countering/ reducing adverse impact:

  • Improve recruiting strategy for minorities (reduce also the overall validity);
  • Use cognitive abilities in combination with non-cognitive predictors;
  • Use multiple regression and other methods for combining predictors into a composite;
  • Use more specific than general cognitive predictors (differences are smaller for specific abilities);
  • Use differential weighting for the various criterion facets;
  • Use alternate modes of presenting test stimuli (using formats that do not have heavy reading and verbal requirements, such as video-based tests);
  • Enhance face validity (increasing motivation can help reduce adverse impact);
  • Implement test-score banding* (to select among the applicants).
Note that adverse impact may occur even when there is no differential validity across groups.

Tests are never perfectly reliable, and the relationship between test scores and criteria is never perfect. Test-score banding* = a decision-making process that is based on these two facts.

Sliding-band method = an attempt to bring together economic and social objectives within the framework of generally accepted procedures for testing hypotheses about differences in individual test scores.

Criterion-reference banding model = Procedure that incorporates not only reliability information for the predictor but also reliability information for the criterion and the explicit relationship between the predictor and criterion scores.

Differences between comparison of predictor-referenced bands and criterion-referenced bands:

  • Use of validity evidence;
  • Bandwidth;
  • Inclusion of criterion information;
  • Use of reliability information.

The interaction of examiner with examinee should be professional, courteous, caring and respectful. Attention to these aspects of test use and interpretation is no less important than more technical concerns. To understand the fairness and impact of the selections systems in place, it is necessary not only to conduct technical analyses on the data but also to take into account the perceptions of people who are subjected to the system.
From the perspective of applicants and test takers, there are 2 dimensions of fairness:
1). Distributive (perceptions of fairness of the outcomes);
2). Procedural (perceptions of fairness of the procedures used to reach a hiring decision).

Applicants’ personality profiles are related to their perceptions of fairness:
Higher on neuroticism à more negative perceptions;
Higher on agreeableness à more positive perceptions.
 

Interpersonal Context
Employers do have control of the procedures implemented and can improve the perceived fairness of the testing process. Although tests may be technically fair and lack predictive bias, the process of implementing testing and making decisions can be such that applicants perceive unfairness.

Public Policy
Public interest in measurement include three functions:
1). Diagnosing needs;
2). Assessing qualifications to do a job;
3). Protecting against false credentials.
We have moved from naive acceptance of tests, through a period of intense hostility toward tests, to a more informed acceptance of tests.
Tests serve as instruments of public policy, and public policy must be evaluated periodically.

Good policy is not for or against tests; what really matters is how tests are used.

The use of cognitive abilities tests in combination with other predictors provides the highest level of accuracy in predicting future performance.

8. Job Analysis

Job analysis:

  • Job description →   specifies the work to be done
  • Job specification → indicates the personal characteristics necessary to do the work.
     

Before some decisions can be made, we must first define the jobs in question, specify what employee behaviors are necessary to perform them, and then develop hypotheses about the personal characteristics necessary to perform those work behaviors → this process = job analysis.
Figure 9-1, p. 195: uses/ purposes of job analysis information.

Duty = number of tasks
Position = one or more duties
Job = group of positions that are similar in their significant duties
Job family = group of two or more jobs

Occupation = a group of similar jobs found in different organizations at different times
Vocation = similar to occupation, but the term is used more by workers than by employers.

Different choices confront the job analyst:

  1. Activities or attributes?
  2. General or specific?
  3. Qualitative or quantitative?
  4. Taxonomy-based or blank slate?
  5. Observers or incumbents and supervisors?
  6. KSAs or KSAOs? (O = other personal characteristics)
  7. Single job or multiple-job comparison?
  8. Descriptive or prescriptive?

Job description (traditional, task-based):

  1. Job title
  2. Job activities and procedures
  3. Working conditions and physical environment
  4. Social environment
  5. Conditions of employment

Job specification:
Job specifications represent the KSAOs deemed necessary to perform a job and serve as guidelines for recruitment, selection, and placement. They should indicate minimally acceptable standards for selection, because individuals may be unnecessarily restricted from certain jobs when job specifications are inflexible, set too high, or invalid.

Interrater reliability: the degree to which different raters agree on the components of a target work role or job, or the extent to which their ratings covary.
Intrarater reliability: a measure of stability (the same rater rates the same job at different times).

Job descriptions are valid to the extent that they accurately represent job content, environment, and conditions of employment.
Job specifications are valid to the extent that persons possessing the personal characteristics believed necessary for successful job performance do, in fact, perform the job more effectively than persons lacking those characteristics.

However, many job-analysis processes are based on human judgment.

Obtaining Job Information:

  • Direct Observation and Job Performance
    Observations should include a representative sample of job behaviors. Observation and job performance are inappropriate for jobs that require a great deal of mental activity and concentration; a partial solution is to record the work on digital video. Functional Job Analysis (FJA) often is used to record observed tasks.
     
  • Interview
    Most commonly used. “conversation with a purpose”. The analyst should follow a structured interview form that covers systematically the material to be gathered during the interview. Analyst should provide a comfortable atmosphere where the worker/team feels free to discuss job duties and responsibilities. Wise to interview several incumbents and supervisors.
     
  • SME Panels
    Panels of 6 to 10 SMEs are often convened for different purposes in job analysis:
    1. to develop information on tasks or KSAOs to be used in constructing job-analysis questionnaires;
    2. in test development, to establish linkages between tasks and KSAOs, and between KSAOs and test items.
    Representative panels of SME’s provide results very similar to those obtained from broad surveys of respondents in the field.
     
  • Questionnaires
    Standardized. Respondents check items that apply to a job or rate items in terms of their relevance to the job. Cheap and quick to administer. Problems: time-consuming and expensive to develop, and misunderstandings are likely to go uncorrected.
    Task inventories and checklists → questionnaires that are used to collect information about a particular job or occupation. An overall perspective of the total job is extremely difficult to obtain with checklist information alone. One purpose is to assess the relative importance of each task.
     
  • The Position Analysis Questionnaires (PAQ)
    Instrument that is based on statistical analyses of primarily worker-oriented job elements and lends itself to quantitative analysis. 194 items/job elements: information input, mental processes, work output, relationships with other persons, job context. Average item reliability of the PAQ is .80. PAQ ratings from experts and job-naïve raters are not equivalent. Problems:
    1. Since no specific work activities are described, behavioral similarities in jobs may mask genuine task differences between them.
    2. Readability → the Job Element Inventory (JEI) was developed with a lower reading level.
     
  • Fleishman Job Analysis Survey (F-JAS)
    Its objective is to describe jobs in terms of the abilities (cognitive, psychomotor, physical, sensory/ perceptual, social/ interpersonal) required to perform them. Rating scales define each ability. (Figure 9-8, p. 209).
     
  • Critical Incidents
    The collection of a series of anecdotes of job behavior that describes especially good or especially poor job performance (cases, STAR):
    1. what led up to the incident and the context in which it occurred;
    2. exactly what the individual did that was so (in)effective;
    3. the perceived consequences of this behavior;
    4. whether or not such consequences were actually within the control of the employee.
     
  • Job Analysis Wizard (JAW)
    Based on the Internet. Availability of sophisticated information search-and-retrieval methods.

Many personal characteristics, like conscientiousness, can be used as valid predictors for many different types of occupations.

Personality-based job analysis (PBJA) may be particularly useful for cross-functional and difficult-to-define jobs that cannot be described in terms of simple tasks or discrete KSA’s.

Personality-Related Position Requirements Form (PPRF): a worker-oriented job-analysis method that assesses the extent to which each of the Big Five personality traits is needed for a particular job. (Neuroticism, Extraversion, Openness to experience, Agreeableness, Conscientiousness).

There are times when organizations want information concerning specific skill and ability requirements for jobs or positions that do not yet exist (new technology, etc.). The likelihood of even more change in the future makes strategic job analyses ever more important. Competency models are future oriented, but standard job-analysis methods can also be adapted for this purpose. While predictions can represent useful forecasting information for decision makers, their validity rests on two assumptions:
1). The covariance relationships among tasks and skills-abilities remain stable over time;
2). The tasks and skills-abilities included in the database include the same kinds of skills and abilities to be forecasted.

Competency models: to identify variables related to overall organizational fit and to identify personality characteristics consistent with the organization’s vision. A form of job analysis that focuses on broader characteristics of individuals and on using these characteristics to inform HR practices.

One can look for differences among jobs: an analysis-of-variance or multivariate-analysis-of-variance approach.
Alternatively, one can look for similarities among jobs: cluster analysis or Q-type factor analysis.
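A toy sketch of the similarity approach (job names and rating profiles are hypothetical): represent each job as a profile of ratings and find the jobs whose profiles are closest, which is the core idea behind cluster analysis of job-analysis data:

```python
# Hypothetical illustration: pairwise distances between job rating
# profiles; the closest pair would be merged first in cluster analysis.
import math

profiles = {  # job -> ratings on a few common job-analysis dimensions
    "clerk":     [2, 1, 4, 2],
    "secretary": [2, 2, 4, 3],
    "machinist": [5, 4, 1, 1],
}

pairs = sorted(
    (math.dist(profiles[a], profiles[b]), a, b)  # Euclidean distance
    for a in profiles for b in profiles if a < b
)
closest = pairs[0]  # smallest distance = most similar pair of jobs
print(f"most similar jobs: {closest[1]} / {closest[2]} (d = {closest[0]:.2f})")
```

Jobs with small profile distances would fall into the same cluster (family) of jobs.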

In the information-driven organization of today, many firms are using enterprisewide resource planning (ERP) systems → these require underlying definitions and architectures of work and work requirements in order to build platforms of information that can be used to support a wide range of HR applications. Competency models might provide such information, but rigorous job-analysis techniques should be used to define core competencies.

Dictionary of Occupational Titles (DOT): created to help deal with the economic crisis of the Great Depression by allowing the new public employment system to link skill supply and skill demand. It does not indicate directly what personal characteristics workers must have to perform the job or the context in which the job is performed →
Occupational Information Network (O*Net): incorporates information about jobs and work obtained over the 60 years since the DOT was developed. A national occupational information system that provides comprehensive descriptions of the attributes of workers and jobs. 4 principles:
1. “Multiple windows” into the world of work;
2. A common language of work and workers descriptors;
3. Description of occupations based on taxonomy from broad to specific (hierarchy);
4. A comprehensive content model that integrates the previous three principles → Figure 9-10, p. 215!

9. Strategic HR Planning

Workforce planning (WP) systems (linked to strategic business planning) as an integrated system:

  1. Talent inventories → assessing current resources
  2. Forecasts of supply and demand → planners to predict employment requirements
  3. Action plans → in recruitment, selection, placement, training, etc.
  4. Control & Evaluation → feedback to the WP system.

Definition = an effort to anticipate future business and environmental demands on an organization and to meet the HR requirements dictated by these conditions. Strategic and tactical business plans are the basis for HR strategy. (figure 10-1, p. 219).

To develop strategies → organizations need to plan → reasons:
1. Planning leads to success;
2. It gives a sense of being in control;
3. Requires managers to define the organization’s objectives;
4. Without objectives effective control is impossible.
Planning: strategic, operational, or tactical levels. Each step in the process may involve considerable data collection, analysis, and iterative management reviews.

A). Strategic planning = process of setting organizational objectives and deciding on comprehensive action programs to achieve these objectives:
- SWOT →         Strengths, Weaknesses, Opportunities, Threats
- PEST →           Political, Economic, Social, Technological
- STEER →        Sociocultural, Technological, Economic, Ecological, Regulatory
1. Defining company philosophy
2. Formulating company and divisional statements of identity, purpose, and objectives;
3. Evaluating the company’s ‘SWOT’ (see above);
4. Determining the organization design;
5. Developing appropriate strategies for achieving objectives;
6. Devising programs to implement the strategies.

B). Value-based view = the alternative approach to developing strategy → strategy comes last, after the values and practices are aligned and after the company develops capabilities that set it apart.

Dilemma for strategic planning: “how to plan for the future when the future changes so quickly?” → to make the planning process more democratic, it needs to include a wide range of people. Listen carefully to customers, and then offer the solutions those customers want.

A business strategy and an HR strategy are interdependent; for the relationship of HR strategy to the broader business strategy, see figure 10-3, p. 222:

1. How do we compete? (innovation, quality, cost, leadership or speed?)
2. What must we execute well?
3. How do we delight our internal and external customers?
4. What competencies, incentives, and work practices support high performance?
(Pyramid figure: Plan = top-down; Execute = bottom-up)

High-performance work practices include:
→ pushing responsibility down to employees operating in flatter organizations;
→ increasing the emphasis on line managers as HR managers;
→ instilling learning as a priority in all organizational systems;
→ decentralizing decision-making to autonomous units and employees;
→ linking performance measures for employees to financial performance indicators.

Workforce plans must flow from, and be consistent with, the overall business and HR strategies!
A forecast of net workforce needs is the result of an analysis of the future availability (supply) of labor and future labor requirements (demand), tempered by an analysis of external conditions (technologies, markets, competition).
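With hypothetical numbers, the net-needs logic reads as forecast demand minus forecast internal supply per job category, tempered in practice by the external conditions noted above:

```python
# Hypothetical illustration: net workforce needs = demand - supply.
demand = {"engineers": 120, "technicians": 80, "sales": 60}  # forecast demand
supply = {"engineers": 95,  "technicians": 85, "sales": 50}  # forecast supply

net_needs = {job: demand[job] - supply[job] for job in demand}
for job, need in net_needs.items():
    # Positive net need -> recruit externally; negative -> surplus to redeploy.
    action = "recruit" if need > 0 else "redeploy / freeze hiring"
    print(f"{job}: net need {need:+d} -> {action}")
```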

  1. Talent inventory
    = organized database of the existing skills, abilities, career interests, and experience of the current workforce. Part of a broader WP effort that helps managers decide whether to “buy”, “make”, or “rent” employees. Some common uses of a talent inventory → identification of candidates for promotion, succession planning, transfer, training, career planning, organizational analysis, etc.
     
  2. Forecasting Workforce Supply and Demand
    Talent inventories and workforce forecasts must complement each other in estimating future labor requirements: 1. anticipating the supply of HR, and 2. anticipating organizational demand for various types of employees. 2 paradoxes:
    -              The techniques are basically simple and easy to describe, but applying them successfully may be complex and difficult.
    -              After the forecast has been made, it may prove to be most useful when it proves to be least accurate as a vision of the future.
    External and Internal Workforce Supply. Most common type of internal forecast =
    Leadership-Succession Planning → to identify top talent (HiPo’s), both within functions and corporate-wide; to develop pools of talent for critical positions; and to identify development plans for key leaders.
    CEO Succession → Ideally, careful succession planning grooms people internally. On the other hand, ‘outsiders’ bring changes in a wide variety of organizational dimensions. To avoid a future crisis in leadership succession:
    1. Ensure that the sitting CEO understands the importance of this task;
    2. Focus on an organization’s future needs;
    3. Encourage differences of opinion with respect to management decisions;
    4. Provide broad exposure to a variety of jobs, changing responsibilities every 3–5 years;
    5. Provide access to the board.
    Sometimes small, family-owned firms look to outsiders, especially for new ideas and technology for the firm. Every company: once the firm has a new leader, the old leader should “get off the bus”.
     
  3. Workforce Demand
    Workforce-demand forecasts are subjective, because of multiple uncertainties regarding trends (changes in technology, etc.). Factors such as the duration of the planning period, the quality of the data, and the degree of integration of WP with strategic business planning all affect accuracy in forecasting the demand for labor → which also depends on the degree of flexibility in staffing the workforce.
    Integrated workforce supply & demand forecast (“bottom line”) → figure 10-6, p. 231.
    Action programs help organizations adapt to changes in their environments.
    It is often more cost-effective to “buy” rather than to “make”.
     

Control and Evaluate
Goals and objectives are fundamental in measuring performance. But, certain objectives may be hard to quantify à specify sub-objectives.
Numerous monitoring procedures are commonly in use: examination of the costs of current practices, employee and management perception of results, measurement and analysis of costs and variations in costs under alternative decisions.
Evaluation is likely to be more qualitative than quantitative, because supply and demand forecasts are based more on subjective opinions than on hard data. An obvious advantage of quantitative information is that it highlights potential problem areas and can provide the basis for constructive discussion of the issues.
Responsibility for WP: every line manager in the organization; he/she is responsible for integrating HR management functions: planning, supervision, performance appraisal, job assignment. The role of the HR manager is to help the line manager.
We plan in order to reduce uncertainty of the future! Ultimate success in WP rests on the quality of the action programs established to achieve HR objectives and on the organization’s ability to implement these programs.

10.  Recruitment

Advances in technology (LinkedIn) and the growing intensity of competition in domestic and international markets have made recruitment a business.

Recruitment process (figure 11-1, p. 236):

  • Characteristics of the firm (generating pool of candidates);
  • Characteristics of the vacancy itself (keep status of candidates);
  • Characteristics of the labor market (“getting to yes”/post-offer closure).

Recruitment planning begins with a clear specification of HR needs and the time frame.

  • Whom to recruit? 
  • Where to recruit?

The effective use of “in-house” talent should come first.

Recruitment planning → 3 parameters must be estimated: time, money, and necessary staff → ‘the number of leads needed to generate a given number of hires in a given time’.

Yield ratio = ratios of leads to invites, invites to interviews (or other selection methods), interviews to offers, and offers to hires obtained over some specified time period (6 months/1 year). Figure 11-2, p. 238.

Time-lapse data = the average intervals between events (‘interview to offer’).

Figure 11-3, p. 239.
The time period also depends on labor-market conditions. If the supply of suitable workers in a particular labor market is high relative to available jobs → the price of labor will be cheap.
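Using hypothetical yield ratios and time-lapse figures, the pipeline can be worked backward from a hiring target to the number of leads required, and the stage intervals summed for the total lead-to-hire interval:

```python
# Hypothetical illustration of yield ratios and time-lapse data.
yield_ratios = {            # stage -> fraction surviving to the next stage
    "lead -> invite":      0.50,
    "invite -> interview": 0.60,
    "interview -> offer":  0.40,
    "offer -> hire":       0.75,
}
avg_days = {                # time-lapse data: average interval per stage
    "lead -> invite": 7, "invite -> interview": 10,
    "interview -> offer": 5, "offer -> hire": 6,
}

target_hires = 9
candidates = float(target_hires)
for stage in reversed(list(yield_ratios)):  # work backward from hires to leads
    candidates /= yield_ratios[stage]

print(f"leads needed for {target_hires} hires: {round(candidates)}")
print(f"average lead-to-hire time: {sum(avg_days.values())} days")
```

Here 9 hires at a cumulative yield of 0.5 × 0.6 × 0.4 × 0.75 = 0.09 imply roughly 100 leads.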

- Analyze the external labor market by source;
- Evaluate the entire organizational environment;
- Capitalize on any special factors that are likely to attract candidates (‘image’);
 → 3 reasons why a positive image might influence candidates:
1. People seek to associate themselves with organizations that enhance their self-esteem;
2. A positive image may signal that the organization provides other desirable attributes (high pay, strong career opportunities);
3. May make applicants more open to whatever information an organization provides.

Gross Cost per Hire = TCOR (Total cost of recruiting) / NH (number of individuals hired).

Costs:

  • Staff costs (salaries, benefits)
  • Operational costs (telephone, material)
  • Overhead (office furniture, equipment)
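The gross cost-per-hire formula above is a simple ratio; with hypothetical cost figures for the three categories:

```python
# Hypothetical illustration: gross cost per hire = TCOR / NH.
staff_costs = 48_000       # recruiter salaries and benefits
operational_costs = 6_500  # telephone, materials, travel
overhead = 3_500           # office furniture, equipment
hires = 29                 # NH: number of individuals hired in the period

tcor = staff_costs + operational_costs + overhead  # total cost of recruiting
cost_per_hire = tcor / hires
print(f"gross cost per hire: ${cost_per_hire:,.0f}")
```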

Most expensive sources of recruiting: private employment agencies, followed by field trips. Less expensive: advertising responses, Internet responses, write-ins, promotions.

Employee referrals generated more applicants, more hires and higher yield ratio than geographically focused advertisements. Recruiters must be familiar with the job description. Characteristics of organizations and jobs have a greater influence on the likelihood of job acceptance by candidates than do characteristics of the recruiter. Nevertheless, at the first stage of recruitment, characteristics of recruiters do affect the perceptions of candidates.

Different types of recruiters are important, because:
1). They vary in the amount of job-related information;
2). They differ in terms of their credibility in the eyes of recruits;
3). They signal different things to job candidates.

Step 1 → examine internal sources/talent inventories, especially in large organizations.
Step 2 → external sources: advertising, employment agencies, educational institutions, professional organizations, military, labor unions, career fairs, outplacement firms, direct application, intracompany transfers and company retirees, employee referrals.

- Informal contacts are used widely and effectively at all occupational levels;
- Use of the public employment service decreases as required job skills increase;
- Job advertisements → those that include more information are viewed as more attractive.

- Employee referrals are popular → 43% of companies use them.

- Individuals referred by higher-performing employees should be viewed by HR as being of higher quality, and should score higher, than referrals from lower performers; the latter, in turn, should be viewed more favorably and score higher than Internet recruits.
- Most applicants use more than one recruitment source to learn about jobs.
- 94% of top 500 U.S. companies recruit via the Internet.

Any number of cost and quality analyses might be performed, but it is crucial to choose those that are strategically most relevant to a given organization. Choose measures of recruiting success that are most relevant to various stages in the recruitment process.

Job search from the applicant’s perspective → many job applicants:
1). Have an incomplete and/or inaccurate understanding of what a job opening involves;
2). Are not sure what they want from a position;
3). Do not have self-insight with regard to their knowledge, skills, and abilities;
4). Cannot accurately predict how they will react to the demands of a new position.
Job-choice is highly social. Networking is important (LinkedIn, Facebook, Twitter).
“My Job Search Agent” → sends the applicant an e-mail within minutes when a match is found.
Most applicants preferred decentralized organizations and performance-based pay to centralized organizations and seniority-based pay.

Realistic Job Preview (RJP) → job-acceptance rates tend to be lower, and job performance is unaffected, but job survival tends to be higher for those who receive an RJP prior to hire. RJP’s are likely to have the greatest impact when the applicant:
- can be selective about accepting a job offer;
- has unrealistic job expectations;
- would have difficulty coping with job demands without the RJP.
Realistic job information could initially be provided in a job advertisement and on the Internet.

Important to emphasize that all components of the recruitment-selection process are interrelated.

11. Selection I

Past behavior is the best predictor of future behavior.

4 kinds of information obtainable:

  • Employment and educational history
  • Evaluation of the applicant’s character, personality, interpersonal competence
  • Evaluation of the applicant’s job performance ability
  • Willingness to rehire
  • Recommendations and Reference Checks
    Average validity of recommendations = 0.14. Letters should include:
    1. Degree of writer familiarity with the candidate;
    2. Degree of writer familiarity with the job in question;
    3. Specific examples of performance;
    4. Individuals or groups to whom the candidate is compared.
    Average validity of reference checks = 0.26. Should include:
    1. Consistent;
    2. Relevant;
    3. Written;
    4. Based on public records, if possible.
    Can also be done via telephone → Structured Telephone Reference Check (STRC).
    Recommendations and reference checks can provide important information.
     
  • Personal History Data
    Job Application Blanks (JAB’s): questions must be job-related and must not invade privacy.
    Weighted Application Blanks (WAB’s): Certain aspects of an individual’s total background (years of education, previous experience, etc.) should be related to later job success.  WAB: identifying which of these aspects reliably distinguish groups of effective and ineffective employees. Simple and straightforward. Cross-validated.
    Biographical Information Blanks (BIB’s): Self-report instrument, multiple-choice.

    → Although individuals have the ability to fake, it does not mean that they do it:
    - information can be verified;
    - multiple-choice questions are less amenable to distortion;
    - distortion is less likely if applicants are warned of the presence of a lie scale;
    - asking job applicants to detail their answers also reduces distortion.

    Evidence indicates that the validity of personal-history data as a predictor of future work behavior is quite good.
    Biodata inventories are relatively free of adverse/negative impact compared to cognitive ability tests.
    Personal-history data are most useful when they are based on a rational approach (questions are based on a job analysis). It is probably the only legally defensible approach for the use of personal-history data in employment selection. However, the validity of biodata items can be affected by the life stage in which the item is anchored.
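As a toy illustration of the WAB idea (all endorsement rates are hypothetical), items can be weighted by how strongly they separate effective from ineffective employees, here simply by the difference in endorsement rates between the two groups; in practice such weights must then be cross-validated on a holdout sample:

```python
# Hypothetical illustration: WAB item weights from the difference in
# endorsement rates between effective and ineffective employee groups.
effective   = {"degree": 0.80, "prior_experience": 0.70, "local_resident": 0.55}
ineffective = {"degree": 0.60, "prior_experience": 0.30, "local_resident": 0.50}

weights = {item: round(effective[item] - ineffective[item], 2)
           for item in effective}
print(weights)  # larger weight = item better distinguishes the two groups
```

Here "prior_experience" separates the groups most sharply, so it would receive the largest weight when scoring application blanks.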
     

  • Honesty/ Integrity Tests
    1. Overt integrity tests → questions about theft and other forms of dishonesty and questions about admissions of theft and other illegal activities.
    2. Personality-oriented measures → predictors of a wide variety of counterproductive behaviors.
    Both (1 + 2) seem to have a common latent structure reflecting conscientiousness, agreeableness, emotional stability. It works, but 3 issues have to be resolved:
    1. Need for a greater understanding of the construct validity of integrity tests, because the integrity tests are not interchangeable.
    2. Women and old people (40+) score higher than men and young people (40-).
    3. Real threat of intentional distortion (ironic: being not honest on honesty test).
    → Alternative ways to assess integrity: conditional reasoning, in which testing focuses on how people solve what appear to be traditional inductive-reasoning problems.
  • Evaluation of Training and Experience
    Evaluating job experience is not as easy as one may think because experience includes both qualitative and quantitative components that interact over time. However, using experience as a predictor of future performance can pay off.
     
  • Computer-Based Screening (CBS)
    CBS can be used simply to convert a screening tool from paper to an electronic format = electronic page turner.
    Computer-adaptive testing (CAT) presents all applicants with a set of items of average difficulty and, if responses are correct, items with higher levels of difficulty. It uses item response theory (IRT) to estimate an applicant’s level.

    Advantages:
    1. Administration is easier;
    2. Applicants can access the test from different locations (increasing the applicant pool);
    3. Possibilities for applicants with disabilities;
    Disadvantages:
    1. Technology changes so fast that HR professionals simply cannot keep up;
    2. Costly;
    3. “Image problem” (low face-validity).
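The adaptive step in CAT can be sketched with toy logic (this is not a full IRT implementation; real CAT selects the next item by maximizing information about the estimated ability):

```python
# Toy illustration of the adaptive step in computer-adaptive testing:
# start at average difficulty, move up after a correct answer, down after
# an incorrect one.
def next_difficulty(current: float, correct: bool, step: float = 0.5) -> float:
    """Raise item difficulty after a correct response, lower it otherwise."""
    return current + step if correct else current - step

difficulty = 0.0  # average-difficulty item (logit-style scale)
responses = [True, True, False, True]  # applicant's correct/incorrect answers
for r in responses:
    difficulty = next_difficulty(difficulty, r)
print(f"final difficulty reached: {difficulty:+.1f}")
```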
     

  • Drug Screening
    67% of employers use drug screening. Critics think that screening violates an individual’s right to privacy and that tests are often inaccurate. However, employees in jobs where public safety is crucial should be screened for drug use.
    Is drug screening legal? If illegal drug use, on or off the job, may reduce job performance, the employer has adequate legal grounds for conducting drug tests.
     
  • Polygraph Tests
    Intended to detect deception and are based on the measurement of physiological processes (heart rate) and changes in those processes.
     
  • Employment Interviews
    Almost universal. 2 functions:
    1. Fill information gaps in other selection devices;
    2. To assess factors that can be measured only via face-to-face interaction.
    Distortion of interview information: applicants tend to upgrade rather than downgrade prior work experience. Computer-based interviews decrease social-desirability distortion compared to face-to-face interviews.
    Impression-management = applicants who are pleasant and compliment the interviewer are more likely to receive more positive evaluations.

    The best way to improve validity is to improve the structure of the interview.
     

Factors affecting the interview decision-making process in each of these areas:
1. Social/ Interpersonal factors (interview-applicant similarity and (non)verbal cues);
2. Cognitive factors (pre-interview-/ first impressions, prototypes, contrast effects);
3. Individual differences (inwardly/ outwardly, interviewer training experience/mood);
4. Structure (questioning consistency, evaluation standardization, question sophistication, rapport building);
5. Use of alternative media (videoconferencing/skype, telephone interviewing).

  • Virtual Reality Screening (via virtual-reality technology, VRT)
    Simulator. VRT has some challenges:
    1. Can lead to syndromes (eyestrain, headache);
    2. Cost and lack of commercial availability;
    3. Technical limitations.

12. Managerial Selection

Results from different studies suggest that different knowledge, skills and abilities are necessary for success at the various levels within management. The sample sizes required for rigorous research are virtually impossible to obtain at higher managerial levels. Applicant pre-selection poses problems with severe restriction of range.

Both objective and subjective indicators frequently are used to measure managerial effectiveness. ‘The effective manager’ is an optimizer who uses both internal and external resources  (human, material, financial) in order to continue over the long term. To be a successful optimizer, a manager needs to possess implicit traits (results orientation, strategic thinking, innovation, risk taking, integrity).
The process begins with a specification of the total domain of the manager’s job responsibilities.
“Career attainment (skill/ability)” = whether a manager had been selected for a top-management position or had continued in a middle-level management position.

Global or administrative criteria tell us where a manager is on the “success” continuum, but almost nothing about how he or she got there. Because behaviors relevant to managerial success change over time, as well as by purpose or function in relationship to the whole organization, the need is great to develop psychologically meaningful dimensions of managerial effectiveness in order to discover the linkage between managerial behavior patterns and managerial success.
While ambition and quality and quantity of education predict objective-career success, accomplishments and organization success  predict subjective-career success.

Managerial-selection decisions take place in the context of both organizational conditions (culture, technology, financial health) and environmental conditions (internal and external labor markets, competition, legal requirements). Knowledge and skills acquired over time in a single organization may be viewed as less relevant than diverse experience outside the organization.

There needs to be a fit among the kinds of attributes decision makers pay attention to in selection, the business strategy of the organization, and the environmental conditions.

Instruments of Prediction

  • Cognitive Ability Tests (do have correct/incorrect answers)
    Managerial success has been forecast most accurately by tests of general intellectual ability and general perceptual ability. Differences in intellectual competence are related to the degree of managerial success at high levels of management; differences between top and middle managers were due to fundamental differences in cognitive ability and personality rather than to the influence of on-the-job experience.
     
  • Objective Personality Inventories (do not have correct/incorrect answers)
    Big Five can serve as a meaningful taxonomy for classifying personality attributes.
    (Extroversion, Neuroticism, Agreeableness, Conscientiousness, Openness to experience). Conscientiousness (leads to a motivation to strive for accomplishments, which, in turn, leads to higher levels of performance) and extroversion (leads to a motivation for status striving, which, in turn, leads to higher levels of performance) seem to be the best two predictors of performance for managers. Central concept =
    Trait activation = personality traits are expressed in response to specific situational cues → job demands, distracters, constraints, releasers, and facilitators.

    Socio-analytic theory suggests two broad individual motive patterns that translate into behaviors:
    1). “Getting along” = behavior that gets the support of others, improves cooperation, and serves to build and keep relationships → expressive role, providing consideration, contextual performance;
    2). “Getting ahead” = behavior that produces results and advances an individual within the group and the group within its competition → instrumental role, initiating structure, task performance.
    Choosing work-related personality measures on the basis of thorough job and organizational analyses is a fundamental element in the selection process.
    Response distortion (faking) on self-report tests is almost impossible to avoid.
    Core self-evaluation = a broad, higher-order latent construct indicated by self-esteem, generalized self-efficacy, neuroticism and locus of control.
    Conditional-reasoning measurement = procedure is based on the premise that individuals with different standings on a specific personality trait are likely to develop different justification mechanisms to explain their behaviors.
     

  • Leadership-Ability Tests
    Scales designed to measure two major constructs underlying managerial behavior have been developed and used in many situations:
    - Providing consideration (a type of “getting along”) → managerial acts oriented toward developing mutual trust, which reflect respect for subordinates’ ideas and consideration of their feelings. Good two-way communication.
    - Initiating structure (a type of “getting ahead”) → the extent to which an individual is likely to define and structure his/her own role to focus on goal attainment.
    Our inability to predict the effects of hierarchical leader behaviors might be due to subordinate, task, or organizational characteristics that serve as “neutralizers of” or “substitutes for” hierarchical leader behaviors.
    Our ability to predict successful managerial behaviors will improve if we measure more specific predictors and specific criteria.
     
  • Projective Techniques
    Projection refers to the process by which individuals’ personality structure influences the ways in which they perceive, organize, and interpret their environment and experiences. To study it, one should present an individual with new and/or ambiguous stimuli and observe how he or she reacts to and structures the situation.
     
  • Motivation to Manage
    One projective instrument that has shown potential for forecasting managerial success is the Miner Sentence Completion Scale (MSCS), a measure of motivation to manage.
    Leadership Motive Pattern (LMP) and Job Choice Exercise (JCE).
    Motivation To Lead (MTL), 3 components:
    - affective-identity MTL (“I am the type of person who likes to be in charge of others”)
    - non-calculative MTL (“I agree to lead a group, would never expect any advantages”)
    - social-normative MTL (“I agree to lead whenever I am asked by other members”).

     
  • Personal History Data
    Biographical information has been used widely in managerial selection, because the best predictor of future behavior is past behavior. People who are already intelligent, mature, ambitious, energetic, and responsible are in excellent positions to profit from training opportunities and from challenging organizational environments.
     
  • Peer Assessment
    Raters are asked to predict how well a peer will do if placed in a leadership/managerial role:
    - Peer Nomination = select a certain number of group members as being highest (lowest) on a particular dimension of performance.
    - Peer Rating = rate every other group member on several performance dimensions using some type of graphic rating scale.
    - Peer Ranking = rank all others from best to worst.

All the methods showed adequate reliability, validity and freedom from bias.
Important issues in peer assessment: 1). Influence of friendship, 2). Need for cooperation in planning and design and 3). Required length of peer interaction.

Work Samples of Managerial Performance
To understand more fully individual behavior in organizations, work-sample measures must be related to observable job-behavior measures. Development of work samples is time-consuming and difficult, but the monetary and social payoffs can be substantial.
Two types: group exercises and individual exercises. Both can be specified further by: 1). Bandwidth, 2). Fidelity, 3). Task specificity, 4). Necessary experience, 5). Task types, 6). Mode of delivery and response. Most popular are:
 

  • Leadership Group Discussion (LGD)
    Simple technique. A group of participants holds a discussion about some topic for a period of time without an appointed leader. Seven characteristics are rated.
     
  • In-basket Test
    Individual work sample to simulate important aspects of the manager’s position. Although the situation is relatively unstructured for the candidate, each candidate faces exactly the same complex set of problem situations. It makes direct observation possible.
     
  • Business-Game
    A “live” case. Intelligence seems to have no effect on the success of players. A variation of the business game focuses on the effects of measuring “cognitive complexity” on managerial performance; “how” persons think and behave.
     
  • Situational Judgment Test (SJT)
    Not really a work sample, because SJTs consist of a series of job-related situations presented in written, verbal, or visual form. Job applicants are asked to choose one alternative among the several available (multiple choice). SJTs are inexpensive to develop; video-based SJTs also exist. While SJTs work, we do not fully understand why.

Assessment Center (AC)
The AC is a method, not a place. Brings together many of the instruments and techniques of managerial selection. AC’s are successful at predicting long-term career success.
Multiple assessment procedures were used first by German military psychologists during World War II. Later, results were so impressive, that operational use of the method has spread rapidly.
There is no one best way to structure a center. The structure fluctuates with the target group as well as with the objectives of the center. Reasons for developing AC’s:
1). Selection;
2). Promotion;
3). Development planning.
Sometimes assessment is combined with training (development centers).

In general, assessors hold positions about two organizational levels above that of the individuals being assessed. Assessors usually are trained in interviewing and feedback techniques, behavior observation, and evaluation of in-basket performance.
Frame-of-Reference (FOR) training can be successful in improving the accuracy of supervisors as they assess the performance of their subordinates in the context of a performance management system.
 

The performance-feedback process is crucial. Interrater reliabilities vary from .60 to .95.
Standardizing an AC program, so that each candidate receives relatively the same treatment, is essential so that differences in performance can be attributed to differences in candidate’s  abilities and skills. Include: exercise instruction, time limits, assigned roles, assessor/candidate acquaintance, assessor consensus discussion session, exercise presentation order. 

Applicants tend to view AC’s as more face valid than cognitive ability tests and tend to be more satisfied with the selection process, the job and the organization. Adverse impact is less of a problem in an AC.

Potential problems:
1). Requires more than the careful construction of exercises and identification of dimensions to be rated;
2). A subtle criterion contamination phenomenon may inflate assessment validities when global ratings or other summary measures of effectiveness are used as criteria;
3). Construct validity. Factors that play a role are: a). cross-situational inconsistency in participant performance, b). poor AC design, c). assessor unreliability.
Participants’ levels of true performance seem to be the key determinants of AC construct validity rather than biases on the part of assessors.

Combining predictors

Organizations use more than one instrument in their managerial and non-managerial selection processes. Highest corrected predictive validity coefficient is for cognitive ability combined with an integrity test (r = .65). (Table 13-3, p. 312). The overall validity coefficient between cognitive ability and job performance was .30, the same coefficient found for the relationship between structured interview and job performance.

13. Decision-making

 

In personnel selection, decisions are made about individuals. Since decision-makers cannot know in advance with absolute certainty the outcomes of any assignment, outcomes must be predicted on the basis of available information. Two-step procedure:
1). Measurement (collecting data using tests that are relevant to job performance);
2). Prediction (combining these data to enable the decision-maker to minimize predictive error in forecasting job performance).
Measurement and prediction are simply technical components of a system designed to make decisions about the assignment of individuals to jobs.
Linear models often are used to develop predictions.

Unit-weighting: transform all predictors into z-scores → sum them into one composite → compare candidates and make a decision.
Advantages of unit-weighting schemes:
1). Weights are not estimated from the data and don’t “consume” degrees of freedom;
2). They are “estimated” without error (no standard errors);
3). They cannot reverse the “true” relative weights of the variables.
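The unit-weighting scheme above can be sketched in a few lines. The predictor names and applicant scores are hypothetical, chosen only to illustrate the mechanics:

```python
from statistics import mean, stdev

# Hypothetical scores of three applicants on three predictors
# (e.g., cognitive test, interview rating, biodata) -- illustrative only.
applicants = {
    "A": [115, 4.2, 30],
    "B": [98, 3.1, 22],
    "C": [107, 4.8, 27],
}

# Transform each predictor into z-scores across applicants...
cols = list(zip(*applicants.values()))
means = [mean(c) for c in cols]
sds = [stdev(c) for c in cols]

# ...then sum the z-scores into a single unit-weighted composite.
composites = {
    name: sum((x - m) / s for x, m, s in zip(scores, means, sds))
    for name, scores in applicants.items()
}

# Rank applicants on the composite and select from the top down.
ranking = sorted(composites, key=composites.get, reverse=True)
print(ranking)  # → ['A', 'C', 'B']
```

Because the weights (all 1.0) are not estimated from the data, nothing here consumes degrees of freedom and there are no standard errors to worry about.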

Suppressor variables: can affect a given predictor-criterion relationship, even though such variables bear little or no direct relationship to the criterion itself. (figure 14-2, p. 319).
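A suppressor effect can be simulated directly. In the sketch below (synthetic data, made-up variable names), x2 is essentially uncorrelated with the criterion y, yet adding it to the regression raises R² because it removes criterion-irrelevant variance from x1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
true_ability = rng.normal(size=n)   # criterion-relevant component
noise = rng.normal(size=n)          # criterion-irrelevant component

x1 = true_ability + noise           # predictor contaminated with noise
x2 = noise                          # suppressor: related to x1's noise, not to y
y = true_ability                    # criterion

def r_squared(predictors, y):
    # R^2 from an ordinary least-squares fit with intercept
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_x1 = r_squared([x1], y)
r2_both = r_squared([x1, x2], y)
print(round(r2_x1, 2), round(r2_both, 2))  # x2 boosts R^2 despite ~zero r with y
```

This is the figure 14-2 situation in miniature: the suppressor earns its regression weight by “cleaning up” another predictor, not by predicting the criterion itself.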

Data combination strategies
Mechanical (statistical): individuals are assessed on some instruments, assigned scores based on that assessment, and the scores subsequently are correlated with a criterion measure (tests, biographical data forms, structured interviews).
Judgmental (clinical): a set of scores or impressions is combined subjectively in order to forecast criterion status (assessment interviews and observations of behavior).

Strategies of Data Collection and Data Combination

 

| Mode of data collection | Data combined judgmentally (clinical) | Data combined mechanically (statistical) |
| --- | --- | --- |
| Judgmental (clinical) | Pure clinical | Behavior (trait) rating |
| Mechanical (statistical) | Profile interpretation | Pure statistical (best?) |
| Both | Clinical composite (most common) | Mechanical composite (best?) |

Alternative prediction models

  • Multiple-regression approach: high performance on certain predictors can compensate for weaknesses on other predictors. Linear model (ŷ = a + b1x1 + … + bixi). If such compensation is not possible, use a cutoff approach.
  • Multiple-cutoff approach: when one predictor cannot compensate for another. The cutoff score depends on the needs of the organization. A cutoff on the multiple-regression composite represents the minimum score necessary to qualify for selection.
    Methods for determining cutoff score for single predictor:
    - Angoff method: based on expert judgments about minimally competent persons. Expert judges rate each item in terms of the probability that a minimally competent person would answer the item correctly. The probabilities are then averaged for each item across judges to yield item cutoff scores, which are summed to yield a test cutoff score.
    - Expectancy charts: are used to illustrate visually the impact of cutoff scores on future hiring decisions; depict the likelihood of successful criterion performance for any given level of predictor scores.
    score > 85 (best 20%) → 90% chance of being successful
    score > 50 (best 40%) → 70% chance of being successful
    score > 25 (best 60%) → 50% chance of being successful
  • Multiple-hurdle approach: combines multiple regression and multiple cutoff. A sequential decision strategy.
    Two-stage, sequential selection procedure (figure 14-5, p. 327):
    Assessment of ability occurs in stage 1, because this information is inexpensive to obtain. Applicants who reach the cutoff score on the ability measure progress to stage 2; the others are rejected. Final selection decisions are based on stage 1 and stage 2 information. Stage 2 information would normally be more expensive than ability measures to obtain, but the information is obtained from a smaller group, thereby reducing the cost relative to obtaining stage 2 information from all applicants.
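The Angoff cutoff and the two-stage (multiple-hurdle) strategy can be chained in a short sketch. All judges’ probabilities, applicant scores, and the stage-2 detail are invented for illustration:

```python
# Angoff method: each judge rates, per item, the probability that a minimally
# competent person answers correctly; item cutoffs are the judge averages,
# and the test cutoff is their sum. (Hypothetical ratings.)
judge_ratings = [
    [0.8, 0.6, 0.9, 0.5],   # judge 1, items 1-4
    [0.7, 0.5, 0.8, 0.6],   # judge 2, items 1-4
]
n_judges = len(judge_ratings)
item_cutoffs = [sum(col) / n_judges for col in zip(*judge_ratings)]
test_cutoff = sum(item_cutoffs)   # 0.75 + 0.55 + 0.85 + 0.55 = 2.7 items

# Multiple-hurdle: the cheap ability test is hurdle 1; only survivors get the
# expensive stage-2 assessment (e.g., a work sample).
applicants = {"A": 3.5, "B": 2.1, "C": 2.9}          # stage-1 test scores
stage2 = {name: s for name, s in applicants.items() if s >= test_cutoff}
print(sorted(stage2))   # only these applicants incur stage-2 costs
```

The cost logic of the text falls out directly: stage-2 expense is paid only for the smaller surviving group, not for every applicant.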

Above are the classical validity approaches to selection decisions.
Decision-theory approach = extension of/addition to the classical approach. Considers not only validity, but also the utility of the decision for the individual and the organization → the overall worth of the selection instrument. Utility depends not only on the validity of a selection measure but also on two other parameters:

  • Selection Ratio (SR): the ratio of the number of available job openings to the total number of available applicants (figure 14-6, p. 328). As the SR approaches 1.0 (all applicants must be selected), it becomes high or unfavorable from the organizations’ perspective. Conversely, as the SR approaches zero, it becomes low or favorable, and therefore, the organization can afford to be selective.
  • Base Rate (BR): # successful applicants/ # total applicants. The more skills needed for the job, the lower the base rate.
  • Both affect success ratio: successful hires/ total hires.

By focusing only on selection, the classical validity approach ignores the implications of selection decisions for the rest of the HR system. Any selection procedure must be evaluated in terms of its total benefits to the organization. The main advantage of the decision-theory approach is that it addresses the SR and BR parameters (if erroneous acceptances are a major concern, the predictor cutoff score may be raised).
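The three parameters can be illustrated with a few hypothetical staffing numbers:

```python
# Hypothetical staffing figures for one hiring cycle
openings, applicants = 10, 100
selection_ratio = openings / applicants          # 0.10 -> low / favorable

# Base rate: proportion of people who would succeed without the predictor
would_succeed = 40
base_rate = would_succeed / applicants           # 0.40

# Success ratio: among those actually hired, how many turn out successful
hires, successful_hires = 10, 7
success_ratio = successful_hires / hires         # 0.70

# A selection instrument earns its keep when it pushes the success ratio
# above the base rate, i.e., hiring with it beats selecting at random.
print(success_ratio > base_rate)  # → True
```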
 

  • Taylor-Russell model
    Strive for a high success ratio: the percentage of successful persons in the selected group should be as high as possible (uses validity, SR and BR). CA : FA.

  • Naylor-Shine model
    Not: ‘good or not good’, but: criterion score of selected group as high as possible.
    A linear relationship between validity and utility. The higher the validity, the greater the increase in average criterion score for the selected group over that observed for the total group. Less information is required in order to use this utility model.
  • Brogden-Cronbach-Gleser model
    Added value of selected group as high as possible (in money). Calculates the increase in financial pay-off resulting from the use of a selection procedure instead of selecting randomly. The only assumption required to use this model is that the relationship between test scores and job performance is linear; the higher the test score, the higher the job performance and vice versa.
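The Brogden-Cronbach-Gleser logic can be sketched with one common form of the utility formula, ΔU = N × T × rxy × SDy × z̄x − cost. Every number below is hypothetical:

```python
# Hypothetical inputs for a Brogden-Cronbach-Gleser utility estimate
n_selected = 20        # people hired with the new procedure
years = 2              # duration of the selection effect (T)
validity = 0.40        # predictor-criterion correlation (rxy)
sd_y = 15_000          # SD of job performance in dollars (SDy)
mean_z_selected = 1.0  # average predictor z-score of those hired
cost_per_applicant = 50
n_applicants = 200

gain = n_selected * years * validity * sd_y * mean_z_selected
cost = cost_per_applicant * n_applicants
delta_u = gain - cost   # dollar payoff over selecting randomly
print(delta_u)
```

Note the linearity the text describes: doubling the validity doubles the gain term, so utility rises in direct proportion to rxy.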

14. Training and Development

Training and development → changes in skill, knowledge, attitude, or social behaviour.
Training and development activities → planned programs of organizational improvement. Ultimate objective = to link training content to desired job behaviors.

Theoretical models:
- individual differences model
- principles of learning and transfer
- motivation theory
- goal setting
- behaviour modelling.

Change, growth, and development are bald facts of organizational life. In addition to nonstop change, modern organizations face other major challenges:
- Hypercompetition → both domestic and international
- A power shift to the customer → internet, access to databases
- Collaboration across organizational and geographic boundaries
- The need to maintain high levels of talent
- Changes in the workforce
- Changes in technology
- Teams.

Organizations that provide superior opportunities for learning and growth have a distinct advantage when competing for talented employees → dual responsibility:
- organization: responsible for providing an atmosphere that will support and encourage change
- individual: responsible for deriving maximum benefit from the learning opportunities provided.

Training and development are important managerial tools, but there are limits to what they can accomplish.

Properties and characteristics of training and development:
1. Learning experiences
2. Planned by the organization
3. Occur after the individual has joined the organization
4. Intended to further the organization’s goals.

Training = activities directed toward the acquisition of knowledge, skills, and attitudes for which there is an immediate or near-term application.
Development = the acquisition of attributes or competencies for which there may be no immediate use. 

Research: 70% of workforce learning is informal (unplanned).

Characteristics of Effective Training:

  • Top management is committed to training and development: training is part of the corporate culture.
  • Training is coupled to business strategy and objectives and is linked to bottom-line results.
  • Organizational environments are “feedback rich”.
  • There is commitment to invest the necessary resources, to provide sufficient time and money for training.

The model (p. 347) illustrates that characteristics of the individual, as well as of the work environment, are critical factors before training (by affecting motivation), during training (by affecting learning), and after training (by influencing transfer and job performance).

Program development comprises three major phases:
- Planning (or needs assessment)
- Implementation (or training and development)
- Evaluation → a twofold process:
   1. Establishing measures of training and job-performance success (criteria)
   2. Using experimental and quasi-experimental designs to determine what
   changes have occurred.

Possible training goals:

  • Training validity (did trainees learn anything during training?)
  • Transfer validity (to what extent did KSAO’s learned in training lead to improved performance on the job?)
  • Intraorganizational validity (is the performance of a new group of trainees in the same organization that developed the training program similar to the performance of the original training group?)
  • Interorganizational validity (can a training program that “works” in one organization be used successfully in another organization?)

6 steps in defining what is to be learned/ content of the training:

  1. Analyze the training and development subsystem and its interaction with other systems.
  2. Training needs.
  3. Training objectives.
  4. Decompose the learning task into its structural components.
  5. Optimal sequencing of the components.
  6. Alternative ways of learning.

Overall goal → to link training content to desired job behaviors!

1.

Training does not always lead to effective behaviors and enhanced organizational results → reason: lack of alignment between training and the organization’s strategy. For better alignment, do 3 things:

  • Identify what new capabilities will be needed, how they compare to current capabilities, and what steps are necessary to bridge the gap.
  • Leaders should periodically seek to identify key strategic capabilities that will be needed as the organization goes forward.
  • Training organizations should compare their current programs and services against the organization’s strategic needs.

3 other conditions must be present:
- the individual must be capable (can do)
- the individual must be motivated (will do)
- the organization must support individuals’ development efforts.

2.
Needs assessment:

  1. It is the essential starting point of training design.
  2. Many training programs do not use it (only 6%).
  3. Very little ongoing research.

Three-facet approach:

  • Organization analysis
    Identification of where training is needed in the organization. Purpose → to link strategic workforce-planning considerations with training needs-assessment results.
  • Operations analysis
    Identification of the content of the training.
    1. A systematic collection of information that describes how work is done.
    2. Determination of standards of performance for that work.
    3. How tasks are to be performed to meet the standards.
    4. The competencies necessary for effective task performance.
    For jobs that are complex/dynamic → Cognitive Task Analysis (CTA): focuses explicitly on identifying the mental aspects of performance, like decision-making or problem solving (not directly observable).
  • Person analysis
    Identification of who needs training and of what kind is needed. Important aspect → to determine whether training can fill the gap or whether other interventions should be used. Critical incidents.

All three must be conducted in a continuing, ongoing manner and at all three levels.
Demographic analysis = needs analysis done at policy level based on different populations (special needs of a particular group, e.g. workers over 40)

An important consideration in the needs-assessment process is the external environment.

Individual Development Plans (IDP) include:
- Statement of aims (desired changes)
- Definitions
- Ideas about priorities.

3.
Training objectives → the fundamental step in training design.
Objectives are stated either in behavioural or in operational terms.
Each objective should describe:
1. the desired behaviour
2. the conditions under which the behaviour should occur
3. the standards by which the trainee’s behaviour is to be judged.
Objectives also may be stated in operational or end-result terms.
4.
7 features of the learning environment that facilitate learning and transfer:

  1. Trainees understand the objectives of the training program.
  2. Training content is meaningful.
  3. Trainees are given cues that help them learn and recall training content.
  4. Trainees have opportunities to practice.
  5. Trainees receive feedback from trainers, observers, video, etc.
  6. Opportunity to observe and interact with other trainees.
  7. Training program is well coordinated and arranged.

5.
Sequence involves learning each subtask before undertaking the total task.

Team training → individual training cannot do the whole job; we need to address interactions among team members. But simply placing a task within a team context may not improve overall performance.
Systematic approach to team training that includes 4 steps:
1. Conduct a team-training needs analysis.
2. Develop training objectives that address both taskwork and teamwork skills.
3. Design exercises and training events based on the objectives from step 2.
4. Design measures of team effectiveness based on the objectives set at step 2, evaluate the effectiveness of the team training, and use this information to guide future team training.
→ Popular intervention: Crew Resource Management (CRM).

Theoretical models to guide training and development efforts:
Different people have their own favourite ways of learning. The growing popularity of various forms of technology-delivered instruction offers the opportunity to tailor learning environments to individuals.
“Can do” and “will do” factors should be effective predictors of training performance.
Trainability = combination of an individual’s ability and motivation levels.
We need to establish a behavioural baseline for each individual: results from each individual’s prior history. Unfortunately, a great deal of training research ignores the concept of the behavioural baseline and the measurement of initial state.
Adaptive training = methods are varied to suit the abilities and characteristics of the trainees.

Learning principles

  • Knowledge of Results: KR (Feedback)
    Provides information that enables the learner to correct mistakes. Intrinsic or extrinsic; qualitative or quantitative; informative or evaluative. The presence of KR improves performance, but managers often misperceive its effects.
  • Transfer of Training
    The application of behaviors learned in training to the job itself. The use of multiple rating sources with different perspectives is necessary to provide a more realistic assessment of transfer effects. Among other things: maximize the similarity between the training situation and the job situation. The attitudes of trainees may also affect transfer.
  • Self-regulation to keep Changes in Behavior
    Self-regulation training increases the self-efficacy of trainees; attendance of trained groups was higher than that of control groups.
  • Adaptive Guidance
    Designed to provide trainees with information about future directions they should take in sequencing study and practice in order to improve their performance. Substantial impacts on self-regulation process indicators and on the sequence of trainees’ study and practice.
  • Reinforcement
    In order for behaviour to be acquired, modified, and sustained, it must be rewarded (reinforced). Reward → “Good, repeat what you have done”;
    Punishment → “Stop, you made the wrong response”.
    In practice, it is difficult to apply this principle.
     
  • Practice
    Active use of training content. 3 aspects:

    - Active Practice: it is not enough to read only. Error-management training.
    - Overlearning: tasks become “second nature”.
    - Length of the Practice Session: learning is better when practice is distributed (with intervals between sessions) rather than massed.

  • Motivation
    Characteristics that predict motivation to learn: pretraining self-efficacy, valence of training, job involvement, organizational commitment, career exploration, conscientiousness, goal orientation, anxiety.
    Pygmalion effect = expectations have a way of becoming self-fulfilling prophecies, so that the higher the expectations are, the better the trainees perform (and vice versa).
  • Goal Setting
    One of the most effective ways to raise a trainee’s motivation is by setting goals. Goal setting is not risk free → excessive risk taking, feelings of failure.
  • Behavior Modeling
    Based on social-learning theory (we learn by observing others).
    1. Modeling
    2. Role-playing
    3. Social reinforcement
    4. Transfer of training  

15. Evaluating Training

Training and Development techniques: 3 categories:
- information-presentation techniques → lectures, CDs, Internet, etc.
- simulation methods → case, incident, role-playing, business game, assessment, etc.
- on-the-job training → orientation training, job rotation, performance management, etc.

Computer-Based Training (CBT) = the presentation of text, graphics, video, or animation via computer for the purpose of building job-relevant knowledge and skill → CDs, DVDs, Intranet, Internet, e-learning, virtual reality training, etc.

2 features: customization & learner control.

First, define what trainees are to learn, and only then choose a particular method.
Checklist for selecting a particular technique: A technique should..

  1. Motivate the trainee to improve his performance
  2. Clearly illustrate desired skills
  3. Provide for the learners’ active participation
  4. Provide an opportunity to practise
  5. Provide feedback on performance while the trainee learns
  6. Provide some means of reinforcing the trainee while learning
  7. Be structured from simple to complex tasks
  8. Be adaptable to specific problems
  9. Enable the trainee to transfer what is learned in training to other situations.

Evaluation of training implies a dichotomous outcome (the program has value or it does not).

Why measure training outcomes?

  • To make decisions about the future use of training programs or techniques
  • To make decisions about individual trainees (feedback)
  • To contribute to a scientific understanding of the training process
  • To further political or public relations purposes.

Essential elements for measuring training outcomes:

- Use of multiple criteria (according to time*, type**, level***)
- Relationship between internal and external criteria is especially important
- Enough experimental control
- Provision for saying something about the practical and theoretical significance of the results.
- A thorough logical analysis of the process and content of the training
- Some effort to deal with the “systems” aspects of training impact
→ Trainers must address these issues before they can conduct any truly meaningful evaluation of training’s impact.
* Criteria – Time →
“When, relative to the actual conduct of the training, should we obtain criterion data?” The timing of criterion measurement can make a great deal of difference in the interpretation of training’s effects.
It is not the absolute level of behavior that is crucial, but rather the change in behavior.

** Criteria – Types →
Internal = linked directly to performance in the training situation (written exam).
External = measures designed to assess actual changes in job behavior (rating by peer).

Qualitative = measures obtained by interviewing/ observing.
Quantitative = sales, turnover.
Both relevant. By ignoring qualitative (process) measures, we may miss the richness of detail concerning how events occurred.

*** Criteria – Levels →
Organizational levels from which we collect criterion data or the relative level of rigor we adopt in measuring training outcomes.

Kirkpatrick identified 4 levels of rigor in the evaluation of training and development programs: reaction, learning, behavior and results.
 
In assessing on-the-job behavioural changes, allow a reasonable period of time (at least 3 months) after the completion of training before taking measures → especially for development programs to improve decision-making, etc.

ROI = ratio of program profits to invested capital (returns : investment).

ROI has several advantages and disadvantages. An ROI analysis includes:

  • The inflow of returns produced by an investment.
  • The offsetting outflows of resources required to make the investment.
  • How the inflows and outflows occur in each future time period.
  • How much what occurs in future time periods should be “discounted” to reflect greater risk and price inflation.
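The discounting idea in the last point can be sketched as follows. The cash flows, three-year horizon, and 10% discount rate are all hypothetical:

```python
# Hypothetical cash flows from a training investment, discounted to present value
investment = 100_000
annual_returns = [40_000, 40_000, 40_000]   # inflows in years 1..3
discount_rate = 0.10                        # reflects risk and price inflation

# Present value of each future inflow: r / (1 + rate)^t
npv_returns = sum(r / (1 + discount_rate) ** t
                  for t, r in enumerate(annual_returns, start=1))
roi = npv_returns / investment
print(round(roi, 2))
```

Note that although the raw inflows (120,000) exceed the investment, discounting pulls the ratio just below 1.0, which is exactly why the timing of inflows and outflows matters.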

ΔU = T × N × dt × SDy – N × C

ΔU = dollar value of the training program
T = number of years’ duration of the training effect on performance
N = number of persons trained
dt = effect size: true difference in job performance between the average trained worker and the average untrained worker, in standard z-score units
SDy = variability (SD) of job performance in dollars of the untrained group
C = per-person cost of the training

dt = (X̄e – X̄c) / (SD × √ryy)

X̄e = average job performance of the experimental (trained) group
X̄c = average job performance of the control (untrained) group
SD = SD of the job-performance measure in the untrained group
ryy = reliability of the job-performance measure
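The two formulas can be chained in a short calculation. All the numbers below are invented for illustration:

```python
import math

# Effect size dt, corrected for criterion unreliability:
# dt = (mean_trained - mean_untrained) / (SD_untrained * sqrt(r_yy))
mean_trained, mean_untrained = 85.0, 80.0   # hypothetical performance scores
sd_untrained = 10.0
r_yy = 0.81                                  # reliability of the criterion
dt = (mean_trained - mean_untrained) / (sd_untrained * math.sqrt(r_yy))

# Dollar utility of the training program: dU = T*N*dt*SDy - N*C
T = 2          # years the training effect lasts
N = 100        # persons trained
sd_y = 10_000  # SD of job performance in dollars (SDy)
C = 500        # per-person training cost
delta_u = T * N * dt * sd_y - N * C
print(round(dt, 3), round(delta_u))
```

Dividing by √ryy corrects the observed difference for measurement error in the criterion, so an unreliable performance measure (ryy < 1) yields a larger estimate of the true effect size.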

The real payoff from program-evaluation data comes when the data lead to organizational decisions that are strategically important → LAMP:

Logic → the “story” that connects numbers with effects and outcomes
Analytics → drawing the right conclusions from the data
Measures → the numbers that populate the formulas and research design
Process → effective measurement systems must fit within a change-management process that reflects principles of learning and knowledge transfer.

Researchers want to be able to say that training caused the changes. Unfortunately, most experimental designs and most training studies do not permit the causal arrow to point unequivocally toward training (x) as the explanation for observed results (y).
3 necessary conditions:
- y did not occur until after x
- x and y are actually shown to be related
- other explanations of the relationship between x and y can be eliminated as plausible rival hypotheses.

Classical Experimental Designs
 

Threats:

  1. History
  2. Maturation
  3. Testing
  4. Instrumentation
  5. Statistical Regression (changes in criterion scores resulting from selecting extreme group from a pretest).
  6. Differential Selection (using different procedures to select individuals for exp. and contr. groups).
  7. Attrition
  8. Interaction of differential selection and maturation
  9. Interaction of pretest with the experimental variable
  10. Interaction of differential selection with training
  11. Reactive effects of the research situation
  12. Multiple-treatment interference

Threats may affect:

  • Statistical-conclusion validity
  • Internal validity
  • Construct validity
  • External validity

Experimental Designs Assessing Training and Development Outcomes

 

| Design | Group | Pretest | Training | Posttest |
| --- | --- | --- | --- | --- |
| A. After-Only | E | – | X | X |
| | C | – | – | X |
| B. Before-After | E | X | X | X |
| C. Before-After (with control group) | E | X | X | X |
| | C | X | – | X |
| D. Solomon Four-Group Design (Before-After) | E | X | X | X |
| | C1 | X | – | X |
| | C2 | – | X | X |
| | C3 | – | – | X |

Limitations of the Solomon Four-Group Design → Despite its apparent advantages, there are problems:

  • It assumes that the simple passage of time and training experiences affect all posttest scores independently.
  • The design requires large numbers of persons.
  • The design is difficult to apply in practice.

→ Despite its limitations, when it is possible to apply the Solomon Four-Group Design realistically, to assign subjects randomly to the four groups, and to maintain proper controls, this design controls most of the sources of invalidity that it is possible to control in one experimental design.

Limitations of Experimental Designs:

  1. Exclusive emphasis on the design aspects of measuring training outcomes is rather narrow in scope.
  2. Effect sizes obtained from single-group pretest-posttest designs (B) are systematically higher than those obtained from control or comparison-group designs.
  3. Important to ensure that any attempt to measure training outcomes through the use of an experimental design has adequate statistical power.
  4. Experiments often fail to focus on the real goals of an organization.

Quasi-Experimental Designs

Despite calls for more rigor in training-evaluation designs, some less-complete (quasi-experimental) designs can provide useful data even though a true experiment is not possible. The central purpose of an experiment is to eliminate alternative hypotheses that also might explain results. If a quasi-experimental design can help eliminate some of these rival hypotheses, then it may be worth the effort. Investigators should design the best experiment possible given their circumstances, using the most rigorous design available. It is always wise to collect as much data as possible and to demonstrate the effect of training in several ways.

E = Time-Series Design
test                     test                  training                        test                  test
→ Requires that criterion data be collected at several points in time (before and after). The more observations (pre and post), the better. Weakness: failure to control for history.

F = Non-equivalent Control-Group Design
Group 1: test                  training                        test
Group 2: test                                                     test
→ Appears identical to design C, but in design F, individuals from a common population are not assigned randomly to the experimental and control groups. Commonly applied in settings where naturally occurring groups must be used (workgroup A and workgroup B). Sources of invalidity: the selection-maturation interaction, the testing-training interaction, and regression effects (when experimental and control groups are “matched”). Useful in applied settings.

G = Non-equivalent Dependent Variable Design (or internal-referencing strategy)
test (exp and contr variables)    training            test (exp and contr variables)
→ Based on a single treatment group; compares two sets of dependent variables: experimental variables (that training should affect) and control variables (that training should not affect).
+ Controls 2 threats to internal validity: testing and the Hawthorne effect.
- Does not control for history, maturation and regression effects.

H = Institutional-Cycle Design
Group 1: test                  training                        test                  no-training       test
Group 2: test                  no-training                  test                  training            test
→ A large number of persons will be trained, but not at the same time (different cohorts).
Controls history and test-retest effects, but not differences in selection.

As in selection, the problem of statistical versus practical significance is relevant to the assessment of training outcomes. From a practical perspective, researchers must show that the effects of training make a difference to organizational goals (external criteria are important). A related issue is the relation between practical and theoretical significance.
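
One way to make the statistical-versus-practical distinction concrete is to report a standardized effect size alongside the significance test. A minimal sketch with invented scores:

```python
# Hypothetical sketch: statistical significance answers "is the difference
# likely real?", while an effect size such as Cohen's d answers "is it big
# enough to matter?". All scores below are invented for illustration.
import statistics

trained = [70.4, 71.1, 69.8, 70.9, 70.6, 71.0, 70.2, 70.8]
untrained = [70.0, 70.5, 69.6, 70.3, 70.1, 70.4, 69.9, 70.2]

mean_diff = statistics.mean(trained) - statistics.mean(untrained)
# SD of all scores combined (a rough denominator for illustration; a proper
# pooled within-group SD is also commonly used).
sd_all = statistics.pstdev(trained + untrained)
cohens_d = mean_diff / sd_all

# Whether a gain of this size matters must be judged against external
# criteria such as training cost and organizational goals.
print(f"mean difference={mean_diff:.3f}, Cohen's d={cohens_d:.2f}")
```

With very large samples even a trivially small `mean_diff` can reach statistical significance, which is exactly why external, practical criteria matter.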

A logical analysis of the process and content of training programs can further enhance our understanding of why we obtained the results we did. Both quantitative and qualitative criteria are important for a thorough understanding of training's effects. Confront quantitative and qualitative issues and questions during program planning and evaluation.

16. Ethics and HRM

Organizational Responsibility (OR) = context-specific organizational actions and policies that take into account stakeholders' expectations and the triple bottom line of economic, social, and environmental performance. HRM researchers and practitioners can play a central role in OR research and practice, given HRM's dual mission of enhancing human well-being and maximizing organizational performance. Three areas of concern:
- employee privacy
- testing and evaluation
- organizational research.

Ethics: making decisions that represent what you stand for, not just what is legal.

Ethical behaviour = not governed by hard-and-fast rules; it adapts and changes in response to social norms and to the needs and interests of those served by a profession → "a continuous adjustment of interests".
Although we cannot prescribe the content of ethical behaviour, we can prescribe the processes.

Privacy = interest that employees have in controlling the use that is made of their personal information and in being able to engage in behaviour free from regulation or surveillance.
Confidentiality = treatment of information provided with the expectation that it will not be disclosed to others (established by law, institutional rules or professional relationships).
Ethics and morality = behaviors about which society holds certain values.
Ethical choice = considered choice among alternative courses of action where the interests of all parties have been clarified and the risks and gains have been evaluated openly and mutually.

Ethical decisions about behaviour = those that take account not only of one’s own interests but also equally of the interests of those affected by the decision.
Validity = the overall degree of justification for the interpretation and use of an assessment procedure.

OR is not only possible but also necessary for start-ups and small and medium-sized enterprises (SMEs) if they want to be successful in today's globalized and hypercompetitive economy.
"Organizational" is used instead of the narrower term "corporate";
"responsibility" instead of the narrower term "social responsibility".
OR is driven both internally and externally.

2 factors that now serve as important catalysts for OR:
- Changes in twenty-first-century organizations (because of the Internet, the organization looks like a web rather than a pyramid, connecting various stakeholders).
- Responsibility.

Overall positive relationship between social and environmental performance and financial performance.

Strategic Responsibility Management (SRM) = a process that allows organizations to approach responsibility actions in a systemic and strategic manner. 6 steps:

  1. Vision and values related to OR.
  2. Identification and prioritization of stakeholders’ expectations.
  3. Development of OR initiatives that are integrated with corporate strategy.
  4. Employee training related to OR
  5. Institutionalization of SRM by measuring and rewarding processes and results.
  6. Reporting on OR initiatives internally and externally.

There are some additional specific areas directly or indirectly related to OR to which HRM research has made important contributions → employee privacy, testing and evaluation, and organizational research.

Employee Privacy
Information Privacy
The U.S. Constitution defines legally acceptable behaviour in the public and private sectors of the economy. Such legal standards have affected HR research and practice in several ways. Employees are more aware of these issues and take legal action when they believe that their privacy rights have been violated by their employers. Three issues:
- the kind of information collected about individuals
- how that information is used
- the extent to which that information can be disclosed to others.

Because employees are likely to provide personal information electronically that they would not provide in person, organizations should take extra care in handling information gathered electronically.

Upholding employee privacy → the USA PATRIOT Act (2001), passed after the September 11 attacks, grants broad powers to the government to monitor individuals' use of the Internet and requires that employers report any imminent threats to the government. → It is important for employers to establish a privacy-protection policy.

Electronic Communications Privacy Act (1986): prohibits “outside” interception of e-mail by a third party (government, police or individual) without proper authorization.
47% of organizations monitor employees' e-mail and 63% monitor their use of the Web.

→ Safeguards to protect personal privacy are more important than ever!

  • Employers should periodically and systematically review their HR recordkeeping practices.
  • Employers should articulate, communicate, and implement fair information-practice policies.
     

Physical Privacy
The issue of employee searches in the workplace involves a careful balancing of the employer's right to manage its business and to implement reasonable work rules and standards against the privacy rights and interests of employees.
- Base the search on legitimate employer interests
- Avoid random searches
- Include all types of searches
- Provide adequate notice to employees before implementing the policy
Workplace investigations often involve the observation of an employee. Five means that an employer can use to do this legally: electronic surveillance (photo/video), stationary surveillance, moving surveillance, undercover operatives, and investigative interviews → each carries risks.

Testing and Evaluation
HR experts and psychologists have obligations to their profession, to job applicants and employees, and to their employers. Employers also have ethical obligations.
Psychologists have the most richly developed and documented ethical guidelines. These standards specify minimally acceptable practices in training assessors, informing participants about what to expect, and using assessment centre data.

* Obligations to the HR profession → APA. The Ethical Principles include a specific standard that prohibits any unfair discrimination against people filing or responding to a complaint.

* Obligations to those who are evaluated → In making career decisions about individuals, issues of accuracy and equality of opportunity are critical. Beyond these, ethical principles include:

  • Guarding against invasion of privacy
  • Guaranteeing confidentiality
  • Obtaining employees’ and applicants’ informed consent before evaluation
  • Respecting employees’ right to know
  • Imposing time limitations on data
  • Minimizing erroneous acceptance and rejection decisions
  • Treating employees with respect and consideration.

The most effective way to ensure ethical treatment is to standardize procedures: personal and considerate treatment, a clear explanation of the evaluation process, direct and honest answers to examinees' questions, and practice exercises to make sure examinees understand how to use the equipment.

* Obligations to employers →
- Setting accurate expectations for evaluation procedures
- Ensuring high-quality information for HR decisions
- Periodically reviewing that information
- Respecting the employer's ownership rights
- Balancing the vested interests of the employer with government regulations, commitment to the profession, and the rights of those evaluated for HR decisions.

Individual differences affect ethical behavior:
- Implementation of ethical codes is most successful among individuals who have achieved the conventional level of moral development
- Groups higher in the moral-development distribution show more transformational leadership behavior
- Cognitive ability can also affect ethical behavior
- Women are more likely than men to view specific hypothetical practices as unethical
- Personal values influence the extent to which an issue will be viewed as moral
- A manager's narcissism is negatively related to supervisor ratings of interpersonal performance.
→ To improve understanding of individuals' responses to potentially unethical situations at work, a person-situation perspective is needed.

Ethical Issues in Organizational Research. Issues:

  • At the Research-Planning Stage
    Researchers must evaluate their competence to conduct the research, their knowledge of ethical guidelines, the soundness of the research design, and the ethical acceptability of their study.
  • In Recruiting and Selecting Research Participants
    While offering inducements (e.g., money) increases participation rates, ethical issues arise when participants feel they cannot afford to pass up the incentive.
    Subtle pressure may also exist when a supervisor "strongly recommends" that all employees participate in the research.
     
  • In Conducting Research
    Although organizational research rarely involves physical or psychological harm, harm can occur → researchers must take precautions to prevent it and determine whether intentional harm is justified by the benefits of the research, or whether other research methods could obtain the information in harmless ways. Researchers must also protect:
    - Right to Informed Consent
    - Right to Privacy
    - Right to Confidentiality
    - Right to Protection from Deception
    - Right to Debriefing
    → these rights are in many cases not protected in practice.
    “Committed-to-participant” approach = researchers have an ethical obligation to ensure the well-being of multiple research participants in organizational settings.
  • In Reporting Research Results
    Researchers must be aware of ethical violations regarding each of the following issues:
    - Misrepresentation of results (be honest and accurate)
    - Censoring (provide detailed information)
    - (Self-)plagiarism
    - Authorship credit (the first author receives the most credit)
    - Data sharing (APA → data must be saved for five years).

Ethical dilemmas arise as a result of role ambiguity (uncertainty about what the occupant of a particular role is supposed to do), role conflict, and ambiguous (conflicting) norms (standards of behavior).
Table 18-1 (p. 433) → Strategies for Addressing Ethical Dilemmas in Organizational Research.

Guidelines:
- When reporting, distinguish clearly between what has been observed under certain circumscribed conditions and what is being advocated as a desired state of affairs.
- Avoid use of success stories that managers can expect to duplicate rather painlessly.
- Respect limitations of data obtained from a single study.
- Do not allow advocacy of certain techniques or organizational policies to masquerade as science.

17. People, Decisions, and the Systems Approach

Note: chapters 17 & 18 are taken from the fifth edition.

Some important theories discussed in this chapter are:

 1) Utility theory: the decision maker must take into account the costs and consequences of all the decisions he or she anticipates making, and weigh the various options. The decision maker defines a clear goal, maps out the expected consequences and possible outcomes of the decision, and bases the final judgment on this information.
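
This framework can be sketched as a small expected-value computation. The option names, costs, probabilities, and payoffs below are all invented for illustration:

```python
# Hypothetical sketch of utility-style decision making: each option's possible
# outcomes are weighted by their probabilities, and the option with the highest
# expected payoff net of cost is chosen. All names and numbers are invented.
options = {
    "promote_internally": {"cost": 2000, "outcomes": [(0.7, 10000), (0.3, 4000)]},
    "recruit_externally": {"cost": 5000, "outcomes": [(0.5, 15000), (0.5, 6000)]},
}

def expected_value(option):
    # Sum of probability-weighted payoffs, minus the option's cost.
    gross = sum(p * payoff for p, payoff in option["outcomes"])
    return gross - option["cost"]

best = max(options, key=lambda name: expected_value(options[name]))
print(best)  # → promote_internally (6200 vs. 5500 expected net payoff)
```

The point of the theory is not the arithmetic but the discipline: goals, consequences, and probabilities are made explicit before the final judgment.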

2) Open-system theory: within this theory, organizations are open systems oriented toward the external world: inputs are imported from the environment, transformed into outputs, and exported back to the environment. The environment then provides feedback about the overall process.

3) Closed-system approach: within this theory, organizations focus on what happens inside the organization, and less attention is paid to the external world.

4) Systems theory: according to this theory, everything is one large system with related subsystems. Managers must understand these systems and not rely on intuition. The advantages of this approach are:

  • Easier detection of changes in the environment
  • The ability to identify critical boundaries
  • The ability to develop strategic responses

5) Open-system, decision-theoretic model:

Assumptions:

  • Different jobs call for different recruitment, selection, and training strategies.
  • Phases in the process are interdependent.
  • A change in one part of the system has repercussions for all other parts of the system.

Some important concepts in this chapter are:

  • Job analysis: describing the work, the required skills, and the recommended training.
  • Job evaluation: determining the relevance and value of the job to the organization.
  • Workforce planning: mapping the requirements a job must meet and drawing up an overview of activities so that enough employees are available. This involves, among other things: an inventory of the knowledge, skills, abilities, and experience of current employees; forecasts of the internal and external labor supply; incorporating the information obtained into an action plan (e.g., career planning, training, transfers, promotions, or recruitment); and control and evaluation to obtain feedback on the workforce planning.
  • Recruitment: attracting good candidates.

The components of a recruitment process are:

  1. Initial screening
    Decision errors:
  • Erroneous acceptance: an individual who passed a previous stage but cannot handle a subsequent one.
  • Erroneous rejection: an individual who could not handle a previous stage but could handle a subsequent one.
  2. Selection
    Obtaining information by means of interviews and/or tests. One can then determine which job an individual is suited for, but usually an individual is recruited for a specific job.
  3. Training and development
    Competency: a cluster of related knowledge, abilities, skills, and behaviors that are important for performing the job.
  4. Performance management
    Only after some time can one judge whether the employee is performing well, by observing, evaluating, documenting, and giving feedback. The goal of performance management is to improve the performance of the employee or of a team. It is very important for the success of an organization.
  5. Organizational exit
    Problems when dismissing employees: shock, relief, hesitation, self-doubt, anger. Support in this process: communication, participation, control, planning, and support.

18. International Dimensions of Applied Psychology

Globalization

Refers to commerce without borders and the interdependence of business operations in different locations.

The 'global village' is getting smaller by the day, owing to:

  • Satellite dishes that allow us to receive channels from other countries, and websites such as amazon.com that are open 24/7 to everyone
  • Worldwide telecommunications expanded by computer technology and satellites
  • Large multinationals (Unilever, Gillette) losing their national identity
  • Increasing free trade between countries
  • Financial markets that are open 24/7 around the world
  • Foreign interests controlling more than 12% of U.S. manufacturing assets
  • Global standards and regulations emerging for trade, finance, products, and services

A country's most relevant competitive asset is the skills and cumulative learning of its workforce. The workforce is the only thing that makes a country unique.

Culture

The values and beliefs that people share with other members of a social group (country, region, organization).

Multicultural society: a society with many ethnic, socioeconomic, racial, and generational groups, each with a culture of its own.

Subculture: a smaller cultural group within a larger cultural group, with its own defined culture.

Organizational culture: the attitudes, values, behavioral norms, and expectations shared with other employees.

Vertical culture: hierarchy / horizontal culture: equality.

  • Individualistic culture: complex (many subgroups) and loose (few rules and norms about what correct behavior is in particular situations).
  • Collectivism: simple (agreement in beliefs and attitudes) and tight (many rules and norms).

Culture-specific attributes:

  1. Definition of the 'self': autonomy and independence (individualism) vs. interdependence with others (collectivism).
  2. Structure of goals: giving priority to one's own goals (individualism) vs. giving priority to group goals (collectivism).
  3. Emphasis on norms versus attitudes: attitudes, personal needs, perceived rights (individualism) versus norms and obligations (collectivism).
  4. Emphasis on relationships versus rationality: collectivism cares more about relationships, in which the needs of others are taken into account; individualism weighs the costs and benefits of a relationship.

Hofstede: cultural variation in values

Basic problems that every society has to deal with, but for which the solutions differ:

  1. Power distance
    The extent to which members of an organization accept inequality and whether they experience a large distance between those with power and those with less power.
    High PD: acceptance of inequality.
    Low PD: equality and opportunities for everyone.
  2. Uncertainty avoidance
    The extent to which a culture makes its members feel comfortable or uncomfortable in unstructured situations.
    High: low tolerance; as few unstructured situations as possible (strict rules and laws).
    Low: little concern about ambiguity and uncertainty (few rules); tolerance for differing opinions.
  3. Individualism
    The extent to which a person is integrated into a group; the extent to which individual goals or group goals are pursued.
    High: individualistic; individual rights.
    Low: collectivism; tight bonds and strong integration. Example: families that protect one another in exchange for loyalty.
  4. Masculinity
    Differences between sex roles within groups.
    Feminine: minimal differences in sex roles, low discrimination, social goals, helping others.
    Masculine: ego-oriented; work, career, and money are central.
  5. Long-term vs. short-term orientation
    The extent to which a society embraces long-term commitment to traditional values.

High: long-term commitment and respect for traditions; delayed gratification and long-term rewards from work; organizations take longer to develop.

Low: does not reinforce a long-term, traditional orientation.

Psychological measures are often developed in one country and then transported to another. Various guidelines have been drawn up for this. From a measurement perspective, the problem is that each culture has its own outlook on life in terms of norms, values, attitudes, etc.

First, translation equivalence is relevant when transporting a measure to another culture.

Blind back-translation: first the measure is translated into another language; a second person then translates it back into the original language. Finally, this version is compared with the original so that discrepancies can be resolved.

Second, conceptual equivalence is important. This means that the meaning is the same everywhere.

Finally, there is metric equivalence. This means that the statistical associations between dependent and independent variables remain relatively stable across cultures.
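
A minimal sketch of what a metric-equivalence check might look like: compare the predictor-criterion correlation in two cultural groups. All scores below are invented for illustration.

```python
# Hypothetical metric-equivalence check: the predictor-criterion correlation
# should stay reasonably stable when a measure is moved to another culture.
# All scores below are invented for illustration.
import statistics

def pearson_r(x, y):
    # Plain Pearson correlation coefficient.
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    denom = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return cov / denom

# Test scores (predictor) and job-performance ratings (criterion) per culture
culture_a = ([55, 60, 65, 70, 75], [3.1, 3.4, 3.6, 3.9, 4.2])
culture_b = ([50, 58, 66, 72, 80], [2.9, 3.3, 3.5, 4.0, 4.3])

r_a = pearson_r(*culture_a)
r_b = pearson_r(*culture_b)
# Similar, substantial correlations in both groups support transporting the measure.
print(f"r(A)={r_a:.2f}, r(B)={r_b:.2f}")
```

If the correlation collapsed in one group, that would argue against using the measure there without adaptation.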

Various terms:

  • Expatriate (expat): a term for anyone who works outside his or her own country, with a planned return to the home country or to a third country.
  • Home country: the country where the expat lives.
  • Host country: the country where the expat works.
  • Third-country national: an expat who has been transferred to yet another country while working abroad (example: a German employee working for an American firm in Spain).

Selecting people who can work abroad, based on:

  • GMA (general mental ability): the ability to learn. GMA is a good predictor of job performance.
  • Personality: the relationship between the FFM (Five-Factor Model) and job performance. Conscientiousness and emotional stability are valid predictors of performance.

Developing a worldwide measure: a difficulty in transporting measures is that, even though personality does not differ across cultures, the degree to which personality is expressed does. The GPI (Global Personality Inventory) was developed for this purpose: culture is observed both from the inside and from the outside.

Personality dimensions are related to expat success, which organizations define as completing the assignment plus a supervisor's rating of the performance delivered on the assignment. Three personality factors are related to completing an assignment: extraversion, emotional stability, and agreeableness.

Other related characteristics:

  1. Tenacity and resilience: keeps possibilities in mind and stays positive despite setbacks.
  2. Communication: communicates clearly, fluently, and to the point; speaks in a way that draws attention in group and one-on-one situations.
  3. Adaptability: adapts quickly to new situations and to particular ways of working.
  4. Organizational and commercial awareness: alert to changes; knowledgeable about financial and commercial matters.
  5. Teamwork: works well with others.
  6. Self-discipline: committed, consistent, punctual, and conscientious.
  7. Cross-cultural awareness: can see issues from another perspective.

Cross-cultural training: training that prepares people from one culture to interact with people from another culture.

  1. Awareness: being aware of one's own cultural values, frames of reference, and customs.
  2. Behavior: learning and practicing appropriate behavior.

A tool used in cross-cultural training is the cross-cultural assimilator:

a programmed learning experience designed to expose members of one culture to some of the basic concepts, attitudes, role perceptions, customs, and values of another culture. It is a collection of critical incidents that provide information about intercultural interactions and misunderstandings.

Building an assimilator:

  1. Interviews: interview people from the target group (e.g., Chinese) who have experienced cross-cultural difficulties with the target culture (e.g., the Netherlands).
  2. Critical incidents: create examples of situations that arose from cultural differences.
  3. Eliciting attributions: elicit attributions from the critical incidents.
  4. Meaning of attributions: select four possible responses to each critical incident from which trainees can choose.
  5. Validation

Performance management in the international context: refers to the evaluation and continuous improvement of individual or team performance. It includes goals, appraisals, and feedback.

Appraisal problems:

  • Differences in national rules and labor laws make it difficult to compare the performance of managers from different countries.
  • Country-specific factors influence performance.
  • Changing international conditions may require flexible goals.
  • Market development by subsidiaries abroad is generally slower and more difficult than at home; expats therefore need more time to achieve results.

Appraisals are based on objective data (revenues, market share, and return on investment), subjective data (ratings by oneself and by headquarters), and contextual performance (factors resulting from the situation in which the performance occurs).

In addition, several factors influence the actual job performance of expats:

  • Technical knowledge
  • Personal adjustment to the culture
  • Support from headquarters
  • Environmental factors of the host country, such as political and labor-force stability, cultural distance, economic development, physical demands, and type of organization.

Repatriation (returning to the home country)

All expats feel some degree of anxiety about personal finances, reacclimatizing to the lifestyle at home, readjusting to the social structure, and the future of their careers.

Solutions:

  1. Planning: before going abroad, ask what the strategic purpose is: development, coordination and control by headquarters, or transfer of information and technology.
  2. Career management: appoint a career sponsor who looks after the career interests of the person abroad and keeps track of developments within the organization.
  3. Compensation: missing out on a month's salary is a big problem; one solution for this is compensation.
Author: Psychology Supporter