A perfectly valid selection procedure can be invalidated through improper use. Validation concerns the interpretation of scores: a valid selection procedure produces scores that can be informative in both absolute and relative terms.

Consider a person who scores 90% on a written test. In an absolute sense, they answered about 9 out of every 10 questions correctly. But what if they scored in the lowest 10% of all test takers (i.e., about 90% of the applicants scored higher)? This paints a completely different picture: relative to the other applicants, they scored very low. Interpretation at this point can be difficult. Was it the test, was it the relative abilities of the test takers, or were other factors at play?

Scores on a selection procedure should be used in a fashion that the validation evidence supports. If classifying applicants into two groups, qualified and unqualified, is the end goal, the test should be used on a pass/fail basis (i.e., an absolute classification based on achieving a certain level on the selection procedure). If the objective is to make relative distinctions among substantially equally qualified applicants, then banding should be used. Ranking should be used if the goal is to make decisions on an applicant-by-applicant basis (making sure that the requirements for ranking are addressed). If an overall picture of each applicant's combined mix of KSAPCs is desired, then a weighted and combined selection process should be used. For each of these uses, different types of validation evidence should be gathered to justify the corresponding manner in which the scores will be interpreted.

Learn more about the BCG Institute for Workforce Development at www.BCGInstitute.org. Visit http://bcginstitute.org/?AIBookSeries to learn about the Adverse Impact and Test Validation webinar series based on Dr. Biddle's book.