Public Safety: A Perplexing Assessment Trend

With so many tragic situations hitting the airwaves each week, issues around public safety dominate many of our thoughts and discussions. The ideological divide in our society is palpable, with extremist positions being staked out on all sides of an already volatile public debate. Our work at Organization Development Consultants, Inc. (ODC) puts us solidly into the realm of public safety, and our involvement is only increasing!

You see, each week, our psychologists and staff put candidates for law enforcement and fire safety positions through very comprehensive psychological assessments. In short, our goal is to determine intellectual adequacy and to identify potential psychopathology in those whom we may all someday rely on to provide for our public safety. They could very well be among the first responders when future tragedies hit.

When you step back and look at our role, it’s really quite humbling, and few would doubt the critical nature of our work. We recognize our responsibilities in that regard, and quite honestly, we’ve got a great team of professionals who make this process really fun (at least for us, as evaluators!).

So, here’s a conundrum we’ve faced more and more in the past year or so: candidates are “tripping up” in our assessment process by triggering the validity scales on multiple personality instruments. This tendency has been trending upward over the past year, and it’s not entirely clear why.
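
To put a sharper point on “trending upward”: one simple check is a two-proportion z-test comparing this year’s rate of flagged (invalid) response sets against last year’s. The sketch below is a minimal illustration using made-up counts, not our actual case data.

```python
# Hypothetical sketch: is the rise in flagged (invalid) response sets
# statistically meaningful, or within year-to-year noise?
# The counts below are illustrative placeholders, not actual ODC data.
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z statistic for H0: the two underlying flag rates are equal."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (x2 / n2 - x1 / n1) / se

# Illustrative numbers: 18 of 240 evaluations flagged last year,
# 34 of 250 flagged this year.
z = two_proportion_z(18, 240, 34, 250)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real shift at the 5% level
```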

Together, we’ve talked through several potential hypotheses, including:

  • Effects of the Economy – Are candidates for these traditionally much-sought-after positions feeling increasing pressure “to perform,” and thereby inadvertently “faking good” out of a subconscious or conscious fear of failing? In other words, are economic and job-market realities creating an environment where candidates are at the “end of their rope” and more desperate than usual to attain these positions?
  • Has increased vigilance around the Americans with Disabilities Act (ADA), which only allows clinical psychological evaluations to take place after a job offer has been made (and upon which ultimate employment then depends), created its own “monster” by placing undue stress on candidates?
  • Are we now seeing a truly different caliber of candidate for police, sheriff, and fire department positions than in the past, and if so, what might be driving that?

Also, as we explore these possibilities and more, we must examine whether there are ways in which we can (or should) ethically mitigate this trend. Can (and should) we provide more pre-assessment coaching? Should the departments themselves? Should we recommend that departments conduct all the non-clinical portions of the assessment during the actual employment evaluation phase (prior to conditional employment offers being made), saving only the clinical evaluations for post-offer, pre-employment assessment?

We’re not sure where this situation will take us, but it’s both a professional and intellectual challenge, and we really enjoy our ongoing interactions and partnerships with so many great public servants. So, we’ll continue exploring these (and many other) evolving hurdles and questions. It’s what we do at ODC, and it’s what we love!

14 thoughts on “Public Safety: A Perplexing Assessment Trend”

  1. Trevor, thank you for sharing what you and your colleagues do at ODC, Inc. The “tripping up” of the validity scales on the various psychological tests you administer got my attention. Have you and your colleagues reviewed the psychometric background of the test(s) you use? Every psychological test undergoes its own “calibration” using psychometric methods. Are the reasons you shared not addressed by the studies performed on the tests?

    1. Great question, Loreto. In fact, the instruments we use, including the MMPI, PAI, 16PF, etc., have all been validated over decades and against the population in question. We’re convinced it’s not a validation issue, but rather a process or subject issue. Mind you, we have plenty of candidates who do just fine…the number of those who appear to attempt to “fake good” is simply on the upswing.

      1. I find it interesting, but until I see the data… What makes you firmly point to the “process” and the “subject” as the issue?

      2. Nonetheless, if you are certain that “item reliability” is not affecting the validity of the tests, are you referring to “test administration” as the process?

      3. Neither. Perhaps my use of “validity scale” was misinterpreted to mean the instruments themselves were in question. That is not the case. What the assessments indicate is whether the answers given on the instrument produced a valid response set. For example, the 16PF has three such validity indicators (Positive Impression Management, Infrequency, & Acquiescence). A score at the 95th percentile on any of these is generally considered an invalid response set. I would say that for many of the candidates we see, it is a PIM above the 95th percentile, indicating a “faking good” response set, that is one of the key issues. Of course, that assessment is a non-clinical assessment, but we’re observing the same trend on our clinical assessments with these same candidates. Plainly stated, there has been a stronger incidence of “faking good” by candidates than we’ve experienced in years past. Does that further explanation help any?
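
        To make that flagging rule concrete, here is a minimal sketch. The scale names follow the three 16PF validity indicators mentioned above, but the cutoff constant, input format, and function are purely illustrative; actual scoring uses the publisher’s norm tables.

```python
# Minimal sketch of the validity-flagging rule described above.
# The 95th-percentile cutoff follows the comment; everything else
# (field names, dict input) is a hypothetical simplification.
INVALID_PERCENTILE = 95

def validity_flags(percentiles: dict[str, float]) -> list[str]:
    """Return the validity indicators suggesting an invalid response set."""
    return [scale for scale, pct in percentiles.items()
            if pct >= INVALID_PERCENTILE]

# Example candidate: elevated Positive Impression Management ("faking good")
candidate = {"PIM": 97.0, "Infrequency": 40.0, "Acquiescence": 62.0}
print(validity_flags(candidate))  # -> ['PIM']
```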

      4. And by “process” I’m referring to the entire, comprehensive hiring and evaluation process (in which we only play the evaluative role). Imagine you’re a candidate who has made it through an incredibly difficult and competitive hiring process to the point of receiving a conditional offer of employment (contingent on passing a psychological evaluation). In addition to the normal competitive nature of getting hired into police officer positions, the economy has greatly impacted your employability in general. So, you’ve now “nearly arrived.” You’ve gotten the offer. The only thing standing in your way is the psychological evaluation. You REALLY want to put your best foot forward. So, you go into the psychological evaluation potentially inclined to answer in the way you think most appropriate for a potential police officer, but perhaps not in a way that is consistent with your own self. The assessment instruments pick up on this “faking good” inclination, and as a result the validity indicators within the instruments register a response set that exceeds the maximum allowable.

        Given the pressures candidates are under, the question is: are we seeing more candidates trying to “fake out” the instruments out of an extreme desire to be hired? That’s the question, really.

      5. Trevor, I believe you clarified it. I remember studying these psychological tests way back in 1992–1993, if I am not mistaken. Psychometrically, embedded in these tests are items designed to flag concerns for the psychologists or psychometricians scoring and interpreting them. The overall score (as you stated) is interpreted in relation to the “composite scores” (in this case, the three indicators you mentioned), or vice versa. When test scores are questionable upon review of the total score, the subscores, and the “faking” items embedded in the test’s design, we were taught to check the test’s standard error of measurement. As you mentioned in your first two postings, it was process and subject. When you attribute the issue to those two factors, you are touching on the quantitative measurement properties of the tests, particularly the reliability coefficient and the error of measurement. Very technical, but each test’s manual prescribes procedures for interpreting single and aggregate scores. Has any of you pulled the manual?
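
        For reference, the quantity this comment gestures at is the standard error of measurement from classical test theory: SEM = SD × sqrt(1 − reliability). Here is a short sketch with hypothetical numbers; actual standard deviations and reliability coefficients come from each instrument’s manual.

```python
# Standard error of measurement: SEM = SD * sqrt(1 - reliability).
# The SD and reliability values below are illustrative placeholders.
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement for a scale score."""
    return sd * math.sqrt(1 - reliability)

s = sem(10.0, 0.88)                    # e.g., a T-score scale (SD = 10)
lo, hi = 65 - 1.96 * s, 65 + 1.96 * s  # 95% band around an observed T of 65
print(f"SEM = {s:.2f}; 95% band: {lo:.1f} to {hi:.1f}")
```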

  2. In my training as a psychologist in the Philippines, that’s a red flag, and it technically invalidates the test score. An invalid test score means an unacceptable score for us; a re-take is the only action to take. However, that strict adherence was based on the ethics of psychological testing. ODC may have a contract that does not include that strict requirement or understanding with the client, which somewhat changes the strict adherence to the ethical standards and requirements for the use of psychological tests.

    1. We cross-validate results between the assessments, which is precisely why we administer a battery of instruments as a routine part of the evaluation. We don’t make any determinations or recommendations on the basis of only one assessment. In fact, I believe in the area of public safety psychological evaluations, this is one of the areas in which we have a distinctly more comprehensive method than many other psychology firms.
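
        Purely as a toy illustration (the two-of-three convergence rule and the data layout here are assumptions for the sketch, not our actual decision criteria), the converging-evidence idea might look like this:

```python
# Toy illustration of cross-validating flags across a battery of instruments.
# Instrument names come from the discussion above; the ">= 2" convergence
# rule and the input format are hypothetical simplifications.
BATTERY = ("MMPI", "PAI", "16PF")

def converging_concern(flags: dict[str, bool]) -> bool:
    """True when two or more instruments flag an invalid response set."""
    return sum(flags.get(inst, False) for inst in BATTERY) >= 2

candidate_flags = {"MMPI": True, "PAI": False, "16PF": True}
print(converging_concern(candidate_flags))  # -> True: the flags converge
```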

      1. When you say you administer a battery of tests and cross-validate results, are you referring to one individual at a time?

      2. I suppose that may moot my other posting asking a clarifying question on cross-validation.

        A battery of tests, when administered and used, always has a purpose. Nothing is taken singly for interpretation; a profile is created per individual.

        Regarding the “faking good” items or scale scores: they are built into the design of the test for a reason. Either the score validates and indicates a “go,” or it is invalidated and requires re-testing. However, supervising psychologists who use these batteries of tests also develop “utility” measures they can go by, due to experience. Have you been analyzing the “risk of errors” as aggregate data, historically? Have you performed a periodic, thorough analysis of the comprehensive battery, not only for its impact but also for its utility, if not its relevance? I remember that when we were undergraduate students studying all these psychological tests, we laughed because we all agreed in class that we should just memorize the “fake good” items so we could always make it through. When our professor heard that loud outcry, she immediately said (I paraphrase here) that at the point we hit a certain number of “fake good” items (as you call them), the test is completely invalidated and we would completely lose the chance to be considered, since that is an indication that is not at all favorable. And she emphasized how critical these psychological tests are for any application process that uses them. (Yes, I am referring to the MMPI, 16PF, and others I cannot remember at this moment. I had two semesters of psychometrics and two semesters of projective techniques in my BS Psychology, and two psychological testing courses in my MA I/O Psychology. Well, this is from memory; I would have to review my transcripts to be accurate!)

        Nonetheless, until I see the actual data, I am only speculating here. The data and the process speak loud and clear, like mathematics and science; the rest of the nuance is the human factor. Okay, let us not disregard the system! 🙂

  3. I think this answered my question about why they offer the jobs before the testing. But why does the ADA require that, if candidates still cannot be hired when they fail the testing? Is it actually preventing discrimination against mental disabilities in some way?

    The extensive comments were interesting, too!

    JoDee


    1. I think the ADA requirement removes mental disability as a preliminary screening reason, thereby protecting one’s right to privacy: anyone screened out earlier in the selection process is screened out for other than mental health reasons. Great questions!
