CSCS Study Guide Chapter 12: Principles of Test Selection and Administration

Chapter 12 of a free NSCA CSCS Exam Study Guide that I'm making to help myself and others become better personal fitness trainers. This chapter covers selecting tests for measuring performance.

Chapter 12 of the Essentials of Strength Training and Conditioning covers test selection and administration. When appropriate tests are selected, their results can guide decisions about the direction of the training program. A test must measure what you actually want to measure and be repeatable by others under similar conditions. Ultimately, well-chosen tests help athletes achieve their goals.

Other chapters can be found here:

Principles of Test Selection and Administration

With the right understanding and test selection, a strength and conditioning professional can use the results of tests and measurements to adapt training programs.

Reasons for Testing

  • Testing helps assess where an athlete is talented and can also reveal which areas need improvement. Baseline results tell the story of what is happening in training over time, and they can also be used for goal setting.
  • It is difficult to tell whether an athlete will succeed at the next level, say when projecting from high school to college. One way to get an idea is by looking at previous performance. Another is to compare the athlete's field test results to those of successful athletes at that level.

Testing Terminology

  • Test: an established procedure for assessing a particular ability.
  • Field Test: a test performed outside a laboratory setting. It usually doesn't require extensive training or expensive equipment.
  • Measurement: the process of collecting test data.
  • Evaluation: the process of analyzing test measurements to make decisions.
  • Pretest: a baseline test to see where the athlete is before the proposed training protocol begins.
  • Midtest: a test conducted once or several times during the program to see whether adjustments are needed.
  • Formative Evaluation: regular reevaluation based on midtests. It allows for monitoring and adjusting the program, which also keeps the program from getting stale.
  • Posttest: a test after training to see whether the program improved the desired areas.

Evaluation of Test Quality

  • Validity: the degree to which a test measures what it is supposed to measure.
  • Construct Validity: the ability of a test to represent the underlying construct; the overall ability to measure what is supposed to be measured.
  • Face Validity: the appearance to casual observers that the test measures what it is designed to measure. In some fields, like psychology, this is masked so that people do not know what is being tested.
  • Content Validity: the assessment by experts that the test covers all the abilities relevant to what is being tested.
  • Criterion-Referenced Validity: the degree to which test scores are associated with some other measure of the same ability. It has three types: concurrent, predictive, and discriminant.
  • Concurrent Validity: the extent to which test scores are associated with those of other accepted tests that measure the same ability.
  • Convergent Validity: a high correlation between the results of the test being assessed and those of a recognized "gold standard" test.
  • Predictive Validity: the extent to which a test can predict future performance.
  • Discriminant Validity: the ability of a test to distinguish between two different constructs. Tests of different constructs should show a low correlation; this ensures that separate tests are not wasting effort measuring the same thing.
  • Reliability: the degree to which a test is consistent and repeatable.
  • Test-Retest Reliability: a measure of a test's consistency, i.e., the ability to get a similar result when the test is repeated.
  • Typical Error of Measurement (TE): the variation between repeated sets of scores, combining equipment error and the biological variation of athletes. Ex. if you weigh an athlete twice within a minute, you can assume any difference between the two scores is due to equipment error.
  • Intrasubject Variability: a lack of consistency by the person taking the test.
  • Interrater Reliability: the ability of two different testers administering the same test to arrive at similar scores.
  • Objectivity: another term for interrater reliability.
  • Interrater Agreement: another term for two different testers arriving at similar scores on the same test.
  • Intrarater Variability: a lack of consistency in the scores given by the same tester.

Test Selection

  • Metabolic Energy System Specificity means the test stresses the same energy systems that the sport requires.
  • Biomechanical Movement Pattern Specificity means the test involves movements similar to those required by the sport. Ex. the vertical jump is relevant to jumping in volleyball.
  • Experience and Training Status relate to test selection in that a well-trained, more experienced athlete can perform a more complicated skill consistently enough to be tested. A less experienced athlete may use poor technique and impair the test results.
  • Age and Sex affect test selection in that some tests may be valid for college-aged athletes but not for high schoolers. Using the chin-up as a test of upper body pulling endurance might make sense for male wrestlers but not for females, due to differences in pulling strength.
  • Environmental Factors include things like lighting, weather, time of day, and surface. Differences in any of these can affect test results.

Test Administration

  • Tests should be performed in a safe manner and athletes should be monitored before, during and after strenuous testing.
  • Testers should be well trained and consistent.
  • Information should be gathered about the testing session itself. The weather, time of day and specific testing set up should be recorded.
  • Test Battery: a set of physical tests administered together and ordered so that they do not interfere with one another. Ex. two nonfatiguing tests taken one right after another, like height and flexibility.
  • To prepare athletes, they should be made aware of the date, time, and purpose of the testing process. It may be appropriate to hold a pretest practice session or to incorporate familiarization with the test into training.
  • General and specific warm-ups performed before a test can increase test reliability.
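The ordering idea behind a test battery can be sketched in code. The category order below (nonfatiguing tests first, aerobic capacity last) reflects common strength and conditioning practice of scheduling the most fatiguing tests at the end; the specific tests and category names are hypothetical examples, not a prescribed sequence.

```python
# Rank categories from least to most fatiguing, so earlier tests
# do not degrade the results of later ones.
FATIGUE_ORDER = {
    "nonfatiguing": 0,        # height, body composition, flexibility
    "agility": 1,
    "power_strength": 2,      # vertical jump, 1RM tests
    "sprint": 3,
    "anaerobic_capacity": 4,  # highly fatiguing
    "aerobic_capacity": 5,    # most fatiguing, always last
}

# An unordered battery of tests, each tagged with its category.
battery = [
    ("300-yard shuttle", "anaerobic_capacity"),
    ("sit-and-reach", "nonfatiguing"),
    ("vertical jump", "power_strength"),
    ("1.5-mile run", "aerobic_capacity"),
    ("pro-agility (5-10-5)", "agility"),
    ("40-yard sprint", "sprint"),
]

# Sort the battery into administration order by fatigue category.
ordered = sorted(battery, key=lambda t: FATIGUE_ORDER[t[1]])
for name, category in ordered:
    print(f"{category:>20}: {name}")
```

Sequencing this way, with adequate rest between tests, helps keep each result a measure of the intended ability rather than of accumulated fatigue.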