Part 3 Examinations Structure
Computerized Adaptive Testing (CAT)
CAT examinations are scored differently from fixed-length examinations such as computer-based linear tests or pencil-and-paper tests. Items are placed on a standard scale, and the examination estimates where the candidate falls on that scale. Focus on answering every item to the best of your ability; item difficulty should not be a concern. Consider this high jump analogy:
- The candidate must jump over a bar set at four feet to make the high jump team.
- The coach needs to be 95% certain the candidate can consistently clear four feet.
- The bar starts at three feet six inches and is raised after each successful jump.
- If the candidate fails to clear a height, the bar is lowered.
- The process continues until the coach is 95% confident the candidate can jump four feet or higher.
- If the candidate clears four feet consistently, they pass; if not, they fail.
How a CAT Examination Works
The high jump analogy illustrates how a CAT examination works. It starts with an item slightly below the passing standard. As the candidate answers, the computer adjusts the difficulty based on their performance: incorrect answers trigger easier items, while correct answers lead to more challenging ones. The computer continuously evaluates the candidate's ability level in real time. The examination ends once enough data is collected to determine whether the candidate is above or below the passing standard.
A "Real" Candidate's Performance
CAT Examination Stopping Rules
A 95% Confidence Level is Required to Pass or Fail a CAT Examination
The examination stops after the minimum number of items in the following cases:
- 95% confidence the candidate is at or above the passing standard.
- 95% confidence the candidate is below the passing standard.
- The candidate reaches the maximum time limit.
The examination length is variable. Candidates near the passing standard need more items for the computer to confidently assess their ability. Each item helps refine the determination. The most precise ability estimate comes at the maximum examination length, especially for candidates near the passing standard.
Candidates well below the passing standard will be identified early, once enough data is collected. Regardless of length, the examination ensures precision and efficiency in determining entry-level competency for national certification.
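The adaptive loop and stopping rules described above can be sketched in code. The following is an illustrative simulation only, not the National Registry's actual scoring algorithm: the Rasch-style probability model, step sizes, starting point, and item counts are all assumptions chosen for the sketch.

```python
import math
import random

def simulate_cat(true_ability, passing_standard=0.0, min_items=20,
                 max_items=60, z=1.96, rng=None):
    """Illustrative CAT loop (hypothetical parameters, not NREMT's algorithm).

    Items and ability live on the same logit scale. Each item is targeted
    at the current ability estimate, and a simple running estimate with a
    standard error stands in for full item-response-theory scoring.
    """
    rng = rng or random.Random(0)
    estimate = passing_standard - 0.5   # start slightly below the standard
    answered = 0
    info = 0.0                          # accumulated Fisher information

    while answered < max_items:
        difficulty = estimate           # select an item near the estimate
        # Probability of a correct answer under a Rasch (1PL) model
        p = 1.0 / (1.0 + math.exp(-(true_ability - difficulty)))
        correct = rng.random() < p
        answered += 1

        # Each Rasch item contributes p * (1 - p) information
        info += p * (1.0 - p)
        se = 1.0 / math.sqrt(info)

        # Shrinking up/down steps: a crude stand-in for MLE scoring
        step = 2.0 / answered
        estimate += step if correct else -step

        # Stopping rules: 95% confidence either way, after the minimum length
        if answered >= min_items:
            if estimate - z * se > passing_standard:
                return "pass", answered
            if estimate + z * se < passing_standard:
                return "fail", answered

    # Maximum length reached: decide from the point estimate
    return ("pass" if estimate > passing_standard else "fail"), answered
```

A strong candidate (ability well above the standard) triggers the "pass" rule soon after the minimum length, while a candidate near the standard tends to run toward the maximum length before a confident decision is possible, mirroring the variable-length behavior described above.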
How are examination questions (items) created?
The National Registry's examination development process combines data analytics, external and internal clinical subject matter expertise, and editorial review during item construction. Each question is developed rigorously to meet widely accepted testing and clinical best practices. The process is expensive and time-consuming, and it is the same for all National EMS certification levels.
- External SME Writing: EMS clinicians write questions based on the test plan and submit them to the National Registry for consideration.
- Internal Review: The Examinations team reviews questions for clinical accuracy, grammar, and style.
- External SME Review: A separate committee of external subject matter experts (SMEs) assesses each question for accuracy, relevance, and currency. If this group approves, the item moves forward.
- Final Internal Review: The Examinations Content Team conducts a final check for referencing, proper rationales, and grammar.
The process ensures:
- Every question links to a task in the practice analysis.
- Correct answers are accurate, current, and complete.
- Incorrect answers are plausible, yet incorrect.
- All answers are found in commonly available, up-to-date EMS textbooks, EMS standardized course textbooks, or other widely available and accepted evidence.
- Controversial questions are discarded or revised.
- After item piloting, psychometricians conduct a sensitivity and bias review.
Linear Examinations

The AEMT certification examination is a fixed-length, linear computer-based test (CBT): all candidates receive the same number of items, though not the identical set of items. Candidates select their answers and can change them before moving to the next item; once an answer is submitted, it cannot be changed. Candidates are encouraged to answer each question to the best of their ability before submitting.
Example Items
Interactive sample items were created to familiarize candidates with the types of questions they will encounter.
BLS Sample Items:
EMR Sample Items
EMT Sample Items
ALS Sample Items:
AEMT Sample Items
Paramedic Sample Items
The sample items provided above aim to replicate the experience of the actual National Registry examination at Pearson VUE testing centers. To ensure optimal interaction with these sample items, we recommend accessing them using a Windows or Mac desktop or laptop computer. Please note that using a mobile phone may not provide the same experience.
Practice Analysis
The goal of licensure and certification is to assure the public that EMS clinicians meet specific standards and are qualified to provide care safely. The National Registry bases certification and licensure requirements on a candidate's ability to practice safely and effectively, measuring competency in a fair, valid, and reliable way. A practice analysis plays a critical role in this credentialing process, ensuring that examination content is connected to real-world EMS practice.
The practice analysis identifies key tasks, knowledge, skills, and abilities required for entry-level EMS care, helping the National Registry create examinations that reflect current practice. By analyzing data on task frequency and criticality, the National Registry determines the importance of each task, which guides the examination blueprint.