• 5 Jan 2026 2:26 PM | Darcy Richardson

    IAFTC Newsletter. Volume 2. Issue 1. January 5, 2026.

    Darcy Richardson1

    1Vermont Forensic Services, 146 River Street, Milton, VT 05468. Darcy.Richardson@vtforensicservices.com 

    This is an open-access article under the CC BY-NC-ND license.


    A Different Scientific Upbringing

    My scientific upbringing was in an environmental laboratory, but not just any environmental lab, and not just at any time. It was 1999, and the lab had been bought out after the infamous and massive Intertek fraud of the late 80s and 1990s had come to light [1]. The criminal trials and thus the public eye on laboratory fraud were contemporaneous with the start of my scientific career. To say that my lab was serious about quality control is putting it lightly. 

    The warnings from supervisors were clear: follow proper laboratory practice or go to prison. The people from the Intertek Texas Lab were a stark warning of what happens when laboratory staff don’t take their roles seriously.

    Our Standard Operating Procedures were uniform, established, and found throughout the country. Audits were monthly, sometimes even more frequently, due to the various organizations, states, and federal entities involved. We were used to being evaluated and critiqued, and while we would sometimes roll our eyes at the quality control department insisting that something be rerun because it was out by 0.01%, we did it anyway.

    From Uniform Standards to the “Wild West” 

    Imagine my surprise when I moved into forensics in late 2002.

    Ask a forensic scientist about the state of things, and you’d hear the term “The Wild West.” Gone were the uniform SOPs the entire country followed. Accreditation? Never heard of it. Audits? Yeah, pretty sure clinical did that kind of thing.


    Early Lessons in Risk and Responsibility

    Forensic labs were flying blind and making it up as they went along. I’ll never forget being in a classroom when an attendee admitted they were using urine for DUI cases, and the entire room turned around to stare in disbelief.

    Urine, of course, is a perfectly reasonable matrix to analyze to demonstrate exposure or past use, but it is not appropriate to determine a concentration in effect at a certain point in time or to support impairment, as one needs to do in a DUI case.

    My upbringing in environmental testing meant that in my new forensics lab, I insisted on quality control, following Good Laboratory Practice [2], erring on the side of caution. It was a position that often led to head-butting and ultimately being called a “whistleblower” [3,4].

    Over the twenty-plus years I’ve been involved in forensics, conversations have continued about the need to tame the Wild West: the need for standards and uniformity, proficiency testing, and accreditation; to bring forensics in line with other scientific areas and to demand that science used in the courts meet the requirements mandated everywhere else.

    And while things are improving, we’re not quite there yet.

    Why Accreditation Alone Is Not Enough

    Accreditation now exists, but it is still overbroad, mandating only minimal best practices. It was originally adapted from manufacturing standards and is still largely based on ISO. State requirements still vary from practically none at all to specific rules regarding instrumentation, accuracy, or precision. From there, programs and laboratories vary in what is required: some seek to align with the published literature and go above and beyond, while others fall far short of Good Laboratory Practice. Accreditation cannot go as far as it needs to without uniform standards.

    The Role of the American Standards Board

    Enter the American Standards Board (ASB) under the American Academy of Forensic Sciences [5].

    The ASB consists of multiple Consensus Bodies covering all areas of forensics from Anthropology to Wildlife, which work in conjunction with OSAC to draft standards, guidelines, and best practices for forensic work. It seeks to establish that uniformity that is already so standard in environmental testing and to improve the quality of forensic work performed nationwide.

    Quality Assurance may be a weird thing to be passionate about, but when a colleague recommended I apply to the ASB Toxicology Subcommittee that was looking for new members, I jumped at the opportunity.

    I’ve been honored to work with the Toxicology Consensus Body and various working groups over the years to help adjudicate comments and finalize documents for approval. This work is all volunteer, and I’ve had the pleasure of working with dedicated individuals from manufacturing, independent and public labs, and private practice in an attempt to bring forensics to where it should be. The end goal is that throughout the country, we can be assured that the science entering the courtroom meets the level of performance and acceptability in other areas of laboratory work.

    Changing the Culture Is the Hardest Part

    Standardizing an entire discipline is not only a long process; it also requires a change in culture [6]. Not everyone grew up in an environmental lab with the threat of prison held over their heads, after all.

    For each new standard, a slew of questions must be answered, and education provided on why “but we’ve always done it this way” is not a reason to skip good quality assurance. There is always some resistance. Always some argument that the work is good enough. Some of that comes from having always been a cowboy in that Wild West and chafing at authority. Part of it comes from being entrenched with police departments, so that the focus is not on science but on being a partner for prosecution.

    Independence, Bias, and the Courtroom

    This issue of independence has long been discussed in the forensic world [7]. My own forensic lab was maintained in the Health Department in an attempt to stay separate. However, even knowing about cognitive bias, when you see police officers or attorneys on a regular basis in your job, you can’t help but view them as your colleagues. And that can be okay, as long as the science always comes first.

    If science is to be used in the courtroom, it must meet scientific standards. That is paramount. Being accredited isn’t a free pass.

    When the Science Speaks for Itself

    A question that often comes up when discussing standards is “What will an attorney do with this document?” I have never found that to be a compelling concern. 

    Why? 

    Because you don’t have to worry about being confronted in court if the work stands for itself. It’s easy to explain that science is constantly changing and improving, and that we are always striving to meet that standard.

    One day, while I was still working for the state lab and testifying mostly for the prosecution, a defense attorney told me that I was the only one in the room telling the truth. “The cops are lying, I’m lying, my clients are lying, but not you.” I took that as the highest compliment. I still work to that standard.

    Where Forensics Still Needs to Go

    Is forensics where it needs to be? No, but there are people working to get it there. It will take time, effort, and a change in culture, but eventually it will get there.

    Conflicts of Interest

    Darcy Richardson, MS, is a forensic toxicology consultant and provides expert testimony in civil and criminal cases. She is a member of the American Standards Board Toxicology Consensus Body, where her participation is voluntary and unpaid. She has no financial interest in the topics discussed in this paper.

    References

    [1] Texas lab techs allegedly altered data. UPI 2000. https://www.upi.com/Archives/2000/09/21/Texas-lab-techs-allegedly-altered-data/4774969508800/ (accessed December 29, 2025).

    [2] Jena GB, Chavan S. Implementation of Good Laboratory Practices (GLP) in basic scientific research: Translating the concept beyond regulatory compliance. Regul Toxicol Pharmacol 2017;89:20–5. https://doi.org/10.1016/j.yrtph.2017.07.010.

    [3] Bromage A. DUI Chemists Blow the Whistle on Vermont’s Breath-Testing Program. Seven Days: Vermont’s Independent Voice 2011. https://www.sevendaysvt.com/news/dui-chemists-blow-the-whistle-on-vermonts-breath-testing-program-2143006 (accessed November 19, 2023).

    [4] Olson A, Ramsay C. Errors in toxicology testing and the need for full discovery. Forensic Sci Int Synerg 2025;11:100629. https://doi.org/10.1016/j.fsisyn.2025.100629.

    [5] Academy Standards Board. American Academy of Forensic Sciences 2025. https://www.aafs.org/academy-standards-board (accessed December 29, 2025).

    [6] Mnookin JL, Cole SA, Dror IE, Fisher BAJ, Houck MM. The need for a research culture in the forensic sciences. UCLA L Rev 2010.

    [7] Olson A. Truth, power, and the crisis of forensic independence. Forensic Sci Int Synerg 2025;11:100647. https://doi.org/10.1016/j.fsisyn.2025.100647.



  • 24 Dec 2025 9:14 AM | Joshua Ott

    IAFTC Newsletter. Volume 1. Issue 1. December 24, 2025.

    Joshua Ott1

    1Caselock, Inc., P.O. Box 285, Lebanon, GA 30146

    This is an open-access article under the CC BY-NC-ND license.


    Abstract

    The Horizontal Gaze Nystagmus (HGN) test is widely presented in courtrooms as an accurate and valid component of the Standardized Field Sobriety Test (SFST) battery. However, the 2007 Robustness of the Horizontal Gaze Nystagmus Test study, authored by Dr. Marceline Burns and sponsored by the National Highway Traffic Safety Administration (NHTSA), raises significant concerns regarding the test’s accuracy, validity, and false positive rates. This paper critically analyzes the raw data from the study, specifically the stimulus variation experiment, and compares those findings to the conclusions reported by the study’s author. When evaluated using the HGN criterion established in the San Diego Study and still taught in the 2025 edition of the SFST Manual (four or more clues indicating a BAC of 0.08 g/dL or more), the overall false positive rate was 67% when administered correctly, and false positive rates ranged from 79% to 92% when the stimulus position was altered. Despite these findings, the study’s published conclusions assert that HGN is “robust” and unaffected by minor procedural deviations. This paper demonstrates that the reported conclusions were achieved only after the study’s author altered the criterion of a false positive, lowering the BAC threshold from 0.08 g/dL to 0.03 g/dL. The analysis presented here reveals substantial issues with the study and the rate of false positives for HGN, even when administered in accordance with the NHTSA standard.

    Introduction

    The Horizontal Gaze Nystagmus (HGN) test has long been portrayed in courtrooms as a highly accurate and valid indicator of a person’s BAC being at or above the legal limit, and it remains a central component of the Standardized Field Sobriety Test (SFST) battery. Yet the scientific foundation for this confidence warrants renewed scrutiny. The 2007 study, The Robustness of the Horizontal Gaze Nystagmus Test, was authored by Dr. Marceline Burns and funded by the National Highway Traffic Safety Administration (NHTSA). Because Dr. Burns played a central role in developing the SFSTs, including authoring or co-authoring five of the six studies foundational to their use, her conclusions carry significant influence.

    However, a close examination of the raw data from the Robustness Study demonstrates a substantial discrepancy between the data and the conclusions published in the report. When the test is administered and interpreted correctly, a score of four or more clues indicates a blood alcohol concentration (BAC) of 0.08 g/dL or more, as established in the San Diego Study. Under this established interpretation criterion, the raw data exhibit an alarmingly high false positive rate, and the rate increases further when the stimulus position deviates from NHTSA’s standardized procedures. Despite these findings, the study characterizes HGN as “robust,” a conclusion reached only after redefining a false positive by lowering the threshold from 0.08 g/dL to 0.03 g/dL.

    This paper provides a critical analysis of the study’s methodology, its data, and its conclusions. By comparing the study’s raw data to the criterion governing HGN interpretation, this analysis demonstrates that the claimed robustness of HGN is not supported by the underlying data. In doing so, it illuminates significant implications for the admissibility, accuracy, validity, and weight of HGN evidence in impaired-driving cases.

    Study Overview

    The Robustness Study (The Robustness of the Horizontal Gaze Nystagmus Test)(1) was published in 2007 and was sponsored by the National Highway Traffic Safety Administration (NHTSA). It was authored by Dr. Marceline Burns. Dr. Burns was one of the investigators who developed the SFSTs, and she was an author or co-author of five of the six studies (1977(2), 1981(3), Colorado(4), Florida(5), and San Diego(6)) used to develop and validate the SFSTs, which include the HGN test. When analyzing the Robustness Study, it is important to understand that Dr. Burns was intimately familiar with HGN and its scoring criterion.

    The study addressed defense attorney arguments that variations from the standardized procedures in HGN administration invalidate the test, so it examined variations in how the test is administered. Three experiments were conducted. The first examined variables in the stimulus, such as stimulus speed during Lack of Smooth Pursuit, elevation of the stimulus throughout the HGN test, and distance of the stimulus from the subject’s face. The second examined the participants’ posture (standing, sitting, or lying down). The third examined the participants’ vision (monocular vs. binocular). This paper focuses primarily on the first experiment; the raw data for the second experiment were not published, so it cannot be analyzed with the same level of scrutiny. The third experiment is also briefly discussed.

    Stimulus Variation Experiment

    This was a laboratory experiment that involved volunteers who were dosed to different blood alcohol concentrations (BACs), measured using an Alco-Sensor IV. Seven experienced officers administered the HGN test to the participants.

    “A Video/HGN System (EyeDynamics, Inc) was used to make video records of participants’ eyes during examinations. The apparatus uses a small adjustable camera mounted in the right side of goggles that are worn by the participant. The camera transmits an image of the participant’s right eye to a television monitor and VCR which the examiner used to view the right eye. The open left side of the goggles allows the participant’s left eye to be viewed by the examiner.” 

    When analyzing the data, it must be understood that the criterion for HGN is that four or more clues indicate a BAC of 0.08 g/dL or more. This standard was established in the San Diego Study and remains in effect as of the 2025 edition of the NHTSA SFST Manual(7).

    The first variation tested was the speed of the stimulus. This involved moving the stimulus at both the “standard” speed (moving from the center of the face out to one side as far as the eye can travel in 2 seconds, then 2 seconds back to the center) and faster than the “standard” (1 second out to the side and 1 second back to the center) when checking for Lack of Smooth Pursuit. One officer administered the test correctly, and the other moved the stimulus faster than the standard. During this variation, the false positive rate of the HGN test was 76%, with an overall correct rate of 44%, when the test was administered correctly. (Appendix 1)

    The second variation was the elevation of the stimulus. This involved holding the stimulus at the “standard” height (2” above eye level), lower than the “standard” (0” / at eye level), and higher than the “standard” (4” above eye level). During this variation, the false positive rate of the HGN test was 54% with an overall correct rate of 61% when the test was administered correctly. (Appendix 2)

    The last variation was the distance of the stimulus from the participant’s face. This involved holding the stimulus at the “standard” distance (12-15”), closer than the “standard” (10”), and further than the “standard” (20”). During this variation, the false positive rate of the HGN test was 69% with an overall correct rate of 47% when the test was administered correctly. (Appendix 3)

    Overall, for the entire experiment (all three variations combined), the false positive rate of the HGN test was 67%, with an overall correct rate of 50%, when the test was administered correctly. (Appendix 4)

    HGN Test Accuracy by Stimulus Variation

    Stimulus Variation        | Standard Condition Tested      | False Positive Rate | Overall Correct Rate | Appendix
    Speed of Stimulus         | 2 seconds out / 2 seconds back | 76%                 | 44%                  | Appendix 1
    Elevation of Stimulus     | 2 inches above eye level       | 54%                 | 61%                  | Appendix 2
    Distance of Stimulus      | 12–15 inches from face         | 69%                 | 47%                  | Appendix 3
    Overall (All Variations)  | Standardized administration    | 67%                 | 50%                  | Appendix 4
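    In these figures, the false positive rate is the share of participants whose measured BAC was below 0.08 g/dL but who were nonetheless scored at four or more clues. As a minimal sketch of that calculation, using hypothetical (clues, BAC) pairs rather than the study's raw records:

```python
# Sketch of the false positive rate under the established San Diego
# criterion: 4+ clues predicts a BAC of 0.08 g/dL or more.
# The example data below are hypothetical, not the study's records.

def false_positive_rate(results, bac_threshold=0.08, clue_threshold=4):
    """results: list of (clues_scored, measured_bac) pairs."""
    below_limit = [(c, b) for c, b in results if b < bac_threshold]
    if not below_limit:
        return 0.0
    false_pos = sum(1 for c, _ in below_limit if c >= clue_threshold)
    return false_pos / len(below_limit)

# Hypothetical examinations: (clues scored by the officer, measured BAC in g/dL)
exams = [(4, 0.05), (6, 0.10), (2, 0.03), (5, 0.06), (3, 0.09), (4, 0.02)]

print(false_positive_rate(exams))  # 3 of the 4 sub-0.08 subjects flagged -> 0.75
```

    The point of the sketch is only that the denominator is the sub-0.08 group; the study's published tables report the analogous percentages for each stimulus condition.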


    What were the results when the test was not administered in accordance with the “standard”?

    Stimulus Speed

    • (1 Second) Faster than the “standard” - Overall correct 58% with a false positive rate of 50%. 

    • This is the only variation tested that increased accuracy and decreased false positives.

    Stimulus Elevation

    • (0”) Lower than the “Standard” - Overall correct 44% with a false positive rate of 79%.

    • (4”) Higher than the “Standard” - Overall correct 38% with a false positive rate of 91%.

    Stimulus Distance

    • (10”) Closer than the “Standard” - Overall correct 29% with a false positive rate of 92%.

    • (20”) Further than the “Standard” - Overall correct 35% with a false positive rate of 84%.

    The false positive rates of HGN were very high when the test was administered correctly, but increased notably when the stimulus was not positioned in accordance with the standardized guidelines. What was Dr. Burns’ conclusion, and how did she address the false positives? 

    “In conclusion, HGN as used by law enforcement is a robust procedure. The study findings provide no basis for concluding that the validity of HGN is compromised by minor procedural variations.” 

    How did Dr. Burns come to this conclusion? By changing the criterion for what would be considered a false positive. Image 1 below is a screenshot from page 15 of the study.


    Image 1. Criterion for a “Hit” in the HGN.


    The highlighted area shows that four clues were considered a “hit” if the participant’s BAC was 0.03 g/dL or higher. Lowering the criterion drastically reduced the number of false positives. As can be seen in each of the tables, very few of the false positives that occur when applying the established criterion were noted as false positives (denoted by **) by Dr. Burns. It is important to remember that Dr. Burns was the person who trained the officers in the San Diego Study on the updated HGN scoring criterion. Her statement (from the box above) that “the criteria by which scores have been classified as correct, false negative, or false positive as defined in the SFST curriculum appear below” is simply not accurate.

    It appears that instead of using the correct criterion and applying it to the data to form her opinions, Dr. Burns altered it to make the data fit her opinions. 
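    The effect of that redefinition is purely arithmetic: a 4+-clue score on a participant between 0.03 and 0.08 g/dL simply stops counting as a false positive. A short sketch with hypothetical data (again, not the study's records) shows how the reported count collapses:

```python
# Sketch: how redefining a "hit" shifts the false positive count.
# Under the established criterion, a 4+-clue score is a false positive
# when BAC < 0.08 g/dL; under the altered criterion, only when BAC < 0.03.
# All data below are hypothetical illustrations.

def count_false_positives(results, bac_threshold):
    """results: list of (clues_scored, measured_bac) pairs."""
    return sum(1 for clues, bac in results if clues >= 4 and bac < bac_threshold)

exams = [(4, 0.05), (5, 0.04), (6, 0.07), (4, 0.02), (5, 0.10)]

print(count_false_positives(exams, 0.08))  # 4 false positives (established criterion)
print(count_false_positives(exams, 0.03))  # 1 false positive (altered criterion)
```

    The same officer scores and the same measured BACs yield a fraction of the false positives once the threshold is moved, which is the mechanism this paper argues produced the "robust" conclusion.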

    Monocular Vision Experiment

    This was also a laboratory experiment and was listed as a preliminary analysis due to the limited number of participants. The participants were required to be functionally one-eyed, so data were obtained from only seven individuals. The participants were dosed with alcohol, and their BACs were measured with an Alco-Sensor IV. Two certified DREs independently examined the participants. The false positive rate was 68%. (Appendix 5)

    What did Dr. Burns state? 

    “Because HGN appears to be reduced in a non-functioning eye, if officers were to rely solely on eye signs, they would only increase their false-negative rates, and they might improperly release one-eyed individuals. There is no evidence that HGN signs in such individuals will lead to false arrests.”

    NHTSA Training Manuals

    All references to the Robustness Study were removed from the 2018 SFST, ARIDE, and DRE curricula.(8) (At the time of this writing, the study is still available on the NHTSA website, but is still absent from the NHTSA curricula.) This removal occurred due to a concern that part of the study was conducted in a manner that substantially deviated from the normal protocol for administering and interpreting HGN. (The purpose of the study was to examine deviations from the standardized protocol.) A formal retraction of the study was not recommended. There was no additional information provided as to what the specific issues were, or which experiments of the study were the problem. 

    There were no concerns raised with Dr. Burns changing the criterion to alter the number of false positives that were reported in the study. 

    The data speaks for itself. These were experienced officers; their correct or incorrect administration of the HGN test was known, the participants’ BACs were known, and the number of clues reported by the officers was known.

    Conclusion

    The analysis of the Robustness Study reveals a critical issue that has substantial legal and scientific implications: the study’s conclusions are based on an altered definition of a false positive that does not align with the established NHTSA criterion. This change dramatically reduced the number of reported false positives and enabled the author to conclude that HGN was “robust,” despite data showing false-positive rates ranging from 67% when administered correctly to 92% when the stimulus position was altered.

    This alteration was not a trivial mistake. Dr. Marceline Burns was the principal or co-author of five of the six foundational SFST development and validation studies that courts repeatedly rely on. If the same researcher who authored the core validation studies subsequently alters the definition of a false positive to align outcomes with a predetermined conclusion, it raises legitimate concerns about the integrity of their prior SFST validation research.

    The implications for legal proceedings are significant. Courts routinely rely on the SFST validation studies to support the admissibility and scientific accuracy and validity of HGN evidence. Given these issues, the weight afforded to HGN, and by extension the SFST battery, should be carefully reevaluated.

    Acknowledgements

    The author acknowledges the use of ChatGPT to assist in drafting and refining the abstract, introduction, and conclusion by improving wording, organization, and clarity based solely on the author’s original manuscript text. All substantive content, analysis, and conclusions are entirely the author’s own.

    Conflict of Interest Disclosures

    The author is a consultant and expert witness for DUI cases, but has received no funding or compensation for the preparation of this article.

    References

    [1] Burns M. The Robustness of the Horizontal Gaze Nystagmus Test. Southern California Research Institute; 2007.

    [2] Burns M, Herbert M. Psychophysical Tests for DWI (Driving While Intoxicated) Arrest. U.S. Department of Transportation National Highway Traffic Safety Administration; 1977.

    [3] Tharp V, Burns M, Moskowitz H. Development and Field Test of Psychophysical Tests for DWI Arrest. Southern California Research Institute; 1981.

    [4] Burns M, Anderson E. A Colorado Validation Study of the Standardized Field Sobriety Test (SFST) Battery. U.S. Department of Transportation National Highway Traffic Safety Administration; 1995.

    [5] Burns M, Dioquino T. A Florida Validation Study of the Standardized Field Sobriety Test (SFST) Battery. United States. National Highway Traffic Safety Administration; 1997.

    [6] Stuster J, Burns M. Validation of the Standardized Field Sobriety Test Battery at BACs Below 0.10 Percent. United States. National Highway Traffic Safety Administration; 1998.

    [7] NHTSA. SFST DWI Detection and Standardized Field Sobriety Test (SFST) Participant and Instructor Manuals. NHTSA; 2025.

    [8] DRE Technical Advisory Panel Mid-Year Meeting Minutes. March 27, 2018.



  • 19 Nov 2025 10:38 AM | Jay Gehlhausen

    IAFTC Newsletter. Volume 1. Issue 1. November 19, 2025.

    Jay M. Gehlhausen, Ph.D., DABFT-FD1

    1Forensic Toxicologist and Expert Witness, JG Tox LLC, Apex, NC 27539


    This is an open-access article under the CC BY-NC-ND license.


    Abstract

    Kratom (Mitragyna speciosa) is a Southeast Asian botanical product that has gained increasing prominence in forensic toxicology casework throughout North America. The leaves of this tropical tree contain numerous indole alkaloids, most notably mitragynine and 7-hydroxymitragynine, which exhibit complex pharmacological activity at opioid and adrenergic receptors. At low doses, kratom produces stimulant effects, while higher doses result in sedation and euphoria similar to opioid intoxication. The legal status of kratom remains controversial; while it is classified by the U.S. Drug Enforcement Administration as a Drug and Chemical of Concern, federal scheduling efforts have stalled, leading to a patchwork of state-level regulations. Forensic laboratories have developed robust LC-MS/MS methods for detecting kratom alkaloids and their metabolites in biological specimens, though therapeutic and toxic concentration ranges remain poorly defined. Published case reports document mitragynine blood concentrations ranging from 10-970 ng/mL in driving under the influence investigations and 10-4,310 ng/mL in fatal cases, though polydrug use is common. With over 1,200 adverse events reported to the FDA between 2008 and 2024, including 637 fatalities, and increasing prevalence in impaired driving cases, forensic toxicologists require familiarity with kratom's chemistry, pharmacology, and analytical detection. This review synthesizes current knowledge regarding kratom's chemical composition, metabolism, toxicological effects, and legal status to assist toxicologists and legal professionals in evaluating kratom-related cases.

    Introduction

    Mitragyna speciosa, or kratom, is a tropical tree native to Southeast Asia used historically as a natural stimulant and analgesic. Local residents in the region also refer to the tree as thang, kakuam, ketum, or biak. Early users of Mitragyna speciosa would chew or smoke the leaves as a respite from demanding physical labor. Cultural acceptance developed over time in Thailand and Malaysia, and kratom would later become a global commodity. The chemical composition is not fully characterized, but fifty-four alkaloids have been identified, two of which, mitragynine and 7-hydroxymitragynine, exhibit significant neurological activity (DEA, 2024). Plant varieties differ in composition and potency based on regional soil and climatic conditions; Thai kratom, for example, is the most potent due to a more favorable climate. Another variety found in Malaysia, referred to as ketum, has a lower content of the psychoactive alkaloid mitragynine.

    In recent years, products manufactured with concentrated levels of mitragynine have been marketed as health and medicinal products. In the United States, for example, the plant leaves are sold as a powder available through internet sites and herbal shops. Newer formulations, such as brewed tea or concentrated drinks, are also prepared from the crushed leaves. Kratom is available from numerous vendors with names like Kona and Star (Kratom, 2025). Some of these products are highly potent, causing state governments to take notice following anecdotal reports of life-threatening intoxication and dependency.

    The Legal Status of Kratom

    Despite evidence of misuse and opiate-like pharmacology, the legal status of kratom remains in limbo. In September of 2016, the Drug Enforcement Administration (DEA) announced plans to classify mitragynine and 7-hydroxymitragynine as Schedule I narcotics using the administration’s emergency classification authority (Erickson, 2016). The pushback from Congress came immediately. With opposition coming from fifty-one members of the House of Representatives, the DEA was forced to reconsider the kratom ban. The Drug Enforcement Administration remains skeptical of the drug’s efficacy and considers kratom a Drug and Chemical of Concern (DEA, 2024). Advocates and several research centers, on the other hand, have pointed to evidence of relief from anxiety, management of opioid dependence, and limited abuse potential as justification for legal status. Even so, only limited legislative progress has been made at the federal level.

    In the absence of federal regulation, fifteen states have addressed concerns from the legal and medical communities by passing their own laws. Alabama, Arkansas, Indiana, Rhode Island, Vermont, and Wisconsin have banned mitragynine and 7-hydroxymitragynine-containing products. Other states have attempted to limit abuse by enacting age requirements. Tennessee has restrictions on the synthetic products but maintains legal status for the plant material (CRS, 2023). Kratom laws vary significantly across the fifty states. Issues relating to impairment and Driving While Impaired (DWI) can only be managed on a case-by-case basis. The public debate over the efficacy of kratom will continue because few controlled studies have been performed to understand the acute and long-term effects of kratom on human health. The actual risk of psychological and physical addiction has not been elucidated through scientific study.

    The Chemistry of Kratom

    The psychoactive constituents of kratom are characterized as alkaloids. This broad class of naturally occurring organic compounds, which includes caffeine and nicotine, contains at least one nitrogen atom and exhibits weakly basic chemistry. Other alkaloids, such as theobromine and theophylline, derivatives of caffeine, are amphoteric. Alkaloids dissolve poorly in water but readily dissolve in organic solvents such as diethyl ether, an important consideration in the preparation of concentrated drinks. Alkaloids can also form salts that are freely soluble in water and ethanol. The alkaloids in kratom are classified as indole alkaloids; they contain the structural moiety of indole and are structurally related to the pentacyclic indole alkaloids yohimbine and voacangine (Basiliere, 2020).

    Indole is an organic compound classified as an aromatic heterocycle with a bicyclic structure, consisting of a six-membered benzene ring fused to a five-membered pyrrole ring. Indoles are widely distributed in nature, most notably as the amino acid tryptophan and the neurotransmitter serotonin. There are more than 4,100 known indole alkaloids, which often exhibit significant physiological activity. The indole structure is the backbone of the kratom alkaloids. Mitragynine is the most abundant active ingredient; in one study, alkaloid extracts from the tree leaves contained 66% and 12% mitragynine by weight for Thai and Malaysian varieties, respectively (Karunakaran, 2022). The plant contains fifty additional alkaloids that are present at much lower concentrations and have not been fully investigated (Karunakaran, 2022).

    Pharmacology of Mitragynine and 7-hydroxymitragynine

    At low dose levels, mitragynine exhibits mild stimulant effects, but as the dosage increases, an individual can experience sedation and euphoria similar to opioid use; this duality reflects concurrent activity at α-adrenergic and opioid receptors. Notably, mitragynine has a long half-life in blood, estimated at 23 hours, which increases the risk of drug toxicity during binge use (Trakulsrichai, 2015). Doses of 2 to 10 grams of leaf material are typical of the casual user. Physiological effects begin within 5 to 10 minutes after ingestion and last for 2 to 5 hours. Pharmacological investigations demonstrate that mitragynine and 7-hydroxymitragynine have µ-opioid receptor agonist activity, but mixed δ- and κ-opioid activity has also been observed.
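    The practical significance of the 23-hour half-life can be sketched with a simple first-order elimination model. This is an illustrative simplification only; the 6-hour re-dosing interval below is hypothetical and not taken from the cited studies:

```python
def fraction_remaining(t_hours: float, half_life: float = 23.0) -> float:
    """Fraction of an absorbed dose still present after t_hours,
    assuming simple first-order (exponential) elimination."""
    return 0.5 ** (t_hours / half_life)

# With a 23 h half-life, roughly half of a dose is still on board a full day later.
print(round(fraction_remaining(24), 2))

# Re-dosing every 6 h (hypothetical interval) gives a steady-state accumulation
# factor of about 6x the single-dose level, illustrating why binge use raises
# the risk of toxicity.
accumulation = 1 / (1 - fraction_remaining(6))
print(round(accumulation, 1))
```

    Real pharmacokinetics are more complex (absorption, distribution, and metabolite activity all matter), but the geometric-accumulation argument is why long half-life drugs build up under repeated dosing.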

    The major kratom alkaloids mitragynine, paynantheine, speciogynine, and speciociliatine, in addition to several metabolites, have been detected in the urine of rats and humans following ingestion of kratom. Few studies describe the metabolic pathways of mitragynine, although there has recently been renewed interest in this area of research. An early study by Zarembo et al. reported that oxidation and hydroxylation were the primary metabolic routes, and mitragynine is known to undergo hepatic metabolism (Zarembo, 1974). Philipp et al. conducted the first comprehensive in vivo study: rats were administered a single 40 mg/kg dose of mitragynine by gastric intubation (Philipp, 2009). The authors reported that phase I metabolism involved hydrolysis and demethylation, followed by oxidative and reductive transformations to produce carboxylic acid and alcohol derivatives. Additionally, mitragynine undergoes extensive phase II metabolism, producing both glucuronide and sulfate conjugates (Basiliere, 2020).

    Kratom Toxicology

    There is no in-depth understanding of kratom toxicology or how the drug influences the central nervous system (CNS). There have been recent human and animal studies involving kratom alkaloids, but at present, no authors have proposed therapeutic and toxic concentration ranges (Maxwell, 2020). A review of the kratom literature found blood concentrations of 10–970 ng/mL and 10–4310 ng/mL for DUID and death investigation cases, respectively (Society of Forensic Toxicologists, 2020); these wide ranges preclude the development of a practical safety scale. Similarly, there are no comprehensive studies on the interaction of mitragynine with other drugs, despite postmortem case reports of mixed-drug fatalities involving kratom (McIntyre, 2015).

    Acute side effects observed during emergency room presentations include nausea, itching, sweating, dry mouth, constipation, and loss of appetite. More severe toxic effects, like psychosis and hallucinations, have also been reported. As kratom products have become readily available in North America, reports of adverse medical events and emergency room visits have also increased. The American Association of Poison Control Centers’ 2022 report indicates that kratom accounted for 1,278 case mentions, 794 single exposures, and 586 cases that involved treatment in a healthcare facility (Gummin, 2023). According to data from the Food and Drug Administration’s adverse event reporting system, mitragynine was identified in 1,255 cases from 2008 to 2024. Of these cases, 1,171 were classified as serious, and 637 reports involved fatalities (DEA, 2025).

    The analysis of biological specimens for mitragynine and other kratom alkaloids has become routine since the development of robust liquid chromatography–tandem mass spectrometry (LC–MS/MS) methods. The first reported analytical method for the quantitation of mitragynine was a high-performance liquid chromatography–ultraviolet detection (HPLC–UV) method measuring mitragynine in the serum of dosed rats, with a limit of quantitation (LOQ) of 100 ng/mL (Janchawee, 2007). More recently, a study by Le et al. reported a quantitative LC–MS/MS procedure for the identification of mitragynine and other kratom alkaloids in human urine, including the metabolites 5-desmethylmitragynine and 17-desmethyldihydromitragynine (Le, 2012).

    Conclusion

    Kratom, with its wide array of commercial products, defies simple characterization as either a drug of abuse or a natural medicine, and it has vocal advocates and detractors. The active constituents, mitragynine and 7-hydroxymitragynine, have demonstrated complex pharmacology and myriad psychophysical effects. To date, there is no clear understanding of the toxicology, and claims of medical efficacy are unproven. Despite political support for legal status at the federal level, state governments have moved forward with regulations restricting the sale of kratom. Scientific research on kratom predominantly supports the Drug Enforcement Administration’s classification of kratom as a Drug and Chemical of Concern. The intent of this article is to inform toxicologists and attorneys about the properties of kratom, since it is highly likely that cases of overdose and motor vehicle accidents will increase in the coming years.

    Declaration of competing interest

    The author serves as an expert witness in forensic toxicology cases and receives compensation for testimony and consultation services.

    AI Disclaimer 

    Artificial intelligence tools were used to assist in the preparation of this manuscript, including reviewing, editing, and formatting. All scientific content has been verified by the author, who takes full responsibility for the accuracy and integrity of the work.

    References

    CRS. (2023). Kratom Regulation: Federal Status and State Approaches. Retrieved from Congress.gov: https://crsreports.congress.gov/

    Le, David, et al. (2012). Analysis of Mitragynine and Metabolites in Human Urine for Detecting the Use of the Psychoactive Plant Kratom. Journal of Analytical Toxicology, 36, 616–625.

    DEA. (2024). Kratom. Retrieved from Drug Fact Sheet: https://www.dea.gov/factsheets/kratom/

    DEA. (2025). Kratom. Retrieved from DEA: https://deadiversion.usdoj.gov/drug_chem_info/kratom.pdf/

    Gummin, D. D., Mowry, J. B., Beuhler, M. C., Spyker, D. A., Rivers, L. J., Feldman, R., … DesLauriers, C. (2023). 2022 Annual Report of the National Poison Data System® (NPDS) from America’s Poison Centers®: 40th Annual Report. Clinical Toxicology, 61(10), 717–939. https://doi.org/10.1080/15563650.2023.2268981 

    Maxwell, Elizabeth A., et al. (2020). Pharmacokinetics and Safety of Mitragynine in Beagle Dogs. Planta Med.,86(17), 1278–1285.

    Erickson, B. (2016). Congress pushes to delay kratom ban. C&EN, October 10, 2016.

    McIntyre, Iain M., et al. (2015). Mitragynine ‘Kratom’ Related Fatality: A Case Report with Postmortem Concentrations. Journal of Analytical Toxicology, 39, 152–155.

    Zarembo, John E., et al. (1974). Metabolites of Mitragynine. Journal of Pharmaceutical Sciences, 63, 1407-1415.

    Kratom. (2025). Top 13 Ultimate Kratom Vendors Online – Verified Reviews (2025). Retrieved from Kratom.org: https://kratom.org/vendors/

    Philipp, AA, et al. (2009). Studies on the metabolism of mitragynine, the main alkaloid of the herbal drug Kratom, in rat and human urine using liquid chromatography-linear ion trap mass spectrometry. J Mass Spectrom.44(8):1249-61.

    Society of Forensic Toxicologists. (2020). Retrieved from Short Communication for the Analysis of Mitragynine: https://www.soft-tox.org/assets/NPSLiterature/mitragynine.pdf 

    Basiliere, Stephanie, et al. (2020). CYP450-Mediated Metabolism of Mitragynine. Journal of Analytical Toxicology, 44, 301–313.

    Karunakaran, Thiruventhan, et al. (2022). The Chemical and Pharmacological Properties of Mitragynine and Its Diastereomers: An Insight Review. Frontiers in Pharmacology, 13.

    Trakulsrichai, Satariya, et al. (2015). Pharmacokinetics of mitragynine in man. Drug Design, Development and Therapy, 9, 2421–2429.

    Janchawee, B., et al. (2007) A high-performance liquid chromatographic method for determination of mitragynine in serum and its application to a pharmacokinetic study in rats. Biomedical Chromatography, 21, 176–183.






  • 18 Nov 2025 9:01 AM | Joshua Ott

    IAFTC Newsletter. Volume 1. Issue 1. November 18, 2025.

    Joshua Ott1

    1Caselock, Inc., P.O. Box 285, Lebanon, GA 30146

    This is an open-access article under the CC BY-NC-ND license.

    Download PDF.

    Abstract

    The Drug Recognition Expert (DRE) program was developed to equip law enforcement officers with the ability to identify drivers impaired by drugs other than, or in addition to, alcohol. Although implemented across the United States, the program’s scientific basis remains contested. This paper critically examines the studies that validate the DRE’s 12-step Drug Influence Evaluation (DIE). A review of key studies—including the Johns Hopkins (1984), Los Angeles Police Department (1986), Arizona (1994), and Heishman et al. (1996, 1998) studies—reveals that DRE evaluations are not validated for actual impairment, but rather for the presence of drugs as confirmed by toxicology. False positive rates ranging from 5% to over 60% raise serious concerns about the accuracy of DRE assessments, particularly given their role in criminal prosecutions. The findings indicate that without independent, performance-based validation, the DRE program lacks the foundation necessary to serve as a scientifically valid measure of impairment.

    Introduction

    Drug-impaired driving presents a significant challenge for law enforcement, the legal community, and public safety. In response to the limitations of toxicological testing, which cannot determine behavioral impairment, the Drug Recognition Expert (DRE) program was developed to train officers to recognize impairment in drivers using the Drug Influence Evaluation (DIE). The DIE is a standardized 12-step protocol intended to determine whether the suspect is impaired; if so, whether the cause is medical or drug-related; and, if drug-related, which category or categories of drugs are the likely cause of the impairment. However, the program’s validation studies have not established that the DIE can accurately determine impairment as defined by being “less safe” to operate a vehicle. This paper critically examines the DRE program’s foundation, methods, and validation studies, arguing that the DIE lacks scientific validation for its stated purpose and carries an unacceptable rate of false positives.

    Overview of the DRE Program

    A Drug Recognition Expert (DRE) is an officer who has been trained to identify people who are impaired by drugs other than or in addition to alcohol. They are trained to use a standardized, systematic 12-step Drug Influence Evaluation (DIE) to determine three things.


    1. Is the subject impaired?

    2. Is the impairment due to medical issues or drugs?

    3. If it is drugs, what category/categories of drugs is/are most likely causing the impairment? (1)


    The 12-step DIE contains the following: breath test, interview of the arresting officer, preliminary examination, eye examinations, divided attention tests, vital signs (pulse is checked three separate times), pupil size and reaction to light, muscle tone, check for injection sites, subject’s statements, opinion of the evaluator, and toxicological analysis. 


    To become certified, a DRE student must have a 75% toxicological corroboration rate. This means that if a DRE opines that one category of drugs is causing the impairment, that category must be confirmed by toxicology. If the DRE opines that two categories are causing the impairment, at least one must be confirmed by toxicology. If they opine that three categories are causing the impairment, at least two must be confirmed by toxicology. The Shinar and Schechtman (1998) study (2) stated that DREs should be encouraged to always list two drug categories, since doing so increases the likelihood of identifying the correct drug category with no downside.
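    The corroboration rule above can be stated precisely in code. This is a minimal sketch of the rule as described in this section; the function name and category labels are illustrative and not taken from the DRE program materials:

```python
def is_corroborated(opined: set, confirmed: set) -> bool:
    """Corroboration rule as summarized above: 1 category opined -> that
    category must be confirmed; 2 opined -> at least 1 confirmed;
    3 opined -> at least 2 confirmed."""
    required = {1: 1, 2: 1, 3: 2}.get(len(opined))
    if required is None:
        raise ValueError("a DRE opinion lists one to three drug categories")
    return len(opined & confirmed) >= required

# Why listing two categories has "no downside" for the DRE: a single wrong
# call fails corroboration, but pairing it with a second guess can still pass.
print(is_corroborated({"CNS Stimulant"}, {"Narcotic Analgesic"}))                        # fails
print(is_corroborated({"CNS Stimulant", "Narcotic Analgesic"}, {"Narcotic Analgesic"}))  # passes
```

    The asymmetry is visible in the rule itself: adding a second opined category never raises the number of confirmations required.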

    Primary Validation Issue

    The main issue in the DRE program lies in its reliance on toxicological confirmation as a measure of validity. Toxicology can confirm the presence of a drug or its metabolites, but it does not indicate whether the individual was behaviorally impaired. The use of toxicology in validation studies to determine the accuracy of DREs fails to measure the DIE’s intended purpose: identifying drivers who are less safe to operate a vehicle. This circular validation approach, where toxicology results are used to support DRE opinions, and DRE opinions are then used to claim that positive toxicology findings indicate psychoactive drug effects, fails to provide independent validation of the DIE’s ability to identify drug impairment. 


    The DRE Manual states that one of the reasons for the DIE being needed is: “chemical tests usually disclose only that the subject has used a particular drug recently. The chemical test usually does not indicate whether the drug is psychoactive at the present time. Thus, the DRE procedures are needed to establish that the subject not only has used the drug, but also that he or she is under the influence.” (2025 DRE 7-Day Participant Manual, Session 3, Page 5)

    Review of Key Studies

    The DRE training course teaches officers about three studies, but the primary focus is on the Johns Hopkins Laboratory Study (1984)(3) and the Los Angeles Field Validation Study (1986)(4). The Arizona Study (1994)(5) is only briefly mentioned in the manual.

    Johns Hopkins Laboratory Study (1984)

    The Johns Hopkins study, sponsored by the National Highway Traffic Safety Administration (NHTSA), involved 80 male volunteers aged 18–35. Only four DRE raters were involved. The false positive rate was 5%. This false positive rate is potentially misleading due to the volunteers being trained on the psychomotor tasks and subjective effect questionnaires that were used in the study ahead of time. If they did not show adequate performance, they were not accepted for participation in the study. This removed potential false positives ahead of time. Additionally, the DREs were free to inquire how the subjects felt, had they ever felt that way before, or had they ever used drugs that made them feel that way, etc. These questions possibly unblinded the DREs and may have influenced their opinions.

    Los Angeles Police Department Field Study (1986)

    The LAPD study involved a total of 219 suspects. Twenty-eight suspects did not provide a blood sample and were not included in the final data. There were 18 suspects who were determined not to be under the influence of drugs by the DREs during the Preliminary Exam (Step 3 of the DIE) and were released from custody. This left a total of 173 suspects. 


    All 173 were believed to be under the influence of drugs by the DREs. One suspect had no drugs or alcohol detected, and 10 suspects had only alcohol detected. This means that the nine steps of the DIE following Step 3 did not improve the DREs’ accuracy at all: every drug-negative suspect who passed the Preliminary Exam was still judged to be drug-impaired. 


    Adding the 18 suspects who were determined not to be under the influence of drugs during the Preliminary Exam (Step 3) to the 11 suspects who tested negative for drugs gives 29 drug-negative suspects. The DREs incorrectly determined that 11 of those 29 were drug-impaired, a false positive rate of 37.9%.
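    The arithmetic can be checked directly, using only the counts summarized in this section:

```python
# Drug-negative suspects from the LAPD field study as summarized above
released_at_step3 = 18        # judged unimpaired at the Preliminary Exam and released
drug_negative_arrests = 11    # judged impaired, but toxicology found no drugs (1 case)
                              # or only alcohol (10 cases)

drug_negative_total = released_at_step3 + drug_negative_arrests   # 29
false_positive_rate = drug_negative_arrests / drug_negative_total
print(f"{false_positive_rate:.1%}")   # 37.9%
```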


    Even the study’s authors acknowledged that drug presence does not equate to impairment, noting, 

    There is no way to determine objectively whether the suspects were actually too “impaired” to drive safely. The fact that drugs were found in a suspect's blood does not necessarily mean the suspect was too impaired to drive safely. Contrary to the case with alcohol, we do not know what quantity of a drug in blood implies impairment. Thus, this study can only determine whether a drug was present or absent from a suspect's blood when the DRE said the suspect was impaired by that drug.(p. 15). 

    This study failed to validate the DIE for identifying drug impairment.

    Arizona Study (1994)

    The Arizona study analyzed 500 records from an established DRE program. The false positive rate was 61.7%, with 42 of 68 individuals with no drugs detected being incorrectly ruled impaired. 

    Heishman et al. Laboratory Studies (1996, 1998)

    These were laboratory studies that were financially supported by NHTSA. Both studies used certified DREs to conduct evaluations on volunteers. The volunteers had a history of drug use and were dosed with a drug from the category that they had a history of using. In the 1996 study, DREs incorrectly identified impairment in 40.7% of placebo-dosed cases and were only 50.6% accurate in predicting the correct drug category. 

    If the cases in which the DRE concluded that ethanol was causing the impairment were excluded, the DREs were only 44.4% correct in predicting the drug category. The issue with this study is that the volunteers did not reside at the location between study sessions. This allowed them to use drugs on their own and created the possibility that they were still impaired by drugs when they showed up to the next session and received a placebo dose.

    The 1998 study addressed this issue by having the volunteers reside in a closed unit of the Addiction Research Center. The DREs incorrectly identified 28.1% of placebo-dosed participants as impaired and were only 32.1% accurate in identifying the correct drug category. These findings demonstrate that DREs frequently misidentify sober individuals as being impaired and lack consistent accuracy in identifying drug categories.

    The 1996 study also noted, 

    Until a broad range of drugs and drug doses are tested on the DEC evaluation and independent performance tests under laboratory conditions, it is difficult to assess the validity of the DEC evaluation with respect to impairment criteria. Such validation is critically needed, however, because the current means of confirming a DRE’s prediction of impairment is the presence of parent drug or metabolite in blood or urine, which, with the exception of ethanol, provides little, if any, information concerning behavioral impairment.

    In the almost 30 years since this paper was published, there have been no known attempts to validate the DIE with respect to impairment criteria.


    Study                  Year    False Positive Rate (%)
    Johns Hopkins          1984    5
    LAPD                   1986    37.9
    Arizona                1994    61.7
    Heishman et al.        1996    40.7
    Heishman et al.        1998    28.1
    Shinar & Schechtman    1998    56.9

    Discussion

    Through decades of research, no study has validated the DIE for determining actual impairment. Instead, the DIE’s accuracy is judged using toxicological confirmation, a method that is incapable of measuring drug impairment. False positive rates ranging from 5% to over 60% demonstrate an alarming inability to correctly identify drug-free subjects as non-impaired. 

    This is particularly alarming in a world in which multiple states have legalized recreational marijuana, many people are prescribed medications that fall into the CNS Depressant, CNS Stimulant, and Narcotic Analgesic categories of the DRE program, and DRE opinions can significantly influence legal outcomes. The omission of critical research, such as the Heishman et al. studies, from current DRE training materials raises serious concerns. Without testing the accuracy of the DIE against independent performance tests, as called for in Heishman et al. (1996), the DIE cannot be considered a scientifically validated method of detecting drug impairment.

    Conclusion and Recommendations

    The DRE program’s reliance on toxicological corroboration instead of behavioral impairment undermines its scientific legitimacy. Given the serious legal and social implications of incorrectly identifying a sober person as impaired, the DRE program’s continued use without independent validation is not acceptable. Future research must address two foundational questions: 

    1. Can DREs accurately identify drivers who are less safe to operate a vehicle due to drug impairment? 

    2. Can DREs accurately discriminate sober individuals from impaired ones? 

    Until such evidence exists, DRE evaluations should be handled with significant caution in the legal community.

    AI Use Disclosure

    The author acknowledges the use of ChatGPT to refine sentence structure, enhance clarity, and correct grammatical errors. All substantive content and conclusions are the author’s own.

    Conflict of Interest

    The author is a consultant and expert witness for DUI cases.

    References

    1. 2025 Drug Recognition Expert Manuals (Pre-School and 7-Day School Instructor and Participant Manuals)

    2. Modeling the DRE Evaluation of Signs and Symptoms to Improve the Validity of Drug Impairment Diagnosis, David Shinar and Edna Schechtman (1998)

    3. Identifying Types of Drug Intoxication: Laboratory Evaluation of a Subject-Examination Procedure, Bigelow, Bickel, Roache, Liebson, Nowowieski (Johns Hopkins -1985)

    4. Field Evaluation of the Los Angeles Police Department Drug Detection Procedure, Compton, (LAPD -1986)

    5. Drug Recognition Expert (DRE) Validation Study, Eugene V. Adler and Marcelline Burns (Arizona - 1994)

    6. Laboratory Validation Study of Drug Evaluation and Classification Program, Ethanol, Cocaine, and Marijuana, Heishman SJ, Singleton EG, Crouch DJ (1996)

    7. Laboratory Validation Study of Drug Evaluation and Classification Program: Alprazolam, d-Amphetamine, Codeine, and Marijuana, Stephen J. Heishman, Edward G. Singleton, Dennis J. Crouch (1998)


  • 17 Nov 2025 4:29 PM | Aaron Olson (Administrator)

    IAFTC Newsletter. Volume 1. Issue 1. November 17, 2025.

    Aaron Olson1; Charles Ramsay2

    1ARO Consulting LLC, PO Box 132, Hugo MN, 55038

    2Ramsay Law Firm, PLLC, 2780 E Snelling Ser Dr Suite #330, Roseville, MN, 55113, USA

    This is an open-access article under the CC BY-NC-ND license.

    Download PDF.

    Introduction

    On November 13, 2025, the Iowa Court of Appeals issued its decision in State v. Withers [1]. While the panel affirmed the conviction, the dissent raised substantive concerns that are directly relevant to forensic consultants and experts across the country. 

    The dissent focused on the way late-disclosed forensic evidence can undermine the reliability of the judicial process and prevent meaningful scientific review. To illustrate this broader issue, the judge cited several authoritative sources on forensic reliability, including the articles Errors in Toxicology Testing and the Need for Full Discovery [2], Invalid Forensic Science Testimony and Wrongful Convictions [3], and The Courts, the NAS, and the Future of Forensic Science [4].

    The citation of Errors in Toxicology Testing and the Need for Full Discovery is noteworthy for members of the International Association of Forensic Toxicology Consultants (IAFTC) because it highlights the growing recognition within appellate courts that forensic evidence requires careful evaluation, full transparency, and adequate time for expert interpretation. 

    The dissent’s reasoning aligns closely with the goals of our association: ensuring that toxicology results and their interpretation are presented to the court in a scientifically sound, properly documented, and fully discoverable manner.

    Background of the Case and the Discovery Issue

    Only eight days before jury selection, the prosecution disclosed a 63-page digital forensics report and a newly identified expert witness. Defense counsel stated that he did not understand the report, did not have time to consult an expert, and could not properly evaluate the accuracy or meaning of the data.

    The majority held that the late disclosure did not warrant a continuance. The dissent, however, viewed the situation as fundamentally unfair and scientifically unsound.

    The Dissent’s Focus on Forensic Reliability

    The dissent emphasized that forensic evidence is not self-explanatory and cannot be meaningfully challenged without adequate preparation. Late disclosures deprived the defense of time to obtain independent analysis, evaluate the methodology, and examine the underlying data.

    One line from the dissent captured the core concern:

    “I worry that treating an untimely expert witness disclosure like any other witness disclosure perpetuates my concern about how we treat forensic evidence in the courtroom.”

    Judge Sandy’s concern reflects a key point recognized by forensic science researchers and practitioners. Digital forensics, toxicology, pattern evidence, and other analytical disciplines involve complex scientific processes that demand expert interpretation. Treating an expert report as if it were equivalent to a simple fact witness disclosure ignores the reality of forensic work and its potential for error.

    Why This Matters for Forensic Toxicology Consultants

    There are several reasons why IAFTC members should pay attention to this development.

    1. Courts are increasingly aware of the limits of forensic science

    For years, many forensic disciplines have been treated as inherently credible. But recent scholarship and national reviews of forensic science, including reports from the National Academy of Sciences and the President’s Council of Advisors on Science and Technology, have prompted some courts to engage more directly with academic research when considering the reliability of forensic practices [3–6].  For example, in Birchfield v. North Dakota, 579 U.S. 438 (2016), Justice Alito’s majority opinion explicitly cited the forensic science article by A.W. Jones, titled “Measuring Alcohol in Blood and Breath for Forensic Purposes—A Historical Review,” [7,8].

    2. Discovery practices directly affect our ability to provide accurate opinions

    When experts receive data late or in incomplete form, it becomes difficult or impossible to evaluate whether an analysis was conducted correctly. Interpretation of toxicology results relies on instrument calibration records, raw data files, method validation studies, chain-of-custody documentation, chromatograms, and other details that are not always included in summary reports. The dissent recognized that meaningful review requires full access to this information.

    3. The citation reinforces the importance of expert education and advocacy

    The fact that our article was cited shows that courts recognize the role experts play in identifying laboratory errors and helping attorneys understand forensic limitations. It also affirms the importance of continuing to educate legal professionals about laboratory processes, method limitations, and the risks associated with incomplete discovery.

    Implications for the Future

    Judge Sandy’s dissent in State v. Withers is part of a broader trend. Courts are showing greater interest in the scientific integrity of forensic evidence, the importance of discovery, and the role of experts in ensuring accuracy. While the majority did not adopt this view in the Withers case, the presence of a detailed, research-supported dissent signals that the conversation is evolving.

    Forensic toxicology consultants, especially those who provide expert testimony, should expect increasing emphasis on:

    • Complete disclosure of underlying data

    • Careful documentation of analytical methods

    • Transparent quality control processes

    • Independent review by defense experts

    • Ongoing education for attorneys and judges

    These themes mirror many of the IAFTC's priorities and show the growing influence of forensic scholarship on judicial reasoning.

    Closing Thoughts

    It is encouraging to see appellate judges cite current forensic science research in their opinions. The inclusion of toxicology-focused scholarship reinforces our field’s relevance and highlights the need for continued improvement in laboratory transparency and discovery practices.

    As forensic consultants, we play a critical role in advancing these discussions. If IAFTC members have encountered similar issues in their jurisdictions or have examples of discovery-related challenges in forensic cases, we encourage you to share them for future newsletter features.

    AI Use Disclosure

    The authors used ChatGPT to assist with the organization and formatting of this article. All content was verified and substantially written by the authors, who take full responsibility for accuracy.

    Declaration of competing interest

    The authors declare the following financial interests/personal relationships, which may be considered as potential competing interests: Aaron Olson serves as an expert witness in forensic toxicology cases, provides consulting services through ARO Consulting LLC, and receives compensation for expert testimony and speaking engagements. Charles Ramsay is a practicing defense attorney specializing in impaired driving cases at Ramsay Law Firm, PLLC.

    References

    [1]State of Iowa vs. Rickie Blaine Withers Sr. 2025.

    [2]Olson A, Ramsay C. Errors in toxicology testing and the need for full discovery. Forensic Sci Int Synerg 2025;11:100629. https://doi.org/10.1016/j.fsisyn.2025.100629.

    [3]Garrett BL, Neufeld PJ. Invalid Forensic Science Testimony and Wrongful Convictions. Virginia Law Review 2009;95:1.

    [4]Mnookin JL. The courts, the NAS, and the future of forensic science. Brooklyn Law Review 2010;75:10.

    [5]The President’s Council of Advisors on Science and Technology. Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature Comparison Methods. PCAST Working Group; 2016.

    [6]National Research Council, Division on Engineering and Physical Sciences, Committee on Applied and Theoretical Statistics, Policy and Global Affairs, Committee on Science, Technology, and Law, Committee on Identifying the Needs of the Forensic Sciences Community. Strengthening forensic science in the United States: A path forward. Washington, D.C., DC: National Academies Press; 2009.

    [7]Jones AW. Measuring Alcohol in Blood and Breath for Forensic Purposes - A Historical Review. Forensic Sci Rev 1996;8:13–44.

    [8]Birchfield v. North Dakota. 2016.
  • 11 Nov 2025 10:05 AM | Joshua Ott

    IAFTC Newsletter. Volume 1. Issue 1. November 11, 2025.

    Joshua Ott1

    1Caselock, Inc., P.O. Box 285, Lebanon, GA 30146

    This is an open-access article under the CC BY-NC-ND license.

    Download PDF.

    Abstract

    The Standardized Field Sobriety Tests (SFSTs) have become the primary screening tool for impaired driving enforcement since their development in the 1970s. While widely accepted in courtrooms across the United States, the actual validation data reveal significant limitations that are often overlooked or unknown by practitioners and legal professionals. This article provides an examination of the SFST validation studies, with particular emphasis on false positive rates that raise important questions about the tests’ reliability. Analysis of the San Diego study reveals false positive rates of 37% for Horizontal Gaze Nystagmus (HGN), 52% for Walk and Turn, and 41% for One Leg Stand when administered to drivers with BAC below 0.08 g/dL. The 2007 Robustness of HGN study demonstrated even higher false positive rates when HGN was administered correctly in laboratory conditions (67%), with rates exceeding 90% when stimulus positioning deviated from standardized protocols. Recent research published in JAMA (2023) examining the field sobriety tests’ ability to identify drivers under the influence of cannabis showed false positive rates of 56% and 37% for Walk and Turn and One Leg Stand, respectively, when administered to placebo-dosed individuals. This article examines the three-phase driving under the influence (DUI) detection process, reviews the historical development and validation of SFSTs, analyzes false positive rates across multiple studies, and provides detailed guidance on proper test administration, interpretation, and common officer errors. Understanding these limitations is essential for forensic toxicologists, expert witnesses, and legal professionals who must accurately interpret SFST results in impaired driving cases.

    Introduction

    Since their introduction in the late 1970s, the Standardized Field Sobriety Tests (SFSTs) have become the cornerstone of impaired driving enforcement throughout the United States [1–3]. Law enforcement officers routinely administer these tests during DUI investigations, and their results often form the basis for arrest decisions and serve as critical evidence in criminal prosecutions. The tests are presented in courtrooms as scientifically validated tools with impressive accuracy rates: 88% for Horizontal Gaze Nystagmus (HGN), 79% for Walk and Turn, and 83% for One Leg Stand, according to the widely cited San Diego study.

    However, a closer examination of the research reveals a more complex picture. The same study that produced these frequently cited accuracy rates also had substantial false positive rates that are rarely discussed in training materials or courtroom testimony. These false positive rates (the percentage of times the tests incorrectly indicate that a person is at or above the legal limit of 0.08 g/dL when the person is actually below it) have profound implications for how we interpret SFST results.

    For members of the International Association of Forensic Toxicology Consultants (IAFTC), understanding the actual capabilities and limitations of the SFSTs is essential. The disconnect between how the SFSTs are portrayed in law enforcement training versus what the study data demonstrates creates significant challenges for scientific testimony and case interpretation.

    This article serves multiple purposes for IAFTC members and the broader forensic community. First, it provides a review of the SFST development and validation, tracing these tests from their origins through current research. Second, it examines the false positive rates documented across multiple studies, including recent research that appears to have gone largely unnoticed in the law enforcement community. Third, it offers a detailed analysis of proper test administration procedures and common errors that may further compromise test validity. Finally, it addresses a critical limitation that is often misunderstood: according to the authors of the San Diego Study, these tests have only been validated to predict if a person's BAC is at or above a specific threshold—they have not been validated as indicators of driving impairment or alcohol/drug impairment.

    As forensic professionals, there is a responsibility to ensure that scientific evidence is accurately represented and properly interpreted. The SFSTs remain a valuable screening tool for law enforcement, serving their intended purpose of helping officers make Probable Cause determinations during roadside investigations. However, when these tests are presented in court as proof of impairment, or when their limitations are not fully disclosed, we risk compromising the integrity of the forensic sciences and potentially contributing to wrongful convictions.

    This article draws from National Highway Traffic Safety Administration (NHTSA) training manuals [4], original validation studies, the SFSTs Field Validation Studies (1995-1998) [5–7], and recent peer-reviewed research to provide IAFTC members with a more complete understanding of what the SFSTs can and cannot tell us about driver impairment. Whether you serve as an expert witness, conduct toxicological analysis, or work in research and policy development, this information is essential for ensuring that field sobriety test evidence is properly evaluated.

    I. The Three Phases of DUI Detection

    The first phase is “Vehicle in Motion.” Law enforcement officers are trained to look for 24 cues indicating that a driver is possibly impaired. These include failing to maintain lane, driving without headlights, making wide turns, etc. When an officer decides to stop a vehicle, they are then trained to observe how the vehicle stops. The stopping sequence may provide the officer with additional evidence that the driver is possibly impaired. There are times when the officer may not observe anything during Vehicle in Motion that makes them suspect the driver is impaired (equipment violations, speeding, roadblock, etc.). During the next phase, the officer may see signs of possible impairment that lead to a DUI arrest.


    The second phase is “Personal Contact.” This is probably the most important phase for two reasons. First, it is the only phase that will occur during every DUI investigation. Second, it is often the phase a jury gives the most weight, because jurors are judging whether the driver acts and looks the way they expect an intoxicated person to. In this phase, officers are trained to use their senses to identify indicators of possible impairment (bloodshot eyes, soiled clothing, fumbling fingers, open containers, slurred speech, admission of drinking, inconsistent responses, odor of an alcoholic beverage, cover-up odors, etc.). Officers are then trained to observe the driver’s exit from the vehicle. Does the driver leave the car in gear, use the car for balance, walk with a staggered gait, etc.? It is important to remember that by the end of this phase, an officer likely has probable cause to arrest the driver for DUI.


    The last phase is the “Pre-Arrest Screening.” This includes the Standardized Field Sobriety Tests (SFSTs) and the Preliminary Breath Test (PBT). Officers are trained to administer Horizontal Gaze Nystagmus (HGN), Walk and Turn, and One Leg Stand. After administering the SFSTs, officers can ask the driver to submit to a PBT. At the end of this phase, an officer decides whether they will arrest the driver for DUI based on the standard of Probable Cause. Officers are trained to base this decision on the totality of the circumstances, but in many cases, the arrest decision comes down to the results of the SFSTs.

    II. Development of the SFSTs

    Starting in 1975, the Southern California Research Institute (SCRI), with funding from the National Highway Traffic Safety Administration (NHTSA), began research studies to determine which roadside tests were the most accurate. Prior to this, officers were using tests, instructions, and clues that were not standardized between officers. This led to problems in court determining how much weight the tests should be given. The goal was to standardize the tests and observations and determine which tests were the most accurate at distinguishing Blood Alcohol Concentrations (BACs) at or above the legal limit. 


    SCRI started with six field sobriety tests commonly used throughout the United States: One Leg Stand, Finger to Nose, Finger Count, Walk and Turn, Alcohol Gaze Nystagmus (now HGN), and tracing (a paper and pencil exercise). The three most accurate tests are the ones we now know as the SFSTs. The Finger to Nose test is used as part of the Drug Recognition Expert (DRE) program, and the Finger Count is taught as a tool that can be used during Personal Contact [4].


    The research included three Standardized Elements for the tests. The first is Standardized Administrative Procedures, which means there is a required manner in which the passes for HGN must be conducted, required instructions for each of the tests, and required demonstrations that must be given for the Walk and Turn and One Leg Stand. The second is Standardized Clues, which means officers look for specific clues during each of the tests. The last is Standardized Criteria, which means officers must observe a specific thing to count a clue. For example, to count a missed heel-to-toe on the Walk and Turn, the person must miss heel to toe by one-half inch or more. NHTSA emphasizes that the validation only applies when the Standardized Elements are followed.


    The Original Research determined how accurate each of the tests was at predicting if a person’s BAC was at or above 0.10 g/dL. When four or more clues were observed, HGN was 77% accurate. When two or more clues were observed on each test, the Walk and Turn was 68% accurate, and the One Leg Stand was 65% accurate. 


    Three field validation studies were conducted between 1995 and 1998: Colorado (1995), Florida (1997), and San Diego (1998). The primary study addressed here is the San Diego study, because it is the study officers currently use to testify to how accurate the SFSTs are.


    The San Diego study involved 297 drivers, and the mean BAC of those drivers was 0.122 g/dL [5]. Additionally, the mean BAC of the drivers who were arrested was 0.150 g/dL, and the mean BAC of those not arrested was below 0.050 g/dL. Remember that the target BAC is 0.08 g/dL; the further a driver’s BAC is from that target, the easier it is for an officer to make the correct decision. For example, a person at two times the legal limit would be expected to show more obvious signs of intoxication than someone right at the legal limit, making it easier for the officer to know that an arrest is the correct decision. The officers in this study also had access to Preliminary Breath Tests (PBTs).
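    This effect of the tested pool's composition on overall accuracy can be made concrete with a short sketch. All numbers below are hypothetical and chosen only for illustration; the names `overall_accuracy`, `sens`, and `spec` are introduced here and do not come from the study:

    ```python
    # Hypothetical sketch: overall accuracy depends on who is tested, not only
    # on how the test performs for drivers near the 0.08 g/dL threshold.
    def overall_accuracy(p_above, sens, spec):
        """p_above: fraction of tested drivers at/above 0.08 g/dL;
        sens: P(arrest | at/above limit); spec: P(release | below limit)."""
        return p_above * sens + (1 - p_above) * spec

    # The same hypothetical test (90% sensitivity, 48% specificity, i.e. a
    # 52% false positive rate) looks better as the pool shifts toward
    # drivers who are well above the limit:
    for p_above in (0.5, 0.7, 0.9):
        acc = overall_accuracy(p_above, sens=0.90, spec=0.48)
        print(f"{p_above:.0%} of pool above limit -> overall accuracy {acc:.0%}")
    ```

    With these hypothetical numbers, overall accuracy climbs from 69% to 86% as the share of above-limit drivers in the pool grows, even though the false positive rate for below-limit drivers never changes.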


    How accurate are the SFSTs based on the San Diego study? When four or more clues were observed, HGN was 88% accurate. When two or more clues were observed on each test, the Walk and Turn was 79% accurate, and the One Leg Stand was 83% accurate. The overall accuracy when the officers made their arrest decision was 91%. 


    To understand exactly what this means, you need to understand what constitutes a “correct” decision and an “incorrect” decision. A “correct” decision was when a person was at or above the BAC level (0.08 g/dL) and the officer arrested them, or if the person was below the BAC level (0.08 g/dL) and the officer released them. An “incorrect” decision was when a person was at or above the BAC level (0.08 g/dL) and the officer released the person (false negative), or the person was below the BAC level (0.08 g/dL) and was arrested (false positive).
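    The four outcomes above form a standard confusion matrix, and the accuracy and false positive figures are simple ratios over it. As a minimal sketch (the counts below are hypothetical, not the San Diego data):

    ```python
    # Hypothetical decision counts against the 0.08 g/dL threshold,
    # chosen only to illustrate how the two statistics are computed.
    tp = 170  # at/above 0.08 g/dL and arrested  (correct)
    fn = 10   # at/above 0.08 g/dL but released  (incorrect: false negative)
    fp = 25   # below 0.08 g/dL but arrested     (incorrect: false positive)
    tn = 45   # below 0.08 g/dL and released     (correct)

    # Share of all decisions that were correct.
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    # Share of below-limit drivers who were nonetheless arrested.
    false_positive_rate = fp / (fp + tn)

    print(f"accuracy: {accuracy:.0%}")                        # prints "accuracy: 86%"
    print(f"false positive rate: {false_positive_rate:.0%}")  # prints "false positive rate: 36%"
    ```

    Note that accuracy pools both groups together, while the false positive rate conditions only on below-limit drivers; a test can post a high overall accuracy while still misclassifying a large share of the below-limit group.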


    Remember, according to the authors of the San Diego Study, these tests have only been validated to predict if a person is at or above a specific BAC. They have not been validated as indicators of driving impairment or alcohol/drug impairment.

    III. False Positives

    What exactly is a false positive? It is a test result that incorrectly indicates a condition exists when it in fact does not. An easy way to think of it: you go to your doctor, the doctor runs some tests, and the results indicate that you have a disease when you do not. Those results are false positives.


    What were the false positive rates of the SFSTs from the San Diego study? HGN was 37%, Walk and Turn was 52%, One Leg Stand was 41%, and when officers made their arrest decision, it was 28%. So, the Walk and Turn and One Leg Stand are about as accurate as a coin flip if the person is below 0.08 g/dL.

    IV. Robustness of the Horizontal Gaze Nystagmus Test

    This study, funded by NHTSA and authored by Dr. Marceline Burns, was published in 2007 [8]. It was conducted in response to defense attorney arguments that administering HGN incorrectly would affect the validity of the test. The study took place in a laboratory setting using volunteer drinkers and experienced officers.


    Three elements were tested:

    1. Stimulus Speed for Lack of Smooth Pursuit 

      1. Fast (1 second)

      2. Standard (2 seconds)

    2. Stimulus Height 

      1. High (4 inches above eye level)

      2. Standard (2 inches above eye level)

      3. Low (0 inches - at eye level) 

    3. Stimulus Distance

      1. Close (10 inches from the face)

      2. Standard (12-15 inches from the face)

      3. Far (20 inches from the face)


    Looking at the results from the times in which HGN was administered correctly, the false positive rate was 67%. Additionally, 65% of the people below a BAC of 0.05 g/dL had four clues or more. There was a person with six clues at a BAC of 0.029 g/dL.


    What about the times when the test was not administered correctly?


    • Stimulus higher than the standard – 91% False Positive Rate

    • Stimulus lower than the standard – 79% False Positive Rate

    • Stimulus closer than the standard – 92% False Positive Rate

    • Stimulus farther than the standard – 84% False Positive Rate


    These numbers show that it is imperative that officers position the stimulus correctly, or the false positive rates increase to even higher levels.


    How did Dr. Burns address the extremely high false positive rates? She changed the standards to lower the number of reported false positives! In the current training material (2025 Edition SFST Manual [9]) and in the San Diego Study, four or more clues correlated to a BAC of 0.08 g/dL or more. In this study, four clues correlated to a BAC of 0.03 g/dL or more. This drastically lowered the published false positives. No justification was given for this changed standard.


    Read the rest of the article in PDF.



  • 3 Oct 2025 3:42 PM | Aaron Olson (Administrator)

    IAFTC Newsletter. Volume 1. Issue 1. October 3, 2025.

    Aaron Olson1  

    1ARO Consulting LLC, PO Box 132, Hugo MN, 55038

    This is an open-access article under the CC BY-NC-ND license.

    Download PDF

    Introduction

    In June 2025, defense attorney Charles Ramsay and I published "Errors in toxicology testing and the need for full discovery" in Forensic Science International: Synergy [1]. Our review documented notable toxicology errors across multiple jurisdictions collected over a combined 48 years of field experience. 

    This news article provides IAFTC members with brief updates on toxicology errors in the news since that publication.

    Minnesota Breath Alcohol Testing: Control Target Error

    In September 2025, Minnesota defense attorney Charles Ramsay and I discovered that a DataMaster DMT breath alcohol analyzer had been operating with an unknown control target for nearly one year, from May 25, 2024, to May 4, 2025. The error occurred when an operator entered incorrect dry gas cylinder information during a Control Change test, resulting in 73 potentially invalid test results across multiple law enforcement agencies [2].

    When the Minnesota Bureau of Criminal Apprehension (BCA) was confronted with this information, they acknowledged that their scientists cannot testify to the accuracy of these tests, stating that "BCA forensic scientists can only testify to the accuracy of test results with a known valid control target."

    The BCA's internal quality controls missed this error for nearly an entire year. It took an independent review by defense counsel and outside experts to discover what should have been caught by basic quality assurance protocols.

    Internal BCA emails reveal how the laboratory framed who was responsible for the error. In the nonconformity report, the BCA stated: "This is not the result of any work performed by the BCA Calibration Laboratory; it is the result of the agency entering incorrect information during the Control Change." 

    Yet laboratory-level verification of Control Change data, a quality control step, should have been in place from the beginning.

    Early notification drafts credited the defense attorney with discovering the error, but the final version removed this attribution. 

    The first draft stated: "In a recent case, a defense attorney noticed that the dry gas cylinder referenced on a test record did not match the dry gas cylinder reported by the BCA to be installed in the instrument." 

    The final notification sent to agencies simply stated: "It was discovered that the information associated with the installed dry gas cylinder for Instrument 100821...was entered incorrectly by an operator," removing any reference to how the error was actually discovered.

    This pattern reinforces findings from our paper: laboratories often shirk taking responsibility for their errors and fail to recognize the need for independent outside auditors.

    University of Illinois Chicago: THC Isomer Misidentification and Testimony Issues

    In one of the most troubling cases of systematic evidence suppression, the University of Illinois Chicago Analytical Forensic Testing Laboratory (AFTL) knowingly used flawed testing methods for marijuana-impaired driving cases from 2021 through 2024 [3].

    The laboratory's method could not distinguish between delta-9-tetrahydrocannabinol (Δ9-THC), the primary psychoactive compound in cannabis, and delta-8-tetrahydrocannabinol (Δ8-THC). This was important because the state's DUI law ties legal limits exclusively to Δ9-THC. Laboratory personnel became aware of these method deficiencies as early as 2021 but failed to disclose them until 2023, allowing hundreds of potentially wrongful convictions to proceed [4].


    Figure 1. Δ8-THC, Δ9-THC, Δ10-THC. (Image credit: Mantinieks D, 2024; [5])

    Injustice Watch revealed the harm caused to individuals by flawed testimony and testing. The report detailed how a lab analyst testified that THC metabolites in urine could be used to determine impairment, a claim that contradicts established toxicological science. The defense eventually called in renowned toxicologist Marilyn Huestis to testify against this type of testimony.

    Approximately 1,600 marijuana-impaired driving cases were compromised. A 2025 prosecutorial review in DuPage County resulted in the dismissal of charges in 19 cases due to compromised evidentiary reliability [6].

    University of Kentucky: Equine Testing Fraud

    The September 2025 termination of University of Kentucky equine testing lab director Dr. Scott Stanley demonstrated how weak oversight enables systematic misconduct [7]. 

    In November 2023, the Horseracing Integrity and Welfare Unit requested confirmatory analysis for a banned substance. Over the course of two months, Stanley repeatedly reported that the sample had been analyzed with negative results. 

    On February 23, 2024, when HIWU inquired about remaining sample volume, lab staff revealed the sample "had never been analyzed and, in fact, had never even been opened." The university's audit found that Stanley falsified results and failed to perform confirmatory analysis on 91 samples that screened positive.

    The case revealed laboratory vulnerabilities. Weak internal controls gave all staff unrestricted data access while giving Stanley sole authority over communicating results to oversight agencies. 

    Tennessee: Field Sobriety Test False Positives

    Recent events in Tennessee illustrate the broader problems with the reliability of field sobriety testing [8]. Sixteen sober drivers were arrested for DUI by Tennessee state troopers in 2025, with eight arrests made by a single officer.

    The most publicized case involved Jane Bondurant, a 71-year-old former U.S. Attorney, whose bloodwork came back clean except for prescribed medication taken the night before. Despite this, she was arrested, handcuffed, and jailed based on subjective field sobriety test performance.

    These cases highlight the high false-positive rate associated with field sobriety tests [9].

    Analysis: Recurring Patterns

    These 2025 errors demonstrate the same patterns documented in our comprehensive review:

    1. Extended Detection Times: Errors persist for months or years before discovery (Minnesota: 1 year; UIC: 3 years).

    2. External Discovery: Problems are identified by defense attorneys, whistleblowers, or independent experts rather than by internal quality controls.

    3. Institutional Resistance: Laboratories view transparency requests as hostile and develop cultures where concealment becomes normalized.

    4. Systematic Impact: Individual errors affect dozens or thousands of cases before detection.

    Implications for IAFTC Members

    These cases underscore critical considerations for forensic toxicology consultants. Discovery requests must explicitly include all digital data and quality assurance documentation, not just final reports.

    These errors highlight the ongoing need for laboratory culture reform, echoing the 2009 NAS report's recommendations [10]. IAFTC members should advocate for online discovery portals, mandatory retention of digital data, third-party audits beyond standard accreditation, and clear protocols for disclosure of discovery materials.

    Conclusion

    The toxicology errors documented in the months since our June 2025 publication continue to show the need for reform in forensic toxicology. These are not isolated incidents but manifestations of systemic vulnerabilities that persist across jurisdictions and disciplines.

    For IAFTC members serving as expert witnesses, laboratory directors, or policy advisors, these cases underscore the importance of transparency, independent oversight, and cultural change within forensic laboratories. Scientific integrity requires more than technical competence; it demands institutional structures that make concealment impossible and accountability mandatory.

    Conflicts of Interest

    The author serves as an expert witness in forensic toxicology cases and receives compensation for speaking engagements.

    AI Use Disclosure

    The author used Claude (Anthropic) to assist with the organization and formatting of this article. All content was verified and substantially written by the author, who takes full responsibility for accuracy.

    References

    [1] Olson A, Ramsay C. Errors in toxicology testing and the need for full discovery. Forensic Sci Int Synerg 2025;11:100629. https://doi.org/10.1016/j.fsisyn.2025.100629.

    [2] Knudsen C. Attorney discovers problem with alcohol detection device used in DWI cases in the heart of Minnesota’s cabin country. KSTP-TV LLC 2025. https://kstp.com/kstp-news/top-news/attorney-discovers-problem-with-alcohol-detection-device-used-in-dwi-cases-in-the-heart-of-minnesotas-cabin-country/   (accessed September 25, 2025).

    [3] Dukmasova M. How a rogue Chicago forensics lab got people convicted for driving high. Injustice Watch 2025. https://www.injusticewatch.org/project/forensic-failures/2025/uic-forensics-lab-cannabis-dui-scandal/  (accessed August 14, 2025).

    [4] Goudie C, Markoff B, Tressel C, Jones T. Chicago forensic testing lab accused of providing flawed results in marijuana DUI convictions. ABC7 Chicago 2024. https://abc7chicago.com/post/university-illinois-chicago-analytical-forensic-testing-laboratory-accused-providing-flawed-results-marijuana-dui-cases/15624653/  (accessed June 12, 2025).

    [5] Mantinieks D, Di Rago M, Drummer OH, Glowacki L, Schumann J, Gerostamoulos D. Quantitative analysis of tetrahydrocannabinol isomers and other toxicologically relevant drugs in blood. Drug Test Anal 2024;16:1102–12. https://doi.org/10.1002/dta.3632.

    [6] Rivera M, Tressel C, Markoff B, Jones T. DuPage County state’s attorney dismisses marijuana DUI charges after faulty blood tests. ABC7 Chicago 2025. https://abc7chicago.com/post/dupage-county-states-attorney-dismisses-marijuana-dui-charges-faulty-blood-tests-university-illinois-chicago-aftl/15851816/  (accessed June 12, 2025).

    [7] Kuzydym S. University of Kentucky terminates former equine testing lab director. Louisville Courier Journal 2025. https://www.courier-journal.com/story/news/2025/09/11/university-of-kentucky-equine-testing-lab-director-terminated/86097262007/  (accessed September 13, 2025).

    [8] Finley J. Former US attorney is 8th sober driver to be arrested for DUI by state trooper. WSMV 4 2025. https://www.wsmv.com/2025/08/28/former-us-attorney-is-8th-sober-driver-be-arrested-dui-by-state-trooper/  (accessed August 29, 2025).

    [9] Kane G, Kane E. The high reported accuracy of the standardized field sobriety test is a property of the statistic not of the test. Law Probab Risk 2021;20:1–13. https://doi.org/10.1093/lpr/mgab004.

    [10] National Research Council, Division on Engineering and Physical Sciences, Committee on Applied and Theoretical Statistics, Policy and Global Affairs, Committee on Science, Technology, and Law, Committee on Identifying the Needs of the Forensic Sciences Community. Strengthening forensic science in the United States: A path forward. Washington, D.C., DC: National Academies Press; 2009. https://www.ojp.gov/pdffiles1/nij/grants/228091.pdf



  • 2 Oct 2025 8:16 AM | Aaron Olson (Administrator)

    Presentation Summary

    On October 17, 2025, Dr. Stefan Rose will challenge retrograde extrapolation's scientific validity by focusing on an impossible requirement: knowing when and how much the pyloric valve opens to allow gastric contents to flow into the small intestine where the majority of alcohol absorption occurs—information unavailable without continuous physiological monitoring of multiple timed blood or breath samples starting at the beginning of the drinking episode.

    The presentation will cover the complex factors controlling pyloric valve opening and closing: neural regulation (vagus nerve, myenteric plexus, splanchnic nerve), hormonal influences (gastrin, insulin, secretin, somatostatin, motilin, and other peptides), and external variables that delay gastric emptying and invalidate any retrograde extrapolation, including food, medications, trauma, surgery, and disease states like diabetes.

    Dr. Rose argues that because pyloric valve status at any point in time of a drinking episode cannot be determined retroactively, retrograde extrapolation is fundamentally flawed and dubious, and addresses why analytical chemistry training has overshadowed comprehensive pharmacological education in the field.

    About the Presenter

    Dr. Stefan Rose is a physician with over 40 years of experience spanning Forensic Toxicology, Clinical Pathology, and General Psychiatry. He completed formal Forensic Toxicology training at the Dade County Medical Examiner Department (1989-1991), founded the first DUI laboratory at the University of Miami (1992), and completed psychiatric residency (1995-1998), where he correlated behavioral effects of drugs and ethanol with laboratory findings. He has served as a courtesy professor in Chemistry at FIU since 1997, is Board Certified by the National Board of Medical Examiners, and has extensive expert testimony experience in state, federal, civil, and criminal courts.

    Join us

    This presentation is for members only. Consider joining us to attend future webinars.



  • 1 Oct 2025 12:50 PM | Aaron Olson (Administrator)

    The International Association of Forensic Toxicology Consultants (IAFTC) is excited to announce the launch of our professional newsletter and invites members to submit their work for publication.

    This is an exceptional opportunity to gain publication credit in a peer-reviewed professional organization newsletter and share your expertise with colleagues across the field.

    Publication Timeline

    Articles will be published on a rolling basis as soon as they complete peer review and are ready for press. Your work will appear on our website immediately upon acceptance rather than waiting for a traditional issue release date.

    Types of Submissions We're Seeking

    We welcome diverse contributions including:

    • Case Studies - Real-world applications, challenges, or outcomes from your practice
    • Emerging Trends - Analysis of new developments in forensic toxicology
    • Scientific Reports - Research findings, validation studies, or technical investigations
    • Survey Data and Findings - Presentation and analysis of survey results
    • Original Articles - In-depth exploration of relevant topics
    • Perspectives and Commentary - Thought leadership and opinion pieces on current issues
    • News Items - Timely updates and announcements of professional interest
    • Happenings from the Field - Professional events, achievements, or noteworthy developments

    Free Format Submission Guidelines

    What is Free Format?

    We accept submissions in free format, which means you have flexibility in how you structure and present your work. You're not required to follow a rigid template or specific journal style—write in the format that best suits your content and focus on clear communication.

    • No rigid templates required - Structure your article in the way that best serves your content
    • Author's choice of organization - Use the format appropriate for your submission type
    • Focus on content quality - Professional writing and clear communication matter more than strict formatting rules
    • Standard citation styles accepted - APA, AMA, MLA, or any consistent academic format

    Required Elements

    All submissions should include:

    • Title - Clear and descriptive
    • Author name(s) and credentials - Include your professional identification
    • Abstract or summary - For research articles, case studies, and scientific reports (150-250 words)
    • Introduction - Context for why your topic matters
    • Main content - Well-organized sections with appropriate headers
    • Conclusion - Key takeaways or implications for practice
    • References - Any standard academic citation style is acceptable (APA, AMA, MLA, etc.)
    • Conflicts of Interest Statement - Disclosure of any financial or personal interests (see below)
    • Acknowledgements - Recognition of contributors, funding sources, or support (if applicable)
    • AI Use Disclosure - Statement regarding use of artificial intelligence tools (see below)

    Transparency and Disclosure Requirements

    Conflicts of Interest Statement

    All authors must disclose any financial or personal interests that could be perceived as influencing their work. This promotes transparency and maintains the integrity of published research.

    What to disclose:

    • Employment relationships (including government laboratory employment)
    • Consulting arrangements or expert witness work
    • Research funding sources
    • Financial interests in companies or products discussed
    • Personal relationships that might present conflicts
    • Any other circumstances that could be perceived as influencing objectivity

    Example disclosure: "Dr. Smith serves as an expert witness in DUI cases and is employed by [Laboratory Name]. This work was partially funded by [Grant Source]."

    Acknowledgements

    Authors should acknowledge individuals or organizations that contributed to the work but do not meet authorship criteria.

    What to include:

    • Technical assistance or data collection support
    • Funding sources or grants
    • Institutional support
    • Colleagues who provided feedback or review
    • Any other substantive contributions

    Example: "The authors thank [Name] for technical assistance with laboratory analysis and [Institution] for providing access to case files. This work was supported by [Funding Source]."

    AI Use Disclosure

    As artificial intelligence tools become increasingly prevalent in scientific writing, IAFTC requires transparency regarding their use.

    Authors must disclose:

    • Use of AI tools (e.g., ChatGPT, Claude, Grammarly AI, etc.) for any aspect of manuscript preparation
    • Specific ways AI was used (literature search, writing assistance, data analysis, editing, etc.)
    • Which AI tools were used

    Important: Authors remain fully responsible for the accuracy and integrity of all content, regardless of AI assistance. AI-generated content must be verified for accuracy and properly cited where applicable.

    Example disclosure: "The authors used [AI Tool Name] to assist with grammar checking and initial literature organization. All content was verified and substantially revised by the authors. No AI-generated content appears without author review and validation."

    File Format

    Submit your work in any of the following formats:

    • Microsoft Word (.docx)
    • Google Doc (share link with edit/comment access)
    • Plain text (.txt)
    • PDF

    Use a standard font (Times New Roman, Arial, or Calibri) in 11- or 12-point size.

    Peer Review Process

    All submissions undergo peer review to ensure quality and accuracy while maintaining the high professional standards expected of IAFTC publications. Our editorial team will work collaboratively with you on any necessary revisions.

    General announcements will not be peer reviewed.

    Why Submit?

    • Publication credit in a professional organization newsletter
    • Be among the first authors featured in our inaugural volume
    • Share your expertise with colleagues across the field
    • Build your professional portfolio with peer-reviewed publication
    • Contribute to the community by advancing knowledge in forensic toxicology
    • Immediate visibility through rolling publication on our website

    Ready to Submit?

    Whether you're an established practitioner, researcher, or emerging professional, we encourage you to share your knowledge and experience. Your contributions will help establish this important platform for professional development and knowledge sharing within our field.

    Submit your article or questions to:
    editor@iaftc.org

    Questions about topics or submission process?
    Contact editor@iaftc.org - we're happy to discuss ideas before you submit!

    Tips for Success

    • Choose a topic you're knowledgeable about and passionate about
    • Write clearly for a professional audience
    • Support your points with data, examples, or case details
    • Organize logically with headers and clear sections
    • Cite your sources using any consistent academic format
    • Don't worry about perfection - our editorial team is here to help

    Examples of Potential Topics

    Not sure what to write about? Here are some ideas to spark your thinking:

    • Challenges you've encountered in case work and how you addressed them
    • New technologies or methods you've implemented in your laboratory
    • Analysis of trends you're seeing in drug testing or toxicology results
    • Quality assurance issues and solutions
    • Interpretation challenges in complex cases
    • Updates on regulatory changes or standards
    • Conference highlights or continuing education insights
    • Validation studies or method comparisons
    • Expert witness experiences and lessons learned

    Join Us in Creating Something Special

    The IAFTC Newsletter represents a new platform for our professional community to share knowledge, discuss challenges, and advance the field of forensic toxicology and related disciplines. Your contribution will help establish the tone and quality of this important resource.

    We look forward to featuring your work!


