The pursuit of, or expectation of, perfection in a clinical researcher’s job performance is a flawed ideology, rife with disappointment for anyone shouldering this impossible burden. As my father, R Stuart Weeks, M.D., used to say, “The pursuit of perfection prevents progress.”
I refer back to that quote to keep my own perfectionist tendencies, and thus sometimes ridiculous expectations, at bay.
The pursuit of excellence in a clinical researcher’s job performance is attainable, inspires optimal performance and ensures quality. It demands learning lessons from error, which completes the circle of accountability and correction. That self-awareness creates an exceptional learning opportunity, and the preventive measures gleaned from it build self-confidence.
Clinical researchers are, by nature, type A perfectionists committed to the highest standards. They work tirelessly to perpetuate the drug development cycle. Their work is subject to microscopic scrutiny by regulatory authorities and the general public, which results in immense pressure. Though these dedicated individuals consistently meet each daunting deadline, an unfortunate byproduct of this exceptional achievement can be the harshest self-criticism when a mistake is made. It is not logical to punish yourself mercilessly for an error while educating others to overcome their own mistakes. That spectrum of compassion should extend to every clinical researcher, with the understanding that the ideal of perfection can prohibit personal development, and that it is nobler to make a mistake and rebound than it is to stagnate for lack of blemish.
A foolish oversight at the start of my oncology monitoring career led to an important training opportunity several years later. In early 2000, I was assigned to a phase III oncology study with 10 to 12 regional sites across the West Coast. The study was using Response Evaluation Criteria in Solid Tumors (RECIST) for tumor measurements and assessments. During monitoring visits, a major responsibility of mine was to review prior and current Computed Tomography (CT) scan reports for study patients and calculate the unidimensional tumor measurements using RECIST criteria to assess tumor status as complete response, partial response, stable disease or progressive disease. The accuracy of tumor status was critical for credible data reporting, and I had grown fairly adept with such calculations—or so I thought.
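The classification logic described above can be sketched in a few lines. The thresholds below come from the published RECIST 1.0 criteria (a 30% or greater decrease in the sum of longest diameters from baseline for partial response; a 20% or greater increase over the smallest sum on study for progression); the function name, parameters and structure are purely illustrative, not taken from any real system the article describes.

```python
# Simplified sketch of RECIST 1.0 overall response classification.
# Thresholds follow the published criteria; the function itself is
# a hypothetical illustration, not the author's actual worksheet.

def classify_response(baseline_sum_mm, current_sum_mm, nadir_sum_mm,
                      nontarget_progression=False, new_lesions=False):
    """Classify response from sums of longest target-lesion diameters (mm)."""
    if new_lesions or nontarget_progression:
        return "PD"   # any new lesion or unequivocal non-target progression
    if current_sum_mm == 0:
        return "CR"   # disappearance of all target lesions
    if current_sum_mm >= nadir_sum_mm * 1.20:
        return "PD"   # >= 20% increase over the smallest sum on study
    if current_sum_mm <= baseline_sum_mm * 0.70:
        return "PR"   # >= 30% decrease from the baseline sum
    return "SD"       # neither PR nor PD criteria met

# Target lesions may look stable, yet an overlooked non-target lesion
# still flips the overall status to progressive disease:
print(classify_response(50, 48, 48, nontarget_progression=True))  # PD
```

Note how the non-target flag overrides an otherwise stable measurement; that is precisely why the overlooked lesion in the incident recounted below changed the patient's status.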
I was monitoring at one of my stronger sites when the study coordinator approached me about a study patient’s tumor status. She referred me to the patient’s Case Report Form (CRF) page, which listed the tumor status as “stable disease.” However, retrospective review of the prior cycle CT scan reports showed a small non-target lesion that we had both overlooked. I immediately performed my own review of the reports and came to the same conclusion. Though it was a small lesion, the patient’s disease status had been listed incorrectly as “stable” when it had become “progressive disease.”
I was crestfallen as I called my study lead CRA and explained the situation. Thankfully, he was understanding, and guided me along the corrective and preventive action (CAPA) pathway. We corrected the previous tumor status to reflect the recalculated assessment via data correction forms, which were immediately sent to data management. The ensuing report documentation reflected the error and correction process. Though the study coordinator and I shared responsibility for the oversight, my perfectionist nature began the mental onslaught of criticism as if I were solely responsible. (How could you have missed that? What were you thinking?) I felt dejected until an encouraging email from my study lead later that evening lent clarity to the situation.
The study lead said, “The best thing you can do is reflect on the mistake, take the learning opportunity and move forward. I once made a similar mistake on a study. Remember that you are a good CRA.”
His words reminded me of my critical responsibility as a monitor. It was my obligation to review study source documents and medical records to the best of my ability to ensure subject safety. To do anything less would be a disservice to the sites with whom I partnered. Duly inspired, I spent the entire evening re-reviewing the protocol and committing each eligibility criterion, procedure and pivotal endpoint to memory. I researched RECIST criteria beyond the protocol/study appendix and transformed a sufficient understanding of the process into a strong one. And I never made another RECIST error on that, or any future, oncology study.
A year after that unfortunate oversight, the CRO for which I worked held its annual clinical meeting. It was a wonderful opportunity to meet colleagues face to face beyond the remote/teleconference/internet atmosphere. The clinical management staff had arranged several therapeutic-specific training sessions, and to my surprise I was asked to present on RECIST criteria. The error that had devastated me had also resulted in my developing therapeutic knowledge considered worthy of educating others. I was eternally grateful for that learning opportunity.
Elizabeth Blair Weeks-Rowe, LVN, CCRA, has spent nearly 14 years in a variety of clinical research roles including CRA, CRA trainer, CRA manager and clinical research writer. She works in relationship development/study startup. Email email@example.com or tweet @ebwcra.
This article was reprinted from CWWeekly, a leading clinical research industry newsletter providing expanded analysis on breaking news, study leads, trial results and more.