The MCC’s protocol operational complexity tool was used to score the complexity of several ongoing Jazz clinical trial protocols. The findings were used to identify the key drivers of protocol complexity and to suggest modifications that could reduce it. In addition, the MCC tool itself was analyzed in order to propose modifications that would make it more relevant to the therapeutic areas in which Jazz is active and the nature of the trials it conducts. This adaptation is intended to demonstrate quantitatively that steps taken to reduce protocol complexity do indeed result in lower complexity scores when the tool is applied.
When it comes to implementing corrective and preventive action (CAPA) plans in clinical trials, we consistently hear similar frustrations from members of the research community. They identify and implement corrective actions, but identifying and implementing preventive actions, which aim to minimize the likelihood that the issue will recur in other studies, still somehow gets lost in the shuffle. A chief reason is that, in root cause analyses (RCAs), the true root causes often aren’t identified beyond the individual study being reviewed. The focus is often solely on the tactical issue or issues at the study level, rather than also at the system level or across studies. Consequently, issues are resolved only within the individual study where they were identified and are not further addressed with preventive measures for other ongoing and future clinical trials. Now that organizations have implemented risk-based quality management programs to comply with ICH E6(R2), they have an opportunity to improve the effectiveness of CAPA preventive actions by applying learnings from current and emerging issues to risk control and reduction at the system level.
Rapid expansion at a biotech company has led to a change in strategy: outsourcing work to strategic partner CROs. But how can metrics be used to assist with the oversight and development of these partnerships? GW Pharmaceuticals is working with an MCC Ambassador to select and define metrics and to lead discussions with the partner CROs on developing metrics that add value for both organizations. The engagement has also led to training opportunities on risk-based quality management, metrics definition and use, and root cause analysis. Join us to find out where we are on the journey and what the plans are for the future.
Many healthcare companies and CROs struggle to find the best approach to implementing RBQM, largely because there is little history to draw on: not much that other companies have already developed and shared. Cyntegrity built an entire course on RBQM using a 'belt' approach, and Merz Pharmaceuticals was the first company to adopt it, using it for a fully-fledged rollout across its R&D organization.
The training was split into four parts (white, green, black, and executive belt), each with a different focus for different staff levels. Merz staff took the white belt course to get comfortable with the basic idea of RBQM, while the green belt course was tailored to a specific Merz study in order to link an actual study with the RBQM process.
The black belt course was taken by staff who will be in charge of RBQM management going forward, and it mainly covered the change management aspects of RBQM implementation. Finally, senior management had to be familiarized with the concepts and the return on investment of an RBQM implementation, which was covered by the 'executive belt' course.
This session will address the lessons learned in developing such a course and the experiences of the participants. We will also discuss a real study as an example of RBQM and how Merz will implement RBQM in future studies.
Metrics measure performance and progress, but they are also much more than that. A sufficient set of metrics should be able to tell a story, track progress, and give direction. Despite being in the pharma space for a while, RBM remains a rapidly evolving area, and as it evolves, so do the metrics. Many established functions, such as user experience, risk management, and project management, started in the same manner and progressed by developing their methodologies. Is it time to benchmark RBM? This presentation is designed to open a discussion on the best approach to RBM metrics.
This session will focus on how one company developed and implemented a Remote Monitoring Quality Assessment tool and process after COVID-19. The discussion will focus mostly on process development and will help attendees understand why the process was necessary and the determinants and components used in developing the tool. The discussion will include factors to consider when determining whether there is a difference in data quality between remote monitoring and on-site monitoring.
Have you heard "We are not doing RBM for this study . . ."? What? Our quality system should support a risk-based approach to many clinical trial activities, such as site monitoring, data quality oversight, vendor oversight, safety surveillance, audit planning, and more. Sure, you train staff on what risk management is and on your process for risk assessment, but team members who work directly with sites or project teams may not understand that we don't have RBM trials and non-RBM trials. This misconception holds us back from critical thinking during project management, vendor oversight, and site monitoring. Often, using "RBM" as a noun during vendor selection at study start-up creates a false impression that we are not applying risk-based thinking to all trials. There is a large gap in the "elevator speech" from senior management, vendor selection defense teams, and audit representatives describing how risk management is ensured at the project level. Come to this session to:
With more trials outsourced, in part or in whole, many sponsors do not have timely access to their operational data. They work with multiple CROs, each with their own dashboards, portals, etc. By getting CROs to use sponsor systems, or by having regular data feeds into sponsor systems (e.g. Clinical Trial Management Systems, Study Start-Up), the reporting capabilities of the sponsor systems can be used. With the increase in outsourcing, is there a point at which using the sponsor’s clinical trial systems in this way becomes inefficient? Would it be more effective to manage decisions using a data aggregation platform? During this session, the presenter will discuss the challenges faced by a mid-sized biotechnology organization and describe the approach used to integrate clinical operational data from CROs and other vendors into a sponsor’s system that allows users to view both planned and actual study start-up data in real time. This has transformed the approach to start-up and enabled real time decision making to keep clinical trials on track.
Assertiveness: Learn to be assertive within your team discussions. Work beyond challenging personalities, limited resources, and unclear expectations by conveying your needs and interests confidently within the same arena as your teammates. Analyze your own style of communicating. Recognize your personal triggers and obtain techniques for overcoming them when the pressure is on. Learn to provide and accept feedback willingly while keeping the relationship in mind.
Influence: Discuss the power within you to influence the success of your study team meetings. There are many ways in which you can have an impact on the team. Knowing your role and how to leverage your influence will help you to build trust with your teammates and rely on each other for accountabilities. Effective RBQM requires the effort of each member of the team. Identify how you can make your best contribution.
The success of Sponsor-Vendor relationships is often measured by milestones achieved, budgets met, and data collected. But the success of clinical development is based on the quality and reliability of the information submitted. Tracking and measuring quality is key to a successful submission and partnership, and this session will review some fundamental indicators to ensure quality is always in the picture.
Feedback from sponsor/vendor surveys is an important part of evaluating a vendor relationship. But what happens when the results look OK but those in your organization know there are real problems? Some people refer to this phenomenon as “the watermelon” – it’s green on the outside but red on the inside. What questions should be asked in surveys? How should results be quantified? How should the data be interpreted so the right actions can be taken to improve the relationship? How can positive results be identified, acknowledged and celebrated? This presentation summarizes one of the key outputs of the MCC Vendor Oversight Work Group – how to design, conduct, interpret, and use the data from relationship surveys.
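One way the quantification question above can be approached is sketched below. This is an illustrative assumption, not the MCC Vendor Oversight Work Group's actual method: Likert responses are averaged per question and overall, and a "watermelon" flag is raised when the survey looks green while internally logged issues look red. All names and thresholds are hypothetical.

```python
# Illustrative sketch (not the Work Group's actual method) of quantifying
# relationship-survey results and flagging a "watermelon": healthy survey
# scores alongside a troubling internal issue log.

# Hypothetical 1-5 Likert responses per survey question.
responses = {
    "communication": [4, 5, 4, 4],
    "issue_escalation": [4, 4, 5, 3],
    "deliverable_quality": [5, 4, 4, 4],
}

def mean(xs):
    return sum(xs) / len(xs)

# Overall 1-5 score: average of per-question averages.
survey_score = mean([mean(r) for r in responses.values()])

open_critical_issues = 6  # count from an internal issue log (hypothetical)

# Flag for review when the outside looks green but the inside is red.
is_watermelon = survey_score >= 4.0 and open_critical_issues >= 5
print(round(survey_score, 2), is_watermelon)
```

A flag like this does not replace interpretation; it simply surfaces relationships where the survey results and internal experience diverge enough to warrant a closer look.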
During this session, presenters will discuss the approach, criteria, and performance metrics used by the Bill and Melinda Gates Medical Research Institute to select sites for onsite audits of a Phase 2 study. The approach allows users to weight the importance of selection criteria based on study-specific needs, align performance measures to review for each criterion, and identify high-scoring sites to consider for audits. Additionally, this session will review the importance of using leading-indicator versions of metrics when evaluating and comparing site performance during study conduct.
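A weighted-criteria approach of the kind described above can be sketched as follows. The criterion names, weights, and site data are illustrative assumptions, not the Institute's actual model; the point is simply that study-specific weights turn normalized performance measures into a single audit-priority score per site.

```python
# Illustrative sketch of weighted site scoring for audit selection.
# Criterion names, weights, and site data are hypothetical examples.

CRITERIA_WEIGHTS = {            # study-specific importance weights (sum to 1.0)
    "protocol_deviations": 0.4,
    "query_aging": 0.3,
    "enrollment_vs_target": 0.3,
}

def site_score(metrics: dict) -> float:
    """Weighted sum of normalized (0-1) measures; higher = higher audit priority."""
    return sum(CRITERIA_WEIGHTS[c] * metrics[c] for c in CRITERIA_WEIGHTS)

sites = {
    "Site 101": {"protocol_deviations": 0.9, "query_aging": 0.2, "enrollment_vs_target": 0.5},
    "Site 102": {"protocol_deviations": 0.1, "query_aging": 0.3, "enrollment_vs_target": 0.2},
    "Site 103": {"protocol_deviations": 0.6, "query_aging": 0.8, "enrollment_vs_target": 0.7},
}

# Rank sites; the highest-scoring ones become candidates for onsite audits.
ranked = sorted(sites, key=lambda s: site_score(sites[s]), reverse=True)
print(ranked)  # → ['Site 103', 'Site 101', 'Site 102']
```

Adjusting the weights per study is what makes the selection study-specific: a trial where enrollment is the critical risk would shift weight toward that criterion and produce a different ranking from the same measures.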
Failed implementations of large-scale IT systems (e.g., EDC, CTMS, QMS, eTMF) waste hundreds of thousands of dollars and hundreds of hours of precious time. Selecting the right large-scale IT system is the most important step in ensuring a successful implementation. Yet many organizations take an informal, unstructured approach to IT system selection and open themselves up to the risk of failure.
Gary Tyson from Pharma Initiatives Consulting will share the best practices his team has developed by successfully leading over 40 large-scale IT system selections and implementations over the past 25 years. This program, which includes a presentation and a workshop, will focus on the practical steps that organizations can take to increase the likelihood of IT system selection success. The workshop will be a deep dive into building a Value Case for your next IT system, a step that will serve as the foundation for your selection process.
ICH requires sponsors to use risk-based monitoring approaches to ensure the reliability of clinical trial results, but it does not define reliability or related concepts. Accuracy and reliability do, however, have consensus definitions that can be applied to clinical trials. This presentation will describe how sponsors can implement definitions of accuracy and reliability in their RBM methods, which will fulfill regulatory requirements and improve the success rate of clinical trials.
How do you define a meaningful relationship, and how do you measure it? When is the right time to establish a governance plan? Is it always defined by the size or length of the engagement? What risk factors do you need to consider? What are common partnership challenges, and how can they be overcome? Do you have tools to build a culture of collaboration that supports a productive relationship?
An introduction to implementing machine learning-based milestone prediction and how it can evolve. Clinical operations staff need to have confidence in machine learning predictive models and be able to validate the accuracy of their outcomes. By knowing which indicators have the most impact on these models, organizations can focus on those indicators to refine their models and learn from the insights, which can ultimately drive behavioral changes (i.e., less reliance on subjective decisions) to optimize business processes.
Machine learning allows organizations to continuously improve with direct implications on timelines and associated costs of clinical trials.
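The idea of identifying which indicators most influence a milestone-prediction model can be sketched as below. This is a minimal illustration on synthetic data, not the presenter's actual model; the indicator names, the random-forest choice, and the data-generating assumptions are all hypothetical.

```python
# Hypothetical sketch: predict a milestone delay from operational indicators
# with a random forest, then inspect feature importances to see which
# indicators drive the prediction. Names and synthetic data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
indicators = ["site_activation_rate", "screen_failure_rate", "open_query_rate"]

# Synthetic historical trials: delay (days) driven mainly by screen failures.
X = rng.uniform(0, 1, size=(300, 3))
y = 30 + 120 * X[:, 1] + 10 * X[:, 0] + rng.normal(0, 2, size=300)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Importances show which indicators most influence the predicted milestone,
# telling teams where to focus data quality and process improvement.
importance = dict(zip(indicators, model.feature_importances_))
top = max(importance, key=importance.get)
print(top)  # in this synthetic setup, screen_failure_rate dominates
```

Validating such a model against realized milestones, and watching whether the top indicators stay stable across studies, is one way to build the staff confidence the abstract calls for.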
Tuesday, Sept. 8, 2020
Wednesday, Sept. 9, 2020
Thursday, Sept. 10, 2020
Leading the drug-development enterprise in the adoption and utilization of standardized metrics and benchmarks to drive performance improvement. Founded in 2006, MCC is the leading industry association dedicated to the development of standardized performance metrics to improve clinical trials. MCC provides the collaborative environment for biopharmaceutical and device sponsors, service providers and sites to improve clinical-trial development through use of MCC standardized performance metrics.