Patient-Centric Clinical Trials Europe 2015

Jun 8, 2015 - Jun 9, 2015, London

Put the patient at the heart of the clinical trial

Developing a Safety Culture

Where is the safety culture in clinical research and the lifecycle management of medicines? We talk to Dr Brian Edwards to learn more.

Dr Brian Edwards, Vice President, ACRES (Alliance for Clinical Research Excellence and Safety)



Edwards serves in the executive office of the Alliance for Clinical Research Excellence and Safety (ACRES), a non-profit organization that seeks to build a global system for clinical research, serving the public’s interest in the safety of medicines developed and marketed by the pharmaceutical sector and enhancing public confidence. An advocate for developing a safety culture in the clinical development of medicines, he has also, together with Chris Seal, the Chair of the UK Air Safety Group and a former pharmacist, set up the Pharmaceutical Human Factors group (PharmaHuF), inspired by the Clinical Human Factors Group (CHFG).

Tarnished trust in the system

Following a series of drug safety crises, such as Merck’s withdrawal of Vioxx in 2004, fears grew that regulation by the major agencies, such as the US Food and Drug Administration (FDA), was ineffectual, and trust in them declined. The perception of pharmaceutical companies, rightly or wrongly, was that they were disregarding patient safety in pursuit of profits.

Pharmacovigilance regulations have increased, accompanied by risk aversion, which means fewer medicines are developed and then approved and authorized for sale to the public. Certainly, the cost of clinical research and development has risen inexorably. Meanwhile, the demand for drugs keeps increasing and shows no sign of abating, driven by the health needs of an ageing population and the continuing rise of Western diseases such as diabetes and obesity.

The ultimate goal is to develop a “safety culture”

Edwards believes that ACRES is part of the solution to this loss of public trust in the safety of biomedical products. According to Edwards, a safety culture can be defined as a set of values, attitudes, competencies and patterns of behavior that together create a pervasive commitment to ensuring the highest standards of safety across the pharmaceutical sector, both the regulators and the regulated. Such a culture will rebuild trust, improve efficiency and demonstrate competency in research and development, while also promoting fairness.

The safe and appropriate use of pharmaceutical products, according to Edwards, involves maximizing therapeutic benefits, reducing risk, and eliminating harm. “The main hurdles in developing a safety culture are an understandable and systematic apathy, helplessness and a Kafkaesque bureaucracy riddled with a blame culture,” he says. “If you try and find out ‘who is in charge of safety,’ you would have a long search!”

Edwards further characterizes a “safety culture” as an organizational culture that avoids errors by improving systems with human factors in mind. It is a culture in which people learn from their mistakes and respond to change, fostering a questioning attitude that rigorously draws out insights into product safety and its health implications.

He also places strong emphasis on the need to differentiate a safety culture, based on a ‘just culture’, from a blame culture. He offers the following insights:

  • Over-reliance on regulatory agencies for drug safety determination

Edwards explains that the existing “blame culture” throughout the pharmaceutical sector relies too heavily on regulatory agencies, such as the US FDA, to determine the safety of drugs. In contrast, in a “safety culture”, the safety of drugs is a matter of concern to all stakeholders, including patients, healthcare professionals, government agencies, financing and marketing companies, and the pharmaceutical companies themselves. As part of corporate social responsibility, he advocates the “implementation of voluntary global standards” by industry rather than waiting for further regulation. He also believes that the pharmaceutical sector must formulate, and adhere to, a consensus-driven definition of the “guiding principles of safety”, developed within the industry itself.

  • Distorted priorities within post-marketing surveillance

The existing blame culture in post-marketing surveillance prioritizes collecting data on adverse drug reactions (ADRs), reporting errors in prescribing and dispensing, and verifying the efficacy of drugs. The focus on the 15-day deadline for reporting adverse reactions to the agencies has massively distorted priorities in the system, without any evidence that safety has specifically benefited from that precise target. A safety culture, by contrast, assesses what matters from the patients’ point of view and what is effective in clinical use, so that potential harm can be identified.

Edwards particularly emphasizes that, in the blame culture, errors are always viewed negatively as the end results of individual failings, when they may also stem from systemic faults. Some errors cannot be avoided, but a safety culture treats them as opportunities for learning and for gaining experience. Errors can occur even in the midst of a safety culture; what can be eliminated are weak and narrowly focused responses to those errors.

  • Poor definition of experts and expertise in safety

Training in safety is focused primarily on compliance with regulation, not on how to perform safely. Progress in defining competencies in the system has been slow, so anyone can claim to be an expert in safety. Conversely, those who do have safety expertise may not be taken as seriously as they should be because they sit too low in the hierarchy, leading to frustration, apathy and helplessness. The inability to speak up and be heard in the system is a major concern.

The existing blame culture has science and the law as the foundation for surveillance; that is, drug companies work within the legal framework of copyrights, patents and government regulations, and within what can be confirmed by scientists in a laboratory. A safety culture looks toward the community and seeks public participation in surveillance. Under the existing blame culture, data is a corporate asset and a closely guarded secret; under a safety culture, the data collected is a social asset that must be shared in the spirit of transparency.

  • Definition of good business

Under a blame culture, as was the case with Vioxx, the public perceives that pharmaceutical companies develop drugs to make enormous amounts of money at the expense of patients’ health and well-being. This leads to an overwhelming push for profits as research and development is accelerated, leaving unsolved ethical, social and legal problems in its trail.

Under a safety culture, good business in the pharmaceutical industry presupposes good ethics, undergirded by good science that promotes social responsibility. A safety culture may reduce the cost of clinical research and development, thereby making an impact on the pricing of medicines. The effectiveness of any marketed drug can then be measured not only by the profits generated and the scientific knowledge discovered, but also by an increase in public trust. Under a safety culture, the pharmaceutical industry would define ethical business practice as a combination of responsibilities for drug safety, company credibility and transparency.

Recommendations for enhancing safety in clinical research

Edwards sets out three basic recommendations for the adoption of a “safety culture” in the pharmaceutical industry: transparency, systemic safety, and public enquiry.

1. Transparency: Transparency, according to Edwards, will encourage pharmaceutical companies to publish their pharmacovigilance inspections and risk management plans. In much the same way as a board of directors reports to the stockholders on the operations of the company, so too must pharmaceutical companies report to consumers on how the medicines they produce are made with their safety in mind. He specifically appeals to insurance companies to look at systemic safety as part of their due diligence before underwriting such ventures.

2. Systemic safety: Systemic safety has two facets, according to Edwards. The first involves clinical research and development: there must be stricter self-regulation through the registration, accreditation and inspection of all involved in clinical research. He underscores that, at the moment, “no training is required to perform clinical research on humans about ‘how to do it safely.’” He asserts, “Anybody can do it and anybody does.” Along with others within ACRES, Edwards believes that research sites and organizations should, as a minimum, undergo a certification process. “Society must have assurances that suitably trained and experienced persons are allocated to tasks in clinical research,” he says. After all, every European country has its own national system for the education and qualification of hairdressers, so why not for clinical researchers?

The second facet involves “modernizing” the way informed consent is obtained: rather than a one-off explanation of the risks and benefits of participating in a clinical trial, consent should be an ongoing opportunity to build patients’ health literacy, maintain their consent and enthusiasm for the research, and train them as partners in safety culture and communication science. Patients have to learn how to report both the good and the bad effects of the drugs they take and to communicate effectively with researchers so that their message comes across accurately.

3. Public enquiry: Edwards’ last recommendation is the creation of an independent investigative body that can enquire into any drug safety crisis, incident or concern. He emphasizes the need to guarantee this body’s independence, such that it can examine the systems of pharmaceutical companies, and even the work of government regulatory agencies, to see where they fail to ensure drug safety.

Human factors are critical to a safety culture

Edwards points out that “human factors are the critical root cause in many disasters.” He gives examples including the Fukushima nuclear disaster and the Gulf of Mexico oil spill, where suspected collusion between regulators and industry resulted in a multitude of errors and negligence. He likens the distrust in the safety of nuclear power and oil companies engendered by these disasters to consumers’ distrust of the safety of pharmaceutical products. He also asserts that stakeholders in the pharmaceutical industry urgently need to build a new “consensus on the accountability and responsibility for the safe use of pharmaceuticals.” Commitment to a “safety culture” should be the basis of that consensus. The big question, according to Edwards, is: who is willing to lead society towards such a consensus?


Brian Edwards, VP Pharmacovigilance and Safety, Alliance for Clinical Research Excellence and Safety (ACRES), will be speaking on New Approaches to Ensure Safety in Clinical Trials at Patient-Centric Clinical Trials Europe 2015.


