FDA bureaucrats see the role of their organization as that of a shield, removing as much risk as possible from medicine. Since it is impossible to remove all risk from any medicine, what this mission means in practice is that no individual bureaucrat ever wants to be held accountable for approving a therapy that later turns out to have unexpected consequences. It doesn't matter if those consequences occur in just a few individuals while countless others benefit, or even if the medicine in question is actually responsible: the fickle press will rise up in arms; the lawyers will flock. Thus FDA bureaucrats will always move in the direction of requiring ever greater proof from companies - this alone has doubled the cost of commercial development over the past decade. Along the way, they also strip patients of the right to choose, the ever-present authoritarian side of the goal of protection. No one is permitted to make their own risk assessment, and no organization is permitted to help those patients willing to take educated risks.
There are subtler, more far-reaching, and more harmful effects beyond the obvious ones noted above. The structure of regulation has changed the strategy of research and development for the worse. As the article here argues, it is the major contributing factor to the lack of progress in the treatment of cancer over the last half century. The present regulatory environment incentivizes development programs that produce marginal, incremental results and build on existing approaches. Bold new directions need not apply. The FDA makes the cost of development so high that only large organizations can follow through to the clinic, and large organizations are risk-averse. Few leaders will be willing to take the sort of risks that lead to real, revolutionary progress.
Look at the history of chemotherapy research and you'll find a very different world from the one that characterizes cancer research today: fast bench-to-bedside drug development; courageous, even reckless researchers willing to experiment with deadly drugs on amenable patients; and centralized, interdisciplinary research efforts. Cancer research was much more like a war effort before the feds officially declared war on it. The whole cycle, from no chemotherapies at all to development, trial, and FDA approval for multiple chemotherapy drugs, took just five years, from 1948 to 1953. Modern developments, by contrast, can take decades to get to market.
Today, the National Cancer Institute and various other national agencies largely fund research through grants. The proliferation of organizations receiving grants means cancer research is no longer primarily funded with specific treatments or cures (and accountability for those outcomes) as a goal. With their funding streams guaranteed regardless of the pace of progress, researchers have become increasingly risk-averse. As the complexity of the research ecosystem grew, so did the bureaucratic requirements. By one account, "16.8 percent of the total costs of an observational protocol are devoted to institutional review board interactions, with exchanges of more than 15,000 pages of material, but with minimal or no impact on human subject protection or on study procedures."
As R&D gets more expensive and compliance more onerous, only very large organizations - well-funded universities and giant pharmaceutical companies, say - can afford to field clinical trials. Even these are pressured to favor tried-and-true approaches that already have FDA approval and drugs where researchers can massage the data to just barely show an improvement over the placebo. (Since clinical trials are so expensive that organizations can only do a few, there's an incentive to choose drugs that are almost certain to pass with modest results - and not to select for drugs that could result in spectacular success or failure.) Of course, minimal improvement means effectively no lives saved.
The problem is clear: Despite tens of billions of dollars every year spent on research, progress in combating cancer has slowed to a snail's pace. So how can we start to reverse this frustrating trend? One option is regulatory reform, and much can be done on that front. Streamline the process for getting grant funding and institutional review board approval. Cut down on reporting requirements for clinical trials, and start programs to accelerate drug authorizations for the deadliest illnesses. One proposal is "free-to-choose medicine." Once drugs have passed Phase I trials demonstrating safety, doctors would be able to prescribe them while documenting the results in an open-access database. Patients would get access to drugs far earlier, and researchers would get preliminary data about efficacy long before clinical trials are completed.
More radically, it might be possible to repeal the 1962 Kefauver-Harris amendment to the Federal Food, Drug, and Cosmetic Act, a provision that requires drug developers to prove a medication's efficacy (rather than just its safety) before it can receive FDA approval. Since this more stringent authorization process was enacted, the average number of new drugs greenlighted per year has dropped by more than half, while the death rate from drug toxicity has stayed constant. The additional regulation has produced stagnation, in other words, with no upside in terms of improved safety. Years ago, a Cato Institute study estimated the loss of life resulting from FDA-related drug delays from 1962 to 1985 in the hundreds of thousands. And that estimate only included medications that were eventually approved, not the potentially beneficial drugs that were abandoned, rejected, or never developed, so it's probably a vast underestimate.