May 31, 2005
National Council on Ethics in Human Research (NCEHR)
240 Catherine Street, Suite 208
Ottawa, ON  K2P 2G8

Re: Consultation on Options Paper
Dear Committee Members:
We are a national organization dedicated to academic freedom and scholarship. We are writing in response to the call for comment entitled “Public Consultation Taking Place on Options for Accrediting Programs to Protect Human Subjects Involved in Research Studies,” as per a letter and documents dated April 15, 2005, available on-line.

We have in the past expressed concerns about the expansion of research ethics regulations, given the lack of demonstrated effectiveness of such regulations. As we have noted previously, changes in the research ethics review system have greatly increased the burdens on individual researchers, in the social sciences and humanities (SSH) in particular, and on local university administrations, with no documented benefit to the genuine protection of research participants.

We have examined the accreditation proposal in an attempt to determine whether it provides any means of objectively confirming that additional policies will increase subject safety. In the end, however, we are again disappointed. All the proposal offers is the promise that what can be termed “policy mechanics” will continue to expand without any bona fide assessment plan. That is, there is no discussion at all of what will be improved, how we will know it has been improved, or by how much. What is the evidence that subjects will be safer as a result of an accreditation agency that is acknowledged (e.g., pp. 18-19) to be an expensive extension of the research ethics enterprise? The cost-benefit calculation is not justifiable: the cost is considerable and the benefit unproven.

Therefore, with regard to the three questions at the bottom of p. 5, we would answer the first (support accreditation) with “Definitely not without true assessment,” and the third (further attention) with “assessment.”
This lack of concrete assessment is strange given the persistent use of terms implying that the proposal would provide benefits: for example, on p. 7, “accurately” (line 3) and “improvement,” “achievement,” and “quality” (line 8); on p. 10, “enhance the protection”; and on p. 13, the five bullet points, which are valid only if there are data somewhere to elevate the claimed benefits above the level of opinion.
There are other places in which claims are made that rest on nothing more than assumptions unsupported by actual data. In particular, there is repeated use of the word “standards” and the expression “best practices” (e.g., p. 2, Appendix A). If there is no hard evidence on these issues, it is impossible to claim that one knows what is “best,” and thus there is no sensible way to set “standards.” This is especially worrisome given the statement (p. 8) that “Standards for accreditation generally set higher goals than regulations” — this seems to imply that accreditation will make even greater demands than the TCPS, yet still will not require any evidence of their effectiveness.
There is a notion with a long history in scientific research, captured in the quotation from Huxley: “The great tragedy of science — the slaying of a beautiful hypothesis by an ugly fact.” In a similar vein, Bertrand Russell once observed that “Assumptions have all the advantages of theft over honest toil.” The point is that it is easy to generate a theory, or a policy, but hard to develop one that actually works. The research ethics industry has for 30 years avoided confronting the ugly fact that its policies may not be effective by substituting opinions for evidence. Unless we have overlooked something here, it appears that accreditation will continue along that gratuitous path.
If this proposal were submitted to a peer-reviewed journal, editors would promptly ask “How do you know?” and the only acceptable answer would be “The data demonstrate it to be true.” Researchers know this, but policy makers consciously or unconsciously avoid this aspect of reality contact. It appears that with accreditation we are to have yet more policies based on the “That sounds like a good idea” defense alone. The expectation that whatever money is required will be forthcoming (pp. 18-19), without evidence of actual benefits, would normally seem delusional, but not given the 30-year precedent of funding policies without evidence.
There are more points that could be made, but we will stop with just the following few observations. In Appendix A (p. 6), three review categories short of full review are noted: proportionality, expedited, and exempt. These categories are infrequently appropriate in medical research, whereas they are clearly the norm in SSH research. Although these categories of review exist in principle, in practice they are often ignored by local reviewers in favor of full review.
This is in fact itself unethical, but it follows inevitably because the regulators insist on the same paper trail (top of p. 6, Appendix A) for all research. Accreditation will in all probability provide further impetus for continuing this cookie-cutter approach of recent years, particularly in view of the statement that “Standards for accreditation generally set higher goals than regulations” (p. 8) — this can easily be read to mean full review for everything. This is part of a general pattern of treating SSH research like medical research, even though ignoring these three reasonable low-risk categories affects the two fields very differently. When local REBs ignore these options in favor of full review, medical researchers are affected hardly at all, given that most of those projects carry some credible risk and thus warrant full review, whereas the additional unnecessary burden falls on the vast majority of reviews of SSH research.
We have been concerned for some time that medical research, specifically clinical trials, is used as the model for all research, and this accreditation proposal appears destined to continue this one-size-fits-all approach. Not only are there demonstrable differences in the proportion of low-risk research in medical versus SSH projects, there are other differences as well. For example, the notion of “good clinical practices” (Appendix A, p. 2, and elsewhere) derives from medical research and has no meaning in SSH research. Subjects may indeed be patients in medical research, but the subjects in SSH research are not patients, and thus policies developed for patients are not appropriate for SSH research. There is nothing in these documents to indicate that the “standards” developed will not further ingrain the one-size-fits-all approach that leads local REBs dealing with SSH research to further distort the review process and criteria.
Finally, regarding Appendix A, p. 6, Element 4.2: research ethics boards have no mandate to evaluate the scientific or scholarly merit of research proposals, nor does an accreditation agency. Such judgments of scientific merit are the domain of grant review panels and journal editors.
In sum, the present proposal continues to ignore the need for evidence of the effectiveness of REB procedures, and continues to treat all areas of scholarship the same. Without dealing with these two issues, “best practices” is a meaningless concept. Everything considered, we find no reason to believe that an accreditation agency will add any benefit to subject safety. We have no doubt, however, that it will add considerable expense to an enterprise that even now is unable to justify its cost outside of medical research. This proposal should be abandoned at the earliest opportunity.
John Mueller
Stephen Lupker
Members of the Board of Directors, SAFS