Vote: Do not publish
The authors explore criminology and criminal justice (CCJ) researchers' issues with their Institutional Review Boards (IRBs). They rely on focus-group interviews with 13 CCJ scholars to identify their concerns and possible solutions. The authors find that participants are critical of the review process, the lack of subject-matter understanding among IRB members, the IRB chairs, the emphasis on legality rather than research ethics, and the absence of an appeals mechanism. They then offer recommendations.
My main critiques of the manuscript are that it appears to start from the premise that IRBs are bad and that the analysis and presentation of results are underdeveloped. On the first point, the description of the interview guide is heavy on questions about how bad IRBs are. Asking how negative experiences harmed participants, whether they consulted an attorney, how long the situation took to resolve, and whether they filed complaints all presuppose that IRBs are bad. Were there any participants who believed that the IRB at their university did not overreach and genuinely sought to protect research participants? If not, the absence may reflect selection bias, in which only those with bad experiences chose to do a follow-up interview.
I also think that the analysis is underdeveloped. By this, I mean that the categories/themes are neither mutually exclusive nor at the same level of abstraction. The last part of the Process section discusses ideas that concern the lack of understanding by IRB members rather than the process itself. Similarly, the section on the IRB Chair addresses a lack of understanding, just from a specific IRB member; these sections could easily be merged. All of these themes sit at a comparable level of abstraction—problems with how the IRB is run and staffed. The appeal theme is thin, but it also belongs at this level. The Recommendations theme, however, is at a different level: it does not discuss problems but potential solutions, so it should not sit alongside the others. What may make more sense is organizing the results into different levels. Part 1 could cover IRB problems and concerns, discussing the various issues. Part 2 could elaborate on the offered solutions. Part 3 could address mission creep and how IRBs shifted from protecting participants to protecting universities. I encourage the authors to think more about what the participants are saying, find ways to deepen the analysis, and look for broader patterns.
The discussion of recommendations is limited. It does not take into account the diversity and complexity of IRBs throughout the United States. Recommending that the IRB chair be compensated and serve as a strong leader within the reviews is not relevant for many universities. For instance, the IRB at my university is quite large. All members are compensated (it is their permanent job), and they review so many protocols that the chair is rarely in any of the reviewing sessions—there is too much to do just running the IRB department. Also, the quote from FG 1, Part 3, seems unfounded: at my university, at least, all IRB members are aware of the federal guidelines and undergo regular training. Thus, these recommendations do not read as applicable to most CCJ researchers; they read as idiosyncratic. This limits the larger takeaway of the study.
The authors also set up the manuscript with the idea that CCJ research faces unique issues and even devoted interview questions to this. However, the issues they describe are not unique to CCJ research. Cumbersome bureaucracy and IRB members unfamiliar with specific topics are problems for nearly all researchers.
The Methods section starts with the assumption that the reader knows about the original survey. It would help the reader to fully explain the initial survey first and then transition to the follow-up interviews.