Child-Maltreatment-Research-L (CMRL) List Serve


Welcome to the database of past Child-Maltreatment-Research-L (CMRL) list serve messages (10,000+). The table below contains all past CMRL messages (text only, no attachments) from November 20, 1996 through September 14, 2018 and is updated quarterly.


Message ID: 7976
Date: 2009-01-05

Author: Todd McDonald

Subject: RE: Evidence-based and "Mix and Match" Programs

Dr. Saunders, would you be willing to share your case example? I would like the opportunity to walk through a similar exercise with my treatment staff. I would also welcome case examples from others on the list.

Todd

--- On Fri, 12/26/08, Saunders PhD, Benjamin E wrote:

From: Saunders PhD, Benjamin E
Subject: RE: Evidence-based and "Mix and Match" Programs
To: "Child Maltreatment Researchers"
Date: Friday, December 26, 2008, 11:43 PM

Mark,

Thank you for the excellent summary of several very important issues. Two points are particularly critical for future research.

IMHO, the current excitement in some quarters over "components" approaches to treatment vs. manualized "protocols" is, from a research perspective, a red herring. As you note, both must have some sort of decision rules about what to do next at certain points in treatment; without such rules, treatment becomes virtually random. So the results of those decisions will need to be tested empirically, whether that means following a "protocol" or following decision rules about using components. Frankly, when one scratches the surface, the two approaches sound suspiciously similar.

Some have suggested that components approaches are more efficient because they use only the "active ingredient" components of protocols at key points in the treatment and skip the unnecessary stuff. Unfortunately, there is precious little dismantling research discerning exactly what those active ingredients are, and whether they only become active when the other "unnecessary" components have been used as well (what one might call conditional component efficacy). It may turn out that the components that have been picked actually do have the most impact even when not used in concert with other techniques. Or not. This hypothesis remains to be tested for most approaches.

Others have suggested that "components" approaches are more palatable to clinicians because they can use their clinical judgment about when to do what, rather than following the strict rules of a protocol. However, as you point out, many components approaches then proceed to teach elaborate rules for when and how to use the particular components chosen, and they end up being more complicated than protocols (but without the outcome research to support their efficacy).

While the whole components vs. protocols debate is an interesting pastime for some of us, from an empirical testing standpoint it may be a debate without a difference. The empirical question is still, "When therapists do this, do clients get better compared to when therapists do that?" Call it what you will, components, protocols, or whatever, you still have to define the "this" (a.k.a. the independent variable) in a sufficiently replicable manner.

Your second point, about "focused" vs. "comprehensive" treatment planning, is also absolutely critical. At a recent training for about 50 CPS workers, I gave them all a case we had seen recently in our clinic and asked them to break into groups and come up with a treatment plan. The case was a typical train-wreck, multiproblem, abusive family. We then wrote on a flip chart all of the interventions, treatments, programs, and meetings they thought the family should receive and go to. It took four large flip chart sheets to write them all down. I then asked the workers two questions. First, did they think any family in the world, much less this family, could get their child to even half of the appointments they were recommending?
Second, did anyone in the room believe this was an effective treatment plan that would accomplish the goals we had set for this family? No one in the room believed any family (even their own) could accomplish half of the treatment plan, and not one person thought it was an effective treatment plan. Yet they wrote it. They agreed that they have been trained and acculturated to simply add and add and add to treatment plans to the point of being ridiculous. The good news is that at the end of the day, when challenged to come up with a feasible treatment plan composed of evidence-supported interventions and programs, they were able to do it, and the plan was about 1/3 of a flip chart page. The notions that more is better and that doing an untested something is always better than doing nothing permeate the system and need to be challenged.

Again, thanks for the elegant thoughts.

Ben

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Benjamin E. Saunders, Ph.D.
National Crime Victims Research and Treatment Center
Medical University of South Carolina
Charleston, SC 29425
843-792-2945 Phone
843-792-7146 Fax
Visit our web sites:
www.musc.edu/ncvc
www.musc.edu/tfcbt
www.musc.edu/ctg
www.musc.edu/saprevention

________________________________________
From: bounce-3422423-6832002@list.cornell.edu On Behalf Of Chaffin, Mark J. (HSC) [Mark-Chaffin@ouhsc.edu]
Sent: Wednesday, December 24, 2008 8:31 PM
To: Child Maltreatment Researchers
Subject: RE: Evidence-based and "Mix and Match" Programs

Tom,

Yaiiii....where to start. There is considerable interest in the idea of extracting common elements from across evidence-based models, then applying them depending on assessed case characteristics and some systematic algorithm. Probably the most detailed account of how this process might be undertaken has been given by Chorpita and colleagues. Note, however, that the processes for identifying both the elements themselves and the matching algorithm, as described by Chorpita, are NOT just a matter of logic-model "mix and match" eyeballing based on the clinician's gut or personal preferences. It is a quite structured and quantitative process. It is still a protocol (and a complicated one at that), and not at all the same thing as the free-styling, fly-by-the-seat-of-the-pants services often advocated by the anti-EBT crowd. If anything, it is a system probably requiring more expertise and training than most simple EBTs.

How well these algorithm-driven, elements-based systems work in practice is a subject of ongoing study. The jury is not in. Some EBTs, such as MST, have done essentially this same thing for years, and it has worked out well. MST, for example, is not a single protocol, but is assessment-driven and tailored, yet it focuses the intervention on what we know matters, using basic elements that are known to work. In this sense, there are fairly complex EBTs, such as MST, which target broad populations, and more specific EBTs, such as TF-CBT, which are designed for particular types of well-defined problems (e.g., PTSD). The elements-based idea does have a clear appeal--EBTs are probably not monolithic entities incapable of being subdivided. Most share common elements within a given domain (e.g., most evidence-based parenting programs share elements such as the use of labeled praise and structured time-out protocols).
However, I would be very skeptical of efforts to use this rationale to neuter EBTs, or to simply say "oh, we're already doing all that" or "this is the same thing" when it really isn't the case. One need look no further than the Blueprints implementations to find evidence of this, where well-intended shooting from the hip led to blending ad hoc crap in with EBTs and spoiled the results. The point is that the elements-based approach is not an excuse to allow anything and everything to come into the intervention, or to mix and match without some fairly tight limits, such as those described by Chorpita and colleagues.

The area of most concern in your question lies in the idea of whether more is better. There is excellent evidence at this point that this is not only false but that more can become harmful. For example, where parenting interventions are concerned, it appears that adding additional services to a parenting program actually poisons the benefits (see the Kaminski et al. meta-analysis). CPS and courts are notorious for this misconception--often prescribing so many services that whatever benefits any one service might have offered may be quickly lost in the confusion. The analogy to "polypharmacy" in psychiatry is not a bad one. Focus (i.e., directly and behaviorally targeting the top priority that needs to change), rather than comprehensiveness (trying to fix everything), is the new watchword in many service systems. Exactly how much is too much, and what demands the highest priority, are unanswered empirical questions. But the emerging science in this area does, IMO, suggest a couple of general principles. First, tailoring might work...IF the elements are selected carefully based on scientific evidence and are clearly essential common components across EBTs. Second, there are both practical and therapeutic limits to how many things can be done well at once; this point is reached rapidly, and going past it quickly begins to ruin the overall service benefit. So "focus," not "comprehensive," should be the watchword. We need to emphasize this because practitioners have been so imbued with the idea that "comprehensive" is necessary that it takes considerable effort to disabuse them of this unfortunate misconception. This is another reason why any novel algorithm-driven, elements-based protocol needs to be structured, and why it needs to be rigorously evaluated. Keep in mind that implementations of novel blended or elements-based programs cannot properly be called evidence-based just because the sources for the elements were evidence-based. For that matter, they cannot even be presumed to be effective, although we might predict that they would be.

Mark

________________________________________
From: Tom Hanna [tph3@cornell.edu]
Sent: Thursday, December 18, 2008 8:25 AM
Subject: Evidence-based and "Mix and Match" Programs

On another list, there is an active discussion underway about adding an "ancillary" parenting education program to an existing "core" home visitation program. The conversation quickly turned to the topic of "evidence-based," and then to funders and their requirements. The picture quickly got cloudy for me:

1. Some folks who already have an ancillary parenting education program reported that "blending" aspects of two evidence-based programs allowed them to tailor the trainings to the specific needs of their "home visited" parents.
Others quickly pointed out that this is "wrong" and should not be done -- neither evidence-based program is being followed precisely, and therefore both are "contaminated." Funders frown.

2. No one has said what additive effect, if any, is expected from providing a parenting program on top of a home visitation program. The underlying assumption is that families will be better off with two distinct interventions instead of one. (In fact, many centers in this home visiting network have many ancillary programs that serve some if not many of their home visited families.)

3. I know that lots of funders are demanding that agencies use "evidence-based" programs. But I now learn that funders are pushing implementation of a "matrix" of "evidence-based" programs. The underlying assumption is that "if one evidence-based program is good for families, then many are better."

My question: Is there any research that helps multi-service agencies make their way through this minefield when working with a cohort of families?
-- Any study of the "deterioration of effects" from blending two evidence-based models for the same intervention?
-- Any classical studies of the "additive effects" of multiple targeted interventions?
-- Any evidence that a "matrix" of evidence-based programs has a stronger effect than a "pure" one-program approach?
-- Any analysis showing that evidence-based programs in different interventions (home visitation vs. parenting ed vs. therapy groups) are (or are not) internally consistent? (E.g., my doctor gave me one instruction about diet, my nutritionist gave me a contradictory instruction, and my home visitor's instruction differed from the other two.)

TIA,
Tom

--
Tom Hanna, Director
Child Abuse Prevention Network
www.child-abuse.com
tom@child-abuse.com
tph3@cornell.edu
off 607.275.9360
cel 607.227.4524
fax 415.962.0510
--
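
Both Saunders' point about decision rules and Chaffin's description of "algorithm-driven, elements-based" systems turn on the same idea: element selection must follow explicit, replicable rules rather than clinician free-styling. A minimal illustrative sketch of that idea in Python follows; the case fields, element names, matching rules, and the cap on plan size are all invented for illustration and are not Chorpita and colleagues' actual system or any validated protocol.

    # HYPOTHETICAL sketch: invented case fields, elements, and rules --
    # not Chorpita et al.'s system or any validated matching algorithm.
    from dataclasses import dataclass

    @dataclass
    class Case:
        ptsd_symptoms: bool
        disruptive_behavior: bool
        caregiver_available: bool

    def select_elements(case: Case) -> list[str]:
        """Fixed decision rules: the same assessed characteristics
        always yield the same element list, so the 'this' (the
        independent variable) is defined in a replicable way."""
        elements = ["engagement"]  # common first element
        if case.ptsd_symptoms:
            elements += ["psychoeducation", "gradual_exposure"]
        if case.disruptive_behavior and case.caregiver_available:
            elements += ["labeled_praise", "structured_time_out"]
        # Deliberately capped: the thread argues that piling on more
        # services can erode, rather than add to, the benefit.
        return elements[:4]

    print(select_elements(Case(True, True, True)))
    # ['engagement', 'psychoeducation', 'gradual_exposure', 'labeled_praise']

The only point of the sketch is that the selection logic is explicit and reproducible -- the opposite of fly-by-the-seat-of-the-pants services -- which is precisely what makes "when therapists do this, do clients get better?" an answerable empirical question.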
