Child-Maltreatment-Research-L (CMRL) List Serve

Database of Past CMRL Messages

Welcome to the database of past Child-Maltreatment-Research-L (CMRL) list serve messages. The table below contains all past CMRL messages (text only, no attachments) from Nov. 20, 1996 - December 22, 2017 and is updated quarterly.


Message ID: 9353
Date: 2013-02-07

Author: Chaffin, Mark J. (HSC)

Subject: RE: Indicators for Primary Child Maltreatment Prevention?

Hello Scott, some quick thoughts on this:

1) I completely agree with Brett: drop "substantiation." You will eliminate a source of inter-rater reliability problems, plus it means squat when it comes to actual wellbeing. IMO, this is one of those things where people often go by what the word "substantiation" sounds like it means in regular English, or what it means in statute, not what it means in actual child welfare field practice or what it has been found to mean in the research studies.

2) I would not treat administrative data and other types of data as mutually exclusive. Rather, I'd treat them as complementary.

3) We all know the issues with report data, but there are several things to be said in their defense. The analogy I would use is that CPS report data in child maltreatment prevention research are a direct analogue of arrest data in crime prevention research. Would we really do crime prevention research and not include arrest data as an outcome of interest? Here are some benefits of CPS report data:

a. They are directly cost relevant: a report has economic costs associated with it. A questionnaire measure, not so much.
b. A report is a hard "bottom-line" outcome. A check mark on a soft outcome like a questionnaire, not so much.
c. A report doesn't need a logic model to explain it, the way a questionnaire outcome does (e.g., "we think that parenting stress is related to what we really care about because... [insert spin here]").
d. Administrative data are less often missing than other types of data, though you have to be careful here because you may not always know when they are missing.
e. They collect themselves at low cost. This is huge; the cost of collecting other types of data can be tremendous.
f. They are available long-term. You can get years of follow-up down the line.
g. They can be easily aggregated at various levels (individual child, individual parent, family, zip code, county, region, state).
h. Other types of data have their problems too. Questionnaires? Just check marks on paper with self-report bias. Observational data? Just a snapshot in time, and people behave differently when they are being watched.
i. I've had a sneaking suspicion over the years that many advocates are "down" on child abuse report data because their favored programs didn't yield outcomes on these measures. People have frequently selected report data as a bottom-line outcome for their studies, then dissed report data when the results didn't work out as they hoped. So it was OK when they picked it, but became not OK when it didn't work out. True, it is a difficult graph to move, but hard downstream outcomes often are difficult to move, much more so than soft (and I would argue less meaningful) outcomes such as questionnaires.

4) Your study sounds a bit similar in design to the prevention study done by Ron Prinz. You might want to look at what Ron did.

5) Before looking at county-level rates, I'd be very careful that you were actually running a county-level experiment, i.e., that the entire county (or close to it) got the intervention. If only selected individuals got the intervention, then county-level outcomes might not be expected to move much.

MC

On Thu, Jan 31, 2013 at 6:29 PM, Bates - CDPHE, Scott wrote:

Hi all- I sit on our Early Childhood Leadership Coalition here in Colorado, and we are looking for better, more positive indicators of the primary prevention of child maltreatment. We currently use county rates of substantiated child maltreatment as an indicator and, as you may imagine, those rates are subject to too many local factors (e.g., caseworker training, worker caseloads) to be comparable from county to county (our child welfare system is county-administered, state-supervised). I've looked at the data collected and am considering an indicator of new involvements as a function of child population (but I'm no epidemiologist or statistician, either!).

Does anyone know of better indicators to measure child safety from maltreatment? Ideas regarding positively worded indicators are especially welcome. Thanks!

-Scott

--
Scott Bates, MSW
Program Manager, Child Maltreatment Prevention Unit
(Colorado Children's Trust Fund and Family Resource Centers)
Colorado Department of Public Health and Environment
scott.bates@state.co.us
w (303) 692-2942  f (303) 691-7901
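[Editor's note: the indicator Scott describes, new involvements as a function of child population, is a simple rate calculation. A minimal sketch in Python, where the county names and counts are hypothetical and for illustration only:]

```python
def involvement_rate_per_1000(new_involvements, child_population):
    """Rate of new child-welfare involvements per 1,000 children.

    A standardized rate like this lets counties of very different
    sizes be compared on the same scale.
    """
    if child_population <= 0:
        raise ValueError("child population must be positive")
    return 1000.0 * new_involvements / child_population


# Hypothetical county data: (new involvements, child population)
counties = {
    "County A": (120, 15_000),
    "County B": (45, 9_000),
}

for name, (involvements, population) in counties.items():
    rate = involvement_rate_per_1000(involvements, population)
    print(f"{name}: {rate:.1f} new involvements per 1,000 children")
```

As Chaffin's point 5 cautions, the same per-1,000 denominator can be computed at whatever level the intervention actually operated (family, zip code, county), so the unit of analysis can match the unit of treatment.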
