Child-Maltreatment-Research-L (CMRL) List Serve

Database of Past CMRL Messages

Welcome to the database of past Child-Maltreatment-Research-L (CMRL) list serve messages. The table below contains all past CMRL messages (text only, no attachments) from Nov. 20, 1996 - March 6, 2018 and is updated quarterly.


Message ID: 8513
Date: 2010-07-06

Author: E. Christopher Lloyd

Subject: RE: Longitudinal Analysis using PLS3 and BDI scores for children as dependent vars

Hi, all. I've worked with NSCAW data at some length, independently and as a doctoral student at UNC-Chapel Hill. After some discussions amongst ourselves and consulting with RTI and others, we decided to avoid using the baseline-wave infant data. Here's the rationale:

1. Assessing infants is a task that really should be done by clinicians with expert training. The data collectors had some basic training but not really the expertise required.

2. The NSCAW instruments used to assess infants are the screener versions, which means they have few items. Hence, they are less reliable, since any error is magnified.

In doing growth modeling (and this was part of my dissertation), you can use the raw scores, but you'll need to include time as a predictor. Really, it would be better to use the adjusted score, since we already know the scores should rise as time passes. In some diagnostic analyses I ran a few years ago on the NSCAW developmental instruments, I found most of them are most strongly influenced by immediate temporal influences rather than historical ones. That is, the scores at Wave 4 are most closely correlated with other scores at Wave 4, not scores at Wave 3 or Wave 1. While you can generate growth curves (I used Bollen and Curran's SEM-based methodology, but it's been shown that SEM- and HLM-based models produce basically identical results), there's a lot of missing data to fill in, and the imputation models are necessarily complex (I used Markov chain Monte Carlo, but there are other approaches that would work).

Hope this helps. Feel free to contact me on or off list if it doesn't. My office email is below my signature.

Chris

E. Christopher Lloyd, PhD
Assistant Professor
School of Social Work
University of Arkansas at Little Rock
2801 South University Avenue
Little Rock, AR 72204
501.569.8486
eclloyd@ualr.edu

--- On Wed, 6/30/10, Chaffin, Mark J. (HSC) wrote:

From: Chaffin, Mark J. (HSC)
Subject: RE: Longitudinal Analysis using PLS3 and BDI scores for children as dependent vars
To: "'Child Maltreatment Researchers'"
Cc: "'lchen@chapinhall.org'"
Date: Wednesday, June 30, 2010, 9:04 AM

A different analytic tack might reveal some different findings. Early childhood developmental data are vulnerable to a variety of analytic problems. Given the pace and discontinuity of child development during this period, test items that were previously impossible suddenly become trivial only a few months later. These stage-sequential patterns of development are precisely why items often change on measures across early childhood stages. In the developmental literature, analysts have sometimes cautioned against analyzing standardized scores to model these types of staged phenomena that do not approximate a smooth underlying growth pattern.

Latent transition models may offer a better fit with the underlying phenomena. Depending upon the task, these can be fitted at the item or task level for failure/success, assuming that after an item is passed it would stay passed even after it is dropped from the measure, and assuming that a new item that is failed would also have been failed earlier had it been included. This would clearly involve use of raw scores in the analysis, not standardized scores.

There are a number of other possibilities here that seem less likely. Measure reliability may not be the issue, given that it would tend to increase intercept variability but not necessarily intercept position. You might check variability of baseline scores to see if you have a non-constant variability issue.
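For anyone who wants to try the baseline-variability check Mark describes, here is a minimal Python sketch on made-up data standing in for NSCAW; the column names (child_id, wave, std_score) and the numbers are hypothetical, purely for illustration.

import numpy as np
import pandas as pd
from scipy import stats

# Toy long-format data: one row per child per wave (hypothetical columns).
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "child_id": np.tile(np.arange(n), 3),
    "wave": np.repeat([1, 2, 3], n),
    "std_score": np.concatenate([
        rng.normal(100, 18, n),   # baseline: wider spread (toy assumption)
        rng.normal(92, 12, n),
        rng.normal(90, 12, n),
    ]),
})

# Spread of standard scores by wave.
print(df.groupby("wave")["std_score"].agg(["mean", "std", "count"]))

# Levene's test for equal variances across waves (median-centered, robust to non-normality).
by_wave = [scores.to_numpy() for _, scores in df.groupby("wave")["std_score"]]
stat, p = stats.levene(*by_wave, center="median")
print(f"Levene W = {stat:.2f}, p = {p:.4f}")

A markedly larger baseline standard deviation than at later waves would be one sign of the non-constant variability Mark is pointing to.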
I doubt that regression to the mean or ceiling effects at baseline are the problem, but they are worth checking, especially by comparing models with vs. without slope-intercept correlation pathways. The other possibility, which I unfortunately suspect may be the actual one, is that this is not an artifact at all, but rather reflects the real development of children in child welfare and the growing impact of their environments over time. That is, if you assume that infant measures reflect mostly biologic or basic health status, whereas later measures come to reflect suboptimal environments more and more the longer the child is in that environment, then you would expect to see children whose development started off fairly normal but whose growth didn't keep pace.

Mark

From: Lijun Chen [mailto:lchen@chapinhall.org]
Sent: Monday, June 28, 2010 1:30 PM
Subject: Longitudinal Analysis using PLS3 and BDI scores for children as dependent vars

Dear All,

I am using the NSCAW data to examine the developmental indicators, especially BDI-Cognitive and PLS3, for infants (0-12 months at Wave 1) through the 4 waves of data collection. One thing that baffles me is that the BDI and PLS3 standard scores for most children at the 2nd wave have dropped precipitously from the baseline wave, when they were under 12 months. This may indicate the poor performance of these infants relative to the national norm. I wonder whether the (un)reliability of the baseline scores might also be a contributing cause. I would appreciate your opinions/comments on this.

I plan to adopt growth curve modeling in analyzing the developmental trajectories of these infants, using BDI and PLS3 scores as dependent variables. Is it preferable to use the standard scores or the raw scores as the dependent variables? Is it valid to use the raw scores as the dependent variable, since the items included in the instrument at different waves are not the same? Your advice is appreciated.

Lijun Chen
Chapin Hall at the University of Chicago
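As a rough illustration of the growth-curve approach Chris and Lijun are discussing (raw scores as the outcome with time as a predictor, in an HLM-style formulation, since Chris notes SEM- and HLM-based models give essentially identical results), here is a minimal Python sketch on simulated data; child_id, age_months, and raw_score are hypothetical names, and the simulated numbers stand in for real NSCAW scores.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate long-format data: 4 waves per child, raw scores rising with age.
rng = np.random.default_rng(1)
n_children, n_waves = 150, 4
age = np.tile([6.0, 18.0, 36.0, 59.0], n_children)      # months at assessment (toy values)
child = np.repeat(np.arange(n_children), n_waves)
level = rng.normal(20, 4, n_children)[child]            # child-specific starting level
growth = rng.normal(1.2, 0.3, n_children)[child]        # child-specific growth rate
raw = level + growth * (age / 12) + rng.normal(0, 2, n_children * n_waves)
df = pd.DataFrame({"child_id": child, "age_months": age, "raw_score": raw})

# Random intercept and random slope for time, with children as the grouping factor.
model = smf.mixedlm("raw_score ~ age_months", data=df,
                    groups=df["child_id"], re_formula="~age_months")
result = model.fit(method="lbfgs")
print(result.summary())

The intercept-by-age_months covariance in the random-effects block of the summary is one place to look when checking the slope-intercept correlation Mark mentions; refitting with re_formula="~1" (random intercept only) gives a simpler model to compare against.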
