Evaluating intervention programs
Accordingly, our findings cannot be interpreted as an increase in prosocial conduct among less prosocial participants. Future studies are needed to examine to what extent the YPA can be embedded in more intensive school-based intervention programs (see Caprara et al.). Despite the advantages of the proposed LCM approach, several limitations should be acknowledged. First of all, the use of a second-order LCM with two available time points requires that the construct be measured by more than one observed indicator.

As such, this technique cannot be used for single-item measures. Second, like any structural equation model, our SO-MG-LCM makes the strong assumption that the specified model is true in the population.

This assumption is likely to be violated in empirical studies. Moreover, the model must be empirically identified, which imposes an entire set of constraints that leave aside substantive considerations. Third, in this paper, we restricted our attention to the two-parallel-indicators case to address the most basic situation that a researcher can encounter in the evaluation of a two time-point intervention.

Our aim was indeed to confront researchers with the most restrictive case in terms of model identification. The case in which only two observed indicators are available is, in our opinion, one of the most intimidating for researchers. Similar considerations apply when a scale is composed of a long set of items, or when the target construct is a second-order construct loaded by two indicators.

In these circumstances, the interest is indeed in estimating the LCM, and the invariance of indicators likely represents a prerequisite.
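To make the two-indicator, two-wave case concrete, the sketch below simulates data consistent with a latent change model with parallel indicators. All variable names and parameter values (slope mean, residual standard deviations) are illustrative assumptions, not quantities from the study:

```python
import random
from statistics import mean

def simulate_two_wave(n=2000, slope_mean=0.5, slope_sd=0.2, seed=42):
    """Simulate two parallel indicators at two waves under a latent change
    model: eta_T2 = eta_T1 + slope, with indicator residuals ~ N(0, 0.5)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        eta1 = rng.gauss(0.0, 1.0)               # latent status at pretest
        slope = rng.gauss(slope_mean, slope_sd)  # latent change factor
        eta2 = eta1 + slope                      # latent status at posttest
        data.append({
            "y1_t1": eta1 + rng.gauss(0, 0.5),   # parallel indicator 1, wave 1
            "y2_t1": eta1 + rng.gauss(0, 0.5),   # parallel indicator 2, wave 1
            "y1_t2": eta2 + rng.gauss(0, 0.5),
            "y2_t2": eta2 + rng.gauss(0, 0.5),
        })
    return data

data = simulate_two_wave()
# the mean observed change recovers the latent slope mean up to sampling error
obs_change = mean((r["y1_t2"] + r["y2_t2"]) / 2 - (r["y1_t1"] + r["y2_t1"]) / 2
                  for r in data)
```

In a real analysis these quantities would be estimated within an SEM framework rather than computed descriptively; the point here is only to show what the two-indicator, two-wave data structure looks like.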

Measurement invariance issues should never be undervalued by researchers. Instead, they should be routinely evaluated in preliminary research phases and, when possible, incorporated in the measurement model specification phase. Finally, although intervention programs with two time points can still offer useful indications, the use of three and possibly more points in time provides the researcher with stronger evidence to assess the actual efficacy of the program at different follow-ups.

Hence, the methodology described in this paper should be conceived as a support to make the best of pretest-posttest studies, not as an encouragement to collect only two-wave data. Moreover, although several rules of thumb have been proposed in the past for conducting SEM, our procedure may not be suited for the evaluation of intervention programs based on small samples. Despite these limitations, we believe that our LCM approach could represent a useful and easy-to-use methodology that should be in the toolbox of psychologists and prevention scientists.

Several factors, often uncontrollable, can oblige the researcher to collect data from only two points in time. Faced with this suboptimal scenario, all is not lost: researchers should be aware that analytical techniques more accurate and informative than ANOVA are available to assess intervention programs based on a pretest-posttest design.

GA proposed the research question for the study, the methodological approach, and the focus and style of the manuscript; he contributed substantially to the conception and revision of the manuscript, wrote the first drafts of all manuscript sections, and incorporated revisions based on the suggestions and feedback from AZ and EP. AZ contributed the empirical data set, described the intervention and part of the discussion section, and critically revised the content of the study.

EP conducted analyses and revised the style and structure of the manuscript. The authors thank the students who participated in this study. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

In detail, participating classrooms were chosen according to the interest in the project shown by the head teachers. Front Psychol. Published online Mar 2. This article was submitted to Quantitative Psychology and Measurement, a section of the journal Frontiers in Psychology.

Received Nov 21; Accepted Feb 6. The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. Abstract: A common situation in the evaluation of intervention programs is that the researcher can rely on only two waves of data (i.e., pretest and posttest).

Keywords: experimental design, pretest-posttest, intervention, multiple group latent curve model, second order latent curve model, structural equation modeling, latent variables.

Introduction: Evaluating intervention programs is at the core of many educational and clinical psychologists' research agendas (Malti et al.). Evaluation approaches: observed vs. latent variables. Figure 1: Sequence of models. We suggest a four-step approach to intervention evaluation. Model 1 (no-change model): a no-change model is specified for both the intervention group (henceforth G1) and the control group (henceforth G2). Model 2 (latent change model in the intervention group): in this model, a slope growth factor is estimated in the intervention group only.

Model 3 (latent change model in both the intervention and control group): in Model 3, a latent change model is estimated simultaneously in both G1 and G2. Model 4 (sensitivity model): after having identified the best-fitting model, the parameters of the intercept are inspected.
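Choosing among such nested models typically relies on chi-square difference tests. A minimal standard-library helper is sketched below; the fit values in the final line are invented for illustration and are not those reported in the paper:

```python
import math

def chi2_sf(x, df):
    """Survival function P(X > x) of a chi-square distribution with df
    degrees of freedom, via the regularized lower incomplete gamma series."""
    if x <= 0:
        return 1.0
    s, z = df / 2.0, x / 2.0
    term = 1.0 / s
    total = term
    n = 0
    while term > total * 1e-12:
        n += 1
        term *= z / (s + n)
        total += term
    lower = math.exp(s * math.log(z) - z) * total  # lower incomplete gamma(s, z)
    return 1.0 - lower / math.gamma(s)

def delta_chi2(chi2_restricted, df_restricted, chi2_free, df_free):
    """Chi-square difference (likelihood ratio) test between nested SEM models."""
    d = chi2_restricted - chi2_free
    d_df = df_restricted - df_free
    return d, d_df, chi2_sf(d, d_df)

# hypothetical fit values: no-change model (restricted) vs. latent change model
d, d_df, p = delta_chi2(27.3, 12, 15.1, 10)
```

If p < 0.05, the restricted (no-change) model fits significantly worse than the less constrained model, favoring the latent change specification; in practice one would take the chi-square and df values from the SEM software output.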

Hypotheses: We expected Model 2 (a latent change model in the intervention group and a no-change model in the control group) to be the best-fitting model. Methods, Design: The study followed a quasi-experimental design, with both the intervention and control groups assessed at two time points: before the YPA intervention (Time 1) and 6 months after (Time 2).

Results: We created two parallel forms of the prosociality scale by following the procedure described in Little et al. Table 2: Goodness-of-fit indices for the tested models. The full Mplus syntaxes for these models are reported in the Appendices. Figure 2. Figure 3. Discussion: Data collected in intervention programs are often limited to two points in time, namely before and after the delivery of the treatment (i.e., pretest and posttest).
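The parallel-forms step can be illustrated with a simplified item-to-form assignment: rank items by their item-total correlation and deal them out serpentine-style so the two halves are matched on discrimination. This is a rough sketch inspired by that general logic, not a reproduction of the exact Little et al. procedure; the item names and responses are invented:

```python
def pearson(xs, ys):
    """Plain Pearson correlation (assumes non-constant inputs)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def parallel_forms(item_scores):
    """item_scores: dict of item name -> list of respondent scores.
    Returns two item lists with roughly matched item-total correlations."""
    totals = [sum(row) for row in zip(*item_scores.values())]
    ranked = sorted(item_scores,
                    key=lambda item: pearson(item_scores[item], totals),
                    reverse=True)
    form_a, form_b = [], []
    for i, item in enumerate(ranked):
        # serpentine pattern A B B A A B B A ... balances the two forms
        (form_a if i % 4 in (0, 3) else form_b).append(item)
    return form_a, form_b

items = {                     # invented responses from five people
    "help_stranger": [1, 2, 3, 4, 5],
    "share_things":  [1, 2, 3, 4, 4],
    "comfort_other": [2, 2, 3, 4, 4],
    "donate":        [2, 3, 3, 3, 5],
}
form_a, form_b = parallel_forms(items)
```

Each resulting form can then serve as one of the two parallel indicators of the latent construct at each wave.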



Footnotes: 1. Directed by Leder. References: Achenbach T. Future directions for clinical research, services, and training: evidence-based assessment across informants, cultures, and dimensional hierarchies. Child Adolesc. The Ego Resiliency Scale revised: a cross-cultural study in Italy, Spain, and the United States.

Self-efficacy: toward a unifying theory of behavioral change. Modeling latent growth with multiple indicators: a comparison of three approaches. Methods 20, 43–. Structural Equations with Latent Variables. New York, NY: Wiley. Hoboken, NJ: Wiley.

Confirmatory Factor Analysis for Applied Research. Methods Res. Prosociality: the contribution of traits, values, and self-efficacy beliefs.

Positive effects of promoting prosocial behavior in early adolescents: evidence from a school-based intervention. A new scale for measuring adults' prosocialness. Manifest variable path analysis: potentially serious and misleading consequences due to uncorrected measurement error. Methods 19, –. Boston, MA: Houghton Mifflin.

New York, NY: Irvington. The application of latent curve analysis to testing developmental theories in intervention research. The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis. Methods 1, 16–. Hoboken, NJ: Wiley. Applied Missing Data Analysis. First- versus second-order latent growth curve models: some insights from latent state-trait theory.

Modeling 20, –. The prevention of mental disorders in school-aged children: current state of the field. New York, NY: Routledge.

A majority of employees problematized the scarcity of time and the fact that line managers often prioritized other tasks over conducting and following up on OI activities. Interviewees expanded on this perspective and underlined the key role of line managers in making sure OI progress was taking place, and that continuous communication about the intervention process was happening.

Many employees experienced positive developments during the implementation of the OI, most concretely improved social relations and team climate.

Others agreed on the development but were not sure if it was due to the OI. Some employees expressed disappointment with regard to having spent too much time and energy on assessment and too little on developing actions. These disappointments were linked to difficulties in discerning which activities stemmed from the OI and how they related to changes in working conditions. Some expressed hesitance about ascribing too clear a causality between the OI and the improvements that could be observed, and others commented that the OI did lead to practical improvements, though not on a large scale.

Many interviewees likewise commented on the OI and presented their perceptions of its working mechanisms. This demonstrates how interviews can help researchers explain why and how an OI works. For example, a clear positive factor in the interviews regarding the outcome of the intervention was, for some, a feeling of being involved and participating in the development and follow-up on activities.

The clear difference from the quantitative factor is the substantial doubt and hesitance expressed by the employees with regard to intervention causality. Similarly, opinions and suggestions regarding the weighting of energy spent on different components of the intervention are a parameter more easily assessed by explorative qualitative methods. When asked about information about changes in the workplace, respondents talked about several interrelated issues: information about the OI activities, problems of assigning time for information distribution, and general information about changes.

Regarding the OI, some respondents experienced a lack of information and hence did not know where the process was headed. One interviewee explained that information did not come about by itself; one needed to actively seek it out. Another employee problematized the balancing act of having limited time to seek information. A consistent theme in the interviews was that changes in the company at both the organizational and team level significantly affected the OI, and that information about these changes was insufficient.

Not only did the interviewees report several cases of restructuring of work tasks, but also layoffs. These disturbances were even seen by interviewees as being used by line and area managers as excuses for not focusing sufficiently on the implementation of the OI.

A problem that was raised about concurrent projects, especially during the layoffs, was that the information and developed practices were fleeting. Several interviewees hence articulated a reluctance to commit themselves to novel projects, as many had substantial previous experience with change failure. This theme demonstrated that though employees rated the information regarding changes positively in the questionnaire, their daily experiences of lacking information and navigating a complex organization proved difficult.

Likewise, the interviews highlighted the juxtaposition of wanting more information and the cost of having to spend time acquiring it. Interviewees presented many statements about how they perceived the need for specific aspects of the OI, such as the format of being involved, developing action plans, and participating.

Some experienced that there had been a need for a new way of working with screening and action planning in smaller groups, while others would have preferred that everyone was participating in the activities.

In the interviews, talk about the OI was also often linked to experiences with other similar activities and how they had often been forgotten in the long run.

Some cited concurrent organizational changes, such as layoffs, merging teams, or changing managers, as excuses for not having had sufficient time and resources for the OI. A general assessment was that the process and outcome questionnaire used in the OI was too long, but some relevant aspects were identified.

Some interviewees did not remember completing the questionnaire, but they often explained that they had likely done it and since forgotten about it. A group of interviewees explained that the questionnaire was superseded by concurrent events such as managerial change.

The final theme played out very differently in the interviews than in the two items in the questionnaire. Interviewees in the semi-structured interviews did not restrict themselves to answering only the questions regarding the need for the OI, but instead gave accounts of the contextual setting in which they had to assess the need for an OI. They expressed change fatigue and compared the OI to previous failed projects and an annual attitude survey that suffered from a lack of follow-up. Thus, the interviews provided important information about what factors employees consider before deciding whether to commit to an OI.

In the interviews we are offered explanations of how the OI fared in the practical reality of daily postal life, with hindrances such as canceled meetings, forgotten questionnaires, and unsupportive line managers. Such information is paramount in the task of providing a detailed assessment of whether an intervention as such has failed (theory failure), or whether it has not been implemented adequately to have had a chance to be effective (implementation failure; Nielsen et al.).

It allowed us to investigate not only the degree of implementation, but also which contextual factors caused the OI to function as it did. A further central quality of the interviews is that they reveal how the intervention became embedded in the larger narrative of the company and became part of the company's intervention history. How the intervention is seen by participants compared to previous similar projects is a key result of the interviews.

The aim of this paper was to examine what information about the intervention process is to be gained from quantitative (RQ1) and qualitative (RQ2) process evaluation. The results in this paper have shown that, for RQ1, the EFA identified four distinct factors in the data, providing a set of scales for potential further inquiry and comparison. The qualitative data assessed in RQ2, in contrast, demonstrated how the intervention fit the organization and provided colorful, context-specific details about the intervention.
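An EFA itself requires a statistics package, but the core idea of extracting a dominant dimension from a correlation matrix can be sketched with power iteration. This is a simplified stand-in (closer to a first principal component than to the rotated EFA factors used in the study), with an invented two-item correlation matrix:

```python
def power_iteration(matrix, steps=200):
    """Dominant eigenvalue and normalized eigenvector of a symmetric matrix."""
    n = len(matrix)
    v = [1.0] + [0.0] * (n - 1)  # deterministic start vector
    for _ in range(steps):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient gives the eigenvalue of the converged vector
    eigval = sum(v[i] * sum(matrix[i][j] * v[j] for j in range(n))
                 for i in range(n))
    return eigval, v

# invented correlation matrix for two strongly related process items
R = [[1.0, 0.8],
     [0.8, 1.0]]
eigval, eigvec = power_iteration(R)
```

The items load evenly on the dominant dimension, mirroring how clustered process items end up defining a common factor; a real EFA would extract and rotate several such dimensions.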

A central question in mixed methods research has been how data are combined and what role different sources play in analyses (Bryman; Johnson et al.). The relevance of using a thorough qualitative assessment of context and perceptions, as well as a quantitative assessment of implementation and proximal effects of change processes, seems to intuitively speak for a methodological approach in which both methods are used to approximate the details of the intervention process in question (Greene et al.).

Studies have shown the potential of mixed methods by drawing on both types of process data in combination with outcome measures to get a precise estimate of processes and effects. These studies can be seen as using a form of mixed methods, labeled by Bryman as complementary mixed methods, which demonstrates how the use of one data type (qualitative in this case) to show depth and detail can complement and nuance the results from another data type showing breadth and representativeness (quantitative in this case).

The current study, however, sheds light on specific aspects of the use of qualitative and quantitative data in mixed methods evaluations of organizational interventions. The fact that the quantitative process evaluation results presented a psychometrically valid factor structure, with constructs that were mirrored in the qualitative data, speaks for the validity of this method and highlights the following characteristics: first of all, a key quality of quantitative measurement is that researchers can gain valuable information about key issues from a large proportion of the sample using few resources.

If intervention outcomes are measured using pre- and post-intervention questionnaires, one should not overlook the practicality of also measuring the process using questionnaire items. Compared to conducting lengthy interviews or focus groups, it is convenient for respondents to also answer a number of process questions that measure key constructs known to be relevant for implementation and that can be linked to quantitative outcome evaluation (Murta et al.).
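A standard first check on such process scales is internal consistency. A minimal Cronbach's alpha using only the standard library (the item responses below are invented):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item response lists, one inner list per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]
    item_var = sum(pvariance(it) for it in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# two hypothetical process items answered by four respondents
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 2, 4, 4]])
```

Values above roughly .70 are conventionally taken as acceptable for research scales, which is the kind of evidence needed before linking process scores to quantitative outcome evaluation.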

Several studies have shown that interventions do not necessarily affect the entire intervention group, or have similar effects in all subgroups (Nielsen et al.). The use of quantitative data also enables comparison of implementation measures across different contexts or intervention instances, which is a substantial quality of quantitative process evaluation data.

First and foremost, the qualitative interviews provided a more detailed, narrative, contextual account of the themes identified in the factor analysis, which gives the reader a richer understanding of the intervention and its context than the quantitative methods do.

The qualitative data shed light on how organizations and their members do not exist in a historical vacuum; the intervention is compared to past activities and concurrent events. Qualitative data are also central for conducting a thorough process evaluation because aspects not measured in the quantitative questionnaires are likely to affect the results.

This was seen in quotes where the employees explained nuanced aspects of line managers' actions, how line managers were focusing on other aspects, and how information was somehow both needed and yet not wanted badly enough to call for action.

Complex aspects of organizational reality, such as these, need to be uncovered using qualitative assessment, as quantitative methods have difficulty illuminating them. Similarly, the interviews revealed substantial insecurity about which outcomes were related to which activities, a problem that is not easily assessed with questionnaires. Identifying such problematic gaps in implementation is a key benefit of explorative qualitative assessment that helps push the implementation and evaluation of OIs further.

Another issue was how employees were focused on the increasing problems of downsizing and organizational change in the postal service. Conducting interviews in which questions were posed about the general state of the organization made it possible to analyze how the changes were perceived, and hence how they might influence the outcome of the OI.

The results from this study first of all confirm the relevance of, and need for, applying mixed methods designs to the process evaluation of organizational interventions, as different methodological tasks are better handled by different methods. Though this study demonstrates that it is possible to combine data sources into a mixed methods analysis of specific constructs, it also puts weight behind the argument that each method would be suboptimal on its own (Greene et al.).

A key aspect of intervention evaluation projects is that they are linked to time-limited events. In contrast to the parallel design, the results from this study suggest that there are potential benefits from sequentially harnessing methods to improve the evaluation, or even using reiterative cycles of mixed methods application (Nastasi et al.).

The results from quantitative analyses can be used to guide not only qualitative analysis, as was done in this study, but also qualitative data collection, to ensure that specific aspects that have been found puzzling are qualitatively uncovered (Nastasi et al.). Likewise, interviews can be used to guide survey development, both to select items and scales and to develop tailored items based on interview content.

The question is hence not whether mixed methods should be used, but which mixed methods design is most appropriate. Here a starting point could be to examine the program theory (Pawson) underpinning the OI and consider which aspects are most appropriately and comprehensively covered by different methods. The present study used data from an OI conducted in two regions of one company. Though this is a clear limitation of the generalizability of the results, the fit with general findings in the literature suggests that the results are still usable for other researchers.

As this is a study of evaluation methods, generalizability of the concrete findings is not a key quality of the study, and we therefore consider the amount of data adequate. Another limitation is that the process data collection in the intervention was very thorough in the qualitative part but perhaps not as thorough in the quantitative part, where only 16 items were used to measure the process.

The quantitative results presented a limited picture of the intervention, but more complex analyses might have been legitimate had we included more items. The survey was conducted after the interviews, and hence the adaptation of the IPM would be influenced by crucial elements of the interviews. We suggest that researchers venturing into mixed methods evaluation designs carefully consider which aspects of the intervention process should be assessed by which data collection method.

Qualitative process data have the potential to tie together the meaning, context, and narratives of the intervention and the organization. Both are applicable in OI evaluation, but researchers must use them wisely to harness their strengths, as they have different methodological presuppositions and answer different questions. JA and KN conducted the intervention and collected the data for the study. JA wrote the draft of the paper and conducted the qualitative and quantitative analyses.

PS and KN contributed substantially to its development, refinement of the analyses, presentation and discussion of the results. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Front Psychol. Published online Sep. Johan S., Saksvik, and Karina Nielsen. This article was submitted to Organizational Psychology, a section of the journal Frontiers in Psychology.

Received Jul 14; Accepted Aug. Abstract: Organizational interventions aiming at improving employee health and wellbeing have proven to be challenging to evaluate.

Keywords: organizational interventions, qualitative methods, quantitative methods, research methodology, mixed methods, process evaluation. Introduction: The evaluation of organizational interventions targeting employee health and wellbeing has been found to be a challenging task (Murta et al.).

Mixed Methods OI Evaluation: Though OI evaluation has historically focused on whether interventions improve working conditions on quantitatively measured outcomes (Griffiths), mixed methods approaches have become a commonly chosen evaluation design.

Quantitative Process Evaluation Data Collection: A commonly used way to quantify perceptions of intervention processes is the development and use of process evaluation scales (Havermans et al.). This will contribute to our understanding of how process questionnaires are best put to use in the evaluation of complex OIs, and we hence pose the following research question: Research question 1: What information about the intervention process is gained from quantitative process evaluation?

Qualitative Process Evaluation Data Collection: The other approach, qualitative evaluation, is based on collecting and analyzing data of a very different nature. To assess the characteristics of the knowledge gained from conducting process evaluation interviews, we aim to analyze the same constructs identified in the quantitative analysis to make comparison possible, and pose the second research question: Research question 2: What information about the intervention process is gained from qualitative process evaluation?

Quantitative Evaluation Process Items: The process questionnaire contained 22 items based on the IPM questionnaire, but tailored to the specific context as recommended by Randall et al.


