L’évaluation des programmes de main-d’oeuvre : Observations méthodologiques

Jean Sexton

Volume: 28-3 (1973)

Abstract

Evaluation of Manpower Programs: Some Methodological Notes

The purpose of this article is to present, in French, a survey of the literature on the evaluation of manpower programs. To do so, the origin, the nature, the methodology and the limits of such evaluative efforts are presented in turn.

Although it dates back to 1897, the preoccupation with evaluation has enjoyed renewed popularity in the manpower field since the mid-1960s. The movement originated in the U.S.A. and has been spreading, and still is, to other North American jurisdictions and to certain European countries.

Public dissatisfaction, limited public funds, competing needs, changes in the nature of social problems, increased professionalization of the public sector, a more educated and more sophisticated public, and many obvious failures are some of the reasons behind this increased popularity of evaluation research. The purpose of evaluation is thus to supply information leading to better decision making.

Fundamentally, evaluation is a subjective and complex process. In essence, evaluation is an appraisal of the extent to which a program has met its objectives, and of why the desired result was or was not fully achieved. The extent to which scientific methods are used in making that judgment will reduce, but not eliminate, the subjectivity involved in any evaluation effort. Evaluative research is therefore one form of applied research.

The first step of any evaluation is generally accepted to be the identification of the program's objectives. While this may seem theoretically obvious and easy, many difficulties are met in such an exercise, especially in the manpower field, because few manpower programs have a simple, clearly defined single objective or even a dominant one. The evaluation process then requires that the identified objectives be operationalized, since success is rarely directly observable.

The methodology of evaluation requires that the type of evaluation to be done be chosen and that a control group be defined. As to the type of evaluation, it must be noted that there are as many kinds of evaluation as there are criteria for judging the success of a program: input evaluation, output evaluation, adequacy-of-performance evaluation, cost-benefit evaluation, formative, summative, absolute and relative evaluation, etc.

The most widely accepted methodological requirement of evaluative research is undoubtedly the need for a control group. Indeed, in addition to determining whether a program has met its objectives, it must be established that such results were in fact due to the program being evaluated. A control group is therefore used as a proxy for the absence of the program. While there is general acceptance of the need for a control group, no consensus exists as to the method of choosing the members of such a group. Ideally, such members should be similar in all respects to those exposed to the program, with the sole exception of program participation. This ideal is surely impossible to meet fully, and second-best methods have been developed for constructing control groups: random assignment of individuals to an experimental and a control group, before-and-after measures, and comparison groups are some of them. The most popular, however, is to choose qualified and interested non-participants. Because perfect comparability between the two groups is impossible, an effort must be made to establish the (statistical) degree of comparability. Moreover, the experimental and control groups must be independent, a requirement that may again be difficult to fulfil.
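
As a concrete illustration of this design, the sketch below (not from the article) randomly assigns a hypothetical pool of applicants to an experimental and a control group and then checks how comparable the two groups are on a pre-program characteristic. All names, figures and the rule of thumb in the final comment are assumptions made for illustration only.

```python
# Illustrative sketch only: a minimal random-assignment design of the kind the
# abstract describes. The data and variable names below are hypothetical.
import random
import statistics

random.seed(42)

# Hypothetical pool of qualified applicants, each with a pre-program
# characteristic (e.g., prior annual earnings) on which the groups should match.
applicants = [{"id": i, "prior_earnings": random.gauss(20_000, 4_000)} for i in range(200)]

# Random assignment: each applicant has the same chance of entering the program
# (experimental group) or serving as a proxy for its absence (control group).
random.shuffle(applicants)
experimental = applicants[: len(applicants) // 2]
control = applicants[len(applicants) // 2 :]

# Rough comparability check: difference in group means of the pre-program
# variable, expressed in units of the pooled standard deviation.
def mean_sd(group):
    values = [a["prior_earnings"] for a in group]
    return statistics.mean(values), statistics.stdev(values)

m_exp, sd_exp = mean_sd(experimental)
m_ctl, sd_ctl = mean_sd(control)
pooled_sd = ((sd_exp ** 2 + sd_ctl ** 2) / 2) ** 0.5
std_diff = abs(m_exp - m_ctl) / pooled_sd

print(f"Experimental mean: {m_exp:,.0f}  Control mean: {m_ctl:,.0f}")
print(f"Standardized difference on prior earnings: {std_diff:.2f}")
# A small standardized difference (commonly taken as below ~0.1) suggests the
# groups are comparable on this characteristic before the program; perfect
# comparability, as the abstract notes, can never be guaranteed.
```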

This suggests that manpower program evaluation has many limitations. The presence of subjectivity, the dynamic character of the programs evaluated, the numerous criteria of success, objectives that are often badly defined, the problems of building operational definitions, the particular methodological difficulties of the control group, and the general methodological problems of any applied research are some of them. Moreover, serious limits may stem from the evaluator himself, and from the fact that many programs are evaluated at the same time. Finally, a way to truly evaluate externalities or third-party effects has yet to be found.

Experience suggests that manpower program evaluation is difficult, time-consuming, and expensive. Moreover, its results may be difficult to interpret and its methodology is often described as poor. Manpower program evaluation therefore has a bad reputation in many circles, which surely limits its use as an agent of change. While further work is needed, and will surely be done, to improve the quality of evaluation research methodology, work also remains to be done to encourage better use of evaluation results.