Issues In Educational Research, Vol 14, 2004

Beyond satisfaction surveys: The development of an evaluation process for a postgraduate transferable skills program

Marcia Devlin
Swinburne University of Technology
Teresa Tjia
The University of Melbourne
Leadership, professional and other transferable skills are embedded in the expected attributes of Australian research postgraduates at the successful completion of their degrees. This paper reports on the development of an evaluation process for a postgraduate transferable skills program at The University of Melbourne, Australia. Existing and emerging evaluation practices and processes are examined in light of the literature on what constitutes 'good' evaluation of the type of program under consideration. The development of the process to go beyond the commonly used participant satisfaction surveys and to improve evaluation practices is described in detail. The results of the evaluation to date are provided and discussed in terms of their usefulness in incorporating particular improvements to the program. The implications for the evaluation of other programs of this type are considered.


Introduction

As the number of Australian postgraduate students increases, questions from the wider community about the relevance of their education, particularly in terms of the economic benefit and employment prospects, have become more prominent (DETYA, 1998). Postgraduate students themselves are increasingly aware of the need to seek work in a broader range of fields outside the traditional academic and research arenas, including fields outside the one(s) in which they have studied. Even within the traditional pathways from postgraduate research degree to work and careers in research and/or academia, the skills set of many postgraduates increasingly requires an industry focus (Kemp, 1999; Wills, 1998). Over the last several years in particular, there has been increasing focus within universities on ways in which to enhance the employability of postgraduates. This focus has led to a wide range of initiatives, one of which is the development of transferable skills programs, especially for those undertaking research degrees.

Transferable skills programs

The term 'transferable skills' is often used interchangeably with 'generic skills' and refers to fundamental attributes applicable to a wide range of careers in the public or private sectors. Some of these skills include the commonly cited teamwork skills, interpersonal communication skills, problem solving ability, analytical skills and written communication skills. They can also include the less frequently cited career planning and goal setting skills, understanding of industry and the workplace, entrepreneurship and IT literacy. Transferable skills can also refer to specific skills relevant to a particular industry or workplace, such as the skills involved in managing intellectual property and commercialising innovation.

In response to the increasing demand for graduates with transferable skills of a wide variety, the University of Melbourne's School of Graduate Studies has developed a program of advanced modules in leadership and professional skills for postgraduate research students nearing completion of their degrees. This program, the Advanced Leadership and Professional Skills Program (ALPS) is aimed at providing such students with a professional skills base of the highest quality that is transferable across research, industry and the public sector. The broad objectives of ALPS include providing:

The program is made up of seven discrete modules, any number of which may be taken in any combination by postgraduate students. It was decided that in order to develop and refine the evaluation process in a focused manner, two of the seven modules would form the core of the initial investigation. The two modules in ALPS on which this investigation is focused are 'Leadership and Professional Development' (LPD) and 'Commercialising Innovation and Intellectual Property' (CIIP). These two modules in particular were chosen because they cover both generic and specific transferable skills. Once the evaluation process has been refined, it will be expanded to cover the remaining five modules. In addition, funding to run these two modules more frequently than usual was provided by a grant from the Commonwealth Department of Education, Training and Youth Affairs (DETYA, now DEST - the Department of Education, Science and Training) as part of a Science Lectureships Initiative. The evaluation of the Initiative, and therefore of the modules, was also funded through this initiative.

The LPD module has been provided by the University since 1995 as a five-day intensive program with an additional follow up day held approximately two months later. The module is designed to help participants understand the nature of leadership, how to work effectively in groups and how to plan their professional careers. The module is relevant and open to postgraduate students from all disciplines.

The CIIP module has been in operation since 2000 as a five-day program, designed to assist participants in managing and protecting intellectual property, forging industry and business links and understanding the processes involved in the commercialisation of research and technology innovations. The program aims to enhance the leadership qualities of upcoming researchers in all disciplines, but is of particular relevance to those studying in the fields of science, engineering and technology.

Existing evaluation

More than 270 postgraduate research students, most of whom have been PhD candidates, from all disciplines represented at the University, have participated in the LPD program since 1995. Since its inception, the module facilitator has asked participants to complete a daily feedback sheet during the five-day component of the program, providing their views on the effectiveness of the day's activities and their general satisfaction with the module program that day. The program is then adjusted on a daily basis, as necessary, to better meet the needs of the particular participants enrolled. This is especially important in a cross-disciplinary group where participant backgrounds, interests and needs can vary enormously.

At the conclusion of the module, each participant is asked to complete a more substantial feedback questionnaire from the ALPS coordinator within the School of Graduate Studies. This questionnaire focuses on gathering participants' immediate reactions to, and satisfaction with, the module. A similar questionnaire has been distributed to the 33 students who have participated in the CIIP module since 2000. This feedback mechanism provides the ALPS organisers with information about participants' satisfaction with the module content, format, facilitators and venue. It also provides the opportunity for students to comment on key areas of learning and outcomes, and to make suggestions for improvement of the module. The ALPS coordinator also actively seeks anecdotal feedback from students and presenters through informal conversation during and after each module. This information is collated, included in a formal report and used to make adjustments to the modules, as necessary, the next time they are run.

Participant 'satisfaction surveys' such as those used in the ALPS modules are useful for gauging participants' initial reactions and gaining a broad overall indication of their impressions of the module. But such surveys are unlikely to reveal detailed information about whether participants have, in fact, learnt anything, or whether their knowledge and skills have changed or improved. Often, as was the case with the ALPS modules, responses to satisfaction surveys are the principal or sole indicator used to make evaluative judgements about, and subsequent changes to, programs. In university based evaluation, there is often confusion between student satisfaction and student learning: 'level of student enjoyment' becomes a proxy for the measurement of 'learning outcomes'. As Alexander and McKenzie (1998) point out, although positive student attitudes and increased motivation might encourage better learning outcomes, they are not in themselves evidence of improved learning.

Hodges (2002) agrees, stating that an evaluation process based on a broad satisfaction survey does not give an adequate indication of learning. She suggests that, depending on the program, program evaluation might include three or four types or levels of evaluation. She describes four from which an evaluator can choose: reaction, learning, performance and impact evaluation. Each is explained below.

Reaction evaluation. The ultimate purpose of reaction evaluation is to derive an overall impression of participants' level of satisfaction with a program or, in this case, a module. Reaction evaluation is normally conducted immediately after a program or module, as has principally been the case in ALPS through feedback sheets, questionnaires and informal conversations.

Learning evaluation. The purpose of learning evaluation is to determine the extent to which the module has met its intended learning outcomes. The extent to which the knowledge, understanding, skills and/or attitudes relevant to the module have been acquired is tested or measured in some way. To date, this had not been occurring in evaluating the ALPS modules, but it was introduced as part of this investigation, as described below.

Performance evaluation. The purpose of performance evaluation is to determine to what extent participants can apply or transfer the learning outcomes to an appropriate site of application. One such site would be a workplace. Performance evaluation is referred to below.

Impact evaluation. Impact evaluation has a different focus to the previous three aspects of evaluation discussed. Here, the purpose is to determine the degree to which the business objectives of the module for stakeholders have been met. This aspect of evaluation is included here for completeness, but is not considered further in this paper.

Hodges (2002) suggests further that there is merit in conducting evaluation by moving sequentially through the types of evaluation, from reaction to learning to performance evaluation, because negative findings that emerge later in the process can then be traced back to the earlier sources of data to help determine why they came about. The authors of the current paper agree with Hodges about the limitations of reaction evaluation; she further asserts that "The really rich sources of information are found in measuring if the participants got it (learning evaluation) and if they can use it (performance evaluation)" (p. 6).

Evaluation within universities is commonly viewed as a measure of the effectiveness of, for example, teaching methods or learning outcomes. There is little doubt that these are worth measuring but as Bruner (1966) recognised more than 35 years ago, measurement is the least important aspect of evaluation. The most important, he says, is to provide information on how to improve these methods or outcomes. A robust evaluation process, then, has both a systematic review of the quality of a program and a procedure for putting in place the improvements identified by the review (George, 1996).

Evaluation trialled

This paper reports on the development and implementation of a trial evaluation of the LPD and CIIP modules of ALPS that went beyond satisfaction surveys. The paper also reports on the process adopted for incorporating particular improvements to both the program and the evaluation process. The central purpose of the trial evaluation was to assess the strengths and weaknesses of the program with the ultimate aim of improving the learning experiences and performance outcomes for students. The evaluation conducted is diagnostic, formative and continuing. This paper reports on the process to date.

Method

Evaluation design

Three major data gathering approaches were available for the type of survey investigation needed to assess the effectiveness of the ALPS modules in developing and highlighting skills: the personal interview, the telephone interview and the questionnaire. Of these, the questionnaire was chosen as the most cost effective and appropriate data gathering technique for the evaluative study. However, one major limitation of a questionnaire is the possibility of misinterpretation of the questions by respondents (Ary, Cheser Jacobs & Razavieh, 1990). As described below, this was addressed as far as possible through the collaborative effort to generate simple, clear learning objectives, the use of short statements and the clear wording of items. The items were formatted into a one page questionnaire. Using a pre-post design, module participants were asked to indicate the strength of their agreement or disagreement with each item at the beginning of the first session of the module and then again at its end. At the end of the module, participants did not have access to the responses they had given at the start.
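
To illustrate how such pre-post responses might be recorded for later comparison, a minimal sketch in Python follows. The file name, column names and ratings are invented for illustration and are not the actual instrument or data used in this study; only the 5-point agreement scale and the pre/post administration are taken from the description above.

```python
import csv

# 5-point agreement scale used on the one page questionnaire
# (1 = Strongly Disagree ... 5 = Strongly Agree).
SCALE = {1: "Strongly disagree", 2: "Disagree", 3: "Unsure",
         4: "Agree", 5: "Strongly agree"}

# Hypothetical layout: one row per completed questionnaire, where 'phase'
# records whether it was filled in at the start ('pre') or end ('post') of
# the module, and each item column holds a 1-5 agreement rating.
fieldnames = ["phase", "item_1", "item_2", "item_3"]

with open("module_responses.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerow({"phase": "pre", "item_1": 3, "item_2": 4, "item_3": 3})
    writer.writerow({"phase": "post", "item_1": 4, "item_2": 4, "item_3": 5})

print(SCALE[4])  # a rating of 4 corresponds to the verbal anchor "Agree"
```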

The development of a learning evaluation

The aim was to develop a trial questionnaire that indicated in some way the extent to which the program had met its specific learning objectives. Specifically, the aim was to determine the extent to which participants had acquired the knowledge and skills specified in the learning objectives of the modules.

The first step, then, was to specify the intended learning outcomes for the two modules of the program. This step was more difficult than anticipated as close examination of the existing stated objectives revealed a number of challenges. The first of these challenges was that some of the stated objectives were long and difficult to interpret. Second, some contained more than one objective. Third, some were 'motherhood' objectives and could not easily be measured. Finally, there were some aspects of the specific module content that were not encapsulated by the objectives. After a significant collaborative effort between the authors, the ALPS Coordinator and the module facilitators, a comprehensive set of short, clear, measurable learning objectives was compiled.

The second step was to use these learning outcomes as the basis of an examination of the students' perceptions of whether or not they had been achieved. Because there is no formal assessment of learning in these modules, self reports, that is, responses to the trial questionnaire, were the most valid method through which data on learning outcomes could be gathered. This part of the evaluation is based on the assumed validity of self reports. A central question, then, is: 'Can module participants be trusted to give accurate reports of their own learning?' There exists a considerable body of social science research indicating that the validity of self reports is likely to be increased when a number of conditions are met: the questions are clearly worded, refer to recent activities of which the respondents have first hand experience, do not intrude on private matters and do not prompt socially desirable responses (Kuh, 2001). The trial questionnaires constructed for this investigation met these criteria satisfactorily.

Each of the learning outcomes was turned into a statement with which module participants could indicate the strength of their agreement. For example, two of the intended learning outcomes for the CIIP program are that, on completion:

  1. Students will understand the principles of intellectual property; and
  2. Students will be able to successfully manage the process of commercialising research.

These became the statements and later, items in the trial questionnaire:

  1. I understand the principles of intellectual property; and
  2. I can successfully manage the process of commercialising research.

As part of this second step, a small number of the items in each of the two trial questionnaires were reversed in order to discourage a response set once the statements became items in a survey. For example, the positively oriented item in the CIIP questionnaire:

  1. I understand the principles of intellectual property

became

  1. The principles of intellectual property are very unclear to me.
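
One common way of handling reversed items such as this at the analysis stage is to reverse-score them so that, across all items, a higher rating consistently indicates the intended learning outcome. The analysis reported below appears to compare raw item means directly (so the expected change for reversed items is a decrease), and the following Python fragment is therefore offered only as a general sketch of the reverse-scoring technique; the function name is illustrative.

```python
def reverse_score(rating: int, scale_min: int = 1, scale_max: int = 5) -> int:
    """Reverse-score a Likert rating so that a negatively worded (reversed)
    item points in the same direction as the positively worded items.
    On a 1-5 scale, 1 becomes 5, 2 becomes 4, 3 stays 3, and so on."""
    return scale_max + scale_min - rating

# A participant who strongly agrees (5) that the principles of intellectual
# property are "very unclear" is re-scored as 1, i.e. low understanding.
assert reverse_score(5) == 1
assert reverse_score(3) == 3
```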

Administration of the survey

The way in which an instrument is administered can affect the validity and reliability of the findings generated using that instrument (Hodges, 2002). In this evaluation, the ALPS Coordinator administered and collected the surveys from participants. In order to ensure appropriate and standard administration, instructions were compiled. At the pre-module use of the questionnaire, the Coordinator:

  1. informed participants that the purpose of the questionnaire was to determine the group (and not individual) level of knowledge of content related to the module;

  2. reassured participants that they were not expected to have high levels of knowledge or understanding about the program content at this stage;

  3. informed participants that they would be completing the same survey at the end of the module;

  4. informed participants that the information gathered through the surveys would be highly valued and used to monitor and improve the effectiveness of the module; and

  5. informed participants that this was a voluntary exercise and that they were under no obligation to complete the questionnaire.

The Coordinator repeated steps 1, 3, 4 and 5 (above) when administering the post-module questionnaire and replaced step 2 (above) with:

  2. reassured participants that their responses would have no impact or bearing on their completion of, or graduation from, the module.

The development of a performance evaluation

At this stage, although the evaluation process does not include a formal performance component, it does include an informal performance component for the LPD module. At the beginning of the sixth day of the module, held about two months after the first five days, the module facilitator conducts an informal discussion with participants about ways in which they may have applied what they learnt from the earlier component. Responses tend to include anecdotes of application in participants' personal lives, lives as students and/or in the workplace. In future, it is intended that an external consultant will use this opportunity to formalise the discussion into a more objective group interview with a focus on performance evaluation.

Results

The mean scores on each item (statement) in the questionnaire were calculated and compared for survey I (pre-module) and survey II (post-module) for each of the two modules. The mean scores and significance levels are displayed in Tables 1 and 2 below.
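
As a rough illustration of how such an item-by-item comparison can be computed, the fragment below runs an independent samples t-test (the test reported in Tables 1 and 2) on invented pre- and post-module ratings for a single item; the study's actual data and analysis software are not reported here, so the values are placeholders only.

```python
from scipy import stats

# Hypothetical 1-5 agreement ratings for one item. The pre- and post-module
# groups are treated as independent samples because the trial questionnaires
# were anonymous and individual responses could not be matched.
pre_ratings = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3]
post_ratings = [4, 5, 4, 4, 5, 3, 4, 5, 4, 4]

t_stat, p_value = stats.ttest_ind(pre_ratings, post_ratings)

print(f"Pre-module mean:  {sum(pre_ratings) / len(pre_ratings):.1f}")
print(f"Post-module mean: {sum(post_ratings) / len(post_ratings):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # compare p against 0.05 and 0.01
```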

Learning outcomes

The change in mean scores for every item in both modules was in the expected direction. In addition, most of the changes in level of agreement with the statements were statistically significant. This measurement finding is encouraging as it indicates that the module learning objectives were met, in most cases to a statistically significant degree. But this measurement finding is not an end in itself. The evaluation process sought not only to measure outcomes but also to generate data that could be used to improve the modules. The next step in the process therefore involved a closer examination of each finding that was not statistically significant.

Leadership and professional development module

There were three items in the LPD module pre-post measure that did not result in a statistically significant change in the level of agreement among participants. The first was item 2, 'I can describe my preferred learning style'. There was no obvious reason for this finding. The ALPS Coordinator discussed it with the module facilitator, who was surprised by the finding, especially given the significant finding for the similar item 1, 'I can describe my preferred thinking style' and the fact that these two components are closely related in the module. Because understanding and being able to articulate one's own learning style is an essential part of the module, the facilitator decided that he would further emphasise this component in the future.

Table 1: Comparison of mean item scores before and after LPD program completed

Item | Survey I (n=32) | Survey II (n=30) | Signif. level
1. I can describe my preferred thinking style | 3.7 | 4.4 | **
2. I can describe my preferred learning style | 4.0 | 4.2 | NS
3. I am aware of the way in which I prefer to collect data | 3.6 | 4.0 | *
4. I am aware of my preferred decision making style | 3.6 | 4.3 | **
5. I am clear about the way in which I prefer to solve problems | 3.6 | 4.5 | **
6. I understand how each of my styles in thinking, learning, collecting data, making decisions and problem solving impact on other team members in the workplace | 3.0 | 4.3 | **
7. I am aware of the strengths and weaknesses of my communication strategies | 3.3 | 4.5 | **
8. I am aware of the strengths and weaknesses of my planning strategies | 3.3 | 4.1 | **
9. I understand how to analyse group behaviour patterns when necessary | 2.8 | 4.4 | **
10. I understand how to change group behaviour patterns when necessary | 2.5 | 3.8 | **
11. I find it difficult to use innovative methods to have my ideas heard and implemented | 3.2 | 2.8 | NS
12. I understand change management processes | 2.6 | 4.0 | **
13. I have clear career goals | 3.4 | 3.7 | NS
14. I have begun to develop the practical steps necessary to achieve my career goals | 3.6 | 4.2 | **
Note: * significant at the 0.05 level (independent t-test); ** significant at the 0.01 level (independent t-test); NS = not significant. Participants indicated the strength of their agreement with each statement on a 5-point scale (1 = Strongly Disagree, 5 = Strongly Agree).

The second item that resulted in a non-significant change was item 11, 'I find it difficult to use innovative methods to have my ideas heard and implemented'. There are three possible interpretations of this finding. The first is that the item was reversed, which meant that respondents had to disagree that they found something difficult. A double negative response is sometimes less clear than its simpler, positively worded alternative, and there is therefore an increased possibility of misunderstanding the item. However, since the item was not trialled in any other form, this interpretation is speculative. The second possible interpretation is that the item is double-barrelled and refers to both having ideas heard and having ideas implemented. It is possible that some respondents agreed or disagreed with one part but not the other - with 'heard' but not 'implemented', for example. Finally, it is possible that because the item referred specifically to the application or performance of methods, respondents found it difficult to predict this accurately, and this may have affected their responses.

Table 2: Comparison of mean item scores before and after CIIP program completed

Item | Survey I (n=15) | Survey II (n=13) | Signif. level
1. I understand the process of commercialising research | 2.3 | 4.3 | **
2. I can successfully manage the process of commercialising research | 1.9 | 3.4 | **
3. The principles of intellectual property are very unclear to me | 2.6 | 2.1 | NS
4. I am familiar with the strategies and processes to protect intellectual property | 2.7 | 4.0 | **
5. I am familiar with the process of applying for a patent | 2.5 | 4.0 | **
6. I have the ability to assess the commercial potential of research | 2.5 | 3.8 | **
7. I understand how to prepare and evaluate commercialisation plans | 1.6 | 3.3 | **
8. I understand how to finance commercialisation of research | 2.0 | 4.1 | **
9. I understand how to seek funding for my own research | 2.2 | 3.9 | **
10. I am aware of my own strengths and weaknesses in entrepreneurial skills | 2.8 | 3.8 | **
11. I am able to assess the culture of my workplace culture in terms of entrepreneurship, innovation and research strengths and challenges | 2.6 | 3.8 | *
12. I lack confidence in managing and leading a process of commercialising innovation | 3.5 | 2.8 | NS
Note: * significant at the 0.05 level (independent t-test); ** significant at the 0.01 level (independent t-test); NS = not significant. Participants indicated the strength of their agreement with each statement on a 5-point scale (1 = Strongly Disagree, 5 = Strongly Agree).

After discussion it was agreed that the item be changed to 'I am aware of the strategies that can help me have my ideas heard and implemented'. This changed the content focus from the application of knowledge to the simpler awareness of knowledge. This was an appropriate change as, after discussion, it was clear that the learning objective underpinning this item was more closely aligned with the overall course and program objectives. It was also agreed that this component of the module would be monitored closely by the facilitator.

Finally, there was a non-significant change for item 13, 'I have clear career goals'. This result most likely reflects the fact that career goals are complex and cannot reasonably be finalised in six days, especially given the current study circumstances of the module participants. It was decided that this item would remain as is and that this aspect of the module would be noted as one that might warrant closer examination in future.

Commercialising innovation and intellectual property module

There were two items in the CIIP module pre-post measure that did not result in a statistically significant change in the level of agreement among participants. The first was item 3, 'The principles of intellectual property are very unclear to me'. At first this finding was difficult to understand, as in the same questionnaire participants indicated that they were significantly more familiar with other aspects of intellectual property (for example, items 4 and 5) that would seem difficult if not impossible without a firm grasp of the principles. It may have been that participants were already clear about the principles before attending the workshop and that the bulk of their learning occurred in the processes for applying those principles (items 4 and 5), although this seems unlikely. An alternative interpretation was that the wording of the item was not ideal. Specifically, it was possible that some respondents misread 'unclear' as 'clear'. After seeking advice on questionnaire design (M. Anderson, personal communication, June 19, 2002), it was agreed that 'unclear' would be replaced with 'clear' for the next version of the questionnaire, as the need for reversed items in such a brief questionnaire was not pressing.

The second item that resulted in a non-significant change was item 12, 'I lack confidence in managing and leading a process of commercialising innovation'. There were three potential reasons or explanations. First, the item was reversed, which meant that respondents had to disagree that they lacked confidence. As mentioned above, there may be an increased possibility of confusion with double negative responses. However, once again such interpretation is speculative as the item was not trialled in any other form. Secondly, the item was double-barrelled and referred to leading and managing. Some respondents may have agreed with 'leading' and not with 'managing' or vice versa. Finally, it might have been that the item referred to the application of the concepts in the module and that respondents recognised that they might not be confident in applying these ideas and/or that they could not accurately predict success with application. In any case, on review of the trial questionnaire, it was clear that the content of this item had been previously covered by items 1 and 2, 'I understand the process of commercialising research' and 'I can successfully manage the process of commercialising research'. It was decided that this item would be removed from the questionnaire.

Although there was no problem with the item as indicated in the results, it was decided that item 11, 'I am able to assess the culture of my workplace culture in terms of entrepreneurship, innovation and research strengths and challenges' was repetitious. It was changed to 'I am able to assess my workplace culture in terms of entrepreneurship, innovation and research strengths and challenges'.

Discussion

As mentioned earlier, the evaluation process described here is diagnostic, formative and continuing. To date, it has incorporated the reaction, learning and informal performance evaluation described above. This process is underpinned by a preparedness to look critically at practice and a willingness to accept the possibility that improvements to the modules can and should be made.

Seven lessons in evaluation and quality assurance have emerged from the process of designing and conducting the evaluation that are likely to be useful for future attempts:

  1. The first is that determination of program objectives related specifically to learning is an essential first step in conducting an evaluation that goes beyond measuring broad satisfaction.

  2. The second lesson is that a focus on learning objectives also helps refine module curriculum, teaching and learning.

  3. The third is that satisfaction surveys (reaction evaluations) cannot reveal whether learning is occurring. This study moves beyond this ultimately limiting focus and attempts to measure more specifically the outcomes of the modules (learning evaluation).

  4. Fourth, a commitment to uncovering and acting on potential weaknesses is useful in terms of facilitating improvement. Without such commitment, evaluation is in danger of becoming a process that ignores and thereby reinforces weaknesses.

  5. Fifth, collaboration in the evaluation process is helpful. Specifically, in this investigation, the input of module facilitators helped to both sharpen their own focus on the alignment between content and learning objectives in the module, and contribute to the process of clarifying those learning objectives. The cooperation of the ALPS Coordinator was also essential for the appropriate administration of the questionnaire and for communication with the facilitators.

  6. The sixth lesson to emerge from the process is that the evaluation of skills is a continuous, cyclical process of determining strengths and weaknesses in teaching and learning, and feeding that information and understanding back into planning and curriculum development.

  7. Finally, performance evaluation is crucial to determining whether skills based programs are effective in their ultimate objective of giving graduates skills that can be transferred. The methodology employed and reported here undoubtedly highlights the strengths of the program. Overall, participants believe that they have improved understanding, knowledge and skills related to the module content and it can be safely assumed that their participation in the program has contributed to these improvements. But these findings raise the question: Can this learning be applied in a practical sense once participants return to, or enter, the workforce? To date, the evaluation process employed for ALPS has not included formal performance evaluation but such evaluation is now planned for the future.

In addition, two statistical lessons emerged from the process:
  1. The first was recognising that recoding the agreement options (strongly disagree, disagree, unsure, agree, strongly agree) as 1, 2, 3, 4 and 5, and then analysing the resulting set of numbers as if they possessed ratio or equal interval properties, was inconsistent with sound statistical principles. The resulting numbers have, at best, ordinal properties and a more appropriate treatment is therefore to collapse items into dichotomous 'Agree/Disagree' categories, which then facilitates graphical exploration (Grimbeek, 1999). It was agreed that, in order to conduct more rigorous statistical practice, dichotomous categories would be used in the next round of analyses (a brief sketch of this recoding follows this list).

  2. A second statistical lesson that emerged was that matched samples would allow a paired t-test that would give a clearer picture of how individuals have changed over the course of the module (M. Anderson, personal communication, 19 June 2002). In order to facilitate this, it was agreed that with appropriate administration of the questionnaire, as described earlier, asking participants to give their name or student number was unlikely to adversely affect the results and that this would therefore be done in the future.
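
A minimal sketch of both statistical adjustments follows, assuming matched pre- and post-module responses keyed by student number. The identifiers and ratings below are invented, and grouping the 'unsure' response with the disagree side of the dichotomy is an assumption rather than a decision reported above.

```python
from scipy import stats

# Hypothetical matched 1-5 ratings for one item, keyed by student number.
pre = {"s01": 3, "s02": 2, "s03": 4, "s04": 3, "s05": 2, "s06": 3}
post = {"s01": 4, "s02": 4, "s03": 5, "s04": 4, "s05": 3, "s06": 5}
matched = sorted(set(pre) & set(post))  # participants who completed both surveys

# Lesson 1: treat the ratings as ordinal and collapse them into dichotomous
# categories, here Agree (4 or 5) versus all other responses (1-3).
agree_pre = sum(pre[s] >= 4 for s in matched)
agree_post = sum(post[s] >= 4 for s in matched)
print(f"Agree pre-module: {agree_pre}/{len(matched)}, "
      f"post-module: {agree_post}/{len(matched)}")

# Lesson 2: with matched samples, a paired t-test examines change within
# individuals rather than comparing two unrelated groups.
t_stat, p_value = stats.ttest_rel([pre[s] for s in matched],
                                  [post[s] for s in matched])
print(f"Paired t = {t_stat:.2f}, p = {p_value:.3f}")
```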

As a result of these lessons, evaluation will now be incorporated into the planning and development of all seven existing modules, rather than considered an 'add on' as has been the case in the past. It is also intended that performance evaluation will become part of the evaluation process. This poses two significant challenges. The first is that participants are difficult to keep track of after they have completed the module(s), and particularly after they have graduated from the university. A process to establish a database for these participants is being considered.

The second challenge is collecting data from these graduates within resource constraints. Hesketh (2002) suggests exploring the role of technology in evaluation; taking this suggestion on board at the simplest level, emailing short surveys to graduates is being considered as a first step. A useful addition to the performance evaluation of the program would be formal, individual, post-completion interviews with graduates to investigate whether the 'theoretical' learning from the program can be, or has been, put into practice in the workplace; these are also being considered. In the longer term, interviews with graduates' employers about their views on such application would also be useful, particularly as this is where much of the demand for transferable skills originated.

Conclusion

The evaluation tool discussed in this paper attempts to provide an objective assessment of the learning outcomes for course participants. It builds on the commonly used reaction based evaluation of participants' general satisfaction, focuses more specifically on learning related outcomes, and provides the basis for extending evaluation further to encompass the application of learning.

Transferable skills programs may well be an effective method for enhancing the employability of postgraduate research students. As yet, however, evaluation practices and processes have not provided unequivocal evidence that this is the case. This preliminary investigation provides some indication of the sort of evaluation design that might be fruitful in determining whether learning is, in fact, occurring and if so, whether it is transferred beyond graduation.

References

Alexander, S. & McKenzie, J. (1998). An evaluation of information technology projects for university learning. Canberra: Australian Government Publishing Service.

Ary, D., Cheser Jacobs, L. & Razavieh, A. (1990). Introduction to research in education (4th ed.). Fort Worth: Holt, Rinehart and Winston.

Bruner, J. S. (1966). Toward a theory of instruction. Cambridge: Harvard University Press.

Department of Education, Training and Youth Affairs (DETYA). (1998). Research training for the 21st century. Higher Education Division: Commonwealth of Australia.

George, R. (1996). Evaluation of subjects and courses at the University of South Australia. Flexible Learning Centre: University of South Australia.

Grimbeek, P. (1999). Reasons for reconsidering quantitative research based on the use of Likert scale and other social science data sets. The Australian Educational and Developmental Psychologist, 16(2), 89-91.

Hesketh, B. (2002). The science of science teaching and learning. Proceedings of the Uniserve Science Annual Conference (pp. 3-6). The University of Sydney: http://science.uniserve.edu.au/pubs/procs/wshop7/schws001.pdf [verified 17 Mar 2004]

Hodges, T. K. (2002). Linking learning and performance: A practical guide to measuring learning and on-the-job application. Boston: Butterworth Heinemann.

Jaeger, R. M. (1988). Survey methods in education, in Jaeger, R. M. (Ed.), Complementary methods for research in education. Washington, DC: American Educational Research Association.

Kemp, D. (1999). New knowledge, new opportunities - a discussion paper on higher education research and research training. Canberra: Commonwealth of Australia.

Kuh, G. D. (2001). Assessing what really matters to student learning. Change, May/June, 10-23.

Wills, P. (1998). The virtuous cycle - working together for health and medical research, health and medical research strategic review. Canberra: Commonwealth of Australia.

Authors: Marcia Devlin is a Lecturer in Educational Development Research, Higher Education Division, Swinburne University of Technology. Email: mdevlin@swin.edu.au

Teresa Tjia is Manager, Academic Programs Team, School of Graduate Studies, The University of Melbourne. Email: t.tjia@unimelb.edu.au

Please cite as: Devlin, M. & Tjia, T. (2004). Beyond satisfaction surveys: The development of an evaluation process for a postgraduate transferable skills program. Issues In Educational Research, 14(1), 44-58. http://www.iier.org.au/iier14/devlin.html

