
Comments on "Tertiary Entrance in Queensland: A Review"*

Barry McGaw
Australian Council for Educational Research
*This paper was commissioned by the Joint Advisory Committee on Post-secondary Education in Queensland. It provides an assessment of the report Tertiary Entrance in Queensland: A Review (Working Party on Tertiary Entrance, 1987). It deals with specific issues considered by the Working Party, examining the arguments and recommendations advanced and offering personal recommendations. Where relevant, attention is drawn to debates and developments in other Australian systems.


In its report Tertiary Entrance in Queensland: A Review, the Working Party on Tertiary Entrance (1987) provides an excellent review of the general issues in the selection of students for tertiary education and the particular issues to be faced in the Queensland system. The Working Party's recommendations respond directly and systematically to the issues and also take good account of experiences in other Australian systems. They have the potential to deal effectively with most of the concerns in Queensland and to avoid some of the new problems emerging in other systems following recent reforms.

The Working Party's report begins with a catalogue of the issues raised in the submissions it received (pp. 13-30). This provides a useful listing of the issues that the Working Party had to face. With some reorganisation, the major issues from that list are addressed in this paper.

Constraints on the Secondary Curriculum

There has been a quite extraordinary growth throughout Australia in recent years in retention rates to Year 12. This is true of all systems, as Figure 1 shows, but none has a growth rate exceeding that of Queensland. The levels are higher in the Australian Capital Territory but that is often thought to be due to the nature of the ACT population. There are special features of the ACT population, of course, but Williams (1987) has clearly shown that by no means all of its higher rate is due to these. Part, at least, appears to be due to characteristics of the ACT education system, a key feature of which is a pattern of curriculum provision and assessment similar to that of Queensland.

Fig. 1: Retention Rates to Year 12

The diversity of curriculum offerings, which is both cause of and response to this enrolment growth, is revealed in the Working Party's report (p.131). The 15 most popular subjects account for the enrolment patterns of only 50 per cent of the students whose subject choices make them eligible to compete for a place in higher education. Twenty subjects account for 80 per cent. This is reported to be a more diverse set of enrolment patterns than those of other systems.

To be eligible to receive a tertiary entrance score, students are obliged to choose five of their subjects from the set of 43 Board subjects. Since most students take six subjects, this gives those seeking admission to higher education the opportunity to take one from the further sets of Board-registered school subjects and TAFE subjects. To give students further opportunities to take subjects from these additional sets, the Working Party recommends that students taking as few as three Board subjects be eligible for admission to higher education. As enrolments in Years 11 and 12 continue to grow, this should allow students less certain of their intentions to keep open the option of higher education without committing a minimum of five-sixths of their study to courses that are oriented towards higher education.

The Ministerial Working Party on School Certification and Tertiary Admissions Procedures (1984) in Western Australia and the Ministerial Review of Postcompulsory Schooling (1985) in Victoria made similar proposals. It has now been proposed that South Australia follow suit (Gilding, 1988). Western Australian practice is now to allow students seeking admission to higher education to take as few as three subjects from the set that could count for tertiary entrance. Students must achieve a satisfactory performance in a full set of six subjects, so they cannot simply concentrate on the three that will contribute to their tertiary entrance score. Some benefit is provided for students taking more than three subjects from the tertiary entrance set, but it is limited and depends on a somewhat arbitrary requirement that the average be based on at least one subject from each of the humanities/social studies and quantitative/science sets.

The Queensland Working Party offers a better strategy for encouraging students to consider taking more than the minimum number of subjects from the Board set. It proposes an averaging system that provides a bonus for taking more than the minimum of three subjects (12 semester units) from that set. Provided the bonus is not too great, students for whom a more varied pattern is appropriate should not be discouraged from taking as few as three Board subjects and including others instead. Provided it is not too small, students who would be best served by taking the traditional five subjects (20 semester units) from the set should continue to do so.

As a further reduction in constraints upon students, the Working Party recommends that only performance in Year 12 be considered in obtaining the assessments for tertiary entrance scores. The Western Australian Ministerial Working Party (1984) suggested a similar alteration from a system in which external examinations at the end of Year 12 had covered the curriculum for Years 11 and 12. In South Australia and Victoria, where the assessments have been only for the Year 12 curriculum, recent reforms have moved towards an integration of Years 11 and 12. That is, systems that have treated Years 11 and 12 as an integrated upper secondary program are separating the two years while those that had treated them separately are moving to integrate them. The Queensland and Western Australian proposals, while focusing assessment on Year 12, do not abandon the notion of a curriculum co-ordinated over the two years so there may be no strong warnings in the new moves in Victoria and South Australia.

The most constraining influence on the enrolment options for some students is the need to take prerequisites for particular courses in higher education. The Working Party suggests that no higher education course should set more than three subjects as prerequisites. The text implies that some presently set five. Whether or not all come down to three, it is appropriate to urge a review. Higher education institutions can too readily set prerequisites with insufficient consideration of their necessity or of precisely how they will be built upon in the higher education courses.

A less direct but nevertheless powerful constraint on enrolment options for stronger students arises from the competition for places in the most selective higher education courses. Views develop about which subject combinations maximise the chances of a high tertiary entrance score and result in pressure upon students to take those combinations. The result is often a narrow focus upon mathematics and science even though the sought-after course may not involve predominantly mathematics and science. The Working Party recommends that higher education institutions consider delaying selection for these courses until after one year of higher education.

The Working Party should be supported in its general recommendation that constraints from higher education on Years 11 and 12 be minimised (Recommendation 2) and its specific recommendations for using only Year 12 assessment (Recommendations 4, 24, 25), for allowing as few as three Board subjects in a package giving eligibility for admission to higher education (Recommendation 5), for reducing the number of prerequisites for any higher education course (Recommendation 15), and for delaying selection for the competitive entry courses (Recommendation 14).

Multidimensionality of Data

Until the mid-1960s, no attempt was made in Queensland to reduce students' results in the Senior Examination to a single index. Individual subjects were graded on an A, B, C, N scale and matriculation was achieved if a student obtained some minimum number of passing grades, of which some minimum had to be at B level. The grades were awarded normatively (i.e. there were guidelines about the proportion of students that could receive each grade level), so the minimum profile requirement in a loose way controlled the numbers being 'matriculated'. There was no publicly defined quota and probably no public perception that the system operated through a loose de facto definition of a quota.

When the grading system was changed from letters to numbers on a 7-to-1 scale, the definition of 'matriculation' was changed to a minimum aggregate of best five results. It was initially set at 21 which, since the numerical grades were allocated normatively within subjects, roughly determined the proportion of students to be matriculated. As the number of students completing Year 12 rose in the late 1960s, the minimum requirement was raised to 24 to take a smaller proportion of the now larger numbers of candidates and so contain the growth in higher education enrolments.

While numerical grades were then being aggregated to determine eligibility for enrolment in higher education, actual examination scores had all along been aggregated to produce a more fine-grained order-of-merit for the award of Commonwealth Scholarships. Both approaches sought to reduce each student's pattern of performances to a single index on a common scale. A similar move away from the use of profiles of performance to the use of orders-of-merit based on unidimensional scales occurred in all Australian systems except Tasmania.

In all of those systems, there is now a growing debate about the appropriateness of seeking to create an overall order-of-merit of all students. This is essentially an argument that the performance data are multidimensional and that too much information is lost if they are reduced to a single dimension.

A single order-of-merit for the total Year 12 candidature has the obvious appeal of simplicity of use for those concerned with admission to higher education. For students, it has a further benefit. The aggregates on which the order-of-merit is established are typically formed without substantial restriction on the subject results that may contribute to them. This gives students considerable freedom in their choice of subjects.

The potential deficiencies of a single order-of-merit arise from the multidimensionality of the subject results that are combined in producing the aggregate on which it is based. There are two questions to be addressed in deciding whether the aggregate provides an adequate expression of student performance. The first is how much information is lost by reduction to the single dimension. The second is whether the information loss is equivalent for all groups of students.

The first question can be answered by determining the proportion of the total variance in subject scores that can be accounted for by a single dimension (the first principal component or, in a factor analysis, the first unrotated factor). The Working Party provides no evidence on this question though it does propose the continued use of an overall aggregate. It is likely that about 80 per cent of the variance in subject scores will be accounted for by the aggregate, so clearly it retains the bulk of the information provided in the separate subject scores.
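The calculation involved is straightforward and can be sketched briefly. The following Python fragment, which uses simulated rather than actual Queensland data, shows how the proportion of variance accounted for by the first principal component would be obtained from a matrix of subject scores.

    import numpy as np

    # Illustration only: estimate how much of the variance in a set of
    # subject scores is captured by a single dimension (the first
    # principal component). The data are simulated, not Queensland data.
    rng = np.random.default_rng(0)
    n_students, n_subjects = 1000, 5
    general = rng.normal(size=(n_students, 1))       # common ability factor
    specific = rng.normal(scale=0.5, size=(n_students, n_subjects))
    scores = general + specific                      # correlated subject scores

    # Eigen-decomposition of the correlation matrix gives the proportion
    # of total variance accounted for by each principal component.
    corr = np.corrcoef(scores, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)[::-1]     # largest first
    print(f"First component: {eigenvalues[0] / eigenvalues.sum():.0%}")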

Whether it works as well for all groups of students is, however, an important additional question. For those who receive a relatively homogeneous set of results (such as mathematics and science students whose subjects are closely related) less information will be lost in the aggregation than for those whose results are less homogeneous (such as humanities and social studies students whose subjects typically vary more). The consequence is that the aggregates of the mathematics/science students will be more widely distributed than those of humanities students. Mathematics/science students will tend to predominate at the top and the bottom of the distribution of aggregates to an extent that is not justified by performances alone.

Where there are gender differences in the pattern of enrolments, or in the pattern of subjects for which results are counted in the aggregate, there can be gender differences in the benefits of aggregation. In the ACT, where aggregates were produced in a manner similar to that in Queensland, there was evidence of a bias against girls. There were more males than females in mathematics and science subjects but, more importantly, males were much more likely than females to have mathematics and science results in the set of best results contributing to their aggregates. The Committee for the Review of Tertiary Entrance Score Calculations in the Australian Capital Territory (1986) concluded that what appeared as a gender bias was actually a subject choice bias favouring mathematics and science students.

A further question about a single aggregate is whether it is appropriate for the range of different admissions decisions required. The aggregate is likely to predict performance in some higher education courses better than others. In some systems, aggregates are calculated in different ways for different purposes, presumably to obtain an aggregate with better predictive validity for specific purposes. In Victoria, for example, a general aggregate is formed from students' best four results plus 10 per cent of the fifth. (A further 10 per cent of any additional results is added as a bonus for those taking more than five subjects.) Selection for the two medical faculties is based on a different aggregate formed from English, chemistry, the next best two results (with the restriction that only one may be a mathematics result) and then 10 per cent of any further results (usually one only and including the second best mathematics result if two mathematics subjects are taken).
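The arithmetic of the Victorian general aggregate can be illustrated with a short sketch; the scaled results used below are invented for the example.

    def victorian_general_aggregate(results):
        """Best four results counted in full, plus 10 per cent of the
        fifth and of any further results, as described above."""
        ordered = sorted(results, reverse=True)
        return sum(ordered[:4]) + 0.1 * sum(ordered[4:])

    # Six subjects: best four in full (287), 10% of the remaining two (11.5).
    print(victorian_general_aggregate([78, 74, 70, 65, 60, 55]))  # 298.5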

This use of multiple aggregates can reduce the problem of the single order-of-merit being differentially valid for selection for different higher education courses. It does not avoid the problems of multidimensionality of the subject results. The selection for each higher education course is still based on consideration of only a single aggregate. The different aggregates provide somewhat different general measures of achievement for different courses but each involves reduction of different subject results for different students to the single dimension defined by the aggregation rules.

The recent review in the Australian Capital Territory (Committee for the Review of Tertiary Entrance Score Calculations, 1986) recommended a different approach to minimising the problems of multidimensionality in the data. It suggested the creation of two aggregates for each student, one as a general measure of quantitative performance and the other as a general measure of verbal performance. The quantitative aggregate was to be formed from results in mathematics and science subjects and the verbal aggregate from results in humanities subjects. Since ACT assessments are school-based, the subject assessments were to be scaled first against the relevant component of the Australian Scholastic Aptitude Test (ASAT). Some subjects were still to be scaled against the total ASAT. Mathematics and science subjects were to be scaled against the quantitative component of the ASAT and humanities subjects were to be scaled against the verbal component of the ASAT. For this purpose, ASAT was to be supplemented by the addition of a test of writing to give a fuller assessment of verbal ability.

The writing test has been added to the ASAT for the ACT and the idea of scaling subjects against the relevant ASAT subscore instead of scaling all subjects against the ASAT total score has been accepted. Students' results are still finally reduced to a single aggregate, however, pending further investigation of the properties of the proposed quantitative and verbal aggregates.

The Queensland Working Party proposes the use of five aggregates. The first four are different weighted combinations of the same subject results. The fifth is determined by scores on the common scaling test, proposed to be the Australian Scholastic Aptitude Test (ASAT).

The first is an overall aggregate from which the Overall Achievement Position (OAP) is to be derived. It is essentially the same as the current aggregate on which the Tertiary Entrance Scores (TES) are based. The second, third and fourth aggregates are to be derived from the same subject results as the overall aggregate but as different weighted combinations defined to reflect the extent to which results in the subject are likely to depend on specific skills. The skills are 'use of written English expression', 'use of symbolic data manipulation (symbolising)' and 'involvement in the praxis of the subject (practical activities)'. From these aggregates, three Specific Achievement Positions (SAPs) are to be established. The weight of a subject in the combination that defines each aggregate is proposed to range between one and seven depending on judged relevance of the skill to performance in the subject.
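How such weighted combinations might operate can be sketched as follows. The subjects, scores and weights below are invented, and the use of a weighted mean is one plausible reading of the proposal rather than the Working Party's own specification.

    # Hypothetical subject results for one student (scaled scores).
    subject_scores = {"English": 72, "Maths I": 80, "Chemistry": 68}

    # Invented weights, one set per skill, each between 1 and 7 to reflect
    # the judged relevance of the skill to performance in the subject.
    skill_weights = {
        "written expression": {"English": 7, "Maths I": 2, "Chemistry": 4},
        "symbolising":        {"English": 2, "Maths I": 7, "Chemistry": 6},
        "practical":          {"English": 1, "Maths I": 3, "Chemistry": 7},
    }

    def weighted_aggregate(scores, weights):
        # A weighted mean keeps aggregates comparable across students
        # taking different numbers of subjects.
        total = sum(weights[s] for s in scores)
        return sum(scores[s] * weights[s] for s in scores) / total

    for skill, weights in skill_weights.items():
        print(skill, round(weighted_aggregate(subject_scores, weights), 1))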

The three SAPs are unlike the proposed quantitative and verbal aggregates in the ACT in one important respect. The ACT aggregates are intended to be based on combinations of different subjects (though some subjects could contribute to both). The proposed Queensland SAPs are all based on the same subjects since subject weights for each combination are all non-zero, ranging from one to seven.

The discussion of the relationship between the OAP and the SAPs in the Working Party's report and in the subsequent paper (Maxwell and Allen, undated) responding to criticisms of the use of the scales (Sadler, 1987a, 1987b) is confused, or at least confusing. It is suggested in the report (pp.107ff) that the OAP be used to reveal 'global' differences among students and that the SAPs be used to reveal 'regional' differences. More significantly, it is suggested that neither class of aggregate could be used appropriately for the other purpose. Maxwell and Allen (undated, p.4) assert that the only use appropriate for the SAPs is to 'unpack the differences, of subject choice and differential patterns of achievement, between students with the same overall achievement'. Such a restricted use could be adopted as a deliberate policy but it is by no means a consequence of the psychometric or conceptual properties of the subscales.

It is clear that the Working Party values the minimisation of influence on students' subject choice that a single overall aggregate (such as the OAP) allows. To sustain that position one need only defend against the arguments about the possibility of bias in an overall aggregate in favour of some groups of students (such as mathematics and science students or, indirectly, males). The Working Party claims that the SAPs are on different dimensions from that of the OAP. This claim is made directly in the assertion that the SAPs contain information not exhausted in the OAP (pp.116-117) and in the discussion of a 'dimensions system' (pp.146-149). It is made indirectly in the assertion that the SAPs do not simply provide a finer scale on the OAP dimension to which appeal might be made in the case of ties.

It seems unreasonable to claim that the three SAPs define dimensions different from that of the OAP and then to claim that the SAP dimensions provide meaningful comparisons only among students with identical OAPs. The SAPs clearly offer dimensions on which comparisons could be made across the whole performance range. The Working Party wishes to avoid such use to preserve the predominant use of the overall scale for its potentially lower impact on student subject choice. Not using the SAPs for full-range comparisons can be urged as a matter of policy; it cannot be ruled out as infeasible or psychometrically inappropriate.

In fact, by allowing the use of the SAPs as supplementary measures, the Working Party runs the risk of introducing the very 'backwash effects' it hopes to avoid. If the 'symbolic data manipulation' SAP were to be used as the selection measure for engineering and physical science, it would probably strongly influence students to take those subjects that would have the highest weights on this dimension. Even if it were not used as the primary selection measure, but only for discrimination among students at the margin, students may be no less influenced in their subject choices since they could not be confident that they would not be at the margin for selection when the time comes.

The Working Party's proposed sequential use of the information, involving dismissal of information about within-band differences on one scale once appeal is made to the next scale, introduces the possibility of serious anomalies of the type to which Sadler (1987a, 1987b) draws attention and which Maxwell and Allen (undated) in their rejoinder fail to dismiss. The information provided by the OAP and the SAPs could be used sequentially with the supplementary information from additional scales being used only for discrimination at the margins. For those marginal cases, however, it makes much more sense to use a combination of the information from all relevant scales (including the OAP) than to appeal only to the most recently introduced scale.

The Working Party should be supported in its proposal that multiple scales be produced (Recommendations 1, 18, 19, 28, 29, 30, 40, 41) and in its proposal that subject results be combined to produce aggregates in a manner that takes account of achievement levels in the subject and gives a bonus if more than the equivalent of three subjects are taken (Recommendations 7, 39). The notion of using information from a global index and more specific indices of achievement sequentially (Recommendation 10) should be supported, but not in the manner proposed. All relevant information should be used at the margin, not just that which is most recently added for consideration.

Scaling of School Assessments

Before the abolition of external examinations in Queensland there was no scaling of assessments between subjects. When matriculation was achieved by obtaining a profile of letter-grade assessments with a sufficient mix of Bs and Cs, it was clearly easier to matriculate with some combinations of subjects than others because the proportion of students obtaining each letter grade was roughly the same for all subjects regardless of the selectivity of their candidatures. Even the subject scores aggregated to produce the order-of-merit for the award of Commonwealth Scholarships were not scaled before aggregation. Scores in all subjects were fixed to have similar distributions, with the consequence that students had the same relative advantage or disadvantage in seeking scholarships as they had in achieving matriculation.

The rescaling of assessments between schools, which was necessitated by the abolition of external examinations and a dependence on school assessments, brought with it the benefits of rescaling between subjects. In all Australian systems which produce aggregates, some method of scaling individual subject results has been introduced in an effort to make the results in different subjects comparable. All scaling methods take account of the selectivity of the group of students taking each subject. The Working Party defines the aim of scaling to be to ensure that a student's result does not depend on the group of students with which the subject is taken. Another way to state the aim is to say that it is to give students in each subject the result they would have obtained had all students taken the subject. This would remove any disadvantage for students taking subjects in the company of predominantly less able students. No judgment about the quality of subjects per se need be implied, only a judgment that students taking different subjects differ in ability.
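The general logic of such scaling can be illustrated with a simple linear rescaling of a subject group's internal assessments against the same group's performance on a common reference test. Actual procedures differ in detail between systems, and the scores below are invented.

    import numpy as np

    def scale_to_reference(subject_scores, reference_scores):
        # Rescale internal assessments so that their mean and spread match
        # the subject group's scores on the common reference test, while
        # preserving the internal rank order within the subject.
        subject_scores = np.asarray(subject_scores, dtype=float)
        reference_scores = np.asarray(reference_scores, dtype=float)
        z = (subject_scores - subject_scores.mean()) / subject_scores.std()
        return reference_scores.mean() + z * reference_scores.std()

    # A candidature that is weaker on the reference test is shifted
    # downwards while the students' internal order is kept intact.
    print(scale_to_reference([60, 70, 80], [55, 65, 75]))  # [55. 65. 75.]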

There are various critics of scaling procedures. Some oppose scaling in principle, believing that it does discriminate against some subjects and thus against students for whom those subjects are appropriate. The abolition of scaling would, however, re-introduce the inequities that characterised the earlier system when there were clear benefits in taking subjects in the company of less able students. Other critics accept the necessity for scaling but doubt the adequacy of scaling against a general measure such as the Australian Scholastic Aptitude Test (ASAT). The Working Party acknowledges the possibility of improving upon the existing ASAT or of replacing it but recommends that it continue to be used until some clearly superior alternative is identified. It recommends that a writing test be added as it has already been in the ACT.

The Working Party should be supported in its proposal that scaling be continued using ASAT (implied in Recommendation 2 and declared in Recommendations 32, 36, 42), in its proposal that ASAT be broadened through the addition of a test of written expression (Recommendation 37), and in its proposal that clear and publicly declared procedures for checking for anomalies in the scaling process be established (Recommendation 38).

Number of Units on Scales

There are several considerations in determining the size of units on a scale. If there are too few units (or bands, in the terminology of the Working Party), then there will be significant differences among individuals assigned to the same band. If there are too many bands, the band locations assigned will imply an unjustified degree of precision in the data.

For the OAP and the four SAP scales, there are five underlying aggregates in terms of which students can be classified into bands on the scales. The Working Party recommends that there be 20 bands on the OAP scale and 10 bands on each SAP scale and that equal numbers of students be allocated to each band on a scale. Each 'score' on the OAP scale will then represent five percentile bands and each score on a SAP scale will represent 10 percentile bands.
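The proposed allocation amounts to ranking students on the underlying aggregate and slicing the ranking into equal groups. A minimal sketch, with simulated aggregates and ignoring the treatment of ties at band boundaries, is as follows.

    import numpy as np

    rng = np.random.default_rng(1)
    aggregates = rng.normal(size=1000)     # simulated underlying aggregates

    def band_positions(aggregates, n_bands):
        # Rank students (0 = best), then cut the ranking into n_bands
        # groups of equal size; band 1 is the highest-performing band.
        ranks = np.argsort(np.argsort(-aggregates))
        return ranks * n_bands // len(aggregates) + 1

    oap_bands = band_positions(aggregates, 20)
    print(np.bincount(oap_bands)[1:])      # 50 students in each of 20 bands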

If a band width is equal to two standard errors of measurement on the underlying aggregate, the minimum probability of a correct classification would be 0.48 and the maximum probability of an incorrect classification by two or more bands would be 0.02 (Sadler, 1987b). A larger band width would give a greater probability of correct classification and a smaller probability of incorrect classification by two or more bands, but misclassification by even a single band would have greater significance. That is, boundary problems for those who just miss the cut for a band are more significant the fewer bands there are.
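These figures follow directly from a normal model of measurement error. The fragment below reproduces them, assuming an error with standard deviation of one standard error of measurement (SEM) and taking the worst case of a true score lying at a band boundary.

    from scipy.stats import norm

    # Worst case: the true score sits at the edge of its band, so correct
    # classification requires the error to fall within the band, i.e.
    # between 0 and +2 SEMs in the direction of the band's interior.
    p_correct_min = norm.cdf(2) - norm.cdf(0)
    print(f"Minimum P(correct band): {p_correct_min:.2f}")      # ~0.48

    # Misclassification by two or more bands requires an error beyond
    # 2 SEMs in the opposite direction, past the whole adjacent band.
    p_two_plus_max = 1 - norm.cdf(2)
    print(f"Maximum P(off by 2+ bands): {p_two_plus_max:.2f}")  # ~0.02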

The Working Party's proposal is not just that the numbers of bands be set at 20 and 10 as indicated but that the more detailed information, on which the classification of students into bands is based, not be used further. If it is not to be used further then the Working Party's proposals regarding sequential use of the band information must prevail. Without recourse to the underlying information from the aggregate scale, students in the same band must be treated as tied precisely and also as equally different from all students in the band above or the band below. This is clearly not the case, particularly towards the two ends of the scales. The Working Party's proposal that bands contain equal numbers of students means that the extreme bands contain a much wider range of performances than do the more central bands. The assumption of equality of performance of all students in the same band is clearly less tenable in the extreme bands (the top and bottom three on the 20-band scale), as the Working Party report admits (Appendix 5).

The idea of reporting performances on a scale with fewer bands than the current TE Score scale, and certainly with fewer than direct use of the aggregate scales might invite, is worthy of support. The use of rectangular distributions with equal numbers of students in each band on a scale, however, invites misinterpretation. The Working Party suggests that equal band sizes are 'easily interpretable' (Appendix 5, p.227). In fact, they are just as readily misinterpreted when it is not obvious what kind of scales they are. With TE Scores, for example, it is likely that the public interprets the difference between 980 and 990 as equivalent to the difference between 740 and 750 on some performance scale, not on a percentile scale. Both the A, B, C, N letter-grade and the 7-to-1 numerical scales that predated the TE Score scale carried meaning that related to the underlying performance scale. The difference between an A and a B was roughly the same as the difference between a B and a C on the underlying scale. These letter-grade and numerical scales were normative scales but so, of course, are percentile scales. The earlier scales had too few bands for current purposes but some compromise between them and the TE Score 'percentile' bands is worth considering. The numbers of bands that the Working Party proposes are appropriate but the rectangular distributions are less so. It is worth considering the use of distributions which are approximately normal rather than rectangular. If this change were to be made, then 'score' might be a better term than 'position' for location on the scale.
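The contrast between the two schemes can be made concrete: cutting the underlying aggregate scale into bands of equal width, rather than at equal percentile points, yields band counts that are approximately normal. A minimal sketch with simulated aggregates follows.

    import numpy as np

    rng = np.random.default_rng(2)
    aggregates = rng.normal(size=1000)     # simulated underlying aggregates

    # Twenty bands of equal width on the (roughly normal) aggregate scale:
    # few students fall in the extreme bands and many in the central ones,
    # unlike the flat 50-per-band counts of the percentile scheme above.
    edges = np.linspace(aggregates.min(), aggregates.max(), 21)
    counts, _ = np.histogram(aggregates, bins=edges)
    print(counts)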

The Working Party should be supported in its proposal that, subject to periodic review, the OAP scale have 20 bands and the SAP scales have 10 (Recommendations 22, 23) and its proposal that performances be reported only in terms of band locations (Recommendation 20). Additional information about within-band differences should be provided to higher education institutions, however, to allow for use of combined information from two scales in separating ties on the first scale (Recommendation 20). Higher education institutions should be encouraged, even directed, to use information from the two scales and not just the within-band information from the first scale for such tie breaking. The Working Party's proposal that the same number of students be assigned to each band should be reconsidered with a view to adopting a normal rather than a rectangular distribution of awards (Recommendation 21).

Reporting of Information

One of the difficulties with the current Queensland system is the potential for inconsistency between the criteria-based assessments in individual subjects that appear on students' Certificates and their TE Scores that reflect their ranking in the overall order-of-merit. The inconsistency can arise from two sources. One is an inconsistency between the broad information provided in the school assessments that result in the criteria-based assessments and the more precise information submitted by the schools as Special Subject Assessments (SSAs) on a 99-point scale for rescaling and aggregating to produce the TE Scores. The other is that there is no between-subject scaling for the criteria-based scores.

The former problem could be dealt with by obliging schools to report to students the information submitted for rescaling and aggregating. The Board of Secondary School Studies currently urges schools to do this with the SSAs. The Working Party recommends a somewhat stronger approach for the submission of the proposed Subject Achievement Indicators (SAIs) allowing the possibility that guidelines for submitting these scores (on a 1-to-99 scale) include a requirement for reporting to students.

The latter source of inconsistency cannot be removed. TE Scores are clearly normative, providing direct comparisons among students. The achievement grades for individual subjects, on the other hand, provide comparisons with defined criterion levels of performance. A rough ranking can be deduced from these criteria-based assessments but it can be inconsistent with the TE Score ranking because there is no between-subject scaling in the criteria-based assessments. There is no sensible way to avoid this potential for conflict in the present system and it will also be present in the system proposed by the Working Party.

On the timing of reporting, the Working Party advances a strong case for students to have an opportunity to review their higher education enrolment preferences after their results become available. That is the practice in all other Australian systems.

The Working Party should be supported in its proposal that students be able to change course preferences between the release of results and the determination of first round offers of enrolments in higher education (Recommendations 12, 48) and in its proposals that Subject Achievement Indicators (SAIs) be submitted in a consistent fashion by schools and that schools be given clear guidelines for submission, including a requirement of disclosure to students (Recommendations 33, 34, 35, 49).

Other Issues

Other issues considered by the Working Party are not commented on in any detail. These include the use of subquotas to cope with admission to higher education of students from sources other than Year 12, and various proposals about allocation of responsibility for tasks. None is contentious. All are worthy of support.

Conclusion

The Working Party makes a strong case that its recommendations should be seen as an integrated set, not a list of separate proposals to be accepted or rejected individually. There is logic to the argument but the case for 'all-or-none' consideration cannot be sustained. There are recommendations that should be rejected as indicated above. They can be rejected without sacrificing the principles from which the Working Party developed its analysis and proposals and without jeopardising the benefits that implementation of the other recommendations would bring.

References

Committee for the Review of Tertiary Entrance Score Calculations in the Australian Capital Territory (1986) Making admission to higher education fairer. (Chair: Dr Barry McGaw) Canberra: Australian Capital Territory Schools Authority, Australian National University, Canberra College of Advanced Education.

Gilding, K. (1988) Report of the Enquiry into Immediate Post-Compulsory Education (Report to the Minister of Education and the Minister of Employment and Further Education). Adelaide: Office of the Minister of Education.

Maxwell, G.S. & Allen, J.R. (undated) A rejoinder to the paper by D.R. Sadler, An analysis of certain proposals contained in Tertiary Entrance in Queensland: A Review ... Brisbane: Board of Secondary School Studies.

Ministerial Review of Postcompulsory Schooling (1985) Report. (Chair: Ms Jean Blackburn) Melbourne: Ministerial Review of Postcompulsory Schooling.

Ministerial Working Party on School Certification and Tertiary Admissions Procedures (1984) Assessment in the upper secondary school in Western Australia. (Chair: Dr Barry McGaw) Perth: Western Australian Government Printer.

Sadler, D.R. (1987a) An analysis of certain proposals contained in Tertiary Entrance in Queensland: A Review with particular reference to the achievement position profile and stepwise selection. St Lucia: Assessment and Evaluation Research Unit, Department of Education, University of Queensland.

Sadler, D.R. (1987b) Lexicographic decision rules and selection for higher education. St Lucia: Assessment and Evaluation Research Unit, Department of Education, University of Queensland.

Williams, T. (1987) Participation in education (ACER Research Monograph No. 30). Hawthorn, Vic.: Australian Council for Educational Research.

Withers, G. & Batten, M. (1988) For national consideration: Improving post-compulsory curriculum provision (National Curriculum Issues series, No.2). Canberra: Curriculum Development Centre.

Working Party on Tertiary Entrance (1987) Tertiary entrance in Queensland: a review. (Chair: Mr John Pitman) Brisbane: Minister's Joint Advisory Committee on Post-Secondary Education and Board of Secondary School Studies.

Please cite as: McGaw, B. (1989). Comments on Tertiary Entrance In Queensland: A Review. Queensland Researcher, 5(1), 25-44. http://www.iier.org.au/qjer/qr5/mcgaw.html

