archive-ie.com » IE » U » UCDOER.IE


  • Section 7.3 End of Semester - UCD - CTAG
    … to experience it as a whole, reflect on those experiences, and provide a considered response. As Richardson (2005) states, it makes sense to seek student feedback at the end of a particular module or programme of study. From a summative perspective this rationale holds fast: for example, a student cannot be expected to comment on all aspects of the assessment procedure if they have yet to sit their end-of-term exam, frequently the largest contributor to their final assessment grade. However, waiting until this point to gather feedback could be regarded as unethical (McKeachie & Kaplan, 1996), since it deprives students of the opportunity to benefit from any changes made to the teaching of the module or programme as a result of their input.
    Activity 7.3: What are the main advantages and disadvantages of conducting SET solely at the end of the semester, as they relate to both staff and students? (Table: advantages and disadvantages for lecturer and student.)

    Original URL path: http://www.ucdoer.ie/index.php?title=Section_7.3_End_of_Semester (2016-02-14)
    Open archived version from archive


  • Section 7.4 Post-Graduation - UCD - CTAG
    … be sought some time after graduation (Richardson, 2005). Post-graduation SETs are predominantly conducted to evaluate at the programme level (see Section 2.3.3 and Section 4.13) and are designed to allow students the opportunity to apply the skills learned during their training and to reflect on the adequacy of that training for their chosen vocation.
    Activity 7.4: The use of multiple methods in a multi-dimensional approach has been suggested in a bid to circumvent the limitations of SET (Saroyan & Amundsen, 2001). Ellery (2006) suggests a conceptual framework for how this may look; the table below provides an overview of the structure. Before reading the article, provide several suggestions for the methods and nature of feedback at each level. (Table rows: Pre-Module, Mid-Module, End of Module, Post-Module; columns: Timing, Method, Type of Feedback.)
    Resources: Ellery, K. (2006). Multi-dimensional evaluation for module improvement: A mathematics-based case study. Assessment and Evaluation in Higher Education, 31, 135-149.

    Original URL path: http://www.ucdoer.ie/index.php?title=Section_7.4_Post-Graduation (2016-02-14)
    Open archived version from archive

  • Section 7: When To Do SET? - UCD - CTAG
    … circumstances of the lecturer and their immediate needs. Section 7.1 Start of Semester; Section 7.2 Mid-Semester; Section 7.3 End of Semester; Section 7.4 Post-Graduation.
    Activity 7: Before beginning, when do you usually gather data? What are the advantages and limitations of this timing? Submit your answers.

    Original URL path: http://www.ucdoer.ie/index.php?title=Section_7:_When_To_Do_SET%3F (2016-02-14)
    Open archived version from archive

  • Section 8.1 General Issues of Reliability & Validity - UCD - CTAG
    … student evaluation forms that have only the lowest level of accuracy and content validity (Hinton, 1993). Concerns over these issues are understandably amplified when the data may impact on professional progression, career prospects, or the imposition of external controls or interventions on poorly performing institutions (Rakoczy, Klieme, Bürgermeister, & Harks, 2008). In general, the discussion of the two occurs simultaneously, as though one were synonymous with the other. In reality, these are two separate issues that present different challenges for SET. Section 8.1.1 Reliability; Section 8.1.2 Validity.
    Activity 8.1: Before looking at the information in the following sections, take a moment to reflect on your own experience of SET. How would you define reliability and validity in relation to SET, and what steps have you taken to address each? Reliability in SET refers to …; steps I have taken to ensure reliability include …; validity in SET refers to …; steps I have taken to ensure validity include …. Submit your answers.

    Original URL path: http://www.ucdoer.ie/index.php?title=Section_8.1_General_Issues_of_Reliability_%26_Validity (2016-02-14)
    Open archived version from archive

  • Section 8.1.1 Reliability - UCD - CTAG
    … (Wachtel, 1998) or, as Arreola (1995) describes it, stability of student responses and consistency among responders. Murray et al. (1990) report that, in general, student ratings of academics have been found to be stable across items, raters, and time periods. Hobson and Talbot (2001) conclude that the emphasis placed on the psychometrics of instrument development has resulted in very little debate about reliability, a finding not similarly applicable to validity.

    Original URL path: http://www.ucdoer.ie/index.php?title=Section_8.1.1_Reliability (2016-02-14)
    Open archived version from archive

  • Section 8.1.2 Validity - UCD - CTAG
    … the evaluation. They must therefore be demonstrably useful for all stakeholders involved in the SET process, namely students, lecturers, and university administrators/managers. There are numerous ways in which the validity of SET measures has been accepted as relatively stable, including correlation with other credible indicators of effective teaching and measurement of the association between student ratings and student learning (Lemos, Queirós, Teixeira, & Menezes, 2010). Some have suggested that, although results are not always consistent across studies, there is general support for both the reliability and validity of student ratings as measures of teaching performance (Greenwald, 2002). However, Stake (1975) challenged the assumption that people could be objectively measured, as evidenced by the Hawthorne Effect. Coined by Landsberger in 1950, this posits that individuals instinctively alter their behaviour in some manner in response to the fact that they are being observed, and not in response to any particular experimental manipulation. While lecturers have traditionally accepted formative SET (Ballantyne, Borthwick, & Packer, 2000), there remains a general suspicion towards summative SET and its ability to fulfil its prescribed role (Light & Cox, 2001; Martinson, 2000; Sproule, 2002). At a theoretical level, this draws into question the ability to objectively and accurately …

    Original URL path: http://www.ucdoer.ie/index.php?title=Section_8.1.2_Validity (2016-02-14)
    Open archived version from archive

  • Section 8.2 Participation Rates - UCD - CTAG
    … minimum response rate should be (Palmer, 2011), though several suggestions have been made. Kidder (1981) purports that a response rate of 50% is satisfactory, while the Australian Vice-Chancellors' Committee (2001) advocates the considerably higher figure of 70%. Between these two, Johnston and Owens (2003) suggest that the minimally acceptable response rate for research published in reputable journals is 60%, and Brennan and Williams (2004) urge caution when response rates are below 30%. While this may be achievable with traditional face-to-face methods of SET, it is well established that participation tends to be lower with online surveys (Avery et al., 2006; Wongsurawat, 2011). While response rates for online SET vary, Harris et al. (2010) report that the average response rate to electronic surveys is 34%. Despite this distinct difference between online and paper surveys (see Section 4.6.3 for more information), it was found that this digital-paper dichotomy had no significant effect on evaluation ratings (Layne, DeCristoforo, & McGinty, 1999) or on non-response bias (Barkhi & Williams, 2010), and Chang (2005) reported that although rating scores were influenced by the survey method, the validities and reliabilities were not. Furthermore, Palmer (2011) proposed that a lower response rate is acceptable if there is good evidence that the sample remains representative of the desired population. Participation is therefore not a straightforward issue, and requires consideration both before and after conducting a SET.
    Activity 8.2: Given the information above, to ensure a valid representation of student opinion, participation rates must either exceed 60-70% or a representative sample of the desired population must be ensured. How would you propose to do each for both online and face-to-face SET? (Table: Face-to-Face / Online; Increase participation / Ensure representation.)

    Original URL path: http://www.ucdoer.ie/index.php?title=Section_8.2_Participation_Rates (2016-02-14)
    Open archived version from archive
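    The thresholds cited in Section 8.2 can be sketched as a small response-rate check. This is an illustrative helper only: the percentage cut-offs come from the citations in the excerpt above, but the function names and classification labels are this sketch's own.

    ```python
    # Illustrative SET response-rate helper. Threshold values are those
    # cited in the excerpt: 50% (Kidder, 1981), 60% (Johnston & Owens,
    # 2003), 70% (AVCC, 2001), caution below 30% (Brennan & Williams, 2004).

    def response_rate(responses: int, enrolled: int) -> float:
        """Return the SET response rate as a percentage of enrolled students."""
        if enrolled <= 0:
            raise ValueError("enrolled must be positive")
        return 100.0 * responses / enrolled

    def classify_rate(rate: float) -> str:
        """Map a response rate (in %) onto the thresholds cited above."""
        if rate < 30:
            return "caution: below 30% (Brennan & Williams, 2004)"
        if rate < 50:
            return "below all suggested thresholds"
        if rate < 60:
            return "satisfactory per Kidder (1981)"
        if rate < 70:
            return "acceptable for publication per Johnston & Owens (2003)"
        return "meets the AVCC (2001) 70% recommendation"

    # Example: 34 responses from 100 enrolled students, the average
    # online rate reported by Harris et al. (2010).
    rate = response_rate(34, 100)
    print(f"{rate:.0f}% -> {classify_rate(rate)}")
    ```

    A check like this only addresses the quantity of responses; as Palmer (2011) notes above, a lower rate may still be acceptable when the sample is demonstrably representative.
    
    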

  • Section 8.3 Response Error & Bias - UCD - CTAG
    … problem, since any conclusions or changes to teaching will be based on information provided only by those attending, and not by the entire group of interest (Richardson, 2005). The problem is not simply that some students do not have an opportunity to rate or recount their experiences, but that people who choose to respond to surveys differ significantly, in terms of demographic and social characteristics, from those who choose not to (Goyder, 1987). It is therefore fair to assume that students who do not engage in SET have different attitudes and experiences from those who do, and that the views of the non-responders are not made known to the lecturer. Richardson (2005) posits that inferences based upon samples with lower participation rates, or a higher number of non-responders, may be inaccurate for two reasons. While simply increasing representation or participation may seem like a straightforward solution, the ethical implications of enforcing participation, and the subsequent value of any information so provided, have been queried, and Richardson (2003) calls into question the point at which SET feedback crosses the line and becomes institutional research.
    Activity 8.3: What are the different implications of non-respondents for summative and formative feedback?

    Original URL path: http://www.ucdoer.ie/index.php?title=Section_8.3_Response_Error_%26_Bias (2016-02-14)
    Open archived version from archive