DOI: 10.5555/1150034.1150063
Article

Measuring students' scientific content and inquiry reasoning

Published: 27 June 2006

ABSTRACT

Recently, several important documents have promoted inquiry-based science as the main way for science to be taught and learned. In addition, advances in the measurement sciences now allow for sophisticated and complex ways to score and interpret student responses on assessment tasks. However, while many studies have shown the benefits of scientific inquiry in the classroom, and others have described new types of psychometric models available for scoring and analysis, few have combined the two to develop a better understanding of how students "know" science. This paper briefly describes an assessment system used to create items that systematically measure both students' content knowledge and two complex inquiry-reasoning skills. Then, using student responses to an assessment created with this system, we employ multidimensional psychometric models to explain the nature of the types of knowledge students draw on when encountering scientific scenarios.
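
The abstract refers to "multidimensional psychometric models" without naming a specific family. As a hedged illustration only, a between-item multidimensional Rasch model of the kind commonly fit to such data (for example, with software such as ConQuest) could take the following form, assuming three latent dimensions: content knowledge and the two inquiry-reasoning skills. For a dichotomously scored item i measuring dimension d(i):

\[
  P\bigl(X_i = 1 \mid \boldsymbol{\theta}\bigr)
    = \frac{\exp\bigl(\theta_{d(i)} - \delta_i\bigr)}
           {1 + \exp\bigl(\theta_{d(i)} - \delta_i\bigr)},
  \qquad
  \boldsymbol{\theta} = \bigl(\theta_{\text{content}},\,
                              \theta_{\text{inquiry}_1},\,
                              \theta_{\text{inquiry}_2}\bigr),
\]

where \(\delta_i\) is the difficulty of item \(i\). Under a model of this kind, each student receives a proficiency estimate on every dimension, which is what supports comparisons across the types of knowledge students draw on; partial-credit items would use a polytomous extension of the same form.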

Published in

ICLS '06: Proceedings of the 7th international conference on Learning sciences
June 2006, 1127 pages
ISBN: 0805861742
Publisher: International Society of the Learning Sciences

Acceptance Rates

ICLS '06 Paper Acceptance Rate: 142 of 142 submissions, 100%
Overall Acceptance Rate: 307 of 307 submissions, 100%
