Cfp: Workshop on Data Analysis and Interpretation for Learning Environments (DAILE13)
Held at the 4th STELLAR Alpine Rendez-Vous (ARV)
Date: January 28 – February 1, 2013
Venue: Villard-de-Lans, Vercors, French Alps
Deadline for submission:
- Initial Experiential Report: 31st August 2012
- Final Paper: 15th October 2012
The main focus of the workshop DAILE’13 is to elaborate and integrate various computational approaches for analysing and interpreting data from technology-enhanced learning (TEL) environments, in the service of three goals: (i) improving learning and instruction; (ii) designing learning software; (iii) developing a deeper understanding of learner and teacher models.
In the spirit of “learning analytics”, we will extend the focus from closed within-system usage of computational analyses towards putting humans in the loop of making use of analysis results, for instance, in terms of monitoring and reflection support. This entails adequate modalities of information provision, including visualisation techniques and visual metaphors.
Contributors are invited to share sample datasets of user interactions with TEL environments, together with an analysis approach, in the form of an experiential report, which will then be developed into a scientific paper within the scope of the workshop.
“If you cannot measure it, you cannot improve it” (attributed to Sir William Thomson, Lord Kelvin). This dictum has been employed to justify the quantification of theoretical concepts across various disciplines. We further claim that “if you cannot interpret what is measured, you cannot improve it”. Arguably, one can measure (almost) anything in some arbitrary way. The compelling concern is whether the measure is meaningful, useful, and valid for reflecting the state or nature of the object or event in question. The key attribute of meaningfulness hinges critically upon the interpretability of data.
Recent initiatives in the field of technology-enhanced learning (TEL) consider establishing an open learning environment built upon a dynamically evolving network of people, artefacts, and tools as the most important outcome of learning. Consequently, measuring and interpreting the dynamics of and interactions within flexible TEL environments based on digital traces (action logs, versions of digital artefacts) is of tremendous importance for various target groups.
In the context of open learning environments, a specific challenge is the specification, identification, and interpretation of errors through error patterns. Error identification is not always a clear-cut process, however, as evaluators may diverge on what constitutes an error. Moreover, data interpretability (and the transferability of findings) is further complicated in collaborative learning scenarios, and community effects in large-scale learning environments with large networks of learners remain intriguing to explore.
Given the interdisciplinary nature of the workshop, researchers and practitioners from TEL, Human-Computer Interaction (HCI), the Semantic Web, and data-driven research (Learning Analytics, TEL Recommenders, Educational Data Mining, etc.) are relevant contributors. Submissions are invited on the following (non-exhaustive) topics of interest:
- Modelling, capturing, and processing of user interaction data in learning environments
- Anonymization and privacy preservation of real-world datasets
- Educational data mining on interaction traces and user-generated content
- Learning and Visual Analytics for institutions and individuals
- Interpretability and transferability of user interaction patterns in learning environments
- Evaluation of pedagogical models using digital traces of learners
- Effects of learning technology on user behaviour and competence development
- Implications of continuous and discrete values for user interface and usability issues
- Social network analysis and community effects in large-scale learning environments
- Feedback for teachers, learners and developers
- Process analysis (description of trace-analysis and transformation steps)
Workshop Format and Submission Procedure
Contributors are invited to submit experiential reports including example datasets, a proposed approach to analysing or exploiting the data, and preliminary or expected findings. Through a voluntary mentoring process, contributors may be supported by programme committee members in developing a scientific paper from the experiential report. There will be two types of contributions: full papers of up to 6 pages describing substantial, completed work, or position papers of 2 pages describing either results that can be concisely reported or work in progress. Papers should be formatted with the template (http://www.acm.org/sigs/publications/proceedings-templates) and submitted as a PDF file to: http://www.easychair.org/conferences/?conf=daile13. All submissions will be reviewed by programme committee members for originality, significance, clarity, and quality. An open contribution process (i.e. shepherding) will be launched on the workshop blog at an early stage to foster dialogue between experienced and young researchers and to facilitate the writing of papers.
Furthermore, contributors will be asked to carry out preparatory work before attending the workshop. Specifically, they should prepare an experiential report (blog entry) describing which research question is addressed and which dataset is used for this purpose. Pre-workshop activities will be arranged to discuss these preliminary findings; highlights of these discussions will be documented and further analysed in the workshop. All peer-reviewed scientific contributions will be published as CEUR workshop proceedings (http://ceur-ws.org/).
Accepted submissions will be presented at the workshop (15 minutes) and discussed in a round-table format (15 minutes). Additionally, a panel of three discussants with different expertise (Technology-Enhanced Learning, Human-Computer Interaction, Semantic Web) will be invited to present their views (5 minutes each) on the following question: “Which data is necessary to measure the success of pedagogical models, and how can this data be gathered in learning environments?”
Workshop Organisers
- Felix Mödritscher, Vienna University of Economics and Business, Austria
- Vanda Luengo, Domaine Universitaire de Saint-Martin d’Hères, France
- Effie Lai-Chong Law, University of Leicester, UK
- Ulrich Hoppe, University of Duisburg-Essen, Germany