Cognitive load and practical work research – an update

An update on my thinking and research into improving the effectiveness of practical work through integrated-instructions.


Estimated reading time – 9 minutes.

As I wrote about in my recent EiC article, I am particularly interested in improving the effectiveness of practical work in my classroom. This interest goes right back to the start of my teaching career, and I completed my MEd project on it. Sadly, I never made the time to write it up in a peer-reviewed journal.

Cognitive load theory has resonated strongly with me since I came across it a year or so ago, as it seems to provide a useful framework for my empirical observations about the effectiveness, or not, of practical work in my classroom.

I consider practical work to be important in students’ science education, but it can be hampered by a variety of factors. One of the key problems is that practicals tend to be overloaded, with too much being attempted in one activity. Students end up seeing practicals as the ‘fun’ part of the lesson, mostly because they aren’t sitting having to think hard about difficult chemistry concepts, and it provides an opportunity to chat with their mates.

The idea of the ‘hands-on, minds-on’ practical has been around for some time. However, despite how well put together these suggested practicals are, I still too often find students not getting past ‘following the recipe’ and asking me ‘what do I do next?’

One possible reason for this, as cognitive load theory discusses, is the split-attention effect, where students have to refer to two or more sources of information to understand how to proceed. In a practical session, an example would be a list of written instructions alongside a diagram of the apparatus. By providing information in two distinct forms, the student has to switch back and forth between the two to develop an understanding of what they have to do practically and then measure/observe. This raises the level of extraneous cognitive load, leaving less room in working memory for the development of understanding (germane load).

To try to combat this, I have been adapting some of the practical activities I use with various classes to integrate the diagrams and the textual instructions into ‘integrated-instructions’.

The first use of integrated-instructions was for the classic properties-of-halogens practical with a Year 9 group (13-14 year olds). Stage 1 was a demonstration, used to explain how the integrated-instructions worked. Stage 2 they carried out in a dimple tile. Stages 3 and 4 we didn’t get time to complete (this incidentally spurred me to get on with ordering the dropper bottles for my continued introduction of microscale to the department).


I then tried out the method with a Year 7 class (11-12 year olds) doing another classic, the melting/freezing characteristics of stearic acid. While this practical is criticised in parts for being fairly dull for students, I find it useful for a number of reasons. I’m still training up this young group of students, so I find it useful to have them focus on a practical task that doesn’t require much more than safe working and measurement. I can get around the class assessing their practical competence, and spot those who aren’t controlling a natural tendency to wander away from their experimental setup.


Next was a simple thermochemistry practical, mixing two substances and measuring the temperature change. This was a Year 10 group (14-15 year olds) with a wide range of attitudes towards chemistry. There is a common attitude among several of them that practical work is the ‘easy’ lesson, with plenty of opportunities to chat rather than learn new skills and focus on data collection.


This session went well, with the students self-correcting their practical work in their groups and producing usable data in a reasonable time. I set them off at different points on the list of substances to get through, so we could share their data.

Following some useful conversations on Twitter, and being awarded a small research grant from the RSC Chemical Education Research Interest Group (working with my mentor Suzanne Fergus), I have started engaging more with the literature to see what has been done before, and take lessons on how to improve my adaptations. (A big thank you to my fellow Fellow Naomi Hennah for some useful pointers).

Haslam and Hamilton (2009) investigated the use of integrated-instructions at secondary school level. They produced a practical task of setting up a power pack, bulbs and voltmeter. The control group received written instructions and had the equipment available to look at prior to the practical activity. The experimental group had integrated-instructions, which included photos of the equipment and diagrams with the instructions integrated.

[Image: Haslam and Hamilton’s integrated-instructions physics task]

The effectiveness of the integrated-instructions was assessed in various ways, including speed of task completion, students’ self-reported ease of the task, and their understanding of the underlying physics as shown in their written conclusions.

Overall, the group with integrated instructions i) completed the task quicker; ii) found the tasks easier; iii) had a better understanding of the physics involved.

I think the students’ self-reports of how easy they find the task will be useful. I’m not sure about the level of analysis applied to these data, however. The Likert-like scale data were treated as cardinal numbers, with means and standard deviations calculated. Beyond the excessive precision in these calculated values, the issues with treating ordinal numbers as cardinal numbers were well highlighted by Stewart Kirton at MICER17.
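As a quick illustration of the ordinal-vs-cardinal issue (with invented data): two very different 1–5 response distributions can share exactly the same mean, so reporting a mean ± standard deviation can hide what students actually said.

```python
from collections import Counter
from statistics import mean

# Hypothetical Likert responses (1 = very hard ... 5 = very easy)
polarised = [1, 1, 1, 5, 5, 5]   # class split between the extremes
neutral = [3, 3, 3, 3, 3, 3]     # everyone neutral

# Treating the ordinal codes as cardinal numbers gives identical means...
assert mean(polarised) == mean(neutral) == 3

# ...but simple frequency counts reveal completely different pictures
print(Counter(polarised))  # three 1s and three 5s
print(Counter(neutral))    # six 3s
```

A median, mode, or frequency table usually communicates Likert-style results more faithfully than a mean and standard deviation.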

An earlier piece of research along similar lines was carried out by Dechsri, Jones and Heikkinen (1997). (Another thanks to my research critical friend Michael Seery for originally bringing this one to my attention.) This was an extensive study in terms of the data collected on students’ responses to integrated-instructions. The study sought to investigate achievement (cognitive outcomes), student attitudes to practical work (affective outcomes) and physical skills development (psychomotor outcomes).


Four practicals in a laboratory manual were adapted in the integrated instructions manner.

In general, students with the integrated-instructions performed better in some areas of interpretation and comprehension of practical work (including rates), had a more favourable attitude to laboratory work (although they weren’t more enthusiastic about it) and showed greater manipulative and organisational skills.

In addition to the outcomes of the experiment, the authors provided some useful guidelines on how they developed the integrated-instructions:

  • Characteristics of the instructions
    • Clear objectives & short introduction
    • Simple precise language
    • Directing practice as required
    • Sequencing of steps.
  • The diagrams followed criteria
    • Pictures illustrating new equipment
    • Diagrams illustrating construction of apparatus and use of correct procedures
    • Diagrams showing procedural sequences.

I’ve also been thinking about the effectiveness of integrated-instructions across my teaching groups’ age range (11-18), and when and where they will be most effective. A lot of work on this style of practical instruction seems to be at college level (18+). I have noted that techniques that are effective for novices can become ineffective, or even inhibitory, for experts (Paas et al., 2000; the expertise reversal effect). This is perhaps not surprising. Diagrams of apparatus will potentially become redundant information, actually adding to extraneous cognitive load (Cook, 2006). Movement from novice to expert will happen at different, non-linear rates within any classroom context. This has implications for the use of one instructional method for a whole class. The availability of instructions at different levels of sophistication may be a way to allow students to work at their level of expertise. Cook makes the further point that interpretation of graphical information has social aspects, and discussion amongst students of the meaning of the graphics is an important part of forming understanding (possibly linking in nicely with Naomi’s research).

Feedback from Twitter on posted images of the integrated-instructions included the idea of adding tick boxes on each instruction to allow the students to keep track of what they had completed, and having all the instructions in sequence (clockwise or anticlockwise).

The latest data

I started to implement the ideas from the feedback and the published work with the Year 10 group, who had just started their study of reaction kinetics. I wanted to train them up in the techniques used to measure rates (skills development) before introducing too much collision theory and the factors affecting rate. The intention was for them to carry out two skill-building practicals in the same lesson so they could make direct comparisons. As it turned out, these were completed on different days.

The written instruction-with-diagram ‘mass loss’ practical was carried out first. The ‘disappearing cross’ integrated-instructions practical included sequencing of the steps in order (anticlockwise in this case), with tick boxes for the students to keep track of their progress.


All students managed to complete both practicals in the time available. Anecdotally, the students sought more practical help during the ‘mass loss’ practical, struggling with the concept of weighing everything together before and after the reaction to find the mass lost.

The ‘disappearing cross’ practical went well, with all students engaging with the practical, and a couple were heard directing their peers back to the integrated-instructions when they started asking me the classic ‘what do I do now?’

I asked the students to write a brief evaluation of the two experiments, and I received some positive feedback:

  • “I found the second experiment easier because it told you what you needed to put in where and it was short sentences that were easier to understand.”
  • “I found myself not asking as many questions and I found that the instructions were clearer.”
  • “Easy to keep track and make notes.”
  • “It had clear steps and what order to do things in.”

So, what’s next?

My research proposal to CERG was:

I intend to apply an action-research type methodology:

  • make changes to ‘standard’ practical activities as provided by publicly available sources such as examination boards, RSC LearnChemistry and CLEAPSS;
  • gather data on the effectiveness of the practical activities in the form of
    • teacher perception of student engagement – a reflective diary written on the day of the practical
    • student progress through the practical activities – percentages completing/partially completing practical
    • ability of students to answer questions relevant to the activities – assessment of student responses to questions
    • student feedback on the activities – simple questionnaire
  • analysis of these data would then feed into the modifications made to future practical work.
  • recruitment of another school to trial 2-3 of the practicals during the development progress to get external validation of my findings/conclusions.
  • The expected outcomes of the project would be any general rules for improving the effectiveness of practical work through ‘integrated diagrams’, and a set of modified practical activities available to other teachers.

My next practical session with the Year 10 group will be taking the disappearing cross practical further to look at the effect of concentration on rates.



I have included the developing ideas so far in the practical sheet, so I can start to more systematically collect data on the effectiveness of the method of instruction.

If you’d be interested in discussing this further, do drop me a line via Twitter @dave2004b.


Using microscale practicals in the classroom

Reflections on introducing microscale chemistry practicals in a new school

Reading time: 6-7 minutes

Having spent six weeks in the new school, and tried out microscale practicals with a variety of different year groups, I thought it was time to put some words down on my experiences so far. I have written previously about this in Education in Chemistry with a more theoretical and personal historical perspective, so this post is more on the actuality of introducing a new style of (or just new) practicals to a school and my classroom. I had made some use of microscale in previous teaching jobs, and spent a lot of time at OCR thinking and writing about it. I am now keen to give microscale a good go at the new school for several reasons.

Firstly, I think the more limited equipment and small required field of observation can reduce the extraneous cognitive load on the students, giving them more free working memory to allow interpretation of observations at the time they are making them.

Secondly, my current school has 45-minute lessons, and I get very few double lessons over the fortnightly timetable. Having taught nine years of hour-long lessons, I’m still adjusting to this significant reduction in the teaching period. I need to be more organised about the preparation the students do before the practical session, e.g. having them read practical sheets / watch videos of the techniques etc.

Thirdly, I advocated microscale strongly while at OCR, so am very keen to see the benefits across a wide number of groups. While I’m not teaching either the Gateway or 21st Century specifications, I’m making use of these practicals nevertheless. I think one of the major advantages of the new specifications is the freedom teachers have in choosing which practicals they use throughout the course, so I’m grasping the opportunity with both hands!

Lastly, it fulfils a personal interest in maintaining a toe in the waters of research. I presented some intentions at MICER17, and now that I’m back in school I can do something about investigating the effectiveness or otherwise of microscale, rather than just theorising about it.

So far, I’ve tried out five microscale practicals:

1. Chromatography of leaf chloroplasts. I did this with two Year 9 groups, a higher-ability and a lower-ability group. The chromatograms ran relatively successfully, but the main problem was the size of the spots – most students made these far too large, so any distinction between the components in the mixture was lost. I found extraction of the chloroplasts from the leaves took much longer than I’d predicted, so I suggested to a colleague who also tried it out that he do the extraction prior to the lesson and then have the students just spot and run the chromatograms – this led to a more efficient practical session in his class.


2. Microscale synthesis of copper sulfate – this was partially successful: half the group managed to follow the written instructions without additional help and got to the copper sulfate solution quickly and efficiently. The other half (the same group as the titration, below) seemed very hesitant to read the instructions carefully, or more than once, and were persistent in asking me what to do next. We did a larger-scale version as well, as a context for crystallising solutions over a water bath and filtration with fluted filter papers, and got some lovely crystals.


3. Electrolysis of copper sulfate to aid consolidation of the discussion of industrial copper production. I replaced the CuCl2 with CuSO4, so we didn’t have to deal with chlorine production, and as such the potassium halide and litmus paper weren’t necessary. The majority of the students set up the electrolysis apparatus quickly and got to relevant observations within about 5 minutes of the ‘go’. I was also using this as a test practical to see how practically competent the Year 11 group were (having just taken them over), and how well trained they were in cooperative working. Some of the students were spontaneously talking about redox when observing the copper and oxygen production, and this led to a fuller discussion of electrolysis than I’d planned.


4. Metal displacement reactions to consolidate work on the reactivity series, with Year 8 students. I didn’t give the students the written instructions for this activity, rather demonstrated the setup with an IPEVO Point to View camera (a great investment I think for any teacher). I used the drop-setup sheet in a plastic wallet and the majority of the students managed to set up accurately after the projected on-screen demonstration. There were some who managed to get the drops wrong the first time, despite the printed chemical formulae on the sheets, but the small scale made clearing the sheet quick and easy, and they didn’t lose too much time. Most made good observations, and some managed to articulate the link with the reactivity series immediately, pointing to the live observations as evidence when questioned.


5. Gravimetric titration – I used ‘The Vinegar Dilemma’ as a consolidation activity for my Year 10 Combined Science group. While they aren’t required to know titration theory, the practical links directly with the required understanding of concentration. They had also had a string of rather dry calculation lessons previously, and they and I needed some hands-on learning. There was a distinct split in this group. We read through the instructions together, I demonstrated the setup and then ran the first titration with them observing (12 in the group). Half of the group then got on with no further requests for help and gained fairly accurate results. The other half were persistently asking ‘what do I do now?’, ‘what’s next?’, even while holding the instruction sheet in their hands. This group is still relatively new to me, so we’re still building the class relationship, and I’m working hard to instil more self-confidence in their abilities. I think this latter half were slowly getting the message that all the necessary instructions were to be found on the printed sheet, but were frustrated by what they perceived as me ‘not helping them’. I’ve taught groups like this in the past, and I’m sure their self-reliance will improve with time.


I think the most significant thing I’ve confirmed for myself so far is the importance of trialling new resources before widespread use in the classroom. There are already adaptations I’ll be making to some of these practicals before using them again. Lots of resources are produced by many individuals and organisations, but live testing of them with students is critical to their development and long-term success.

So what next? Well, I’ve ordered myself some dropper bottles to make up the boxes for ion tests as a start. We’ve got a set of the small glass vials which have been useful. I’ve got plenty of other practicals I’d like to try out, but I’m feeling a little swamped by the process of starting in a new school, establishing a reputation with colleagues and the students, and taking on adult volunteering in the school’s CCF RAF section.

I intend to be more systematic in my evaluation of the microscale practicals I use before I start rolling them out across the department. I’d be interested to hear how other people have done this, and the barriers they faced in uptake by other members of their departments.


Quizzing and random question selector

UPDATE 8/10/17 A 10-question version is available here:



Reposting here so there is a permanent page for my quizzing Excel file and any updates over the year.

I would be interested in your feedback on how it works out in the classroom.

Good luck all for the new academic year.




This spreadsheet helps to organise questions and generate quizzes.

The intention is that for each lesson a quiz of six questions is used: three new questions based on the last lesson’s learning, and three random questions drawn from all previous lessons.

For each new lesson, type in the next three numbers in column E, e.g. 013, 014 and 015 for Lesson 3. Then move to the ‘Questions’ tab and type in your questions and answers in the appropriate row.

You can pre-populate your questions in the ‘Questions’ tab – as long as you haven’t referenced future questions in your Planner (column E), they won’t be selected in the random question generator.

To select the three random questions, press F9, which will generate three random question numbers (yellow box), up to the most recently used question in your planner. Type, or copy, these into the remaining three rows for the lesson. (Note: if you copy/paste from K2-4, you’ll need to use Paste Special/Values.)

Note – the random question selection includes an aspect of weighting. All questions are weighted such that the more often and more recently a question has been used, the less likely it is to be selected in the random question generator. This should help improve the coverage of all the questions over the year.
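The exact weighting formula lives inside the spreadsheet, but the idea – penalise questions that have been used more often and more recently – can be sketched outside Excel. The Python below is an illustrative assumption of one such scheme (the function name, the penalty formula and the `history` structure are all mine, not taken from the workbook):

```python
import random

def pick_random_questions(history, n_available, current_lesson, k=3):
    """Pick k distinct question numbers from 1..n_available.

    `history` maps question number -> list of lesson numbers in which it
    was used (all earlier than current_lesson). Questions used more often
    and more recently get a smaller weight, so rarely-seen questions
    surface more frequently and coverage improves over the year.
    """
    weights = {}
    for q in range(1, n_available + 1):
        uses = history.get(q, [])
        # Each past use adds a penalty; more recent uses penalise more.
        recency = sum(1 / (current_lesson - lesson) for lesson in uses)
        weights[q] = 1 / (1 + len(uses) + recency)

    chosen = []
    pool = list(weights)
    for _ in range(min(k, len(pool))):
        q = random.choices(pool, weights=[weights[p] for p in pool])[0]
        chosen.append(q)
        pool.remove(q)  # sample without replacement
    return chosen

# e.g. 15 questions banked by Lesson 6; questions 1 and 2 already reused
picks = pick_random_questions({1: [1, 4], 2: [1]}, 15, current_lesson=6)
```

Under this scheme an unused question keeps weight 1, while a question used twice, once recently, drops to roughly a quarter of that, which matches the behaviour described above without ever excluding a question outright.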

The ‘LessonQs’ and ‘LessonQ&A’ tabs will auto-update with the relevant questions and answers (i.e. the last six referenced in Column E). These can then be displayed to the class.

Separately, a completely random six question quiz can be made with the ‘6RandomQs’ and ‘6RandomQ&A’ tabs – just press F9.

Separately, you can generate a quiz based on six selected questions by filling in the ‘Six selected question’ cells (orange); the ‘6SelectedQs’ and ‘6SelectedQ&A’ tabs will auto-update.

(Note for any Excel-aficionado – the coding is a massive bodge-job – please don’t judge too harshly! I will tidy it up in a later version if this proves useful.)


A response to Wellcome’s concerns about practical work assessment


TES has reported that the Wellcome Trust says

‘removing science practicals from A levels could challenge ‘authenticity’ of qualification’.

I can’t find a press release for these comments, so I’m assuming this was a direct conversation between Wellcome and TES. The Wellcome Trust has well-known concerns over the removal of controlled assessment/coursework (for example here and here, cited about half way down this article, amongst others).

The discussion in the TES article on how the new regime works is pretty accurate: demonstration of competency over 12 practicals (although neglecting that this is a minimum), noting that 15% of marks in the written exam are based on ‘theory and application of practical skills’ (again not noting this is a minimum percentage), and the concerns that there were over the validity and reliability of the previous coursework regime.

The report then runs through the set of concerns that Wellcome has. I find myself disagreeing with a lot of what is said.

‘breaking the link between experiments and grades could cause students to focus exclusively on the assessed aspects of their course, reducing their motivation for practical work’

I find this a rather reductionist view of education, which assumes students are only interested in the final grade and will neglect everything else. I have certainly had some students who are only interested in the final grade (a means to an end, usually those for whom chemistry was the ‘third’ subject), but the majority of my students have studied chemistry broadly and enthusiastically, and were well engaged with their practical work, regardless of whether it was part of a formal assessment.

‘In a time when we know there’s a lot of pressure on the delivery of science, it may be that teachers are less able to prioritise the delivery of practical science – even though they may very much want to do so.’

I’ve only been involved in teaching for a little over a decade, and there hasn’t really been a time when there wasn’t pressure on my departments to ‘produce results’ amid ‘tightening budgets’. The very high-stakes nature of controlled assessment (at GCSE) and coursework (at A level) often skewed how practical work was used during the course, with practical work overly focussed on what was assessed rather than spread broadly over the course. My experience of the new A-level course is that it has allowed teachers to use practical work more authentically during lessons, and in some cases given them the ‘ammunition’ needed to use more practical work in their lessons. The assessment of practical work throughout the two-year course – and it is a DIRECT assessment of students’ practical skills – is ongoing and can be used formatively. If students are struggling with a practical during an assessed task, teachers can now intervene and re-teach/support/instruct, and then re-assess the students later in the course. In the old days of coursework this wouldn’t have been possible, as it was all carried out under high control.

In terms of the actual marks that went towards the final grade, it has always been a small amount. Looking at the last year of coursework tasks for OCR A Level Chemistry A, about 4.5% of the final mark was based on actual practical skills. While the coursework modules accounted for 20% of the final marks, the majority of the questions asked within these papers were analysis and evaluation questions based on the observations/measurements taken during the two practical tasks. (I’m assuming the percentages are broadly similar for other qualifications.) These papers were carried out under exam conditions, just like the new qualification exams, but were internally marked, moderated, then externally moderated, with all the administrative burden and reliability issues this entails.

We now have the practical endorsement – a holistic judgment of the competence of a student in a wide variety of practical skills, apparatus, and techniques, carried out over the two years of the course. 99% of students have been awarded the endorsement this year, and this is a massive credit to the students, and to their teachers in adapting to a new assessment regime and in delivering what I think is a much more authentic practical experience.

‘downgrading the significance of practicals at A level could make it more difficult for students to decide whether they wanted to go on to pursue science at university’

For all the reasons above, I just don’t believe that the significance of practical work has been downgraded; again, there is the assumption that if it doesn’t count towards a grade it isn’t significant. For example, the science trips I have taken students on (for example to the Diamond Light Source in Oxfordshire) have had just as much impact on students’ decisions to pursue science at university, and these had no direct relevance to their final grade.

A quote from the Wellcome Trust report from February 2017 (https://wellcome.ac.uk/sites/default/files/science-education-tracker-report-feb17.pdf) is given:

‘35% were encouraged to learn science because of practical work’.

This isn’t quite what the report says – the actual statement was that ‘enjoying practical work’ was the second most important factor in encouraging young people to learn science. I think there is a subtle distinction here – enjoyment can encourage learning, but enjoying practical work may mean students like being in science lessons (more than other subjects?) without necessarily learning from the practical work itself – that’s a whole other topic in itself.

‘Stripping out marked practicals could also challenge the “authenticity of the grade of the A level and whether it’s really reflecting how good you are at doing science”’.

Again, the majority of the coursework marks were based on analysis/evaluation, which can just as well be assessed in the exam hall as under high control in the science lab, with only about 5% of the final mark based on actual practical skills. Having a whole endorsement that is the result of assessing practical skills over two years seems a better, more authentic experience. I’ve spoken to quite a few undergraduate lecturers over the last couple of years, and there are plenty of examples of high-grade students who don’t do well in first-year labs, and lower-grade students who excel in the labs. The correlation between practical skills and the ability to attain high grades isn’t concrete.

The TES article then concludes with commentary about the variability of the requirements universities place on students to pass the endorsement. There is quite a lot of variation out there – some universities have a blanket requirement on all STEM degrees, some on some courses, and some universities are ‘waiting and seeing’. This is probably not surprising given this large change to the assessment regime, and no doubt universities will reflect on their students’ abilities this year and decide on whether or not to make it a requirement.

I am with the majority of teachers cited in the Ofqual initial research who see the reform as being positive for teaching and learning of practical skills. The new way of assessment is in line with how I have taught science over the years, and it allows students to build up (assessed) competence over time with ongoing support, rather than having to perform under the artificial constraints of coursework/controlled assessment tasks. I think we need to trust that teachers will do their best in whatever circumstances they are in to give their students a broad and balanced science education, including practical work, and not allow the assessment-tail to completely wag the teaching-and-learning dog.


Late to the party – Core knowledge booklets

Core knowledge booklets for Chemistry and Physics

As I’m going back to teaching after two years working at one of the UK exam boards, I’ve started riffling through my folders to get myself back in the swing of things. I came across these booklets that I put together four years ago now for the legacy AQA Chemistry and Physics GCSEs – only C1, C2, P1 and P2 by the looks of things.


I was interested to see what I wrote on the cover of these booklets:

This is the core knowledge that you need to know in this unit. Knowing the information in this booklet will help you to understand the more complex chemistry that you will study, and to express yourself clearly in class, in assessments and in exams. Knowing the information below is an essential FIRST STEP to gaining top grades – but it is not sufficient. Only regular study, discussion and practice of the chemistry will consistently lead to the top grades.

This seems a good summary of what I expected from my students, and what I encouraged the teachers in my department to expect as well.

The students were issued the booklets at the start of the year and were expected to keep them in their exercise books for reference and regular use. As and when students had completed work in class, I directed them to spend time self- and peer-quizzing using these booklets, and they made useful resources for homework and revision. A lot of this seems similar to what some others are advocating.

With the rise of Knowledge Organisers and the like, I thought I’d share them again. Please feel free to adapt and use. As and when I update them for my new classes and the new qualifications, I’ll repost.

AQA Chemistry C1 (2011 Core Science)

AQA Chemistry C2 (2011 Additional Science)

AQA Physics P1 (2011 Core Science)

AQA Physics P2 (2011 Additional Science)


Complications and errors in teaching

Commonalities in teaching and medicine; learning from our mistakes and always aiming for perfection.

Estimated reading time: 6-7 minutes


I finished another book by one of my favourite popular-expert authors recently, ‘Complications’ by Atul Gawande. I originally heard about Atul’s work on the BBC radio show Desert Island Discs back in 2015. His ‘Checklist Manifesto’ describes the use of checklists in highly complex professional arenas such as piloting airliners and surgery. In brief, he discusses how many of the things humans do are so complex that no one can hope to complete all that is required without making mistakes, such as forgetting important steps. Aircraft test pilots developed checklists in the 1950s to help them fly increasingly complex planes: as the planes became more complex, the test pilots kept crashing them because they forgot critical steps – not through lack of expertise, but through the limitations of human memory.

The quest for perfection

‘Complications’ discusses Atul’s progression through his medical training, and how the inevitable errors made by doctors are dealt with. Much of what he writes chimes with my own development as a teacher, and in supporting others through their careers.

“… This is the uncomfortable truth about teaching. By traditional ethics and public insistence… a patient’s right to the best care possible must trump the objective of training novices. We want perfection without practice. Yet everyone is harmed if no one is trained for the future.” (page 24)

Here Atul discusses the tension between what we want from our doctors and what the system requires. As individual patients, we want the best possible care from the best doctors. However, a medical system with a sole focus on individual outcomes could never function. The system needs continually to train new doctors. Part of this training is treating individual patients, and the training doctors will make mistakes. New members of the profession need to learn, need to practice, need space to improve.

This has direct parallels with teaching. As parents, we want the best possible teachers teaching our children. As a profession, we always need new teachers coming in, and they will need to be educated and they will need to practice. In the process of their development they will make mistakes, and inevitably the education of the children will not be as good as if they were taught by more effective teachers.

The problem with choice

“If learning is necessary but causes harm, then above all it ought to apply to everyone alike. Given a choice, people wriggle out, and those choices are not offered equally…. If choice cannot go to everyone, maybe it is better when it is not allowed at all.” (page 32)

Here Atul discusses how, when his child was in hospital, he insisted on having the attending physician (senior doctor) treat his child, not the resident (junior doctor) who was assigned to his child’s case. He knows the medical system well and so could argue to get the best possible treatment for his child. Other members of the public, who don’t know the system so well, are more likely to accept the medical treatments as given.

As a former head of department, I was most acutely aware of these tensions when assigning teaching groups. Which classes are assigned to the NQTs (newly qualified teachers), and which to the most experienced teachers? Who takes the exam groups (Year 11, Year 13), and those most likely to be getting the grades with the greatest effect on accountability measures (C/D grade borderline groups)? My own solution, and what I think was the most equitable, tended to be that all teachers taught across the ability range and year groups where possible, with support provided to less experienced teachers as required. My thinking ran in line with Atul’s – we can only provide the service (be it medical or educational) based on the skills and abilities of those professionals we have available at the time. The majority of science departments are likely to have teachers with a range of experience, expertise and enthusiasm. If we can’t have all of the students in our departments taught by the most experienced, enthusiastic experts, then the allocation of teachers should be on the most equitable basis possible, without the possibility of ‘special pleading’ for particular children or groups of children. Inevitably, the power of a head of department is not absolute, and some decisions were over-ruled by senior management who may have had different priorities to mine.

How to deal with mistakes

One of the most powerful passages in the book was the discussion of the Morbidity and Mortality conference (M&M), a place where mistakes in patient care are discussed in open forum – everyone attends from the most junior doctor to the head of department.

“In its way, the M & M is an impressively sophisticated and human institution. Unlike the courts or the media, it recognizes that human error is not generally something that can be deterred by punishment. The M & M sees avoiding error as largely a matter of will – of staying sufficiently informed and alert to anticipate the myriad ways that something can go wrong and then trying to head off each potential problem before it happens. It isn’t damnable that an error occurs, but there is more shame to it. In fact, the M & M’s ethos can seem paradoxical. On the one hand, it reinforces the very American idea that error is intolerable. On the other hand, the very existence of the M & M, its place on the weekly schedule, amounts to an acknowledgment that mistakes are an inevitable part of medicine.” (page 73)

Again, this chimes with the teaching of students. We must always seek to avoid error in what we do when we teach, but we have to accept that errors do occur. By exposing these in a safe professional environment, we can all learn from the situation and improve everyone’s ongoing practice. As professionals we should always be seeking to improve our practice and do the best we possibly can for our students, and recognise that the mistakes we make are learning opportunities not just for our own practice but for that of others. How high-stakes accountability, for example graded lesson observations and performance-related pay, can skew this ideology is a topic for another time. Suffice it to say I always had strong reservations about the helpfulness of these techniques in driving good professional improvement.

How do we continually improve?

“No matter what measures are taken, doctors will sometimes falter, and it isn’t reasonable to ask that we achieve perfection. What is reasonable is to ask that we never cease to aim for it.” (page 74)

We are human. We make mistakes. This is inevitable. No set of resources, force of will, system of incentives or sanction will ever get us to a place where teachers teach perfectly and all students learn to their optimum ability. What we can insist on is that all teachers recognise that there is always more to learn, always improvements to be made. And the system needs to recognise that teachers need access to forums to allow them to share and learn from each other.

And this access to forums, I think, is one of the biggest problems facing the teaching profession at the moment. I have delivered training and support across England over the last couple of years, and attendance at events is significantly down on what it was only a few years back. Schools are finding it very hard to release teachers to attend CPD events, and it is quite common now for teachers to use their own free time to attend events such as TeachMeets. I am a big fan of such events, but am always troubled by a system that seems to have normalised an expectation that teacher improvement should come in teachers’ own time.

Can we make improvements? Yes – at a system level, only government can change the funding situation that has driven the reduction in capacity in schools to allow teachers to have time away from the classroom. Government policy can be influenced by engaging with professional bodies like the Chartered College of Teaching, unions and learned societies such as the Royal Society of Chemistry. Yes – at school level, department and school meetings can schedule significant time for discussing good practice and learning from mistakes in a safe professional environment. Yes – at an individual level, by recognising that we aren’t perfect, we won’t always get it right, but that the aim for perfection is a worthy goal, and one we can all work towards across a career.


Teaching shortcuts and when they can trip you up…

Reading time: 4-5 minutes

Along with others of late, including Kristy Turner, Niki Kaiser and Adam Boxer, I have been mulling over misconceptions and teaching. Cognitive Load Theory is also being discussed a lot, and I’ll be attending Niki’s conference soon to hash out some more of these ideas and how to apply them in the classroom. Much of what I have read resonates with my previous teaching, and I’ve written about it in relation to practical work.


What prompted this particular post was a session with Steve Barnes and David Read at the Wessex Group conference a couple of weeks back, and a Twitter chat recently. Both were related to aspects of equilibrium, a concept many students find hard, especially when questions are a bit different from what they have seen before. With the increased demand in the new A level papers, and the increased emphasis on applying knowledge to unknown situations, this seems like a timely issue.

Eric Scerri noted a couple of years back the problems with Le Chatelier – it works sometimes, but can break down quite quickly, and can stop us from thinking too hard about the details of the context. Somewhat like Kristy’s SEABODI, students tend to go for a stock answer to the almost inevitable NH3 or SO3 production questions, but aren’t necessarily thinking deeply about their understanding.

At the Wessex Group conference, Steve used a set of equilibrium questions, drawing on work by Juan Quilez, to highlight these problems. We also discussed the nitrogen dioxide-dinitrogen tetroxide equilibrium. When applying Le Chatelier to this equilibrium, and looking at changing temperature, the expected changes to equilibrium position and hence observations are borne out, as seen in numerous videos. Increasing the pressure by compression is more complex. The expected shift in the position of equilibrium is to the right to decrease the pressure. A not uncommon prediction of the observation would be that the mixture lightens, as NO2 is converted into N2O4. The problem is that as the total volume has decreased, the mixture actually darkens initially as the NO2 becomes more concentrated, then the colour lightens as the NO2 is converted to N2O4. So the shift in equilibrium position may be correctly predicted, but the predicted observation may be wrong (or at least incomplete) because the full system was not considered or the question is not carefully phrased.
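The two competing effects above can be separated explicitly (a sketch of the reasoning, not taken from the conference materials):

```latex
% The equilibrium discussed above: fewer moles of gas on the right.
\[
2\,\mathrm{NO_2(g)}\;(\text{brown}) \;\rightleftharpoons\; \mathrm{N_2O_4(g)}\;(\text{colourless})
\]
% The colour tracks the CONCENTRATION of NO2, not its amount:
\[
[\mathrm{NO_2}] = \frac{n_{\mathrm{NO_2}}}{V}
\]
% Compressing the mixture reduces V, so [NO2] rises at once (darker),
% before n(NO2) falls as the equilibrium shifts right (lighter again).
```

Le Chatelier predicts only the second, slower effect; the immediate darkening comes from the change in concentration that happens before the equilibrium responds.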

I recognise that at times I have come to rely on shortcuts, and when the questions become more complex, the shortcuts can break down. Now, Le Chatelier is a model like any other – we tend to use the simplest model that will allow us to explain the observations, and when the model breaks, we use or develop a more sophisticated one. This ties in with other conversations recently on what is meant by ‘mastery’ in science – is it mastering the concepts at the appropriate level, or introducing more sophisticated concepts earlier on so we aren’t ‘lying’ to the students? I’m not sure it would be appropriate to introduce GCSE and/or A level models directly in KS3. I may be able to get the students to repeat back the facts, but I doubt they’d be able to use them confidently or competently. Of course, this opens a whole can of worms on assessing understanding – for another time perhaps.

What finally prompted this post was a resource I was reviewing on fuel cells. I have always found electrochemistry one of the harder topics to teach effectively, and tend to take more of a pause before launching into it. I have taught electrolysis plenty at GCSE, but for whatever reason never really gone into galvanic cells and fuel cells in detail. I had developed shortcuts for electrolysis along the lines of ‘it’s the reverse of normal chemical reactions’ and ‘cathodes are negative as cations are attracted to them’. I got myself into a muddle with the hydrogen fuel cell when working out the polarity of the electrodes, not helped by some vaguely written resources. A quick shout-out to Twitter set me straight (thanks to Peter Hoare and Adrian Dingle), but it was a useful reminder of the need to check my understanding of the fundamentals from time to time.
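For reference, the half-equations behind the muddle (a sketch, assuming an acid-electrolyte hydrogen fuel cell):

```latex
% In a galvanic cell the cathode is POSITIVE (reduction happens there) --
% the opposite polarity to electrolysis, which is where the
% 'cathodes are negative' shortcut breaks down.
\begin{align*}
\text{anode } (-)\text{:}\quad & \mathrm{H_2 \longrightarrow 2H^+ + 2e^-} && \text{(oxidation)}\\
\text{cathode } (+)\text{:}\quad & \mathrm{O_2 + 4H^+ + 4e^- \longrightarrow 2H_2O} && \text{(reduction)}
\end{align*}
```

The reliable rule is that oxidation always happens at the anode and reduction at the cathode; the sign of each electrode then depends on whether the cell is driving the current or being driven by it.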

Does this have any wider relevance? Certainly for me, going back to teaching after a couple of years out. For others – perhaps. I think it points to the importance of subject knowledge CPD. I’m a strong advocate of pedagogical content knowledge CPD, but spending time deepening my personal understanding of the content knowledge is probably worthwhile from time to time. At this point in my teaching career, I would put depth ahead of breadth. I have sufficient breadth of chemistry to teach my students effectively at the level I’m teaching at, but I think increasing the depth of my understanding as the years pass can only be a good thing. There are some great resources out there – I’m a particular fan of knockhardy and chemguide. I also have a copy of Chemistry3 close by – I’ve consigned my other university books to the lab shelf – I think one general undergraduate chemistry text is sufficient for what I need for now.

Any thoughts? When was the last time a student asked you a question that you couldn’t quite answer to your satisfaction? What resources do you use to support your depth of understanding?


Review and reflections on #MICER17

A review and some personal reflections on the MICER 2017 conference.


At the sumptuous RSC Library at Burlington House, we gathered for Methods in Chemistry Education Research 2017, a day of lectures, activities and catching up with friends and colleagues. From school teachers to a Professor Emeritus, we came with a common purpose – to spend a day thinking about methods in chemical education research.

The day started with Dr Suzanne Fergus (@suzannefergus), Principal Lecturer in Pharmaceutical Chemistry at the University of Hertfordshire (also 2016 RSC Award Winner for Higher Education Teaching). Through the context of her journey into ChemEdRes, Suzanne discussed the difference between anecdote about what works in our own teaching situation and what constitutes genuine research. Critical features included i) contextualisation within the current literature, ii) robust data collection and evaluation, and iii) novelty of the work. While replication of others’ work in our own context can help increase the generalisability of ideas, the new learning from such replication needs to be made explicit.

We worked through an exercise in formulating a RESEARCH QUESTION, central to ensuring high-quality research, and ultimately to getting our studies published. In my previous teaching of A-level sciences, I have come across research questions in Biology fieldwork, but their use in Chemistry research is not common. The worksheet proved a useful structure to start the challenging process of formulating high-quality and usable research questions. Benefits of starting the research process with the research question include i) helping connect with the literature; ii) influencing the methods used; iii) focusing the presentation of the work and iv) focusing the discussion of the conclusion.

One of Suzanne’s papers (DOI: 10.1021/ed2004966) was highlighted as a useful example of how ChemEdRes can be written. The ‘New Directions’ journal was also suggested as a good starting point for those looking to get into academic publishing. Suzanne also suggested other less formal (more ‘social’) ways of publishing to help build one’s confidence in sharing our thoughts with a wider community. This included speaking at TeachMeets, small conferences, engaging in Twitter conversations, writing personal and professional blogs and writing for institutional publications. On a personal level, Suzanne’s talk gave me that last little push to start a personal blog!

Suzanne’s colleague Dr Stewart Kirton (@skirtonUH), Head of Pharmaceutical Chemistry, University of Hertfordshire, then took us through the use of Likert scales in assessing the impact of our interventions. While analysis of attainment in assessments is a major source of such information, surveying students’ perceptions is an increasingly used source of information. I used such surveys throughout my time in secondary teaching, and their use is becoming more common at university level with the ‘Teaching Excellence Framework’.

Stewart took us through a process for developing valid questions and Likert-scale responses, including:

  • trialling the questions with peers
  • trialling with the subjects of the questions (usually your students)
  • ensuring each question examines only one idea
  • avoiding jargon
  • thinking carefully about the possible responses – including a ‘Don’t Know’ option is acceptable
  • phrasing questions positively (if possible)
  • sticking to around eight questions – use many more than this and the students will likely run out of steam!

Our activity involved drafting some questions to help evaluate a programme run by final-year students to help second-year students prepare for interviews for industry years. A particularly useful online app (www.mentimeter.com) was used to share our ideas – a virtual notice board where you can send in your responses via smartphone or laptop.

Stewart finished with a clear exhortation NOT to take the average of responses when using numerical codes on Likert scales (e.g. 1 = strongly agree to 5 = strongly disagree). Simply put, these numbers are not interval data, where the differences between successive values are identical and meaningful; rather they are ordinal data, i.e. they can be ordered but the differences between them are meaningless. Stewart’s suggestion was to present the relative ratio of each response to each question and to analyse pre- and post-intervention where appropriate.
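This suggestion can be sketched in a few lines of Python (the response lists below are hypothetical, purely for illustration):

```python
from collections import Counter

def likert_summary(responses, scale=(1, 2, 3, 4, 5)):
    """Summarise Likert responses as the proportion choosing each
    category, rather than a (misleading) mean of the ordinal codes."""
    counts = Counter(responses)
    total = len(responses)
    return {point: counts.get(point, 0) / total for point in scale}

# Hypothetical pre- and post-intervention responses (1 = strongly agree)
pre = [1, 2, 2, 3, 4, 5, 5, 2, 1, 3]
post = [1, 1, 2, 1, 2, 3, 2, 1, 2, 2]

print(likert_summary(pre))
print(likert_summary(post))
```

Comparing the two distributions category by category shows where opinion has shifted, without pretending that the gap between ‘agree’ and ‘neutral’ equals the gap between ‘neutral’ and ‘disagree’.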

After coffee, and meeting up with some friends I have made on Twitter over the last year, Dr Orla Kelly (@orlakelly5), Senior Lecturer in Social, Environmental and Science Education, Dublin City University, discussed the evaluation of classroom practice, with a focus on ‘Classroom Action Research’. Orla started with a definition from the Open University: ‘systematic and collaborative collection of evidence on which to base reflection’. She summarised the cycle of action research as ‘Plan / Act / Observe / Reflect’. Orla’s extensive use of Problem-Based Learning in undergraduate labs provided a context for the talk, and had strong resonances with Suzanne’s earlier talk.

Prof Graham Scott (@grahamscott14), Professor of Bioscience Education at the University of Hull, then expanded the speaker repertoire beyond chemists – a biologist bringing a perspective from a related field. Graham’s key message was the advantage of moving away from our ‘science comfort blanket’ and embracing the discomfiture of collecting and using the more qualitative data derived from interviews. Graham took us through his research journey of using interviews in various areas, from an analysis of students’ and teachers’ perceptions of a course, to the barriers to using biological fieldwork in primary schools.

Key ideas in making effective use of interviews included i) establishing a suitable dynamic between the interviewer and interviewee (location, time available, consideration of any prior professional relationship); ii) clearly constructed questions that will elicit the information required (including the use of trialling) and iii) the importance of audio/video recording and the processes of transcribing and analysing the data.

I have used interviews in previous research and as part of evaluating the effectiveness of my previous school departments. While I experienced many of the problems that Graham described, I wholeheartedly agree with Graham that the quality of information you can derive makes them well worth the effort.

After a much-needed lunch (energy levels were flagging by 1.10pm!) we had time to chat and look at posters. This was a nice aspect of the conference – the poster process started a good month before the get-together, with the chance to get feedback on our ideas online. My particular interest right now is in how microscale chemistry can be integrated into my teaching, and whether it has sustained benefit for students’ learning.



Prof Keith Taber (@DrKeithSTaber) took us on a tour of ethics in educational research. It has been many years since my MEd days with Keith as my supervisor, but his erudite and rigorous style continues to shine through and it was a pleasure to be part of the audience.

Starting with a brief tour of various ethical frameworks, including deontology and utilitarianism, we discussed the importance of voluntary informed consent from the subjects of our research, and the responsibilities we bear as researchers. These include reporting our findings as completely and fairly as possible, not selectively reporting our findings, and highlighting the known limitations. We discussed the issues around anonymity and confidentiality, and the particular problems that easy access to worldwide information via the internet can pose. We discussed the particular cases of the Milgram studies and the Tuskegee Syphilis experiment, highlighting areas of real contention in the ethics of research.

The key message from this double session was that while the rules of ethics can be relatively easily stated, the actual decisions we have to make as researchers can be very nuanced and rightly deserve careful consideration before, during and after our studies.

The day finished with Prof Georgios Tsaparlis, Professor Emeritus of Science Education in the Department of Chemistry at the University of Ioannina, Greece, winner of the 2016 RSC Education Award. Georgios’ work on problem-solving has spanned decades, and a whistle-stop tour was presented in this final session. Ideas around the limitations of our cognitive architecture, with a particular focus on working memory, were discussed. The importance of scaffolding and exercise, as well as success for students, in developing problem-solving skill was clearly emphasised.

MICER17 proved to be all I had been looking forward to, and a great venue to meet new people, make connections and expand my professional network. Most of all, it has helped put a human face to the world of ChemEdRes. Reading articles in CERP or J Chem Educ can be a little daunting to those new to ChemEdRes, and the barrier to entry can seem impossibly high. However, it really is an inclusive and welcoming community, one that I look forward to contributing to in the coming years. Many thanks to Michael Seery (@seerymk) and Claire McDonnell (@clairemcdonndit) for all the hard work in bringing this together.


Can we make practical work more effective?

Some thoughts on making practical work more effective by considering the cognitive load of practical work.

Reading time: 5-6 minutes

As discussed previously, I’ve been spending a lot of time thinking about practical work of late. I’ve presented most of the below at a few places over the last couple of weeks, including the RSC SaFE National Teachers Conference, and the East Midlands Chemistry Teachers Conference. It seems to be going down well, so I thought I’d summarise here.

The full slides are available, along with a hand-out summary I’ve produced for Niki Kaiser’s #CogSciSci conference tomorrow (Monday 10th July) at Notre Dame High School.

I’ve started these discussions with a quick run through of the why of practical work. I think it is important that we think carefully from time to time about why we do practical work, given the expense (in time and cost) and the opportunity costs (could we get to the same learning more effectively another way?). I reference the National Curriculum, the Ofsted ‘Maintaining Curiosity‘ report, an OCR longitudinal survey, and some ‘historical’ literature (Hodson, 1990, SSR, 70(256), 33-40). The key point I make is that the reasons teachers state for using practical work haven’t changed much over the decades, and we’re still questioning how effective it is.

Quotes from the National Curriculum (Slide 5)

  • …essential aspects of knowledge, methods, processes and uses of science…
  • …curiosity about natural phenomena…
  • … explain what is occurring, predict how things will behave, and analyse causes

Quote from ‘Maintaining curiosity’ (Slide 6)

  • In the best schools visited, teachers ensured that pupils understood the ‘big ideas’ of science. They made sure that pupils mastered the investigative and practical skills that underpin the development of scientific knowledge and could discover for themselves the relevance and usefulness of those ideas.

Top five reasons identified by teachers for using practical work from the OCR longitudinal study (Slide 7)

  • To encourage accurate observation and description
  • To develop conceptual understanding
  • To develop reporting, presenting, data analysis and discussion skills
  • To experience the process of finding facts by investigation
  • To develop manipulative skills and techniques

Top five reasons for using practical work, identified in Hodson (1990) (Slide 8)

  • Motivation
  • Teaching laboratory skills
  • Enhancing learning of scientific knowledge
  • Insight into and developing scientific method
  • Developing ‘scientific attitudes’

I looked at how practical work can be ineffective, and referenced a nice succinct quote from Clackson and Wright (1992, SSR, 74(266), 39-42),

  • Although practical work is commonly considered to be invaluable in scientific teaching, research shows that it is not necessarily so valuable in scientific learning. The evidence points to the uncomfortable conclusion that much laboratory work has been of little benefit in helping pupils and students understand concepts.

and posit three key ideas about why practical work can be ineffective:

  • chemistry is hard
  • the practicals used are overloaded
  • there is too much to think about.

A brief segue into Johnstone’s triangle follows (Slide 12). I use the context of dissolving table salt, which seems to help get the idea of the triangle across. Interestingly, on asking, I’d say fewer than 20% of teachers were aware of the triangle explicitly (although they may well understand it and have worked it out for themselves implicitly).

Another segue then into the structure of memory, using the model presented in Baddeley’s 2000 paper – I’m aware this isn’t the most up-to-date paper and the model has no doubt moved on, but I’m always concerned about (ironically?) overloading people when talking about these new areas. (Slide 13)

I’ve summarised Cognitive Load Theory via Intrinsic, Extraneous and Germane load as below – again, probably not a full and up-to-date reflection, but I think sufficient to get the idea across, especially when summarised using Greer’s model. (Slide 16):


So practical work can be ineffective because what we’re asking the students to think about is intrinsically hard (lots of abstract and mostly invisible concepts), because we’re trying to do too much in the practicals (plan a method, collect data, process the data, and so on), and because all these demands overload the students’ capacity to think about what we actually want them to think about. We end up with a lot of following the recipe, but little learning.

So my ‘Key Questions’ are:

  • How do we get beyond them just ‘following the recipe’?
    • How do we get them to think?
  • How do we reduce the cognitive load inherent in some practical work?
  • How do we get them thinking about the right stuff?
  • How do we maximise the benefit of practical work for our students in our classrooms?
    • How can we design practicals that are effective at promoting learning?

and ‘Key Ideas’ are:

  • Ensure practicals have a clear goal
    • Don’t overload them/students
  • Use of microscale activities
    • e.g. electrolysis, to reduce extraneous load
  • Working up to complex practical tasks
    • e.g. titration, to manage intrinsic load

How do we make practical work more effective? This is some further hashing out of ideas from over the last couple of weeks. I’ve had the titration ideas out there for a while, but the analysis of the electrolysis has taken my thinking a bit further.

Firstly, looking at extraneous load – the problem of solely written instructions, and how carefully labelled diagrams may be a better approach. The key learning of the practical exemplified below would be the observation rather than the ‘ability to follow written instructions’. I was challenged on this at the East Midlands conference, along the lines of how this would help with exam preparation, given that students need to know how to write practical methods. My response was on the wider point of not overloading practicals (not everything has to be about exam preparation) and having a clear focus on what is required. If the observation is key, then everything else should be ‘subservient’ to that. (Slide 21)

I then had a go at a Cognitive Load ‘analysis’ – I’m not sure if this is a thing, but it was a useful exercise to demonstrate what students may be thinking about. I used the comparison between electrolysis using the standard Nuffield apparatus, and the microscale copper chloride electrolysis. (Slide 23 and 24)


Intended learning:

  • application of a current to an aqueous solution of copper chloride produces copper and chlorine
  • qualitative tests for chlorine

Intrinsic load:

  • charge on ions
  • nature of ions in solution
  • flow of charge
  • formation of metal/covalent substances

Extraneous load:

  • nature of apparatus
  • quality of practical instruction
  • quality and reliability of equipment
  • classroom environment

Doing the full micro-scale practical as written is potentially itself overloaded – far too much to put onto students the first time. (Slide 25)

But this can be easily relieved by leaving out all the ‘indicators’ and building up to them. (Slide 28)

I then went through my ‘breaking down titration‘ – in summary, my first attempt at teaching this went straight in with a 20-minute demonstration of titration theory and practice to a Yr11 Triple group, followed by me expecting them to replicate it from a written method. I was still a very green teacher at this point, and learnt some very valuable lessons. My next attempt was somewhat more nuanced, and led to much better learning – including the use of micro-titration. (Slide 29)

Finally, I discussed an introduction to rates of reaction using simplified kit (a £10 2 d.p. balance from Amazon, vinegar from the kitchen cupboard and chalk from the garden). I’m impressed with the quality of the data I managed to get from this, and I think it may be a useful introduction to rates, without having to worry about all the standard kit. (Slide 31 and 32)
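The mass-loss readings from such a balance can be processed in a few lines (a sketch – the readings below are hypothetical, not my actual data):

```python
# Mass-loss data from the vinegar-and-chalk experiment: read the balance
# at regular intervals; the CO2 escaping carries the lost mass away.
times = [0, 30, 60, 90, 120, 150]                          # seconds (hypothetical)
masses = [102.00, 101.62, 101.33, 101.12, 100.98, 100.90]  # grams (hypothetical)

# Average rate over each interval, in grams lost per second
rates = [
    (masses[i] - masses[i + 1]) / (times[i + 1] - times[i])
    for i in range(len(times) - 1)
]
print(rates)  # the rate falls as the reaction proceeds
```

Plotting mass against time (or tabulating the interval rates, as here) makes the key teaching point visible: the gradient is steepest at the start and flattens as the acid is used up.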

Following David Didau’s ‘what if I’m wrong’ motif (having finished ‘What if everything…‘ a couple of months back), I also reference a report by Moreno, who neatly summarises some of the concerns about CLT:

  • It doesn’t deal with affective factors, including motivation
  • Extraneous load may be strongly inter-related with germane load
  • There are ambiguous and contradictory studies on the effectiveness of CLT in explaining outcomes.

Summarising with four key bullet points (Slide 37), I reiterated what is, for me, a critically important part of this – that we continue to think about and discuss our practice, and look to see whether we can improve on what we are doing in the classroom. I don’t particularly mind that I may well be coming to the same conclusions others already have. For me, expanding my personal knowledge and effectiveness is a critical part of my professional development. If I can help others with theirs, then so much the better, and if I can push at the boundaries of public knowledge, then that’s a bonus.


21/7/17: Rehosted slides and powerpoint on social.ocr.org.uk and indicated specific slides in body of text rather than screenshots.