Can we make practical work more effective?

Some thoughts on making practical work more effective by considering the cognitive load of practical work.

Reading time: 5-6 minutes

As discussed previously, I’ve been spending a lot of time thinking about practical work of late. I’ve presented most of the below at a few places over the last couple of weeks, including the RSC SaFE National Teachers Conference, and the East Midlands Chemistry Teachers Conference. It seems to be going down well, so I thought I’d summarise here.

The full slides are available, along with a hand-out summary I’ve produced for Niki Kaiser’s #CogSciSci conference tomorrow (Monday 10th July) at Notre Dame High School.

I’ve started these discussions with a quick run-through of the why of practical work. I think it is important that we think carefully from time to time about why we do practical work, given the expense (in time and cost) and the opportunity costs (could we get to the same learning more effectively another way?). I reference the National Curriculum, the Ofsted ‘Maintaining Curiosity‘ report, an OCR longitudinal survey, and some ‘historical’ literature (Hodson, 1990, SSR, 70(256), 33-40). The key point I make is that the reasons teachers state for using practical work haven’t changed much over the decades, and we’re still questioning how effective it is.

Quotes from the National Curriculum (Slide 5)

  • …essential aspects of knowledge, methods, processes and uses of science…
  • …curiosity about natural phenomena…
  • … explain what is occurring, predict how things will behave, and analyse causes

Quote from ‘Maintaining curiosity’ (Slide 6)

  • In the best schools visited, teachers ensured that pupils understood the ‘big ideas’ of science. They made sure that pupils mastered the investigative and practical skills that underpin the development of scientific knowledge and could discover for themselves the relevance and usefulness of those ideas.

Top five reasons identified by teachers for using practical work from the OCR longitudinal study (Slide 7)

  • To encourage accurate observation and description
  • To develop conceptual understanding
  • To develop reporting, presenting, data analysis and discussion skills
  • To experience the process of finding facts by investigation
  • To develop manipulative skills and techniques

Top five reasons for using practical work, identified in Hodson (1990) (Slide 8)

  • Motivation
  • Teaching laboratory skills
  • Enhancing learning of scientific knowledge
  • Insight into and developing scientific method
  • Developing ‘scientific attitudes’

I looked at how practical work can be ineffective, and referenced a nice succinct quote from Clackson and Wright (1992, SSR, 74(266), 39-42),

  • Although practical work is commonly considered to be invaluable in scientific teaching, research shows that it is not necessarily so valuable in scientific learning. The evidence points to the uncomfortable conclusion that much laboratory work has been of little benefit in helping pupils and students understand concepts.

and posit three key ideas about why practical work can be ineffective:

  • chemistry is hard
  • the practicals used are overloaded
  • there is too much to think about.

A brief segue into Johnstone’s triangle follows (Slide 12). I use the context of dissolving table salt, which seems to help get the idea of the triangle across. Interestingly, on asking, I’d say fewer than 20% of teachers were aware of the triangle explicitly (although they may well understand it and have worked it out for themselves implicitly).

Another segue then into the structure of memory, using the model presented in Baddeley’s 2000 paper – I’m aware this isn’t the most up-to-date paper and the model has no doubt moved on, but I’m always concerned about overloading (ironically?) people when talking about these new areas. (Slide 13)

I’ve summarised Cognitive Load Theory via Intrinsic, Extraneous and Germane load as below – again, probably not a full and up-to-date reflection, but I think sufficient to get the idea across, especially when summarised using Greer’s model. (Slide 16)


So practical work can be ineffective because what we’re asking the students to think about is intrinsically hard (lots of abstract and mostly invisible concepts), we’re trying to do too much in the practicals (plan a method, collect data, process the data etc etc etc) and all the domains of thinking overload their capacity to actually think about what we want them to think about, and we end up with a lot of following the recipe, but little learning.

So my ‘Key Questions’ are:

  • How do we get beyond them just ‘following the recipe’?
    • How do we get them to think?
  • How do we reduce the cognitive load inherent in some practical work?
    • How do we get them thinking about the right stuff?
  • How do we maximise the benefit of practical work for our students in our classrooms?
    • How can we design practicals that are effective at promoting learning?

and ‘Key Ideas’ are:

  • Ensure practicals have a clear goal
    • Don’t overload them/students
  • Use of microscale activities
    • e.g. electrolysis, to reduce extraneous load
  • Working up to complex practical tasks
    • e.g. titration, to manage intrinsic load

How do we make practical work more effective? This is some further hashing out of ideas from over the last couple of weeks. I’ve had the titration ideas out there for a while, but the analysis of the electrolysis has taken my thinking a bit further.

Firstly, looking at extraneous load – the problem of solely written instructions, and how carefully labelled diagrams may be a better method. The key learning of this practical exemplified below would be the observation rather than the ‘ability to follow written instructions’. I was challenged on this one at the East Midlands conference, along the lines of how this would help with exam preparation, and the students needing to know how to write practical methods. My response was on the wider point of not overloading practicals (not everything has to be about exam preparation) and having a clear focus on what is required. If the observation is key, then everything else should be ‘subservient’ to that. (Slide 21)

I then had a go at a Cognitive Load ‘analysis’ – I’m not sure if this is a thing, but it was a useful exercise to demonstrate what students may be thinking about. I used the comparison between electrolysis using the standard Nuffield apparatus, and the microscale copper chloride electrolysis. (Slide 23 and 24)

Key learning:

  • application of a current to an aqueous solution of copper chloride produces copper and chlorine
  • qualitative tests for chlorine

Intrinsic load:

  • charge on ions
  • nature of ions in solution
  • flow of charge
  • formation of metal/covalent substances

Extraneous load:

  • nature of apparatus
  • quality of practical instruction
  • quality and reliability of equipment
  • classroom environment

Doing the full micro-scale practical as written is potentially itself overloaded – far too much to put onto students the first time. (Slide 25)

But this can be easily relieved by leaving out all the ‘indicators’ and building up to them. (Slide 28)

I then went through my ‘breaking down titration‘ – in summary, my first teaching of this went straight in with a 20-minute demonstration of titration theory and practice to a Yr11 Triple group, after which I expected them to replicate it from a written method. I was still a very green teacher at this point, and learnt some very valuable lessons. My next attempt was somewhat more nuanced, and led to much better learning – including the use of micro-titration. (Slide 29)

Finally, I discussed an introduction to rates of reaction using simplified kit (a £10 2 d.p. balance from Amazon, vinegar from the kitchen cupboard and chalk from the garden). I’m impressed with the quality of the data I managed to get from this, and I think it may be a useful introduction to rates, without having to worry about all the standard kit. (Slides 31 and 32)

Following David Didau’s ‘what if I’m wrong’ motif (having finished ‘What if everything…‘ a couple of months back), I also reference a report by Moreno, who neatly summarises some of the concerns about CLT.

  • It doesn’t deal with affective factors, including motivation
  • Extraneous load may be strongly inter-related with germane load
  • There are ambiguous and contradictory studies on the effectiveness of CLT in explaining outcomes.

Summarising with four key bullet points (Slide 37), I reiterated what is for me a critically important part of this – that we continue to think about and discuss our practice, and look to see whether we can improve on what we are doing in the classroom. I don’t particularly mind that I may well be coming to the same conclusions others already have. For me, expanding my personal knowledge and effectiveness is a critical part of my professional development. If I can help others with theirs, then so much the better, and if I can push at the boundaries of public knowledge, then that’s a bonus.


21/7/17: Rehosted slides and powerpoint on social.ocr.org.uk and indicated specific slides in body of text rather than screenshots.


Teaching shortcuts and when they can trip you up…

Reading time: 4-5 minutes

Along with others of late, including Kristy Turner, Niki Kaiser and Adam Boxer, I have been mulling misconceptions and teaching. Cognitive Load Theory is also being discussed a lot and I’ll be attending Niki’s conference soon to hash out some more of these ideas and how to apply them to the classroom. Much of what I have read resonates with my previous teaching, and I’ve written about it in relation to practical work.


What prompted this particular post was a session with Steve Barnes and David Read at the Wessex Group conference a couple of weeks back, and a Twitter chat recently. Both were related to aspects of equilibrium, a concept many students find hard, especially when questions are a bit different from what they have seen before. With the increased demand in the new A level papers, and the increased emphasis on applying knowledge to unknown situations, this seems like a timely issue.

Eric Scerri noted a couple of years back the problems with Le Chatelier – it works sometimes, but can break down quite quickly, and can potentially stop us from thinking too hard about the details of the context. Somewhat like Kristy’s SEABODI, students tend to go for a stock answer to the almost inevitable NH3 or SO3 production questions, but aren’t necessarily thinking deeply about their understanding.

At the Wessex Group conference, Steve used a set of equilibrium questions, drawing on work by Juan Quilez, to highlight these problems. We also discussed the nitrogen dioxide-dinitrogen tetroxide equilibrium. When applying Le Chatelier to this equilibrium, and looking at changing temperature, the expected changes to equilibrium position and hence observations are borne out, as seen in numerous videos. Increasing the pressure by compression is more complex. The expected shift in the position of equilibrium is to the right to decrease the pressure. A not uncommon prediction of the observation would be that the mixture lightens, as NO2 is converted into N2O4. The problem is that as the total volume has decreased, the mixture actually darkens initially as the NO2 becomes more concentrated, then the colour lightens as the NO2 is converted to N2O4. So the shift in equilibrium position may be correctly predicted, but the predicted observation may be wrong (or at least incomplete) because the full system was not considered or the question is not carefully phrased.
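The shift itself can also be checked with the reaction quotient rather than Le Chatelier alone – a sketch of the argument, writing the dimerisation as 2NO2 ⇌ N2O4:

```latex
% 2 NO2 (brown)  <=>  N2O4 (colourless)
\[
K_c = \frac{[\mathrm{N_2O_4}]}{[\mathrm{NO_2}]^2}
\]
% Halving the volume doubles both concentrations at the instant of
% compression, so the reaction quotient becomes
\[
Q = \frac{2\,[\mathrm{N_2O_4}]_{eq}}{\bigl(2\,[\mathrm{NO_2}]_{eq}\bigr)^2}
  = \frac{K_c}{2} < K_c
\]
% Q < Kc confirms the shift to the right (towards N2O4); but [NO2] has
% already doubled at the moment of compression, hence the mixture
% darkens first, then lightens as the shift proceeds.
```

This makes the ‘darkens then lightens’ observation a consequence of the full system, rather than a contradiction of the predicted shift.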

I recognise that at times I have come to rely on shortcuts, and when the questions become more complex, the shortcuts can break down. Now, Le Chatelier is a model like any other – we tend to use the simplest model that will allow us to explain the observations. When the model breaks, we use or develop a more sophisticated one. This issue ties in with other conversations recently on what is meant by ‘mastery’ in science – is it mastering the concepts at the appropriate level, or introducing more sophisticated concepts earlier on so we aren’t ‘lying’ to the students? I’m not sure it would be appropriate to introduce GCSE and / or A level models directly in KS3. I may be able to get the students to repeat back the facts, but I doubt they’d be able to use them confidently or competently. Of course, this is a whole bucket of worms on assessing understanding – for another time perhaps.

What finally prompted this post was a resource I was reviewing on fuel cells. I have always found electrochemistry one of the harder topics to teach effectively, and tend to take more of a pause before launching into it. I have taught electrolysis plenty at GCSE, but for whatever reason never really gone into galvanic cells and fuel cells in detail. I had developed shortcuts for electrolysis along the lines of ‘it’s the reverse of normal chemical reactions’ and ‘cathodes are negative as cations are attracted to them’.  I got myself into a muddle with the hydrogen fuel cell, on working out the polarity of electrodes, not helped by some vaguely written resources. A quick shout out to Twitter set me straight (thanks to Peter Hoare and Adrian Dingle), but it was a useful reminder of the need to check my understanding of the fundamentals from time to time.

Does this have any wider relevance? Certainly for me, going back to teaching after a couple of years out. For others – perhaps. I think it points to the importance of subject knowledge CPD. I’m a strong advocate of pedagogical content knowledge CPD, but spending time on deepening my personal understanding of the content knowledge is probably worthwhile from time to time. At this point in my teaching career, I would put depth ahead of breadth. I have sufficient breadth of chemistry to teach my students effectively at the level I’m teaching at, but I think increasing the depth of my understanding as the years pass can only be a good thing. There are some great resources out there. I’m a particular fan of knockhardy and chemguide. I also have a copy of Chemistry3 close by – I’ve consigned my other university books to the lab shelf – I think one general undergraduate chemistry text is sufficient for what I need for now.

Any thoughts? When was the last time a student asked you a question that you couldn’t quite answer to your satisfaction? What resources do you use to support your depth of understanding?


Review and reflections on #MICER17

A review and some personal reflections on the MICER 2017 conference.


At the sumptuous RSC Library at Burlington House, we gathered for Methods in Chemistry Education Research 2017, a day of lectures, activities and catching up with friends and colleagues. From school teachers to a Professor Emeritus, we came with a common purpose – to spend a day thinking about methods in chemical education research.

The day started with Dr Suzanne Fergus (@suzannefergus), Principal Lecturer in Pharmaceutical Chemistry at the University of Hertfordshire (also 2016 RSC Award Winner for Higher Education Teaching). Through the context of her journey into ChemEdRes, Suzanne discussed the difference between anecdote about what works in our own teaching situation and what constitutes genuine research. Critical features included i) contextualisation within the current literature, ii) robust data collection and evaluation, and iii) novelty of work. While replication of others’ work in our own context can help increase the generalisability of ideas, the new learning from such replication needs to be made explicit.

We worked through an exercise in formulating a RESEARCH QUESTION, central to ensuring high-quality research, and ultimately to getting our studies published. In my previous teaching of A-level sciences, I have come across research questions in Biology fieldwork, but their use in Chemistry research is not common. The worksheet proved a useful structure to start the challenging process of formulating high-quality, usable research questions. Benefits of starting the research process with the research question include i) helping connect with the literature; ii) influence on the methods used; iii) focus on the presentation of the work; and iv) focus on the discussion of the conclusion.

One of Suzanne’s papers (DOI: 10.1021/ed2004966) was highlighted as a useful example of how ChemEdRes can be written. The ‘New Directions’ journal was also suggested as a good starting point for those looking to get into academic publishing. Suzanne also suggested other less formal (more ‘social’) ways of publishing to help build one’s confidence in sharing our thoughts with a wider community. This included speaking at TeachMeets, small conferences, engaging in Twitter conversations, writing personal and professional blogs and writing for institutional publications. On a personal level, Suzanne’s talk gave me that last little push to start a personal blog!

Suzanne’s colleague Dr Stewart Kirton (@skirtonUH), Head of Pharmaceutical Chemistry, University of Hertfordshire, then took us through the use of Likert scales in assessing the impact of our interventions. While analysis of attainment in assessments is a major source of such information, surveying students’ perceptions is an increasingly used source. I used such surveys throughout my time in secondary teaching, and their use is becoming more common at university level with the ‘Teaching Excellence Framework’.

Stewart took us through a process for developing valid questions and Likert-scale responses, including:

  • trialling the questions with peers
  • trialling with the subjects of the questions (usually your students)
  • ensuring each question examines only one idea
  • avoiding jargon
  • thinking carefully about the possible responses – including ‘Don’t Know’ is acceptable
  • phrasing questions positively (if possible)
  • sticking to around eight questions – use many more than this and the students will likely run out of steam!

Our activity involved drafting some questions to help evaluate a programme run by final year students to help second-year students prepare for interviews for industry years. A particularly useful online app was used to share our ideas (www.mentimeter.com) – a virtual notice board where you can send in your responses via smartphone/laptops.

Stewart finished with a clear exhortation on NOT taking the average of responses when using numerical responses on Likert scales (e.g. 1=strongly agree to 5=strongly disagree). Simply put, these numbers are not interval data, where the difference between successive values are identical and meaningful, rather they are ordinal data, i.e. can be ordered but the differences between them are meaningless. Stewart’s suggestion was to present the relative ratio of each response to each question and analyse pre- and post- intervention where appropriate.
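Stewart’s suggestion is easy to sketch in code. The function name `likert_summary` and the sample responses below are my own hypothetical illustration, not from the talk – the point is simply to report the proportion of each category rather than a mean of the ordinal codes:

```python
from collections import Counter

def likert_summary(responses, scale=(1, 2, 3, 4, 5)):
    """Summarise ordinal Likert responses as the proportion of each
    category, rather than a (meaningless) mean of the codes."""
    counts = Counter(responses)
    total = len(responses)
    return {point: counts.get(point, 0) / total for point in scale}

# Hypothetical responses to one question
# (1 = strongly agree ... 5 = strongly disagree)
pre = [1, 2, 2, 3, 3, 3, 4, 5, 2, 1]
summary = likert_summary(pre)
print(summary)  # {1: 0.2, 2: 0.3, 3: 0.3, 4: 0.1, 5: 0.1}
```

Comparing these per-category proportions pre- and post-intervention keeps the ordinal nature of the data intact.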

After coffee, and meeting up with some friends I have made on Twitter over the last year, Dr Orla Kelly (@orlakelly5), Senior Lecturer in Social, Environmental and Science Education, Dublin City University, discussed the evaluation of classroom practice, with a focus on ‘Classroom Action Research’. Orla started with a definition from the Open University, of ‘systematic and collaborative collection of evidence on which to base reflection’. She provided a summary of the cycle of action research as ‘Plan / Act / Observe / Reflect’. Orla’s extensive use of Problem-Based Learning in undergraduate labs provided a context for the talk, and had strong resonances with Suzanne’s earlier talk.

Prof Graham Scott (@grahamscott14), Professor of Bioscience Education at the University of Hull, then expanded the speaker repertoire beyond chemists to a biologist, bringing a perspective from a related field. Graham’s key message was the advantage of moving away from our ‘science comfort blanket’ and embracing the discomfiture of collecting and using the more qualitative data derived from interviews. Graham took us through his research journey of using interviews to study various areas, from an analysis of students’ and teachers’ perceptions of a course, to barriers to using biological fieldwork in primary schools.

Key ideas in making effective use of interviews included i) establishing a suitable dynamic between the interviewer and interviewee (location, time available, consideration of any prior professional relationship); ii) clearly constructed questions that will elicit the information required (including the use of trialling) and iii) the importance of audio/video recording and the processes of transcribing and analysing the data.

I have used interviews in previous research and as part of evaluating the effectiveness of my previous school departments. While I experienced many of the problems that Graham described, I wholeheartedly agree with Graham that the quality of information you can derive makes them well worth the effort.

After a much-needed lunch (energy levels were flagging by 1.10pm!) we had time to chat and look at posters. This was a nice aspect of the conference – the posters were shared online a good month before the get-together, so we could get feedback on our ideas in advance. My particular interest right now is in how microscale chemistry can be integrated into my teaching, and whether it has sustained benefit to students’ learning.



Prof Keith Taber (@DrKeithSTaber) took us on a tour of ethics in educational research. It has been many years since my MEd days with Keith as my supervisor, but his erudite and rigorous style continues to shine through and it was a pleasure to be part of the audience.

Starting with a brief tour of various ethical frameworks, including deontology and utilitarianism, we discussed the importance of voluntary informed consent from the subjects of our research, and the responsibility we bear as researchers. These include reporting our findings as completely and fairly as possible, including not selectively reporting our findings, and highlighting the known limitations. We discussed the issues around anonymity and confidentiality, and the particular problems that the easy access to worldwide information via the internet can provide. We discussed the particular cases of the Milgram studies and the Tuskegee Syphilis experiment, highlighting areas of real contention in the ethics of research.

The key message from this double session was that while the rules of ethics can be relatively easily stated, the actual decisions we have to make as researchers can be very nuanced and rightly deserve careful consideration before, during and after our studies.

The day finished with Prof Georgios Tsaparlis, Professor Emeritus of Science Education in the Department of Chemistry at the University of Ioannina, Greece, and winner of the 2016 RSC Education Award. Georgios’ work on problem-solving has spanned decades, and a whistle-stop tour was presented in this final session. Ideas around the limitations of our cognitive architecture, with a particular focus on working memory, were discussed. The importance of scaffolding, practice and experience of success in developing students’ problem-solving skills was clearly emphasised.

MICER17 proved to be all I had been looking forward to, and a great venue to meet new people, make connections and expand my professional network. Most of all, it has helped put a human face on the world of ChemEdRes. Reading articles in CERP or J Chem Educ can be a little daunting to those new to ChemEdRes, and the barrier to entry can seem impossibly high. However, it really is an inclusive and welcoming community, one that I look forward to contributing to in the coming years. Many thanks to Michael Seery (@seerymk) and Claire McDonnell (@clairemcdonndit) for all the hard work in bringing this together.


A morning with PGCE students

An overview of a session with Chemistry PGCE students, discussing practical work and pedagogy.

Estimated reading time – 6-7 minutes

I spent an enjoyable morning in the company of the latest cohort of Chemistry PGCE students at my former college, Homerton, Cambridge. Elaine Wilson had invited me in to talk about practical work, something I’ve spent a lot of my time working on in my current job. I particularly like talking with teachers early in their teaching career. They enjoy a good discussion, and can provide a fresh perspective on teaching, perhaps ‘unencumbered’ by years spent dealing with the stresses and strains of the classroom (not that PGCE is a walk in the park, of course!).


This particular cohort is entering a different world – no A level coursework or GCSE controlled assessment in any year groups. They’ll be right in there with linear qualifications, terminal exams, and practical work only formally assessed in written exams or directly through Practical Endorsement at A Level. It’s an interesting time to be sure, although with all the changes, departmental systems, curricula, resources etc may not be as well embedded as 3-4 years ago.

The session covered a lot of (minds-on) ground, and I pointed the students at a range of reports and resources that would make interesting reading for them over the summer. We started off by briefly reviewing the work of the SCORE reports on Practical Work (2008 and 2013), discussing the National Curriculum and the Ofsted ‘Maintaining Curiosity’ report from 2013. I’ve spent some time thinking about this particular Ofsted report as I’ll be speaking at the ASE Guildford conference later this month.

A pause in my talking allowed the students to discuss together the reasons they use practical work, and then to look at the reasons given in a survey (ASE login required – Wilson et al. (2016), SSR, 98(362), 119-128) carried out by OCR recently. The discussion afterwards was particularly interesting, as many of the students reflected critically on their progression over their two teaching practices. Many saw that they started out with a ‘classic’ view that ‘science is a practical subject’ and therefore ‘we do practical work’. Over the year they became more nuanced in their use of practical work, and were working towards carefully identifying the purpose(s) of a practical before committing to it.

After a brief foray into the distinguishing features of Direct and Indirect Assessment of Practical Skills, as researched by Abrahams, Reiss and Sharpe, we looked at the new practical requirements at GCSE. Compared with the A Levels, the requirements are much lighter touch and give teachers and science departments significant latitude in how they provide the required ‘broad and balanced practical experience’. The different exam boards have implemented this in different ways, but fundamentally, students are expected to complete a minimum number of practicals (8 for each separate science, 16 for combined sciences), giving them opportunities to use the required apparatus and techniques and to make records of their practical work.

I have spent a good amount of time producing suggested practicals for GCSE Chemistry over the last year, including adaptations of the high-quality resources already available from RSC LearnChemistry and CLEAPSS. Given the new freedoms teachers have, I didn’t want to go down the route of ‘the exam board practical’ and reinventing the wheel. I think the resources (under Practical Activities) produced are useful, and will certainly be making use of them with my classes next year.

The main thrust of the hands-on practical session was to identify and try out different ways of ‘doing’ a practical, recognising that many school students can end up doing very similar practicals multiple times over their schooling. For example, it’s not uncommon for them to germinate a broad-bean in a jam-jar, carry out chromatography of pen ink, and study heat loss in cardboard model houses at primary, secondary and then further education. While not suggesting that repetition isn’t an important part of learning, there are different ways of getting to the same learning end-point that may bring the additional benefits of a wider appreciation of practical science, and of helping students move away from the idea that there is one ‘correct’ answer.

All of the practicals we tried out were adaptations of practicals developed by CLEAPSS (with acknowledgements to them). The first, and my favourite, is the electrolysis of copper chloride in a petri dish. Quick to set up and very adaptable – within 10 minutes the students had got to several relevant observations, and were thinking about how they could use it in different ways in the classroom. There was general agreement that having all of the ‘indicators’ the first time round would likely overload the students, and that starting off with just copper chloride and the two electrodes would allow the pupils to focus on making accurate observations. Given the perpetual issues around using power packs in lessons, the use of 9V batteries was particularly enjoyed!


We briefly tried out a further adaptation of the disappearing cross reaction. The reaction box is normally made from food containers. As I didn’t have enough around the house the night before when I was making them up, I used some spare petri dishes instead. They mostly worked, although the polystyrene proved much more brittle than the polythene (polypropylene perhaps?) of the food boxes, so making the holes big enough is definitely important!


We discussed how rates of reaction can be introduced with greatly simplified equipment. This is a potentially useful introduction, with reduced extraneous cognitive load due to the familiar apparatus, compared with the relative complexity of the setups in standard practicals (thinking Mg + HCl). I showed a brief practical I did in my kitchen using vinegar + chalk and one of the very cheap 2 d.p. balances now available online, and collected some pretty reasonable data.
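The analysis students would do with such data is simple enough to sketch. The readings below are hypothetical illustrative numbers (not my measured data), just to show how the mean rate over each interval falls as the reaction proceeds:

```python
# Estimate the rate of the chalk (CaCO3) + vinegar reaction from the
# mass lost as CO2 escapes. Times and masses are hypothetical readings
# from a 2 d.p. balance, for illustration only.
times = [0, 30, 60, 90, 120, 150]                     # seconds
masses = [100.00, 99.72, 99.51, 99.36, 99.26, 99.20]  # grams

# Mean rate over each interval: mass lost / time elapsed
rates = [
    (masses[i] - masses[i + 1]) / (times[i + 1] - times[i])
    for i in range(len(times) - 1)
]
for t, r in zip(times, rates):
    print(f"from t = {t:3d} s: rate = {r * 1000:.2f} mg/s")
```

Plotting mass against time, or tabulating interval rates like this, gives a concrete way into the idea that the rate decreases as the reactants are used up.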

We finished with a quick go on my adaptation of the CLEAPSS precipitation on a sheet, ‘March of the precipitANTs’. There was a discussion about whether the students would have the fine-motor skills to deal with moving around a few crystals on the sheet, rather than aliquoting out solutions into test tubes. I contended that most students, given the opportunity and time, could make a good job of this. They are required to carry out fine tasks throughout their schooling, including developing handwriting, art and D&T work etc, and I think close fine work is a useful skill to develop in science (education of the head, heart and HAND?)


We spent some time discussing progression of learning, something I have written about in more detail here. In summary, I used the context of teaching and learning of titration. My first go at this was early in my teaching career, and was a (pedagogic) disaster. I gave a Year 11 group a 20 minute demonstration on how to carry out a titration, and then sent them off with a practical sheet and expected them to replicate what they had just seen. Understandably, very few managed to produce anything like useable results! My subsequent attempts have had a significantly more thought through instructional progression, and they were much more successful.

We finished with a very brief run through of the requirements of the A-level Practical endorsement, and how large-scale investigative work can still be done in Chemistry A Level.

Overall, the session was a refreshing break from the office, and it was good to see that there are still vibrant and enthusiastic people coming into the profession. I look forward to joining them back at the chalkface in September.